WorldWideScience

Sample records for improved computational detection

  1. Improved cancer detection in automated breast ultrasound by radiologists using Computer Aided Detection

    Zelst, J.C.M. van, E-mail: Jan.vanZelst@radboudumc.nl [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Tan, T.; Platel, B. [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Jong, M. de [Jeroen Bosch Medical Centre, Department of Radiology, 's-Hertogenbosch (Netherlands); Steenbakkers, A. [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Mourits, M. [Jeroen Bosch Medical Centre, Department of Radiology, 's-Hertogenbosch (Netherlands); Grivegnee, A. [Jules Bordet Institute, Department of Radiology, Brussels (Belgium); Borelli, C. [Catholic University of the Sacred Heart, Department of Radiological Sciences, Rome (Italy); Karssemeijer, N.; Mann, R.M. [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands)

    2017-04-15

    Objective: To investigate the effect of dedicated Computer Aided Detection (CAD) software for automated breast ultrasound (ABUS) on the performance of radiologists screening for breast cancer. Methods: 90 ABUS views of 90 patients were randomly selected from a multi-institutional archive of cases collected between 2010 and 2013. This dataset included normal cases (n = 40) with >1 year of follow-up, benign lesions (n = 30) that were either biopsied or remained stable, and malignant lesions (n = 20). Six readers evaluated all cases with and without CAD in two sessions. The CAD software included conventional CAD marks and an intelligent minimum intensity projection of the breast tissue. Readers reported using a likelihood-of-malignancy scale from 0 to 100. Alternative free-response ROC analysis was used to measure performance. Results: Without CAD, the average area under the curve (AUC) of the readers was 0.77, which improved significantly with CAD to 0.84 (p = 0.001). Sensitivity of all readers improved (range 5.2–10.6%) by using CAD, but specificity decreased in four out of six readers (range 1.4–5.7%). No significant difference in AUC was observed between experienced radiologists and residents, either with or without CAD. Conclusions: Dedicated CAD software for ABUS has the potential to improve the cancer detection rates of radiologists screening for breast cancer.

  2. An improved computing method for the image edge detection

    Gang Wang; Liang Xiao; Anzhi He

    2007-01-01

    A framework for detecting image edges based on the sub-pixel multi-fractal measure (SPMM) is presented. The measure, which gives the sub-pixel local distribution of the image gradient, is defined. A more precise singularity exponent for every pixel can be obtained by performing the SPMM analysis on the image. Using the singularity exponents and the multi-fractal spectrum of the image, the image can be segmented into a series of sets with different singularity exponents, so image edges can be detected automatically and easily. Simulation results show that the SPMM achieves a higher quality factor in image edge detection.
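
    The core computation lends itself to a compact illustration. The sketch below is a simplified stand-in for the SPMM, assuming a plain gradient-magnitude measure over windows of growing size rather than the paper's sub-pixel construction: it estimates a per-pixel singularity exponent as the log-log slope of the windowed measure against window size, and edge pixels, which concentrate the measure, receive smaller exponents.

        # Simplified singularity-exponent edge detector (illustrative only;
        # not the paper's sub-pixel multi-fractal measure).
        import numpy as np
        from scipy.ndimage import uniform_filter

        def singularity_exponents(img, radii=(1, 2, 4, 8)):
            """Fit log(measure over a (2r+1)-window) ~ alpha*log(2r+1) + c."""
            gy, gx = np.gradient(img.astype(float))
            mu = np.hypot(gx, gy) + 1e-12          # gradient-magnitude measure
            sizes = np.array([2 * r + 1 for r in radii], dtype=float)
            logs = np.log(sizes)
            # Windowed sums of the measure at every pixel, one map per scale.
            stack = np.stack([np.log(uniform_filter(mu, int(s)) * s * s)
                              for s in sizes])
            # Least-squares slope of log-measure vs log-scale, per pixel.
            lm = logs.mean()
            num = ((stack - stack.mean(0)) * (logs - lm)[:, None, None]).sum(0)
            return num / ((logs - lm) ** 2).sum()

        img = np.zeros((64, 64))
        img[:, 32:] = 1.0                          # ideal step edge
        alpha = singularity_exponents(img)
        # Edge pixels concentrate the measure, giving smaller exponents.
        print("alpha at edge:", alpha[32, 31], "in flat region:", alpha[32, 5])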

  3. Computed tomography in the detection of pulmonary metastases. Improvement by application of spiral technology

    Kauczor, H.U.; Hansen, M.; Schweden, F.; Strunk, H.; Mildenberger, P.; Thelen, M.

    1994-01-01

    Computed tomography is the imaging modality of choice for the detection or exclusion of pulmonary metastases. In most cases these are spheric, multiple, bilateral, and located in the peripheral areas of the middle and lower fields of the lungs. Differential diagnosis of solitary pulmonary nodules is difficult: determining whether they are malignant or benign remains unreliable despite the application of multiple CT criteria. Spiral computed tomography, which acquires an imaging volume in a single breath-hold, has led to a significant improvement in the sensitivity of detecting pulmonary nodules. Imaging protocols are presented, and the influence of the different parameters is discussed. Although not all pulmonary metastases may be detected with spiral computed tomography, it is the most important examination when considering pulmonary metastasectomy. Computed tomography is also the imaging modality of choice for monitoring pulmonary metastases during systemic therapeutic regimens by measuring all nodules or 'indicator lesions'. (orig.)

  4. Computing Adaptive Feature Weights with PSO to Improve Android Malware Detection

    Yanping Xu

    2017-01-01

    Android malware detection is a complex and crucial issue. In this paper, we propose a malware detection model using a support vector machine (SVM) method based on feature weights that are computed by information gain (IG) and particle swarm optimization (PSO) algorithms. The IG weights are evaluated based on the relevance between features and class labels, and the PSO weights are adaptively calculated to yield the best fitness (the performance of the SVM classification model). Moreover, to overcome the defects of basic PSO, we propose a new adaptive inertia weight method called fitness-based and chaotic adaptive inertia weight PSO (FCAIW-PSO), which improves on basic PSO and is based on the fitness and a chaotic term. The goal is to assign suitable weights to the features to ensure the best Android malware detection performance. The results of experiments indicate that the IG weights and PSO weights both improve the performance of SVM and that the performance of the PSO weights is better than that of the IG weights.
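
    As a rough illustration of the two weighting schemes, the sketch below computes information-gain-style weights from feature-label mutual information and lets a small particle swarm search for the weight vector with the best cross-validated SVM accuracy. It assumes scikit-learn, synthetic data in place of the malware features, and a basic fixed-inertia PSO; the proposed FCAIW inertia schedule and chaotic term are not reproduced.

        # Feature weighting for an SVM via information gain and a basic PSO
        # (synthetic stand-in for the Android malware data).
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=20, random_state=0)

        # IG-style weights: mutual information between feature and label.
        ig_w = mutual_info_classif(X, y, random_state=0)
        ig_w /= ig_w.max() + 1e-12

        def fitness(w):
            # Fitness = cross-validated SVM accuracy on re-weighted features.
            return cross_val_score(SVC(), X * w, y, cv=3).mean()

        # Basic PSO over the weight vector (fixed inertia, no chaotic term).
        rng = np.random.default_rng(0)
        n_particles, dim, iters = 10, X.shape[1], 20
        pos = rng.uniform(0, 1, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_f.argmax()].copy()
        for _ in range(iters):
            r1, r2 = rng.uniform(size=(2, n_particles, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0, 1)
            f = np.array([fitness(p) for p in pos])
            better = f > pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[pbest_f.argmax()].copy()

        print("IG-weighted accuracy :", fitness(ig_w))
        print("PSO-weighted accuracy:", fitness(gbest))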

  5. Development and using computer codes for improvement of defect assembly detection on Russian WWER NPPs

    Likhanskii, V.; Evdokimov, I.; Zborovskii, V.; Kanukova, V.; Sorokin, A.; Taran, M.; Ugrumov, A.; Riabinin, Y.

    2009-01-01

    Diagnostic methods of fuel failure detection are currently being developed to improve radiation safety and shorten fuel reload time at Russian WWERs. The work includes the creation of new computer tools to increase the effectiveness of fuel monitoring and the reliability of leakage tests. Reliability of failure detection can be noticeably improved by applying an integrated approach comprising the following methods. The first is fuel failure analysis under operating conditions, performed with the pilot version of the expert system, which has been developed on the basis of the mechanistic code RTOP-CA. The second stage of failure monitoring is 'sipping' tests in the mast of the refueling machine. The leakage tests are the final stage of failure monitoring. A new technique with pressure cycling in specialized casks was introduced to meet the requirements of higher reliability in detection/confirmation of leakages. Measurements of the activity release kinetics during the pressure cycling, and handling of the acquired data with the RTOP-LT code, make it possible to evaluate the defect size in a leaking fuel assembly. The mechanistic codes RTOP-CA and RTOP-LT were verified against specialized experimental data and have been certified by the Russian regulatory authority Rostechnadzor. The pressure cycling method in specialized casks now has official status and is utilized at all Russian WWER units. Some results of applying the integrated approach to fuel failure monitoring at several Russian NPPs with WWER units are reported in the present paper. Predictions of the current version of the expert system are compared with the results of the leakage tests and with estimations of the defect size by the pressure cycling technique. Using the RTOP-CA code, the activity level is assessed for the following fuel campaign if the leaking fuel assembly is to be reloaded into the core. A project of the automated computer system on the basis of

  6. A dimension reduction strategy for improving the efficiency of computer-aided detection for CT colonography

    Song, Bowen; Zhang, Guopeng; Wang, Huafeng; Zhu, Wei; Liang, Zhengrong

    2013-02-01

    Various types of features, e.g., geometric features, texture features, projection features, etc., have been introduced for polyp detection and differentiation tasks via computer-aided detection and diagnosis (CAD) for computed tomography colonography (CTC). Although these features together cover more information about the data, some of them are statistically highly related to others, which makes the feature set redundant and burdens the computational task of CAD. In this paper, we propose a new dimension reduction method that combines hierarchical clustering and principal component analysis (PCA) for the false-positive (FP) reduction task. First, we group all the features based on their similarity using hierarchical clustering, and then PCA is employed within each group. Different numbers of principal components are selected from each group to form the final feature set. A support vector machine is used to perform the classification. The results show that when three principal components were chosen from each group we could achieve an area under the receiver operating characteristic curve of 0.905, as high as with the original dataset, while the computation time was reduced by 70% and the feature set size by 77%. It can be concluded that the proposed method captures the most important information in the feature set and that classification accuracy is not affected by the dimension reduction. The result is promising, and further investigation, such as automatic threshold setting, is worthwhile and in progress.
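
    The grouping-then-PCA reduction is straightforward to sketch. The illustration below, on synthetic data and with an assumed number of clusters and of components per cluster, groups features by a correlation-based distance, applies PCA within each group, and feeds the concatenated components to an SVM.

        # Dimension reduction by hierarchical feature clustering + per-group PCA.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from sklearn.datasets import make_classification
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                                   random_state=0)

        # Distance between features: 1 - |correlation|, so statistically related
        # (redundant) features land in the same cluster.
        corr = np.corrcoef(X.T)
        dist = 1.0 - np.abs(corr)
        Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
        groups = fcluster(Z, t=5, criterion="maxclust")

        # PCA within each group; keep up to 3 principal components per group.
        parts = []
        for g in np.unique(groups):
            Xg = X[:, groups == g]
            parts.append(PCA(n_components=min(3, Xg.shape[1])).fit_transform(Xg))
        X_red = np.hstack(parts)

        print("reduced dims:", X_red.shape[1], "of", X.shape[1])
        print("SVM accuracy:", cross_val_score(SVC(), X_red, y, cv=5).mean())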

  7. A method to test the reproducibility and to improve performance of computer-aided detection schemes for digitized mammograms

    Zheng Bin; Gur, David; Good, Walter F.; Hardesty, Lara A.

    2004-01-01

    The purpose of this study is to develop a new method for assessing the reproducibility of computer-aided detection (CAD) schemes for digitized mammograms and to evaluate the possibility of using the implemented approach to improve CAD performance. Two thousand digitized mammograms (representing 500 cases) with 300 depicted verified masses were selected for the study. A series of images was generated for each digitized image by resampling after a series of slight image rotations. A CAD scheme developed in our laboratory was applied to all images to detect suspicious mass regions. We evaluated the reproducibility of the scheme using the detection sensitivity and false-positive rates for the original and resampled images. We also explored the possibility of improving CAD performance using three methods of combining results from the original and resampled images: simple grouping, averaging output scores, and averaging output scores after grouping. The CAD scheme generated a detection score (from 0 to 1) for each identified suspicious region, and a region with a detection score >0.5 was considered positive. The CAD scheme detected 238 masses (79.3% case-based sensitivity) and identified 1093 false-positive regions (average 0.55 per image) in the original image dataset. In eleven repeated tests using the original and ten sets of rotated and resampled images, the scheme detected a maximum of 271 masses and identified as many as 2359 false-positive regions. Two hundred and eighteen masses (80.4%) and 618 false-positive regions (26.2%) were detected in all 11 sets of images. Combining detection results improved reproducibility and the overall CAD performance. In the range of an average false-positive detection rate between 0.5 and 1 per image, the sensitivity of the scheme could be increased by approximately 5% by averaging the scores of the regions detected in at least four images. At a low false-positive rate (e.g., an average of ≤0.3 per image), the grouping method
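
    A toy version of the grouping-and-averaging idea is sketched below. Detections are (x, y, score) triples already mapped back to the original image coordinates; the grouping radius and the requirement that a region appear in at least four of the runs are illustrative assumptions.

        # Merging CAD detections from an original image and its rotated re-runs
        # by grouping nearby regions and averaging their scores.
        import numpy as np

        def merge_detections(runs, radius=10.0, min_hits=4):
            """runs: list of (n_i, 3) arrays of (x, y, score), one per image
            version. Returns (x, y, mean score) for groups of detections that
            appear, within 'radius', in at least 'min_hits' different runs."""
            pool = [(r, tuple(d)) for r, run in enumerate(runs) for d in run]
            used, merged = set(), []
            for i, (ri, di) in enumerate(pool):
                if i in used:
                    continue
                group = [(ri, di)]
                used.add(i)
                for j in range(i + 1, len(pool)):
                    rj, dj = pool[j]
                    if j not in used and np.hypot(di[0] - dj[0],
                                                  di[1] - dj[1]) <= radius:
                        group.append((rj, dj))
                        used.add(j)
                if len({r for r, _ in group}) >= min_hits:
                    pts = np.array([d for _, d in group])
                    merged.append(tuple(pts.mean(axis=0)))
            return merged

        # A mass found in all 5 runs survives; a one-off false positive does not.
        runs = [np.array([[100.0 + e, 200.0 - e, 0.8]]) for e in range(5)]
        runs[0] = np.vstack([runs[0], [300.0, 50.0, 0.55]])
        print(merge_detections(runs))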

  8. Improvement of Detection of Hypoattenuation in Acute Ischemic Stroke in Unenhanced Computed Tomography Using an Adaptive Smoothing Filter

    Takahashi, N.; Lee, Y.; Tsai, D. Y.; Ishii, K.; Kinoshita, T.; Tamura, H.; Kimura, M.

    2008-01-01

    Background: Much attention has been directed toward identifying early signs of cerebral ischemia on computed tomography (CT) images. Hypoattenuation of ischemic brain parenchyma has been found to be the most frequent early sign. Purpose: To evaluate the effect of a previously proposed adaptive smoothing filter on the detection of parenchymal hypoattenuation of acute ischemic stroke on unenhanced CT images. Material and Methods: Twenty-six patients with parenchymal hypoattenuation and 49 control subjects without hypoattenuation were retrospectively selected for this study. The adaptive partial median filter (APMF), designed to improve the detectability of hypoattenuation areas on unenhanced CT images, was applied. Seven radiologists, including four certified radiologists and three radiology residents, indicated their confidence level regarding the presence (or absence) of hypoattenuation, first on the CT images alone and then with the APMF-processed images. Their performance without and with the APMF-processed images was evaluated by receiver operating characteristic (ROC) analysis. Results: The mean area under the ROC curve (AUC) for all observers increased from 0.875 to 0.929 (P=0.002) when the radiologists read with the APMF-processed images. The mean sensitivity in the detection of hypoattenuation improved significantly, from 69% (126 of 182 observations) to 89% (151 of 182 observations), when employing the APMF (P=0.012). The specificity, however, was unaffected by the APMF (P=0.41). Conclusion: The APMF has the potential to improve the detection of parenchymal hypoattenuation of acute ischemic stroke on unenhanced CT images.
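
    The APMF itself is not specified in this abstract, so the sketch below shows only the general principle of such adaptive smoothing, with an assumed window size and flatness gate: median smoothing is applied where the neighbourhood is nearly flat and withheld near edges, so subtle low-contrast areas become more uniform without blurring anatomy.

        # Edge-preserving adaptive median smoothing (illustrative stand-in for
        # the adaptive partial median filter, whose exact design differs).
        import numpy as np
        from scipy.ndimage import generic_filter, median_filter

        def adaptive_median_smooth(img, win=5, sigma_gate=15.0):
            """Replace a pixel by the windowed median only where the local
            standard deviation is small (flat region); leave edges intact."""
            med = median_filter(img, size=win)
            local_sd = generic_filter(img.astype(float), np.std, size=win)
            out = img.astype(float).copy()
            flat = local_sd < sigma_gate
            out[flat] = med[flat]
            return out

        # Toy CT-like slab: a faint 3 HU hypoattenuation in noisy parenchyma.
        rng = np.random.default_rng(0)
        brain = 35.0 + rng.normal(0, 4, (64, 64))
        brain[20:40, 20:40] -= 3.0
        smoothed = adaptive_median_smooth(brain)
        print("noise SD before/after:",
              round(float(brain[:10, :10].std()), 2),
              round(float(smoothed[:10, :10].std()), 2))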

  9. Computer-aided detection of pulmonary embolism at CT pulmonary angiography: can it improve performance of inexperienced readers?

    Blackmon, Kevin N.; McCain, Joshua W.; Koonce, James D.; Costello, Philip; Florin, Charles; Bogoni, Luca; Salganicoff, Marcos; Lee, Heon; Bastarrika, Gorka; Thilo, Christian; Joseph Schoepf, U.

    2011-01-01

    To evaluate the effect of a computer-aided detection (CAD) algorithm on the performance of novice readers for detection of pulmonary embolism (PE) at CT pulmonary angiography (CTPA). We included CTPA examinations of 79 patients (50 female; 52 ± 18 years). Studies were evaluated by two independent inexperienced readers who marked all vessels containing PE. After 3 months all studies were reevaluated by the same two readers, this time aided by a CAD prototype. A consensus read by three expert radiologists served as the reference standard. Statistical analysis used χ² and McNemar testing. Expert consensus revealed 119 PEs in 32 studies. For PE detection, the sensitivity of CAD alone was 78%. Inexperienced readers' initial interpretations had an average per-PE sensitivity of 50%, which improved to 71% (p < 0.001) with CAD as a second reader. False positives increased from 0.18 to 0.25 per study (p = 0.03). Per study, the readers initially detected 27/32 positive studies (84%); with CAD this number increased to 29.5 studies (92%; p = 0.125). Our results suggest that CAD significantly improves the sensitivity of PE detection for inexperienced readers, with a small but appreciable increase in the rate of false positives. (orig.)

  10. Computer-aided detection of colorectal polyps: can it improve sensitivity of less-experienced readers? Preliminary findings.

    Baker, Mark E; Bogoni, Luca; Obuchowski, Nancy A; Dass, Chandra; Kendzierski, Renee M; Remer, Erick M; Einstein, David M; Cathier, Pascal; Jerebko, Anna; Lakare, Sarang; Blum, Andrew; Caroline, Dina F; Macari, Michael

    2007-10-01

    To determine whether computer-aided detection (CAD) applied to computed tomographic (CT) colonography can help improve sensitivity of polyp detection by less-experienced radiologist readers, with colonoscopy or consensus used as the reference standard. The release of the CT colonographic studies was approved by the individual institutional review boards of each institution. Institutions from the United States were HIPAA compliant. Written informed consent was waived at all institutions. The CT colonographic studies in 30 patients from six institutions were collected; 24 studies depicted at least one confirmed polyp 6 mm or larger (39 polyps in total) and six depicted no polyps. By using an investigational software package, seven less-experienced readers from two institutions evaluated the CT colonographic images and marked or scored polyps by using a five-point scale before and after CAD. The time needed to interpret the CT colonographic findings without CAD and then to re-evaluate them with CAD was recorded. For each reader, the McNemar test, adjusted for clustered data, was used to compare sensitivities for readers without and with CAD; a Wilcoxon signed-rank test was used to analyze the number of false-positive results per patient. The average sensitivity of the seven readers for polyp detection was significantly improved with CAD, from 0.810 to 0.908 (P=.0152). The number of false-positive results per patient without and with CAD increased from 0.70 to 0.96 (95% confidence interval for the increase: -0.39, 0.91). The mean total time for the readings was 17 minutes 54 seconds; for interpretation of CT colonographic findings alone, the mean time was 14 minutes 16 seconds; and for review of CAD findings, the mean time was 3 minutes 38 seconds. Results of this feasibility study suggest that CAD for CT colonography significantly improves per-polyp detection for less-experienced readers.

  11. Improved algorithm for computerized detection and quantification of pulmonary emphysema at high-resolution computed tomography (HRCT)

    Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik

    2001-05-01

    Emphysema is characterized by destruction of lung tissue with the development of small or large holes within the lung. These areas have Hounsfield unit (HU) values approaching -1000, so it is possible to detect and quantify such areas using a simple density mask technique. However, the edge-enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step is to apply inverse filtering to the image, removing much of the effect of the edge-enhancement reconstruction algorithm. The next step is computation of, and correction for, the antero-posterior density gradient caused by gravity. In a third step, motion artefacts are corrected using normalized averaging, thresholding, and region growing. Twenty healthy volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique it was not possible to separate persons with disease from those without; our algorithm improved the separation of the two groups considerably. The algorithm needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema at HRCT.
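
    A minimal sketch of the baseline density mask with a simple antero-posterior gradient correction follows. The inverse filtering and motion-correction steps are omitted, and the -950 HU threshold is a commonly used illustrative value, not necessarily the authors' setting.

        # Density-mask emphysema index with a linear gravity-gradient correction.
        import numpy as np

        def emphysema_index(slice_hu, lung_mask, threshold=-950.0):
            """Fraction of lung below the HU threshold after removing a linear
            antero-posterior (row-wise) density gradient fitted in the lung."""
            rows = np.arange(slice_hu.shape[0], dtype=float)
            row_means = np.array([slice_hu[r][lung_mask[r]].mean()
                                  if lung_mask[r].any() else np.nan
                                  for r in range(slice_hu.shape[0])])
            ok = ~np.isnan(row_means)
            slope, _ = np.polyfit(rows[ok], row_means[ok], 1)
            corrected = slice_hu - slope * (rows[:, None] - rows[ok].mean())
            return float((corrected[lung_mask] < threshold).mean())

        # Toy slice: lung at -870 HU with a gravity gradient and a -980 HU hole.
        img = np.full((128, 128), -870.0) + np.linspace(-20, 20, 128)[:, None]
        img[40:50, 40:50] = -980.0
        mask = np.ones(img.shape, dtype=bool)
        print("emphysema index:", emphysema_index(img, mask))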

  12. Computer-aided detection systems to improve lung cancer early diagnosis: state-of-the-art and challenges

    Traverso, A; Lopez Torres, E; Cerello, P; Fantacci, M E

    2017-01-01

    Lung cancer is one of the most lethal types of cancer, in large part because early diagnosis remains difficult. The detection of pulmonary nodules, potential lung cancers, in Computed Tomography scans is a very challenging and time-consuming task for radiologists. To support radiologists, researchers have developed Computer-Aided Diagnosis (CAD) systems for the automated detection of pulmonary nodules in chest Computed Tomography scans. Despite the high level of technological development and the proven benefits for overall detection performance, the use of Computer-Aided Diagnosis in clinical practice is far from being a common procedure. In this paper we investigate the causes underlying this discrepancy and present a solution to tackle it: the M5L web- and cloud-based on-demand Computer-Aided Diagnosis. In addition, we show how the combination of traditional image processing techniques with state-of-the-art classification algorithms allows building a system whose performance can substantially exceed that of any Computer-Aided Diagnosis developed so far. This outcome opens the possibility of using CAD as clinical decision support for radiologists. (paper)

  13. Improving computer-aided detection assistance in breast cancer screening by removal of obviously false-positive findings

    Mordang, Jan-Jurre; Gubern-Merida, Albert; Bria, Alessandro; Tortorella, Francesco; den Heeten, Gerard; Karssemeijer, Nico

    2017-01-01

    Purpose: Computer-aided detection (CADe) systems for mammography screening still mark many false positives. This can cause radiologists to lose confidence in CADe, especially when many false positives are obviously not suspicious to them. In this study, we focus on obvious false positives generated

  14. Can breast MRI computer-aided detection (CAD) improve radiologist accuracy for lesions detected at MRI screening and recommended for biopsy in a high-risk population?

    Arazi-Kleinman, T.; Causer, P.A.; Jong, R.A.; Hill, K.; Warner, E.

    2009-01-01

    Aim: To evaluate the sensitivity and specificity of magnetic resonance imaging (MRI) computer-aided detection (CAD) for breast MRI screen-detected lesions recommended for biopsy in a high-risk population. Material and methods: Fifty-six consecutive Breast Imaging Reporting and Data System (BI-RADS) 3-5 lesions with histopathological correlation [nine invasive cancers, 13 ductal carcinoma in situ (DCIS) and 34 benign] were retrospectively evaluated using a breast MRI CAD prototype (CAD-Gaea). CAD evaluation was performed separately and in consensus by two radiologists specializing in breast imaging, blinded to the histopathology. Enhancement thresholds of 50, 80, and 100% and delayed enhancement patterns were independently assessed with CAD. Lesions were rated as malignant or benign according to threshold and delayed enhancement, alone and in combination. Sensitivities, specificities, and negative predictive values (NPV) were determined for CAD assessments versus pathology. Initial MRI BI-RADS interpretation without CAD was compared with the CAD assessments using paired binary diagnostic tests. Results: Threshold levels for lesion enhancement were 50% to include all malignant (and all benign) lesions, and 100% for all invasive cancer and high-grade DCIS. Combined use of threshold and enhancement patterns for CAD assessment was best (73% sensitivity, 56% specificity and 76% NPV for all cancer). Sensitivities and NPV were better for invasive cancer (100%/100%) than for all malignancies (54%/76%). Radiologists' MRI interpretation was more sensitive than CAD (p = 0.05), but less specific (p = 0.001) for cancer detection. Conclusion: The breast MRI CAD system used could not improve the radiologists' accuracy for distinguishing all malignant from benign lesions, owing to its poor sensitivity for DCIS detection.

  19. "Blind spots" in forensic autopsy: improved detection of retrobulbar hemorrhage and orbital lesions by postmortem computed tomography (PMCT).

    Flach, P M; Egli, T C; Bolliger, S A; Berger, N; Ampanozi, G; Thali, M J; Schweitzer, W

    2014-09-01

    The purpose of this study was to correlate the occurrence of retrobulbar hemorrhage (RBH) with the mechanism of injury, external signs, and autopsy findings, and to compare these with postmortem computed tomography (PMCT). Sixteen subjects presented with RBH and underwent PMCT, external inspection, and conventional autopsy. External inspection was evaluated for findings of the eye bulbs, black eye, raccoon eyes, and Battle's sign. Fractures of the viscerocranium, orbital lesions, and RBH were evaluated by PMCT. Autopsy and PMCT were evaluated for orbital roof and basilar skull fractures. The leading manner of death in cases of RBH was accident with central regulatory failure (31.25%). Imaging showed high sensitivity in the detection of orbital roof and basilar skull fractures (100%) but was less specific compared to autopsy. The volume of RBH (0.1-2.4 ml) correlated positively with the presence of Battle's sign. PMCT was superior in detecting osseous lesions, scrutinizing autopsy as the gold standard.

  16. Computer Viruses: Pathology and Detection.

    Maxwell, John R.; Lamon, William E.

    1992-01-01

    Explains how computer viruses were originally created, how a computer can become infected by a virus, how viruses operate, symptoms that indicate a computer is infected, how to detect and remove viruses, and how to prevent a reinfection. A sidebar lists eight antivirus resources.

  1. Improved detection of pulmonary nodules on energy-subtracted chest radiographs with a commercial computer-aided diagnosis software: comparison with human observers

    Szucs-Farkas, Zsolt; Patak, Michael A.; Yuksel-Hatz, Seyran; Ruder, Thomas; Vock, Peter

    2010-01-01

    To retrospectively analyze the performance of commercial computer-aided diagnosis (CAD) software in the detection of pulmonary nodules in original and energy-subtracted (ES) chest radiographs. Original and ES chest radiographs of 58 patients with 105 pulmonary nodules measuring 5-30 mm and images of 25 control subjects with no nodules were randomized. Five blinded readers first evaluated the original postero-anterior images alone and then together with the subtracted radiographs. In a second phase, original and ES images were analyzed by the commercial CAD program. CT was used as the reference standard, and the CAD results were compared to the readers' findings. True-positive (TP) and false-positive (FP) findings with CAD on subtracted and non-subtracted images were compared. Depending on the reader's experience, CAD detected between 11 and 21 nodules missed by readers. Human observers found three to 16 lesions missed by the CAD software. CAD used with ES images produced significantly fewer FPs than with non-subtracted images: 1.75 versus 2.14 FPs per image (p=0.029). The difference for TP nodules was not significant (40 nodules on ES images versus 34 lesions on non-subtracted radiographs, p = 0.142). CAD can improve lesion detection on both energy-subtracted and non-subtracted chest images, especially for less experienced readers, and it marked fewer FPs on energy-subtracted images than on original chest radiographs. (orig.)

  2. GATE: Improving the computational efficiency

    Staelens, S.; De Beenhouwer, J.; Kruecker, D.; Maigne, L.; Rannou, F.; Ferrer, L.; D'Asseler, Y.; Buvat, I.; Lemahieu, I.

    2006-01-01

    GATE is a software package dedicated to Monte Carlo simulations in Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). An important disadvantage of such simulations is the fundamental burden of computation time. This manuscript describes three different techniques for improving their efficiency. First, the implementation of variance reduction techniques (VRTs), more specifically the incorporation of geometrical importance sampling, is discussed. Next, the newly designed cluster version of the GATE software is described; experiments have shown that GATE simulations scale very well on a cluster of homogeneous computers. Finally, an elaboration on the deployment of GATE on the Enabling Grids for E-Science in Europe (EGEE) grid concludes the description of the efficiency enhancement efforts. The three aforementioned methods improve the efficiency of GATE to a large extent and make realistic patient-specific overnight Monte Carlo simulations achievable.

  3. Radiation Detection Computational Benchmark Scenarios

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  4. Improving engineers' performance with computers

    Purvis, E.E. III

    1984-01-01

    The problem addressed is how to improve the performance of engineers in the design, operation, and maintenance of nuclear power plants. The application of computer science to this problem offers a challenge in maximizing the use of developments outside the nuclear industry and setting priorities to address the most fruitful areas first. Areas of potential benefit include database management through design, analysis, procurement, construction, operation, and maintenance; cost, schedule, and interface control and planning; and quality engineering on specifications, inspection, and training.

  5. Improving digital breast tomosynthesis reading time: A pilot multi-reader, multi-case study using concurrent Computer-Aided Detection (CAD).

    Balleyguier, Corinne; Arfi-Rouche, Julia; Levy, Laurent; Toubiana, Patrick R; Cohen-Scali, Franck; Toledano, Alicia Y; Boyer, Bruno

    2017-12-01

    Evaluate concurrent Computer-Aided Detection (CAD) with Digital Breast Tomosynthesis (DBT) to determine impact on radiologist performance and reading time. The CAD system detects and extracts suspicious masses, architectural distortions and asymmetries from DBT planes that are blended into corresponding synthetic images to form CAD-enhanced synthetic images. Review of CAD-enhanced images and navigation to corresponding planes to confirm or dismiss potential lesions allows radiologists to more quickly review DBT planes. A retrospective, crossover study with and without CAD was conducted with six radiologists who read an enriched sample of 80 DBT cases including 23 malignant lesions in 21 women. Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) compared the readings with and without CAD to determine the effect of CAD on overall interpretation performance. Sensitivity, specificity, recall rate and reading time were also assessed. Multi-reader, multi-case (MRMC) methods accounting for correlation and requiring correct lesion localization were used to analyze all endpoints. AUCs were based on a 0-100% probability of malignancy (POM) score. Sensitivity and specificity were based on BI-RADS scores, where 3 or higher was positive. Average AUC across readers without CAD was 0.854 (range: 0.785-0.891, 95% confidence interval (CI): 0.769,0.939) and 0.850 (range: 0.746-0.905, 95% CI: 0.751,0.949) with CAD (95% CI for difference: -0.046,0.039), demonstrating non-inferiority of AUC. Average reduction in reading time with CAD was 23.5% (95% CI: 7.0-37.0% improvement), from an average 48.2 (95% CI: 39.1,59.6) seconds without CAD to 39.1 (95% CI: 26.2,54.5) seconds with CAD. Per-patient sensitivity was the same with and without CAD (0.865; 95% CI for difference: -0.070,0.070), and there was a small 0.022 improvement (95% CI for difference: -0.046,0.089) in per-lesion sensitivity from 0.790 without CAD to 0.812 with CAD. A slight reduction in specificity with a -0

  6. Link failure detection in a parallel computer

    Archer, Charles J.; Blocksome, Michael A.; Megerian, Mark G.; Smith, Brian E.

    2010-11-09

    Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.
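
    The scheme can be simulated in a few lines. The sketch below uses plain Python in place of real compute nodes and network hardware, with failures injected through a set of bad links: it two-colors a small 2-D mesh so that adjacent nodes fall in different groups, lets first-group nodes send a test message over every link, and reports the links over which nothing arrived.

        # Checkerboard link test on a 2-D mesh (pure-Python simulation).
        def color(node):
            x, y = node
            return (x + y) % 2       # adjacent mesh nodes get different groups

        def detect_failed_links(width, height, failed_links):
            """First-group nodes send over every link to second-group
            neighbours; every mesh link joins the two groups, so one round of
            messages exercises each link once. Silent links are reported."""
            received = set()
            for x in range(width):
                for y in range(height):
                    if color((x, y)) != 0:
                        continue
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < width and 0 <= ny < height:
                            link = frozenset({(x, y), (nx, ny)})
                            if link not in failed_links:
                                received.add(link)     # message got through
            missing = []                               # links that stayed silent
            for x in range(width):
                for y in range(height):
                    for nx, ny in ((x + 1, y), (x, y + 1)):
                        if nx < width and ny < height:
                            link = frozenset({(x, y), (nx, ny)})
                            if link not in received:
                                missing.append(tuple(sorted(link)))
            return missing

        bad = {frozenset({(1, 1), (1, 2)})}
        print("failed links:", detect_failed_links(4, 4, bad))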

  7. Improved immunocytochemical detection of daunomycin

    Ohara, Koji; Shin, Masashi; Larsson, Lars-Inge

    2007-01-01

    Improved immunocytochemical (ICC) detection of the anthracycline anticancer antibiotic daunomycin (DM) has been achieved by use of hydrogen peroxide oxidation prior to ICC staining for DM. The new method greatly enhanced the localization of DM accumulation in cardiac, smooth and skeletal muscle and in mitochondria of heart muscle cells, and may help to improve our understanding of the cardiac toxicity of DM and related anthracyclin antibiotics. A number of ELISA tests were carried out in order to elucidate the mechanisms of H2O2-assisted antigen retrieval. A possible mechanism is that DM is reduced and converted to its semiquinone and/or hydroquinone derivative in vivo; oxidation by hydrogen peroxide acts to convert these derivatives back to the native antigen. The improved ICC methodology, using oxidation to recreate native antigens from reduced metabolites, may be helpful also with respect to the localization...

  8. QRS Detection Based on Improved Adaptive Threshold

    Xuanyu Lu

    2018-01-01

    Cardiovascular disease is the leading cause of death around the world. Automatic electrocardiogram (ECG) analysis algorithms, whose first step is QRS detection, play an important role in accomplishing quick and accurate diagnosis. The threshold algorithm for QRS complex detection is known for its high-speed computation and minimal memory storage. In this mobile era, threshold algorithms can easily be ported into portable, wearable, and wireless ECG systems. However, the detection rate of the threshold algorithm still calls for improvement. An improved adaptive threshold algorithm for QRS detection is reported in this paper. The main steps of this algorithm are preprocessing, peak finding, and adaptive-threshold QRS detecting. The detection rate is 99.41%, the sensitivity (Se) is 99.72%, and the specificity (Sp) is 99.69% on the MIT-BIH Arrhythmia database. A comparison is also made with two other algorithms to demonstrate the improvement. The suspicious abnormal area is marked at the end of the algorithm, and an RR-Lorenz plot is drawn for doctors and cardiologists to use as an aid for diagnosis.
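
    A minimal detector following the same preprocess / peak-find / adaptive-threshold recipe is sketched below, with Pan-Tompkins-style running signal and noise estimates; the paper's particular adaptation rule and its MIT-BIH figures are not reproduced.

        # Minimal adaptive-threshold QRS detector on a synthetic ECG.
        import numpy as np

        def detect_qrs(ecg, fs):
            # Preprocess: derivative -> squaring -> moving-window integration.
            win = int(0.15 * fs)
            feat = np.convolve(np.diff(ecg) ** 2, np.ones(win) / win,
                               mode="same")
            spk, npk = 0.5 * feat.max(), feat.mean()  # signal / noise levels
            thr = npk + 0.25 * (spk - npk)
            refractory, last, peaks = int(0.2 * fs), -int(0.2 * fs), []
            for i in range(1, len(feat) - 1):
                if feat[i] >= feat[i - 1] and feat[i] > feat[i + 1]:  # local max
                    if feat[i] > thr and i - last > refractory:
                        peaks.append(i)
                        last = i
                        spk = 0.125 * feat[i] + 0.875 * spk
                    else:
                        npk = 0.125 * feat[i] + 0.875 * npk
                    thr = npk + 0.25 * (spk - npk)    # adapt the threshold
            return peaks

        fs = 250
        ecg = 0.05 * np.random.default_rng(0).normal(size=10 * fs)
        ecg[::fs] += 1.0                              # one 'R wave' per second
        print("beats detected:", len(detect_qrs(ecg, fs)))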

  9. Touchable Computing: Computing-Inspired Bio-Detection.

    Chen, Yifan; Shi, Shaolong; Yao, Xin; Nakano, Tadashi

    2017-12-01

    We propose a new computing-inspired bio-detection framework called touchable computing (TouchComp). Under the rubric of TouchComp, the best solution is the cancer to be detected, the parameter space is the tissue region at high risk of malignancy, and the agents are the nanorobots loaded with contrast medium molecules for tracking purpose. Subsequently, the cancer detection procedure (CDP) can be interpreted from the computational optimization perspective: a population of externally steerable agents (i.e., nanorobots) locate the optimal solution (i.e., cancer) by moving through the parameter space (i.e., tissue under screening), whose landscape (i.e., a prescribed feature of tissue environment) may be altered by these agents but the location of the best solution remains unchanged. One can then infer the landscape by observing the movement of agents by applying the "seeing-is-sensing" principle. The term "touchable" emphasizes the framework's similarity to controlling by touching the screen with a finger, where the external field for controlling and tracking acts as the finger. Given this analogy, we aim to answer the following profound question: can we look to the fertile field of computational optimization algorithms for solutions to achieve effective cancer detection that are fast, accurate, and robust? Along this line of thought, we consider the classical particle swarm optimization (PSO) as an example and propose the PSO-inspired CDP, which differs from the standard PSO by taking into account realistic in vivo propagation and controlling of nanorobots. Finally, we present comprehensive numerical examples to demonstrate the effectiveness of the PSO-inspired CDP for different blood flow velocity profiles caused by tumor-induced angiogenesis. The proposed TouchComp bio-detection framework may be regarded as one form of natural computing that employs natural materials to compute.

  10. An Adaptive Middleware for Improved Computational Performance

    Bonnichsen, Lars Frydendal

    The performance improvements in computer systems over the past 60 years have been fueled by an exponential increase in energy efficiency. In recent years, the phenomenon known as the end of Dennard's scaling has slowed energy efficiency improvements, but improving computer energy efficiency is more important now than ever. Traditionally, most improvements in computer energy efficiency have come from improvements in lithography (the ability to produce smaller transistors) and computer architecture (the ability to apply those transistors efficiently). Since the end of scaling, we have seen... In this work, we improve computational performance by exploiting modern hardware features, such as dynamic voltage-frequency scaling and transactional memory. Adapting software is an iterative process, requiring that we continually revisit it to meet new requirements or realities; a time-consuming process...

  11. Computer simulation of probability of detection

    Fertig, K.W.; Richardson, J.M.

    1983-01-01

    This paper describes an integrated model for assessing the performance of a given ultrasonic inspection system for detecting internal flaws, where the performance of such a system is measured by probability of detection. The effects of real part geometries on sound propagation are accounted for, and the noise spectra due to various noise mechanisms are measured. An ultrasonic inspection simulation computer code has been developed that can detect flaws with attributes ranging over an extensive class. The detection decision is considered to be a binary decision based on one received waveform obtained in a pulse-echo or pitch-catch setup. This study focuses on the detectability of flaws using an amplitude-thresholding criterion. Some preliminary results on the detectability of radially oriented cracks in IN-100 for bore-like geometries are given.
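
    In the same spirit, the short sketch below estimates a POD curve by Monte Carlo with an amplitude-threshold decision. The linear amplitude-versus-flaw-size model and the Gaussian noise are stand-ins for the physics-based propagation and the measured noise spectra of the actual code.

        # Monte Carlo probability of detection with an amplitude threshold.
        import numpy as np

        rng = np.random.default_rng(1)

        def pod_curve(flaw_sizes, threshold, gain=2.0, noise_sd=0.5, trials=5000):
            """POD(a) = P(peak amplitude of signal(a) + noise > threshold),
            one binary decision per simulated pulse-echo waveform."""
            return np.array([(gain * a + rng.normal(0, noise_sd, trials)
                              > threshold).mean() for a in flaw_sizes])

        sizes = np.linspace(0.0, 2.0, 9)
        for a, p in zip(sizes, pod_curve(sizes, threshold=2.0)):
            print(f"flaw size {a:4.2f} mm -> POD {p:5.3f}")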

  12. Adaptively detecting changes in Autonomic Grid Computing

    Zhang, Xiangliang

    2010-10-01

    Detecting changes is a common issue in many application fields, owing to the non-stationary distribution of the applicative data, e.g., sensor network signals, web logs and grid-running logs. Toward Autonomic Grid Computing, adaptively detecting the changes in a grid system can help to alarm on anomalies, clean noise, and report new patterns. In this paper, we propose an approach of self-adaptive change detection based on the Page-Hinkley statistical test. It handles non-stationary distributions without assumptions about the data distribution or empirical parameter settings. We validate the approach on EGEE streaming jobs and report its better performance, achieving higher accuracy compared to other change detection methods. Meanwhile, this change detection process can help to discover device faults that were not claimed in the system logs.
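
    The Page-Hinkley statistic at the heart of the approach is compact enough to show in full. The sketch below uses the standard textbook form of the test with fixed illustrative parameters, not the self-adaptive settings proposed in the paper.

        # Page-Hinkley test: flag an increase in the stream mean.
        import random

        def page_hinkley(stream, delta=0.05, lam=5.0):
            """Return the index at which a mean increase is flagged, else None."""
            mean = cum = cum_min = 0.0
            for t, x in enumerate(stream, start=1):
                mean += (x - mean) / t           # running mean of the stream
                cum += x - mean - delta          # cumulative positive drift
                cum_min = min(cum_min, cum)
                if cum - cum_min > lam:          # PH statistic exceeds lambda
                    return t - 1
            return None

        random.seed(0)
        data = [random.gauss(0.0, 1.0) for _ in range(200)]
        data += [random.gauss(1.5, 1.0) for _ in range(200)]  # shift at t = 200
        print("change detected at index:", page_hinkley(data))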

  13. Improvement and development of automatic detection techniques

    Yamada, Kiyomi; Takai, Setsuo; Togashi, Chikako; Itami, Jun

    2000-01-01

    For the detection of radiation-induced mutation, establishment of a new sample preparation method, with procedures suitable for automation, is thought to be the key step in improving detection efficacy and saving labor. In this study, the sensitivity to radiation exposure with respect to the occurrence of chromosomal breakage was investigated using a high-precision chromosome painting method utilizing FISH. The number of chromosome breakages per cell was determined in chromosomes 1, 4, 5, 9, 11 and 13 prepared from an identical sample exposed to three different doses. The breakage number was found to increase linearly with the amount of chromosomal DNA, and hotspots of the radiation-induced breakages tended to concentrate in the R band, whose position was almost coincident with the sites of the chromosomal translocation breakages specific to leukemia, showing a correlation of radiation exposure to leukemia. Chromosomes 13, 14 and 15, which differ in band pattern but are similar in length, taken from cells exposed to X-rays at 5 Gy, were investigated in detail, and it was found that the sensitivity of a chromosome to radiation depends on the quantity and quality of the R band in each chromosome. The benefits of this chromosome painting method for the analysis of chromosome breakage were as follows: compared with the conventional dicentric method, many more kinds of chromosomal abnormalities were detectable, and the detection rate as well as the accuracy was higher. In addition, the time required for determination was less than one-tenth of that of the conventional method. A breakage site was detectable through differences in color tone, so no special technique was necessary. Therefore, the chromosome painting method by FISH was demonstrated to be well suited for automatic image analysis by computer. (M.N.)

  14. Computer Screen Use Detection Using Smart Eyeglasses

    Florian Wahl

    2017-05-01

    Screen use can influence the circadian phase and cause eye strain. Smart eyeglasses with an integrated color light sensor can detect screen use. We present a screen-use detection approach based on a light sensor embedded into the bridge of smart eyeglasses. By calculating the light intensity at the user's eyes for different screens and content types, we found only computer screens to have a significant impact on the circadian phase. Our screen-use detection is based on ratios between color channels and uses a linear support vector machine to detect screen use. We validated our detection approach in three studies. A test bench was built to detect screen use under different ambient light sources and intensities in a controlled environment. In a lab study, we evaluated recognition performance for different ambient light intensities. Using participant-independent models, we achieved an ROC AUC above 0.9 for ambient light intensities below 200 lx. In a study of typical activities of daily living (ADLs), screen use was detected with an average ROC AUC of 0.83, assuming screen use for 30% of the time.
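
    The feature design translates directly into code. The sketch below trains a linear SVM on ratios between the sensor's color channels; the synthetic readings (bluish screens, warmer ambient light) are placeholders for the eyeglasses data, which the abstract does not provide.

        # Screen-use classification from color-channel ratios with a linear SVM.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)

        def channel_ratios(rgb):
            r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
            return np.column_stack([r / g, r / b, g / b])

        # Synthetic samples: screens assumed bluish, ambient light warmer.
        screen = rng.normal([80, 90, 140], 15, (300, 3)).clip(1)
        ambient = rng.normal([120, 100, 70], 25, (300, 3)).clip(1)
        X = channel_ratios(np.vstack([screen, ambient]))
        y = np.r_[np.ones(300), np.zeros(300)]

        print("cv accuracy:",
              cross_val_score(LinearSVC(), X, y, cv=5).mean().round(3))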

  15. Improved Motion Estimation Using Early Zero-Block Detection

    Y. Lin

    2008-07-01

    We incorporate the early zero-block detection technique into the UMHexagonS algorithm, which has already been adopted in the H.264/AVC JM reference software, to speed up the motion estimation process. A nearly sufficient condition is derived for early zero-block detection. Although the conventional early zero-block detection method achieves a significant reduction in computation, the PSNR loss, to whatever extent, is not negligible, especially for high quantization parameter (QP) or low bit-rate coding. This paper modifies the UMHexagonS algorithm with the early zero-block detection technique to improve its coding performance. The experimental results reveal that the improved UMHexagonS algorithm greatly reduces computation while maintaining very high coding efficiency.
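
    The essence of the test is that a residual block whose SAD is small enough will quantize to all zeros, so its motion search can stop early. The sketch below uses the standard H.264 quantizer-step relation but an invented SAD bound for illustration; the paper's derived nearly sufficient condition and the integer transform are not reproduced.

        # Early zero-block check: skip work when the residual will quantize away.
        import numpy as np

        def is_early_zero_block(cur_blk, ref_blk, qp):
            """Declare a zero block when the SAD falls below a QP-dependent
            bound (illustrative bound; not the paper's derived condition)."""
            sad = np.abs(cur_blk.astype(int) - ref_blk.astype(int)).sum()
            qstep = 0.625 * 2 ** (qp / 6.0)  # H.264: step doubles every 6 QP
            return sad < 4.0 * qstep

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, (4, 4))
        near = ref + rng.integers(-1, 2, (4, 4))   # near-perfect match
        far = rng.integers(0, 256, (4, 4))         # unrelated block
        for qp in (20, 32, 44):
            print(qp, is_early_zero_block(near, ref, qp),
                  is_early_zero_block(far, ref, qp))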

  16. Detecting Soft Errors in Stencil based Computations

    Sharma, V. [Univ. of Utah, Salt Lake City, UT (United States); Gopalkrishnan, G. [Univ. of Utah, Salt Lake City, UT (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
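
    The sketch below applies the idea to a 1-D three-point stencil (SORREL itself is not used, and the tolerance is an assumption): a linear model is trained to predict each updated cell from its old neighbourhood, and a large run-time residual flags a suspected bit flip.

        # Regression-based soft-error detector for a 1-D 3-point stencil.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        def step(u):                    # heat-equation-like stencil update
            v = u.copy()
            v[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]
            return v

        def neighbourhoods(u):
            return np.column_stack([u[:-2], u[1:-1], u[2:]])

        rng = np.random.default_rng(0)

        # Train on (old neighbourhood -> updated value) pairs from clean runs.
        Xs, ys = [], []
        for _ in range(20):
            u = rng.normal(size=64)
            Xs.append(neighbourhoods(u))
            ys.append(step(u)[1:-1])
        model = LinearRegression().fit(np.vstack(Xs), np.concatenate(ys))
        tol = 1e-6                      # residuals are ~0 for this exact fit

        # Detect: corrupt one cell after an update, look for large residuals.
        u = rng.normal(size=64)
        v = step(u)
        v[30] += 2.0 ** -3              # injected soft error
        resid = np.abs(v[1:-1] - model.predict(neighbourhoods(u)))
        print("suspected faulty cells:", np.flatnonzero(resid > tol) + 1)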

  17. Computer-aided detection in computed tomography colonography. Current status and problems with detection of early colorectal cancer

    Morimoto, Tsuyoshi; Nakijima, Yasuo; Iinuma, Gen; Arai, Yasuaki; Shiraishi, Junji; Moriyama, Noriyuki; Beddoe, G.

    2008-01-01

    The aim of this study was to evaluate the usefulness of computer-aided detection (CAD) in diagnosing early colorectal cancer using computed tomography colonography (CTC). A total of 30 CTC data sets for 30 early colorectal cancers in 30 patients were retrospectively reviewed by three radiologists. After primary evaluation, a second reading was performed using the CAD findings. The readers evaluated each colorectal segment for the presence or absence of colorectal cancer using five confidence rating levels. To compare the assessment results, the sensitivity and specificity with and without CAD were calculated on the basis of the confidence ratings, and differences in these variables were analyzed by receiver operating characteristic (ROC) analysis. The average sensitivities for detection without and with CAD for the three readers were 81.6% and 75.6%, respectively. Among the three readers, only one improved in sensitivity with CAD compared to without. CAD decreased specificity in all three readers. CAD detected 100% of protruding lesions but only 69.2% of flat lesions. On ROC analysis, the diagnostic performance of all three readers was decreased by the use of CAD. Currently available CAD with CTC does not improve diagnostic performance for detecting early colorectal cancer. An improved CAD algorithm is required for detecting flat lesions and reducing the false-positive rate. (author)

  18. Improvement in the detection of locoregional recurrence in head and neck malignancies: F-18 fluorodeoxyglucose-positron emission tomography/computed tomography compared to high-resolution contrast-enhanced computed tomography and endoscopic examination.

    Rangaswamy, Balasubramanya; Fardanesh, M Reza; Genden, Eric M; Park, Eunice E; Fatterpekar, Girish; Patel, Zara; Kim, Jongho; Som, Peter M; Kostakoglu, Lale

    2013-11-01

    To compare the diagnostic efficacy of positron emission tomography (PET) with F-18 fluorodeoxyglucose (FDG-PET)/computed tomography (CT) to that of contrast-enhanced high-resolution CT (HRCT), to assess the value of a combinatorial approach in the detection of recurrent squamous cell cancer of the head and neck (HNC), and to assess the efficacy of FDG-PET/CT with and without HRCT in comparison to standard-of-care follow-up, namely physical examination (PE) and endoscopy (E), in the determination of locally recurrent HNC. Retrospective study. A total of 103 patients with HNC underwent FDG-PET/CT and neck HRCT. There were two groups of patients: Group A had an FDG-PET study acquired with low-dose CT (LDCT), and group B had an FDG-PET study acquired with HRCT. The PET data obtained with or without HRCT were compared on a lesion and patient basis with the results of the PE/E. On a lesion basis, both groups combined had higher sensitivity and negative predictive value (NPV) than HRCT. Specificity and positive predictive value (PPV) for group B were higher than for group A. On a patient basis, both groups combined had higher sensitivity and NPV than PE/E, although the specificity of PE/E was higher than that of either group. PET data obtained with either protocol directly influenced treatment. HRCT increases the specificity and PPV of PET/CT when acquired simultaneously with PET. FDG-PET/CT acquired with either LDCT or HRCT has higher accuracy than HRCT alone and increases the sensitivity and NPV of PE/E.

  20. Efficacy of computer-aided detection system for screening mammography

    Saito, Mioko; Ohnuki, Koji; Yamada, Takayuki; Saito, Haruo; Ishibashi, Tadashi; Ohuchi, Noriaki; Takahashi, Shoki

    2002-01-01

    A study was conducted to evaluate the efficacy of a computer-aided detection (CAD) system for screening mammography (MMG). Screening mammograms of 2,231 women aged over 50 yr were examined. Medio-lateral oblique (MLO) images were obtained, and two expert observers interpreted the mammograms by consensus. First, each mammogram was interpreted without the assistance of CAD, followed immediately by a re-evaluation of areas marked by the CAD system. Data were recorded to measure the effect of CAD on the recall rate, cancer detection rate and detection rate of masses, microcalcifications and other findings. The CAD system increased the recall rate from 2.3% to 2.6%. Six recalled cases were diagnosed as breast cancer pathologically, and CAD detected all of these lesions. Seven additional cases in which CAD detected abnormal findings had no malignancy. The detection rate of CAD for microcalcifications was high (95.0%). However, the detection rate for mass lesions and other findings was low (29.2% and 25.0% respectively). The false positivity rate was 0.13/film for microcalcifications, and 0.25/film for mass lesions. The efficacy of the CAD system for detecting microcalcifications on screening mammograms was confirmed. However, the low detection rate of mass lesions and relatively high rate of false positivity need to be further improved. (author)

  1. Improving HOG with image segmentation: application to human detection

    Salas, Y.S.; Bermudez, D.V.; Peña, A.M.L.; Gomez, D.G.; Gevers, T.

    2012-01-01

    In this paper we improve the histogram of oriented gradients (HOG), a core descriptor of state-of-the-art object detection, by the use of higher-level information coming from image segmentation. The idea is to re-weight the descriptor while computing it without increasing its size. The benefits of
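
    The following minimal sketch illustrates the kind of re-weighting described above, under stated assumptions: a toy segmentation stands in for the higher-level segmenter, and the weighting scheme (up-weight pixels belonging to the cell's dominant segment) is a hypothetical illustration, not the authors' exact formula. The descriptor length is unchanged because only the votes are scaled.

```python
import numpy as np

def weighted_hog_cell(patch, weights, n_bins=9):
    """Orientation histogram of one HOG cell with per-pixel re-weighting.

    Each pixel votes its gradient magnitude, scaled by a segmentation-derived
    weight, into an unsigned-orientation bin; n_bins is unchanged.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0          # unsigned gradients
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=(mag * weights).ravel(),
                       minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-9)           # L2 normalisation

# Hypothetical weighting: emphasise pixels in the cell's dominant segment.
patch = np.random.rand(8, 8)
labels = (np.random.rand(8, 8) > 0.5).astype(int)         # stand-in segmentation
dominant = np.bincount(labels.ravel()).argmax()
weights = np.where(labels == dominant, 1.0, 0.5)
print(weighted_hog_cell(patch, weights))
```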

  2. Abstracting massive data for lightweight intrusion detection in computer networks

    Wang, Wei

    2016-10-15

    Anomaly intrusion detection in big data environments calls for lightweight models that are able to achieve real-time performance during detection. Abstracting audit data provides a solution to improve the efficiency of data processing in intrusion detection. Data abstraction refers to abstracting or extracting the most relevant information from a massive dataset. In this work, we propose three strategies of data abstraction, namely, exemplar extraction, attribute selection and attribute abstraction. We first propose an effective method called exemplar extraction to extract representative subsets from the original massive data prior to building the detection models. Two clustering algorithms, Affinity Propagation (AP) and traditional k-means, are employed to find the exemplars from the audit data. k-Nearest Neighbor (k-NN), Principal Component Analysis (PCA) and one-class Support Vector Machine (SVM) are used for the detection. We then employ another two strategies, attribute selection and attribute extraction, to abstract audit data for anomaly intrusion detection. Two http streams collected from a real computing environment as well as the KDD'99 benchmark data set are used to validate these three strategies of data abstraction. The comprehensive experimental results show that while all three strategies improve the detection efficiency, the AP-based exemplar extraction achieves the best performance of data abstraction.
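
    A minimal sketch of the exemplar-extraction strategy, under stated assumptions (synthetic feature vectors stand in for preprocessed audit records, and all parameter values are illustrative): Affinity Propagation selects a small set of exemplars, and a one-class SVM trained on those exemplars alone performs the anomaly detection.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
audit = rng.normal(0.0, 1.0, size=(500, 10))      # stand-in for audit records
test = np.vstack([rng.normal(0, 1, (50, 10)),     # normal traffic
                  rng.normal(5, 1, (50, 10))])    # anomalous traffic

# Exemplar extraction: AP picks representative records, shrinking the
# training set before the detection model is built.
ap = AffinityPropagation(damping=0.9, random_state=0).fit(audit)
exemplars = ap.cluster_centers_
print(f"{len(exemplars)} exemplars abstracted from {len(audit)} records")

# One-class SVM trained on the abstracted exemplars only.
detector = OneClassSVM(nu=0.05, gamma="scale").fit(exemplars)
print("flagged anomalies:", np.sum(detector.predict(test) == -1))  # -1 = anomaly
```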

  3. IMPROVING CAUSE DETECTION SYSTEMS WITH ACTIVE LEARNING

    National Aeronautics and Space Administration — Isaac Persing and Vincent Ng. Active learning has been successfully applied to many natural language...

  4. Delamination detection using methods of computational intelligence

    Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata

    2012-11-01

    A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and such indicators can hence be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates is presented in this paper that relies on methods based on computational intelligence. Typical changes in the observed vibration characteristics (i.e. change in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of the datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.
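
    A minimal sketch of the inverse-modeling step, under stated assumptions: a toy analytic function stands in for the FE forward model, and scikit-learn's L2 penalty (alpha) stands in for Bayesian regularization, which scikit-learn does not provide. The network learns the map from frequency shifts back to delamination location and size.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def frequency_shifts(loc, size):
    """Stand-in forward model: shifts of the first 5 natural frequencies.

    A real study would obtain these from FE analysis of the laminate; this
    toy function only preserves the qualitative trend (larger delaminations
    shift the frequencies more, with a mode-dependent spatial sensitivity).
    """
    modes = np.arange(1, 6)
    return -size * np.sin(modes * np.pi * loc) ** 2

# Training set for the inverse map: shifts -> (location, size).
loc = rng.uniform(0.05, 0.95, 3000)
size = rng.uniform(0.0, 0.2, 3000)
X = np.array([frequency_shifts(l, s) for l, s in zip(loc, size)])
y = np.column_stack([loc, size])

# alpha (L2 penalty) is used here in place of Bayesian regularization.
inverse_ann = MLPRegressor(hidden_layer_sizes=(32, 32), alpha=1e-3,
                           max_iter=2000, random_state=0).fit(X, y)

est = inverse_ann.predict(frequency_shifts(0.4, 0.1).reshape(1, -1))
print("estimated (location, size):", est.round(3))
```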

  5. Improving Face Detection with TOE Cameras

    Hansen, Dan Witzner; Larsen, Rasmus; Lauze, F

    2007-01-01

    A face detection method based on a boosted classifier using images from a time-of-flight sensor is presented. We show that the performance of face detection can be improved when using both depth and gray scale images and that the common use of integration of hypotheses for verification can be relaxed. Based on the detected face we employ an active contour method on depth images for full head segmentation.

  6. USSR orders computers to improve nuclear safety

    Anon.

    1990-01-01

    Control Data Corp (CDC) has received an order valued at $32 million from the Soviet Union for six Cyber 962 mainframe computer systems to be used to increase the safety of civilian nuclear powerplants. The firm is now waiting for approval of the contract by the US government and Western Allies. The computers, ordered by the Soviet Research and Development Institute of Power Engineering (RDIPE), will analyze safety factors in the operation of nuclear reactors over a wide range of conditions. The Soviet Union's civilian nuclear program is one of the largest in the world, with over 50 plants in operation. Types of safety analyses the computers will perform include: neutron-physics calculations, radiation-protection studies, stress analysis, reliability analysis of equipment and systems, ecological-impact calculations, transient analysis, and support activities for emergency response. The order also includes a simulator with realistic mathematical models of Soviet nuclear powerplants to improve operator training.

  7. Cloud Computing for Maintenance Performance Improvement

    Kour, Ravdeep; Karim, Ramin; Parida, Aditya

    2013-01-01

    Cloud Computing is an emerging research area. It can be utilised for acquiring effective and efficient information logistics. This paper uses cloud-based technology for the establishment of information logistics for a railway system, which requires information based on data from different data sources (e.g. railway maintenance, railway operation, and railway business data). In order to improve the performance of the maintenance process, relevant data from various sources need to be acquired, f...

  8. Applying improved instrumentation and computer control systems

    Bevilacqua, F.; Myers, J.E.

    1977-01-01

    In-core and out-of-core instrumentation systems for the Cherokee-I reactor are described. The reactor has 61 in-core instrument assemblies. Continuous computer monitoring and processing of data from over 300 fixed detectors will be used to improve the manoeuvring of core power. The plant protection system is a standard package for the Combustion Engineering System 80, consisting of two independent systems, the reactor protection system and the engineering safety features activation system, both of which are designed to meet NRC, ANS and IEEE design criteria or standards. The plant protection system has its own computer which provides plant monitoring, alarming, logging and performance calculations. (U.K.)

  9. Evaluation of computer-aided detection and diagnosis systems.

    Petrick, Nicholas; Sahiner, Berkman; Armato, Samuel G; Bert, Alberto; Correale, Loredana; Delsanto, Silvia; Freedman, Matthew T; Fryd, David; Gur, David; Hadjiiski, Lubomir; Huo, Zhimin; Jiang, Yulei; Morra, Lia; Paquerault, Sophie; Raykar, Vikas; Samuelson, Frank; Summers, Ronald M; Tourassi, Georgia; Yoshida, Hiroyuki; Zheng, Bin; Zhou, Chuan; Chan, Heang-Ping

    2013-08-01

    Computer-aided detection and diagnosis (CAD) systems are increasingly being used as an aid by clinicians for detection and interpretation of diseases. Computer-aided detection systems mark regions of an image that may reveal specific abnormalities and are used to alert clinicians to these regions during image interpretation. Computer-aided diagnosis systems provide an assessment of a disease using image-based information alone or in combination with other relevant diagnostic data and are used by clinicians as a decision support in developing their diagnoses. While CAD systems are commercially available, standardized approaches for evaluating and reporting their performance have not yet been fully formalized in the literature or in a standardization effort. This deficiency has led to difficulty in the comparison of CAD devices and in understanding how the reported performance might translate into clinical practice. To address these important issues, the American Association of Physicists in Medicine (AAPM) formed the Computer Aided Detection in Diagnostic Imaging Subcommittee (CADSC), in part, to develop recommendations on approaches for assessing CAD system performance. The purpose of this paper is to convey the opinions of the AAPM CADSC members and to stimulate the development of consensus approaches and "best practices" for evaluating CAD systems. Both the assessment of a standalone CAD system and the evaluation of the impact of CAD on end-users are discussed. It is hoped that awareness of these important evaluation elements and the CADSC recommendations will lead to further development of structured guidelines for CAD performance assessment. Proper assessment of CAD system performance is expected to increase the understanding of a CAD system's effectiveness and limitations, which is expected to stimulate further research and development efforts on CAD technologies, reduce problems due to improper use, and eventually improve the utility and efficacy of CAD in

  10. Improved GLR method to instrument failure detection

    Jeong, Hak Yeoung; Chang, Soon Heung

    1985-01-01

    The generalized likelihood ratio (GLR) method performs statistical tests on the innovations sequence of a Kalman-Bucy filter state estimator for system failure detection and identification. However, the major drawback of the conventional GLR is that it must hypothesize a particular failure type in each case. In this paper, a method to overcome this drawback is proposed. The improved GLR method is applied to a PWR pressurizer and gives successful results in the detection and identification of any failure. Furthermore, some benefit in the processing time per cycle of failure detection and identification is obtained. (Author)
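
    A minimal sketch of the flavour of test involved, under the simplest assumptions (a scalar innovations sequence with known variance and a hypothesized jump in its mean; window length and threshold are illustrative): the GLR statistic maximizes the likelihood ratio over candidate failure onset times, with the jump magnitude estimated by maximum likelihood.

```python
import numpy as np

def glr_mean_jump(innov, sigma=1.0, window=50):
    """GLR statistic for a mean jump in a Kalman filter innovations sequence.

    For each candidate onset k in the window, the ML estimate of the jump
    gives the classic statistic (sum of innovations since k)^2 / ((n-k)*sigma^2);
    the GLR is the maximum over k.
    """
    n = len(innov)
    best = 0.0
    for k in range(max(0, n - window), n):
        s = innov[k:].sum()
        best = max(best, s * s / ((n - k) * sigma * sigma))
    return best

rng = np.random.default_rng(2)
innov = rng.normal(0, 1, 300)
innov[220:] += 1.5                          # sensor failure: mean shift at t=220
stats = [glr_mean_jump(innov[: t + 1]) for t in range(50, 300)]
alarm = 50 + next(t for t, g in enumerate(stats) if g > 20.0)  # chi2-like threshold
print("alarm raised at sample", alarm)
```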

  11. Improving ATLAS computing resource utilization with HammerCloud

    Schovancova, Jaroslava; The ATLAS collaboration

    2018-01-01

    HammerCloud is a framework to commission, test, and benchmark ATLAS computing resources and components of various distributed systems with realistic full-chain experiment workflows. HammerCloud contributes to ATLAS Distributed Computing (ADC) operations and automation efforts, providing automated resource exclusion and recovery tools that help re-focus operational manpower on areas which have yet to be automated, and improve utilization of available computing resources. We present the recent evolution of the auto-exclusion/recovery tools: faster inclusion of new resources in the testing machinery, machine learning algorithms for anomaly detection, resources categorized as master vs. slave for the purpose of blacklisting, and a tool for auto-exclusion/recovery of resources triggered by Event Service job failures that is being extended to other workflows besides the Event Service. We describe how HammerCloud helped commissioning various concepts and components of distributed systems: simplified configuration of qu...

  12. Cheater detection in SPDZ multiparty computation

    G. Spini (Gabriele); S. Fehr (Serge); A. Nascimento; P. Barreto

    2016-01-01

    In this work we revisit the SPDZ multiparty computation protocol by Damgård et al. for securely computing a function in the presence of an unbounded number of dishonest parties. The SPDZ protocol is distinguished by its fast performance. A downside of the SPDZ protocol is that one single

  13. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of

  14. Improving student retention in computer engineering technology

    Pierozinski, Russell Ivan

    The purpose of this research project was to improve student retention in the Computer Engineering Technology program at the Northern Alberta Institute of Technology by reducing the number of dropouts and increasing the graduation rate. This action research project utilized a mixed methods approach of a survey and face-to-face interviews. The participants were male and female, with a large majority ranging from 18 to 21 years of age. The research found that participants recognized their skills and capability, but their capacity to remain in the program was dependent on understanding and meeting the demanding pace and rigour of the program. The participants recognized that curriculum delivery along with instructor-student interaction had an impact on student retention. To be successful in the program, students required support in four domains: academic, learning management, career, and social.

  15. Computer aided detection system for lung cancer using computer tomography scans

    Mahesh, Shanthi; Rakesh, Spoorthi; Patil, Vidya C.

    2018-04-01

    Lung cancer is a disease that can be defined as uncontrolled cell growth in tissues of the lung. If we detect lung cancer in its early stage, that could be the key to its cure. In this work, non-invasive methods are studied for assisting in nodule detection. It provides a Computer Aided Diagnosis (CAD) system for early detection of lung cancer nodules from Computer Tomography (CT) images. A CAD system is one which helps to improve the diagnostic performance of radiologists in their image interpretations. The main aim of this technique is to develop a CAD system for finding lung cancer using lung CT images and to classify each nodule as benign or malignant. For classifying cancer cells, an SVM classifier is used. Here, image processing techniques are used to de-noise and enhance the image, and segmentation and edge detection are used to extract the area, perimeter and shape of a nodule. The core factors of this research are image quality and accuracy.
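
    A minimal sketch of the classification step, under stated assumptions: synthetic area/perimeter/circularity values stand in for features measured from segmented CT nodules, and all parameters are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Stand-in (area, perimeter, circularity) features per nodule; a real CAD
# system would measure these from the segmented CT regions.
benign = np.column_stack([rng.normal(40, 8, 100),
                          rng.normal(25, 4, 100),
                          rng.normal(0.9, 0.05, 100)])     # small and round
malignant = np.column_stack([rng.normal(70, 15, 100),
                             rng.normal(45, 8, 100),
                             rng.normal(0.6, 0.10, 100)])  # larger, irregular
X = np.vstack([benign, malignant])
y = np.array([0] * 100 + [1] * 100)                        # 0 benign, 1 malignant

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```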

  16. A Sensitivity Analysis of a Computer Model-Based Leak Detection System for Oil Pipelines

    Zhe Lu; Yuntong She; Mark Loewen

    2017-01-01

    Improving leak detection capability to eliminate undetected releases is an area of focus for the energy pipeline industry, and the pipeline companies are working to improve existing methods for monitoring their pipelines. Computer model-based leak detection methods that detect leaks by analyzing the pipeline hydraulic state have been widely employed in the industry, but their effectiveness in practical applications is often challenged by real-world uncertainties. This study quantitatively ass...

  17. Improving computer security by health smart card.

    Nisand, Gabriel; Allaert, François-André; Brézillon, Régine; Isphording, Wilhem; Roeslin, Norbert

    2003-01-01

    The University Hospitals of Strasbourg have worked for several years on the computer security of medical data and have, as a result, been among the first to use the Health Care Professional Smart Card (CPS). This new tool must provide security to the information processing systems and especially to the medical data exchanges between the partners who collaborate in the care of the patient. Beyond the purely data-processing aspects of the security functions offered by the CPS, security depends above all on the practices of the users: their knowledge concerning the legislation, the risks and the stakes, and their adherence to the procedures and protective installations. The aim of this study is to evaluate this level of knowledge, the practices and the feelings of the users concerning the computer security of medical data, to check the relevance of the steps taken, and, if required, to try to improve them. The survey by questionnaire involved 648 users. The practices of users in terms of data security are clearly improved by the implementation of the security server and the use of the CPS system, but security breaches due to bad practices are not completely eliminated. This confirms that it is illusory to believe that data security is first and foremost a technical issue. Technical measures are of course indispensable, but the greatest efforts are required after their implementation and consist in making the key players [2], i.e. users, aware and responsible. However, it must be stressed that the user-friendliness of the security interface has a major effect on the results observed. For instance, it is highly probable that the bad practices continued or introduced upon the implementation of the security server and CPS scheme are due to the complicated nature or functional defects of the proposed solution, which must therefore be improved. Besides, this is only the pilot phase and card holders can be expected to become more responsible as time goes by, along with the gradual

  18. Computer vision as an alternative for collision detection

    Drangsholt, Marius Aarvik

    2015-01-01

    The goal of this thesis was to implement a computer vision system on a low-power platform, to see if that could be an alternative to a collision detection system. To achieve this, research into the fundamentals of computer vision was performed, and both hardware and software implementation were carried out. To create the computer vision system, a stereo rig was constructed using low-cost Logitech webcams connected to a Raspberry Pi 2 development board. The computer vision library Op...

  19. Computer Detection of Low Contrast Targets.

    1982-06-18

    computed from the Hessian and the gradient and is given by the formula κ(M) = -Hf(∇f(M), ∇f(M)) / |∇f(M)|³. Because of the amount of noise present in these... ∫ (n² + 1 + 2n cos t)^(1/2) dt, and this integral is a maximum for n=1 and decreases as n increases, exactly what a good measure of curvature should do

  20. Computers are stepping stones to improved imaging.

    Freiherr, G

    1991-02-01

    Never before has the radiology industry embraced the computer with such enthusiasm. Graphics supercomputers as well as UNIX- and RISC-based computing platforms are turning up in every digital imaging modality and especially in systems designed to enhance and transmit images, says author Greg Freiherr on assignment for Computers in Healthcare at the Radiological Society of North America conference in Chicago.

  1. Lung nodule detection on chest CT: evaluation of a computer-aided detection (CAD) system

    Lee, In Jae; Gamsu, Gordon; Czum, Julianna; Johnson, Rebecca; Chakrapani, Sanjay; Wu, Ning

    2005-01-01

    To evaluate the capacity of a computer-aided detection (CAD) system to detect lung nodules in clinical chest CT. A total of 210 consecutive clinical chest CT scans and their reports were reviewed by two chest radiologists and 70 were selected (33 without nodules and 37 with 1-6 nodules, 4-15.4 mm in diameter). The CAD system (ImageChecker CT LN-1000) developed by R2 Technology, Inc. (Sunnyvale, CA) was used. Its algorithm was designed to detect nodules with a diameter of 4-20 mm. The two chest radiologists working with the CAD system detected a total of 78 nodules. These 78 nodules form the database for this study. Four independent observers interpreted the studies with and without the CAD system. The detection rates of the four independent observers without CAD were 81% (63/78), 85% (66/78), 83% (65/78), and 83% (65/78), respectively. With CAD their rates were 87% (68/78), 85% (66/78), 86% (67/78), and 85% (66/78), respectively. The differences between these two sets of detection rates did not reach statistical significance. In addition, CAD detected eight nodules that were not mentioned in the original clinical radiology reports. The CAD system produced 1.56 false-positive nodules per CT study. The four test observers had 0, 0.1, 0.17, and 0.26 false-positive results per study without CAD and 0.07, 0.2, 0.23, and 0.39 with CAD, respectively. The CAD system can assist radiologists in detecting pulmonary nodules in chest CT, but with a potential increase in their false positive rates. Technological improvements to the system could increase the sensitivity and specificity for the detection of pulmonary nodules and reduce these false-positive results

  2. Improvements in the detection of airborne plutonium

    Ryden, D.J.

    1981-02-01

    It is shown how it is possible to compensate individually for each of the background components on the filter paper used to collect samples. Experimentally it has been shown that the resulting compensated background count-rate averages zero with a standard deviation very close to the fundamental limit set by random statistical variations. Considerable improvements in the sensitivity of detecting airborne plutonium have been achieved. Two new plutonium-in-air monitors which use the compensation schemes described in this report are now available. Both have operated successfully in high concentrations of radon daughters. (author)

  3. Detection of Mild Emphysema by Computed Tomography Density Measurements

    Vikgren, J.; Friman, O.; Borga, M.; Boijsen, M.; Gustavsson, S.; Bake, B.; Tylen, U.; Ekberg-Jansson, A.

    2005-01-01

    Purpose: To assess the ability of a conventional density mask method to detect mild emphysema by high-resolution computed tomography (HRCT); to analyze factors influencing quantification of mild emphysema; and to validate a new algorithm for detection of mild emphysema. Material and Methods: Fifty-five healthy male smokers and 34 never-smokers, 61-62 years of age, were examined. Emphysema was evaluated visually, by the conventional density mask method, and by a new algorithm compensating for the effects of gravity and artifacts due to motion and the reconstruction algorithm. Effects of the reconstruction algorithm, slice thickness, and various threshold levels on the outcome of the density mask area were evaluated. Results: Forty-nine percent of the smokers had mild emphysema. The density mask area increased as slice thickness decreased, irrespective of the reconstruction algorithm and threshold level. The sharp algorithm resulted in increased density mask area. The new algorithm could discriminate between smokers with and those without mild emphysema, whereas the density mask method could not. The diagnostic ability of the new algorithm was dependent on lung level. At about 90% specificity, sensitivity was 65-100% in the apical levels, but low in the rest of the lung. Conclusion: The conventional density mask method is inadequate for detecting mild emphysema, while the new algorithm improves diagnostic ability but is nevertheless still imperfect
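
    For reference, the conventional density mask method is simple to state: the score is the percentage of lung voxels whose attenuation falls below a fixed threshold. A minimal sketch (the -950 HU threshold is a commonly used value, not necessarily the one used in this study):

```python
import numpy as np

def density_mask_area(lung_hu, threshold=-950):
    """Conventional density mask: % of lung voxels below an HU threshold."""
    return 100.0 * np.mean(np.asarray(lung_hu) < threshold)

# Toy slice: normal lung around -850 HU plus one low-density (emphysema) pocket.
rng = np.random.default_rng(4)
slice_hu = -850.0 + rng.normal(0, 40, (128, 128))
slice_hu[40:60, 40:60] = -980.0
print(f"density mask area: {density_mask_area(slice_hu):.1f}%")
```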

  4. Computer Aided Detection of Breast Masses in Digital Tomosynthesis

    Singh, Swatee; Lo, Joseph

    2008-01-01

    The purpose of this study was to investigate feasibility of computer-aided detection of masses and calcification clusters in breast tomosynthesis images and obtain reliable estimates of sensitivity...

  5. Using the Computer to Improve Basic Skills.

    Bozeman, William; Hierstein, William J.

    These presentations offer information on the benefits of using computer-assisted instruction (CAI) for remedial education. First, William J. Hierstein offers a summary of the Computer Assisted Basic Skills Project conducted by Southeastern Community College at the Iowa State Penitentiary. Hierstein provides background on the funding for the…

  6. Quantum computing. Defining and detecting quantum speedup.

    Rønnow, Troels F; Wang, Zhihui; Job, Joshua; Boixo, Sergio; Isakov, Sergei V; Wecker, David; Martinis, John M; Lidar, Daniel A; Troyer, Matthias

    2014-07-25

    The development of small-scale quantum devices raises the question of how to fairly assess and detect quantum speedup. Here, we show how to define and measure quantum speedup and how to avoid pitfalls that might mask or fake such a speedup. We illustrate our discussion with data from tests run on a D-Wave Two device with up to 503 qubits. By using random spin glass instances as a benchmark, we found no evidence of quantum speedup when the entire data set is considered and obtained inconclusive results when comparing subsets of instances on an instance-by-instance basis. Our results do not rule out the possibility of speedup for other classes of problems and illustrate the subtle nature of the quantum speedup question. Copyright © 2014, American Association for the Advancement of Science.

  7. Computer-aided detection system for lung cancer in computed tomography scans: Review and future prospects

    2014-01-01

    Introduction: The goal of this paper is to present a critical review of major Computer-Aided Detection systems (CADe) for lung cancer in order to identify challenges for future research. CADe systems must meet the following requirements: improve the performance of radiologists, provide high sensitivity in diagnosis, a low number of false positives (FP), high processing speed, a high level of automation, low cost (of implementation, training, support and maintenance), the ability to detect different types and shapes of nodules, and software security assurance. Methods: The relevant literature related to “CADe for lung cancer” was obtained from the PubMed, IEEEXplore and Science Direct databases. Articles published from 2009 to 2013, and some articles previously published, were used. A systematic analysis was made of these articles and the results were summarized. Discussion: Based on the literature search, it was observed that many if not all systems described in this survey have the potential to be important in clinical practice. However, no significant improvement was observed in sensitivity, number of false positives, level of automation or ability to detect different types and shapes of nodules in the studied period. Challenges were presented for future research. Conclusions: Further research is needed to improve existing systems and propose new solutions. For this, we believe that collaborative efforts through the creation of open source software communities are necessary to develop a CADe system with all the requirements mentioned and with a short development cycle. In addition, future CADe systems should improve the level of automation, through integration with picture archiving and communication systems (PACS) and the electronic record of the patient, decrease the number of false positives, measure the evolution of tumors, evaluate the evolution of the oncological treatment, and its possible prognosis. PMID:24713067

  8. A new fault detection method for computer networks

    Lu, Lu; Xu, Zhengguo; Wang, Wenhai; Sun, Youxian

    2013-01-01

    Over the past few years, fault detection for computer networks has attracted extensive attention for its importance in network management. Most existing fault detection methods are based on active probing techniques which can detect the occurrence of faults quickly and precisely. But these methods suffer from the limitation of traffic overhead, especially in large scale networks. To relieve the traffic overhead induced by active probing based methods, a new fault detection method, whose key idea is to divide the detection process into multiple stages, is proposed in this paper. During each stage, only a small region of the network is examined by using a small set of probes. Meanwhile, it also ensures that the entire network can be covered after multiple detection stages. This method can guarantee that the traffic used by probes during each detection stage is sufficiently small that the network can operate without severe disturbance from the probes. Several simulation results verify the effectiveness of the proposed method
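
    A minimal sketch of the staging idea, under stated assumptions: nodes are partitioned round-robin, and `probe` is a stand-in for an active probing primitive (a real method would select probe paths over the network topology, not individual nodes).

```python
def staged_probing(nodes, n_stages, probe):
    """Divide fault detection into stages, probing one small region per stage.

    Per-stage probe traffic is bounded by the region size, yet the union of
    the stages covers the entire network.
    """
    suspects = []
    for stage in range(n_stages):
        region = nodes[stage::n_stages]        # small region for this stage
        suspects += [n for n in region if not probe(n)]
    return suspects

faulty = {17, 42}
probe = lambda n: n not in faulty              # stand-in for an active probe
print(staged_probing(list(range(100)), n_stages=10, probe=probe))
```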

  9. Advances in computers improving the web

    Zelkowitz, Marvin

    2010-01-01

    This is volume 78 of Advances in Computers. This series, which began publication in 1960, is the oldest continuously published anthology that chronicles the ever-changing information technology field. In these volumes we publish from 5 to 7 chapters, three times per year, that cover the latest changes to the design, development, use and implications of computer technology on society today. Covers the full breadth of innovations in hardware, software, theory, design, and applications. Many of the in-depth reviews have become standard references that continue to be of significant, lasting value i

  10. Improved visibility computation on massive grid terrains

    Fishman, J.; Haverkort, H.J.; Toma, L.; Wolfson, O.; Agrawal, D.; Lu, C.-T.

    2009-01-01

    This paper describes the design and engineering of algorithms for computing visibility maps on massive grid terrains. Given a terrain T, specified by the elevations of points in a regular grid, and given a viewpoint v, the visibility map or viewshed of v is the set of grid points of T that are

  11. Improving Euler computations at low Mach numbers

    Koren, B.; Leer, van B.; Deconinck, H.; Koren, B.

    1997-01-01

    The paper consists of two parts, both dealing with conditioning techniques for low-Mach-number Euler-flow computations, in which a multigrid technique is applied. In the first part, for subsonic flows and upwind-discretized, linearized 1-D Euler equations, the smoothing behavior of

  12. Improving Euler computations at low Mach numbers

    Koren, B.

    1996-01-01

    This paper consists of two parts, both dealing with conditioning techniques for low-Mach-number Euler-flow computations, in which a multigrid technique is applied. In the first part, for subsonic flows and upwind-discretized linearized 1-D Euler equations, the smoothing behavior of

  13. Improving Undergraduates' Critique via Computer Mediated Communication

    Mohamad, Maslawati; Musa, Faridah; Amin, Maryam Mohamed; Mufti, Norlaila; Latiff, Rozmel Abdul; Sallihuddin, Nani Rahayu

    2014-01-01

    Our current university students, labeled as "Generation Y" or Millennials, are different from previous generations due to wide exposure to media. Being technologically savvy, they are accustomed to Internet for information and social media for socializing. In line with this current trend, teaching through computer mediated communication…

  14. Novel computed tomographic chest metrics to detect pulmonary hypertension

    Chan, Andrew L; Juarez, Maya M; Shelton, David K; MacDonald, Taylor; Li, Chin-Shang; Lin, Tzu-Chun; Albertson, Timothy E

    2011-01-01

    Early diagnosis of pulmonary hypertension (PH) can potentially improve survival and quality of life. Detecting PH using echocardiography is often insensitive in subjects with lung fibrosis or hyperinflation. Right heart catheterization (RHC) for the diagnosis of PH adds risk and expense due to its invasive nature. Pre-defined measurements utilizing computed tomography (CT) of the chest may be an alternative non-invasive method of detecting PH. This study retrospectively reviewed 101 acutely hospitalized inpatients with heterogeneous diagnoses, who consecutively underwent CT chest and RHC during the same admission. Two separate teams, each consisting of a radiologist and pulmonologist, blinded to clinical and RHC data, individually reviewed the chest CT's. Multiple regression analyses controlling for age, sex, ascending aortic diameter, body surface area, thoracic diameter and pulmonary wedge pressure showed that a main pulmonary artery (PA) diameter ≥29 mm (odds ratio (OR) = 4.8), right descending PA diameter ≥19 mm (OR = 7.0), true right descending PA diameter ≥ 16 mm (OR = 4.1), true left descending PA diameter ≥ 21 mm (OR = 15.5), right ventricular (RV) free wall ≥ 6 mm (OR = 30.5), RV wall/left ventricular (LV) wall ratio ≥0.32 (OR = 8.8), RV/LV lumen ratio ≥1.28 (OR = 28.8), main PA/ascending aorta ratio ≥0.84 (OR = 6.0) and main PA/descending aorta ratio ≥ 1.29 (OR = 5.7) were significant predictors of PH in this population of hospitalized patients. This combination of easily measured CT-based metrics may, upon confirmatory studies, aid in the non-invasive detection of PH and hence in the determination of RHC candidacy in acutely hospitalized patients

  15. Improving Seroreactivity-Based Detection of Glioma

    Nicole Ludwig

    2009-12-01

    Seroreactivity profiling emerges as a valuable technique for minimally invasive cancer detection. Recently, we provided first evidence for the applicability of serum profiling of glioma using a limited number of immunogenic antigens. Here, we screened 57 glioma and 60 healthy sera for autoantibodies against 1827 Escherichia coli expressed clones, including 509 in-frame peptide sequences. By a linear support vector machine approach, we calculated the mean specificity, sensitivity, and accuracy of 100 repetitive classifications. We were able to differentiate glioma sera from sera of the healthy controls with a specificity of 90.28%, a sensitivity of 87.31% and an accuracy of 88.84%. We were also able to differentiate World Health Organization grade IV glioma sera from healthy sera with a specificity of 98.45%, a sensitivity of 80.93%, and an accuracy of 92.88%. To rank the antigens according to their information content, we computed the area under the receiver operating characteristic curve value for each clone. Altogether, we found 46 immunogenic clones, including 16 in-frame clones, that were informative for the classification of glioma sera versus healthy sera. For the separation of glioblastoma versus healthy sera, we found 91 informative clones including 26 in-frame clones. The best-suited in-frame clone for the classification of glioma sera versus healthy sera corresponded to the vimentin gene (VIM), which was previously associated with glioma. In the future, autoantibody signatures in glioma not only may prove useful for diagnosis but also offer the prospect of a personalized immune-based therapy.

  16. Computer-assisted detection (CAD) methodology for early detection of response to pharmaceutical therapy in tuberculosis patients

    Lieberman, Robert; Kwong, Heston; Liu, Brent; Huang, H. K.

    2009-02-01

    The chest x-ray radiological features of tuberculosis patients are well documented, and the radiological features that change in response to successful pharmaceutical therapy can be followed with longitudinal studies over time. The patients can also be classified as either responsive or resistant to pharmaceutical therapy based on clinical improvement. We have retrospectively collected time series chest x-ray images of 200 patients diagnosed with tuberculosis receiving the standard pharmaceutical treatment. Computer algorithms can be created to utilize image texture features to assess the temporal changes in the chest x-rays of the tuberculosis patients. This methodology provides a framework for a computer-assisted detection (CAD) system that may provide physicians with the ability to detect poor treatment response earlier in pharmaceutical therapy. Early detection allows physicians to respond with more timely treatment alternatives and improved outcomes. Such a system has the potential to increase treatment efficacy for millions of patients each year.
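
    A minimal sketch of one plausible choice of texture feature for such a system, under stated assumptions: grey-level co-occurrence (GLCM) statistics are used (the paper does not specify its exact features here), and the toy images merely mimic infiltrates resolving between exams.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_signature(region):
    """GLCM texture statistics for one chest x-ray region (8-bit image)."""
    glcm = graycomatrix(region, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy")}

# Toy longitudinal pair: texture becomes smoother as infiltrates resolve.
rng = np.random.default_rng(5)
exam_t0 = rng.normal(128, 40, (64, 64)).clip(0, 255).astype(np.uint8)
exam_t1 = rng.normal(128, 15, (64, 64)).clip(0, 255).astype(np.uint8)
for name, exam in (("baseline", exam_t0), ("follow-up", exam_t1)):
    print(name, {k: round(v, 3) for k, v in texture_signature(exam).items()})
```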

  17. Securing Cloud Computing from Different Attacks Using Intrusion Detection Systems

    Omar Achbarou

    2017-03-01

    Cloud computing is a new way of integrating a set of old technologies to implement a new paradigm that creates an avenue for users to have access to shared and configurable resources through the internet on demand. This system has many characteristics in common with distributed systems, and cloud computing likewise builds on networking. Security is thus the biggest issue for this system, because cloud computing services are based on sharing. A cloud computing environment therefore requires intrusion detection systems (IDSs) for protecting each machine against attacks. The aim of this work is to present a classification of attacks threatening the availability, confidentiality and integrity of cloud resources and services. Furthermore, we provide a literature review of attacks related to the identified categories. Additionally, this paper also introduces related intrusion detection models to identify and prevent these types of attacks.

  18. Computer-Aided Detection of Kidney Tumor on Abdominal Computed Tomography Scans

    Kim, D.Y.; Park, J.W.

    2004-01-01

    Purpose: To implement a computer-aided detection system for kidney segmentation and kidney tumor detection on abdominal computed tomography (CT) scans. Material and Methods: Abdominal CT images were digitized with a film digitizer, and a gray-level threshold method was used to segment the kidney. Based on texture analysis performed on sample images of kidney tumors, a portion of the kidney tumor was selected as the seed region, the starting point of the region-growing process. The average and standard deviation were used to detect the kidney tumor. Starting at the detected seed region, the region-growing method was used to segment the kidney tumor, with intensity values used as the acceptance criterion in a homogeneity test; this test merges neighboring regions to delineate the kidney tumor boundary. These methods were applied to 156 transverse images of 12 cases of kidney tumors scanned using a G.E. Hispeed CT scanner and digitized with a Lumisys LS-40 film digitizer. Results: The computer-aided detection system resulted in a kidney tumor detection sensitivity of 85% and no false-positive findings. Conclusion: This computer-aided detection scheme was useful for kidney tumor detection and gave the characteristics of detected kidney tumors
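
    A minimal sketch of the region-growing step, under stated assumptions: a toy 2D image, an acceptance interval of mean ± k·std taken from a small neighborhood of the seed (mirroring the homogeneity test described above), and an illustrative k.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, k=2.0):
    """Grow a region from a seed; accept pixels within mean +/- k*std."""
    sr, sc = seed
    patch = img[max(sr - 2, 0):sr + 3, max(sc - 2, 0):sc + 3]
    lo, hi = patch.mean() - k * patch.std(), patch.mean() + k * patch.std()

    grown = np.zeros(img.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if grown[r, c] or not (lo <= img[r, c] <= hi):
            continue
        grown[r, c] = True                      # pixel passes homogeneity test
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] and not grown[rr, cc]:
                queue.append((rr, cc))
    return grown

rng = np.random.default_rng(6)
img = np.full((64, 64), 50.0)
img[20:35, 25:45] = 120.0                       # bright "tumor" region
img += rng.normal(0, 3, img.shape)
print("segmented pixels:", region_grow(img, seed=(27, 30)).sum())
```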

  19. Continual service improvement at CERN computing centre

    Lopez, M Barroso; Everaerts, L; Meinhard, H; Baehler, P; Haimyr, N; Guijarro, J M

    2014-01-01

    Using the framework of ITIL best practices, the service managers within CERN-IT have engaged in a continuous improvement process, mainly focusing on service operation. This implies an explicit effort to understand and improve all service management aspects in order to increase efficiency and effectiveness. We will present the requirements, how they were addressed, and share our experiences. We will describe how we measure, report and use the data to continually improve both the processes and the services being provided. The focus is not the tool or the process, but the results of the continuous improvement effort from a large team of IT experts providing services to thousands of users, supported by the tool and its local team. This is not an initiative to address user concerns about the way the services are managed but rather an ongoing working habit of continually reviewing, analysing and improving the service management processes and the services themselves, having in mind the currently agreed service levels and whose results also improve the experience of the users of the current services.

  20. Comparison of computer workstation with film for detecting setup errors

    Fritsch, D.S.; Boxwala, A.A.; Raghavan, S.; Coffee, C.; Major, S.A.; Muller, K.E.; Chaney, E.L.

    1997-01-01

    Purpose/Objective: Workstations designed for portal image interpretation by radiation oncologists provide image displays and image processing and analysis tools that differ significantly compared with the standard clinical practice of inspecting portal films on a light box. An implied but unproved assumption associated with the clinical implementation of workstation technology is that patient care is improved, or at least not adversely affected. The purpose of this investigation was to conduct observer studies to test the hypothesis that radiation oncologists can detect setup errors using a workstation at least as accurately as when following standard clinical practice. Materials and Methods: A workstation, PortFolio, was designed for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools to enhance images; align cross-hairs, field edges, and anatomic structures on reference and acquired images; measure distances and angles; and view registered images superimposed on one another. In a well-designed and carefully controlled observer study, nine radiation oncologists, including attendings and residents, used PortFolio to detect setup errors in realistic digitally reconstructed portal radiograph (DRPR) images computed from the NLM Visible Human data using a previously described approach. Compared with actual portal images, where absolute truth is ill defined or unknown, the DRPRs contained known translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Twenty DRPRs with randomly induced errors were computed for each site. The induced errors were constrained to a plane at the isocenter of the target volume and perpendicular to the central axis of the treatment beam. Images used in the study were also printed on film. Observers interpreted the film-based images using standard clinical practice. The images were reviewed in eight sessions. During each session five images were

  1. Improvement of the reactivity computer for HANARO research reactor

    Kim, Min Jin; Park, S. J.; Jung, H. S.; Choi, Y. S.; Lee, K. H.; Seo, S. G

    2001-04-01

    The reactivity computer in HANARO has a dedicated neutron detection system for experiments and measurements of the reactor characteristics. This system consists of a personal computer and a multi-function I/O board, and collects the signals from the various neutron detectors. The existing hardware and software were developed under the DOS environment, so they are very inconvenient to use and replacement parts are difficult to find. Since the continuity of the signal is often lost when we process the wide-range signal, the need for improvement has been an issue. The purpose of this project is to upgrade the hardware and software for data collection and processing in order for them to be compatible with the Windows™ operating system and to solve the known issue. We have replaced the existing system with a new multi-function I/O board and a Pentium III class PC, and the application program for the wide-range reactivity measurement and the multi-function signal counter have been developed. The newly installed multi-function I/O board has a seven times faster A/D conversion rate and collects a sufficient amount of data in a short time. The new application program is user-friendly and provides various useful information on its display screen, so the capability for data processing and storage has been much enhanced.

  2. [Accuracy of computer aided measurement for detecting dental proximal caries lesions in images of cone-beam computed tomography].

    Zhang, Z L; Li, J P; Li, G; Ma, X C

    2017-02-09

    Objective: To establish and validate a computer program used to aid the detection of dental proximal caries in cone-beam computed tomography (CBCT) images. Methods: According to the characteristics of caries lesions in X-ray images, a computer-aided detection program for proximal caries was established with Matlab and Visual C++. The whole process for caries lesion detection included image import and preprocessing, measuring the average gray value of an air area, choosing a region of interest and calculating its gray value, and defining the caries areas. The program was used to examine 90 proximal surfaces from 45 extracted human teeth collected from Peking University School and Hospital of Stomatology. The teeth were scanned with a CBCT scanner (Promax 3D). The proximal surfaces of the teeth were respectively examined by the caries detection program and scored by a human observer for the extent of lesions on a 6-level scale. With histologic examination serving as the reference standard, the performances of the caries detection program and the human observer were assessed with receiver operating characteristic (ROC) curves. Student's t-test was used to analyze the areas under the ROC curves (AUC) for the differences between the caries detection program and the human observer. Spearman correlation coefficients were used to analyze the detection accuracy for caries depth. Results: For the diagnosis of proximal caries in CBCT images, the AUC values of the human observer and the caries detection program were 0.632 and 0.703, respectively. There was a statistically significant difference between the AUC values (P = 0.023). The correlation between program performance and the gold standard (correlation coefficient r_s = 0.525) was higher than that between observer performance and the gold standard (r_s = 0.457), and there was a statistically significant difference between the correlation coefficients (P = 0.000). Conclusions: The program that automatically detects dental proximal caries lesions could improve the
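
    A minimal sketch of the gray-value test at the heart of such a pipeline, under stated assumptions: the air-region correction follows the step sequence above, while the 0.6 relative threshold and the toy numbers are illustrative, not the paper's actual decision rule.

```python
import numpy as np

def detect_caries(roi, air_mean, rel_threshold=0.6):
    """Flag low-density voxels in a proximal-surface ROI of a CBCT slice.

    Gray values are referenced to the mean of an air region, then voxels
    falling below a fraction of the ROI's median sound-tissue value are
    marked as candidate caries (threshold is illustrative).
    """
    corrected = roi - air_mean
    return corrected < rel_threshold * np.median(corrected)

rng = np.random.default_rng(7)
roi = rng.normal(1200, 50, (20, 20))     # sound enamel/dentine gray values
roi[5:9, 5:9] = 500.0                    # demineralized (carious) pocket
print("caries voxels:", detect_caries(roi, air_mean=100.0).sum())
```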

  3. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is becoming more necessary in program design courses in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether there is plagiarism in source code or not. Traditional detection algorithms cannot fit this…

  4. Improving cyberbullying detection with user context

    Dadvar, M.; Trieschnigg, Rudolf Berend; Ordelman, Roeland J.F.; de Jong, Franciska M.G.

    The negative consequences of cyberbullying are becoming more alarming every day and technical solutions that allow for taking appropriate action by means of automated detection are still very limited. Up until now, studies on cyberbullying detection have focused on individual comments only,

  5. Edge detection based on computational ghost imaging with structured illuminations

    Yuan, Sheng; Xiang, Dong; Liu, Xuemei; Zhou, Xin; Bing, Pibin

    2018-03-01

    Edge detection is one of the most important tools to recognize the features of an object. In this paper, we propose an optical edge detection method based on computational ghost imaging (CGI) with structured illuminations which are generated by an interference system. The structured intensity patterns are designed so that the edge of an object can be imaged directly from the detected data in CGI. This edge detection method can extract the boundaries of both binary and grayscale objects in any direction at one time. We also numerically test the influence of distance deviations in the interference system on edge extraction, i.e., the tolerance of the optical edge detection system to distance deviation. Hopefully, it may provide a guideline for scholars to build an experimental system.
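
    A minimal sketch of how pattern design can yield edges directly in CGI, under stated assumptions: random patterns stand in for the interference-generated structured illuminations, and each pattern is paired with a one-pixel-shifted copy, so the bucket-signal difference correlates to a finite difference of the object (an edge map) rather than to the object itself.

```python
import numpy as np

rng = np.random.default_rng(8)
N, M = 32, 6000                        # image size, number of pattern pairs

obj = np.zeros((N, N))                 # object: a bright square
obj[10:22, 10:22] = 1.0                # vertical edges sit at columns 9 and 21

edge_x = np.zeros((N, N))
for _ in range(M):
    P = rng.random((N, N))             # stand-in for a structured pattern
    P_shift = np.roll(P, 1, axis=1)    # the same pattern shifted by one pixel
    # A bucket (single-pixel) detector records total transmitted intensity.
    dS = (P_shift * obj).sum() - (P * obj).sum()
    edge_x += dS * (P - P.mean())      # correlate with the bucket *difference*
edge_x /= M                            # ~ finite difference of obj along x

print("strongest responses at columns:",
      sorted(np.argsort(np.abs(edge_x).sum(axis=0))[-2:]))
```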

  6. Local pulmonary structure classification for computer-aided nodule detection

    Bahlmann, Claus; Li, Xianlin; Okada, Kazunori

    2006-03-01

    We propose a new method of classifying the local structure types, such as nodules, vessels, and junctions, in thoracic CT scans. This classification is important in the context of computer aided detection (CAD) of lung nodules. The proposed method can be used as a post-processing component of any lung CAD system. In such a scenario, the classification results provide an effective means of removing false positives caused by vessels and junctions, thus improving overall performance. As its main advantage, the proposed solution transforms the complex problem of classifying various 3D topological structures into a much simpler 2D data clustering problem, to which more generic and flexible solutions are available in the literature, and which is better suited for visualization. Given a nodule candidate, our solution first robustly fits an anisotropic Gaussian to the data. The resulting Gaussian center and spread parameters are used to affine-normalize the data domain so as to warp the fitted anisotropic ellipsoid into a fixed-size isotropic sphere. We propose an automatic method to extract a 3D spherical manifold, containing the appropriate bounding surface of the target structure. Scale selection is performed by a data-driven entropy minimization approach. The manifold is analyzed for high intensity clusters, corresponding to protruding structures. Techniques involve EM clustering with automatic mode number estimation, directional statistics, and hierarchical clustering with a modified Bhattacharyya distance. The estimated number of high intensity clusters explicitly determines the type of pulmonary structure: nodule (0), attached nodule (1), vessel (2), junction (≥3). We show accurate classification results for selected examples in thoracic CT scans. This local procedure is more flexible and efficient than the current state of the art and will help to improve the accuracy of general lung CAD systems.

  7. Improving the Accuracy of Cloud Detection Using Machine Learning

    Craddock, M. E.; Alliss, R. J.; Mason, M.

    2017-12-01

    show 97% accuracy during the daytime, 94% accuracy at night, and 95% accuracy for all times. The total time to train, tune and test was approximately one week. The improved performance and reduced time to produce results is testament to improved computer technology and the use of machine learning as a more efficient and accurate methodology of cloud detection.

  8. New or improved computational methods and advanced reactor design

    Nakagawa, Masayuki; Takeda, Toshikazu; Ushio, Tadashi

    1997-01-01

    Nuclear computational methods have been studied continuously to date as a fundamental technology supporting nuclear development. At present, research on computational methods based on new theory, and on calculation methods once thought impractical, continues actively, opening new possibilities thanks to the remarkable improvement in computer performance. In Japan, where many light water reactors are now in operation, new computational methods are being introduced for nuclear design, and considerable effort is concentrated on further improving economics and safety. In this paper, some new research results on nuclear computational methods and their application to reactor nuclear design are described, to introduce recent trends in reactor nuclear design: 1) Advancement of computational methods, 2) Reactor core design and management of light water reactors, and 3) Nuclear design of fast reactors. (G.K.)

  9. Computer aided detection of surgical retained foreign object for prevention

    Hadjiiski, Lubomir; Marentis, Theodore C.; Rondon, Lucas; Chan, Heang-Ping; Chaudhury, Amrita R.; Chronis, Nikolaos

    2015-01-01

    Purpose: Surgical retained foreign objects (RFOs) have significant morbidity and mortality. They are associated with approximately $1.5 × 10^9 annually in preventable medical costs. The detection accuracy of radiographs for RFOs is a mediocre 59%. The authors address the RFO problem with two complementary technologies: a three-dimensional (3D) gossypiboma micro tag, the μTag that improves the visibility of RFOs on radiographs, and a computer aided detection (CAD) system that detects the μTag. It is desirable for the CAD system to operate in a high specificity mode in the operating room (OR) and function as a first reader for the surgeon. This allows for fast point of care results and seamless workflow integration. The CAD system can also operate in a high sensitivity mode as a second reader for the radiologist to ensure the highest possible detection accuracy. Methods: The 3D geometry of the μTag produces a similar two dimensional (2D) depiction on radiographs regardless of its orientation in the human body and ensures accurate detection by a radiologist and the CAD. The authors created a data set of 1800 cadaver images with the 3D μTag and other common man-made surgical objects positioned randomly. A total of 1061 cadaver images contained a single μTag and the remaining 739 were without μTag. A radiologist marked the location of the μTag using an in-house developed graphical user interface. The data set was partitioned into three independent subsets: a training set, a validation set, and a test set, consisting of 540, 560, and 700 images, respectively. A CAD system with modules that included preprocessing μTag enhancement, labeling, segmentation, feature analysis, classification, and detection was developed. The CAD system was developed using the training and the validation sets. Results: On the training set, the CAD achieved 81.5% sensitivity with 0.014 false positives (FPs) per image in a high specificity mode for the surgeons in the OR and 96

  10. The Case for Improving U.S. Computer Science Education

    Nager, Adams; Atkinson, Robert

    2016-01-01

    Despite the growing use of computers and software in every facet of our economy, not until recently has computer science education begun to gain traction in American school systems. The current focus on improving science, technology, engineering, and mathematics (STEM) education in the U.S. school system has disregarded differences within STEM…

  11. How Intrusion Detection Can Improve Software Decoy Applications

    Monteiro, Valter

    2003-01-01

    This research concerns information security and computer-network defense. It addresses how to use the information in log files and from intrusion-detection systems to recognize when a system is under attack...

  12. Computer-assisted detection of epileptiform focuses on SPECT images

    Grzegorczyk, Dawid; Dunin-Wąsowicz, Dorota; Mulawka, Jan J.

    2010-09-01

    Epilepsy is a common nervous system disease, often related to consciousness disturbances and muscular spasm, which affects about 1% of the human population. Despite major technological advances in medicine in recent years, there has been insufficient progress towards overcoming it. The application of advanced statistical methods and computer image analysis offers hope for the accurate detection, and later removal, of the epileptiform focuses that cause some types of epilepsy. The aim of this work was to create a computer system that helps to find and diagnose disorders of blood circulation in the brain. This may be helpful for diagnosing the onset of epileptic seizures.

  13. Improved biosensor-based detection system

    2015-01-01

    Described is a new biosensor-based detection system for effector compounds, useful for in vivo applications in e.g. screening and selecting of cells which produce a small molecule effector compound or which take up a small molecule effector compound from the environment. The detection system comprises a protein- or RNA-based biosensor for the effector compound which indirectly regulates the expression of a reporter gene via two hybrid proteins, providing for fewer false signals or less 'noise', tuning of sensitivity, or other advantages over conventional systems where the biosensor directly...

  14. Computational neural network regression model for Host based Intrusion Detection System

    Sunil Kumar Gautam

    2016-09-01

    Information gathering and storage in secure systems is currently a challenging task due to increasing cyber-attacks. Computational neural network techniques have been designed for intrusion detection systems, which provide security to a single machine or to an entire network's machines. In this paper, we use two computational neural network models, namely the Generalized Regression Neural Network (GRNN) model and the Multilayer Perceptron Neural Network (MPNN) model, for a Host-based Intrusion Detection System using log files generated by a single personal computer. The simulation results show the correctly classified percentages of the normal and abnormal (intrusion) classes using a confusion matrix. On the basis of the results and discussion, we found that the Host-based Intrusion System Model (HISM) significantly improved detection accuracy while retaining a minimum false alarm rate.
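
    A minimal sketch of the MLP half of this setup, assuming scikit-learn (which offers an MLP but no GRNN) and purely synthetic per-log-window features; the feature choices (event count, failed logins, distinct commands) are illustrative stand-ins, not the paper's feature set:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(0)
        # toy features per log window: event count, failed logins, distinct commands
        normal = rng.normal([50, 1, 10], [10, 1, 3], size=(500, 3))
        intrusion = rng.normal([120, 8, 25], [15, 2, 5], size=(60, 3))
        X = np.vstack([normal, intrusion])
        y = np.r_[np.zeros(500), np.ones(60)]

        Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                            random_state=0).fit(Xtr, ytr)
        print(confusion_matrix(yte, clf.predict(Xte)))  # rows: true normal/intrusion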

  15. Computed tomography with energy-resolved detection: a feasibility study

    Shikhaliev, Polad M.

    2008-03-01

    The feasibility of computed tomography (CT) with energy-resolved x-ray detection has been investigated. A breast CT design with multi-slit multi-slice (MSMS) data acquisition was used for this study. The MSMS CT includes linear arrays of photon counting detectors separated by gaps. This CT configuration allows for efficient scatter rejection and 3D data acquisition. The energy-resolved CT images were simulated using a digital breast phantom and the design parameters of the proposed MSMS CT. The phantom had a 14 cm diameter and 50/50 adipose/glandular composition, and included carcinoma, adipose, blood, iodine and CaCO3 as contrast elements. The x-ray technique was 90 kVp tube voltage with 660 mR skin exposure. Photon counting, charge (energy) integrating and photon energy weighting CT images were generated. The contrast-to-noise ratio (CNR) improvement with photon energy weighting was quantified. The dual-energy subtracted images of CaCO3 and iodine were generated using a single CT scan at a fixed x-ray tube voltage. The x-ray spectrum was electronically split into low- and high-energy parts by a photon counting detector. The CNR of the energy weighting CT images of carcinoma, blood, adipose, iodine, and CaCO3 was higher by a factor of 1.16, 1.20, 1.21, 1.36 and 1.35, respectively, as compared to CT with a conventional charge (energy) integrating detector. Photon energy weighting was applied to CT projections prior to dual-energy subtraction and reconstruction. Photon energy weighting improved the CNR in dual-energy subtracted CT images of CaCO3 and iodine by a factor of 1.35 and 1.33, respectively. The combination of CNR improvements due to scatter rejection and energy weighting was in the range of 1.71-2 depending on the type of the contrast element. The tilted-angle CZT detector was considered as the detector of choice. Experiments were performed to test the effect of the tilting angle on the energy spectrum. Using the CZT detector with a 20° tilting angle decreased the...
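
    The CNR gain from low-energy weighting can be illustrated with a toy numerical model. Everything below (the Gaussian stand-in spectrum, the attenuation curves, the path lengths) is invented for illustration, and w(E) ∝ E⁻³ is the usual idealised weight, not the paper's exact simulation:

        import numpy as np

        E = np.arange(20.0, 91.0)                                # keV bins, 90 kVp beam
        N0 = 1e5 * np.exp(-(E - 45.0) ** 2 / (2 * 15.0 ** 2))    # toy incident spectrum
        mu_bg = 0.02 * (30.0 / E) ** 3 + 0.18                    # toy background mu, 1/cm
        mu_ct = 1.8 * 0.02 * (30.0 / E) ** 3 + 0.18              # toy contrast mu, 1/cm
        Nb = N0 * np.exp(-14.0 * mu_bg)                          # ray through background
        Nc = N0 * np.exp(-(13.5 * mu_bg + 0.5 * mu_ct))          # ray crossing contrast

        def cnr(w):
            signal = abs(np.sum(w * (Nb - Nc)))
            noise = np.sqrt(np.sum(w ** 2 * (Nb + Nc)))          # Poisson noise, both rays
            return signal / noise

        for name, w in [("photon counting", np.ones_like(E)),
                        ("energy integrating", E),
                        ("energy weighting", E ** -3.0)]:
            print(f"{name:18s} CNR = {cnr(w):.2f}")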

  16. Improved prenatal detection of chromosomal anomalies

    Frøslev-Friis, Christina; Hjort-Pedersen, Karina; Henriques, Carsten U

    2011-01-01

    Prenatal screening for karyotype anomalies takes place in most European countries. In Denmark, the screening method was changed in 2005. The aim of this study was to examine the trends in prevalence and prenatal detection rates of chromosome anomalies and Down syndrome (DS) over a 22-year period....

  17. Compact Gaussian quantum computation by multi-pixel homodyne detection

    Ferrini, G; Fabre, C; Treps, N; Gazeau, J P; Coudreau, T

    2013-01-01

    We study the possibility of producing and detecting continuous variable cluster states in an extremely compact optical setup. This method is based on a multi-pixel homodyne detection system recently demonstrated experimentally, which includes classical data post-processing. It allows the linear optics network usually employed in standard experiments for the production of cluster states to be incorporated into the measurement stage. After giving an example of cluster state generation by this method, we further study how this procedure can be generalized to perform Gaussian quantum computation. (paper)

  18. Object detection based on improved color and scale invariant features

    Chen, Mengyang; Men, Aidong; Fan, Peng; Yang, Bo

    2009-10-01

    A novel object detection method which combines color and scale invariant features is presented in this paper. The detection system mainly adopts the widely used framework of SIFT (Scale Invariant Feature Transform), which consists of both a keypoint detector and a descriptor. Although SIFT has some impressive advantages, it is not only computationally expensive but also performs poorly on color images. To overcome these drawbacks, we employ local color kernel histograms and Haar wavelet responses to enhance the descriptor's distinctiveness and computational efficiency. Extensive experimental evaluations show that the method has better robustness and lower computation costs.
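
    A rough sketch of one way to pair SIFT descriptors with local color information, assuming OpenCV >= 4.4 (where SIFT_create lives in the main module) and a hypothetical input file scene.png; the patch size and bin count are arbitrary choices, not the paper's kernel histograms:

        import cv2
        import numpy as np

        img = cv2.imread("scene.png")                    # hypothetical input image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        kps, desc = cv2.SIFT_create().detectAndCompute(gray, None)

        def local_color_hist(image, kp, bins=8):
            """Normalized BGR histogram of the patch around one keypoint."""
            x, y = map(int, kp.pt)
            r = max(int(kp.size), 2)
            patch = image[max(0, y - r):y + r, max(0, x - r):x + r]
            hist = cv2.calcHist([patch], [0, 1, 2], None, [bins] * 3,
                                [0, 256] * 3).flatten()
            return hist / (hist.sum() + 1e-9)

        # augment each SIFT descriptor with its local color histogram
        features = [np.r_[d, local_color_hist(img, kp)] for kp, d in zip(kps, desc)]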

  19. Education Improves Plagiarism Detection by Biology Undergraduates

    Holt, Emily A.

    2012-01-01

    Regrettably, the sciences are not untouched by the plagiarism affliction that threatens the integrity of budding professionals in classrooms around the world. My research, however, suggests that plagiarism training can improve students' recognition of plagiarism. I found that 148 undergraduate ecology students successfully identified plagiarized…

  20. An automated computer misuse detection system for UNICOS

    Jackson, K.A.; Neuman, M.C.; Simmonds, D.D.; Stallings, C.A.; Thompson, J.L.; Christoph, G.G.

    1994-09-27

    An effective method for detecting computer misuse is the automatic monitoring and analysis of on-line user activity. This activity is reflected in the system audit record, in the system vulnerability posture, and in other evidence found through active testing of the system. During the last several years we have implemented an automatic misuse detection system at Los Alamos. This is the Network Anomaly Detection and Intrusion Reporter (NADIR). We are currently expanding NADIR to include processing of the Cray UNICOS operating system. This new component is called the UNICOS Realtime NADIR, or UNICORN. UNICORN summarizes user activity and system configuration in statistical profiles. It compares these profiles to expert rules that define security policy and improper or suspicious behavior. It reports suspicious behavior to security auditors and provides tools to aid in follow-up investigations. The first phase of UNICORN development is nearing completion, and the system will be operational in late 1994.
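
    The profile-versus-rule comparison at the heart of such a system can be sketched in a few lines; the user names, counts, and the mean-plus-3-sigma rule below are illustrative placeholders, not NADIR's actual security policy:

        import statistics

        # toy per-user statistical profile: recent daily login counts
        history = {"alice": [12, 9, 14, 11, 10], "bob": [3, 4, 2, 3, 5]}
        today = {"alice": 13, "bob": 41}

        def suspicious(user, count, k=3.0):
            """Expert-rule style threshold: flag counts far above the profile."""
            mu = statistics.mean(history[user])
            sd = statistics.stdev(history[user])
            return count > mu + k * sd

        for user, count in today.items():
            if suspicious(user, count):
                print(f"ALERT: {user} activity {count} exceeds profile")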

  1. Automatic Solitary Lung Nodule Detection in Computed Tomography Images Slices

    Sentana, I. W. B.; Jawas, N.; Asri, S. A.

    2018-01-01

    Lung nodules are an early indicator of some lung diseases, including lung cancer. In Computed Tomography (CT) images, a nodule appears as a shape brighter than the surrounding lung. This research aims to develop an application that automatically detects lung nodules in CT images. The algorithm comprises several steps: image acquisition and conversion, image binarization, lung segmentation, blob detection, and classification. Image acquisition takes the images slice by slice from the original *.dicom format, and each slice is converted into *.tif format. Binarization, using the Otsu algorithm, then separates the background and foreground of each slice. After removing the background, the next step segments the lung region only, so that nodules can be localized more easily. The Otsu algorithm is used again to detect nodule blobs in the localized lung area. The final step applies a Support Vector Machine (SVM) to classify the nodules. The application has succeeded in detecting nearly round nodules above a certain size threshold. The results show drawbacks in the size and shape thresholds, which need to be improved in the next stage of the research. The algorithm also cannot detect nodules attached to the lung wall or lung channels, since the search depends only on intensity differences.
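
    A compact sketch of the Otsu-plus-blob stage of such a pipeline, assuming scikit-image and a lung-masked 2-D slice; the area and roundness thresholds are invented placeholders, not the paper's values:

        import numpy as np
        from skimage import filters, measure, morphology

        def detect_nodule_blobs(ct_slice, min_area=20, max_area=500, min_round=0.7):
            """Otsu binarization plus round-blob filtering on one masked slice."""
            binary = ct_slice > filters.threshold_otsu(ct_slice)
            binary = morphology.remove_small_objects(binary, min_size=min_area)
            centroids = []
            for region in measure.regionprops(measure.label(binary)):
                roundness = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
                if region.area <= max_area and roundness >= min_round:
                    centroids.append(region.centroid)  # candidate for the SVM stage
            return centroids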

  2. Foundations of computer vision computational geometry, visual image structures and object shape detection

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, classification of chapter problems with the symbols (easily solved) and (challenging) and its extensive glossary of key words, examples and connections with the fabric of C...

  3. Failure detection in high-performance clusters and computers using chaotic map computations

    Rao, Nageswara S.

    2015-09-01

    A programmable medium includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable medium includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
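
    The underlying principle, that a chaotic map amplifies even a minuscule arithmetic discrepancy into a detectable divergence, is easy to demonstrate; the logistic map, fault size, and thresholds below are illustrative choices, not the patented implementation:

        def logistic_step(x, r=3.99):
            """One iteration of the chaotic logistic map."""
            return r * x * (1.0 - x)

        x_ref = x_node = 0.123456789          # identical seeds on both nodes
        for step in range(1, 1001):
            x_ref = logistic_step(x_ref)
            x_node = logistic_step(x_node)
            if step == 400:
                x_node += 1e-12               # inject a tiny arithmetic fault
            if abs(x_ref - x_node) > 1e-6:    # chaos has amplified the fault
                print("divergence detected at step", step)
                break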

  4. Improved mammographic interpretation of masses using computer-aided diagnosis

    Leichter, I.; Fields, S.; Novak, B.; Nirel, R.; Bamberger, P.; Lederman, R.; Buchbinder, S.

    2000-01-01

    The aim of this study was to evaluate the effectiveness of computerized image enhancement, to investigate criteria for discriminating benign from malignant mammographic findings by computer-aided diagnosis (CAD), and to test the role of quantitative analysis in improving the accuracy of interpretation of mass lesions. Forty sequential mammographically detected mass lesions referred for biopsy were digitized at high resolution for computerized evaluation. A prototype CAD system which included image enhancement algorithms was used for a better visualization of the lesions. Quantitative features which characterize the spiculation were automatically extracted by the CAD system for a user-defined region of interest (ROI). Reference ranges for malignant and benign cases were acquired from data generated by 214 known retrospective cases. The extracted parameters together with the reference ranges were presented to the radiologist for the analysis of 40 prospective cases. A pattern recognition scheme based on discriminant analysis was trained on the 214 retrospective cases, and applied to the prospective cases. Accuracy of interpretation with and without the CAD system, as well as the performance of the pattern recognition scheme, were analyzed using receiver operating characteristics (ROC) curves. A significant difference (p < 0.005) was found between features extracted by the CAD system for benign and malignant cases. Specificity of the CAD-assisted diagnosis improved significantly (p < 0.02) from 14% for the conventional assessment to 50%, and the positive predictive value increased from 0.47 to 0.62 (p < 0.04). The area under the ROC curve (A_z) increased significantly (p < 0.001) from 0.66 for the conventional assessment to 0.81 for the CAD-assisted analysis. The A_z for the results of the pattern recognition scheme was higher (0.95). The results indicate that there is an improved accuracy of diagnosis with the use of the mammographic CAD system above that of the unassisted radiologist. Our findings suggest that objective quantitative features extracted from digitized mammographic findings may help in differentiating between benign and malignant masses, and can assist the radiologist in the interpretation of mass lesions. (orig.)
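
    A toy sketch of the train-on-retrospective, score-prospective workflow using discriminant analysis and ROC analysis with scikit-learn; the two spiculation-style features and their distributions are invented, and the 120/94 split of the 214 cases is arbitrary:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        # invented spiculation features: margin gradient strength, spicule count
        benign = rng.normal([0.3, 2.0], [0.10, 1.0], size=(120, 2))
        malignant = rng.normal([0.6, 6.0], [0.15, 2.0], size=(94, 2))
        X = np.vstack([benign, malignant])
        y = np.r_[np.zeros(120), np.ones(94)]

        lda = LinearDiscriminantAnalysis().fit(X, y)   # "retrospective" training
        scores = lda.decision_function(X)              # likelihood-of-malignancy scores
        print("Az (AUC) =", round(roc_auc_score(y, scores), 3))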

  5. COMPUTER-AIDED DETECTION OF ACINAR SHADOWS IN CHEST RADIOGRAPHS

    Tao Xu

    2013-05-01

    Despite technological advances in medical diagnosis, accurate detection of infectious tuberculosis (TB) still poses challenges due to complex image features, and infectious TB thus continues to be a public health problem of global proportions. Currently, the detection of TB is mainly conducted visually by radiologists examining chest radiographs (CXRs). To reduce the backlog of CXR examination and provide more precise quantitative assessment, computer-aided detection (CAD) systems for potential lung lesions have been increasingly adopted and commercialized for clinical practice. CADs work as supporting tools to alert radiologists to suspected features that could easily have been neglected. In this paper, an effective CAD system aimed at detecting acinar shadow regions in CXRs is proposed. This system exploits textural and photometric feature analysis techniques, including the local binary pattern (LBP), the grey level co-occurrence matrix (GLCM) and the histogram of oriented gradients (HOG), to analyze target regions in CXRs. Classification of acinar shadows using AdaBoost is then deployed to verify the performance of a combination of these techniques. A comparative study on different image databases shows that the proposed CAD system delivers consistently high accuracy in detecting acinar shadows.
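
    All three descriptors are available in scikit-image (graycomatrix/graycoprops are the spellings from version 0.19 onward), so a per-region feature vector feeding AdaBoost might be sketched as follows; the histogram bins, GLCM properties and HOG cell sizes are arbitrary choices:

        import numpy as np
        from skimage.feature import local_binary_pattern, graycomatrix, graycoprops, hog
        from sklearn.ensemble import AdaBoostClassifier

        def region_features(patch):
            """Texture features for one CXR region (2-D uint8 array, >= 16x16)."""
            lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
            lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=256, normed=True)
            glcm_feats = [graycoprops(glcm, p)[0, 0]
                          for p in ("contrast", "homogeneity", "energy")]
            hog_feats = hog(patch, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            return np.r_[lbp_hist, glcm_feats, hog_feats]

        # X = np.array([region_features(p) for p in patches]); y = 0/1 labels
        # clf = AdaBoostClassifier(n_estimators=100).fit(X, y)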

  6. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    INSPIRE-00416173; Kebschull, Udo

    2015-01-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research, CERN. Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for this requirement. This paper describes a new security framework to allow execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero day vulnerabilities are exploited, by a Machin...

  7. Attacks and Intrusion Detection in Cloud Computing Using Neural Networks and Particle Swarm Optimization Algorithms

    Ahmad Shokuh Saljoughi

    2018-01-01

    Today, cloud computing has become popular among users in organizations and companies. Security and efficiency are the two major issues facing cloud service providers and their customers. Since cloud computing is a virtual pool of resources provided in an open environment (the Internet), cloud-based services entail security risks. Detection of intrusions and attacks by unauthorized users is one of the biggest challenges for both cloud service providers and cloud users. In the present study, artificial intelligence techniques, namely MLP neural networks and the particle swarm optimization algorithm, were used to detect intrusions and attacks. The methods were tested on the NSL-KDD and KDD-CUP datasets. The results showed improved accuracy in detecting attacks and intrusions by unauthorized users.

  8. Genomecmp: computer software to detect genomic rearrangements using markers

    Kulawik, Maciej; Nowak, Robert M.

    2017-08-01

    Detection of genomic rearrangements is a tough task because of the size of the data to be processed. As genome sequences may consist of hundreds of millions of symbols, it is not only practically impossible to compare them by hand, but also a complex problem for computer software. One way to significantly accelerate the process is to use a rearrangement detection algorithm based on unique short sequences called markers. The algorithm described in this paper derives markers from a base genome and finds the marker positions in another genome. The algorithm has been extended with support for ambiguity symbols. A web application with a graphical user interface has been created using a three-layer architecture, where users can run tasks simultaneously. The accuracy and efficiency of the proposed solution have been studied using generated and real data.
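
    A minimal illustration of the marker idea: take k-mers that occur exactly once in the base genome, locate those that are also unique in the other genome, and flag positions where marker order or spacing breaks. The k value, the random toy sequences and the break test are simplifications, not the paper's algorithm:

        import random
        from collections import Counter

        def unique_kmers(genome, k=12):
            """(position, k-mer) pairs for k-mers occurring exactly once."""
            counts = Counter(genome[i:i + k] for i in range(len(genome) - k + 1))
            return [(i, genome[i:i + k]) for i in range(len(genome) - k + 1)
                    if counts[genome[i:i + k]] == 1]

        def marker_positions(markers, other):
            """Map base-genome markers to positions where they are unique in other."""
            pairs = []
            for base_pos, kmer in markers:
                j = other.find(kmer)
                if j != -1 and other.find(kmer, j + 1) == -1:
                    pairs.append((base_pos, j))
            return pairs

        random.seed(0)
        base = "".join(random.choice("ACGT") for _ in range(2000))
        other = base[1200:] + base[:1200]          # toy rearrangement: block swap
        pairs = marker_positions(unique_kmers(base), other)
        breaks = sum(1 for (a1, b1), (a2, b2) in zip(pairs, pairs[1:])
                     if b2 - b1 != a2 - a1)        # spacing change = breakpoint
        print(len(pairs), "shared unique markers,", breaks, "order/spacing breaks")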

  9. Improved mammographic interpretation of masses using computer-aided diagnosis

    Leichter, I. [Dept. of Electro-Optics, Jerusalem College of Technology (Israel); Fields, S.; Novak, B. [Dept. of Radiology, Hadassah University Hospital, Mt. Scopus Jerusalem (Israel); Nirel, R. [Dept. of Statistics, Hebrew University of Jerusalem, Mt. Scopus, Jerusalem (Israel); Bamberger, P. [Dept. of Electronics, Jerusalem College of Technology, Jerusalem (Israel); Lederman, R. [Department of Radiology, Hadassah University Hospital, Ein Kerem, Jerusalem (Israel); Buchbinder, S. [Department of Radiology, Montefiore Medical Center, University Hospital for the Albert Einstein College of Medicine, Bronx, New York (United States)

    2000-02-01

    The aim of this study was to evaluate the effectiveness of computerized image enhancement, to investigate criteria for discriminating benign from malignant mammographic findings by computer-aided diagnosis (CAD), and to test the role of quantitative analysis in improving the accuracy of interpretation of mass lesions. Forty sequential mammographically detected mass lesions referred for biopsy were digitized at high resolution for computerized evaluation. A prototype CAD system which included image enhancement algorithms was used for a better visualization of the lesions. Quantitative features which characterize the spiculation were automatically extracted by the CAD system for a user-defined region of interest (ROI). Reference ranges for malignant and benign cases were acquired from data generated by 214 known retrospective cases. The extracted parameters together with the reference ranges were presented to the radiologist for the analysis of 40 prospective cases. A pattern recognition scheme based on discriminant analysis was trained on the 214 retrospective cases, and applied to the prospective cases. Accuracy of interpretation with and without the CAD system, as well as the performance of the pattern recognition scheme, were analyzed using receiver operating characteristics (ROC) curves. A significant difference (p < 0.005) was found between features extracted by the CAD system for benign and malignant cases. Specificity of the CAD-assisted diagnosis improved significantly (p < 0.02) from 14 % for the conventional assessment to 50 %, and the positive predictive value increased from 0.47 to 0.62 (p < 0.04). The area under the ROC curve (A_z) increased significantly (p < 0.001) from 0.66 for the conventional assessment to 0.81 for the CAD-assisted analysis. The A_z for the results of the pattern recognition scheme was higher (0.95). The results indicate that there is an improved accuracy of diagnosis with the use of the mammographic CAD system above that...

  10. Detection of Organophosphorus Pesticides with Colorimetry and Computer Image Analysis.

    Li, Yanjie; Hou, Changjun; Lei, Jincan; Deng, Bo; Huang, Jing; Yang, Mei

    2016-01-01

    Organophosphorus pesticides (OPs) represent a very important class of pesticides that are widely used in agriculture because of their relatively high performance and moderate environmental persistence; hence the sensitive and specific detection of OPs is highly significant. Based on the inhibitory effect on acetylcholinesterase (AChE) induced by inhibitors, including OPs and carbamates, a colorimetric analysis was used for detection of OPs with computer image analysis of color density in CMYK (cyan, magenta, yellow and black) color space and non-linear modeling. The results showed that yellow intensity weakened gradually as the concentration of dichlorvos increased. The quantitative analysis of dichlorvos was achieved by Artificial Neural Network (ANN) modeling, and the results showed that the established model had good predictive ability between training and predictive sets. Real cabbage samples containing dichlorvos were detected by colorimetry and gas chromatography (GC), respectively. The results showed that there was no significant difference between colorimetry and GC (P > 0.05). Experiments on accuracy, precision and repeatability revealed good performance for detection of OPs. AChE can also be inhibited by carbamates, and therefore this method has potential applications to real samples for OPs and carbamates because of its high selectivity and sensitivity.
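
    The color-density step can be sketched directly; the conversion below is the standard naive RGB-to-CMYK mapping, and strip_image is a hypothetical stand-in array for the reaction zone (floats in [0, 1]):

        import numpy as np

        def rgb_to_cmyk(rgb):
            """Naive RGB (floats in [0, 1]) to CMYK channel conversion."""
            k = 1.0 - rgb.max(axis=-1)
            denom = np.clip(1.0 - k, 1e-9, None)
            c = (1.0 - rgb[..., 0] - k) / denom
            m = (1.0 - rgb[..., 1] - k) / denom
            y = (1.0 - rgb[..., 2] - k) / denom
            return c, m, y, k

        # mean yellow density tracks AChE inhibition: weaker yellow at higher
        # dichlorvos concentration, per the abstract
        strip_image = np.random.default_rng(0).random((64, 64, 3))  # stand-in image
        c, m, y, k = rgb_to_cmyk(strip_image)
        print("mean yellow density:", y.mean())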

  11. An improved computer controlled triple-axis neutron spectrometer

    Cooper, M.J.; Hall, J.W.; Hutchings, M.T.

    1975-07-01

    A description is given of the computer-controlled triple-axis neutron spectrometer installed at the PLUTO reactor at Harwell. The reasons for and nature of recent major improvements are discussed. Following a general description of the spectrometer, details are then given of the new computerised control system, including the functions of the various programs which are now available to the user. (author)

  12. Using computer simulations to improve concept formation in chemistry

    The goal of this research project was to investigate whether computer simulations, used as a visually supportive teaching strategy, can improve concept formation with regard to molecules and chemical bonding, as found in water. Both the qualitative and quantitative evaluation of responses supported the positive outcome ...

  13. Computer-aided detection of early interstitial lung diseases using low-dose CT images

    Park, Sang Cheol; Kim, Soo Hyung [School of Electronics and Computer Engineering, Chonnam National University, Gwangju 500-757 (Korea, Republic of); Tan, Jun; Wang Xingwei; Lederman, Dror; Leader, Joseph K; Zheng Bin, E-mail: zhengb@upmc.edu [Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213 (United States)

    2011-02-21

    This study aims to develop a new computer-aided detection (CAD) scheme to detect early interstitial lung disease (ILD) using low-dose computed tomography (CT) examinations. The CAD scheme classifies each pixel depicted on the segmented lung areas into positive or negative groups for ILD using a mesh-grid-based region growth method and a multi-feature-based artificial neural network (ANN). A genetic algorithm was applied to select optimal image features and the ANN structure. In testing each CT examination, only pixels selected by the mesh-grid region growth method were analyzed and classified by the ANN to improve computational efficiency. All unselected pixels were classified as negative for ILD. After classifying all pixels into the positive and negative groups, CAD computed a detection score based on the ratio of the number of positive pixels to all pixels in the segmented lung areas, which indicates the likelihood of the test case being positive for ILD. When applied to an independent testing dataset of 15 positive and 15 negative cases, the CAD scheme yielded an area under the receiver operating characteristic curve of AUC = 0.884 ± 0.064 and 80.0% sensitivity at 85.7% specificity. The results demonstrated the feasibility of applying the CAD scheme to automatically detect early ILD using low-dose CT examinations.

  14. A Sensitivity Analysis of a Computer Model-Based Leak Detection System for Oil Pipelines

    Zhe Lu

    2017-08-01

    Improving leak detection capability to eliminate undetected releases is an area of focus for the energy pipeline industry, and pipeline companies are working to improve existing methods for monitoring their pipelines. Computer model-based leak detection methods that detect leaks by analyzing the pipeline hydraulic state have been widely employed in the industry, but their effectiveness in practical applications is often challenged by real-world uncertainties. This study quantitatively assessed the effects of uncertainties on the leak detectability of a commonly used real-time transient model-based leak detection system. Uncertainties in fluid properties, field sensors, and the data acquisition system were evaluated. Errors were introduced into the input variables of the leak detection system individually and collectively, and the changes in leak detectability caused by the uncertainties were quantified using simulated leaks. This study provides valuable quantitative results contributing towards a better understanding of how real-world uncertainties affect leak detection. A general ranking of the importance of the uncertainty sources was obtained: from high to low, it is time skew, bulk modulus error, viscosity error, and polling time. It was also shown that inertia-dominated pipeline systems were less sensitive to uncertainties than friction-dominated systems.

  15. Computed Tomographic Perfusion Improves Diagnostic Power of Coronary Computed Tomographic Angiography in Women

    Penagaluri, Ashritha; Higgins, Angela Y.; Vavere, Andrea L

    2016-01-01

    Background: Coronary computed tomographic angiography (CTA) and myocardial perfusion imaging (CTP) is a validated approach for detection and exclusion of flow-limiting coronary artery disease (CAD), but little data are available on gender-specific performance of these modalities. In this study, we aimed to evaluate the diagnostic accuracy of combined coronary CTA and CTP in detecting flow-limiting CAD in women compared with men. Methods and Results: Three hundred and eighty-one patients who underwent both CTA-CTP and single-photon emission computed tomography myocardial perfusion imaging … laboratories. Prevalence of flow-limiting CAD, defined by invasive coronary angiography equal to 50% or greater with an associated single-photon emission computed tomography myocardial perfusion imaging defect, was 45% (114/252) and 23% (30/129) in males and females, respectively. Patient-based diagnostic…

  16. Performance of computer-aided diagnosis for detection of lacunar infarcts on brain MR images: ROC analysis of radiologists' detection

    Uchiyama, Y.; Yokoyama, R.; Hara, T.; Fujita, H.; Asano, T.; Kato, H.; Hoshi, H.; Yamakawa, H.; Iwama, T.; Ando, H.; Yamakawa, H.

    2007-01-01

    The detection and management of asymptomatic lacunar infarcts on magnetic resonance (MR) images are important tasks for radiologists to ensure the prevention of severe cerebral infarctions. However, accurate identification of lacunar infarcts is difficult. Therefore, we developed a computer-aided diagnosis (CAD) scheme for the detection of lacunar infarcts. The purpose of this study was to evaluate radiologists' performance in the detection of lacunar infarcts without and with the use of the CAD scheme. 30 T1- and 30 T2-weighted images obtained from 30 patients were used for an observer study, consisting of 15 cases with a single lacunar infarct and 15 cases without any lacunar infarct. Six radiologists participated in the observer study. They interpreted lacunar infarcts first without and then with the use of the scheme. For all six observers, the average area under the receiver operating characteristic curve increased from 0.920 to 0.965 when they used the computer output. This CAD scheme might have the potential to improve the accuracy of radiologists' performance in the detection of lacunar infarcts on MR images. (orig.)

  17. Defect Detectability Improvement for Conventional Friction Stir Welds

    Hill, Chris

    2013-01-01

    This research was conducted to evaluate the effects of defect detectability via phased array ultrasound technology in conventional friction stir welds by comparing conventionally prepped post weld surfaces to a machined surface finish. A machined surface is hypothesized to improve defect detectability and increase material strength.

  18. Detectability in the presence of computed tomographic reconstruction noise

    Hanson, K.M.

    1977-01-01

    The multitude of commercial computed tomographic (CT) scanners which have recently been introduced for use in diagnostic radiology has given rise to a need to compare these different machines in terms of image quality and dose to the patient. It is therefore desirable to arrive at a figure of merit for a CT image which gives a measure of the diagnostic efficacy of that image. This figure of merit may well be dependent upon the specific visual task being performed. It is clearly important that the capabilities and deficiencies of the human observer as well as the interface between man and machine, namely the viewing system, be taken into account in formulating the figure of merit. Since the CT reconstruction is the result of computer processing, it is possible to use this processing to alter the characteristics of the displayed images. This image processing may improve or degrade the figure of merit

  19. 77 FR 39498 - Guidances for Industry and Food and Drug Administration Staff: Computer-Assisted Detection...

    2012-07-03

    ...] Guidances for Industry and Food and Drug Administration Staff: Computer-Assisted Detection Devices Applied... Clinical Performance Assessment: Considerations for Computer-Assisted Detection Devices Applied to... guidance, entitled ``Computer-Assisted Detection Devices Applied to Radiology Images and Radiology Device...

  20. Computer-aided detection of pulmonary nodules: influence of nodule characteristics on detection performance

    Marten, K.; Engelke, C.; Seyfarth, T.; Grillhoesl, A.; Obenauer, S.; Rummeny, E.J.

    2005-01-01

    AIM: To evaluate prospectively the influence of pulmonary nodule characteristics on the detection performance of a computer-aided diagnosis (CAD) tool and experienced chest radiologists using multislice CT (MSCT). MATERIALS AND METHODS: MSCT scans of 20 consecutive patients were evaluated by a CAD system and two independent chest radiologists for the presence of pulmonary nodules. Nodule size, position, margin, matrix characteristics, vascular and pleural attachments and reader confidence were recorded and the data compared with an independent standard of reference. Statistical analysis for predictors influencing nodule detection or reader performance included chi-squared, retrograde stepwise conditional logistic regression with odds ratios and nodule detection proportion estimates (DPE), and ROC analysis. RESULTS: For 135 nodules, detection rates for CAD and the readers were 76.3, 52.6 and 52.6%, respectively; false-positive rates were 0.55, 0.25 and 0.15 per examination, respectively. In consensus with CAD the reader detection rate increased to 93.3%, and the false-positive rate dropped to 0.1/scan. DPEs for nodules ≤5 mm were significantly higher for CAD than for the readers (p<0.05). Absence of vascular attachment was the only significant predictor of nodule detection by CAD (p=0.0006-0.008). There were no predictors of nodule detection for reader consensus with CAD. In contrast, vascular attachment predicted nodule detection by the readers (p=0.0001-0.003). Reader sensitivity was higher for nodules with vascular attachment than for unattached nodules (sensitivities 0.768 and 0.369; 95% confidence intervals=0.651-0.861 and 0.253-0.498, respectively). CONCLUSION: CAD increases nodule detection rates, decreases false-positive rates and compensates for deficient reader performance in the detection of the smallest lesions and of nodules without vascular attachment.

  1. Microchannel electron multiplier: improvement in gain performances and detection dynamics

    Audier, M.; Delmotte, J.C.; Boutot, J.P.

    1978-01-01

    The performances of an MCP are a function of its geometrical characteristics (diameter d and ratio l/d of a channel, useful area) and of the applied voltage. Gain and mean output current are limited by saturation phenomena. By using a particular cascaded MCP configuration, it is possible to simultaneously improve the gain, its associated fluctuations and the detection dynamics (detected level, counting rate). For gains of 10⁶-10⁷, the fluctuations can be kept as low as 20%, and an improvement by a factor > 10 can be obtained in the detection dynamics. [fr]

  2. Role of Computer Aided Diagnosis (CAD) in the detection of pulmonary nodules on 64 row multi detector computed tomography.

    Prakashini, K; Babu, Satish; Rajgopal, K V; Kokila, K Raja

    2016-01-01

    To determine the overall performance of an existing CAD algorithm with thin-section computed tomography (CT) in the detection of pulmonary nodules, and to evaluate detection sensitivity over a varying range of nodule density, size, and location. A cross-sectional prospective study was conducted on 20 patients with 322 suspected nodules who underwent diagnostic chest imaging using 64-row multi-detector CT. The examinations were evaluated on reconstructed images of 1.4 mm thickness and 0.7 mm interval. Detection of pulmonary nodules, initially by a radiologist with 2 years of experience (RAD) and later by CAD lung nodule software, was assessed. CAD nodule candidates were then accepted or rejected accordingly. Detected nodules were classified based on their size, density, and location. The performance of the RAD and the CAD system was compared with the gold standard, i.e., true nodules confirmed by the consensus of a senior RAD and CAD together. The overall sensitivity and false-positive (FP) rate of the CAD software were calculated. Of the 322 suspected nodules, 221 were classified as true nodules on the consensus of the senior RAD and CAD together. Of the true nodules, the RAD detected 206 (93.2%) and the CAD 202 (91.4%). CAD and RAD together picked up more nodules than either CAD or RAD alone. Overall sensitivity for nodule detection with the CAD program was 91.4%, and FP detection per patient was 5.5%. The CAD showed comparatively higher sensitivity for nodules of size 4-10 mm (93.4%) and for nodules in hilar (100%) and central (96.5%) locations when compared to the RAD's performance. CAD performance was high in detecting pulmonary nodules, including small and low-density nodules. Even with a relatively high FP rate, CAD assists and improves the RAD's performance as a second reader, especially for nodules located in the central and hilar regions and for small nodules, while saving the RAD's time.

  3. Improvement of detection limits of PIXE by substrate signal reduction

    Beaulieu, S.; Nejedly, Z.; Campbell, J.L.; Edwards, G.C.; Dias, G.M.

    2002-01-01

    Limits of detection (LODs) for aerosol samples collected using PIXE International cascade impactors were improved by approximately 50% after reducing the cross-sectional area of the analytical beam, based on results obtained from microscope photographs of aerosol deposits. Improvements in LODs were most noticeable for selected elements collected on the smaller stages of the impactor (stages 1-3).

  4. An Improved Wavelet‐Based Multivariable Fault Detection Scheme

    Harrou, Fouzi

    2017-07-06

    Data observed from environmental and engineering processes are usually noisy and correlated in time, which makes fault detection more difficult, as the presence of noise degrades fault detection quality. Multiscale representation of data using wavelets is a powerful feature-extraction tool that is well suited to denoising and decorrelating time series data. In this chapter, we combine the advantages of multiscale partial least squares (MSPLS) modeling with those of the univariate EWMA (exponentially weighted moving average) monitoring chart, which results in an improved fault detection system, especially for detecting small faults in highly correlated, multivariate data. Toward this end, we applied the EWMA chart to the output residuals obtained from the MSPLS model. It is shown through simulated distillation column data that a significant improvement in fault detection can be obtained by using the proposed method as compared to the conventional partial least squares (PLS)-based Q and EWMA methods and the MSPLS-based Q method.
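
    The EWMA-on-residuals step can be sketched with the standard time-varying control limits; the smoothing constant λ = 0.2, width L = 3, in-control window and fault size below are illustrative choices, not the chapter's tuning:

        import numpy as np

        def ewma_alarms(residuals, lam=0.2, L=3.0, n_incontrol=200):
            """Return indices where the EWMA of the residuals leaves its limits."""
            mu = residuals[:n_incontrol].mean()        # in-control estimates
            sigma = residuals[:n_incontrol].std()
            z, alarms = mu, []
            for t, r in enumerate(residuals):
                z = lam * r + (1 - lam) * z
                half_width = L * sigma * np.sqrt(
                    lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
                if abs(z - mu) > half_width:
                    alarms.append(t)
            return alarms

        rng = np.random.default_rng(1)
        res = rng.normal(0.0, 1.0, 500)
        res[350:] += 0.8                               # small sustained fault
        alarms = ewma_alarms(res)
        print("first alarm after the fault:", next(a for a in alarms if a >= 350))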

  5. Improved QRD-M Detection Algorithm for Generalized Spatial Modulation Scheme

    Xiaorong Jing

    2017-01-01

    Generalized spatial modulation (GSM) is a spectrally and energy efficient multiple-input multiple-output (MIMO) transmission scheme. Directly applying the original QR-decomposition with M algorithm (QRD-M) to the GSM scheme leads to imperfect detection performance with relatively high computational complexity. In this paper an improved QRD-M algorithm is proposed for GSM signal detection, which achieves near-optimal performance with relatively low complexity. Based on the QRD, the improved algorithm first transforms maximum likelihood (ML) detection of the GSM signals into a search over an inverted tree structure. Then, in the search over the M branches, the branches corresponding to illegitimate transmit antenna combinations (TACs) and to invalid numbers of active antennas are pruned, in order to improve the validity of the surviving branches at each level by exploiting the characteristics of GSM signals. Simulation results show that the improved QRD-M detection algorithm provides performance similar to ML with reduced computational complexity compared to the original QRD-M algorithm, and that the optimal value of the parameter M for detection of the GSM scheme is equal to the modulation order plus one.
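
    For reference, baseline QRD-M for a conventional MIMO system (without the GSM-specific TAC pruning the paper adds) can be written compactly; the QPSK constellation, channel model and M = 4 are illustrative choices:

        import numpy as np

        def qrd_m_detect(H, y, constellation, M=4):
            """Baseline QRD-M: keep the M best partial paths per detection layer."""
            n = H.shape[1]
            Q, R = np.linalg.qr(H)
            z = Q.conj().T @ y
            paths = [(0.0, [])]            # (metric, symbols for layers n-1 .. i+1)
            for i in range(n - 1, -1, -1):
                expanded = []
                for metric, syms in paths:
                    interf = sum(R[i, j] * syms[n - 1 - j] for j in range(i + 1, n))
                    for s in constellation:
                        m = metric + abs(z[i] - interf - R[i, i] * s) ** 2
                        expanded.append((m, syms + [s]))
                paths = sorted(expanded, key=lambda p: p[0])[:M]   # survivor pruning
            best = min(paths, key=lambda p: p[0])[1]
            return np.array(best[::-1])    # reorder as x[0], ..., x[n-1]

        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
        rng = np.random.default_rng(0)
        H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
        x = rng.choice(qpsk, 4)
        y = H @ x + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
        print(np.allclose(qrd_m_detect(H, y, qpsk), x))   # True at this toy SNR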

  6. Autonomic intrusion detection: Adaptively detecting anomalies over unlabeled audit data streams in computer networks

    Wang, Wei; Guyet, Thomas; Quiniou, René ; Cordier, Marie-Odile; Masseglia, Florent; Zhang, Xiangliang

    2014-01-01

    In this work, we propose a novel framework of autonomic intrusion detection that fulfills online and adaptive intrusion detection over unlabeled HTTP traffic streams in computer networks. The framework holds potential for self-managing: self-labeling, self-updating and self-adapting. Our framework employs the Affinity Propagation (AP) algorithm to learn a subject's behaviors through dynamical clustering of the streaming data. It automatically labels the data and adapts to normal behavior changes while identifying anomalies. Two large real HTTP traffic streams collected in our institute as well as a set of benchmark KDD'99 data are used to validate the framework and the method. The test results show that the autonomic model achieves better results in terms of effectiveness and efficiency compared to the adaptive Sequential Karhunen-Loeve method and static AP, as well as three other static anomaly detection methods, namely k-NN, PCA and SVM.
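
    A toy stand-in for the clustering-based self-labeling step, assuming scikit-learn's AffinityPropagation and two invented per-request features; a real HTTP stream would need richer features, windowing, and the adaptive updating the paper describes:

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        rng = np.random.default_rng(0)
        normal = rng.normal([200, 0.5], [20, 0.05], size=(300, 2))  # toy features
        attack = rng.normal([900, 0.9], [30, 0.02], size=(6, 2))
        X = np.vstack([normal, attack])

        labels = AffinityPropagation(random_state=0).fit(X).labels_
        sizes = np.bincount(labels)
        # self-labeling heuristic: members of very small clusters are anomalies
        anomalous = np.flatnonzero(sizes[labels] <= 10)
        print(len(anomalous), "points flagged; cluster sizes:", sizes)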

  7. Autonomic intrusion detection: Adaptively detecting anomalies over unlabeled audit data streams in computer networks

    Wang, Wei

    2014-06-22

    In this work, we propose a novel framework of autonomic intrusion detection that fulfills online and adaptive intrusion detection over unlabeled HTTP traffic streams in computer networks. The framework holds potential for self-managing: self-labeling, self-updating and self-adapting. Our framework employs the Affinity Propagation (AP) algorithm to learn a subject's behaviors through dynamical clustering of the streaming data. It automatically labels the data and adapts to normal behavior changes while identifying anomalies. Two large real HTTP traffic streams collected in our institute as well as a set of benchmark KDD'99 data are used to validate the framework and the method. The test results show that the autonomic model achieves better results in terms of effectiveness and efficiency compared to the adaptive Sequential Karhunen-Loeve method and static AP, as well as three other static anomaly detection methods, namely k-NN, PCA and SVM.

  8. Does Computer-aided Detection Assist in the Early Detection of Breast Cancer?

    Hukkinen, K.; Pamilo, M.

    2005-01-01

    Purpose: To evaluate whether breast cancers detected at screening are visible in previous mammograms, and to assess the performance of a computer-aided detection (CAD) system in detecting lesions in preoperative and previous mammograms. Material and Methods: Initial screening detected 67 women with 69 surgically verified breast cancers (Group A). An experienced screening radiologist retrospectively analyzed previous mammograms for visible lesions (Group B), noting in particular their size and morphology. Preoperative and previous mammograms were analyzed with CAD; a relatively inexperienced resident also analyzed previous mammograms. The performances of CAD and resident were then compared. Results: Of the 69 lesions identified, 36 were visible in previous mammograms. Of these 36 'missed' lesions, 14 were under 10 mm in diameter and 29 were mass lesions. The sensitivity of CAD was 81% in Group A and 64% in Group B. Small mass lesions were harder for CAD to detect. The specificity of CAD was 3% in Group A and 9% in Group B. Together, CAD and the resident found more 'missed' lesions than separately. Conclusion: Of the 69 breast cancers, 36 were visible in previous mammograms. CAD's sensitivity in detecting cancer lesions ranged from 64% to 81%, while specificity ranged from 9% to as low as 3%. CAD may be helpful if the radiologist is less subspecialized in mammography

  9. [Computed tomography with computer-assisted detection of pulmonary nodules in dogs and cats].

    Niesterok, C; Piesnack, S; Köhler, C; Ludewig, E; Alef, M; Kiefer, I

    2015-01-01

    The aim of this study was to assess the potential benefit of computer-assisted detection (CAD) of pulmonary nodules in veterinary medicine. Therefore, the CAD rate was compared to the detection rates of two individual examiners in terms of its sensitivity and false-positive findings. We included 51 dogs and 16 cats with pulmonary nodules previously diagnosed by computed tomography. First, the number of nodules ≥ 3 mm was recorded for each patient by two independent examiners. Subsequently, each examiner used the CAD software for automated nodule detection. With the knowledge of the CAD results, a final consensus decision on the number of nodules was achieved. The software used was a commercially available CAD program. The sensitivity of examiner 1 was 89.2%, while that of examiner 2 reached 87.4%. CAD had a sensitivity of 69.4%. With CAD, the sensitivity of examiner 1 increased to 94.7% and that of examiner 2 to 90.8%. The CAD-system, which we used in our study, had a moderate sensitivity of 69.4%. Despite its severe limitations, with a high level of false-positive and false-negative results, CAD increased the examiners' sensitivity. Therefore, its supportive role in diagnostics appears to be evident.

  10. Computational methods for ab initio detection of microRNAs

    Malik eYousef

    2012-10-01

    MicroRNAs are small RNA sequences of 18-24 nucleotides in length, which serve as templates to drive post-transcriptional gene silencing. The canonical microRNA pathway starts with transcription from DNA and is followed by processing via the Microprocessor complex, yielding a hairpin structure, which is then exported into the cytosol, where it is processed by Dicer and then incorporated into the RNA-induced silencing complex. All of these biogenesis steps add to the overall specificity of miRNA production and effect. Unfortunately, their modes of action are just beginning to be elucidated, and therefore computational prediction algorithms cannot model the process but are usually forced to employ machine learning approaches. This work focuses on ab initio prediction methods throughout; homology-based miRNA detection methods are therefore not discussed. Current ab initio prediction algorithms, their ties to data mining, and their prediction accuracy are detailed.

  11. Engineering of an Extreme Rainfall Detection System using Grid Computing

    Olivier Terzo

    2012-10-01

    This paper describes a new approach to intensive rainfall data analysis. ITHACA's Extreme Rainfall Detection System (ERDS) is conceived to provide near real-time alerts related to potential exceptional rainfall worldwide, which can be used by WFP or other humanitarian assistance organizations to evaluate an event and understand the potentially floodable areas where their assistance is needed. This system is based on precipitation analysis and uses satellite rainfall data at a worldwide extent. The project uses the Tropical Rainfall Measuring Mission Multisatellite Precipitation Analysis dataset, a NASA-delivered near real-time product for monitoring current rainfall conditions over the world. Considering the great amount of data to process, this paper presents an architectural solution based on Grid Computing techniques. Our focus is on the advantages of using a distributed architecture in terms of performance for this specific purpose.

  12. PACS-Based Computer-Aided Detection and Diagnosis

    Huang, H. K. (Bernie); Liu, Brent J.; Le, Anh HongTu; Documet, Jorge

    The ultimate goal of Picture Archiving and Communication System (PACS)-based Computer-Aided Detection and Diagnosis (CAD) is to integrate CAD results into daily clinical practice so that it becomes a second reader to aid the radiologist's diagnosis. Integration of CAD and Hospital Information System (HIS), Radiology Information System (RIS) or PACS requires certain basic ingredients from Health Level 7 (HL7) standard for textual data, Digital Imaging and Communications in Medicine (DICOM) standard for images, and Integrating the Healthcare Enterprise (IHE) workflow profiles in order to comply with the Health Insurance Portability and Accountability Act (HIPAA) requirements to be a healthcare information system. Among the DICOM standards and IHE workflow profiles, DICOM Structured Reporting (DICOM-SR); and IHE Key Image Note (KIN), Simple Image and Numeric Report (SINR) and Post-processing Work Flow (PWF) are utilized in CAD-HIS/RIS/PACS integration. These topics with examples are presented in this chapter.

  13. Fossa navicularis magna detection on cone-beam computed tomography

    Syed, Ali Z. [Dept. of Oral and Maxillofacial Medicine and Diagnostic Sciences, School of Dental Medicine, Case Western Reserve University, Cleveland(United States); Mupparapu, Mel [Div. of Radiology, University of Pennsylvania School of Dental Medicine, Philadelphia (United States)

    2016-03-15

    Herein, we report and discuss the detection of fossa navicularis magna, a close radiographic anatomic variant of canalis basilaris medianus of the basiocciput, as an incidental finding in cone-beam computed tomography (CBCT) imaging. The CBCT data of the patients in question were referred for the evaluation of implant sites and to rule out pathology in the maxilla and mandible. CBCT analysis showed osseous, notch-like defects on the inferior aspect of the clivus in all four cases. The appearance of fossa navicularis magna varied among the cases. In some, it was completely within the basiocciput and mimicked a small rounded, corticated, lytic defect, whereas it appeared as a notch in others. Fossa navicularis magna is an anatomical variant that occurs on the inferior aspect of the clivus. The pertinent literature on the anatomical variations occurring in this region was reviewed.

  14. Computational optimisation of targeted DNA sequencing for cancer detection

    Martinez, Pierre; McGranahan, Nicholas; Birkbak, Nicolai Juul

    2013-01-01

    Despite recent progress thanks to next-generation sequencing technologies, personalised cancer medicine is still hampered by intra-tumour heterogeneity and drug resistance. As most patients with advanced metastatic disease face poor survival, there is a need to improve early diagnosis. Analysing … detection. Dividing 4,467 samples into one discovery and two independent validation cohorts, we show that up to 76% of 10 cancer types harbour at least one mutation in a panel of only 25 genes, with high sensitivity across most tumour types. Our analyses demonstrate that targeting "hotspot" regions would...
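
    Panel design of this kind is essentially a maximum-coverage problem; a greedy sketch over a toy sample-to-mutated-genes mapping illustrates the idea, though the paper's actual optimisation procedure may differ:

        def greedy_panel(mutations, panel_size=25):
            """mutations: dict sample -> set of mutated genes; greedy max coverage."""
            all_genes = {g for genes in mutations.values() for g in genes}
            panel, covered = [], set()
            for _ in range(panel_size):
                remaining = all_genes - set(panel)
                if not remaining:
                    break
                best = max(remaining,
                           key=lambda g: sum(1 for s, genes in mutations.items()
                                             if s not in covered and g in genes))
                panel.append(best)
                covered |= {s for s, genes in mutations.items() if best in genes}
            return panel, len(covered) / len(mutations)

        toy = {"s1": {"TP53", "KRAS"}, "s2": {"TP53"}, "s3": {"PIK3CA"},
               "s4": {"KRAS", "PIK3CA"}, "s5": {"BRAF"}}
        panel, coverage = greedy_panel(toy, panel_size=2)
        print(panel, coverage)    # e.g. a 2-gene panel covering 4 of 5 samples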

  15. Improvement of level-1 PSA computer code package

    Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.

    1997-07-01

    This is the fifth (final) year of phase I of the Government-sponsored Mid- and Long-term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The Improvement of Level-1 PSA Computer Codes', is divided into two main activities: (1) improvement of level-1 PSA methodology, and (2) development of methodology for applying PSA techniques to the operation and maintenance of nuclear power plants. The level-1 PSA code KIRAP was converted to the PC-Windows environment. To improve the efficiency of performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling were developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) was developed, considering dependency among source data. A computer program which handles dependency among data sources was also developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods were reviewed and a CCF database was established. Impact vectors can be estimated from this CCF database. A computer code, called MPRIDP, which handles the CCF database was also developed. A CCF analysis reflecting plant-specific defensive strategies against CCF events was also performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique was applied to review the feasibility study of on-line maintenance and to the prioritization of in-service testing (IST) of motor-operated valves (MOVs). Finally, root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies were adopted and applied to improving the reliability of the emergency diesel generators (EDGs) of nuclear power plants. To support the RCA and RCM analyses, two software programs were developed: EPIS and RAM Pro. (author). 129 refs., 20 tabs., 60 figs.

  16. Improvement of level-1 PSA computer code package

    Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.

    1997-07-01

    This is the fifth (final) year of phase I of the Government-sponsored Mid- and Long-term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The Improvement of Level-1 PSA Computer Codes', is divided into two main activities: 1) improvement of level-1 PSA methodology, and 2) development of methodology for applying PSA techniques to the operation and maintenance of nuclear power plants. The level-1 PSA code KIRAP was converted to the PC-Windows environment. To improve the efficiency of performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling were developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) was developed, considering dependency among source data. A computer program which handles dependency among data sources was also developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods were reviewed and a CCF database was established. Impact vectors can be estimated from this CCF database. A computer code, called MPRIDP, which handles the CCF database was also developed. A CCF analysis reflecting plant-specific defensive strategies against CCF events was also performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique was applied to review the feasibility study of on-line maintenance and to the prioritization of in-service testing (IST) of motor-operated valves (MOVs). Finally, root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies were adopted and applied to improving the reliability of the emergency diesel generators (EDGs) of nuclear power plants. To support the RCA and RCM analyses, two software programs were developed: EPIS and RAM Pro. (author). 129 refs., 20 tabs., 60 figs.

  17. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-12-01

    Grids allow users flexible on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research, CERN. Physicists can submit jobs used to process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges. They are interesting targets for attackers seeking huge computational resources. Since users can execute arbitrary code in the worker nodes on the Grid sites, special care must be taken in this environment. Automatic tools to harden and monitor this scenario are required. Currently, there is no integrated solution for this requirement. This paper describes a new security framework to allow execution of job payloads in a sandboxed context. It also allows process behavior monitoring to detect intrusions, even when new attack methods or zero day vulnerabilities are exploited, by a Machine Learning approach. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware.

  18. New Computer Assisted Diagnostic to Detect Alzheimer Disease

    Ben Rabeh Amira

    2016-08-01

    We describe a new Computer Assisted Diagnosis (CAD) system to automatically detect Alzheimer's Disease (AD) patients, Mild Cognitive Impairment (MCI) patients and elderly controls, based on the segmentation and classification of the hippocampus (H) and corpus callosum (CC) from Magnetic Resonance Images (MRI). For the segmentation, we used a new method based on a deformable model to extract the desired regions, and we then computed geometric and texture features. For the classification, we proposed a new supervised method. We evaluated the accuracy of our method in a group of 25 patients with AD (age ± standard deviation (SD) = 70 ± 6 years), 25 patients with MCI (age ± SD = 65 ± 8 years) and 25 elderly healthy controls (age ± SD = 60 ± 8 years). For the AD patients we found a classification accuracy of 92%, for the MCI 88%, and for the elderly controls 96%. Overall, we found our method to be 92% accurate. Our method can be a useful tool for diagnosing Alzheimer's Disease at any of these stages.

  19. Intrusion Prevention and Detection in Grid Computing - The ALICE Case

    Gomez, Andres; Lara, Camilo; Kebschull, Udo

    2015-01-01

    Grids allow users flexible, on-demand usage of computing resources through remote communication networks. A remarkable example of a Grid in High Energy Physics (HEP) research is used in the ALICE experiment at the European Organization for Nuclear Research (CERN). Physicists can submit jobs that process the huge amount of particle collision data produced by the Large Hadron Collider (LHC). Grids face complex security challenges; they are attractive targets for attackers seeking large computational resources. Since users can execute arbitrary code on the worker nodes of the Grid sites, special care must be taken in this environment, and automatic tools to harden and monitor it are required. Currently, no integrated solution meets this requirement. This paper describes a new security framework that allows execution of job payloads in a sandboxed context. It also monitors process behavior with a Machine Learning approach to detect intrusions, even when new attack methods or zero-day vulnerabilities are exploited. We plan to implement the proposed framework as a software prototype that will be tested as a component of the ALICE Grid middleware. (paper)

  20. Computer Aided Diagnosis System for Early Lung Cancer Detection

    Fatma Taher

    2015-11-01

    Full Text Available Lung cancer continues to rank as the leading cause of cancer deaths worldwide. One of the most promising techniques for early detection of cancerous cells relies on sputum cell analysis. This was the motivation behind the design and development of a new computer aided diagnosis (CAD) system for early detection of lung cancer based on the analysis of sputum color images. The proposed CAD system encompasses four main processing steps. First is the preprocessing step, which utilizes a Bayesian classification method using histogram analysis. In the second step, mean shift segmentation is applied to segment the nuclei from the cytoplasm. The third step is feature analysis, in which geometric and chromatic features are extracted from the nucleus region; these features are used in the diagnostic process of the sputum images. Finally, the diagnosis is completed using an artificial neural network and a support vector machine (SVM) to classify the cells as benign or malignant. The performance of the system was analyzed based on different criteria such as sensitivity, specificity and accuracy, and the evaluation was carried out using the Receiver Operating Characteristic (ROC) curve. The experimental results demonstrate the efficiency of the SVM classifier over the other classifiers, with 97% sensitivity and accuracy as well as a significant reduction in false positive and false negative rates.
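
    The classification stage is described only at a high level; a minimal sketch of such a stage, assuming hypothetical geometric and chromatic feature vectors extracted from nucleus regions (not the paper's data), could look like the following scikit-learn pipeline.

```python
# Hedged sketch of the benign/malignant classification stage; the feature
# layout and data are synthetic placeholders, not the paper's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import recall_score, accuracy_score

rng = np.random.default_rng(1)
# Each row: [nucleus_area, perimeter, circularity, mean_hue, mean_saturation]
X = rng.normal(size=(300, 5))
# Synthetic labels loosely tied to two features (0 = benign, 1 = malignant)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("sensitivity:", recall_score(y_te, pred))
print("accuracy:", accuracy_score(y_te, pred))
```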

  1. Safety Computer Vision Rules for Improved Sensor Certification

    Mogensen, Johann Thor Ingibergsson; Kraft, Dirk; Schultz, Ulrik Pagh

    2017-01-01

    Mobile robots are used across many domains from personal care to agriculture. Working in dynamic open-ended environments puts high constraints on the robot perception system, which is critical for the safety of the system as a whole. To achieve the required safety levels the perception system needs...... to be certified, but no specific standards exist for computer vision systems, and the concept of safe vision systems remains largely unexplored. In this paper we present a novel domain-specific language that allows the programmer to express image quality detection rules for enforcing safety constraints...

  2. Improved flaw detection and characterization with difference thermography

    Winfree, William P.; Zalameda, Joseph N.; Howell, Patricia A.

    2011-05-01

    Flaw detection and characterization with thermographic techniques in graphite polymer composites is often limited by localized variations in the thermographic response. Variations in properties such as acceptable porosity, fiber volume content, and surface polymer thickness cause significant variations in the initial thermal response. These variations result in a noise floor that increases the difficulty of detecting and characterizing deeper flaws. This paper investigates comparing thermographic responses taken before and after a change in state of a composite to improve the detection of subsurface flaws. A method is presented for registration of the responses before forming the difference. A significant improvement in detectability is achieved by comparing the differences in response. Examples of changes in state due to application of a load and due to impact are presented.
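
    As an illustration of the register-then-difference idea (not the paper's implementation), the sketch below aligns two synthetic thermal frames with phase cross-correlation and subtracts them so that benign background variations cancel.

```python
# Illustration of difference thermography on synthetic stand-in frames:
# estimate the misregistration, align, and subtract.
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

rng = np.random.default_rng(1)
yy, xx = np.mgrid[:128, :128]

# Benign material variation (smooth background) plus sensor noise
background = np.sin(yy / 20.0) + 0.5 * np.cos(xx / 15.0)
before = background + 0.05 * rng.normal(size=background.shape)

# "After" frame: same background, a subsurface flaw signature, small shift
flaw = 0.8 * np.exp(-((yy - 64) ** 2 + (xx - 80) ** 2) / 50.0)
after = nd_shift(background + flaw, shift=(2.3, -1.7))
after += 0.05 * rng.normal(size=after.shape)

# Estimate and undo the misregistration, then form the difference image
offset, _, _ = phase_cross_correlation(before, after, upsample_factor=10)
difference = nd_shift(after, shift=offset) - before

print("estimated shift (rows, cols):", offset)
print("peak difference at:", np.unravel_index(np.abs(difference).argmax(),
                                              difference.shape))
```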

  3. Effect of computer-aided detection as a second reader in multidetector-row CT colonography

    Mang, Thomas; Peloschek, Philipp; Plank, Christina; Maier, Andrea; Weber, Michael; Herold, Christian; Schima, Wolfgang; Graser, Anno; Bogoni, Luca

    2007-01-01

    Our purpose was to assess the effect of computer-aided detection (CAD) used as a second reader on lesion detection in computed tomographic colonography, and to compare the influence of CAD on the performance of readers with different levels of expertise. Fifty-two CT colonography patient data-sets (37 patients: 55 endoscopically confirmed polyps ≥0.5 cm, seven cancers; 15 patients: no abnormalities) were retrospectively reviewed by four radiologists (two expert, two nonexpert). After primary data evaluation, a second reading augmented with the findings of CAD (polyp-enhanced view, Siemens) was performed. Sensitivities and reading times were calculated for each reader without CAD and when supported by CAD findings. The sensitivity for polyp detection of the expert readers was 91% each, and of the nonexpert readers, 76% and 75%, respectively. CAD increased the sensitivity of the expert readers to 96% (P = 0.25) and 93% (P = 1), and that of the nonexpert readers to 91% (P = 0.008) and 95% (P = 0.001), respectively. All four readers diagnosed 100% of the cancers, but CAD alone detected only 43%. CAD increased reading time by 2.1 min (mean). CAD as a second reader significantly improves sensitivity for polyp detection in a high disease prevalence population for nonexpert readers. CAD causes a modest increase in reading time and is of limited value in the detection of cancer. (orig.)

  4. Improved detection of fill-in using sublingual nitroglycerin in technetium-99m tetrofosmin exercise/rest single photon emission computed tomography one day protocol for old myocardial infarction

    Miyanaga, Hajime; Kunieda, Yasufumi; Oguni, Atsuhiko; Kamitani, Tadaaki; Kawasaki, Shingo; Takahashi, Toru

    1999-01-01

    Twenty-one patients with old myocardial infarction underwent repeated 99mTc-tetrofosmin (99mTc) exercise/rest same-day protocols with and without the administration of sublingual nitroglycerin (NTG) 5 min before the second injection of 99mTc for rest SPECT. Twelve of these patients also underwent ordinary exercise/redistribution 201Tl SPECT. The control protocol images showed decreased uptake of 99mTc on exercise in 157 of 420 segments and the presence of fill-in at rest in 58 segments. Images obtained with administration of NTG showed decreased uptake of 99mTc on exercise in 163 of 420 segments and fill-in in 74 segments at rest. The frequency of fill-in was thus greater in the NTG protocol than in the control protocol. Segments were scored in grades according to 99mTc uptake in the two protocols. Fill-in was present only, or was more remarkable, in 31 segments in the NTG protocol compared with the control protocol, and in 10 segments in the control protocol compared with the NTG protocol. In the NTG protocol, the mean defect score of the exercise images, calculated automatically from the bull's eye image, was higher than that of the rest images. The mean severity score of the exercise images, also calculated automatically from the bull's eye image, was likewise higher than that of the rest images, whereas the mean severity scores of the stress and rest images in the control protocol were not significantly different. Moreover, the mean defect score and severity score of the rest images from the NTG protocol were significantly lower than those obtained from the control protocol. Sublingual NTG administration before the injection of 99mTc-tetrofosmin for the rest study in the one-day exercise/rest protocol enhanced fill-in, and so may enhance the detection of viable myocardium, allowing more informed decisions regarding cardiac revascularization in patients with chronic coronary artery disease. (K.H.)

  5. Sonar Image Enhancements for Improved Detection of Sea Mines

    Jespersen, Karl; Sørensen, Helge Bjarup Dissing; Zerr, Benoit

    1999-01-01

    In this paper, five methods for enhancing sonar images prior to automatic detection of sea mines are investigated. Two of the methods have previously been published in connection with detection systems and serve as references. The three new enhancement approaches are a variance-stabilizing log transform, nonlinear filtering, and pixel averaging for speckle reduction. The effect of the enhancement step is tested by using the full processing chain, i.e., enhancement, detection and thresholding, to determine the number of detections and false alarms. Substituting different enhancement algorithms in the processing chain gives a precise measure of the performance of the enhancement stage. The test is performed using a sonar image database with images ranging from very simple to very complex. The result of the comparison indicates that the new enhancement approaches improve the detection performance.
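
    Two of the named enhancement steps, the variance-stabilizing log transform and nonlinear filtering, can be sketched as follows on a synthetic speckled image; this illustrates the general technique, not the paper's exact algorithms.

```python
# Sketch of speckle reduction: a log transform turns multiplicative
# speckle into additive noise, and a median filter (one simple nonlinear
# filter) suppresses it. The sonar-like image is synthetic.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(2)

# Smooth background with multiplicative Gamma-distributed speckle
background = np.outer(np.hanning(128), np.hanning(128)) + 0.1
speckle = rng.gamma(shape=1.0, scale=1.0, size=background.shape)
sonar = background * speckle

# Variance-stabilizing log transform
log_img = np.log1p(sonar)

# Nonlinear filtering step
enhanced = median_filter(log_img, size=3)

print("noise std before/after filtering:", log_img.std(), enhanced.std())
```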

  6. Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.

    Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K

    2010-03-21

    We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower for the Pareto-optimized solutions, and the Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
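
    As an illustration of the Pareto front concept applied to detector tuning (a generic sketch, not the paper's evolutionary algorithm), the snippet below extracts the non-dominated operating points from synthetic (sensitivity, false-positive rate) pairs.

```python
# Minimal Pareto front extraction over detector operating points: each
# candidate setting yields (sensitivity, FPs per scan); keep only the
# points no other point dominates. Candidate points are synthetic.
import numpy as np

rng = np.random.default_rng(3)
# Column 0: sensitivity (maximize); column 1: FP rate (minimize)
points = np.column_stack([rng.uniform(0.6, 1.0, 50), rng.uniform(0.5, 8.0, 50)])

def pareto_front(pts):
    """Return points not dominated by any other point."""
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(
            (pts[:, 0] >= p[0]) & (pts[:, 1] <= p[1])
            & ((pts[:, 0] > p[0]) | (pts[:, 1] < p[1]))
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

front = pareto_front(points)
print(f"{len(front)} non-dominated operating points out of {len(points)}")
```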

  7. Improved Conflict Detection for Graph Transformation with Attributes

    Géza Kulcsár

    2015-04-01

    Full Text Available In graph transformation, a conflict describes a situation where two alternative transformations cannot be arbitrarily serialized. When enriching graphs with attributes, existing conflict detection techniques typically report a conflict whenever at least one of two transformations manipulates a shared attribute. In this paper, we propose an improved, less conservative condition for static conflict detection of graph transformation with attributes by explicitly taking the semantics of the attribute operations into account. The proposed technique is based on symbolic graphs, which extend the traditional notion of graphs by logic formulas used for attribute handling. The approach is proven complete, i.e., any potential conflict is guaranteed to be detected.

  8. An Improved Saliency Detection Approach for Flying Apsaras in the Dunhuang Grotto Murals, China

    Zhong Chen

    2015-01-01

    Full Text Available Saliency can be described as the ability of an item to be detected from its background in any particular scene, and saliency detection aims to estimate the probable location of the salient objects. Because a saliency map computed from local contrast features can extract and highlight edge parts, including the painting lines of the Flying Apsaras, in this paper we propose an improved approach based on a frequency-tuned method for visual saliency detection of the Flying Apsaras in the Dunhuang Grotto Murals, China. This improved saliency detection approach comprises three important steps: (1) image color and gray channel decomposition; (2) gray feature value computation and color channel convolution; (3) visual saliency definition based on normalization of the previous visual saliency and a spatial attention function. Unlike existing approaches that rely on many complex image features, the proposed approach uses only local contrast and spatial attention information to simulate human visual attention. This yields a saliency map that is much more efficient to compute. Furthermore, experimental results on the dataset of Flying Apsaras in the Dunhuang Grotto Murals showed that the proposed visual saliency detection approach is very effective when compared with five other state-of-the-art approaches.
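
    The frequency-tuned baseline that the approach builds on can be sketched as follows (an Achanta-style saliency map on a synthetic image; the paper's spatial attention function is not reproduced here).

```python
# Frequency-tuned saliency sketch: blur the image, convert to Lab, and
# take each pixel's distance to the mean Lab color. Input is synthetic.
import cv2
import numpy as np

rng = np.random.default_rng(2)
bgr = np.full((128, 128, 3), 120, np.uint8)
bgr += rng.integers(0, 20, bgr.shape, dtype=np.uint8)  # background texture
cv2.circle(bgr, (90, 40), 12, (40, 60, 200), -1)       # salient colored figure

# Mild Gaussian blur suppresses noise and fine texture
blurred = cv2.GaussianBlur(bgr, (5, 5), 0)

# Work in the perceptually motivated Lab color space
lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2Lab).astype(np.float32)
mean_lab = lab.reshape(-1, 3).mean(axis=0)
saliency = np.linalg.norm(lab - mean_lab, axis=2)

# Normalize to [0, 1]; high values mark the probable salient object
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-9)
print("most salient pixel:", np.unravel_index(saliency.argmax(), saliency.shape))
```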

  9. Intrusion detection in cloud computing based attack patterns and risk assessment

    Ben Charhi Youssef

    2017-05-01

    Full Text Available This paper is an extension of work originally presented in SYSCO CONF. We extend our previous work by presenting the initial results of the implementation of intrusion detection based on risk assessment in cloud computing. The idea focuses on a novel approach for detecting cyber-attacks on the cloud environment by analyzing attack patterns using risk assessment methodologies. The aim of our solution is to combine evidence obtained from Intrusion Detection Systems (IDS) deployed in a cloud with risk assessment related to each attack pattern. Our approach presents a new qualitative solution for analyzing each symptom, indicator and vulnerability, and for analyzing the impact and likelihood of distributed and multi-step attacks directed at cloud environments. The implementation of this approach will reduce the number of false alerts and will improve the performance of the IDS.

  10. Improving Intrusion Detection System Based on Snort Rules for Network Probe Attacks Detection with Association Rules Technique of Data Mining

    Nattawat Khamphakdee

    2015-07-01

    Full Text Available The intrusion detection system (IDS) is an important network security tool for securing computer and network systems. It is able to detect and monitor network traffic data. Snort IDS is an open-source network security tool that can search for and match rules against network traffic data in order to detect attacks and generate alerts. However, Snort IDS can detect only known attacks. Therefore, we have proposed a procedure for improving the Snort IDS rules, based on the association rules data mining technique, for the detection of network probe attacks. We employed the MIT-DARPA 1999 data set for the experimental evaluation. Since the traffic data contain both normal and abnormal behavior patterns, the abnormal behavior data are detected by the Snort IDS. The experimental results showed that the proposed Snort IDS rules, based on data mining detection of network probe attacks, proved more efficient than the original Snort IDS rules, as well as the icmp.rules and icmp-info.rules of Snort IDS. The suitable parameters for the proposed Snort IDS rules are as follows: Min_sup set to 10%, Min_conf set to 100%, and eight variable attributes applied. As these more suitable parameters are applied, higher accuracy is achieved.
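
    As a toy illustration of how association rules with the reported thresholds (Min_sup = 10%, Min_conf = 100%) could be mined from connection records to suggest candidate IDS rules, consider the following sketch; the records and attributes are synthetic, not the MIT-DARPA data.

```python
# Toy single-pass Apriori-style rule mining over connection records.
# Each record is a set of attribute=value items; rules meeting both the
# support and confidence thresholds are printed.
from itertools import combinations
from collections import Counter

records = [
    {"proto=icmp", "flag=echo", "dst=host1", "label=probe"},
    {"proto=icmp", "flag=echo", "dst=host2", "label=probe"},
    {"proto=tcp", "flag=syn", "dst=host1", "label=normal"},
    {"proto=icmp", "flag=echo", "dst=host3", "label=probe"},
]

min_sup, min_conf = 0.10, 1.00
n = len(records)

# Count supports of single items and item pairs
support = Counter()
for r in records:
    for k in (1, 2):
        for itemset in combinations(sorted(r), k):
            support[itemset] += 1

# Emit rules {antecedent} -> {consequent} meeting both thresholds
for (a, b), cnt in ((s, c) for s, c in support.items() if len(s) == 2):
    if cnt / n >= min_sup and cnt / support[(a,)] >= min_conf:
        print(f"{a} -> {b}  (sup={cnt/n:.2f}, conf={cnt/support[(a,)]:.2f})")
```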

  11. Improved materials management through client/server computing

    Brooks, D.; Neilsen, E.; Reagan, R.; Simmons, D.

    1992-01-01

    This paper reports that materials management and procurement impact every organization within an electric utility, from power generation to customer service. An efficient materials management and procurement system can help improve productivity and minimize operating costs. It is no longer sufficient to simply automate materials management using inventory control systems. Smart companies are building centralized data warehouses and using the client/server style of computing to provide real-time data access. This paper describes how Alabama Power Company, Southern Company Services and Digital Equipment Corporation transformed two existing applications, a purchase order application within DEC's ALL-IN-1 environment and a materials management application within an IBM CICS environment, into a data warehouse - client/server application. An application server is used to overcome incompatibilities between computing environments and provide easy, real-time access to information residing in multi-vendor environments

  12. Computer-Aided Detection of Polyps in CT Colonography Using Logistic Regression

    Van Ravesteijn, V.F.; Van Wijk, C.; Vos, F.M.; Truyen, R.; Peters, J.F.; Stoker, J.; Van Vliet, L.J.

    2010-01-01

    We present a computer-aided detection (CAD) system for computed tomography colonography that orders the polyps according to clinical relevance. The CAD system consists of two steps: candidate detection and supervised classification. The characteristics of the detection step lead to specific choices
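
    A minimal sketch of the ordering idea, assuming a logistic regression classifier over hypothetical per-candidate features, is shown below; candidates are ranked by predicted probability of being a true polyp.

```python
# Hedged sketch: score CAD candidates with logistic regression and sort
# them by predicted probability. Features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# Per-candidate features, e.g. [size_mm, shape_index, wall_thickness]
X_train = rng.normal(size=(400, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1]
           + rng.normal(size=400) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

candidates = rng.normal(size=(10, 3))
scores = model.predict_proba(candidates)[:, 1]  # P(true polyp)

# Present candidates to the reader in order of decreasing relevance
for rank, idx in enumerate(np.argsort(-scores), start=1):
    print(f"rank {rank}: candidate {idx} score {scores[idx]:.3f}")
```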

  13. Computer-aided Detection of Lung Cancer on Chest Radiographs: Effect on Observer Performance

    de Hoop, Bartjan; de Boo, Diederik W.; Gietema, Hester A.; van Hoorn, Frans; Mearadji, Banafsche; Schijf, Laura; van Ginneken, Bram; Prokop, Mathias; Schaefer-Prokop, Cornelia

    2010-01-01

    Purpose: To assess how computer-aided detection (CAD) affects reader performance in detecting early lung cancer on chest radiographs. Materials and Methods: In this ethics committee-approved study, 46 individuals with 49 computed tomographically (CT)-detected and histologically proved lung cancers

  14. Detection of simulated pulmonary nodules by single-exposure dual-energy computed radiography of the chest: effect of a computer-aided diagnosis system (Part 2)

    Kido, Shoji; Kuriyama, Keiko; Kuroda, Chikazumi; Nakamura, Hironobu; Ito, Wataru; Shimura, Kazuo; Kato, Hisatoyo

    2002-01-01

    Objective: To evaluate the performance of a computer-aided diagnosis (CAD) scheme for the detection of pulmonary nodules (PNs) in single-exposure dual-energy subtraction computed radiography (CR) images of the chest, and to evaluate the effect of this CAD scheme on radiologists' detection performance. Methods and material: We compared the detectability achieved by the CAD scheme with that achieved by 12 observers, using conventional CR (C-CR) and bone-subtracted CR (BS-CR) images of 25 chest phantoms with a low-contrast nylon nodule. Results: For both the CAD scheme and the observers, detectability on BS-CR images was superior to that on C-CR images (P<0.005). The detection performance of the CAD scheme was equal to that of the observers, but the nodules detected by the CAD did not necessarily coincide with those detected by the observers. Thus, if observers can use the results of the CAD system as a 'second opinion', their detection performance increases. Conclusion: The CAD system for detection of PNs in the single-exposure dual-energy subtraction method is promising for improving radiologists' detection of PNs

  15. Computer-aided detection (CAD) in mammography: Does it help the junior or the senior radiologist?

    Balleyguier, Corinne; Kinkel, Karen; Fermanian, Jacques; Malan, Sebastien; Djen, Germaine; Taourel, Patrice; Helenon, Olivier

    2005-01-01

    Objectives: To evaluate the impact of a computer-aided detection (CAD) system on the ability of a junior and a senior radiologist to detect breast cancers on mammograms, and to determine the potential of CAD as a teaching tool in mammography. Methods: One hundred biopsy-proven cancers and 100 normal mammograms were randomly analyzed by a CAD system, and the sensitivity (Se) and specificity (Sp) of the CAD system were calculated. In the second phase, to simulate daily practice, 110 mammograms (97 normal or with benign lesions, and 13 cancers) were examined independently by a junior and a senior radiologist, with and without CAD. Interpretations were standardized according to the BI-RADS classification. Sensitivity, specificity, and positive and negative predictive values (PPV, NPV) were calculated for each session. Results: For the senior radiologist, Se slightly improved from 76.9 to 84.6% after CAD analysis (NS) (one case of clustered microcalcifications overlooked by the senior radiologist was detected by CAD); Sp, PPV and NPV did not change significantly. For the junior radiologist, Se improved significantly, from 61.9 to 84.6%; three cancers overlooked by the junior radiologist were detected by CAD, and Sp was unchanged. Conclusion: CAD mammography proved more useful for the junior than for the senior radiologist, improving sensitivity. The CAD system may represent a useful educational tool for mammography

  16. Standalone computer-aided detection compared to radiologists' performance for the detection of mammographic masses

    Hupse, Rianne; Samulski, Maurice; Imhof-Tas, Mechli W.; Karssemeijer, Nico; Lobbes, Marc; Boetes, Carla; Heeten, Ard den; Beijerinck, David; Pijnappel, Ruud

    2013-01-01

    We developed a computer-aided detection (CAD) system aimed at decision support for detection of malignant masses and architectural distortions in mammograms. The effect of this system on radiologists' performance depends strongly on its standalone performance. The purpose of this study was to compare the standalone performance of this CAD system to that of radiologists. In a retrospective study, nine certified screening radiologists and three residents read 200 digital screening mammograms without the use of CAD. Performances of the individual readers and of CAD were computed as the true-positive fraction (TPF) at a false-positive fraction of 0.05 and 0.2. Differences were analysed using an independent one-sample t-test. At a false-positive fraction of 0.05, the performance of CAD (TPF = 0.487) was similar to that of the certified screening radiologists (TPF = 0.518, P = 0.17). At a false-positive fraction of 0.2, CAD performance (TPF = 0.620) was significantly lower than the radiologist performance (TPF = 0.736, P <0.001). Compared to the residents, CAD performance was similar for all false-positive fractions. The sensitivity of CAD at a high specificity was comparable to that of human readers. These results show potential for CAD to be used as an independent reader in breast cancer screening. (orig.)

  17. An improved thermal model for the computer code NAIAD

    Rainbow, M.T.

    1982-12-01

    An improved thermal model, based on the concept of heat slabs, has been incorporated as an option into the thermal hydraulic computer code NAIAD. The heat slabs are one-dimensional thermal conduction models with temperature-independent thermal properties, and may be internal and/or external to the fluid. Thermal energy may be added to or removed from the fluid via heat slabs, and passed across the external boundary of external heat slabs at a rate which is a linear function of the external surface temperatures. The code input for the new option has been restructured to simplify data preparation. A full description of the current input requirements is presented
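
    A minimal sketch of the heat-slab idea, assuming a one-dimensional slab with constant properties and an explicit finite-difference scheme (illustrative only, not the NAIAD implementation), follows.

```python
# Illustrative 1D transient conduction in a heat slab with constant
# properties, explicit (FTCS) scheme. Geometry, properties, and boundary
# temperatures are hypothetical.
import numpy as np

L = 0.02           # slab thickness (m)
nx = 41            # grid points
alpha = 1.0e-5     # thermal diffusivity (m^2/s)
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha   # satisfies the explicit stability limit (<= 0.5)

T = np.full(nx, 300.0)     # initial slab temperature (K)
T_fluid, T_outer = 550.0, 300.0

for _ in range(2000):
    T_new = T.copy()
    # Interior nodes: explicit update of the conduction equation
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # Boundaries held at the fluid and external temperatures (Dirichlet)
    T_new[0], T_new[-1] = T_fluid, T_outer
    T = T_new

print("mid-slab temperature after transient: %.1f K" % T[nx // 2])
```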

  18. Noise and contrast detection in computed tomography images

    Faulkner, K.; Moores, B.M.

    1984-01-01

    A discrete representation of the reconstruction process is used in an analysis of noise in computed tomography (CT) images. This model is consistent with the method of data collection in actual machines. An expression is derived which predicts the variance of the measured linear attenuation coefficient of a single pixel in an image. The dependence of the variance on various CT scanner design parameters, such as pixel size, slice width, scan time and number of detectors, is then described, and the variation of noise with sampling area is theoretically explained. These predictions are in good agreement with a set of experimental measurements made on a range of CT scanners. The equivalent sampling aperture of the CT process is determined, and the effect of the reconstruction filter on the variance of the linear attenuation coefficient is also noted, in particular the choice of filter and its consequences for reconstructed images and noise behaviour. The theory has been extended to include contrast-detail behaviour, and these predictions compare favourably with experimental measurements. The theory predicts that image smoothing will have little effect on the contrast-detail detectability behaviour of reconstructed images. (author)

  19. Brain lesions in congenital nystagmus as detected by computed tomography

    Lo, Chin-Ying

    1982-01-01

    Computed tomography (CT) was performed in a series of 60 cases of congenital nystagmus. The type of nystagmus was pendular in 20 and jerky in 40 cases, and the age ranged from 3 months to 13 years. Abnormal CT findings of the central nervous system were detected in 31 cases (52%). There were 5 major CT findings: midline anomalies, cortical atrophy, ventricular dilatation, brain stem atrophy and low-density areas. The midline anomalies involved cavum septi pellucidi, cavum Vergae, cavum veli interpositi and partial agenesis of the corpus callosum. The abnormal CT findings were more prominent in the pendular type than in the jerky type. The incidence of congenital nystagmus and of positive CT findings was the same for first and second births. There was a history of abnormalities during the prenatal or perinatal period in 28 of the 60 cases (47%); this feature seemed to play a significant role in the occurrence of congenital nystagmus. The organic lesions in the central nervous system observed by CT should contribute to the elucidation of the pathomechanism of congenital nystagmus. (author)

  20. Improvement and implementation for Canny edge detection algorithm

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features and removes noise effectively. To reduce the sensitivity to noise in gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was implemented with the OpenCV 2.4.0 library in the Visual Studio 2010 environment, and experimental analysis shows that the improved algorithm detects edge details more effectively and with greater adaptability.
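
    The main ideas can be sketched in a few lines of OpenCV Python (an illustration, not the paper's implementation: the four-direction gradient templates are not reproduced, and the Otsu-derived dual thresholds use a common halving heuristic).

```python
# Sketch: bilateral (edge-preserving) smoothing instead of Gaussian,
# then Canny with dual thresholds derived from Otsu's method.
import cv2
import numpy as np

# Synthetic test image: a bright square on a noisy background
rng = np.random.default_rng(3)
gray = rng.normal(100, 10, (128, 128)).astype(np.uint8)
gray[40:90, 40:90] = 180

# Edge-preserving smoothing (replaces the traditional Gaussian filter)
smoothed = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)

# Otsu's method supplies an adaptive high threshold; a common heuristic
# sets the low threshold to half of it
otsu_thresh, _ = cv2.threshold(smoothed, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
edges = cv2.Canny(smoothed, threshold1=0.5 * otsu_thresh,
                  threshold2=otsu_thresh)

print("edge pixels found:", int((edges > 0).sum()))
```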

  1. Bayesian image reconstruction for improving detection performance of muon tomography.

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
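
    The paper derives prior-specific inverse quadratic and inverse cubic shrinkage functions; as a generic illustration of the shrink-after-ML-update structure, the sketch below substitutes a simple soft-threshold shrinkage on a toy linear model.

```python
# Generic illustration of "shrink an unregularized ML update" iterations.
# The soft threshold is a stand-in for the paper's derived shrinkage
# functions; the forward model and data are toy placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_meas, n_vox = 200, 50
A = rng.normal(size=(n_meas, n_vox))            # toy system matrix
x_true = np.zeros(n_vox)
x_true[[7, 23]] = 5.0                           # two "high-Z" voxels
b = A @ x_true + rng.normal(scale=0.5, size=n_meas)

def soft_shrink(v, lam):
    """Soft-threshold shrinkage toward zero (sparsifying prior)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

x = np.zeros(n_vox)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # gradient step size
for _ in range(300):
    # Unregularized (gradient-based, ML-style) update ...
    x_ml = x + step * A.T @ (b - A @ x)
    # ... followed by shrinkage to obtain the MAP-style update
    x = soft_shrink(x_ml, lam=0.01)

print("recovered high-Z voxels:", np.flatnonzero(x > 1.0))
```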

  2. Improving CMS data transfers among its distributed computing facilities

    Flix, J; Magini, N; Sartirana, A

    2011-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on their usage, customizing the topologies and improving their setup in order to keep CMS transferring data at the desired levels in a reliable and robust way.

  3. Improving robustness and computational efficiency using modern C++

    Paterno, M; Kowalkowski, J; Green, C

    2014-01-01

    For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.

  4. Improving computational efficiency of Monte Carlo simulations with variance reduction

    Turner, A.; Davis, A.

    2013-01-01

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)

  5. Use of computer codes to improve nuclear power plant operation

    Misak, J.; Polak, V.; Filo, J.; Gatas, J.

    1985-01-01

    For safety and economic reasons, the scope for carrying out experiments on operational nuclear power plants (NPPs) is very limited and any changes in technical equipment and operating parameters or conditions have to be supported by theoretical calculations. In the Nuclear Power Plant Scientific Research Institute (NIIAEhS), computer codes are systematically used to analyse actual operating events, assess safety aspects of changes in equipment and operating conditions, optimize the conditions, preparation and analysis of NPP startup trials and review and amend operating instructions. In addition, calculation codes are gradually being introduced into power plant computer systems to perform real time processing of the parameters being measured. The paper describes a number of specific examples of the use of calculation codes for the thermohydraulic analysis of operating and accident conditions aimed at improving the operation of WWER-440 units at the Jaslovske Bohunice V-1 and V-2 nuclear power plants. These examples confirm that computer calculations are an effective way of solving operating problems and of further increasing the level of safety and economic efficiency of NPP operation. (author)

  6. Computer-aided detection of small pulmonary nodules in multidetector spiral computed tomography (MSCT) in children

    Honnef, D.; Behrendt, F.F.; Hohl, C.; Mahnken, A.H.; Guenther, R.W.; Das, M.; Mertens, R.; Stanzel, S.

    2008-01-01

    Purpose: Retrospective evaluation of computer-aided detection (CAD) software for the automated detection (LungCAD, Siemens Medical Solutions, Forchheim, Germany) and volumetry (LungCARE) of pulmonary nodules in dose-reduced pediatric MDCT. Materials and Methods: 30 scans of 24 children (10.4±5.9 years, 13 girls, 11 boys, 39.7±29.3 kg body weight) were performed on a 16-MDCT for tumor staging (n=18), inflammation (n=9) and other indications (n=3). The tube voltage was 120 kVp and the effective mAs were adapted to body weight; slice thickness was 2 mm with an increment of 1 mm. A pediatric radiologist (U1), a CAD expert (U2) and an inexperienced radiologist (U3) independently analyzed the lung window images without and with the CAD as a second reader. The consensus of U1 and U2 served as the reference standard. Results: Five examinations had to be excluded from the study due to other underlying lung disease. A total of 24 pulmonary nodules were found in all data sets, with diameters of 0.35 mm to 3.81 mm (mean 1.7±0.85 mm). The sensitivities were as follows: U1, 95.8% (100% with CAD); U2, 91.7%; U3, 66.7%. U2 and U3 did not detect further nodules with CAD. The sensitivity of CAD alone was 41.7%, with 0.32 false-positive findings per examination. Interobserver agreement between U1/U2 regarding nodule detection was good with CAD (k=0.6500) and very good without CAD (k=0.8727); for the other pairs (U1/U3 and U2/U3, with and without CAD), it was weak (k=0.0667-0.1884). Depending on the measured value (axial measurement, volume), there is a significant correlation (p=0.0026-0.0432) between nodule size and CAD detection: undetected pulmonary nodules (mean 1.35 mm; range 0.35-2.61 mm) were smaller than the detected ones (mean 2.19 mm; range 1.35-3.81 mm). No significant correlation was found between CAD findings and patient age (p=0.9263), body weight (p=0.9271), nodule location (subpleural, intraparenchymal; p=1.0) or noise/SNR. (orig.)

  7. South Ukraine NPP: Safety improvements through Plant Computer upgrade

    Brenman, O.; Chernyshov, M. A.; Denning, R. S.; Kolesov, S. A.; Balakan, H. H.; Bilyk, B. I.; Kuznetsov, V. I.; Trosman, G.

    2006-01-01

    This paper summarizes some results of the Plant Computer upgrade at Units 2 and 3 of the South Ukraine Nuclear Power Plant (NPP). A Plant Computer, also called the Computer Information System (CIS), is one of the key safety-related systems at VVER-1000 nuclear plants. The main function of the CIS is information support for the plant operators during normal and emergency operational modes. Before this upgrade, South Ukraine NPP operated out-of-date and obsolete systems. The upgrade project was funded by the U.S. DOE in the framework of the International Nuclear Safety Program (INSP). The most efficient way to improve the quality and reliability of information provided to the plant operator is to upgrade the Human-System Interface (HSI), which is the Upper Level (UL) CIS; upgrading the CIS data-acquisition system (DAS), which is the Lower Level (LL) CIS, would have less effect on unit safety. Generally speaking, the lifetime of the LL CIS is much longer than that of the UL CIS. Unlike Plant Computers at Western-designed plants, the functionality of the VVER-1000 CISs includes a control function (Centralized Protection Testing) and a number of plant equipment monitoring functions, for example, Protection and Interlock Monitoring and Turbo-Generator Temperature Monitoring. The new system is consistent with a historical migration of the format by which information is presented to the operator, away from the traditional graphic displays, for example, Piping and Instrument Diagrams (P and ID's), toward Integral Data displays. The cognitive approach to information presentation is currently limited by some licensing issues, but is adopted to a greater degree with each new system. The paper also provides some lessons learned on the management of the international team. (authors)

  8. Computer aided detection of ureteral stones in thin slice computed tomography volumes using Convolutional Neural Networks.

    Längkvist, Martin; Jendeberg, Johan; Thunberg, Per; Loutfi, Amy; Lidén, Mats

    2018-06-01

    Computed tomography (CT) is the method of choice for diagnosing ureteral stones - kidney stones that obstruct the ureter. The purpose of this study is to develop a computer aided detection (CAD) algorithm for identifying a ureteral stone in thin slice CT volumes. The challenge in CAD for urinary stones lies in the similarity in shape and intensity between stones and non-stone structures, and in how to efficiently deal with large high-resolution CT volumes. We address these challenges by using a Convolutional Neural Network (CNN) that works directly on the high-resolution CT volumes. The method is evaluated on a large database of 465 clinically acquired high-resolution CT volumes of the urinary tract, with labeling of ureteral stones performed by a radiologist. The best model, using 2.5D input data and anatomical information, achieved a sensitivity of 100% and an average of 2.68 false positives per patient on a test set of 88 scans.
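
    As a hedged sketch of the "2.5D" input idea, three orthogonal slices through a candidate voxel can be stacked as channels of a small 2D network; the architecture below is illustrative, not the paper's model.

```python
# Illustrative 2.5D CNN classifier: axial, coronal, and sagittal slices
# through a candidate location are stacked as the 3 input channels of a
# 2D network. Sizes and layers are hypothetical.
import torch
import torch.nn as nn

class Stone25DNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)  # stone / not stone

    def forward(self, x):            # x: (batch, 3, 32, 32)
        f = self.features(x)
        return self.classifier(f.flatten(1))

# A candidate patch: three orthogonal 32x32 slices stacked as channels
patch = torch.randn(1, 3, 32, 32)
logits = Stone25DNet()(patch)
print(logits.softmax(dim=1))  # P(background), P(stone)
```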

  9. Antibiotic Resistome: Improving Detection and Quantification Accuracy for Comparative Metagenomics.

    Elbehery, Ali H A; Aziz, Ramy K; Siam, Rania

    2016-04-01

    The unprecedented rise of life-threatening antibiotic resistance (AR), combined with the unparalleled advances in DNA sequencing of genomes and metagenomes, has pushed the need for in silico detection of the resistance potential of clinical and environmental metagenomic samples through the quantification of AR genes (i.e., genes conferring antibiotic resistance). Therefore, determining an optimal methodology to quantitatively and accurately assess AR genes in a given environment is pivotal. Here, we optimized and improved existing AR detection methodologies for metagenomic datasets to properly consider AR-generating mutations in antibiotic target genes. Through comparative metagenomic analysis of previously published AR gene abundance in three publicly available metagenomes, we illustrate how mutation-generated resistance genes are either falsely assigned or neglected, which alters the detection and quantitation of the antibiotic resistome. In addition, we inspected factors influencing the outcome of AR gene quantification using metagenome simulation experiments, and identified that genome size, AR gene length, total number of metagenomic reads and selected sequencing platforms had pronounced effects on the level of detected AR. In conclusion, our proposed improvements to the current methodologies for accurate AR detection and resistome assessment show reliable results when tested on real and simulated metagenomic datasets.

  10. Iterative reconstruction with boundary detection for carbon ion computed tomography

    Shrestha, Deepak; Qin, Nan; Zhang, You; Kalantari, Faraz; Niu, Shanzhou; Jia, Xun; Pompos, Arnold; Jiang, Steve; Wang, Jing

    2018-03-01

    In heavy ion radiation therapy, improving the accuracy of range prediction for the ions inside the patient's body has become essential. Accurate localization of the Bragg peak provides greater conformity to the tumor while sparing healthy tissues. We investigated the use of carbon ions directly for computed tomography (carbon CT) to create a relative stopping power map of a patient's body. The Geant4 toolkit was used to perform a Monte Carlo simulation of the carbon ion trajectories, to study their lateral and angular deflections and most likely paths using a water phantom. Geant4 was then used to create carbon CT projections of a contrast phantom and a spatial resolution phantom with a cone beam of 430 MeV/u carbon ions. The contrast phantom consisted of cranial bone, lung material, and PMMA inserts, while the spatial resolution phantom contained bone and lung material inserts with line pair (lp) densities ranging from 1.67 lp cm-1 through 5 lp cm-1. First, the positions of the carbon ions on the rear and front trackers were used for an approximate reconstruction of the phantom. The phantom boundary was extracted from this approximate reconstruction by using the position as well as angle information from the four tracking detectors, yielding the entry and exit locations of the individual ions on the phantom surface. Subsequent reconstruction was performed by the iterative algebraic reconstruction technique coupled with total variation minimization (ART-TV), assuming straight-line trajectories for the ions inside the phantom. The influence of the number of projections was studied with reconstructions from five different sets of projections: 15, 30, 45, 60 and 90. Additionally, the effect of the number of ions on image quality was investigated by reducing the number of ions per projection while keeping the total number of projections at 60. An estimation of carbon ion range using the carbon CT image resulted in improved range prediction compared to the range calculated using a
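
    The ART-TV reconstruction alternates ray-consistency updates with total-variation reduction; the sketch below illustrates that structure on a toy system (not the carbon CT geometry).

```python
# Toy ART-TV loop: algebraic reconstruction updates that enforce ray-sum
# consistency, alternated with a few gradient steps that reduce a
# smoothed total variation. System matrix and data are placeholders.
import numpy as np

rng = np.random.default_rng(6)
n_rays, n_pix = 300, 16 * 16
A = (rng.random((n_rays, n_pix)) < 0.05).astype(float)  # toy ray traces
x_true = np.zeros(n_pix)
x_true[100:140] = 1.0                                    # toy object
b = A @ x_true

def tv_grad(img2d, eps=1e-8):
    """Approximate gradient of a smoothed isotropic total variation."""
    gx = np.diff(img2d, axis=1, append=img2d[:, -1:])
    gy = np.diff(img2d, axis=0, append=img2d[-1:, :])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    div_x = np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1])
    div_y = np.diff(gy / mag, axis=0, prepend=(gy / mag)[:, :1])
    return -(div_x + div_y)

x = np.zeros(n_pix)
for _ in range(20):
    # ART pass: project x onto each ray's hyperplane in turn
    for i in range(n_rays):
        ai = A[i]
        nrm = ai @ ai
        if nrm > 0:
            x += (b[i] - ai @ x) / nrm * ai
    # TV pass: a few small gradient steps, then a positivity constraint
    for _ in range(5):
        x -= 0.1 * tv_grad(x.reshape(16, 16)).ravel()
    x = np.clip(x, 0, None)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```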

  11. Role of Computer Aided Diagnosis (CAD) in the detection of pulmonary nodules on 64-row multi-detector computed tomography

    K Prakashini

    2016-01-01

    Full Text Available Aims and Objectives: To determine the overall performance of an existing CAD algorithm with thin-section computed tomography (CT) in the detection of pulmonary nodules, and to evaluate detection sensitivity over a varying range of nodule density, size, and location. Materials and Methods: A cross-sectional prospective study was conducted on 20 patients with 322 suspected nodules who underwent diagnostic chest imaging using 64-row multi-detector CT. The examinations were evaluated on reconstructed images of 1.4 mm thickness and 0.7 mm interval. Detection of pulmonary nodules was assessed initially by a radiologist of 2 years' experience (RAD) and later by the CAD lung nodule software; CAD nodule candidates were then accepted or rejected accordingly. Detected nodules were classified based on their size, density, and location. The performance of the RAD and the CAD system was compared with the gold standard, that is, true nodules confirmed by the consensus of a senior RAD and CAD together. The overall sensitivity and false-positive (FP) rate of the CAD software were calculated. Observations and Results: Of the 322 suspected nodules, 221 were classified as true nodules on the consensus of the senior RAD and CAD together. Of the true nodules, 206 (93.2%) were detected by the RAD and 202 (91.4%) by the CAD. CAD and RAD together picked up more nodules than either CAD or RAD alone. Overall sensitivity for nodule detection with the CAD program was 91.4%, and FP detection per patient was 5.5%. The CAD showed comparatively higher sensitivity for nodules of size 4-10 mm (93.4%) and for nodules in hilar (100%) and central (96.5%) locations when compared to the RAD's performance. Conclusion: CAD performance was high in detecting pulmonary nodules, including small and low-density nodules. CAD, even with a relatively high FP rate, assists and improves the RAD's performance as a second reader, especially for nodules located in the central and hilar regions and for small nodules, while saving the RAD's time.

  12. 3D computer-aided detection for digital breast tomosynthesis: Comparison with 2D computer-aided detection for digital mammography in the detection of calcifications

    Chu, A Jung; Cho, Nariya; Chang, Jung Min; Kim, Won Hwa; Lee, Su Hyun; Song, Sung Eun; Shin, Sung Ui; Moon, Woo Kyung [Dept. of Radiology, Seoul National University College of Medicine, Seoul National University Hospital, Seoul (Korea, Republic of)

    2017-08-15

    To retrospectively evaluate the performance of 3D computer-aided detection (CAD) for digital breast tomosynthesis (DBT) in the detection of calcifications, in comparison with 2D CAD for digital mammography (DM). Between 2012 and 2013, both the 3D CAD and 2D CAD systems were retrospectively applied to a calcification data set including 69 calcifications (31 malignant and 38 benign) and a normal data set including 20 bilateral normal mammograms. Each data set consisted of paired DBT and DM images. Sensitivities for the detection of malignant calcifications were calculated from the calcification data set, and false-positive mark rates were calculated from the normal data set; they were compared between the two systems. Sensitivities of 3D CAD [100% (31/31) at levels 2, 1, and 0] were the same as those of the 2D CAD system [100% (31/31) at levels 2 and 1] (p = 1.0, respectively). The mean number of false-positive marks per view with 3D CAD was higher than that with 2D CAD at level 2 (0.52 marks ± 0.91 vs. 0.07 marks ± 0.26, p = 0.009). 3D CAD for DBT showed sensitivity equivalent to that of 2D CAD for DM in the detection of calcifications, albeit with a higher false-positive mark rate.

  13. Nanobarcoding for improved nanoparticle detection in nanomedical biodistribution studies

    Eustaquio, Trisha

    Determination of the fate of nanoparticles (NPs) in a biological system, or NP biodistribution, is critical in evaluating a NP formulation for nanomedicine. Unlike small-molecule drugs, NPs impose unique challenges in the design of appropriate biodistribution studies due to their small size and correspondingly weak detection signal. Current methods to determine NP biodistribution are greatly inadequate due to their limited detection thresholds. There is an overwhelming need for a sensitive and efficient imaging-based method that can (1) detect and measure small numbers of NPs of various types, ideally single NPs, (2) associate preferential NP uptake with histological cell type by preserving spatial information in samples, and (3) allow for relatively quick and accurate NP detection in in vitro (and possibly ex vivo) samples for comprehensive NP biodistribution studies. Herein, a novel method for improved NP detection is proposed, coined "nanobarcoding." Nanobarcoding utilizes a non-endogenous oligonucleotide, or "nanobarcode" (NB), conjugated to the NP surface to amplify the detection signal from a single NP via in situ polymerase chain reaction (ISPCR); this signal amplification facilitates rapid and precise detection of single NPs inside cells over large areas of sample, such that more sophisticated studies can be performed on the NP-positive subpopulation. Moreover, nanobarcoding has the potential to be applied to the detection of more than one NP type, to study the effects of physicochemical properties, targeting mechanisms, and route of entry on NP biodistribution. The nanobarcoding method was validated in vitro using NB-functionalized superparamagnetic iron oxide NPs (NB-SPIONs) as the model NP type for improved NP detection inside HeLa human cervical cancer cells, a cell line commonly used for ISPCR-mediated detection of human papilloma virus (HPV). Nanotoxicity effects of NB-SPIONs were also evaluated at the single-cell level using LEAP (Laser-Enabled Analysis

  14. Value of a Computer-aided Detection System Based on Chest Tomosynthesis Imaging for the Detection of Pulmonary Nodules.

    Yamada, Yoshitake; Shiomi, Eisuke; Hashimoto, Masahiro; Abe, Takayuki; Matsusako, Masaki; Saida, Yukihisa; Ogawa, Kenji

    2018-04-01

    Purpose To assess the value of a computer-aided detection (CAD) system for the detection of pulmonary nodules on chest tomosynthesis images. Materials and Methods Fifty patients with and 50 without pulmonary nodules underwent both chest tomosynthesis and multidetector computed tomography (CT) on the same day. Fifteen observers (five interns and residents, five chest radiologists, and five abdominal radiologists) independently evaluated the tomosynthesis images of the 100 patients for the presence of pulmonary nodules in a blinded and randomized manner, first without CAD, then with the inclusion of CAD marks. Multidetector CT images served as the reference standard. Free-response receiver operating characteristic analysis was used for the statistical analysis. Results The pooled diagnostic performance of the 15 observers was significantly better with CAD than without CAD (figure of merit [FOM], 0.74 vs 0.71, respectively; P = .02). The average true-positive fraction and false-positive rate over all cases with CAD were 0.56 and 0.26, respectively, whereas those without CAD were 0.47 and 0.20, respectively. Subanalysis showed that the diagnostic performance of interns and residents was significantly better with CAD than without CAD (FOM, 0.70 vs 0.62, respectively; P = .001), whereas for chest radiologists and abdominal radiologists the FOM values with CAD were greater but not significantly so: 0.80 versus 0.78 (P = .38) and 0.74 versus 0.73 (P = .65), respectively. Conclusion CAD significantly improved diagnostic performance in the detection of pulmonary nodules on chest tomosynthesis images for interns and residents, but provided minimal benefit for chest radiologists and abdominal radiologists.

  15. IMPROVING TACONITE PROCESSING PLANT EFFICIENCY BY COMPUTER SIMULATION, Final Report

    William M. Bond; Salih Ersayin

    2007-03-30

    This project involved industrial-scale testing of a mineral processing simulator to improve the efficiency of a taconite processing plant, namely the Minorca mine. The Concentrator Modeling Center at the Coleraine Minerals Research Laboratory, University of Minnesota Duluth, enhanced the capabilities of the available software, Usim Pac, by developing the mathematical models needed for accurate simulation of taconite plants; this project provided funding for the technology to prove itself in the industrial environment. As the first step, data representing existing plant conditions were collected by sampling and sample analysis. The data were then balanced and provided a basis for assessing the efficiency of individual devices and of the plant, and also for performing simulations aimed at improving plant efficiency. Performance evaluation served as a guide in developing alternative process strategies for more efficient production, and a large number of computer simulations were then performed to quantify the benefits and effects of implementing these alternative schemes. Modification of the makeup ball size was selected as the most feasible option for the target performance improvement, combined with replacement of the existing hydrocyclones with more efficient ones. After plant implementation of these modifications, plant sampling surveys were carried out to validate the findings of the simulation-based study. Plant data showed very good agreement with the simulated data, confirming the results of the simulation. After the implementation of the modifications in the plant, several upstream bottlenecks became visible. Despite these bottlenecks limiting full capacity, a 7% improvement in concentrator energy efficiency was obtained, and further improvements in energy efficiency are expected in the near future. The success of this project demonstrated the feasibility of a simulation-based approach. Currently, the Center provides simulation-based service to all the iron ore mining companies operating in northern

  16. Improving flow distribution in influent channels using computational fluid dynamics.

    Park, No-Suk; Yoon, Sukmin; Jeong, Woochang; Lee, Seungjae

    2016-10-01

    Although the flow distribution in the influent channel of a wastewater treatment plant, where the inflow is split among the treatment processes, greatly affects the efficiency of the process, and although a weir is the typical structure for flow distribution, to the authors' knowledge there is a paucity of research on flow distribution in an open channel with a weir. In this study, the influent channel of a real-scale wastewater treatment plant was used, installing a suppressed rectangular weir with a horizontal crest crossing the full channel width. The flow distribution in the influent channel was analyzed using a validated computational fluid dynamics model to investigate (1) the comparison of single-phase and two-phase simulation, (2) the improved design procedure for the prototype channel, and (3) the effect of the inflow rate on flow distribution. The results show that two-phase simulation is more reliable due to its description of the free-surface fluctuations. To improve flow distribution, preventing short-circuit flow should be considered first, and differences in kinetic energy with the inflow rate make the flow distribution trends differ. The authors believe that this case study is helpful for improving flow distribution in an influent channel.

  17. Service Outsourcing Character Oriented Privacy Conflict Detection Method in Cloud Computing

    Changbo Ke

    2014-01-01

    Full Text Available Cloud computing has provided services for users as a software paradigm. However, it is difficult to ensure the security of private information because of the opening, virtualization, and service outsourcing features of the cloud; therefore, how to protect user privacy information has become a research focus. In this paper, firstly, we model the service privacy policy and the user privacy preference with description logic. Secondly, we use the Pellet reasoner to verify consistency and satisfiability, so as to detect privacy conflicts between services and the user. Thirdly, we present an algorithm for detecting privacy conflicts in the process of cloud service composition and prove the correctness and feasibility of this method by case study and experimental analysis. Our method can reduce the risk of sensitive user privacy information being illegally used and propagated by outsourcing services. In the meantime, the method avoids exceptions caused by privacy conflicts during service composition, and improves the trust degree of cloud service providers.

  18. Real-Time Pore Pressure Detection: Indicators and Improved Methods

    Jincai Zhang

    2017-01-01

    Full Text Available High uncertainties may exist in predrill pore pressure prediction in new prospects and deepwater subsalt wells; therefore, real-time pore pressure detection is highly needed to reduce drilling risks. The methods for pore pressure detection (the resistivity, sonic, and corrected d-exponent methods) are improved using depth-dependent normal compaction equations to adapt them to the requirements of real-time monitoring. A new method is proposed to calculate pore pressure from the connection gas or elevated background gas, which can be used for real-time pore pressure detection. Pore pressure detection using logging-while-drilling, measurement-while-drilling, and mud logging data is also implemented and evaluated. Abnormal pore pressure indicators from well logs, mud logs, and wellbore instability events are identified and analyzed to interpret abnormal pore pressures for guiding real-time drilling decisions. Principles for identifying abnormal pressure indicators are proposed to improve real-time pore pressure monitoring.
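
    As a hedged sketch of the resistivity branch of such detection, the snippet below applies an Eaton-style equation with a depth-dependent normal compaction trend, Rn(Z) = R0*exp(b*Z); all parameter values are illustrative, not the paper's calibration.

```python
# Eaton-style pore pressure gradient from resistivity, with a
# depth-dependent normal compaction trend:
#   Rn(Z) = R0 * exp(b * Z)
#   Pp = OBG - (OBG - Pn) * (R / Rn)^n
# All parameters below are illustrative placeholders.
import numpy as np

def pore_pressure_gradient(depth_m, r_log, obg=1.0, pn=0.465,
                           r0=0.8, b=0.0004, n=1.2):
    """Pore pressure gradient (psi/ft) from measured resistivity r_log."""
    rn = r0 * np.exp(b * depth_m)            # normal compaction trend
    return obg - (obg - pn) * (r_log / rn) ** n

depths = np.array([2000.0, 2500.0, 3000.0])   # m
r_meas = np.array([1.9, 1.7, 1.4])            # ohm-m; flattening suggests
                                              # overpressure at depth
pp = pore_pressure_gradient(depths, r_meas)
for z, p in zip(depths, pp):
    print(f"{z:6.0f} m : Pp gradient ~ {p:.3f} psi/ft")
```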

  19. Computer-aided-detection marker value and breast density in the detection of invasive lobular carcinoma

    Destounis, Stamatia; Hanson, Sarah [The Elizabeth Wende Breast Clinic, Rochester, NY (United States); Roehrig, Jimmy [R2/Hologic, Inc., Santa Clara, CA (United States)

    2007-08-15

    Invasive Lobular Carcinoma (ILC) is frequently a mammographic and diagnostic dilemma; thus any additional information that CAD (Computer-Aided Detection) systems can give radiologists may be helpful. The aim of our study was to evaluate the role of CAD numeric values as indicators of malignancy and the effect of breast density on the diagnosis of ILC. Eighty consecutive biopsy-proven ILC cases with CAD (ImageChecker®, Hologic/R2, Santa Clara, CA, versions 2.3, 3.1, 3.2, 5.0, 5.2) diagnosed between June 2002 and December 2004 were retrospectively reviewed. Data included: BI-RADS® breast density, whether CAD marked the cancer in the diagnosis year or in prior years, and lesion type. Study mammograms underwent additional CAD scans (ImageChecker® V5.3, V8.0, V8.1) to obtain a numeric value associated with each marker; low values represent increasingly suspicious features. CAD correctly marked 65% (52/80) of ILC cases; detection was found to decrease with increased breast density. Numeric values of CAD marks at sites of carcinoma showed a median score of 171 (range 0-1121). The CAD marker may potentially be used as an additional indicator of suspicious lesion features in all breast densities and a higher likelihood that an area on the mammogram requires further investigation. (orig.)

  1. Improving PET spatial resolution and detectability for prostate cancer imaging

    Bal, H; Guerin, L; Casey, M E; Conti, M; Eriksson, L; Michel, C; Fanti, S; Pettinato, C; Adler, S; Choyke, P

    2014-01-01

    Prostate cancer, one of the most common forms of cancer among men, can benefit from recent improvements in positron emission tomography (PET) technology. In particular, better spatial resolution, lower noise and higher detectability of small lesions could be greatly beneficial for early diagnosis and could provide strong support for guiding biopsy and surgery. In this article, the impact of improved PET instrumentation with superior spatial resolution and high sensitivity is discussed, together with the latest developments in PET technology: resolution recovery and time-of-flight reconstruction. Using simulated cancer lesions inserted in clinical PET images obtained with conventional protocols, we show that visual identification of the lesions and detectability via numerical observers can already be improved using state-of-the-art PET reconstruction methods. This was achieved using both resolution recovery and time-of-flight reconstruction, and a high-resolution image with 2 mm pixel size. Channelized Hotelling numerical observers showed an increase in the area under the LROC curve from 0.52 to 0.58. In addition, a relationship between the simulated input activity and the area under the LROC curve showed that the minimum detectable activity was reduced by more than 23%. (paper)
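    The channelized Hotelling observer mentioned above can be sketched in a few lines; the random channels and synthetic patches below are stand-ins (real studies use, e.g., difference-of-Gaussians channels and reconstructed images).

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: flattened image patches with and without a lesion (hypothetical).
    n, npix = 200, 64 * 64
    background = rng.normal(0.0, 1.0, (n, npix))
    signal_img = np.zeros(npix); signal_img[2016:2080] = 0.5      # crude "lesion"
    with_lesion = background + signal_img

    channels = rng.normal(0.0, 1.0, (npix, 8))                    # stand-in channel set
    v_bg, v_sig = background @ channels, with_lesion @ channels   # channelized data

    # Hotelling template in channel space: w = K^-1 (mean_signal - mean_background)
    k = np.cov(np.vstack([v_bg, v_sig]).T)
    w = np.linalg.solve(k, v_sig.mean(0) - v_bg.mean(0))

    t_bg, t_sig = v_bg @ w, v_sig @ w                             # observer ratings
    auc = (t_sig[:, None] > t_bg[None, :]).mean()                 # Mann-Whitney AUC
    print(f"CHO AUC on toy data: {auc:.2f}")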

  2. Improving coal flotation recovery using computational fluid dynamics

    Peter Koh [CSIRO Minerals (Australia)

    2009-06-15

    This work involves using the latest advances in computational fluid dynamics (CFD) to increase understanding of the hydrodynamics in coal flotation and to identify any opportunities to improve design and operation of both the Microcel column and Jameson cell. The CSIRO CFD model incorporates micro-processes from cell hydrodynamics that affect particle-bubble attachments and detachments. CFD simulation results include the liquid velocities, turbulent dissipation rates, gas hold-up, particle-bubble attachment rates and detachment rates. This work has demonstrated that CFD modelling is a cost effective means of developing an understanding of particle-bubble attachments and detachments, and can be used to identify and test potential cell or process modifications.

  3. A collaborative brain-computer interface for improving human performance.

    Yijun Wang

    Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been studied since the 1970s. Currently, the main focus of BCI research lies in clinical use, which aims to provide a new communication channel to patients with motor disabilities to improve their quality of life. However, BCI technology can also be used to improve human performance for normal healthy users. Although this application has been proposed for a long time, little progress has been made in real-world practice due to the technical limits of EEG. To overcome the bottleneck of low single-user BCI performance, this study proposes a collaborative paradigm to improve overall BCI performance by integrating information from multiple users. To test the feasibility of a collaborative BCI, this study quantitatively compares the classification accuracies of collaborative and single-user BCIs applied to EEG data collected from 20 subjects in a movement-planning experiment. This study also explores three different methods for fusing and analyzing EEG data from multiple subjects: (1) event-related potential (ERP) averaging, (2) feature concatenating, and (3) voting. In a demonstration system using the voting method, the classification accuracy of predicting movement directions (reaching left vs. reaching right) was enhanced substantially from 66% to 80%, 88%, 93%, and 95% as the number of subjects increased from 1 to 5, 10, 15, and 20, respectively. Furthermore, the decision on reaching direction could be made around 100-250 ms earlier than the subject's actual motor response by decoding the ERP activities arising mainly from the posterior parietal cortex (PPC), which are related to the processing of visuomotor transmission. Taken together, these results suggest that a collaborative BCI can effectively fuse the brain activities of a group of people to improve the overall performance of natural human behavior.
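    The three fusion strategies can be illustrated on synthetic features; the nearest-class-mean classifier and the data below are simplifications, not the study's actual pipeline.

    import numpy as np

    rng = np.random.default_rng(1)
    n_subjects, n_trials, n_feat = 5, 100, 32

    # Synthetic single-trial "EEG features" per subject; class 1 has a small offset.
    y = rng.integers(0, 2, n_trials)
    X = [rng.normal(0, 1, (n_trials, n_feat)) + 0.3 * y[:, None] for _ in range(n_subjects)]

    def nearest_mean_predict(train_X, train_y, test_X):
        """Nearest-class-mean stand-in for the paper's classifier."""
        m0, m1 = train_X[train_y == 0].mean(0), train_X[train_y == 1].mean(0)
        d0 = ((test_X - m0) ** 2).sum(1); d1 = ((test_X - m1) ** 2).sum(1)
        return (d1 < d0).astype(int)

    tr, te = slice(0, 80), slice(80, None)

    # (1) ERP-style averaging: average features across subjects, then classify.
    Xavg = np.mean(X, axis=0)
    p_avg = nearest_mean_predict(Xavg[tr], y[tr], Xavg[te])

    # (2) Feature concatenation: stack all subjects' features into one vector.
    Xcat = np.hstack(X)
    p_cat = nearest_mean_predict(Xcat[tr], y[tr], Xcat[te])

    # (3) Voting: classify each subject separately, then take the majority vote.
    votes = np.array([nearest_mean_predict(Xs[tr], y[tr], Xs[te]) for Xs in X])
    p_vote = (votes.mean(0) > 0.5).astype(int)

    for name, p in [("averaging", p_avg), ("concatenation", p_cat), ("voting", p_vote)]:
        print(name, "accuracy:", (p == y[te]).mean())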

  4. Computer-aided detection of brain metastasis on 3D MR imaging: Observer performance study.

    Leonard Sunwoo

    To assess the effect of computer-aided detection (CAD) of brain metastasis (BM) on radiologists' diagnostic performance in interpreting three-dimensional brain magnetic resonance (MR) imaging, using follow-up imaging and consensus as the reference standard. The institutional review board approved this retrospective study. The study cohort consisted of 110 consecutive patients with BM and 30 patients without BM. The training data set included MR images of 80 patients with 450 BM nodules. The test set included MR images of 30 patients with 134 BM nodules and 30 patients without BM. We developed a CAD system for BM detection using template-matching and K-means clustering algorithms for candidate detection and an artificial neural network for false-positive reduction. Four reviewers (two neuroradiologists and two radiology residents) interpreted the test set images before and after the use of CAD in a sequential manner. The sensitivity, false positives (FP) per case, and reading time were analyzed. A jackknife free-response receiver operating characteristic (JAFROC) method was used to determine the improvement in diagnostic accuracy. The sensitivity of CAD was 87.3% with an FP per case of 302.4. CAD significantly improved the diagnostic performance of the four reviewers, with a figure-of-merit (FOM) of 0.874 (without CAD) vs. 0.898 (with CAD) according to JAFROC analysis (p < 0.01). Statistically significant improvement was noted only for less-experienced reviewers (FOM without vs. with CAD, 0.834 vs. 0.877, p < 0.01). The additional time required to review the CAD results was approximately 72 sec (40% of the total review time). CAD as a second reader helps radiologists improve their diagnostic performance in the detection of BM on MR imaging, particularly for less-experienced reviewers.

  5. Reproducibility of computer-aided detection system in digital mammograms

    Kim, Seung Ja; Cho, Nariya; Cha, Joo Hee; Chung, Hye Kyung; Lee, Sin Ho; Cho, Kyung Soo; Kim, Sun Mi; Moon, Woo Kyung

    2005-01-01

    To evaluate the reproducibility of the computer-aided detection (CAD) system for digital mammograms. We applied the CAD system (ImageChecker M1000-DM, version 3.1; R2 Technology) to full-field digital mammograms. These mammograms were taken twice at an interval of 10-45 days (mean: 25 days) for 34 preoperative patients (breast cancer n=27, benign disease n=7, age range: 20-66 years, mean age: 47.9 years). On the mammograms, lesions were visible in 19 patients, depicted as 15 masses and 12 calcification clusters. We analyzed the sensitivity, the false-positive rate (FPR) and the reproducibility of the CAD marks. The broader sensitivities of the CAD system were 80% (12 of 15) and 67% (10 of 15) for masses, and 100% (12 of 12) for calcification clusters. The strict sensitivities were 50% (15 of 30) and 50% (15 of 30) for masses, and 92% (22 of 24) and 79% (19 of 24) for the clusters. The FPR was 0.21-0.22/image for masses and 0.03-0.04/image for clusters, giving a total FPR of 0.24-0.26/image. Among the 132 mammography images, 59% (78 of 132) were identical regardless of the existence of CAD marks, and 22% (15 of 69) of the images with CAD marks were identical. The reproducibility of the CAD marks was 67% (12 of 18) for true-positive masses and 71% (17 of 24) for true-positive clusters. The reproducibility of CAD marks was 8% (4 of 53) for false-positive masses and 14% (1 of 7) for false-positive clusters. The reproducibility of the total mass marks was 23% (16 of 71), and that of the total cluster marks was 58% (18 of 31). The CAD system showed higher sensitivity and reproducibility of CAD marks for the calcification clusters, which are related to breast cancer. Yet the overall reproducibility of CAD marks was low; therefore, the CAD system must be applied with this limitation in mind.

  6. A Wearable Channel Selection-Based Brain-Computer Interface for Motor Imagery Detection.

    Lo, Chi-Chun; Chien, Tsung-Yi; Chen, Yu-Chun; Tsai, Shang-Ho; Fang, Wai-Chi; Lin, Bor-Shyh

    2016-02-06

    Motor imagery-based brain-computer interface (BCI) is a communication interface between an external machine and the brain. Many kinds of spatial filters are used in BCIs to enhance the electroencephalography (EEG) features related to motor imagery. The approach of channel selection, developed to retain meaningful EEG channels, is also an important technique for the development of BCIs. However, current BCI systems require a conventional EEG machine and EEG electrodes with conductive gel to acquire multi-channel EEG signals, and then transmit these EEG signals to a back-end computer to perform channel selection. This reduces convenience in daily life and limits BCI applications. To address these issues, a novel wearable channel selection-based brain-computer interface is proposed. Here, retractable comb-shaped active dry electrodes are designed to measure the EEG signals on a hairy site without conductive gel. Through the design of analog CAR spatial filters and the firmware of the EEG acquisition module, the function of the spatial filters can be performed without any computation, and channel selection can be performed in the front-end device. This improves the practicability of detecting motor imagery directly in the wearable EEG device or in commercial mobile phones or tablets, which may have relatively low system specifications. Finally, the performance of the proposed BCI is investigated, and the experimental results show that the proposed system is a good wearable BCI prototype.
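    The common average reference (CAR) performed in the analog front end amounts to the following subtraction, shown here digitally on simulated data for illustration.

    import numpy as np

    def common_average_reference(eeg):
        """Common average reference (CAR): subtract the instantaneous mean across
        channels from every channel. eeg has shape (channels, samples)."""
        return eeg - eeg.mean(axis=0, keepdims=True)

    # The paper performs this subtraction in analog circuitry before digitization;
    # here it is applied to simulated 8-channel data.
    eeg = np.random.default_rng(2).normal(0, 10, (8, 1000))
    car = common_average_reference(eeg)
    print(car.mean(axis=0)[:5])   # ~0 at every sample by construction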

  7. Derivatizations for improved detection of alcohols by gas chromatography and photoionization detection (GC-PID)

    Krull, I.S.; Swartz, M.; Driscoll, J.N.

    1984-01-01

    Pentafluorophenyldimethylsilyl chloride (flophemesyl chloride, Fl) is a well known derivatization reagent for improved electron capture detection (ECD) in gas chromatography (GC) (GC-ECD), but it has never been utilized for improved detectability and sensitivity in GC-photoionization detection (GC-PID). A wide variety of flophemesyl alcohol derivatives have been used in order to show a new approach for realizing greatly reduced minimum detection limits (MDL) of virtually all alcohol derivatives in GC-PID analysis. This particular derivatization approach is inexpensive and easy to apply, leading to quantitative or near 100% conversion of the starting alcohols to the expected flophemesyl ethers (silyl ethers). Detection limits can be lowered by 2-3 orders of magnitude for such derivatives when compared with the starting alcohols, along with calibration plots that are linear over 5-7 orders of magnitude. Specific GC conditions have been developed for many flophemesyl derivatives, in all cases using packed columns. Both ECD and PID relative response factors (RRFs) and normalized RRFs have been determined, and such ratios can now be used for improved analytic identification from complex sample matrices, where appropriate. 28 references, 2 figures, 5 tables

  8. Potential contribution of multiplanar reconstruction (MPR) to computer-aided detection of lung nodules on MDCT

    Matsumoto, Sumiaki; Ohno, Yoshiharu; Yamagata, Hitoshi; Nogami, Munenobu; Kono, Atsushi; Sugimura, Kazuro

    2012-01-01

    Purpose: To evaluate potential benefits of using multiplanar reconstruction (MPR) in computer-aided detection (CAD) of lung nodules on multidetector computed tomography (MDCT). Materials and methods: MDCT datasets of 60 patients with suspected lung nodules were retrospectively collected. Using “second-read” CAD, two radiologists (Readers 1 and 2) independently interpreted these datasets for the detection of non-calcified nodules (≥4 mm) with concomitant confidence rating. They did this task twice, first without MPR (using only axial images), and then 4 weeks later with MPR (using also coronal and sagittal MPR images), where the total reading time per dataset, including the time taken to assess the detection results of CAD software (CAD assessment time), was recorded. The total reading time and CAD assessment time without MPR and those with MPR were statistically compared for each reader. The radiologists’ performance for detecting nodules without MPR and the performance with MPR were compared using jackknife free-response receiver operating characteristic (JAFROC) analysis. Results: Compared to the CAD assessment time without MPR (mean, 69 s and 57 s for Readers 1 and 2), the CAD assessment time with MPR (mean, 46 s and 45 s for Readers 1 and 2) was significantly reduced (P < 0.001). For Reader 1, the total reading time was also significantly shorter in the case with MPR. There was no significant difference between the detection performances without MPR and with MPR. Conclusion: The use of MPR has the potential to improve the workflow in CAD of lung nodules on MDCT.

  9. Strategies for improving approximate Bayesian computation tests for synchronous diversification.

    Overcast, Isaac; Bagley, Justin C; Hickerson, Michael J

    2017-08-24

    Estimating the variability in isolation times across co-distributed taxon pairs that may have experienced the same allopatric isolating mechanism is a core goal of comparative phylogeography. The use of hierarchical Approximate Bayesian Computation (ABC) and coalescent models to infer temporal dynamics of lineage co-diversification has been a contentious topic in recent years. Key issues that remain unresolved include the choice of an appropriate prior on the number of co-divergence events (Ψ), as well as the optimal strategies for data summarization. Through simulation-based cross-validation we explore the impact of the strategy for sorting summary statistics and of the choice of prior on Ψ on the estimation of co-divergence variability. We also introduce a new setting (β) that can potentially improve estimation of Ψ by enforcing a minimal temporal difference between pulses of co-divergence. We apply this new method to three empirical datasets: one dataset each of co-distributed taxon pairs of Panamanian frogs and freshwater fishes, and a large set of Neotropical butterfly sister-taxon pairs. We demonstrate that the choice of prior on Ψ has little impact on inference, but that sorting summary statistics yields substantially more reliable estimates of co-divergence variability despite violations of assumptions about exchangeability. We find that the implementation of β improves estimation of Ψ, with the improvement being most dramatic for larger numbers of taxon pairs. We find equivocal support for synchronous co-divergence for both of the Panamanian groups, but considerable support for asynchronous divergence among the Neotropical butterflies. Our simulation experiments demonstrate that using sorted summary statistics results in improved estimates of the variability in divergence times, whereas the choice of hyperprior on Ψ has negligible effect. Additionally, estimating the number of pulses of co-divergence across co-distributed taxon pairs is likewise improved by enforcing a minimal temporal difference between pulses.
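    A simplified sketch of the β setting: when drawing pulse times from the prior, draws whose pulses are closer together than β are rejected. The uniform prior and parameter values are illustrative, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(3)

    def draw_pulse_times(n_pulses, t_max, beta, max_tries=10_000):
        """Draw co-divergence pulse times uniformly on (0, t_max), rejecting draws
        whose pulses are closer together than beta (the minimal temporal difference)."""
        for _ in range(max_tries):
            t = np.sort(rng.uniform(0.0, t_max, n_pulses))
            if n_pulses == 1 or np.diff(t).min() >= beta:
                return t
        raise RuntimeError("beta too large for this t_max / number of pulses")

    def simulate_divergences(n_taxa, t_max=10.0, beta=0.5):
        """One draw from a simplified hABC prior: pick the number of pulses (psi),
        pulse times separated by >= beta, and assign each taxon pair to a pulse."""
        psi = rng.integers(1, n_taxa + 1)
        times = draw_pulse_times(psi, t_max, beta)
        assignment = rng.integers(0, psi, n_taxa)
        return psi, times[assignment]

    psi, tau = simulate_divergences(n_taxa=8)
    print(psi, np.round(np.sort(tau), 2))   # divergence times, clustered into pulses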

  10. Improving Software Performance in the Compute Unified Device Architecture

    Alexandru PIRJAN

    2010-01-01

    This paper analyzes several aspects of improving software performance for applications written in the Compute Unified Device Architecture (CUDA). We address an issue of great importance when programming a CUDA application: the Graphics Processing Unit's (GPU's) memory management through transpose kernels. We also benchmark and evaluate the performance of progressively optimizing a matrix-transposition application in CUDA. One particular interest was to research how well the optimization techniques applied to software applications written in CUDA scale to the latest generation of general-purpose graphics processing units (GPGPUs), like the Fermi architecture implemented in the GTX480 and the previous architecture implemented in the GTX280. Lately, there has been much interest in the literature in this type of optimization analysis, but none of the works so far (to the best of our knowledge) has tried to validate whether the optimizations apply to a GPU from the latest Fermi architecture and how well the Fermi architecture scales to these software performance-improving techniques.
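    The classic shared-memory optimization for a transpose kernel looks as follows; this sketch uses Numba's CUDA dialect rather than native CUDA C, and the tile size and padding trick are the standard ones, not necessarily the exact kernels benchmarked in the paper. A CUDA-capable GPU is required to run it.

    import numpy as np
    from numba import cuda, float32

    TILE = 32  # tile edge; one thread block handles one TILE x TILE tile

    @cuda.jit
    def transpose_tiled(src, dst):
        # Shared-memory tile; the +1 padding avoids shared-memory bank conflicts.
        tile = cuda.shared.array(shape=(TILE, TILE + 1), dtype=float32)
        x = cuda.blockIdx.x * TILE + cuda.threadIdx.x
        y = cuda.blockIdx.y * TILE + cuda.threadIdx.y
        if x < src.shape[1] and y < src.shape[0]:
            tile[cuda.threadIdx.y, cuda.threadIdx.x] = src[y, x]  # coalesced read
        cuda.syncthreads()
        x = cuda.blockIdx.y * TILE + cuda.threadIdx.x
        y = cuda.blockIdx.x * TILE + cuda.threadIdx.y
        if x < dst.shape[1] and y < dst.shape[0]:
            dst[y, x] = tile[cuda.threadIdx.x, cuda.threadIdx.y]  # coalesced write

    a = np.arange(1024 * 1024, dtype=np.float32).reshape(1024, 1024)
    out = np.zeros((1024, 1024), dtype=np.float32)
    grid = (1024 // TILE, 1024 // TILE)
    transpose_tiled[grid, (TILE, TILE)](a, out)
    assert np.array_equal(out, a.T)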

  11. Improved Ordinary Measure and Image Entropy Theory based intelligent Copy Detection Method

    Dengpan Ye

    2011-10-01

    Nowadays, more and more multimedia websites appear in social networks. This brings security problems, such as privacy, piracy, and disclosure of sensitive content. Aiming at copyright protection, copy detection technology for multimedia content has become a hot topic. In our previous work, a new computer-based copyright control system used to detect the media was proposed. Based on this system, this paper proposes an improved media feature matching measure and an entropy-based copy detection method. The Levenshtein distance is used to enhance the matching degree of the feature-matching measure in copy detection. For entropy-based copy detection, we fuse two features derived from the entropy matrix of the image. Firstly, we extract the entropy matrix of the image and normalize it. Then, we fuse the eigenvalue feature and the transfer-matrix feature of the entropy matrix. The fused features are used for image copy detection. Experiments show that, compared with using either kind of feature alone, the fused-feature matching method is markedly more robust and effective. The fused feature achieves a high detection rate for copy images that have undergone attacks such as noise, compression, zooming, and rotation. Compared with the referenced methods, the proposed method is more intelligent and achieves good performance.
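    The Levenshtein distance used for feature matching is the classic dynamic-programming edit distance; the fingerprint strings below are hypothetical.

    def levenshtein(a, b):
        """Classic dynamic-programming edit distance between two feature strings;
        a smaller distance means a higher matching degree between media features."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    # Fingerprints of an original and a slightly attacked copy (hypothetical strings):
    print(levenshtein("a9f3c27b", "a9f3d27b"))   # 1 -> near-duplicate
    print(levenshtein("a9f3c27b", "77e01bd4"))   # large -> unrelated content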

  12. Comparison of computed tomography and radiography for detecting changes induced by malignant nasal neoplasia in dogs

    Park, R.D.; Beck, E.R.; LeCouteur, R.A.

    1992-01-01

    The ability of computed tomography and radiography to detect changes associated with nasal neoplasia was compared in dogs. Eighteen areas or anatomic structures were evaluated in 21 dogs for changes indicative of neoplasia. Computed tomography was superior (P ≤ 0.05) to radiography for detecting changes in 14 of 18 areas. Radiography was not superior for detecting changes in any structure or area. Computed tomography reveals vital information not always detected radiographically to assist in providing a prognosis and in planning treatment for nasal neoplasms in dogs.

  13. Computer-Aided Detection of Malignant Lung Nodules on Chest Radiographs: Effect on Observers' Performance

    Lee, Kyung Hee; Goo, Jin Mo; Park, Chang Min; Lee, Hyun Ju; Jin, Kwang Nam

    2012-01-01

    To evaluate the effect of a computer-aided detection (CAD) system on observer performance in the detection of malignant lung nodules on chest radiographs. Two hundred chest radiographs (100 normal and 100 abnormal with malignant solitary lung nodules) were evaluated. With CT and histological confirmation serving as the reference, the mean nodule size was 15.4 mm (range, 7-20 mm). Five chest radiologists and five radiology residents independently interpreted both the original radiographs and CAD output images using the sequential testing method. The performance of the observers for the detection of malignant nodules with and without CAD was compared using jackknife free-response receiver operating characteristic analysis. Fifty-nine nodules were detected by the CAD system, with a false-positive rate of 1.9 nodules per case. The figure of merit for the detection of malignant lung nodules significantly increased from 0.90 to 0.92 for a group of observers excluding one first-year resident (p = 0.04). When lowering the confidence score was not allowed, the average figure of merit also increased from 0.90 to 0.91 (p = 0.04) for all observers after the CAD review. On average, the sensitivities with and without CAD were 87% and 84%, respectively; the false-positive rates per case with and without CAD were 0.19 and 0.17, respectively. The number of additional malignancies detected following true-positive CAD marks ranged from zero to seven for the various observers. The CAD system may help improve observer performance in detecting malignant lung nodules on chest radiographs and contribute to a decrease in missed lung cancers.

  14. Computer-Assisted Detection of 90% of EFL Student Errors

    Harvey-Scholes, Calum

    2018-01-01

    Software can facilitate English as a Foreign Language (EFL) students' self-correction of their free-form writing by detecting errors; this article examines the proportion of errors which software can detect. A corpus of 13,644 words of written English was created, comprising 90 compositions written by Spanish-speaking students at levels A2-B2…

  15. Automatic detection of pulmonary nodules at spiral CT: clinical application of a computer-aided diagnosis system

    Wormanns, Dag; Fiebich, Martin; Saidi, Mustafa; Diederich, Stefan; Heindel, Walter

    2002-01-01

    The aim of this study was to evaluate a computer-aided diagnosis (CAD) workstation with automatic detection of pulmonary nodules at low-dose spiral CT in a clinical setting for early detection of lung cancer. Eighty-eight consecutive spiral-CT examinations were reported by two radiologists in consensus. All examinations were reviewed using a CAD workstation with a self-developed algorithm for automatic detection of pulmonary nodules. The algorithm is designed to detect nodules with diameters of at least 5 mm. A total of 153 nodules were detected with at least one modality (radiologists in consensus or CAD; 85 nodules with diameter <5 mm, 68 with diameter ≥5 mm). The results of automatic nodule detection were compared to the nodules detected with any modality as the gold standard. Computer-aided diagnosis correctly identified 26 of 59 (44%) nodules with diameters ≥5 mm detected by visual assessment by the radiologists; of these, CAD detected 44% (24 of 54) of the nodules without pleural contact. In addition, 12 nodules ≥5 mm were detected which were not mentioned in the radiologists' report but represented real nodules. Sensitivity for detection of nodules ≥5 mm was 85% (58 of 68) for the radiologists and 38% (26 of 68) for CAD. There were 5.8±3.6 false-positive results of CAD per CT study. Computer-aided diagnosis improves detection of pulmonary nodules at spiral CT and is a valuable second opinion in a clinical setting for lung cancer screening despite its still limited sensitivity. (orig.)

  16. Improving detection probabilities for pests in stored grain.

    Elmouttie, David; Kiermeier, Andreas; Hamilton, Grant

    2010-12-01

    The presence of insects in stored grain is a significant problem for grain farmers, bulk grain handlers and distributors worldwide. Inspection of bulk grain commodities is essential to detect pests and thereby to reduce the risk of their presence in exported goods. It has been well documented that insect pests cluster in response to factors such as microclimatic conditions within bulk grain. Statistical sampling methodologies for grain, however, have typically considered pests and pathogens to be homogeneously distributed throughout grain commodities. In this paper, a sampling methodology is demonstrated that accounts for the heterogeneous distribution of insects in bulk grain. It is shown that failure to account for the heterogeneous distribution of pests may lead to overestimates of the capacity for a sampling programme to detect insects in bulk grain. The results indicate the importance of the proportion of grain that is infested in addition to the density of pests within the infested grain. It is also demonstrated that the probability of detecting pests in bulk grain increases as the number of subsamples increases, even when the total volume or mass of grain sampled remains constant. This study underlines the importance of considering an appropriate biological model when developing sampling methodologies for insect pests. Accounting for a heterogeneous distribution of pests leads to a considerable improvement in the detection of pests over traditional sampling models. Copyright © 2010 Society of Chemical Industry.
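    A minimal two-component version of such a heterogeneous model makes the subsampling effect concrete; the infestation parameters below are illustrative, not the paper's fitted values.

    import math

    def p_detect(theta, lam, m, n):
        """Probability that at least one of n subsamples (each m kg) contains an
        insect, for a bulk where a fraction theta is infested at density lam
        insects/kg and the rest is clean; Poisson counts within infested grain."""
        p_empty_one = (1 - theta) + theta * math.exp(-lam * m)
        return 1 - p_empty_one ** n

    # Same 3 kg total sample, split into more subsamples -> higher detection power:
    for n in (1, 3, 10, 30):
        print(n, "subsamples:", round(p_detect(theta=0.05, lam=2.0, m=3.0 / n, n=n), 3))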

  17. Improved Genetic Algorithm Optimization for Forward Vehicle Detection Problems

    Longhui Gang

    2015-07-01

    Automated forward vehicle detection is an integral component of many advanced driver-assistance systems. Methods based on multi-visual-information fusion, with their distinctive advantages, have become one of the important topics in this research field. In the detection process, two key points must be resolved: one is finding robust features for identification, and the other is applying an efficient algorithm for training a model that fuses multiple sources of information. This paper presents an adaptive SVM (Support Vector Machine) model to detect vehicles, with range estimation, using an on-board camera. Because of extrinsic factors such as shadows and illumination, we pay particular attention to enhancing the system with several robust features extracted from a real driving environment. Then, with the introduction of an improved genetic algorithm, the features are fused efficiently by the proposed SVM model. In order to apply the model in a forward collision warning system, longitudinal distance information is provided simultaneously. The proposed method was successfully implemented on a test car, and the evaluation results show reliability in terms of both detection rate and potential effectiveness in a real driving environment.
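    The coupling of a genetic algorithm with an SVM can be sketched generically as hyperparameter search; the GA operators and data below are simplifications, not the paper's improved algorithm.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    def fitness(genome):
        """Cross-validated accuracy of an RBF SVM; genome = (log2 C, log2 gamma)."""
        clf = SVC(C=2.0 ** genome[0], gamma=2.0 ** genome[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    pop = rng.uniform(-5, 5, (12, 2))                        # initial population
    for generation in range(10):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[-6:]]               # keep the best half
        # Crossover: each child mixes the genes of two random parents.
        pa, pb = parents[rng.integers(0, 6, 6)], parents[rng.integers(0, 6, 6)]
        children = np.where(rng.random((6, 2)) < 0.5, pa, pb)
        children += rng.normal(0, 0.5, children.shape)       # mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(g) for g in pop])]
    print(f"best C = {2.0 ** best[0]:.3g}, gamma = {2.0 ** best[1]:.3g}")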

  18. Portopulmonary hypertension: Improved detection using CT and echocardiography in combination

    Devaraj, Anand; Loveridge, Robert; Bernal, William; Willars, Christopher; Wendon, Julia A.; Auzinger, Georg; Bosanac, Diana; Stefanidis, Konstantinos; Desai, Sujal R.

    2014-01-01

    To establish the relationship between CT signs of pulmonary hypertension and mean pulmonary artery pressure (mPAP) in patients with liver disease, and to determine the additive value of CT in the detection of portopulmonary hypertension in combination with transthoracic echocardiography. Forty-nine patients referred for liver transplantation were retrospectively reviewed. Measured CT signs included the main pulmonary artery/ascending aorta diameter ratio (PA/AA_meas) and the mean left and right main PA diameter (RLPA_meas). Enlargement of the pulmonary artery compared to the ascending aorta was also assessed visually (PA/AA_vis). CT measurements were correlated with right-sided heart catheter-derived mPAP. The ability of PA/AA_vis combined with echocardiogram-derived right ventricular systolic pressure (RVSP) to detect portopulmonary hypertension was tested with ROC analysis. There were moderate correlations between mPAP and both PA/AA_meas and RLPA_meas (r_s = 0.41 and r_s = 0.42, respectively; p < 0.01). Combining PA/AA_vis with transthoracic echocardiography-derived RVSP improved the detection of portopulmonary hypertension (AUC = 0.8, p < 0.0001). CT contributes to the non-invasive detection of portopulmonary hypertension when used in a diagnostic algorithm with transthoracic echocardiography. CT may have a role in the pre-liver-transplantation triage of patients with portopulmonary hypertension for right-sided heart catheterisation. (orig.)
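    The additive-value idea, combining a CT sign with echocardiographic RVSP and comparing ROC areas, can be sketched as follows on synthetic data (the cohort values are invented for illustration).

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(5)

    # Synthetic cohort (values are illustrative, not the study's data):
    n = 200
    has_poph = rng.random(n) < 0.2                      # portopulmonary hypertension
    rvsp = rng.normal(30, 8, n) + 25 * has_poph         # echo-derived RVSP (mmHg)
    pa_aa_vis = rng.random(n) < np.where(has_poph, 0.7, 0.15)  # visual PA > AA on CT

    # Simple combined score: RVSP plus a fixed bonus when CT shows PA enlargement.
    combined = rvsp + 15 * pa_aa_vis

    for name, score in [("RVSP alone", rvsp), ("RVSP + CT sign", combined)]:
        print(name, "AUC =", round(roc_auc_score(has_poph, score), 2))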

  19. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-01-01

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC. PMID:26123281

  20. A ROC-based feature selection method for computer-aided detection and diagnosis

    Wang, Songyuan; Zhang, Guopeng; Liao, Qimei; Zhang, Junying; Jiao, Chun; Lu, Hongbing

    2014-03-01

    Image-based computer-aided detection and diagnosis (CAD) has been a very active research topic, aiming to assist physicians in detecting lesions and distinguishing benign from malignant ones. However, the datasets fed into a classifier usually suffer from a small number of samples, as well as significantly fewer samples available in one class (having a disease) than in the other, resulting in suboptimal classifier performance. Identifying the most characterizing features of the observed data for lesion detection is critical to improving the sensitivity and minimizing the false positives of a CAD system. In this study, we propose a novel feature selection method, mR-FAST, that combines the minimal-redundancy-maximal-relevance (mRMR) framework with the selection metric FAST (feature assessment by sliding thresholds), which is based on the area under a ROC curve (AUC) generated on optimal simple linear discriminants. With three feature datasets extracted from CAD systems for colon polyps and bladder cancer, we show that the space of candidate features selected by mR-FAST is more characterizing for lesion detection, with higher AUC, enabling a compact subset of superior features to be found at low cost.
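    A rough sketch of the two ingredients, single-feature AUC as relevance and correlation as redundancy, is given below; the greedy rule and synthetic features are illustrative, not the authors' exact mR-FAST formulation.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(6)

    # Synthetic CAD features: 2 informative (and correlated), 8 noise features.
    n = 400
    y = rng.integers(0, 2, n)
    f0 = y + rng.normal(0, 1, n)
    X = np.column_stack([f0, f0 + rng.normal(0, 0.3, n)]
                        + [rng.normal(0, 1, n) for _ in range(8)])

    def feature_auc(x, y):
        """FAST-style single-feature relevance: AUC of the feature used as a score."""
        auc = roc_auc_score(y, x)
        return max(auc, 1 - auc)   # direction-independent

    relevance = np.array([feature_auc(X[:, j], y) for j in range(X.shape[1])])

    # Greedy mRMR-style selection: relevance minus mean correlation with chosen set.
    selected = [int(np.argmax(relevance))]
    while len(selected) < 3:
        redundancy = np.abs(np.corrcoef(X.T))[:, selected].mean(axis=1)
        score = relevance - redundancy
        score[selected] = -np.inf
        selected.append(int(np.argmax(score)))
    print("selected features:", selected)   # picks one of the correlated pair, not both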

  1. High resolution PET breast imager with improved detection efficiency

    Majewski, Stanislaw

    2010-06-08

    A highly efficient PET breast imager for detecting lesions in the entire breast, including those located close to the patient's chest wall. The breast imager includes a ring of imaging modules surrounding the imaged breast. Each imaging module includes a slant imaging light guide inserted between a gamma radiation sensor and a photodetector. The slant light guide permits the gamma radiation sensors to be placed in close proximity to the skin of the chest wall, thereby extending the sensitive region of the imager to the base of the breast. Several types of photodetectors are proposed for use in the detector modules, with compact silicon photomultipliers as the preferred choice due to their compactness. The geometry of the detector heads and the arrangement of the detector ring significantly reduce dead regions, thereby improving detection efficiency for lesions located close to the chest wall.

  2. Virtual colonoscopy: effect of computer-assisted detection (CAD) on radiographer performance

    Burling, D.; Moore, A.; Marshall, M.; Weldon, J.; Gillen, C.; Baldwin, R.; Smith, K.; Pickhardt, P.; Honeyfield, L.; Taylor, S.

    2008-01-01

    Aim: To investigate the effect of a virtual colonoscopy (VC) computer-assisted detection (CAD) system on polyp detection by trained radiographers. Materials and methods: Four radiographers trained in VC interpretation and the use of CAD systems read a total of 62 endoscopically validated VC examinations containing 150 polyps (size range 5-50 mm) in four sessions, recording any polyps found and the examination interpretation time, first without and then with the addition of CAD as a 'second reader'. After a temporal separation of 6 weeks to reduce recall bias, the VC examinations were re-read using 'concurrent reader' CAD. Interpretation times, polyp detection, and the number of false positives were compared between the different reader paradigms using paired t and paired exact tests. Results: Overall, use of 'second reader' CAD significantly improved polyp detection by 12% (p < 0.001, CI 6%, 17%), from 48% to 60%. There was no significant improvement using CAD as a concurrent reader (p = 0.20; difference of 7%, CI -3%, 16%) and no significant overall difference in recorded false positives with the second reader or concurrent CAD paradigms compared with unassisted reading (p = 0.25 and 0.65, respectively). The mean interpretation time was 21.7 min for unassisted reading, 29.6 min (p < 0.001) for the second reader and 19.1 min (p = 0.12) for the concurrent reading paradigm. Conclusion: CAD, when used as a second reader, can significantly improve radiographer reading performance with only a moderate increase in interpretation times.

  4. Computer-based instrumentation for partial discharge detection in GIS

    Md Enamul Haque; Ahmad Darus; Yaacob, M.M.; Halil Hussain; Feroz Ahmed

    2000-01-01

    Partial discharge is one of the prominent indicators of defects and insulation degradation in gas-insulated switchgear (GIS). Partial discharges (PD) have a harmful effect on the lifetime of the insulation of high-voltage equipment. PD detection using the acoustic technique and subsequent analysis is currently an efficient method of performing non-destructive testing of GIS apparatus. A low-cost PC-based acoustic PD detection instrument has been developed for the non-destructive diagnosis of GIS. This paper describes the development of a PC-based instrumentation system for partial discharge detection in GIS, and some experimental results are also presented. (Author)

  5. Improved proton computed tomography by dual modality image reconstruction

    Hansen, David C., E-mail: dch@ki.au.dk; Bassler, Niels [Experimental Clinical Oncology, Aarhus University, 8000 Aarhus C (Denmark); Petersen, Jørgen Breede Baltzer [Medical Physics, Aarhus University Hospital, 8000 Aarhus C (Denmark); Sørensen, Thomas Sangild [Computer Science, Aarhus University, 8000 Aarhus C, Denmark and Clinical Medicine, Aarhus University, 8200 Aarhus N (Denmark)

    2014-03-15

    Purpose: Proton computed tomography (CT) is a promising imaging modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm of water of most commercial proton therapy systems limits the sites that can be scanned from a full 360° rotation. In this paper the authors propose a method to overcome the problem using a dual modality reconstruction (DMR) combining the proton data with a cone-beam x-ray prior. Methods: A Catphan 600 phantom was scanned using a cone-beam x-ray CT scanner. A digital replica of the phantom was created in the Monte Carlo code Geant4 and a 360° proton CT scan was simulated, storing the entrance and exit position and momentum vector of every proton. Proton CT images were reconstructed using a varying number of angles from the scan. The proton CT images were reconstructed using a constrained nonlinear conjugate gradient algorithm, minimizing the total variation and the deviation from the x-ray CT prior while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone-beam CT prior was retained for the fully sampled case and the 90° interval case, with the frequency at MTF = 0.5 (modulation transfer function) ranging from 5.22 to 5.65 line pairs/cm. In the 45° interval case, the frequency at MTF = 0.5 dropped to 3.91 line pairs/cm. For the fully sampled DMR, the maximal root mean square (RMS) error was 0.006 in units of relative stopping power. For the limited-angle cases the maximal RMS error was 0.18, an almost five-fold improvement over the cone-beam CT estimate. Conclusions: Dual modality reconstruction yields the high spatial resolution of cone-beam x-ray CT while maintaining the improved stopping power estimation of proton CT. In the case of limited angles, the use of a prior image in proton CT greatly improves the resolution and the stopping power estimate, but does not fully achieve the quality of a 360° scan.
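    The shape of such a dual-modality cost function can be sketched as follows; the operators and weights here are toy stand-ins (the authors use a constrained nonlinear conjugate gradient algorithm on curved proton paths).

    import numpy as np

    def total_variation(x, eps=1e-8):
        """Isotropic total variation of a 2D image (smoothed for differentiability)."""
        gx = np.diff(x, axis=0, append=x[-1:, :])
        gy = np.diff(x, axis=1, append=x[:, -1:])
        return np.sum(np.sqrt(gx ** 2 + gy ** 2 + eps))

    def dmr_objective(x, A, b, x_prior, lam=0.1, mu=0.05):
        """Toy stand-in for the dual-modality cost: stay consistent with the proton
        projection data (A x = b), stay close to the x-ray CT prior, keep TV small."""
        data_term = np.sum((A @ x.ravel() - b) ** 2)
        return data_term + lam * total_variation(x) + mu * np.sum((x - x_prior) ** 2)

    # Tiny synthetic problem: 8x8 "stopping power" image, random projection matrix.
    rng = np.random.default_rng(7)
    x_true = np.zeros((8, 8)); x_true[2:6, 2:6] = 1.0
    A = rng.normal(0, 1, (40, 64))                 # 40 proton "projections"
    b = A @ x_true.ravel()
    x_prior = x_true + rng.normal(0, 0.05, (8, 8)) # cone-beam CT prior, slightly off
    print(dmr_objective(x_true, A, b, x_prior))    # low cost at the true image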

  7. Improved patient specific seizure detection during pre-surgical evaluation.

    Chua, Eric C-P

    2011-04-01

    There is considerable interest in improved off-line automated seizure detection methods that would decrease the workload of EEG monitoring units. Subject-specific approaches have been demonstrated to perform better than subject-independent ones. However, for pre-surgical diagnostics, the traditional method of obtaining a priori data to train subject-specific classifiers is not practical. We present an alternative method that works by adapting the threshold of a subject-independent classifier to a specific subject based on feedback from the user.
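    A minimal sketch of the feedback-driven threshold adaptation, under the assumption of a scalar detector score, could look like this; the step size and scores are hypothetical.

    def adapt_threshold(threshold, score, is_seizure, step=0.05):
        """Nudge a subject-independent detector threshold using reviewer feedback:
        lower it after a missed seizure, raise it after a false detection."""
        if is_seizure and score < threshold:
            threshold -= step          # missed event -> become more sensitive
        elif not is_seizure and score >= threshold:
            threshold += step          # false alarm -> become more specific
        return threshold

    # Feedback loop over reviewed detections (scores and labels are hypothetical):
    threshold = 0.8
    for score, label in [(0.75, True), (0.85, False), (0.70, True), (0.95, True)]:
        threshold = adapt_threshold(threshold, score, label)
    print(f"adapted threshold: {threshold:.2f}")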

  8. Improvement of failed fuel detection system of light water reactor

    Chung, M.K.; Kang, H.D.; Cho, S.W.; Lee, K.W.

    1981-01-01

    A multi-task DAAS system utilizing a PDP-11/23 computer was assembled and tested for its performance. By connecting four Ge(Li) detectors to this DAAS, test experiments were performed to prove the system's capability for detection and analysis of fission gases dissolved in four independently sampled streams of primary cooling water from a power reactor. Appropriate computer programs were also introduced for this application, and satisfactory results were obtained. For further application of this DAAS to the quality testing of fuel pins (uniform distribution of enriched uranium in fresh fuel pellets), a prototype fuel scanner system was designed, constructed and tested. The operational principle of this system is based on the determination of the 235U/238U abundance ratio in pellets by precision spectrometry of gamma-rays emitted from a portion of the fuel pellets. For uniform scanning, rotational and traverse motions at pre-selected speeds were applied to the fuel pin under test. A long-lens magnetic beta-spectrometer from Argonne National Laboratory was transferred to KAERI and re-installed for future precision beta-gamma spectroscopic research on short-lived fission product nuclei.

  9. Pipeline leak detection and location by on-line-correlation with a process computer

    Siebert, H.; Isermann, R.

    1977-01-01

    A method for leak detection in pipelines using a correlation technique is described. For leak detection, and also for leak localisation and estimation of the leak flow, recursive estimation algorithms are used. The efficiency of the methods is demonstrated with a process computer and a pipeline model operating on-line. It is shown that very small leaks can be detected. (orig.)
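    The correlation idea for locating a leak between two sensors can be sketched as follows; the signal model and pipe parameters are invented for illustration.

    import numpy as np

    def locate_leak(sig_a, sig_b, fs, length_m, wave_speed):
        """Cross-correlate the noise signals from two sensors that bracket the leak;
        the lag of the correlation peak gives the arrival-time difference tau =
        t_a - t_b, and the distance from sensor A follows as d_a = (L + v*tau)/2."""
        corr = np.correlate(sig_a - sig_a.mean(), sig_b - sig_b.mean(), mode="full")
        lag = np.argmax(corr) - (len(sig_b) - 1)     # in samples
        tau = lag / fs
        return (length_m + wave_speed * tau) / 2

    # Synthetic test: leak noise reaches sensor A 20 ms before sensor B.
    fs, n = 10_000, 4096
    noise = np.random.default_rng(8).normal(0, 1, n)
    sig_a = noise
    sig_b = np.roll(noise, int(0.020 * fs))          # B hears it 200 samples later
    print(locate_leak(sig_a, sig_b, fs, length_m=1000.0, wave_speed=1200.0))
    # ~488 m from sensor A on a 1000 m section, i.e. nearer the sensor that hears first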

  10. Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-

    Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo

    1995-07-01

    This year is the third year of the Government-sponsored mid- and long-term nuclear power technology development project. The scope of this sub-project, titled 'The improvement of level-1 PSA computer codes', is divided into three main activities: (1) methodology development in underdeveloped fields such as risk assessment technology for plant shutdown and low-power situations, (2) computer code package development for level-1 PSA, and (3) applications of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants are reviewed and plant operating states (POS) are decided. A sample core damage frequency is estimated for the over-draining event during RCS low water inventory, i.e., mid-loop operation. Human reliability analysis and thermal-hydraulic support analysis are identified as needed to reduce uncertainty. Two design improvement alternatives are evaluated using the PSA technique for the mid-loop operation situation: one is the use of the containment spray system as a backup to the shutdown cooling system, and the other is the installation of two independent level indication systems. Procedure change is identified as preferable to hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC Windows environment. For the improvement of efficiency in performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs, 15 tabs, 59 refs. (Author)

  12. Effect of noise in computed tomographic reconstructions on detectability

    Hanson, K.M.

    1982-01-01

    The detectability of features in an image is ultimately limited by the random fluctuations in density, or noise, present in that image. The noise in CT reconstructions arising from the statistical fluctuations in the one-dimensional input projection measurements has an unusual character owing to the reconstruction procedure. Such CT image noise differs from the white noise normally found in images in its lack of low-frequency components. The noise power spectrum of CT reconstructions can be related to the effective density of x-ray quanta detected in the projection measurements, designated as NEQ (noise-equivalent quanta). The detectability of objects that are somewhat larger than the spatial resolution is directly related to NEQ. Since contrast resolution may be defined in terms of the ability to detect large, low-contrast objects, the measurement of a CT scanner's NEQ may be used to characterize its contrast sensitivity.
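    The lack of low-frequency noise can be demonstrated in one dimension: white projection noise passed through the ramp filter used in filtered back-projection acquires a power spectrum that rises with frequency. This sketch is a 1D illustration only, not a full 2D NPS/NEQ measurement.

    import numpy as np

    rng = np.random.default_rng(9)
    n, trials = 512, 200

    freqs = np.fft.rfftfreq(n)                 # cycles/sample
    ramp = freqs                               # FBP ramp filter |f| (non-negative here)

    psd = np.zeros(len(freqs))
    for _ in range(trials):
        white = rng.normal(0, 1, n)            # projection (quantum) noise
        filtered = np.fft.irfft(np.fft.rfft(white) * ramp, n)
        psd += np.abs(np.fft.rfft(filtered)) ** 2
    psd /= trials

    # Power grows ~f^2 for an amplitude filter |f|: almost no power at low frequency.
    for i in (1, 64, 128, 255):
        print(f"f = {freqs[i]:.3f}  relative NPS = {psd[i]:.4f}")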

  13. Abstracting massive data for lightweight intrusion detection in computer networks

    Wang, Wei; Liu, Jiqiang; Pitsilis, Georgios; Zhang, Xiangliang

    2016-01-01

    Data abstraction refers to abstracting or extracting the most relevant information from a massive dataset. In this work, we propose three strategies of data abstraction, namely exemplar extraction, attribute selection, and attribute abstraction.

  14. Change Detection Algorithms for Information Assurance of Computer Networks

    Cardenas, Alvaro A

    2002-01-01

    .... In this thesis, the author will focus on the detection of three attack scenarios: the spreading of active worms throughout the Internet, distributed denial of service attacks, and routing attacks to wireless ad hoc networks...
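    A canonical change-detection algorithm for such traffic anomalies is the one-sided CUSUM; the sketch below is generic and not necessarily the thesis's exact detector.

    import numpy as np

    def cusum_alarm(x, mu0, k, h):
        """One-sided CUSUM change detection: accumulate evidence that the stream
        has drifted above its normal mean mu0; alarm when the statistic exceeds h."""
        s, alarms = 0.0, []
        for t, xt in enumerate(x):
            s = max(0.0, s + xt - mu0 - k)   # k = allowance (half the shift to detect)
            if s > h:
                alarms.append(t)
                s = 0.0                      # restart after signalling
        return alarms

    # E.g. connections/second to a subnet; the rate jumps when a worm starts scanning.
    rng = np.random.default_rng(10)
    traffic = np.concatenate([rng.poisson(5, 300), rng.poisson(9, 100)])
    print(cusum_alarm(traffic, mu0=5.0, k=2.0, h=25.0))   # alarms shortly after t=300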

  15. Evaluation of computer-aided detection and dual energy software in detection of peripheral pulmonary embolism on dual-energy pulmonary CT angiography

    Lee, Choong Wook; Seo, Joon Beom; Song, Jae-Woo; Kim, Mi-Young; Lee, Ha Young; Park, Yang Shin; Chae, Eun Jin; Jang, Yu Mi; Kim, Namkug; Krauss, Bernard

    2011-01-01

    To evaluate the sensitivity of computer-aided detection (CAD) and dual-energy software ('Lung PBV', 'Lung Vessels') for detecting peripheral pulmonary embolism (PE). Between Jan 2007 and Jan 2008, 309 patients underwent dual-energy CT angiography (DECTA) for the evaluation of suspected PE. Among them, 37 patients were retrospectively selected: 21 with PE at segmental-or-below levels and 16 without PE according to clinical reports. A standard computer-assisted detection (CAD) package and two new types of software ('Lung PBV', 'Lung Vessels') were applied on a dedicated workstation. This resulted in four alternative tests for detecting PE: DECTA alone and DECTA with CAD, 'Lung Vessels' and 'Lung PBV'. Two radiologists independently read all cases at different reading sessions. Two thoracic radiologists set the reference standard by combining all information from DECTA and the software. The sensitivity of detection for all, segmental, and subsegmental-or-below PE was assessed. The reference standard contained 136 PE (segmental 65, subsegmental-or-below 71). The sensitivity of detection for all, segmental and subsegmental-or-below pulmonary arteries was 54.5%/73.7%/34.4% with DECTA alone; 57.8%/76.8%/37.9% with CAD; 61.1%/79.9%/41.4% with 'Lung PBV'; and 64.0%/78.3%/48.5% with 'Lung Vessels', respectively. The use of CAD, 'Lung Vessels' and 'Lung PBV' shows improved capability to detect peripheral PE. (orig.)

  16. Observer training for computer-aided detection of pulmonary nodules in chest radiography.

    De Boo, Diederick W; van Hoorn, François; van Schuppen, Joost; Schijf, Laura; Scheerder, Maeke J; Freling, Nicole J; Mets, Onno; Weber, Michael; Schaefer-Prokop, Cornelia M

    2012-08-01

    To assess whether short-term feedback helps readers to increase their performance using computer-aided detection (CAD) for nodule detection in chest radiography. The 140 CXRs (56 with a solitary CT-proven nodule and 84 negative controls) were divided into four subsets of 35; each was read in a different order by six readers. Lesion presence, location and diagnostic confidence were scored without and with CAD (IQQA-Chest, EDDA Technology) as second reader. Readers received individual feedback after each subset. Sensitivity, specificity and area under the receiver-operating characteristic curve (AUC) were calculated for readings with and without CAD with respect to change over time and the impact of CAD. CAD stand-alone sensitivity was 59% with 1.9 false positives per image. Mean AUC slightly increased over time both with and without CAD (0.78 vs. 0.84 with and 0.76 vs. 0.82 without CAD), but the differences did not reach significance. Sensitivity increased (65% vs. 70% and 66% vs. 70%) and specificity decreased over time (79% vs. 74% and 80% vs. 77%), but no significant impact of CAD was found. Short-term feedback does not increase the ability of readers to differentiate true- from false-positive candidate lesions and to use CAD more effectively. • Computer-aided detection (CAD) is increasingly used as an adjunct for many radiological techniques. • Short-term feedback does not improve reader performance with CAD in chest radiography. • Differentiating between true- and false-positive CAD marks for lesions of low conspicuity proves difficult. • CAD can potentially increase reader performance for nodule detection in chest radiography.

  17. Clinical evaluation of a computer-aided diagnosis (CAD) prototype for the detection of pulmonary embolism.

    Buhmann, Sonja; Herzog, Peter; Liang, Jin; Wolf, Mathias; Salganicoff, Marcos; Kirchhoff, Chlodwig; Reiser, Maximilian; Becker, Christoph H

    2007-06-01

    To evaluate the performance of a prototype computer-aided diagnosis (CAD) tool using artificial intelligence techniques for the detection of pulmonary embolism (PE) and the possible benefit for general radiologists. Forty multidetector-row computed tomography datasets (16/64-channel scanner) using 100 kVp, 100 mAs effective/slice, and 1-mm axial reformats in a low-frequency reconstruction kernel were evaluated. A total of 80 mL iodinated contrast material was injected at a flow rate of 5 mL/s. First, six general radiologists marked any PE using a commercially available lung evaluation software with simultaneous, automatic processing by CAD in the background. An expert panel consisting of two chest radiologists analyzed all PE marks from the readers and CAD, also searching for additional findings primarily missed by both, forming the ground truth. The ground truth consisted of 212 emboli. Of these, 65 (31%) were centrally and 147 (69%) were peripherally located. The readers detected 157/212 emboli (74%), leading to a sensitivity of 97% (63/65) for central and 70% (103/147) for peripheral emboli with 9 false-positive findings. CAD detected 168/212 emboli (79%), reaching a sensitivity of 74% (48/65) for central and 82% (120/147) for peripheral emboli. A total of 154 CAD candidates were considered as false positives, yielding an average of 3.85 false positives/case. The CAD software showed a sensitivity comparable to that of the general radiologists, but with more false positives. CAD detection of findings incremental to the radiologists suggests benefit when used as a second reader. Future versions of CAD have the potential to further increase clinical benefit by improving sensitivity and reducing false marks.

  18. Computer-aided detection system applied to full-field digital mammograms

    Vega Bolivar, Alfonso; Sanchez Gomez, Sonia; Merino, Paula; Alonso-Bartolome, Pilar; Ortega Garcia, Estrella; Munoz Cacho, Pedro; Hoffmeister, Jeffrey W.

    2010-01-01

    Background: Although mammography remains the mainstay for breast cancer screening, it is an imperfect examination with a sensitivity of 75-92% for breast cancer. Computer-aided detection (CAD) has been developed to improve mammographic detection of breast cancer. Purpose: To retrospectively estimate CAD sensitivity and false-positive rate with full-field digital mammograms (FFDMs). Material and Methods: CAD was used to evaluate 151 cases of ductal carcinoma in situ (DCIS) (n=48) and invasive breast cancer (n=103) detected with FFDM. Retrospectively, CAD sensitivity was estimated based on breast density, mammographic presentation, histopathology type, and lesion size. CAD false-positive rate was estimated with screening FFDMs from 200 women. Results: CAD detected 93% (141/151) of cancer cases: 97% (28/29) in fatty breasts, 94% (81/86) in breasts containing scattered fibroglandular densities, 90% (28/31) in heterogeneously dense breasts, and 80% (4/5) in extremely dense breasts. CAD detected 98% (54/55) of cancers manifesting as calcifications, 89% (74/83) as masses, and 100% (13/13) as mixed masses and calcifications. CAD detected 92% (73/79) of invasive ductal carcinomas, 89% (8/9) of invasive lobular carcinomas, 93% (14/15) of other invasive carcinomas, and 96% (46/48) of DCIS. CAD sensitivity for cancers 1-10 mm was 87% (47/54); 11-20 mm, 99% (70/71); 21-30 mm, 86% (12/14); and larger than 30 mm, 100% (12/12). The CAD false-positive rate was 2.5 marks per case. Conclusion: CAD with FFDM showed a high sensitivity in identifying cancers manifesting as calcifications or masses. CAD sensitivity was maintained in small lesions (1-20 mm) and invasive lobular carcinomas, which have lower mammographic sensitivity.

  20. Application of Computational Intelligence to Improve Education in Smart Cities

    Gomede, Everton; Gaffo, Fernando Henrique; Briganó, Gabriel Ulian; de Barros, Rodolfo Miranda; Mendes, Leonardo de Souza

    2018-01-01

    According to UNESCO, education is a fundamental human right and every nation’s citizens should be granted universal access with equal quality to it. Because this goal is yet to be achieved in most countries, in particular in the developing and underdeveloped countries, it is extremely important to find more effective ways to improve education. This paper presents a model based on the application of computational intelligence (data mining and data science) that leads to the development of the student’s knowledge profile and that can help educators in their decision making for best orienting their students. This model also tries to establish key performance indicators to monitor objectives’ achievement within individual strategic planning assembled for each student. The model uses random forest for classification and prediction, graph description for data structure visualization and recommendation systems to present relevant information to stakeholders. The results presented were built based on the real dataset obtained from a Brazilian private K-9 (elementary) school. The obtained results include correlations among key data, a model to predict student performance and recommendations that were generated for the stakeholders. PMID:29346288
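
    As a hedged illustration of the classification step described above, the sketch below trains a random forest on synthetic student records; the feature names, data and labels are invented placeholders, not the paper's Brazilian K-9 dataset.

```python
# Minimal sketch: a random forest predicting an "at risk" student label
# from hypothetical profile features (all data here is synthetic).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(6.0, 1.5, n),     # hypothetical average grade so far
    rng.integers(0, 20, n),      # hypothetical number of absences
    rng.uniform(0, 10, n),       # hypothetical weekly study hours
])
# Synthetic "at risk" label: low grades combined with many absences.
y = ((X[:, 0] < 5.5) & (X[:, 1] > 8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("feature importances:", clf.feature_importances_)
```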

  3. Tablet computer enhanced training improves internal medicine exam performance.

    Baumgart, Daniel C; Wende, Ilja; Grittner, Ulrike

    2017-01-01

    Traditional teaching concepts in medical education do not take full advantage of current information technology. We aimed to objectively determine the impact of Tablet PC enhanced training on learning experience and MKSAP® (medical knowledge self-assessment program) exam performance. In this single-center, prospective, controlled study, final-year medical students and medical residents doing an inpatient service rotation were alternately assigned to either the active test (Tablet PC with custom multimedia education software package) or the traditional education (control) group. All completed an extensive questionnaire to collect their socio-demographic data and to evaluate educational status, computer affinity and skills, problem solving, eLearning knowledge and self-rated medical knowledge. Both groups were MKSAP® tested at the beginning and the end of their rotation. The MKSAP® score at the final exam was the primary endpoint. Data of 55 participants (tablet n = 24, controls n = 31; 36.4% male; median age 28 years; 65.5% students) were evaluable. The mean MKSAP® score improved in the Tablet PC group (score Δ +8, SD 11), but not in the control group (score Δ −7, SD 11). After adjustment for baseline score and confounders, the Tablet PC group showed on average 11% better MKSAP® test results compared to the control group (p < 0.05), and participants were favorable to adding tablet PC enhanced e-learning to their respective training programs.

  4. Digital Radiography and Computed Tomography (DRCT) Product Improvement Plan (PIP)

    Tim Roney; Bob Pink; Karen Wendt; Robert Seifert; Mike Smith

    2010-12-01

    The Idaho National Laboratory (INL) has been developing and deploying x-ray inspection systems for chemical weapons containers for the past 12 years under the direction of the Project Manager for Non-Stockpile Chemical Materiel (PMNSCM). In FY-10, funding was provided by the PMNSCM to advance the capabilities of these systems through the DRCT (Digital Radiography and Computed Tomography) Product Improvement Plan (PIP). The DRCT PIP identified three research tasks: end user study, detector evaluation, and DRCT/PINS integration. Work commenced in February, 2010. Due to the late start and the schedule for field inspection of munitions at various sites, it was not possible to spend sufficient field time with operators to develop a complete end user study. We were able to interact with several operators, principally Mr. Mike Rowan, who provided substantial useful input through several discussions and development of a set of field notes from the Pueblo, CO field mission. We will be pursuing ongoing interactions with field personnel as opportunities arise in FY-11.

  5. Stratified computed tomography findings improve diagnostic accuracy for appendicitis

    Park, Geon; Lee, Sang Chul; Choi, Byung-Jo; Kim, Say-June

    2014-01-01

    AIM: To improve the diagnostic accuracy in patients with symptoms and signs of appendicitis, but without confirmative computed tomography (CT) findings. METHODS: We retrospectively reviewed the database of 224 patients who had been operated on for the suspicion of appendicitis, but whose CT findings were negative or equivocal for appendicitis. The patient population was divided into two groups: a pathologically proven appendicitis group (n = 177) and a non-appendicitis group (n = 47). The CT images of these patients were re-evaluated according to the characteristic CT features as described in the literature. The re-evaluations and baseline characteristics of the two groups were compared. RESULTS: The two groups showed significant differences with respect to appendiceal diameter, and the presence of periappendiceal fat stranding and intraluminal air in the appendix. A larger proportion of patients in the appendicitis group than in the non-appendicitis group showed distended appendices larger than 6.0 mm (66.3% vs 37.0%; P < 0.05). Furthermore, the presence of two or more of these factors increased the odds ratio to 6.8 times higher than baseline (95%CI: 3.013-15.454; P < 0.05). CONCLUSION: Stratified CT findings can improve diagnostic accuracy for appendicitis with equivocal CT findings. PMID:25320531
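
    The reported odds ratio and confidence interval come from a standard 2x2 analysis; the snippet below reproduces the arithmetic (Woolf's logit method) on invented counts, not the study's actual data.

```python
# Worked example of an odds ratio with a 95% CI, in the style of the
# "6.8 times higher than baseline (95%CI: ...)" result reported above.
# The 2x2 counts below are illustrative, not the study's data.
import math

# rows: >=2 CT findings / <2 CT findings; cols: appendicitis / non-appendicitis
a, b = 120, 15   # >=2 findings: appendicitis, non-appendicitis
c, d = 57, 32    # <2 findings:  appendicitis, non-appendicitis

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```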

  6. Computational reduction of specimen noise to enable improved thermography characterization of flaws in graphite polymer composites

    Winfree, William P.; Howell, Patricia A.; Zalameda, Joseph N.

    2014-05-01

    Flaw detection and characterization with thermographic techniques in graphite polymer composites are often limited by localized variations in the thermographic response. Variations in properties such as acceptable porosity, fiber volume content and surface polymer thickness cause significant variations in the initial thermal response. These result in a "noise" floor that increases the difficulty of detecting and characterizing deeper flaws. A method is presented for computationally removing a significant amount of the "noise" from near-surface porosity by diffusing the early-time response, then subtracting it from subsequent responses. Simulations of the thermal response of a composite are utilized in defining the limitations of the technique. This method for reducing the data is shown to give considerable improvement in characterizing both the size and depth of damage. Examples are shown for data acquired on specimens with fabricated delaminations and impact damage.
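
    A minimal sketch of the described scheme, assuming the "diffusion" step can be approximated by a Gaussian blur of an early frame; the synthetic data, blur width and frame indices are illustrative assumptions, not the paper's parameters.

```python
# Sketch: spatially diffuse an early-time frame (dominated by near-surface
# clutter) and subtract it from all later frames before flaw detection.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
# Synthetic stack: 50 frames of 128x128 pixels with fixed near-surface
# clutter plus a weak, late-appearing "deep flaw" signal.
clutter = rng.normal(0.0, 0.1, size=(128, 128))
frames = np.stack([clutter + 0.01 * t for t in range(50)])
frames[25:, 60:70, 60:70] += 0.3          # deep flaw appears in later frames

early = frames[2]                          # early-time response
reference = gaussian_filter(early, sigma=4.0)   # "diffused" early response
corrected = frames - reference             # subtract from subsequent frames
print(float(corrected[40, 64, 64] - corrected[40, 10, 10]))  # flaw contrast ~0.3
```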

  8. Improvement of retinal blood vessel detection using morphological component analysis.

    Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza

    2015-03-01

    Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases, including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is producing false-positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet wavelet transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. Accuracies of 0.9523 and 0.9590 have been achieved on the DRIVE and STARE datasets, respectively, which are not only greater than those of most methods but are also superior to the second human observer's performance. The results show that the proposed method can achieve improved detection in abnormal retinal images and decrease false-positive vessels in pathological regions compared to other methods. The robustness of the method in the presence of noise is also shown via experimental results. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
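
    A rough sketch of the vessel-enhancement and adaptive-thresholding steps, assuming oriented 2-D Morlet/Gabor kernels as the wavelet; the kernel parameters and threshold rule are assumptions, not the authors' settings, and the MCA separation stage is omitted.

```python
# Sketch: filter an (inverted green channel) image with oriented Gabor
# kernels, keep the maximum response over orientations, then threshold.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, sigma=2.0, freq=0.15, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def enhance_vessels(img):
    responses = [convolve(img, gabor_kernel(t))
                 for t in np.linspace(0, np.pi, 8, endpoint=False)]
    strength = np.max(responses, axis=0)         # best orientation per pixel
    return strength > strength.mean() + strength.std()  # simple adaptive threshold

img = np.random.rand(64, 64)   # stand-in for an inverted green channel
print(enhance_vessels(img).sum(), "candidate vessel pixels")
```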

  9. Improvements in detection system for pulse radiolysis facility

    Rao, V N; Manimaran, P; Mishra, R K; Mohan, H; Mukherjee, T; Nadkarni, S A; Sapre, A V; Shinde, S J; Toley, M

    2002-01-01

    This report describes the improvements made in the detection system of the pulse radiolysis facility based on a 7 MeV Linear Electron Accelerator (LINAC) located in the Radiation Chemistry and Chemical Dynamics Division of Bhabha Atomic Research Centre. The facility was created in 1986 for kinetic studies of transient species whose absorption lies between 200 and 700 nm. The newly developed detection circuits consist of a silicon (Si) photodiode (PD) detector for the wavelength range 450-1100 nm and a germanium (Ge) photodiode detector for the wavelength range 900-1600 nm. With this photodiode-based detection set-up, kinetic experiments are now routinely carried out in the wavelength range 450-1600 nm. The performance of these circuits has been tested using standard chemical systems. The rise time has been found to be 150 ns. The photo-multiplier tube (PMT) bleeder circuit has been modified. A new DC back-off circuit has been built and installed in order to avoid droop at longer time scales. A steady baselin...

  10. Mobile Anomaly Detection Based on Improved Self-Organizing Maps

    Chunyong Yin

    2017-01-01

    Anomaly detection has always been a focus of researchers, and the development of mobile devices raises new challenges for it. For example, mobile devices can keep a connection with the Internet and are rarely turned off, even at night. This means mobile devices can attack nodes or be attacked at night without being perceived by users, and their behavior has different characteristics from Internet behavior. The introduction of data mining has made leaps forward in this field. Self-organizing maps (SOM), one of the well-known clustering algorithms, are affected by the initial weight vectors, which makes their clustering results unstable. The optimal method of selecting initial clustering centers is therefore transplanted from K-means to SOM. To evaluate the performance of the improved SOM, we compare it with the traditional one on diverse datasets and on the KDD Cup99 dataset. The experimental results show that the improved SOM achieves a higher accuracy rate on general datasets; on the KDD Cup99 dataset, it achieves higher recall and precision rates.
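
    A minimal sketch of the transplanted initialization, assuming a k-means++-style seeding of the SOM weight grid; the grid size and data are placeholders, and the authors' exact selection procedure may differ.

```python
# Sketch: seed SOM initial weights with k-means++-style center selection
# instead of random values, to stabilize the clustering result.
import numpy as np

def kmeanspp_init(X, k, rng):
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # squared distance of each point to its nearest chosen center
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
grid_w, grid_h = 5, 5
init_weights = kmeanspp_init(X, grid_w * grid_h, rng).reshape(grid_w, grid_h, 4)
# init_weights can now replace the random initialization of a 5x5 SOM.
print(init_weights.shape)
```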

  11. Algorithm for detecting violations of traffic rules based on computer vision approaches

    Ibadov Samir

    2017-01-01

    We propose a new algorithm for automatically detecting traffic-rule violations, to improve pedestrian safety at unregulated pedestrian crossings. The algorithm proceeds in multiple steps: zebra-crossing detection, car detection, and pedestrian detection. For car detection, we use the Faster R-CNN deep learning detector. The algorithm shows promising results in detecting traffic-rule violations.
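
    The car-detection stage can be sketched with a pretrained Faster R-CNN from torchvision, as below; the pretrained weights, score threshold and downstream violation test are assumptions, not the authors' trained detector.

```python
# Sketch of the car-detection step using a pretrained Faster R-CNN.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 480, 640)          # stand-in for a crossing-camera frame
with torch.no_grad():
    out = model([image])[0]

CAR = 3                                  # COCO class id for "car"
cars = out["boxes"][(out["labels"] == CAR) & (out["scores"] > 0.5)]
# A violation test would then check each car box against the detected
# zebra-crossing region while a pedestrian is present on it.
print(cars.shape)
```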

  12. Improved Statistical Fault Detection Technique and Application to Biological Phenomena Modeled by S-Systems.

    Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N

    2017-09-01

    In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) with different window sizes. However, most real systems are nonlinear, so a linear PCA method cannot tackle the issue of non-linearity to a great extent. Thus, in this paper, first, we apply a nonlinear PCA to obtain an accurate principal component of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model. The KPCA is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), as this may further improve fault detection performance by reducing the FAR through an exponentially weighted moving average (EWMA). The developed detection method, called EWMA-GLRT, provides improved properties, such as a smaller missed detection rate, a smaller FAR and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and applied in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean. The idea behind a KPCA-based EWMA-GLRT fault detection algorithm is to
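
    A minimal sketch of the exponential weighting at the heart of EWMA-GLRT, applied to one-dimensional residuals; the smoothing factor, threshold rule and synthetic fault are illustrative assumptions, not the paper's KPCA pipeline.

```python
# Sketch: exponentially weighted residual statistic with a simple alarm rule.
import numpy as np

def ewma_statistic(residuals, lam=0.2):
    z = np.zeros_like(residuals, dtype=float)
    for t in range(len(residuals)):
        prev = z[t - 1] if t > 0 else 0.0
        z[t] = lam * residuals[t] + (1 - lam) * prev   # z_t = lam*r_t + (1-lam)*z_{t-1}
    return z

rng = np.random.default_rng(0)
r = rng.normal(0, 1, 300)
r[200:] += 2.0                       # small persistent fault starting at t = 200
z = ewma_statistic(r)
threshold = 3 * np.std(z[:150])      # threshold set from fault-free data
print("first alarm at t =", int(np.argmax(np.abs(z) > threshold)))
```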

  13. The use of gold nanoparticle aggregation for DNA computing and logic-based biomolecular detection

    Lee, In-Hee; Yang, Kyung-Ae; Zhang, Byoung-Tak; Lee, Ji-Hoon; Park, Ji-Yoon; Chai, Young Gyu; Lee, Jae-Hoon

    2008-01-01

    The use of DNA molecules as a physical computational material has attracted much interest, especially in the area of DNA computing. DNA is also useful for logical control and analysis of biological systems if efficient visualization methods are available. Here we present a quick and simple visualization technique that displays the results of the DNA computing process based on a colorimetric change induced by gold nanoparticle aggregation, and we apply it to the logic-based detection of biomolecules. Our results demonstrate its effectiveness in both DNA-based logical computation and logic-based biomolecular detection.

  14. Portopulmonary hypertension: Improved detection using CT and echocardiography in combination

    Devaraj, Anand [Royal Brompton and Harefield NHS Foundation Trust, Department of Radiology, London (United Kingdom); Loveridge, Robert; Bernal, William; Willars, Christopher; Wendon, Julia A.; Auzinger, Georg [King' s College Hospital NHS Foundation Trust, The Institute of Liver Studies, King' s Health Partners, King' s College London, London (United Kingdom); Bosanac, Diana; Stefanidis, Konstantinos; Desai, Sujal R. [King' s College Hospital NHS Foundation Trust, Department of Radiology, King' s Health Partners, King' s College London, London (United Kingdom)

    2014-10-15

    To establish the relationship between CT signs of pulmonary hypertension and mean pulmonary artery pressure (mPAP) in patients with liver disease, and to determine the additive value of CT in the detection of portopulmonary hypertension in combination with transthoracic echocardiography. Forty-nine patients referred for liver transplantation were retrospectively reviewed. Measured CT signs included the main pulmonary artery/ascending aorta diameter ratio (PA/AA_meas) and the mean left and right main PA diameter (RLPA_meas). Enlargement of the pulmonary artery compared to the ascending aorta was also assessed visually (PA/AA_vis). CT measurements were correlated with right-sided heart catheter-derived mPAP. The ability of PA/AA_vis combined with echocardiogram-derived right ventricular systolic pressure (RVSP) to detect portopulmonary hypertension was tested with ROC analysis. There were moderate correlations between mPAP and both PA/AA_meas and RLPA_meas (r_s = 0.41 and r_s = 0.42, respectively; p < 0.005). Compared to transthoracic echocardiography alone (AUC = 0.59, p = 0.23), a diagnostic algorithm incorporating PA/AA_vis and transthoracic echocardiography-derived RVSP improved the detection of portopulmonary hypertension (AUC = 0.8, p < 0.0001). CT contributes to the non-invasive detection of portopulmonary hypertension when used in a diagnostic algorithm with transthoracic echocardiography. CT may have a role in the pre-liver transplantation triage of patients with portopulmonary hypertension for right-sided heart catheterisation. (orig.)

  15. Computer Security: Virus Highlights Need for Improved Internet Management

    1989-06-01

    ...resources; disrupts the intended use of the Internet; or wastes resources, destroys the integrity of computer-based information, or compromises users... and information from the other party in order to assist in preparation for trial. (Fragments of GAO report GAO/IMTEC-89-57, Internet Computer Virus.)

  16. Computer-aided detection of breast masses: Four-view strategy for screening mammography

    Wei Jun; Chan Heangping; Zhou Chuan; Wu Yita; Sahiner, Berkman; Hadjiiski, Lubomir M.; Roubidoux, Marilyn A.; Helvie, Mark A.

    2011-01-01

    Purpose: To improve the performance of a computer-aided detection (CAD) system for mass detection by using four-view information in screening mammography. Methods: The authors developed a four-view CAD system that emulates radiologists' reading by using the craniocaudal and mediolateral oblique views of the ipsilateral breast to reduce false positives (FPs) and the corresponding views of the contralateral breast to detect asymmetry. The CAD system consists of four major components: (1) Initial detection of breast masses on individual views, (2) information fusion of the ipsilateral views of the breast (referred to as two-view analysis), (3) information fusion of the corresponding views of the contralateral breast (referred to as bilateral analysis), and (4) fusion of the four-view information with a decision tree. The authors collected two data sets for training and testing of the CAD system: A mass set containing 389 patients with 389 biopsy-proven masses and a normal set containing 200 normal subjects. All cases had four-view mammograms. The true locations of the masses on the mammograms were identified by an experienced MQSA radiologist. The authors randomly divided the mass set into two independent sets for cross validation training and testing. The overall test performance was assessed by averaging the free response receiver operating characteristic (FROC) curves of the two test subsets. The FP rates during the FROC analysis were estimated by using the normal set only. The jackknife free-response ROC (JAFROC) method was used to estimate the statistical significance of the difference between the test FROC curves obtained with the single-view and the four-view CAD systems. Results: Using the single-view CAD system, the breast-based test sensitivities were 58% and 77% at the FP rates of 0.5 and 1.0 per image, respectively. With the four-view CAD system, the breast-based test sensitivities were improved to 76% and 87% at the corresponding FP rates, respectively

  17. Eye Detection and Tracking for Intelligent Human Computer Interaction

    Yin, Lijun

    2006-01-01

    .... In this project, Dr. Lijun Yin has developed a new algorithm for detecting and tracking eyes under an unconstrained environment using a single ordinary camera or webcam. The new algorithm is advantageous in that it works in a non-intrusive way, based on a so-called Topographic Context approach.

  18. Single-trial detection of visual evoked potentials by common spatial patterns and wavelet filtering for brain-computer interface.

    Tu, Yiheng; Huang, Gan; Hung, Yeung Sam; Hu, Li; Hu, Yong; Zhang, Zhiguo

    2013-01-01

    Event-related potentials (ERPs) are widely used in brain-computer interface (BCI) systems as input signals conveying a subject's intention. A fast and reliable single-trial ERP detection method can be used to develop a BCI system with both high speed and high accuracy. However, most single-trial ERP detection methods are developed for offline EEG analysis and thus have a high computational complexity and need manual operations. Therefore, they are not applicable to practical BCI systems, which require a low-complexity and automatic ERP detection method. This work presents a joint spatial-time-frequency filter that combines common spatial patterns (CSP) and wavelet filtering (WF) for improving the signal-to-noise ratio (SNR) of visual evoked potentials (VEP), which can lead to a single-trial ERP-based BCI.
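
    The CSP stage can be sketched as a generalized eigendecomposition of the two class covariance matrices, as below; the synthetic trials and the normalization choice are assumptions, and the wavelet-filtering stage is omitted.

```python
# Sketch: CSP spatial filters from two classes of multichannel trials.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b):
    # trials_*: arrays of shape (n_trials, n_channels, n_samples)
    cov = lambda trials: np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)        # generalized eigenproblem
    order = np.argsort(vals)[::-1]        # leading filters favor class A variance
    return vecs[:, order].T               # rows are spatial filters

rng = np.random.default_rng(0)
a = rng.normal(size=(40, 8, 256)); a[:, 0] *= 3   # class A: strong channel 0
b = rng.normal(size=(40, 8, 256)); b[:, 7] *= 3   # class B: strong channel 7
W = csp_filters(a, b)
print(W.shape)                            # (8, 8): apply W @ trial to filter
```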

  19. Creating a two-layered augmented artificial immune system for application to computer network intrusion detection

    Judge, Matthew G.; Lamont, Gary B.

    2009-05-01

    Computer network security has become a very serious concern of commercial, industrial, and military organizations due to the increasing number of network threats such as outsider intrusions and insider covert activities. An important security element is network intrusion detection, a difficult real-world problem that has been addressed through many different solution attempts. Using an artificial immune system has been shown to be one of the most promising approaches. By enhancing jREMISA, a multi-objective evolutionary algorithm inspired artificial immune system, with a secondary defense layer, we produce improved accuracy of intrusion classification and flexibility in responsiveness. This responsiveness can be leveraged to provide a much more powerful and accurate system through the use of increased processing time and dedicated hardware, which has the flexibility of being located out of band.

  20. Article Commentary: Computer-Aided Detection of Breast Cancer — Have All Bases Been Covered?

    Gautam S. Muralidhar

    2008-01-01

    The use of computer-aided detection (CAD) systems in mammography has been the subject of intense research for many years. These systems have been developed with the aim of helping radiologists to detect signs of breast cancer. However, the effectiveness of CAD systems in practice has sparked recent debate. In this commentary, we argue that computer-aided detection will become an increasingly important tool for radiologists in the early detection of breast cancer, but there are some important issues that need to be given greater focus in designing CAD systems if they are to reach their full potential.

  1. Deep neural network-based computer-assisted detection of cerebral aneurysms in MR angiography.

    Nakao, Takahiro; Hanaoka, Shouhei; Nomura, Yukihiro; Sato, Issei; Nemoto, Mitsutaka; Miki, Soichiro; Maeda, Eriko; Yoshikawa, Takeharu; Hayashi, Naoto; Abe, Osamu

    2018-04-01

    The usefulness of computer-assisted detection (CAD) for detecting cerebral aneurysms has been reported; therefore, the improved performance of CAD will help to detect cerebral aneurysms. To develop a CAD system for intracranial aneurysms on unenhanced magnetic resonance angiography (MRA) images based on a deep convolutional neural network (CNN) and a maximum intensity projection (MIP) algorithm, and to demonstrate the usefulness of the system by training and evaluating it using a large dataset. Retrospective study. There were 450 cases with intracranial aneurysms. The diagnoses of brain aneurysms were made on the basis of MRA, which was performed as part of a brain screening program. Noncontrast-enhanced 3D time-of-flight (TOF) MRA on 3T MR scanners. In our CAD, we used a CNN classifier that predicts whether each voxel is inside or outside aneurysms by inputting MIP images generated from a volume of interest (VOI) around the voxel. The CNN was trained in advance using manually inputted labels. We evaluated our method using 450 cases with intracranial aneurysms, 300 of which were used for training, 50 for parameter tuning, and 100 for the final evaluation. Free-response receiver operating characteristic (FROC) analysis. Our CAD system detected 94.2% (98/104) of aneurysms with 2.9 false positives per case (FPs/case). At a sensitivity of 70%, the number of FPs/case was 0.26. We showed that the combination of a CNN and an MIP algorithm is useful for the detection of intracranial aneurysms. Level of Evidence: 4. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018;47:948-953. © 2017 International Society for Magnetic Resonance in Medicine.
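
    A minimal sketch of the MIP-based input generation, assuming a cubic VOI and projections along the three axes; the VOI size and the toy volume are assumptions, not the paper's values.

```python
# Sketch: maximum-intensity projections of a small VOI around a candidate
# voxel, producing the 2D images a CNN classifier would consume.
import numpy as np

def voi_mips(volume, center, half=8):
    z, y, x = center
    voi = volume[z - half:z + half, y - half:y + half, x - half:x + half]
    return [voi.max(axis=k) for k in range(3)]   # axial, coronal, sagittal MIPs

vol = np.random.rand(64, 64, 64)                 # stand-in for a TOF-MRA volume
mips = voi_mips(vol, center=(32, 32, 32))
print([m.shape for m in mips])                   # three 16x16 images for the CNN
```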

  2. Cloud Computing as a Tool for Improving Business Competitiveness

    Wišniewski Michał

    2014-08-01

    This article organizes knowledge on cloud computing, presenting the classification of deployment models, characteristics and service models. The author, looking at the problem from the entrepreneur's perspective, draws attention to the differences in the benefits depending on the cloud computing deployment models and considers an effective way of selecting cloud computing services according to the specificity of the organization. Within this work, the thesis is considered that, in economic terms, cloud computing is not always the best solution for an organization. This raises the question, "What kind of tools should be used to estimate the usefulness of the cloud computing service model in the enterprise?"

  3. Max-AUC feature selection in computer-aided detection of polyps in CT colonography.

    Xu, Jian-Wu; Suzuki, Kenji

    2014-03-01

    We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level.
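
    A simplified sketch of the selection criterion: greedy forward selection scored by the cross-validated AUC of an SVM. Full SFFS also interleaves backward "floating" removals and the paper's statistical stopping test, both omitted here; the dataset and stopping tolerance are placeholders.

```python
# Sketch: forward feature selection that maximizes cross-validated AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           random_state=0)

def auc_of(subset):
    clf = SVC(kernel="rbf", gamma="scale")
    return cross_val_score(clf, X[:, subset], y, cv=5, scoring="roc_auc").mean()

selected, remaining, best_auc = [], list(range(X.shape[1])), 0.0
while remaining:
    scores = {f: auc_of(selected + [f]) for f in remaining}
    f, auc = max(scores.items(), key=lambda kv: kv[1])
    if auc <= best_auc + 1e-4:            # stop when AUC no longer improves
        break
    selected.append(f); remaining.remove(f); best_auc = auc
print("selected features:", selected, "AUC: %.3f" % best_auc)
```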

  4. Using new edges for anomaly detection in computer networks

    Neil, Joshua Charles

    2015-05-19

    Creation of new edges in a network may be used as an indication of a potential attack on the network. Historical data of a frequency with which nodes in a network create and receive new edges may be analyzed. Baseline models of behavior among the edges in the network may be established based on the analysis of the historical data. A new edge that deviates from a respective baseline model by more than a predetermined threshold during a time window may be detected. The new edge may be flagged as potentially anomalous when the deviation from the respective baseline model is detected. Probabilities for both new and existing edges may be obtained for all edges in a path or other subgraph. The probabilities may then be combined to obtain a score for the path or other subgraph. A threshold may be obtained by calculating an empirical distribution of the scores under historical conditions.
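
    A toy sketch of the idea, assuming a simple smoothed estimate of each node's new-edge rate and a log-probability path score; the patented method's actual baseline models and thresholding are richer than this.

```python
# Sketch: score a path of edges by how surprising its new edges are,
# given each source node's historical rate of creating new edges.
import math
from collections import defaultdict

history = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "d"), ("b", "c")]
seen = defaultdict(set)
new_edge_events = defaultdict(int)
total_events = defaultdict(int)
for src, dst in history:
    total_events[src] += 1
    if dst not in seen[src]:
        new_edge_events[src] += 1
        seen[src].add(dst)

def edge_logp(src, dst):
    # Laplace-smoothed probability that src creates a new edge.
    p_new = (new_edge_events[src] + 1) / (total_events[src] + 2)
    return math.log(p_new if dst not in seen[src] else 1 - p_new)

path = [("a", "x"), ("x", "y")]                 # a chain of brand-new edges
score = sum(edge_logp(s, d) for s, d in path)   # low score => anomalous path
print(score)
```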

  5. Computer Aided Detection of Breast Masses in Digital Tomosynthesis

    2008-06-01

    ...query ROI of unknown pathology, all other ROIs generated from that specific subject's reconstructed volumes were excluded from the KB. For scheme B, all the FPs... Qian, L. Li, and L.P. Clarke, "Image feature extraction for mass detection in digital mammography: Influence of wavelet analysis." Med. Phys. 26

  6. Deception Detection in a Computer-Mediated Environment: Gender, Trust, and Training Issues

    Dziubinski, Monica

    2003-01-01

    .... This research draws on communication and deception literature to develop a conceptual model proposing relationships between deception detection abilities in a computer-mediated environment, gender, trust, and training...

  7. Incidentally Detected Enhancing Breast Lesions on Chest Computed Tomography

    Lin, Wen Chiung; Hsu, Hsian He; Yu, Jyh Cherng; Hsu, Giu Cheng; Yu, Cheng Ping; Chang, Tsun Hou; Huang, Guo Shu; Li, Chao Shiang

    2011-01-01

    To evaluate the nature and imaging appearance of incidental enhancing breast lesions detected on routine contrast-enhanced chest CT. Twenty-three patients with incidental enhancing breast lesions on contrast-enhanced chest CT were retrospectively reviewed. The breast lesions were reviewed on unenhanced and enhanced CT, and evaluated by observing the shapes, margins, enhancement patterns and backgrounds of the lesions. A histopathologic diagnosis or long-term follow-up served as the reference standard. Sixteen (70%) patients had malignant breast lesions and seven (30%) had benign lesions. In 10 patients, the breast lesions were exclusively detected on contrast-enhanced CT. On unenhanced CT, breast lesions with fibroglandular backgrounds were prone to be obscured (p < 0.001). Incidental primary breast cancer showed a non-significant trend toward a higher percentage of irregular margins (p = 0.056). All four incidental breast lesions with non-mass-like enhancement were proven to be malignant. Routine contrast-enhanced chest CT can reveal sufficient detail to allow for the detection of unsuspected breast lesions, some of which may prove malignant. An irregular margin of an incidental enhancing breast lesion can be considered a sign suggestive of malignancy.

  8. Computed Tomography Features of Incidentally Detected Diffuse Thyroid Disease

    Myung Ho Rho

    2014-01-01

    Objective. This study aimed to evaluate the CT features of incidentally detected diffuse thyroid disease (DTD) in patients who underwent thyroidectomy, and to assess the diagnostic accuracy of CT diagnosis. Methods. We enrolled 209 consecutive patients who received preoperative neck CT and subsequent thyroid surgery. The neck CT in each case was retrospectively investigated by a single radiologist. We evaluated the diagnostic accuracy of individual CT features and the cut-off CT criteria for detecting DTD by comparing the CT features with histopathological results. Results. Histopathological examination of the 209 cases revealed normal thyroid (n=157), Hashimoto thyroiditis (n=17), non-Hashimoto lymphocytic thyroiditis (n=34), and diffuse hyperplasia (n=1). The CT features suggestive of DTD included low attenuation, inhomogeneous attenuation, increased glandular size, lobulated margin, and inhomogeneous enhancement. ROC curve analysis revealed that CT diagnosis of DTD based on the CT classification of "3 or more" abnormal CT features was superior. When the "3 or more" CT classification was selected, the sensitivity, specificity, positive and negative predictive values, and accuracy of CT diagnosis for DTD were 55.8%, 95.5%, 80.6%, 86.7%, and 85.6%, respectively. Conclusion. Neck CT may be helpful for the detection of incidental DTD.

  9. Automatic pitch detection for a computer game interface

    Fonseca Solis, Juan M.

    2015-01-01

    Software able to recognize notes played by musical instruments was created through automatic pitch recognition. A pitch recognition algorithm, the C implementation of SWIPEP, is embedded into a software project. A memory game was chosen for the project: the computer plays a sequence of notes, which the user listens to and plays back using a soprano recorder flute. The basic concepts needed to understand the acoustic phenomena involved are explained. The paper is aimed at students with basic programming knowledge who want to incorporate sound processing into their projects. (author)
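
    The project embeds the C implementation of SWIPEP; as a hedged stand-in, the sketch below uses a much simpler autocorrelation pitch estimator and maps the result to a note name. The sampling rate, search band and test tone are illustrative choices.

```python
# Sketch: autocorrelation pitch detection plus note naming for a game loop.
import numpy as np

def detect_pitch(signal, sr, fmin=200.0, fmax=2000.0):
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]  # non-negative lags
    lo, hi = int(sr / fmax), int(sr / fmin)                  # lag search band
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def note_name(freq):
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    n = int(round(12 * np.log2(freq / 440.0))) + 69          # MIDI note number
    return names[n % 12] + str(n // 12 - 1)

sr = 22050
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 523.25 * t)      # C5, as a soprano recorder might play
f = detect_pitch(tone, sr)
print(f, note_name(f))                      # ~523 Hz -> C5
```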

  10. Computer-aided detection of masses in digital tomosynthesis mammography: Comparison of three approaches

    Chan Heangping; Wei Jun; Zhang Yiheng; Helvie, Mark A.; Moore, Richard H.; Sahiner, Berkman; Hadjiiski, Lubomir; Kopans, Daniel B.

    2008-01-01

    The authors are developing a computer-aided detection (CAD) system for masses on digital breast tomosynthesis mammograms (DBT). Three approaches were evaluated in this study. In the first approach, mass candidate identification and feature analysis are performed in the reconstructed three-dimensional (3D) DBT volume. A mass likelihood score is estimated for each mass candidate using a linear discriminant analysis (LDA) classifier. Mass detection is determined by a decision threshold applied to the mass likelihood score. A free response receiver operating characteristic (FROC) curve that describes the detection sensitivity as a function of the number of false positives (FPs) per breast is generated by varying the decision threshold over a range. In the second approach, prescreening of mass candidate and feature analysis are first performed on the individual two-dimensional (2D) projection view (PV) images. A mass likelihood score is estimated for each mass candidate using an LDA classifier trained for the 2D features. The mass likelihood images derived from the PVs are backprojected to the breast volume to estimate the 3D spatial distribution of the mass likelihood scores. The FROC curve for mass detection can again be generated by varying the decision threshold on the 3D mass likelihood scores merged by backprojection. In the third approach, the mass likelihood scores estimated by the 3D and 2D approaches, described above, at the corresponding 3D location are combined and evaluated using FROC analysis. A data set of 100 DBT cases acquired with a GE prototype system at the Breast Imaging Laboratory in the Massachusetts General Hospital was used for comparison of the three approaches. The LDA classifiers with stepwise feature selection were designed with leave-one-case-out resampling. In FROC analysis, the CAD system for detection in the DBT volume alone achieved test sensitivities of 80% and 90% at average FP rates of 1.94 and 3.40 per breast, respectively. With the

  11. Implementation. Improving caries detection, assessment, diagnosis and monitoring.

    Pitts, N B

    2009-01-01

    This chapter deals with improving the detection, assessment, diagnosis and monitoring of caries to ensure optimal personalized caries management. This can be achieved by delivering what we have (synthesized evidence and international consensus) better and more consistently, as well as driving research and innovation in the areas where we need them. There is a need to better understand the interrelated pieces of the jigsaw that makes up evidence-based dentistry, i.e. the linkages between (a) research and synthesis, (b) dissemination of research results and (c) the implementation of research findings which should ensure that research findings change practice at the clinician-patient level. The current situation is outlined; it is at the implementation step where preventive caries control seems to have failed in some countries but not others. Opportunities for implementation include: capitalizing on the World Health Organization's global policy for improvement of oral health, which sets out an action plan for health promotion and integrated disease prevention; utilizing the developments around the International Caries Detection and Assessment System wardrobe of options and e-learning; building on initiatives from the International Dental Federation and the American Dental Association and linking these to patients' preferences, the wider moves to wellbeing and health maintenance. Challenges for implementation include the slow pace of evolution around dental remuneration systems and some groups of dentists failing to embrace clinical prevention. In the future, implementation of current and developing evidence should be accompanied by research into getting research findings into routine practice, with impacts on the behaviour of patients, professionals and policy makers. Copyright 2009 S. Karger AG, Basel

  12. Foundations for Improvements to Passive Detection Systems - Final Report

    Labov, S E; Pleasance, L; Sokkappa, P; Craig, W; Chapline, G; Frank, M; Gronberg, J; Jernigan, J G; Johnson, S; Kammeraad, J; Lange, D; Meyer, A; Nelson, K; Pohl, B; Wright, D; Wurtz, R

    2004-01-01

    This project explores the scientific foundation and approach for improving passive detection systems for plutonium and highly enriched uranium in real applications. Sources of gamma-ray radiation of interest were chosen to represent a range of national security threats, naturally occurring radioactive materials, industrial and medical radiation sources, and natural background radiation. The gamma-ray flux emerging from these sources, which include unclassified criticality experiment configurations as surrogates for nuclear weapons, was modeled in detail. The performance of several types of gamma-ray imaging systems using Compton scattering was modeled and compared. A mechanism was created to model the combined source and background emissions and have the simulated radiation "scene" impinge on a model of a detector. These modeling tools are now being used in various projects to optimize detector performance and model detector sensitivity in complex measuring environments. This study also developed several automated algorithms for isotope identification from gamma-ray spectra and compared these to each other and to algorithms already in use. Verification testing indicated that these alternative isotope identification algorithms produced fewer false-positive and false-negative results than the "GADRAS" algorithms currently in use. In addition to these algorithms, which used binned spectra, a new approach to isotope identification using "event mode" analysis was developed. Finally, a technique using muons to detect nuclear material was explored.

  13. Improving the Lane Reference Detection for Autonomous Road Vehicle Control

    Felipe Jiménez

    2016-01-01

    Autonomous road vehicles are becoming increasingly important, and several techniques and sensors are being applied for vehicle control. This paper presents an alternative system for maintaining the position of autonomous vehicles without adding elements to the standard sensor architecture, by using a 3D laser scanner to continuously detect a reference element in situations in which the GNSS receiver fails or provides accuracy below the required level. Considering that the guidance variables are more accurately estimated when dealing with reference points in front of and behind the vehicle, an algorithm based on a vehicle dynamics mathematical model is proposed to extend the detected points in cases where the sensor is placed at the front of the vehicle. The algorithm has been tested when driving along a lane delimited by New Jersey barriers at both sides, and the results show correct behaviour. The system is capable of estimating the reference element behind the vehicle with sufficient accuracy when the laser scanner is placed at the front of it, so the robustness of the estimation of the control input variables (lateral and angular errors) is improved, making it unnecessary to place the sensor on the vehicle roof or to introduce additional sensors.

  14. Reproducibility of Computer-Aided Detection Marks in Digital Mammography

    Kim, Seung Ja; Moon, Woo Kyung; Cho, Nariya; Kim, Sun Mi; Im, Jung Gi; Cha, Joo Hee

    2007-01-01

    To evaluate the performance and reproducibility of a computer-aided detection (CAD) system on mediolateral oblique (MLO) digital mammograms taken serially, without release of breast compression. A CAD system was applied preoperatively to full-field digital mammograms of two MLO views taken without release of breast compression in 82 patients (age range: 33-83 years; mean age: 49 years) with previously diagnosed breast cancers. The total number of visible lesion components in the 82 patients was 101: 66 masses and 35 microcalcifications. We analyzed the sensitivity and reproducibility of the CAD marks. The sensitivity of the CAD system for first MLO views was 71% (47/66) for masses and 80% (28/35) for microcalcifications. The sensitivity of the CAD system for second MLO views was 68% (45/66) for masses and 17% (6/35) for microcalcifications. In 84 ipsilateral serial MLO image sets (two patients had bilateral cancers), identical images, regardless of the existence of CAD marks, were obtained for 35% (29/84), and identical images with CAD marks were obtained for 29% (23/78). For contralateral MLO images, identical images regardless of the existence of CAD marks were obtained for 65% (52/80), and identical images with CAD marks for 28% (11/39). The reproducibility of CAD marks for the true-positive masses in serial MLO views was 84% (42/50), and that for the true-positive microcalcifications was 0% (0/34). The CAD system on digital mammograms showed a high sensitivity for detecting masses and microcalcifications. However, the reproducibility of microcalcification marks was very low in MLO views taken serially without release of breast compression. Minute positional change and patient movement can alter the images and have a significant effect on the algorithm utilized by the CAD system for detecting microcalcifications.

  15. Enhancement of optic cup detection through an improved vessel kink detection framework

    Wong, Damon W. K.; Liu, Jiang; Tan, Ngan Meng; Zhang, Zhuo; Lu, Shijian; Lim, Joo Hwee; Li, Huiqi; Wong, Tien Yin

    2010-03-01

    Glaucoma is a leading cause of blindness. The presence and extent of progression of glaucoma can be determined if the optic cup can be accurately segmented from retinal images. In this paper, we present a framework which improves the detection of the optic cup. First, a region of interest (ROI) is obtained from the retinal fundus image, and a pallor-based preliminary cup contour estimate is determined. Patches are then extracted from the ROI along this contour. To improve the usability of the patches, adaptive methods are introduced to ensure the patches are within the optic disc and to minimize redundant information. The patches are then analyzed for vessels by an edge transform which generates pixel segments of likely vessel candidates. Wavelet, color and gradient information are used as input features for an SVM model to classify the candidates as vessel or non-vessel. Subsequently, a rigorous non-parametric method is adopted, in which a bi-stage multi-resolution approach is used to probe and localize the location of kinks along the vessels. Finally, contextual information is used to fuse pallor and kink information to obtain an enhanced optic cup segmentation. Using a batch of 21 images obtained from the Singapore Eye Research Institute, the new method results in a 12.64% reduction in the average overlap error against a pallor-only cup, indicating viable improvements in the segmentation and supporting the use of kinks for optic cup detection.

  16. Detection of Failure in Asynchronous Motor Using Soft Computing Method

    Vinoth Kumar, K.; Sony, Kevin; Achenkunju John, Alan; Kuriakose, Anto; John, Ano P.

    2018-04-01

    This paper investigates stator short-winding failure of the asynchronous motor and its effects on motor current spectra. A fuzzy logic approach, i.e., a model-based technique, may help to detect asynchronous motor failure: fuzzy logic resembles human reasoning and enables linguistic inference from vague data. A dynamic model of the asynchronous motor is developed with a fuzzy logic classifier to investigate stator inter-turn failure and open-phase failure. A hardware implementation was carried out with LabVIEW for the online monitoring of faults.

  17. Improving the detection of illicit substance use in preoperative anesthesiological assessment.

    Kleinwächter, R; Kork, F; Weiss-Gerlach, E; Ramme, A; Linnen, H; Radtke, F; Lütz, A; Krampe, H; Spies, C D

    2010-01-01

    Illicit substance use (ISU) is a worldwide burden, and its prevalence in surgical patients has not been well investigated. Co-consumption of legal substances, such as alcohol and tobacco, complicates the perioperative management and is frequently underestimated during routine preoperative assessment. The aim of this study was to compare the anesthesiologists' detection rate of ISU during routine preoperative assessment with a computerized self-assessment questionnaire. In total, 2,938 patients were included in this study. Prior to preoperative assessment, patients were asked to complete a computer-based questionnaire that addressed ISU, alcohol use disorder (AUDIT), nicotine use (Fagerström) and socio-economic variables (education, income, employment, partnership and size of household). Medical records were reviewed, and the anesthesiologists' detection of ISU was compared to the patients' self-reported ISU. Seven point five percent of patients reported ISU within the previous twelve months. ISU was highest in the age group between 18 and 30 years (26.4%; P<0.01). Patients reporting ISU were more often men than women (P<0.01), smokers (P<0.01) and tested positive for alcohol use disorder (P<0.01). Anesthesiologists detected ISU in one in 43 patients, whereas the computerized self-assessment reported it in one in 13 patients. The detection was best in the subgroup self-reporting frequent ISU (P<0.01). Anesthesiologists underestimate the prevalence of ISU. Computer-based self-assessment increases the detection of ISU in preoperative assessment and may decrease perioperative risk. More strategies to improve the detection of ISU as well as brief interventions for ISU are required in preoperative assessment clinics.

  18. Computers and Communications. Improving the Employability of Persons with Handicaps.

    Deitel, Harvey M.

    1984-01-01

    Reviews applications of computer and communications technologies for persons with visual, hearing, physical, speech, and language impairments, as well as the effects of technologies on transportation, work at home, education, and other aspects affecting the employment of the disabled. (SK)

  19. Spying on real-time computers to improve performance

    Taff, L.M.

    1975-01-01

    The sampled program-counter histogram, an established technique for shortening the execution times of programs, is described for a real-time computer. The use of a real-time clock allows particularly easy implementation. (Auth.)
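
    In modern terms, the technique amounts to a sampling profiler: a timer interrupt periodically records the program counter, and the resulting histogram reveals the hot spots. A toy, POSIX-only Python sketch of the idea (an assumption of this edit, not the paper's real-time implementation):

    ```python
    # Toy illustration of a sampled program-counter histogram: a profiling
    # timer periodically records where execution is, and the hot spots
    # dominate the histogram. (POSIX-only; interval and workload are
    # arbitrary choices for the demo.)
    import collections
    import signal

    samples = collections.Counter()

    def sample(signum, frame):
        # Record function name and line number -- the "program counter".
        samples[(frame.f_code.co_name, frame.f_lineno)] += 1

    signal.signal(signal.SIGPROF, sample)
    signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)   # sample every 10 ms of CPU time

    def busy():
        total = 0
        for i in range(10_000_000):
            total += i * i
        return total

    busy()
    signal.setitimer(signal.ITIMER_PROF, 0)            # stop sampling
    for (func, line), n in samples.most_common(5):
        print(f"{func}:{line}  {n} samples")
    ```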

  20. Recent Improvements to CHEF, a Framework for Accelerator Computations

    Ostiguy, J.-F.; Michelotti, L.P.; /Fermilab

    2009-05-01

    CHEF is a body of software dedicated to accelerator-related computations. It consists of a hierarchical set of libraries and a stand-alone application based on the latter. The implementation language is C++; the code makes extensive use of templates and modern idioms such as iterators, smart pointers and generalized function objects. CHEF has been described in a few contributions at previous conferences. In this paper, we provide an overview and discuss recent improvements. Formally, CHEF refers to two distinct but related things: (1) a set of class libraries; and (2) a stand-alone application based on these libraries. The application makes use of and exposes a subset of the capabilities provided by the libraries. CHEF has its ancestry in efforts started in the early nineties. At that time, A. Dragt, E. Forest [2] and others showed that ring dynamics can be formulated in a way that puts maps, rather than Hamiltonians, into a central role. Automatic differentiation (AD) techniques, which were just coming of age, were a natural fit in a context where maps are represented by their Taylor approximations. The initial vision, which CHEF carried over, was to develop a code that (1) concurrently supports conventional tracking, linear and non-linear map-based techniques, (2) avoids 'hardwired' approximations that are not under user control, and (3) provides building blocks for applications. C++ was adopted as the implementation language because of its comprehensive support for operator overloading and the equal status it confers to built-in and user-defined data types. It should be mentioned that acceptance of AD techniques in accelerator science owes much to the pioneering work of Berz [1], who implemented--in Fortran--the first production-quality AD engine (the foundation for the code COSY). Nowadays other engines are available, but few are native C++ implementations. Although AD engines and map-based techniques are making their way into more traditional codes e.g. [5

  1. Recent Improvements to CHEF, a Framework for Accelerator Computations

    Ostiguy, J.-F.; Michelotti, L.P.

    2009-01-01

    CHEF is a body of software dedicated to accelerator-related computations. It consists of a hierarchical set of libraries and a stand-alone application based on the latter. The implementation language is C++; the code makes extensive use of templates and modern idioms such as iterators, smart pointers and generalized function objects. CHEF has been described in a few contributions at previous conferences. In this paper, we provide an overview and discuss recent improvements. Formally, CHEF refers to two distinct but related things: (1) a set of class libraries; and (2) a stand-alone application based on these libraries. The application makes use of and exposes a subset of the capabilities provided by the libraries. CHEF has its ancestry in efforts started in the early nineties. At that time, A. Dragt, E. Forest [2] and others showed that ring dynamics can be formulated in a way that puts maps, rather than Hamiltonians, into a central role. Automatic differentiation (AD) techniques, which were just coming of age, were a natural fit in a context where maps are represented by their Taylor approximations. The initial vision, which CHEF carried over, was to develop a code that (1) concurrently supports conventional tracking, linear and non-linear map-based techniques, (2) avoids 'hardwired' approximations that are not under user control, and (3) provides building blocks for applications. C++ was adopted as the implementation language because of its comprehensive support for operator overloading and the equal status it confers to built-in and user-defined data types. It should be mentioned that acceptance of AD techniques in accelerator science owes much to the pioneering work of Berz [1], who implemented--in Fortran--the first production-quality AD engine (the foundation for the code COSY). Nowadays other engines are available, but few are native C++ implementations. Although AD engines and map-based techniques are making their way into more traditional codes e.g. [5], it is also

  2. Overlapped flowers yield detection using computer-based interface

    Anuradha Sharma

    2016-09-01

    Full Text Available Precision agriculture has always dealt with accurate and timely information about agricultural products. With the help of computer hardware and software technology, a decision support system can be designed that generates flower yield information and serves as a basis for the management and planning of flower marketing. Despite such technologies, some problems still arise; for example, the colour homogeneity of a specimen cannot be obtained so as to match the actual colour of the image, and flowers in the image may overlap. In this paper the implementation of a new ‘counting algorithm’ for overlapped flowers is discussed. To implement this algorithm, techniques and operations such as colour image segmentation using the HSV colour space and morphological operations have been used. The two most popular colour spaces, RGB and HSV, are used in this paper. The HSV colour space decouples brightness from the chromatic component of the image, by which it provides better results in cases of occlusion and overlapping.
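
    A minimal OpenCV sketch of the HSV segmentation and morphological clean-up described above; the input filename and the hue/saturation/value ranges are illustrative assumptions that would need tuning for real flower images.

    ```python
    # HSV-based colour segmentation with morphological clean-up, as described
    # above. The hue range for the target flower colour is an assumption.
    import cv2
    import numpy as np

    img = cv2.imread("flowers.jpg")                    # hypothetical input image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)         # decouple brightness (V) from chroma (H, S)

    # Keep pixels whose hue/saturation/value fall in the assumed flower range.
    mask = cv2.inRange(hsv, np.array([20, 80, 80]), np.array([35, 255, 255]))

    # Morphological opening removes speckle; closing fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Connected components give a first (non-overlap-aware) flower count.
    count, _ = cv2.connectedComponents(mask)
    print("flower regions (before overlap splitting):", count - 1)  # minus background
    ```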

  3. Computer-aided detection of renal calculi from noncontrast CT images using TV-flow and MSER features

    Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B.; Linguraru, Marius George; Yao, Jianhua; Summers, Ronald M.

    2015-01-01

    Purpose: Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. Methods: The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculi candidates. Finally, the authors computed texture and shape features that were input to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with detection approaches using anisotropic diffusion and no smoothing. Results: At a false positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 0.001) on all calculi from 1 to 433 mm3 in the testing dataset. The sensitivities of the detection methods using anisotropic diffusion and no smoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Conclusions: Experimental results demonstrated that TV-flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high-noise CTC images. Thus, the proposed method can potentially improve diagnosis. PMID:25563255
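
    A hedged sketch of the candidate-detection stage: TV flow is approximated here by scikit-image's TV denoising (a stand-in, not the authors' implementation), followed by OpenCV's MSER detector. The filename and the denoising weight are assumptions.

    ```python
    # Smooth the image while preserving blob-like structures, then extract
    # maximally stable extremal regions as calculus candidates.
    import cv2
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    ct_slice = cv2.imread("kidney_slice.png", cv2.IMREAD_GRAYSCALE)  # hypothetical CT slice
    smoothed = denoise_tv_chambolle(ct_slice.astype(float) / 255.0, weight=0.1)
    smoothed8 = (smoothed * 255).astype(np.uint8)

    mser = cv2.MSER_create()          # default parameters; tune delta/area in practice
    regions, boxes = mser.detectRegions(smoothed8)
    print("calculus candidates:", len(regions))
    ```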

  4. Improved optical ranging for space based gravitational wave detection

    Sutton, Andrew J; Shaddock, Daniel A; McKenzie, Kirk; Ware, Brent; De Vine, Glenn; Spero, Robert E; Klipstein, W

    2013-01-01

    The operation of 10^6 km scale laser interferometers in space will permit the detection of gravitational waves at previously inaccessible frequency regions. Multi-spacecraft missions, such as the Laser Interferometer Space Antenna (LISA), will use time delay interferometry to suppress the otherwise dominant laser frequency noise from their measurements. This is accomplished by performing sub-sample interpolation of the optical phase measurements recorded at each spacecraft for synchronization and cancellation of the laser frequency noise. These sub-sample interpolation time shifts depend upon the inter-spacecraft range and will be measured using a pseudo-random noise ranging modulation upon the science laser. One limit to the ranging performance is mutual interference between the outgoing and incoming ranging signals at each spacecraft. This paper reports on the demonstration of a noise cancellation algorithm which is shown to provide a factor of ∼8 suppression of the mutual interference noise. Demonstration of the algorithm in an optical test bed showed an RMS ranging error of 0.06 m, improved from 0.19 m in previous results, surpassing the 1 m RMS LISA specification and potentially improving the cancellation of laser frequency noise. (paper)
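
    The ranging principle itself is easy to illustrate: the received PRN code is a delayed copy of the transmitted code, and the delay, hence the range, is recovered from the peak of their cross-correlation. The numpy sketch below uses arbitrary demo values, not LISA parameters.

    ```python
    # Pseudo-random noise (PRN) ranging by cross-correlation.
    import numpy as np

    rng = np.random.default_rng(1)
    code = rng.choice([-1.0, 1.0], size=4096)        # PRN code chips
    true_delay = 137                                 # in samples
    received = np.roll(code, true_delay) + 0.5 * rng.normal(size=code.size)

    # Circular cross-correlation via FFT; the argmax estimates the delay.
    corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code))).real
    estimated_delay = int(np.argmax(corr))
    print("estimated delay:", estimated_delay)       # -> 137

    # Convert to range: delay * sample period * speed of light.
    fs = 1e6                                         # assumed sample rate [Hz]
    print("range estimate [m]:", estimated_delay / fs * 299_792_458.0)
    ```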

  5. Improvement of Level-1 PSA computer code package -A study for nuclear safety improvement-

    Park, Chang Kyu; Kim, Tae Woon; Ha, Jae Joo; Han, Sang Hoon; Cho, Yeong Kyun; Jeong, Won Dae; Jang, Seung Cheol; Choi, Young; Seong, Tae Yong; Kang, Dae Il; Hwang, Mi Jeong; Choi, Seon Yeong; An, Kwang Il

    1994-07-01

    This year is the second year of the Government-sponsored Mid- and Long-Term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The Improvement of Level-1 PSA Computer Codes', is divided into three main activities: (1) methodology development in under-developed fields such as risk assessment technology for plant shutdown and external events, (2) computer code package development for Level-1 PSA, and (3) applications of new technologies to reactor safety assessment. First, in the area of PSA methodology development, foreign PSA reports on shutdown and external events have been reviewed and various PSA methodologies have been compared. The Level-1 PSA code KIRAP and the CCF analysis code COCOA have been converted from KOS to Windows. A human reliability database has also been established this year. In the area of new technology applications, fuzzy set theory and entropy theory are used to estimate component life and to develop a new measure of uncertainty importance. Finally, in the field of applying PSA techniques to reactor regulation, a strategic study to develop the dynamic risk management tool PEPSI and the determination of the inspection and test priority of motor-operated valves based on risk importance worths have been studied. (Author)

  6. Improving early diagnosis of pulmonary infections in patients with febrile neutropenia using low-dose chest computed tomography.

    M G Gerritsen

    Full Text Available We performed a prospective study in patients with chemotherapy induced febrile neutropenia to investigate the diagnostic value of low-dose computed tomography compared to standard chest radiography. The aim was to compare both modalities for detection of pulmonary infections and to explore performance of low-dose computed tomography for early detection of invasive fungal disease. The low-dose computed tomography remained blinded during the study. A consensus diagnosis of the fever episode made by an expert panel was used as reference standard. We included 67 consecutive patients on the first day of febrile neutropenia. According to the consensus diagnosis 11 patients (16.4%) had pulmonary infections. Sensitivity, specificity, positive predictive value and negative predictive value were 36%, 93%, 50% and 88% for radiography, and 73%, 91%, 62% and 94% for low-dose computed tomography, respectively. An uncorrected McNemar showed no statistical difference (p = 0.197). Mean radiation dose for low-dose computed tomography was 0.24 mSv. Four out of 5 included patients diagnosed with invasive fungal disease had radiographic abnormalities suspect for invasive fungal disease on the low-dose computed tomography scan made on day 1 of fever, compared to none of the chest radiographs. We conclude that chest radiography has little value in the initial assessment of febrile neutropenia on day 1 for detection of pulmonary abnormalities. Low-dose computed tomography improves detection of pulmonary infiltrates and seems capable of detecting invasive fungal disease at a very early stage with a low radiation dose.

  7. Improving early diagnosis of pulmonary infections in patients with febrile neutropenia using low-dose chest computed tomography.

    Gerritsen, M G; Willemink, M J; Pompe, E; van der Bruggen, T; van Rhenen, A; Lammers, J W J; Wessels, F; Sprengers, R W; de Jong, P A; Minnema, M C

    2017-01-01

    We performed a prospective study in patients with chemotherapy induced febrile neutropenia to investigate the diagnostic value of low-dose computed tomography compared to standard chest radiography. The aim was to compare both modalities for detection of pulmonary infections and to explore performance of low-dose computed tomography for early detection of invasive fungal disease. The low-dose computed tomography remained blinded during the study. A consensus diagnosis of the fever episode made by an expert panel was used as reference standard. We included 67 consecutive patients on the first day of febrile neutropenia. According to the consensus diagnosis 11 patients (16.4%) had pulmonary infections. Sensitivity, specificity, positive predictive value and negative predictive value were 36%, 93%, 50% and 88% for radiography, and 73%, 91%, 62% and 94% for low-dose computed tomography, respectively. An uncorrected McNemar showed no statistical difference (p = 0.197). Mean radiation dose for low-dose computed tomography was 0.24 mSv. Four out of 5 included patients diagnosed with invasive fungal disease had radiographic abnormalities suspect for invasive fungal disease on the low-dose computed tomography scan made on day 1 of fever, compared to none of the chest radiographs. We conclude that chest radiography has little value in the initial assessment of febrile neutropenia on day 1 for detection of pulmonary abnormalities. Low-dose computed tomography improves detection of pulmonary infiltrates and seems capable of detecting invasive fungal disease at a very early stage with a low radiation dose.
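
    For reference, the four test characteristics reported above follow directly from a 2 × 2 confusion matrix; the sketch below uses illustrative counts, not the study data.

    ```python
    # Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix.
    def diagnostics(tp, fp, fn, tn):
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv":         tp / (tp + fp),
            "npv":         tn / (tn + fn),
        }

    print(diagnostics(tp=8, fp=5, fn=3, tn=51))   # illustrative counts only
    ```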

  8. The efficacy of using computer-aided detection (CAD) for detection of breast cancer in mammography screening

    Henriksen, Emilie L; Carlsen, Jonathan F; Vejborg, Ilse Mm

    2018-01-01

    Background Early detection of breast cancer (BC) is crucial in lowering the mortality. Purpose To present an overview of studies concerning computer-aided detection (CAD) in screening mammography for early detection of BC and compare diagnostic accuracy and recall rates (RR) of single reading (SR) with SR + CAD and double reading (DR) with SR + CAD. Material and Methods PRISMA guidelines were used as a review protocol. Articles on clinical trials concerning CAD for detection of BC in a screening population were included. The literature search resulted in 1522 records. A total of 1491 records were excluded by abstract and 18 were excluded by full text reading. A total of 13 articles were included. Results All but two studies from the SR vs. SR + CAD group showed an increased sensitivity and/or cancer detection rate (CDR) when adding CAD. The DR vs. SR + CAD group showed no significant differences...

  9. Observer training for computer-aided detection of pulmonary nodules in chest radiography

    de Boo, Diederick W.; van Hoorn, François; van Schuppen, Joost; Schijf, Laura; Scheerder, Maeke J.; Freling, Nicole J.; Mets, Onno; Weber, Michael; Schaefer-Prokop, Cornelia M.

    2012-01-01

    To assess whether short-term feedback helps readers to increase their performance using computer-aided detection (CAD) for nodule detection in chest radiography. The 140 CXRs (56 with a solitary CT-proven nodule and 84 negative controls) were divided into four subsets of 35; each were read in a

  10. Practical methods to improve the development of computational software

    Osborne, A. G.; Harding, D. W.; Deinert, M. R.

    2013-01-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)

  11. Improvements on coronal hole detection in SDO/AIA images using supervised classification

    Reiss Martin A.

    2015-01-01

    Full Text Available We demonstrate the use of machine learning algorithms in combination with segmentation techniques in order to distinguish coronal holes and filaments in SDO/AIA EUV images of the Sun. Based on two coronal hole detection techniques (intensity-based thresholding and SPoCA), we prepared datasets of manually labeled coronal hole and filament channel regions present on the Sun during the time range 2011–2013. By mapping the extracted regions from EUV observations onto HMI line-of-sight magnetograms we also include their magnetic characteristics. We computed shape measures from the segmented binary maps as well as first order and second order texture statistics from the segmented regions in the EUV images and magnetograms. These attributes were used for data mining investigations to identify the best-performing rule to differentiate between coronal holes and filament channels. We applied several classifiers, namely Support Vector Machine (SVM), Linear Support Vector Machine, Decision Tree, and Random Forest, and found that all classification rules achieve good results in general, with linear SVM providing the best performance (with a true skill statistic of ≈ 0.90). Additional information from magnetic field data systematically improves the performance across all four classifiers for the SPoCA detection. Since the calculation is inexpensive in computing time, this approach is well suited for applications on real-time data. This study demonstrates how a machine learning approach may help improve upon an unsupervised feature extraction method.
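
    The ranking metric is simple to compute: the true skill statistic is sensitivity plus specificity minus one. A small sketch with toy counts (not the study's confusion matrix):

    ```python
    # True skill statistic (TSS) from a binary confusion matrix:
    # TSS = sensitivity + specificity - 1.
    def true_skill_statistic(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity + specificity - 1.0

    # Toy counts for coronal-hole vs. filament-channel classification:
    print(true_skill_statistic(tp=90, fp=8, fn=10, tn=92))   # -> 0.82
    ```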

  12. SCALE-4 [Standardized Computer Analyses for Licensing Evaluation]: An improved computational system for spent-fuel cask analysis

    Parks, C.V.

    1989-01-01

    The purpose of this paper is to provide specific information regarding improvements available with Version 4.0 of the SCALE system and discuss the future of SCALE within the current computing and regulatory environment. The emphasis focuses on the improvements in SCALE-4 over that available in SCALE-3. 10 refs., 1 fig., 1 tab

  13. Phylogenetically informed logic relationships improve detection of biological network organization

    2011-01-01

    Background A "phylogenetic profile" refers to the presence or absence of a gene across a set of organisms, and it has been proven valuable for understanding gene functional relationships and network organization. Despite this success, few studies have attempted to search beyond just pairwise relationships among genes. Here we search for logic relationships involving three genes, and explore their potential application in gene network analyses. Results Taking advantage of a phylogenetic matrix constructed from the large orthologs database Roundup, we invented a method to create balanced profiles for individual triplets of genes that guarantee equal weight on the different phylogenetic scenarios of coevolution between genes. When we applied this idea to LAPP, the method to search for logic triplets of genes, the balanced profiles resulted in significant performance improvement and the discovery of hundreds of thousands more putative triplets than unadjusted profiles. We found that logic triplets detected biological network organization and identified key proteins and their functions, ranging from neighbouring proteins in local pathways, to well separated proteins in the whole pathway, and to the interactions among different pathways at the system level. Finally, our case study suggested that the directionality in a logic relationship and the profile of a triplet could disclose the connectivity between the triplet and surrounding networks. Conclusion Balanced profiles are superior to the raw profiles employed by traditional methods of phylogenetic profiling in searching for high order gene sets. Gene triplets can provide valuable information in detection of biological network organization and identification of key genes at different levels of cellular interaction. PMID:22172058
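
    To make the triplet idea concrete, the toy sketch below plants an XOR relationship between two binary presence/absence profiles and scores candidate logic functions by simple agreement; real analyses would use the balanced profiles described above rather than raw agreement.

    ```python
    # Test whether gene C's presence/absence profile follows a Boolean
    # combination of genes A and B across organisms. Toy data only.
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.integers(0, 2, size=60)
    B = rng.integers(0, 2, size=60)
    C = A ^ B                                      # plant an XOR relationship
    C = np.where(rng.random(60) < 0.05, 1 - C, C)  # 5% noise

    logic = {"AND": A & B, "OR": A | B, "XOR": A ^ B}
    for name, pred in logic.items():
        agreement = float(np.mean(pred == C))
        print(f"C ~ A {name} B : agreement = {agreement:.2f}")
    ```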

  14. Computer-aided system for detecting runway incursions

    Sridhar, Banavar; Chatterji, Gano B.

    1994-07-01

    A synthetic vision system for enhancing the pilot's ability to navigate and control the aircraft on the ground is described. The system uses the onboard airport database and images acquired by external sensors. Additional navigation information needed by the system is provided by the Inertial Navigation System and the Global Positioning System. The various functions of the system, such as image enhancement, map generation, obstacle detection, collision avoidance, guidance, etc., are identified. The available technologies, some of which were developed at NASA, that are applicable to the aircraft ground navigation problem are noted. Example images of a truck crossing the runway while the aircraft flies close to the runway centerline are described. These images are from a sequence of images acquired during one of the several flight experiments conducted by NASA to acquire data to be used for the development and verification of the synthetic vision concepts. These experiments provide a realistic database including video and infrared images, motion states from the Inertial Navigation System and the Global Positioning System, and camera parameters.

  15. On Improving Face Detection Performance by Modelling Contextual Information

    Atanasoaei, Cosmin; McCool, Chris; Marcel, Sébastien

    2010-01-01

    In this paper we present a new method to enhance object detection by removing false alarms and merging multiple detections in a principled way with few parameters. The method models the output of an object classifier, which we consider as the context. A hierarchical model is built using the detection distribution around a target sub-window to discriminate between false alarms and true detections. Next the context is used to iteratively refine the detections. Finally the detections are clustered...

  16. Computer-aided detection (CAD) of lung nodules and small tumours on chest radiographs

    De Boo, D.W.; Prokop, M.; Uffmann, M.; Ginneken, B. van; Schaefer-Prokop, C.M.

    2009-01-01

    Detection of focal pulmonary lesions is limited by quantum and anatomic noise and highly influenced by the variable perception capacity of the reader. Multiple studies have proven that lesions missed at the time of primary interpretation were visible on the chest radiographs in retrospect. Computer-aided diagnosis (CAD) schemes do not alter the anatomic noise but aim at decreasing the intrinsic limitations and variations of human perception by alerting the reader to suspicious areas in a chest radiograph when used as a 'second reader'. Multiple studies have shown that detection performance can be improved using CAD, especially for less experienced readers, at the cost of a variable decrease in specificity. There seems to be a substantial learning process for both experienced and inexperienced readers before they can optimally differentiate between false positive and true positive lesions and build up sufficient trust in the capabilities of these systems to use them to full advantage. Studies so far have focused on the stand-alone performance of the CAD schemes, to reveal the magnitude of their potential impact, or on retrospective evaluation of CAD as a second reader for selected study groups. Further research is needed to assess the performance of these systems in clinical routine and to determine the trade-off between the performance increase, in terms of increased sensitivity and decreased inter-reader variability, and the loss of specificity and the follow-up examinations secondarily indicated for further diagnostic workup.

  17. Improving personality facet scores with multidimensional computer adaptive testing

    Makransky, Guido; Mortensen, Erik Lykke; Glas, Cees A W

    2013-01-01

    personality tests contain many highly correlated facets. This article investigates the possibility of increasing the precision of the NEO PI-R facet scores by scoring items with multidimensional item response theory and by efficiently administering and scoring items with multidimensional computer adaptive...

  18. Improving a Computer Networks Course Using the Partov Simulation Engine

    Momeni, B.; Kharrazi, M.

    2012-01-01

    Computer networks courses are hard to teach as there are many details in the protocols and techniques involved that are difficult to grasp. Employing programming assignments as part of the course helps students to obtain a better understanding and gain further insight into the theoretical lectures. In this paper, the Partov simulation engine and…

  19. Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes

    Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.

    2002-01-01

    A method of accounting for fluid-to-fluid shear in between calculational cells over a wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area for the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with the flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)

  20. Factors cost effectively improved using computer simulations of ...

    LPhidza

    effectively managed using computer simulations in semi-arid conditions pertinent to much of sub-Saharan Africa. ... small scale farmers to obtain optimal crop yields thus ensuring their food security and livelihood is ... those that simultaneously incorporate and simulate processes involved throughout the course of crop ...

  1. Understanding and Improving the Performance Consistency of Distributed Computing Systems

    Yigitbasi, M.N.

    2012-01-01

    With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency and throughput

  2. Computational Intelligence based techniques for islanding detection of distributed generation in distribution network: A review

    Laghari, J.A.; Mokhlis, H.; Karimi, M.; Bakar, A.H.A.; Mohamad, Hasmaini

    2014-01-01

    Highlights: • Unintentional and intentional islanding, their causes, and solutions are presented. • Remote, passive, active and hybrid islanding detection techniques are discussed. • The limitations of these techniques in accurately detecting islanding are discussed. • The ability of computational intelligence techniques to detect islanding is discussed. • A review of ANN, fuzzy logic control, ANFIS, and decision tree techniques is provided. - Abstract: Accurate and fast islanding detection of distributed generation is highly important for its successful operation in distribution networks. Up to now, various islanding detection techniques based on communication, passive, active and hybrid methods have been proposed. However, each technique suffers from certain demerits that cause inaccuracies in islanding detection. Computational intelligence based techniques, due to their robustness and flexibility in dealing with complex nonlinear systems, are an option that might solve this problem. This paper aims to provide a comprehensive review of computational intelligence based techniques applied for islanding detection of distributed generation. Moreover, the paper compares the accuracies of computational intelligence based techniques with those of existing techniques to provide useful information for industry and utility researchers in determining the best method for their respective system

  3. Improved computer-assisted nuclear imaging in renovascular hypertension

    Gross, M.L.; Nally, J.V.; Potvini, W.J.; Clarke, H.S. Jr.; Higgins, J.T.; Windham, J.P.

    1985-01-01

    A computer-assisted program with digital background subtraction has been developed to analyze the initial 90 second Tc-99m DTPA renal flow scans in an attempt to quantitate the early isotope delivery to and uptake by the kidney. This study was designed to compare the computer-assisted 90 second DTPA scan with the conventional 30 minute I-131 Hippuran scan. Six patients with angiographically-proven unilateral renal artery stenosis were studied. The time activity curves for both studies were derived from regions of interest selected from the computer acquired dynamic images. The following parameters were used to assess renal blood flow: differential maximum activity, minimum/maximum activity ratio, and peak width. The computer-assisted DTPA study accurately predicted (6/6) the stenosed side documented angiographically, whereas the conventional Hippuran scan was clearly predictive in only 2/6. In selected cases successfully corrected surgically, the DTPA study proved superior in assessing the degree of patency of the graft. The best discriminatory factors when compared to a template synthesized from curves obtained from normal subjects were differential maximum activity and peak width. The authors conclude that: 1) the computer-assisted 90 second DTPA renal blood flow scan was superior to the conventional I-131 Hippuran scan in demonstrating unilateral reno-vascular disease; 2) the DTPA study was highly predictive of the angiographic findings; and 3) this non-invasive study should prove useful in the diagnosis and serial evaluation following surgery and/or angioplasty for renal artery stenosis

  4. An Improved Wavelet‐Based Multivariable Fault Detection Scheme

    Harrou, Fouzi; Sun, Ying; Madakyaru, Muddu

    2017-01-01

    Data observed from environmental and engineering processes are usually noisy and correlated in time, which makes the fault detection more difficult as the presence of noise degrades fault detection quality. Multiscale representation of data using
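
    A generic sketch of wavelet-based multiscale denoising for fault detection, assuming PyWavelets; the wavelet, decomposition level, threshold, and the planted step fault are illustrative choices, not the authors' scheme.

    ```python
    # Decompose a noisy sensor signal with a discrete wavelet transform,
    # shrink the detail coefficients, and reconstruct, so that a fault
    # residual stands out from the noise.
    import numpy as np
    import pywt

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 1024)
    signal = np.sin(2 * np.pi * 5 * t)
    signal[600:] += 0.8                          # a step fault at sample 600
    noisy = signal + 0.3 * rng.normal(size=t.size)

    coeffs = pywt.wavedec(noisy, "db4", level=4)
    threshold = 0.2
    denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                                     for c in coeffs[1:]]
    denoised = pywt.waverec(denoised_coeffs, "db4")
    print("residual jump at fault:", denoised[650] - denoised[550])
    ```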

  5. Improved computation method in residual life estimation of structural components

    Maksimović Stevan M.

    2013-01-01

    Full Text Available This work considers numerical computation methods and procedures for predicting fatigue crack growth in cracked, notched structural components. The computation method is based on fatigue life prediction using the strain energy density approach. Based on the strain energy density (SED) theory, a fatigue crack growth model is developed to predict the lifetime of fatigue crack growth for single or mixed mode cracks. The model is based on an equation expressed in terms of low cycle fatigue parameters. Attention is focused on crack growth analysis of structural components under variable amplitude loads. Crack growth is largely influenced by the effect of the plastic zone at the front of the crack. To obtain an efficient computation model, the plasticity-induced crack closure phenomenon is considered during fatigue crack growth. The use of the strain energy density method is efficient for fatigue crack growth prediction under cyclic loading in damaged structural components. The strain energy density method is convenient for engineering applications since it does not require any additional determination of fatigue parameters (those would otherwise need to be separately determined for the fatigue crack propagation phase); low-cycle fatigue parameters are used instead. Accurate determination of fatigue crack closure has been a complex task for years. The influence of this phenomenon can be considered by means of experimental and numerical methods. Both of these approaches are considered. Finite element analysis (FEA) has been shown to be a powerful and useful tool [1,6] to analyze crack growth and crack closure effects. Computation results are compared with available experimental results. [Projekat Ministarstva nauke Republike Srbije, br. OI 174001]

  6. Improving CMS data transfers among its distributed computing facilities

    Flix, J; Sartirana, A

    2001-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on thei...

  7. Improving CMS data transfers among its distributed computing facilities

    Flix, Jose

    2010-01-01

    CMS computing needs reliable, stable and fast connections among multi-tiered computing infrastructures. For data distribution, the CMS experiment relies on a data placement and transfer system, PhEDEx, managing replication operations at each site in the distribution network. PhEDEx uses the File Transfer Service (FTS), a low level data movement service responsible for moving sets of files from one site to another, while allowing participating sites to control the network resource usage. FTS servers are provided by Tier-0 and Tier-1 centres and are used by all computing sites in CMS, according to the established policy. FTS needs to be set up according to the Grid site's policies, and properly configured to satisfy the requirements of all Virtual Organizations making use of the Grid resources at the site. Managing the service efficiently requires good knowledge of the CMS needs for all kinds of transfer workflows. This contribution deals with a revision of FTS servers used by CMS, collecting statistics on the...

  8. Performance of computer-aided diagnosis for detection of lacunar infarcts on brain MR images: ROC analysis of radiologists' detection

    Uchiyama, Y.; Yokoyama, R.; Hara, T.; Fujita, H. [Dept. of Intelligent Image Information, Graduate School of Medicine, Gifu Univ. (Japan); Asano, T.; Kato, H.; Hoshi, H. [Dept. of Radiology, Graduate School of Medicine, Gifu Univ. (Japan); Yamakawa, H.; Iwama, T. [Dept. of Neurosurgery, Graduate School of Medicine, Gifu Univ. (Japan); Ando, H. [Dept. of Neurosurgery, Gifu Municipal Hospital (Japan); Yamakawa, H. [Dept. of Emergency and Critical Care Medicine, Chuno-Kousei Hospital (Japan)

    2007-06-15

    The detection and management of asymptomatic lacunar infarcts on magnetic resonance (MR) images are important tasks for radiologists to ensure the prevention of severe cerebral infarctions. However, accurate identification of lacunar infarcts is difficult. Therefore, we developed a computer-aided diagnosis (CAD) scheme for the detection of lacunar infarcts. The purpose of this study was to evaluate radiologists' performance in the detection of lacunar infarcts without and with use of the CAD scheme. 30 T1- and 30 T2-weighted images obtained from 30 patients were used for an observer study, consisting of 15 cases with a single lacunar infarct and 15 cases without any lacunar infarct. Six radiologists participated in the observer study. They interpreted the images for lacunar infarcts first without and then with use of the scheme. For all six observers, the average area under the receiver operating characteristic curve increased from 0.920 to 0.965 when they used the computer output. This CAD scheme might have the potential to improve the accuracy of radiologists' performance in the detection of lacunar infarcts on MR images. (orig.)

  9. Improving auscultatory proficiency using computer simulated heart sounds

    Hanan Salah EL-Deen Mohamed EL-Halawany

    2016-09-01

    Full Text Available This study aimed to examine the effects of 'Heart Sounds', a web-based program, on improving fifth-year medical students' auscultation skills in a medical school in Egypt. This program was designed for medical students to master cardiac auscultation skills in addition to their usual clinical medical courses. Pre- and post-tests were performed to assess students' auscultation skill improvement. Upon completing the training, students were required to complete a questionnaire to reflect on the learning experience they developed through the 'Heart Sounds' program. Results from pre- and post-tests revealed a significant improvement in students' auscultation skills. In examining male and female students' pre- and post-test results, we found that both male and female students had achieved a remarkable improvement in their auscultation skills. On the other hand, students stated clearly that the learning experience they had with the 'Heart Sounds' program was different from any other traditional way of teaching. They stressed that the program had significantly improved their auscultation skills and enhanced their self-confidence in their ability to practice those skills. It is also recommended that the 'Heart Sounds' learning experience be extended by assessing students' practical improvement in real-life situations.

  10. Fast automatic segmentation of anatomical structures in x-ray computed tomography images to improve fluorescence molecular tomography reconstruction.

    Freyer, Marcus; Ale, Angelique; Schulz, Ralf B; Zientkowska, Marta; Ntziachristos, Vasilis; Englmeier, Karl-Hans

    2010-01-01

    The recent development of hybrid imaging scanners that integrate fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) allows the utilization of x-ray information as image priors for improving optical tomography reconstruction. To fully capitalize on this capacity, we consider a framework for the automatic and fast detection of different anatomic structures in murine XCT images. To accurately differentiate between structures such as bone, lung, and heart, a combination of image processing steps including thresholding, seed growing, and signal detection is found to offer optimal segmentation performance. The algorithm and its utilization in an inverse FMT scheme that uses priors are demonstrated on mouse images.

  11. Provide a model to improve the performance of intrusion detection systems in the cloud

    Foroogh Sedighi

    2016-01-01

    The wide availability of tools and service providers in cloud computing, and the fact that cloud computing services are provided over the internet and deal with the public, have created important challenges for this new computing model. Cloud computing faces problems and challenges such as user privacy, data security, data ownership, availability of services, recovery after breakdown, performance, scalability, and programmability. So far, many different methods have been presented for detection of intrusion in clou...

  12. Improved methods for computing masses from numerical simulations

    Kronfeld, A.S.

    1989-11-22

    An important advance in the computation of hadron and glueball masses has been the introduction of non-local operators. This talk summarizes the critical signal-to-noise ratio of glueball correlation functions in the continuum limit, and discusses the case of ($q\bar{q}$ and $qqq$) hadrons in the chiral limit. A new strategy for extracting the masses of excited states is outlined and tested. The lessons learned here suggest that gauge-fixed momentum-space operators might be a suitable choice of interpolating operators. 15 refs., 2 tabs.

  13. Hyperspectral Imagery Target Detection Using Improved Anomaly Detection and Signature Matching Methods

    Smetek, Timothy E

    2007-01-01

    This research extends the field of hyperspectral target detection by developing autonomous anomaly detection and signature matching methodologies that reduce false alarms relative to existing benchmark detectors...

  14. Distance Measurement Methods for Improved Insider Threat Detection

    Owen Lo

    2018-01-01

    Full Text Available Insider threats are a considerable problem within cyber security and it is often difficult to detect these threats using signature detection. Machine learning can provide a solution, but such methods often fail to take into account changes in users' behaviour. This work builds on a published method of detecting insider threats, applies a hidden Markov method on a CERT data set (CERT r4.2), and analyses a number of distance vector methods (Damerau–Levenshtein distance, cosine distance, and Jaccard distance) in order to detect changes of behaviour, which are shown to have success in determining different insider threats.
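
    Two of the named distance measures are easy to show directly on toy per-day action sequences; Damerau-Levenshtein is usually taken from a library and is omitted here.

    ```python
    # Jaccard and cosine distances between per-day sequences of user actions.
    import math
    from collections import Counter

    day1 = ["logon", "email", "web", "web", "file_copy", "logoff"]
    day2 = ["logon", "web", "web", "usb_insert", "file_copy", "logoff"]

    def jaccard_distance(a, b):
        sa, sb = set(a), set(b)
        return 1.0 - len(sa & sb) / len(sa | sb)

    def cosine_distance(a, b):
        ca, cb = Counter(a), Counter(b)
        dot = sum(ca[k] * cb[k] for k in ca)
        norm = math.sqrt(sum(v * v for v in ca.values())) * \
               math.sqrt(sum(v * v for v in cb.values()))
        return 1.0 - dot / norm

    print("Jaccard:", jaccard_distance(day1, day2))
    print("Cosine :", cosine_distance(day1, day2))
    ```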

  15. Geostationary Sensor Based Forest Fire Detection and Monitoring: An Improved Version of the SFIDE Algorithm

    Valeria Di Biase

    2018-05-01

    Full Text Available The paper aims to present the results obtained in the development of a system allowing for the detection and monitoring of forest fires and the continuous comparison of their intensity when several events occur simultaneously, a common occurrence in European Mediterranean countries during the summer season. The system, called SFIDE (Satellite FIre DEtection), exploits a geostationary satellite sensor (SEVIRI, Spinning Enhanced Visible and InfraRed Imager) on board the MSG (Meteosat Second Generation) satellite series. The algorithm was developed several years ago in the framework of a project (SIGRI) funded by the Italian Space Agency (ASI). This algorithm has been completely revised in order to enhance its efficiency by reducing the false alarm rate while preserving high sensitivity. Due to the very low spatial resolution of SEVIRI images (4 × 4 km2 at Mediterranean latitudes), the sensitivity of the algorithm must be very high to detect even small fires. The improvement of the algorithm has been obtained by introducing the sun elevation angle in the computation of the preliminary thresholds used to identify potential thermal anomalies (hot spots), and by introducing a contextual analysis in the detection of clouds and of night-time fires. The results of the algorithm have been validated in the Sardinia region using ground-truth data provided by the regional Corpo Forestale e di Vigilanza Ambientale (CFVA). A significant reduction of the commission error (less than 10%) has been obtained with respect to the previous version of the algorithm and also with respect to fire-detection algorithms based on low-earth-orbit satellites.

  16. Do pre-trained deep learning models improve computer-aided classification of digital mammograms?

    Aboutalib, Sarah S.; Mohamed, Aly A.; Zuley, Margarita L.; Berg, Wendie A.; Luo, Yahong; Wu, Shandong

    2018-02-01

    Digital mammography screening is an important exam for the early detection of breast cancer and reduction in mortality. False positives leading to high recall rates, however, result in unnecessary negative consequences for patients and health care systems. In order to better aid radiologists, computer-aided tools can be utilized to improve the distinction between image classes and thus potentially reduce false recalls. The emergence of deep learning has shown promising results in the area of biomedical imaging data analysis. This study aimed to investigate deep learning and transfer learning methods that can improve digital mammography classification performance. In particular, we evaluated the effect of pre-training deep learning models with other imaging datasets in order to boost classification performance on a digital mammography dataset. Two types of datasets were used for pre-training: (1) a digitized film mammography dataset, and (2) a very large non-medical imaging dataset. By using either of these datasets to pre-train the network initially, and then fine-tuning with the digital mammography dataset, we found an increase in overall classification performance in comparison to a model without pre-training, with the very large non-medical dataset performing the best in improving the classification accuracy.
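
    A hedged PyTorch sketch of the pre-train/fine-tune recipe: start from an ImageNet-pre-trained backbone and re-fit the final layer for a two-class task. The model choice, weights string, and dummy batch are assumptions, not the study's setup; torchvision's weights API varies across versions.

    ```python
    # Transfer learning: pre-trained backbone, new two-class head.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1")   # pre-trained backbone

    # Optionally freeze the pre-trained features and train only the new head.
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, 2)      # new 2-class classifier

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch:
    x = torch.randn(4, 3, 224, 224)
    y = torch.tensor([0, 1, 1, 0])
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    print(float(loss))
    ```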

  17. Rapid Detection of Biological and Chemical Threat Agents Using Physical Chemistry, Active Detection, and Computational Analysis

    Chung, Myung; Dong, Li; Fu, Rong; Liotta, Lance; Narayanan, Aarthi; Petricoin, Emanuel; Ross, Mark; Russo, Paul; Zhou, Weidong; Luchini, Alessandra; Manes, Nathan; Chertow, Jessica; Han, Suhua; Kidd, Jessica; Senina, Svetlana; Groves, Stephanie

    2007-01-01

    Basic technologies have been successfully developed within this project: rapid collection of aerosols and a rapid ultra-sensitive immunoassay technique. Water-soluble, humidity-resistant polyacrylamide nano-filters were shown to (1) capture aerosol particles as small as 20 nm, (2) work in humid air and (3) completely liberate their captured particles in an aqueous solution compatible with the immunoassay technique. The immunoassay technology developed within this project combines electrophoretic capture with magnetic bead detection. It allows detection of as few as 150-600 analyte molecules or viruses in only three minutes, something no other known method can duplicate. The technology can be used in a variety of applications where speed of analysis and/or extremely low detection limits are of great importance: in rapid analysis of donor blood for hepatitis, HIV and other blood-borne infections in emergency blood transfusions, in trace analysis of pollutants, or in search of biomarkers in biological fluids. Combined in a single device, the water-soluble filter and ultra-sensitive immunoassay technique may solve the problem of early warning type detection of aerosolized pathogens. These two technologies are protected with five patent applications and are ready for commercialization.

  18. Difficulties encountered managing nodules detected during a computed tomography lung cancer screening program.

    Veronesi, Giulia; Bellomi, Massimo; Scanagatta, Paolo; Preda, Lorenzo; Rampinelli, Cristiano; Guarize, Juliana; Pelosi, Giuseppe; Maisonneuve, Patrick; Leo, Francesco; Solli, Piergiorgio; Masullo, Michele; Spaggiari, Lorenzo

    2008-09-01

    The main challenge of screening a healthy population with low-dose computed tomography is to balance the excessive use of diagnostic procedures with the risk of delayed cancer detection. We evaluated the pitfalls, difficulties, and sources of mistakes in the management of lung nodules detected in volunteers in the Cosmos single-center screening trial. A total of 5201 asymptomatic high-risk volunteers underwent screening with multidetector low-dose computed tomography. Nodules detected at baseline or new nodules at annual screening received repeat low-dose computed tomography at 1 year if less than 5 mm, repeat low-dose computed tomography 3 to 6 months later if between 5 and 8 mm, and fluorodeoxyglucose positron emission tomography if more than 8 mm. Growing nodules at the annual screening received low-dose computed tomography at 6 months and computed tomography-positron emission tomography or surgical biopsy according to doubling time, type, and size. During the first year of screening, 106 patients underwent lung biopsy and 91 lung cancers were identified (70% were stage I). Diagnosis was delayed (false-negative) in 6 patients (stage IIB in 1 patient, stage IIIA in 3 patients, and stage IV in 2 patients), including 2 small cell cancers and 1 central lesion. Surgical biopsy revealed benign disease (false-positives) in 15 cases (14%). Positron emission tomography sensitivity was 88% for prevalent cancers and 70% for cancers diagnosed after first annual screening. No needle biopsy procedures were performed in this cohort of patients. Low-dose computed tomography screening is effective for the early detection of lung cancers, but nodule management remains a challenge. Computed tomography-positron emission tomography is useful at baseline, but its sensitivity decreases significantly the subsequent year. Multidisciplinary management and experience are crucial for minimizing misdiagnoses.
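
    The baseline triage rule described above can be written out as a small function (diameters in mm, thresholds taken from the text):

    ```python
    # Baseline nodule-management rule from the Cosmos protocol description.
    def triage_baseline_nodule(diameter_mm):
        if diameter_mm < 5:
            return "repeat low-dose CT at 1 year"
        if diameter_mm <= 8:
            return "repeat low-dose CT in 3-6 months"
        return "FDG positron emission tomography"

    for d in (3, 6, 12):
        print(d, "mm ->", triage_baseline_nodule(d))
    ```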

  19. Image covariance and lesion detectability in direct fan-beam x-ray computed tomography.

    Wunderlich, Adam; Noo, Frédéric

    2008-05-21

    We consider noise in computed tomography images that are reconstructed using the classical direct fan-beam filtered backprojection algorithm, from both full- and short-scan data. A new, accurate method for computing image covariance is presented. The utility of the new covariance method is demonstrated by its application to the implementation of a channelized Hotelling observer for a lesion detection task. Results from the new covariance method and its application to the channelized Hotelling observer are compared with results from Monte Carlo simulations. In addition, the impact of a bowtie filter and x-ray tube current modulation on reconstruction noise and lesion detectability are explored for full-scan reconstruction.
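
    A minimal numpy sketch of a channelized Hotelling observer on synthetic data: project images onto a few channels, estimate class means and a pooled channel covariance, score with the Hotelling template, and summarize detectability with d'. The random channel matrix is a placeholder for real Gabor or Laguerre-Gauss channels, and the noise model is far simpler than CT reconstruction noise.

    ```python
    # Channelized Hotelling observer for a lesion detection task (toy data).
    import numpy as np

    rng = np.random.default_rng(4)
    n, npix, nchan = 200, 256, 6
    U = rng.normal(size=(npix, nchan))          # channel matrix (placeholder)

    lesion = np.zeros(npix); lesion[100:110] = 1.0
    absent  = rng.normal(size=(n, npix))                  # noise-only images
    present = rng.normal(size=(n, npix)) + lesion         # lesion-present images

    va, vp = absent @ U, present @ U                      # channel outputs
    S = 0.5 * (np.cov(va, rowvar=False) + np.cov(vp, rowvar=False))
    w = np.linalg.solve(S, vp.mean(axis=0) - va.mean(axis=0))  # Hotelling template

    scores_present, scores_absent = vp @ w, va @ w
    # Detectability index d' from the two score distributions:
    dprime = (scores_present.mean() - scores_absent.mean()) / np.sqrt(
        0.5 * (scores_present.var() + scores_absent.var()))
    print("d' =", round(float(dprime), 2))
    ```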

  20. Robust fault detection of linear systems using a computationally efficient set-membership method

    Tabatabaeipour, Mojtaba; Bak, Thomas

    2014-01-01

    In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measureme...... is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods....

  1. Experimental detection of nonclassical correlations in mixed-state quantum computation

    Passante, G.; Moussa, O.; Trottier, D. A.; Laflamme, R.

    2011-01-01

    We report on an experiment to detect nonclassical correlations in a highly mixed state. The correlations are characterized by the quantum discord and are observed using four qubits in a liquid-state nuclear magnetic resonance quantum information processor. The state analyzed is the output of a DQC1 computation, whose input is a single quantum bit accompanied by n maximally mixed qubits. This model of computation outperforms the best known classical algorithms and, although it contains vanishing entanglement, it is known to have quantum correlations characterized by the quantum discord. This experiment detects nonvanishing quantum discord, ensuring the existence of nonclassical correlations as measured by the quantum discord.

  2. Image covariance and lesion detectability in direct fan-beam x-ray computed tomography

    Wunderlich, Adam; Noo, Frederic

    2008-01-01

    We consider noise in computed tomography images that are reconstructed using the classical direct fan-beam filtered backprojection algorithm, from both full- and short-scan data. A new, accurate method for computing image covariance is presented. The utility of the new covariance method is demonstrated by its application to the implementation of a channelized Hotelling observer for a lesion detection task. Results from the new covariance method and its application to the channelized Hotelling observer are compared with results from Monte Carlo simulations. In addition, the impact of a bowtie filter and x-ray tube current modulation on reconstruction noise and lesion detectability are explored for full-scan reconstruction

  3. Usage of polarisation features of landmines for improved automatic detection

    Jong, W. de; Cremer, F.; Schutte, K.; Storm, J.

    2000-01-01

    In this paper the landmine detection performance of an infrared and a visual light camera both equipped with a polarisation filter are compared with the detection performance of these cameras without polarisation filters. Sequences of images have been recorded with in front of these cameras a

  4. Probabilistic evaluations for CANTUP computer code analysis improvement

    Florea, S.; Pavelescu, M.

    2004-01-01

    Structural analysis with the finite element method is today a usual way to evaluate and predict the behavior of structural assemblies subject to harsh conditions, in order to ensure their safety and reliability during operation. A CANDU 600 fuel channel is an example of an assembly working in harsh conditions, in which, besides corrosive and thermal aggression, long-term irradiation interferes, with implicit consequences for the evolution of material properties. That leads inevitably to scattering of time-dependent material properties, whose dynamic evolution is subject to a great degree of uncertainty. These are the reasons for developing, in association with deterministic evaluations with computer codes, probabilistic and statistical methods to predict the structural component response. This work explores the possibility of extending the deterministic thermomechanical evaluation of fuel channel components to a probabilistic structural mechanics approach, starting with a deterministic analysis performed with the CANTUP computer code, a code developed to predict the long-term mechanical behavior of the pressure tube - calandria tube assembly. To this purpose, the structure of the deterministic CANTUP computer code has been reviewed. The code has been adapted from the LAHEY 77 platform to the Microsoft Developer Studio - Fortran Power Station platform. In order to perform probabilistic evaluations, a part was added to the deterministic code which, using a subroutine from the IMSL library of the Microsoft Developer Studio - Fortran Power Station platform, generates pseudo-random values of a specified variable. A normal distribution around the deterministic value, with a 5% standard deviation, was simulated for the Young's modulus material property in order to verify the statistical calculus of the creep behavior. The tube deflection and effective stresses were the properties subject to probabilistic evaluation. All the values of these properties obtained for all the values for
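
    The probabilistic extension is easy to sketch: draw pseudo-random Young's modulus values from a normal distribution centred on the deterministic value with a 5% standard deviation and propagate them through a response model. The beam-deflection formula below is only a stand-in for the CANTUP structural model.

    ```python
    # Monte Carlo propagation of an uncertain Young's modulus.
    import numpy as np

    rng = np.random.default_rng(5)
    E_nominal = 200e9                        # deterministic Young's modulus [Pa]
    E_samples = rng.normal(E_nominal, 0.05 * E_nominal, size=10_000)

    def deflection(E, load=1e4, length=6.0, I=8e-6):
        # Placeholder response: midspan deflection of a simply supported beam.
        return 5 * load * length**4 / (384 * E * I)

    d = deflection(E_samples)
    print("mean deflection [m]:", d.mean())
    print("95% interval [m]:", np.percentile(d, [2.5, 97.5]))
    ```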

  5. An improved data clustering algorithm for outlier detection

    Anant Agarwal

    2016-12-01

    Data mining is the extraction of hidden predictive information from large databases. This is a technology with the potential to study and analyze useful information present in data. Data objects which do not fit the general behavior of the data are termed outliers. Outlier detection in databases has numerous applications, such as fraud detection, customized marketing, and the search for terrorism. By definition, outliers are rare occurrences and hence represent a small portion of the data. However, using outlier detection for various purposes is not an easy task. This research proposes a modified PAM (Partitioning Around Medoids) algorithm for detecting outliers. The proposed technique has been implemented in Java. The results produced by the proposed technique are found to be better than those of the existing technique in terms of the outliers detected and time complexity.
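
    The abstract does not spell out the modification, but the underlying idea can be sketched as follows: run a PAM-style k-medoids clustering, then flag points that lie unusually far from their assigned medoid. The threshold rule (mean + z standard deviations), the deterministic initialization, and the toy data are our assumptions:

        import numpy as np

        def kmedoids_outliers(X, k=2, n_iter=20, z=2.5):
            """Cluster with a simple k-medoids loop, then flag points whose
            distance to their own medoid exceeds mean + z * std."""
            D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
            medoids = np.array([0, len(X) // 2])     # naive deterministic seeds
            for _ in range(n_iter):
                labels = np.argmin(D[:, medoids], axis=1)
                new_medoids = medoids.copy()
                for j in range(k):
                    members = np.where(labels == j)[0]
                    if members.size:                 # keep old medoid if cluster empties
                        within = D[np.ix_(members, members)].sum(axis=1)
                        new_medoids[j] = members[within.argmin()]
                if np.array_equal(new_medoids, medoids):
                    break
                medoids = new_medoids
            labels = np.argmin(D[:, medoids], axis=1)
            dist = D[np.arange(len(X)), medoids[labels]]
            return dist > dist.mean() + z * dist.std()

        # Two tight clusters plus one far-away point that should be flagged.
        rng = np.random.default_rng(42)
        X = np.vstack([rng.normal(0, 0.3, (50, 2)),
                       rng.normal(5, 0.3, (50, 2)),
                       [[20.0, 20.0]]])
        print(np.where(kmedoids_outliers(X))[0])     # -> [100]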

  6. Computer-aided detection and automated CT volumetry of pulmonary nodules

    Marten, Katharina; Engelke, Christoph

    2007-01-01

    With the use of multislice computed tomography (MSCT), small pulmonary nodules are being detected in vast numbers, constituting the majority of all noncalcified lung nodules. Although the prevalence of lung cancers among such lesions in lung cancer screening populations is low, their isolation may contribute to increased patient survival. Computer-aided diagnosis (CAD) has emerged as a diverse set of diagnostic tools to handle the large number of images in MSCT datasets and, most importantly, includes automated detection and volumetry of pulmonary nodules. Current CAD systems can significantly enhance experienced radiologists' performance, outweigh human limitations in identifying small lesions and manually measuring their diameters, and augment observer consistency in the interpretation of such examinations; they may thus help to detect early malignancies at significantly higher rates and give more precise estimates of chemotherapy response than radiologists alone. In this review, we give an overview of current CAD in lung nodule detection and volumetry and discuss their relative merits and limitations. (orig.)

  7. Brain-computer interface for alertness estimation and improving

    Hramov, Alexander; Maksimenko, Vladimir; Hramova, Marina

    2018-02-01

    Using wavelet analysis of electrical brain activity (EEG) signals, we study the processes of neural activity associated with the perception of visual stimuli. We demonstrate that the brain can process visual stimuli in two scenarios: (i) perception is characterized by destruction of the alpha-waves and an increase in high-frequency (beta) activity, or (ii) the beta-rhythm is not well pronounced, while the alpha-wave energy remains unchanged. Dedicated experiments show that the motivation factor initiates the first scenario, explained by increased alertness. Based on the obtained results we build a brain-computer interface and demonstrate how the degree of alertness can be estimated and controlled in a real experiment.
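
    As a rough illustration of the two scenarios, the following Python sketch scores alertness as the beta-to-alpha power ratio, using a plain FFT band-power estimate in place of the paper's wavelet analysis; the band limits, sampling rate, and synthetic signal are our assumptions:

        import numpy as np

        def band_power(eeg, fs, lo, hi):
            """EEG power in the [lo, hi] Hz band from the FFT spectrum."""
            spec = np.abs(np.fft.rfft(eeg)) ** 2
            freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
            return spec[(freqs >= lo) & (freqs <= hi)].sum()

        def alertness_index(eeg, fs=250.0):
            """Hypothetical score: scenario (i) above (alpha destruction plus a
            beta increase) corresponds to a large beta/alpha ratio."""
            alpha = band_power(eeg, fs, 8.0, 12.0)
            beta = band_power(eeg, fs, 15.0, 30.0)
            return beta / (alpha + 1e-12)

        # Toy signal: strong alpha, weak beta, i.e. a low-alertness segment.
        fs = 250.0
        t = np.arange(0, 2, 1.0 / fs)
        eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)
        print(alertness_index(eeg, fs))              # well below 1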

  8. X-ray scatter correction method for dedicated breast computed tomography: improvements and initial patient testing

    Ramamurthy, Senthil; D’Orsi, Carl J; Sechopoulos, Ioannis

    2016-01-01

    A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of invalid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient images. The improvements in the algorithm avoided the introduction of artifacts, especially at the object borders, which was an issue in some cases with the previous implementation. Both contrast, in terms of signal difference, and signal difference-to-noise ratio were improved with the proposed method, as opposed to the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improvement of image quality, no introduction of artifacts, and in some cases reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis and other clinical tasks in breast imaging remains to be evaluated. (paper)

  9. A diabetic retinopathy detection method using an improved pillar K-means algorithm.

    Gogula, Susmitha Valli; Divakar, Ch; Satyanarayana, Ch; Rao, Allam Appa

    2014-01-01

    The paper presents a new approach for medical image segmentation. Exudates are a visible sign of diabetic retinopathy, which is a major cause of vision loss in patients with diabetes. If the exudates extend into the macular area, blindness may occur. Automated detection of exudates will assist ophthalmologists in early diagnosis. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to the image segmentation after initialization by the pillar algorithm, which positions initial centroids the way pillars are placed to withstand load. The improved pillar algorithm can optimize K-means clustering for image segmentation with respect to precision and computation time. The proposed approach is evaluated by comparing it with K-means and Fuzzy C-means on medical images. Using this method, identification of dark spots in the retina becomes easier; the proposed algorithm is applied to diabetic retinal images of all stages to identify hard and soft exudates, whereas the existing pillar K-means is more appropriate for brain MRI images. The proposed system helps doctors to identify the problem at an early stage and can suggest better drugs for preventing further retinal damage.
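
    A minimal sketch of the seeding idea, under our reading of the pillar algorithm as farthest-point placement of initial centroids (the one-dimensional intensity data and cluster count are illustrative only):

        import numpy as np

        def pillar_seeds(X, k):
            """Pick the point farthest from the grand mean, then repeatedly the
            point farthest from the seeds chosen so far, the way pillars are
            distributed to bear a load."""
            seeds = [int(np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1)))]
            for _ in range(k - 1):
                d = np.linalg.norm(X[:, None, :] - X[seeds][None, :, :], axis=2).min(axis=1)
                seeds.append(int(np.argmax(d)))
            return X[seeds].astype(float)

        def kmeans(X, centers, n_iter=30):
            for _ in range(n_iter):
                labels = np.argmin(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=1)
                for j in range(len(centers)):
                    if np.any(labels == j):
                        centers[j] = X[labels == j].mean(axis=0)
            return labels, centers

        # Toy fundus-like intensities: dark background plus bright exudate pixels.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0.2, 0.05, (200, 1)),
                       rng.normal(0.9, 0.05, (20, 1))])
        labels, centers = kmeans(X, pillar_seeds(X, 2))
        print(centers.ravel())   # one center near 0.9 (exudates), one near 0.2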

  10. Computer aided detection of clusters of microcalcifications on full field digital mammograms

    Ge Jun; Sahiner, Berkman; Hadjiiski, Lubomir M.; Chan, H.-P.; Wei Jun; Helvie, Mark A.; Zhou Chuan

    2006-01-01

    We are developing a computer-aided detection (CAD) system to identify microcalcification clusters (MCCs) automatically on full field digital mammograms (FFDMs). The CAD system includes six stages: preprocessing; image enhancement; segmentation of microcalcification candidates; false positive (FP) reduction for individual microcalcifications; regional clustering; and FP reduction for clustered microcalcifications. At the stage of FP reduction for individual microcalcifications, a truncated sum-of-squares error function was used to improve the efficiency and robustness of the training of an artificial neural network in our CAD system for FFDMs. At the stage of FP reduction for clustered microcalcifications, morphological features and features derived from the artificial neural network outputs were extracted from each cluster. Stepwise linear discriminant analysis (LDA) was used to select the features. An LDA classifier was then used to differentiate clustered microcalcifications from FPs. A data set of 96 cases with 192 images was collected at the University of Michigan. This data set contained 96 MCCs, of which 28 clusters were proven by biopsy to be malignant and 68 were proven to be benign. The data set was separated into two independent data sets for training and testing of the CAD system in a cross-validation scheme. When one data set was used to train and validate the convolution neural network (CNN) in our CAD system, the other data set was used to evaluate the detection performance. With the use of the truncated error metric, the training of the CNN could be accelerated and the classification performance was improved. The CNN in combination with an LDA classifier could substantially reduce FPs with a small tradeoff in sensitivity. By using the free-response receiver operating characteristic methodology, it was found that our CAD system can achieve a cluster-based sensitivity of 70%, 80%, and 90% at 0.21, 0.61, and 1.49 FPs/image, respectively. For case...
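
    The truncated sum-of-squares error mentioned above caps each sample's contribution to the loss, so mislabeled or extreme training samples cannot dominate the gradient. A one-function sketch (the cap t = 0.5 is an arbitrary illustration):

        import numpy as np

        def truncated_sse(y_pred, y_true, t=0.5):
            """Sum of squared residuals, with each residual's contribution
            truncated at t**2 for robustness to outlying samples."""
            r2 = (np.asarray(y_pred) - np.asarray(y_true)) ** 2
            return np.minimum(r2, t ** 2).sum()

        # The third sample (residual 0.8) is capped at 0.5**2 = 0.25.
        print(truncated_sse([0.1, 0.9, 0.2], [0.0, 1.0, 1.0]))   # 0.27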

  11. Low tube voltage CT for improved detection of pancreatic cancer: detection threshold for small, simulated lesions

    Holm, Jon; Loizou, Louiza; Albiin, Nils; Kartalis, Nikolaos; Leidner, Bertil; Sundin, Anders

    2012-01-01

    Pancreatic ductal adenocarcinoma is associated with a dismal prognosis. The detection of small pancreatic tumors which are still resectable remains a challenging problem. The aim of this study was to investigate the effect of decreasing the tube voltage from 120 to 80 kV on the detection of pancreatic tumors. Three scanning protocols were used: one using the standard tube voltage (120 kV) and current (160 mA), and two using 80 kV but with different tube currents (500 and 675 mA) to achieve a dose (15 mGy) and noise (15 HU) equivalent to those of the standard protocol. Tumors were simulated in collected CT phantom images. The attenuation in normal parenchyma at 120 kV was set at 130 HU, as measured previously in clinical examinations, and the tumor attenuation was assumed to differ by 20 HU and was set at 110 HU. By scanning and measuring iodine solutions of different concentrations, the corresponding tumor and parenchyma attenuations at 80 kV were found to be 185 and 219 HU, respectively. To objectively evaluate the differences between the three protocols, a multi-reader multi-case receiver operating characteristic study was conducted, using three readers and 100 cases, each containing 0–3 lesions. The highest reader-averaged figure-of-merit (FOM) was achieved for 80 kV and 675 mA (FOM = 0.850), and the lowest for 120 kV (FOM = 0.709). There was a significant difference between the three protocols (p < 0.0001) in an analysis of variance (ANOVA). Post-hoc analysis (Student's t-test) showed a significant difference between 120 and 80 kV, but not between the two tube currents at 80 kV. We conclude that decreasing the tube voltage yields a significant improvement in tumor conspicuity...
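
    A back-of-the-envelope check of why the 80 kV protocols help, using the attenuation values reported above and the matched 15 HU noise:

        # Tumor/parenchyma contrast-to-noise ratio, from the study's numbers.
        noise = 15.0                       # HU, matched across protocols
        cnr_120 = (130 - 110) / noise      # 20 HU contrast at 120 kV -> ~1.33
        cnr_80 = (219 - 185) / noise       # 34 HU contrast at 80 kV  -> ~2.27
        print(cnr_120, cnr_80)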

  12. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
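
    For reference, the centroid algorithm named above fits in a few lines; the threshold fraction and the toy spectrum are our choices:

        import numpy as np

        def centroid_peak(wavelengths, intensities, frac=0.5):
            """Estimate the Bragg wavelength as the intensity-weighted mean of
            all spectral samples above a fraction of the peak intensity."""
            mask = intensities >= frac * intensities.max()
            return np.sum(wavelengths[mask] * intensities[mask]) / intensities[mask].sum()

        # Toy FBG reflection spectrum: Gaussian peak centered at 1550.10 nm.
        wl = np.linspace(1549.5, 1550.7, 600)
        spec = np.exp(-((wl - 1550.10) / 0.05) ** 2)
        print(centroid_peak(wl, spec))               # ~1550.10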

  13. Improved detection of calcium-binding proteins in polyacrylamide gels

    Anthony, F.A.; Babitch, J.A.

    1984-01-01

    The authors refined the method of Schibeci and Martonosi (1980) to enhance detection of calcium-binding proteins in polyacrylamide gels using ⁴⁵Ca²⁺. Their efforts have produced a method which is shorter, has 40-fold greater sensitivity than the previous method, and will detect 'EF hand'-containing calcium-binding proteins in polyacrylamide gels below the 0.5 μg level. In addition, this method will detect at least one example from every described class of calcium-binding protein, including lectins and γ-carboxyglutamic acid-containing calcium-binding proteins. The method should be useful for detecting calcium-binding proteins which may trigger neurotransmitter release. (Auth.)

  14. Improving mass candidate detection in mammograms via feature maxima propagation and local feature selection.

    Melendez, Jaime; Sánchez, Clara I; van Ginneken, Bram; Karssemeijer, Nico

    2014-08-01

    Mass candidate detection is a crucial component of multistep computer-aided detection (CAD) systems. It is usually performed by combining several local features by means of a classifier. When these features are processed on a per-image-location basis (e.g., for each pixel), mismatching problems may arise while constructing feature vectors for classification, which is especially true when the behavior expected from the evaluated features is a peaked response due to the presence of a mass. In this study, two of these problems, consisting of maxima misalignment and differences of maxima spread, are identified and two solutions are proposed. The first proposed method, feature maxima propagation, reproduces feature maxima through their neighboring locations. The second method, local feature selection, combines different subsets of features for different feature vectors associated with image locations. Both methods are applied independently and together. The proposed methods are included in a mammogram-based CAD system intended for mass detection in screening. Experiments are carried out with a database of 382 digital cases. Sensitivity is assessed at two sets of operating points. The first one is the interval of 3.5-15 false positives per image (FPs/image), which is typical for mass candidate detection. The second one is 1 FP/image, which allows estimation of the quality of the mass candidate detector's output for use in subsequent steps of the CAD system. The best results are obtained when the proposed methods are applied together. In that case, the mean sensitivity in the interval of 3.5-15 FPs/image significantly increases from 0.926 to 0.958 (p < 0.0002). At the lower rate of 1 FP/image, the mean sensitivity improves from 0.628 to 0.734 (p < 0.0002). Given the improved detection performance, the authors believe that the strategies proposed in this paper can render mass candidate detection approaches based on image location classification more robust to feature...
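
    A minimal sketch of feature maxima propagation as we read it: each feature map is greyscale-dilated so that nearby peaks line up before per-location feature vectors are assembled. The neighborhood radius and toy maps are assumptions:

        import numpy as np
        from scipy.ndimage import grey_dilation

        def propagate_maxima(feature_map, radius=2):
            """Replace each location's value with the neighborhood maximum, so
            peaked responses that are slightly misaligned across features
            coincide when stacked into feature vectors."""
            size = 2 * radius + 1
            return grey_dilation(feature_map, size=(size, size))

        # Two toy feature maps whose peaks are misaligned by one pixel.
        f1 = np.zeros((7, 7)); f1[3, 3] = 1.0
        f2 = np.zeros((7, 7)); f2[3, 4] = 1.0
        p1, p2 = propagate_maxima(f1), propagate_maxima(f2)
        print(p1[3, 4], p2[3, 4])                    # both 1.0 after propagation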

  15. Nodule detection by chest X-ray and evaluation of computer-aided detection (CAD) software using an originally developed phantom for instructional purposes

    Nitta, Norihisa; Takahashi, Masashi; Takazakura, Ryutaro

    2006-01-01

    Chest X-ray and computed tomography (CT) are indispensable modalities for lung cancer examinations. CT technologies have dramatically improved, and small nodules and obscure shadows are detected more frequently. The new generation of radiologists feels that chest X-rays are not as useful as chest CT. Experiments using a newly developed chest phantom were conducted to reconfirm blind spots in chest X-rays. Recent technological advances and high-definition capability have made chest X-rays more useful than ever. Even though the development of multi-detector CT (MDCT) has facilitated the detection of nodules, it has conversely incurred the problem of increasing the amount of data to analyze, demanding tremendous time and effort. Here, employing a chest phantom and clinical samples, we evaluated the utility of two kinds of computer-aided detection (CAD) software (Image Checker CT and LungCARE NEV) as well as GGO CAD software that we have developed. Further development of chest CT diagnostic software is urgently needed. (author)

  16. On the improvement of speaker diarization by detecting overlapped speech

    Hernando Pericás, Francisco Javier

    2010-01-01

    Simultaneous speech in meeting environments is responsible for a certain amount of the errors made by standard speaker diarization systems. We present an overlap detection system for far-field data based on spectral and spatial features, where the spatial features obtained on different microphone pairs are fused by means of principal component analysis. Detected overlap segments are applied to speaker diarization in order to increase the purity of speaker clusters...

  17. Computational RNA secondary structure design: empirical complexity and improved methods

    Condon Anne

    2007-01-01

    Background: We investigate the empirical complexity of the RNA secondary structure design problem, that is, the scaling of the typical difficulty of the design task for various classes of RNA structures as the size of the target structure is increased. The purpose of this work is to understand better the factors that make RNA structures hard to design for existing, high-performance algorithms. Such understanding provides the basis for improving the performance of one of the best algorithms for this problem, RNA-SSD, and for characterising its limitations. Results: To gain insights into the practical complexity of the problem, we present a scaling analysis on random and biologically motivated structures using an improved version of the RNA-SSD algorithm, and also the RNAinverse algorithm from the Vienna package. Since primary structure constraints are relevant for designing RNA structures, we also investigate the correlation between the number and the location of the primary structure constraints when designing structures and the performance of the RNA-SSD algorithm. The scaling analysis on random and biologically motivated structures supports the hypothesis that the running time of both algorithms scales polynomially with the size of the structure. We also found that the algorithms are in general faster when constraints are placed only on paired bases in the structure. Furthermore, we prove that, according to the standard thermodynamic model, for some structures that the RNA-SSD algorithm was unable to design, there exists no sequence whose minimum free energy structure is the target structure. Conclusion: Our analysis helps to better understand the strengths and limitations of both the RNA-SSD and RNAinverse algorithms, and suggests ways in which the performance of these algorithms can be further improved.

  18. Computer-aided detection of lung nodules on chest CT: issues to be solved before clinical use

    Goo, Jin Mo

    2005-01-01

    Given the increasing resolution of modern CT scanners and the requirements of large-scale lung-screening examinations and diagnostic studies, there is an increased need for accurate and reproducible analysis of the large number of images. Nodule detection is one of the main challenges of CT imaging, as nodules can be missed due to their small size or low relative contrast, or because they are located in an area with complex anatomy. Recent developments in computer-aided diagnosis (CAD) schemes are expected to aid radiologists in various tasks of chest imaging. In this era of multidetector row CT, the thoracic applications of greatest interest include the detection and volume measurement of lung nodules (1-7). Technology for CAD as applied to lung nodule detection on chest CT has been approved by the Food and Drug Administration and is currently commercially available. The article by Lee et al. (5) in this issue of the Korean Journal of Radiology is one of the few studies to examine the influence of a commercially available CAD system on the detection of lung nodules. In this study, some additional nodules were detected with the help of a CAD system, but at the expense of increased false positivity. The nodule detection rate of the CAD system in this study was lower than that achieved by radiologists, and the authors insist that the CAD system should be improved further. Compared to the use of CAD on mammograms, CAD evaluations of chest CTs remain limited to the laboratory setting. In this field, apart from the issues of detection rate and false-positive detections, many obstacles must be overcome before CAD can be used in a true clinical reading environment. In this editorial, I will list some of these issues, but I emphasize now that I believe these issues will be solved by improved CAD versions in the near future.

  19. Improved blade element momentum theory for wind turbine aerodynamic computations

    Sun, Zhenye; Chen, Jin; Shen, Wen Zhong

    2016-01-01

    Blade element momentum (BEM) theory is widely used in aerodynamic performance predictions and design applications for wind turbines. However, the classic BEM method is not very accurate and often tends to under-predict the aerodynamic forces near the root and over-predict the performance near the tip ... for the MEXICO rotor. Results show that the improved BEM theory gives a better prediction than the classic BEM method, especially in the blade tip region, when compared to the MEXICO measurements.

  20. The interplay of attention economics and computer-aided detection marks in screening mammography

    Schwartz, Tayler M.; Sridharan, Radhika; Wei, Wei; Lukyanchenko, Olga; Geiser, William; Whitman, Gary J.; Haygood, Tamara Miner

    2016-03-01

    Introduction: According to attention economists, overabundant information leads to decreased attention for individual pieces of information. Computer-aided detection (CAD) alerts radiologists to findings potentially associated with breast cancer but is notorious for creating an abundance of false-positive marks. We suspected that increased CAD marks do not lengthen mammogram interpretation time, as radiologists will selectively disregard these marks when present in larger numbers. We explore the relevance of attention economics in mammography by examining how the number of CAD marks affects interpretation time. Methods: We performed a retrospective review of bilateral digital screening mammograms obtained between January 1, 2011 and February 28, 2014, using only weekend interpretations to decrease distractions and the likelihood of trainee participation. We stratified data according to reader and used ANOVA to assess the relationship between the number of CAD marks and interpretation time. Results: Ten radiologists, with a median experience after residency of 12.5 years (range, 6 to 24), interpreted 1849 mammograms. When accounting for the number of images, Breast Imaging Reporting and Data System category, and breast density, increasing numbers of CAD marks were correlated with longer interpretation time only for the three radiologists with the fewest years of experience (median, 7 years). Conclusion: For the 7 most experienced readers, increasing CAD marks did not lengthen interpretation time. We surmise that as CAD marks increase, the attention given to individual marks decreases. Experienced radiologists may rapidly dismiss larger numbers of CAD marks as false-positive, having learned that devoting extra attention to such marks does not improve clinical detection.

  1. SU-F-I-43: A Software-Based Statistical Method to Compute Low Contrast Detectability in Computed Tomography Images

    Chacko, M; Aldoohan, S [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States)

    2016-06-15

    Purpose: The low contrast detectability (LCD) of a CT scanner is its ability to detect and display faint lesions. The current approach to quantifying LCD uses vendor-specific methods and phantoms, typically by subjectively observing the smallest object at a contrast level above the phantom background. However, this approach does not yield clinically applicable values for LCD. The current study proposes a statistical LCD metric using software tools not only to assess scanner performance, but also to quantify the key factors affecting LCD. This approach was developed using uniform QC phantoms, and its applicability was then extended under simulated clinical conditions. Methods: MATLAB software was developed to compute LCD using a uniform image of a QC phantom. For a given virtual object size, the software randomly samples the image within a selected area, and uses statistical analysis based on Student's t-distribution to compute the LCD as the minimal Hounsfield unit difference that can be distinguished from the background at the 95% confidence level. Its validity was assessed by comparison with the behavior of a known QC phantom under various scan protocols and a tissue-mimicking phantom. The contributions of beam quality and scattered radiation to the computed LCD were quantified by using various external beam-hardening filters and phantom lengths. Results: As expected, the LCD was inversely related to object size under all scan conditions. The type of image reconstruction kernel filter and tissue/organ type strongly influenced the background noise characteristics and therefore the computed LCD for the associated image. Conclusion: The proposed metric and its associated software tools are vendor-independent and can be used to analyze any scanner's LCD performance. Furthermore, the method employed can be used in conjunction with the relationships established in this study between LCD and tissue type to extend these concepts to patients' clinical CT...
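
    A minimal Python sketch of the statistical idea, under our reading of the abstract; the ROI size, number of samples, and noise level are illustrative, and numpy/scipy stand in for the authors' MATLAB tools:

        import numpy as np
        from scipy import stats

        def statistical_lcd(image, n_samples=100, roi=5, seed=0):
            """Sample many ROIs of a given virtual object size from a uniform
            phantom image and return the smallest HU difference separable from
            the background at 95% confidence (Student's t)."""
            rng = np.random.default_rng(seed)
            h, w = image.shape
            means = []
            for _ in range(n_samples):
                r = rng.integers(0, h - roi)
                c = rng.integers(0, w - roi)
                means.append(image[r:r + roi, c:c + roi].mean())
            means = np.asarray(means)
            t95 = stats.t.ppf(0.975, df=n_samples - 1)
            return t95 * means.std(ddof=1)

        # Toy uniform phantom: 0 HU background with 10 HU Gaussian noise.
        img = np.random.default_rng(1).normal(0.0, 10.0, (256, 256))
        print(statistical_lcd(img, roi=5))           # minimal detectable HU difference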

  2. Computer-aided detection of lung nodules in digital chest radiographs

    Giger, M.L.; Doi, K.; MacMahon, H.M.

    1986-01-01

    The authors are developing an automated method to detect lung nodules by eliminating the 'camouflaging' effect of the lung background. In order to increase the conspicuity of the nodules, we created, from a single chest radiograph, two images: one in which the signal-to-noise ratio (S/N) of the nodule is maximized and another in which that S/N is suppressed. The difference between these two processed images was subjected to a feature-extraction technique in order to isolate the nodules. The detection accuracy of the computer-aided detection scheme, as compared with unaided radiologists' performance, was determined using receiver operating characteristic curve analysis.
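
    A toy Python version of the idea: one copy of the image is smoothed to match nodule-sized blobs, another is filtered to suppress them, and the difference removes much of the background. The filter choices and parameters are our stand-ins for the authors' processing:

        import numpy as np
        from scipy.ndimage import gaussian_filter, median_filter

        def difference_image(chest, nodule_sigma=3.0, suppress_size=9):
            """S/N-maximized copy (matched smoothing) minus S/N-suppressed copy
            (wide median), leaving nodule-sized structures enhanced."""
            enhanced = gaussian_filter(chest, nodule_sigma)
            suppressed = median_filter(chest, size=suppress_size)
            return enhanced - suppressed

        # Toy image: smooth background ramp plus one small bright "nodule".
        y, x = np.mgrid[0:128, 0:128]
        img = 0.01 * x + np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 18.0)
        diff = difference_image(img)
        print(np.unravel_index(diff.argmax(), diff.shape))   # ~(64, 64)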

  3. Detection of common bile duct stones: comparison between endoscopic ultrasonography, magnetic resonance cholangiography, and helical-computed-tomographic cholangiography

    Kondo, Shintaro; Isayama, Hiroyuki; Akahane, Masaaki; Toda, Nobuo; Sasahira, Naoki; Nakai, Yosuke; Yamamoto, Natsuyo; Hirano, Kenji; Komatsu, Yutaka; Tada, Minoru; Yoshida, Haruhiko; Kawabe, Takao; Ohtomo, Kuni; Omata, Masao

    2005-01-01

    Objectives: New modalities, namely endoscopic ultrasonography (EUS), magnetic resonance cholangiopancreatography (MRCP), and helical computed-tomographic cholangiography (HCT-C), have been introduced recently for the detection of common bile duct (CBD) stones and have shown improved detectability compared to conventional ultrasound or computed tomography. We conducted this study to compare the diagnostic ability of EUS, MRCP, and HCT-C in patients with suspected choledocholithiasis. Methods: Twenty-eight patients clinically suspected of having CBD stones were enrolled, excluding those with cholangitis or a definite history of choledocholithiasis. Each patient underwent EUS, MRCP, and HCT-C prior to endoscopic retrograde cholangio-pancreatography (ERCP), the result of which served as the diagnostic gold standard. Results: CBD stones were detected in 24 (86%) of 28 patients by ERCP/IDUS. The sensitivity of EUS, MRCP, and HCT-C was 100%, 88%, and 88%, respectively. False-negative cases for MRCP and HCT-C had a CBD stone smaller than 5 mm in diameter. No serious complications occurred, while one patient complained of itching of the eyelids after the infusion of contrast agent during HCT-C. Conclusions: When the examination can be scheduled, MRCP or HCT-C will be the first choice because they are less invasive than EUS. MRCP and HCT-C had similar detectability, but the former may be preferable considering the possibility of an allergic reaction with the latter. When MRCP is negative, EUS is recommended to check for small CBD stones.

  4. Performance of computer-aided detection applied to full-field digital mammography in detection of breast cancers

    Sadaf, Arifa; Crystal, Pavel; Scaranelo, Anabel; Helbich, Thomas

    2011-01-01

    Objective: The aim of this retrospective study was to evaluate the performance of computer-aided detection (CAD) with full-field digital mammography (FFDM) in the detection of breast cancers. Materials and Methods: CAD was retrospectively applied to standard mammographic views of 127 cases with biopsy-proven breast cancers detected with FFDM (Senographe 2000, GE Medical Systems). CAD sensitivity was assessed for the total group of 127 cases and for subgroups based on breast density, mammographic lesion type, mammographic lesion size, histopathology, and mode of presentation. Results: Overall CAD sensitivity was 91% (115 of 127 cases). There were no statistical differences (p > 0.1) in CAD detection of cancers in dense breasts, 90% (53/59), versus non-dense breasts, 91% (62/68). There was a statistical difference in sensitivity related to lesion size; for lesions >20 mm, sensitivity was 97% (22/23). Conclusion: CAD applied to FFDM showed 100% sensitivity in identifying cancers manifesting as microcalcifications only, and high sensitivity, 86% (71/83), for other mammographic appearances of cancer. Sensitivity is influenced by lesion size. CAD in FFDM is an adjunct helping the radiologist in the early detection of breast cancers.

  5. Improving Patient Satisfaction Through Computer-Based Questionnaires.

    Smith, Matthew J; Reiter, Michael J; Crist, Brett D; Schultz, Loren G; Choma, Theodore J

    2016-01-01

    Patient-reported outcome measures are helping clinicians to use evidence-based medicine in decision making. The use of computer-based questionnaires to gather such data may offer advantages over traditional paper-based methods. These advantages include consistent presentation, prompts for missed questions, reliable scoring, and simple and accurate transfer of information into databases without manual data entry. The authors enrolled 308 patients over a 16-month period from 3 orthopedic clinics: spine, upper extremity, and trauma. Patients were randomized to complete either electronic or paper validated outcome forms during their first visit, and they completed the opposite modality at their second visit, which was approximately 7 weeks later. For patients with upper-extremity injuries, the Penn Shoulder Score (PSS) was used. For patients with lower-extremity injuries, the Foot Function Index (FFI) was used. For patients with lumbar spine symptoms, the Oswestry Disability Index (ODI) was used. All patients also were asked to complete the 36-Item Short Form Health Survey (SF-36) Health Status Survey, version 1. The authors assessed patient satisfaction with each survey modality and determined potential advantages and disadvantages for each. No statistically significant differences were found between the paper and electronic versions for patient-reported outcome data. However, patients strongly preferred the electronic surveys. Additionally, the paper forms had significantly more missed questions for the FFI (P<.0001), ODI (P<.0001), and PSS (P=.008), and patients were significantly less likely to complete these forms (P<.0001). Future research should focus on limiting the burden on responders, individualizing forms and questions as much as possible, and offering alternative environments for completion (home or mobile platforms).

  6. Improving aquatic warbler population assessments by accounting for imperfect detection.

    Steffen Oppel

    Monitoring programs designed to assess changes in population size over time need to account for imperfect detection and provide estimates of precision around annual abundance estimates. Especially for species dependent on conservation management, robust monitoring is essential to evaluate the effectiveness of management. Many bird species of temperate grasslands depend on specific conservation management to maintain suitable breeding habitat. One such species is the Aquatic Warbler (Acrocephalus paludicola), which breeds in open fen mires in Central Europe. Aquatic Warbler populations have so far been assessed using a complete survey that aims to enumerate all singing males over a large area. Because this approach provides no estimate of precision and does not account for observation error, detecting moderate population changes is challenging. From 2011 to 2013 we trialled a new line transect sampling monitoring design in the Biebrza valley, Poland, to estimate the abundance of singing male Aquatic Warblers. We surveyed Aquatic Warblers repeatedly along 50 randomly placed 1-km transects, and used binomial mixture models to estimate abundances per transect. The repeated line transect sampling required 150 observer days, and thus less effort than the traditional 'full count' approach (175 observer days). Aquatic Warbler abundance was highest at intermediate water levels, and detection probability varied between years and was influenced by vegetation height. A power analysis indicated that our line transect sampling design had a power of 68% to detect a 20% population change over 10 years, whereas raw count data had a 9% power to detect the same trend. Thus, by accounting for imperfect detection we increased the power to detect population changes. We recommend adopting the repeated line transect sampling approach for monitoring Aquatic Warblers in Poland and in other important breeding areas to monitor changes in population size and the effects of...
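
    The binomial mixture (N-mixture) model used here treats the true number of singing males N on a transect as Poisson(lambda), with each repeated count Binomial(N, p). A minimal Python sketch, with hypothetical counts and a crude grid-search fit:

        import numpy as np
        from scipy.stats import binom, poisson

        def nmix_loglik(counts, lam, p, n_max=100):
            """Log-likelihood of repeated counts at one transect, marginalizing
            over the unknown true abundance N."""
            N = np.arange(counts.max(), n_max + 1)
            like_N = np.prod([binom.pmf(c, N, p) for c in counts], axis=0)
            return np.log(np.sum(like_N * poisson.pmf(N, lam)) + 1e-300)

        # Three repeated counts on one 1-km transect (hypothetical data).
        counts = np.array([4, 6, 5])
        grid = [(l, q) for l in np.linspace(1, 30, 59)
                       for q in np.linspace(0.05, 0.95, 19)]
        lam, p = max(grid, key=lambda g: nmix_loglik(counts, *g))
        print(lam, p)    # abundance and detection estimates for this transect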

  7. Accuracy of detecting stenotic changes on coronary cineangiograms using computer image processing

    Sugahara, Tetsuo; Kimura, Koji; Maeda, Hirofumi.

    1990-01-01

    To accurately interpret stenotic changes on coronary cineangiograms, an automatic method of detecting stenotic lesions using computer image processing was developed. First, tracing of the artery was performed. The vessel edges were then determined by unilateral Gaussian fitting. Stenotic changes were detected on the basis of the reference diameter estimated by Hough transformation. This method was evaluated in 132 segments of 27 arteries in 18 patients. Three observers carried out visual interpretation and computer-aided interpretation. The rate of detection by visual interpretation was 6.1, 28.8 and 20.5%, and by computer-aided interpretation, 39.4, 39.4 and 45.5%. With computer-aided interpretation, the agreement between any two observers on lesions and non-lesions was 40.2% and 59.8%, respectively. Therefore, visual interpretation tended to underestimate the stenotic changes on coronary cineangiograms. We think that computer-aided interpretation increases the reliability of diagnosis on coronary cineangiograms. (author)

  8. Computer-aided detection in breast MRI: a systematic review and meta-analysis

    Dorrius, Monique D.; Jansen-van der Weide, Marijke C.; van Ooijen, Peter M. A.; Pijnappel, Ruud M.; Oudkerk, Matthijs

    To evaluate the additional value of computer-aided detection (CAD) in breast MRI by assessing radiologists' accuracy in discriminating benign from malignant breast lesions. A literature search was performed with inclusion of relevant studies using a commercially available CAD system with automatic

  9. Automated Detection of Heuristics and Biases among Pathologists in a Computer-Based System

    Crowley, Rebecca S.; Legowski, Elizabeth; Medvedeva, Olga; Reitmeyer, Kayse; Tseytlin, Eugene; Castine, Melissa; Jukic, Drazen; Mello-Thoms, Claudia

    2013-01-01

    The purpose of this study is threefold: (1) to develop an automated, computer-based method to detect heuristics and biases as pathologists examine virtual slide cases, (2) to measure the frequency and distribution of heuristics and errors across three levels of training, and (3) to examine relationships of heuristics to biases, and biases to…

  10. Comparison of Computed Tomography and Chest Radiography in the Detection of Rib Fractures in Abused Infants

    Wootton-Gorges, Sandra L.; Stein-Wexler, Rebecca; Walton, John W.; Rosas, Angela J.; Coulter, Kevin P.; Rogers, Kristen K.

    2008-01-01

    Purpose: Chest radiographs (CXR) are the standard method for evaluating rib fractures in abused infants. Computed tomography (CT) is a sensitive method to detect rib fractures. The purpose of this study was to compare CT and CXR in the evaluation of rib fractures in abused infants. Methods: This retrospective study included all 12 abused infants…

  11. Detection of User Independent Single Trial ERPs in Brain Computer Interfaces: An Adaptive Spatial Filtering Approach

    Leza, Cristina; Puthusserypady, Sadasivan

    2017-01-01

    Brain Computer Interfaces (BCIs) use brain signals to communicate with the external world. The main challenges to address are speed, accuracy and adaptability. Here, a novel algorithm for P300 based BCI spelling system is presented, specifically suited for single-trial detection of Event...

  12. Detection of defects in logs using computer assisted tomography (CAT) scanning

    Tonner, P.D.; Lupton, L.R.

    1985-01-01

    The Chalk River Nuclear Laboratories of AECL have performed a preliminary feasibility study on the applicability of computer assisted tomographic techniques to detect the internal structure of logs. Cross sections of three logs have been obtained using a medical CAT scanner. The results show that knots, rot and growth rings are easily recognized in both dry and wet logs

  13. Progress in analysis of computed tomography (CT) images of hardwood logs for defect detection

    Erol Sarigul; A. Lynn Abbott; Daniel L. Schmoldt

    2003-01-01

    This paper addresses the problem of automatically detecting internal defects in logs using computed tomography (CT) images. The overall purpose is to assist in breakdown optimization. Several studies have shown that the commercial value of resulting boards can be increased substantially if defect locations are known in advance, and if this information is used to make...

  14. Detecting and Understanding the Impact of Cognitive and Interpersonal Conflict in Computer Supported Collaborative Learning Environments

    Prata, David Nadler; Baker, Ryan S. J. d.; Costa, Evandro d. B.; Rose, Carolyn P.; Cui, Yue; de Carvalho, Adriana M. J. B.

    2009-01-01

    This paper presents a model which can automatically detect a variety of student speech acts as students collaborate within a computer supported collaborative learning environment. In addition, an analysis is presented which gives substantial insight as to how students' learning is associated with students' speech acts, knowledge that will…

  15. Comparison of five cone beam computed tomography systems for the detection of vertical root fractures

    Hassan, B.; Metska, M.E.; Ozok, A.R.; van der Stelt, P.; Wesselink, P.R.

    2010-01-01

    Introduction: This study compared the accuracy of cone beam computed tomography (CBCT) scans made by five different systems in detecting vertical root fractures (VRFs). It also assessed the influence of the presence of root canal filling (RCF), CBCT slice orientation selection, and the type of tooth

  16. Detection of vertical root fractures in endodontically treated teeth by a cone beam computed tomography scan

    Hassan, B.; Metska, M.E.; Özok, A.R.; van der Stelt, P.; Wesselink, P.R.

    2009-01-01

    Our aim was to compare the accuracy of cone beam computed tomography (CBCT) scans and periapical radiographs (PRs) in detecting vertical root fractures (VRFs) and to assess the influence of root canal filling (RCF) on fracture visibility. Eighty teeth were endodontically prepared and divided into

  17. A Real-Time Plagiarism Detection Tool for Computer-Based Assessments

    Jeske, Heimo J.; Lall, Manoj; Kogeda, Okuthe P.

    2018-01-01

    Aim/Purpose: The aim of this article is to develop a tool to detect plagiarism in real time amongst students being evaluated for learning in a computer-based assessment setting. Background: Cheating or copying all or part of source code of a program is a serious concern to academic institutions. Many academic institutions apply a combination of…

  18. Computer-Aided Detection in Breast Magnetic Resonance Imaging: A Review

    Dorrius, M. D.; Van Ooijen, P.M.A.

    2008-01-01

    The aim of this study is to give an overview of the accuracy of the discrimination between benign and malignant breast lesions on MRI with and without the use of a computer-aided detection (CAD) system. One investigator selected relevant articles based on title and abstract. Ten articles were

  19. A Privacy-Preserving Framework for Collaborative Intrusion Detection Networks Through Fog Computing

    Wang, Yu; Xie, Lin; Li, Wenjuan

    2017-01-01

    Nowadays, cyber threats (e.g., intrusions) are distributed across various networks with the dispersed networking resources. Intrusion detection systems (IDSs) have already become an essential solution to defend against a large amount of attacks. With the development of cloud computing, a modern IDS...

  20. On Combining Multiple-Instance Learning and Active Learning for Computer-Aided Detection of Tuberculosis

    Melendez Rodriguez, J.C.; Ginneken, B. van; Maduskar, P.; Philipsen, R.H.H.M.; Ayles, H.; Sanchez, C.I.

    2016-01-01

    The major advantage of multiple-instance learning (MIL) applied to a computer-aided detection (CAD) system is that it allows optimizing the latter with case-level labels instead of accurate lesion outlines as traditionally required for a supervised approach. As shown in previous work, a MIL-based

  1. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is defined according to the model; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to ensure both global and local search ability. A performance test experiment was carried out on the CloudSim simulation platform, and the experimental results show that the improved differential evolution algorithm can reduce cloud computing task execution time and save user cost, achieving good optimal scheduling of cloud computing tasks.
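
    The following Python sketch is a generic DE/rand/1/bin differential evolution scheduler minimizing makespan on a toy task/VM model; it does not reproduce the paper's dynamic selection and mutation strategies, and all parameters and data are illustrative:

        import numpy as np

        def makespan(assign, task_len, vm_speed):
            """Fitness: completion time of the busiest VM for a task->VM map."""
            loads = np.zeros(len(vm_speed))
            for t, v in enumerate(assign):
                loads[v] += task_len[t] / vm_speed[v]
            return loads.max()

        def de_schedule(task_len, vm_speed, pop=30, gens=200, F=0.5, CR=0.9, seed=0):
            rng = np.random.default_rng(seed)
            n, m = len(task_len), len(vm_speed)
            decode = lambda x: np.floor(x).astype(int) % m   # genome -> VM indices
            X = rng.uniform(0, m, (pop, n))
            fit = np.array([makespan(decode(x), task_len, vm_speed) for x in X])
            for _ in range(gens):
                for i in range(pop):
                    a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
                    mutant = X[a] + F * (X[b] - X[c])
                    trial = np.where(rng.random(n) < CR, mutant, X[i]) % m
                    f = makespan(decode(trial), task_len, vm_speed)
                    if f <= fit[i]:                          # greedy selection
                        X[i], fit[i] = trial, f
            best = int(fit.argmin())
            return decode(X[best]), fit[best]

        tasks = np.array([4.0, 8.0, 2.0, 6.0, 5.0, 3.0])     # task lengths
        vms = np.array([1.0, 2.0])                           # VM speeds
        print(de_schedule(tasks, vms))                       # (assignment, makespan)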

  2. A Robust and Fast System for CTC Computer-Aided Detection of Colorectal Lesions

    Gareth Beddoe

    2010-01-01

    We present a complete, end-to-end computer-aided detection (CAD) system for identifying lesions in the colon, imaged with computed tomography (CT). This system includes facilities for colon segmentation, candidate generation, feature analysis, and classification. The algorithms have been designed to offer robust performance under variation in image data and patient preparation. By utilizing efficient 2D and 3D processing, software optimizations, multi-threading, feature selection, and an optimized cascade classifier, the CAD system quickly determines a set of detection marks. The colon CAD system has been validated on the largest set of data to date, and demonstrates excellent performance in terms of its high sensitivity, low false positive rate, and computational efficiency.

  3. A Human/Computer Learning Network to Improve Biodiversity Conservation and Research

    Kelling, Steve; Gerbracht, Jeff; Fink, Daniel; Lagoze, Carl; Wong, Weng-Keen; Yu, Jun; Damoulas, Theodoros; Gomes, Carla

    2012-01-01

    In this paper we describe eBird, a citizen-science project that takes advantage of the human observational capacity to identify birds to species, which is then used to accurately represent patterns of bird occurrences across broad spatial and temporal extents. eBird employs artificial intelligence techniques such as machine learning to improve data quality by taking advantage of the synergies between human computation and mechanical computation. We call this a Human-Computer Learning Network,...

  4. My4Sight: A Human Computation Platform for Improving Flu Predictions

    Akupatni, Vivek Bharath

    2015-01-01

    While many human computation (human-in-the-loop) systems exist in the field of Artificial Intelligence (AI) to solve problems that can't be solved by computers alone, comparatively fewer platforms exist for collecting human knowledge and for evaluating techniques that harness human insights to improve forecasting models for infectious diseases, such as Influenza and Ebola. In this thesis, we present the design and implementation of My4Sight, a human computation system developed...

  5. Algorithm 589. SICEDR: a FORTRAN subroutine for improving the accuracy of computed matrix eigenvalues

    Dongarra, J.J.

    1982-01-01

    SICEDR is a FORTRAN subroutine for improving the accuracy of a computed real eigenvalue and improving or computing the associated eigenvector. It is first used to generate information during the determination of the eigenvalues by the Schur decomposition technique. In particular, the Schur decomposition technique results in an orthogonal matrix Q and an upper quasi-triangular matrix T, such that A = QTQᵀ. Matrices A, Q, and T and the approximate eigenvalue, say lambda, are then used in the improvement phase. SICEDR uses an iterative method similar to iterative improvement for linear systems to improve the accuracy of lambda and improve or compute the eigenvector x in O(n²) work, where n is the order of the matrix A
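
    The flavor of such an improvement phase can be sketched in a few lines of Python. This is ordinary inverse iteration with a Rayleigh-quotient update, not the SICEDR routine itself, and the matrix and starting values are toy data:

        import numpy as np

        def improve_eigenpair(A, lam, x, n_iter=5):
            """Sharpen an approximate eigenpair (lam, x): solve a slightly
            shifted system to refine the eigenvector, then update lam via the
            Rayleigh quotient (O(n^2) per sweep once a factorization is reused)."""
            n = A.shape[0]
            for _ in range(n_iter):
                z = np.linalg.solve(A - (lam + 1e-10) * np.eye(n), x)
                x = z / np.linalg.norm(z)
                lam = x @ A @ x
            return lam, x

        A = np.array([[2.0, 1.0], [1.0, 3.0]])
        lam0, x0 = 3.6, np.array([0.5, 0.8])   # crude approximations
        lam, x = improve_eigenpair(A, lam0, x0 / np.linalg.norm(x0))
        print(lam)                             # ~3.6180, the exact eigenvalue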

  6. Comparison of ¹⁸F-fluorodeoxyglucose positron emission tomography/computed tomography, hydro-stomach computed tomography, and their combination for detecting primary gastric cancer

    Jang, Hye Young; Chung, Woo Suk; Song, E Rang; Kim, Jin Suk [Konyang University Myunggok Medical Research Institute, Konyang University Hospital, Konyang University College of Medicine, Daejeon (Korea, Republic of)

    2015-01-15

    To retrospectively compare the diagnostic accuracy for detecting primary gastric cancer on positron emission tomography/computed tomography (PET/CT) and hydro-stomach CT (S-CT) and determine whether the combination of the two techniques improves diagnostic performance. A total of 253 patients with pathologically proven primary gastric cancer underwent PET/CT and S-CT for the preoperative evaluation. Two radiologists independently reviewed the three sets (PET/CT set, S-CT set, and the combined set) of PET/CT and S-CT in a random order. They graded the likelihood for the presence of primary gastric cancer based on a 4-point scale. The diagnostic accuracy of the PET/CT set, the S-CT set, and the combined set were determined by the area under the alternative-free receiver operating characteristic curve, and sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Diagnostic accuracy, sensitivity, and NPV for detecting all gastric cancers and early gastric cancers (EGCs) were significantly higher with the combined set than those with the PET/CT and S-CT sets. Specificity and PPV were significantly higher with the PET/CT set than those with the combined and S-CT set for detecting all gastric cancers and EGCs. The combination of PET/CT and S-CT is more accurate than S-CT alone, particularly for detecting EGCs.

  7. Vehicle Localization by LIDAR Point Correlation Improved by Change Detection

    Schlichting, A.; Brenner, C.

    2016-06-01

    LiDAR sensors are proven sensors for accurate vehicle localization. Instead of detecting and matching features in the LiDAR data, we want to use the entire information provided by the scanners. As dynamic objects, like cars, pedestrians or even construction sites could lead to wrong localization results, we use a change detection algorithm to detect these objects in the reference data. If an object occurs in a certain number of measurements at the same position, we mark it and every containing point as static. In the next step, we merge the data of the single measurement epochs to one reference dataset, whereby we only use static points. Further, we also use a classification algorithm to detect trees. For the online localization of the vehicle, we use simulated data of a vertical aligned automotive LiDAR sensor. As we only want to use static objects in this case as well, we use a random forest classifier to detect dynamic scan points online. Since the automotive data is derived from the LiDAR Mobile Mapping System, we are able to use the labelled objects from the reference data generation step to create the training data and further to detect dynamic objects online. The localization then can be done by a point to image correlation method using only static objects. We achieved a localization standard deviation of about 5 cm (position) and 0.06° (heading), and were able to successfully localize the vehicle in about 93 % of the cases along a trajectory of 13 km in Hannover, Germany.
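
    A toy Python version of the change-detection step: a point survives into the reference map only if its voxelized location is occupied in enough measurement epochs. Voxel size, the epoch threshold, and the data are illustrative assumptions:

        import numpy as np

        def static_point_masks(epochs, voxel=0.5, min_epochs=3):
            """Mark a point static if its voxel cell is seen in at least
            min_epochs epochs; cells seen less often (cars, pedestrians,
            construction sites) are dropped before merging."""
            counts = {}
            for pts in epochs:
                for cell in {tuple(c) for c in np.floor(pts / voxel).astype(int)}:
                    counts[cell] = counts.get(cell, 0) + 1
            return [np.array([counts[tuple(c)] >= min_epochs
                              for c in np.floor(pts / voxel).astype(int)])
                    for pts in epochs]

        # Toy 2D scans: a wall seen in all 3 epochs, a "car" only in epoch 0.
        wall = np.array([[x, 0.0] for x in np.linspace(0, 5, 11)])
        car = np.array([[2.0, 3.0], [2.4, 3.0]])
        epochs = [np.vstack([wall, car]), wall.copy(), wall.copy()]
        masks = static_point_masks(epochs)
        print(int(masks[0].sum()), "of", len(masks[0]), "epoch-0 points are static")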

  8. Improved axial position detection in optical tweezers measurements

    Dreyer, Jakob Kisbye; Berg-Sørensen, Kirstine; Oddershede, Lene

    2004-01-01

    We investigate the axial position detection of a trapped microsphere in an optical trap by using a quadrant photodiode. By replacing the photodiode with a CCD camera, we obtain detailed information on the light scattered by the microsphere. The correlation of the interference pattern with the axial position displays complex behavior with regions of positive and negative interference. By analyzing the scattered light intensity as a function of the axial position of the trapped sphere, we propose a simple method to increase the sensitivity and control the linear range of axial position detection.

  9. Improving computer usage for students with physical disabilities through a collaborative approach: a pilot study.

    Borgestig, Maria; Falkmer, Torbjörn; Hemmingsson, Helena

    2013-11-01

    The aim of this study was to evaluate the effect of an assistive technology (AT) intervention to improve the use of available computers as assistive technology in educational tasks for students with physical disabilities during an ongoing school year. Fifteen students (aged 12-18) with physical disabilities, included in mainstream classrooms in Sweden, and their teachers took part in the intervention. Pre-, post-, and follow-up data were collected with Goal Attainment Scaling (GAS), a computer usage diary, and the Psychosocial Impact of Assistive Devices Scale (PIADS). Teachers' opinions of goal setting were collected at follow-up. The intervention improved goal-related computer usage in educational tasks, and teachers reported that they would use goal setting again when appropriate. At baseline, students reported a positive impact from computer usage, with no differences over time regarding the PIADS subscales independence, adaptability, or self-esteem. The AT intervention showed a positive effect on computer usage as AT in mainstream schools. Some additional support for teachers is recommended, as not all students improved in all goal-related computer usage. A clinical implication is that students' computer usage can be improved and that collaboratively established computer-based strategies can be carried out by teachers in mainstream schools.

  10. Quantitative Digital Tomosynthesis Mammography for Improved Breast Cancer Detection and Diagnosis

    Zhang, Yiheng

    2008-01-01

    .... When fully developed, the DTM can provide radiologists with improved quantitative, three-dimensional volumetric information on the breast tissue, and assist in breast cancer detection and diagnosis...

  11. Detection of cores in fingerprints with improved dimension reduction

    Bazen, A.M.; Veldhuis, Raymond N.J.

    In this paper, we present a statistical approach to core detection in fingerprint images that is based on the likelihood ratio, using models of variation of core templates and randomly chosen templates. Additionally, we propose an alternative dimension reduction method. Unlike standard linear

  12. Improving Climate Communication through Comprehensive Linguistic Analyses Using Computational Tools

    Gann, T. M.; Matlock, T.

    2014-12-01

    An important lesson from climate communication research is that there is no single way to reach out to and inform the public. Different groups conceptualize climate issues in different ways, and different groups have different values and assumptions. This variability makes it extremely difficult to communicate climate information effectively and objectively. One of the main challenges is the following: how do we acquire a better understanding of how values and assumptions vary across groups, including political groups? A necessary starting point is to pay close attention to the linguistic content of messages used across current popular media sources. Careful analyses of that information, including how it is realized in language for conservative and progressive media, may ultimately help climate scientists, government agency officials, journalists, and others develop more effective messages. Past research has looked at partisan media coverage of climate change, but little attention has been given to the fine-grained linguistic content of such media. And when researchers have done detailed linguistic analyses, they have relied primarily on hand-coding, an approach that is costly, labor-intensive, and time-consuming. Our project, building on recent work on partisan news media (Gann & Matlock, 2014; under review), uses high-dimensional semantic analyses and other automated classification techniques from the field of natural language processing to quantify how climate issues are characterized in media sources that differ in political orientation. In addition to discussing varied linguistic patterns, we share new methods for improving climate communication for varied stakeholders, and for developing better assessments of their effectiveness.

  15. Adapting detection sensitivity based on evidence of irregular sinus arrhythmia to improve atrial fibrillation detection in insertable cardiac monitors.

    Pürerfellner, Helmut; Sanders, Prashanthan; Sarkar, Shantanu; Reisfeld, Erin; Reiland, Jerry; Koehler, Jodi; Pokushalov, Evgeny; Urban, Luboš; Dekker, Lukas R C

    2017-10-03

    Intermittent change in P-wave discernibility during periods of ectopy and sinus arrhythmia is a cause of inappropriate atrial fibrillation (AF) detection in insertable cardiac monitors (ICM). To address this, we developed and validated an enhanced AF detection algorithm. Atrial fibrillation detection in the Reveal LINQ ICM uses patterns of incoherence in RR intervals and absence of P-wave evidence over a 2-min period. The enhanced algorithm includes P-wave evidence during RR irregularity as evidence of sinus arrhythmia or ectopy to adaptively optimize sensitivity for AF detection. The algorithm was developed and validated using Holter data from the XPECT and LINQ Usability studies, which collected surface electrocardiogram (ECG) and continuous ICM ECG over a 24-48 h period. The algorithm's detections were compared with Holter annotations, performed by multiple reviewers, to compute episode and duration detection performance. The validation dataset comprised 3187 h of valid Holter and LINQ recordings from 138 patients, with true AF in 37 patients yielding 108 true AF episodes ≥2 min and 449 h of AF. The enhanced algorithm, which adapts sensitivity for AF detection, reduced inappropriately detected episodes by 49% and inappropriately detected duration by 66%, with minimal reduction in sensitivity for true AF.
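
    The device algorithm itself is proprietary, so the sketch below is only a simplified stand-in for the logic described above: RR-interval incoherence over a 2-min window accumulates AF evidence, while P-wave evidence during irregular rhythm is treated as sinus arrhythmia or ectopy and adaptively raises the detection threshold. The irregularity metric, the threshold rule, and every parameter value are assumptions.

        # Simplified 2-min AF evidence window (illustrative stand-in only).
        import numpy as np

        def af_evidence(rr_ms, p_wave_score, base_threshold=0.12):
            """rr_ms: successive RR intervals (ms) in a 2-min window.
            p_wave_score: fraction of irregular beats preceded by a P wave."""
            drr = np.diff(rr_ms)
            irregularity = np.std(drr) / np.mean(rr_ms)  # normalized incoherence
            # Strong P-wave evidence implies conducted sinus rhythm, so more
            # irregularity is required before declaring AF.
            threshold = base_threshold * (1.0 + 2.0 * p_wave_score)
            return irregularity > threshold

        rng = np.random.default_rng(1)
        t = np.linspace(0, 20, 150)
        sinus_arrhythmia = 800 + 60 * np.sin(t) + rng.normal(0, 10, 150)
        print(af_evidence(sinus_arrhythmia, p_wave_score=0.9))  # expect False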

  16. Improving Air Force Active Network Defense Systems through an Analysis of Intrusion Detection Techniques

    Dunklee, David R

    2007-01-01

    .... The research then presents four recommendations to improve DCC operations. These include: transition from or improve the current signature-based IDS to include the capability to query and visualize network flows to detect malicious traffic...

  17. A Computationally Intelligent Approach to the Detection of Wormhole Attacks in Wireless Sensor Networks

    Mohammad Nurul Afsar Shaon

    2017-05-01

    A wormhole attack is one of the most critical and challenging security threats for wireless sensor networks because of its nature and ability to perform concealed malicious activities. This paper proposes an innovative wormhole detection scheme to detect wormhole attacks using computational intelligence and an artificial neural network (ANN). Most wormhole detection schemes reported in the literature assume the sensors are uniformly distributed in a network, and, furthermore, they use statistical and topological information and special hardware for their detection. However, these schemes may perform poorly in non-uniformly distributed networks, and, moreover, they may fail to defend against "out of band" and "in band" wormhole attacks. The aim of the proposed research is to develop a detection scheme that is able to detect all kinds of wormhole attacks in both uniformly and non-uniformly distributed sensor networks. Furthermore, the proposed scheme does not require any special hardware and causes no significant network overhead. Most importantly, the probable location of the malicious nodes can be identified by the proposed ANN-based detection scheme. We evaluate the efficacy of the proposed detection scheme in terms of detection accuracy, false positive rate, and false negative rate. The performance of the proposed algorithm is also compared with other machine learning techniques (i.e., support vector machine (SVM) and regularized nonlinear logistic regression (LR) based detection models). The simulation results show that the proposed ANN-based algorithm outperforms the SVM- or LR-based detection schemes in terms of detection accuracy, false positive rate, and false negative rate.
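
    A hedged sketch of the classification stage follows; the per-node features are hypothetical stand-ins (the paper derives its own network features), and the synthetic data exist only so the example runs end to end.

        # ANN (multilayer perceptron) labelling of nodes as wormhole-affected.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)
        n = 1000
        # Assumed features per node: mean hop count to neighbours,
        # neighbour-count anomaly, and round-trip-time deviation.
        benign = rng.normal([3.0, 0.0, 1.0], 0.5, size=(n, 3))
        wormhole = rng.normal([1.5, 2.0, 2.5], 0.5, size=(n, 3))
        X = np.vstack([benign, wormhole])
        y = np.array([0] * n + [1] * n)

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        ann = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                            random_state=0).fit(Xtr, ytr)
        print("detection accuracy:", ann.score(Xte, yte))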

  18. Using Hybrid Algorithm to Improve Intrusion Detection in Multi Layer Feed Forward Neural Networks

    Ray, Loye Lynn

    2014-01-01

    The need to detect malicious behavior on computer networks continues to be important for maintaining a safe and secure environment. The purpose of this study was to determine the relationship of multilayer feed-forward neural network architecture to the ability to detect abnormal behavior in networks. This involved building, training, and…

  19. A methodology of error detection: Improving speech recognition in radiology

    Voll, Kimberly Dawn

    2006-01-01

    Automated speech recognition (ASR) in radiology report dictation demands highly accurate and robust recognition software. Despite vendor claims, current implementations are suboptimal, leading to poor accuracy, and time and money wasted on proofreading. Thus, other methods must be considered for increasing the reliability and performance of ASR before it is a viable alternative to human transcription. One such method is post-ASR error detection, used to recover from the inaccuracy of speech r...

  20. Improved Detection of Microsatellite Instability in Early Colorectal Lesions.

    Jeffery W Bacher

    Microsatellite instability (MSI) occurs in over 90% of Lynch syndrome cancers and is considered a hallmark of the disease. MSI is an early event in colon tumor development, but screening polyps for MSI remains controversial because of reduced sensitivity compared to more advanced neoplasms. To increase sensitivity, we investigated the use of a novel type of marker consisting of long mononucleotide repeat (LMR) tracts. Adenomas from 160 patients, aged 29-55 years, were screened for MSI using the new markers and compared with current marker panels and immunohistochemistry standards. Overall, 15 tumors were scored as MSI-High using the LMRs, compared to 9 for the NCI panel and 8 for the MSI Analysis System (Promega). This difference represents at least a 1.7-fold increase in detection of MSI-High lesions over currently available markers. Moreover, the number of MSI-positive markers per sample and the size of allelic changes were significantly greater with the LMRs (p = 0.001), which increased confidence in MSI classification. The overall sensitivity and specificity of the LMR panel for detection of mismatch repair deficient lesions were 100% and 96%, respectively. In comparison, the sensitivity and specificity of the MSI Analysis System were 67% and 100%, and for the NCI panel, 75% and 97%. The difference in sensitivity between the LMR panel and the other panels was statistically significant (p < 0.001). The increased sensitivity for detection of the MSI-High phenotype in early colorectal lesions with the new LMR markers indicates that MSI screening for the early detection of Lynch syndrome might be feasible.

  1. Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools

    Boe, Bryce A.

    There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.

  2. Whole lung computed tomography for detection of pulmonary metastasis of osteosarcoma confirmed at thoracotomy

    Ishida, Itsuro; Fukuma, Seigo; Sawada, Kinya; Seki, Yasuo; Tanaka, Fumitaka

    1980-01-01

    Whole lung computed tomography (CT) was performed in patients with osteosarcoma of bone to evaluate its diagnostic efficacy, in comparison with conventional chest radiography and whole lung tomography, in detecting metastatic nodules in the lung. CT detected pulmonary nodules in 11 of the 12 patients with osteosarcoma; in 6 of these 11 patients, pulmonary nodules were detected by CT, conventional chest radiography, and whole lung tomography alike, and 22 pulmonary nodules were resected at thoracotomy and proved to be metastatic lesions. Nineteen of the 22 resected nodules were detected by CT, and nine of the 22 were discovered only by CT, while only 10 of the 22 were recognized by conventional chest radiography and whole lung tomography. Two pulmonary nodules, measuring 1 mm and 2 mm in diameter, were not detected by any of the three methods. Of three nodules that proved to be false positives on CT in two patients, two were histologically suture granulomas induced by the previous operation, and in the remaining case a deformed protuberance of the chest wall was erroneously interpreted as a subpleural, intrapulmonary nodule. We conclude that CT is the most efficient method for detecting pulmonary nodules in patients with osteosarcoma, but that the minimum diameter of a nodule detectable by CT is 3 mm; a smaller nodule with a tendency to ossify, however, can still be detected by CT. (author)

  3. Computational study of a magnetic design to improve the diagnosis of malaria: 2D model

    Vyas, Siddharth; Genis, Vladimir; Friedman, Gary

    2017-01-01

    This paper investigates the feasibility of a cost-effective device based on high-gradient magnetic separation for the detection and identification of malaria parasites in a blood sample. The design utilizes the magnetic properties of hemozoin present in malaria-infected red blood cells (mRBCs) in order to separate and concentrate them inside a microfluidic channel slide for easier examination under the microscope. The design consists of a rectangular microfluidic channel with multiple magnetic wires positioned on top of and underneath it along the length of the channel at a small angle with respect to the channel axis. Strong magnetic field gradients, produced by the wires, exert sufficient magnetic forces on the mRBCs to separate and concentrate them in a specific region small enough to fit within the microscope field of view at magnifications typically required to identify the malaria parasite type. The feasibility of the device is studied using a model where the trajectories of the mRBCs inside the channel are determined by first-order ordinary differential equations (ODEs) solved numerically with a multistep ODE solver available within MATLAB. The mRBC trajectories reveal that it is possible to separate and concentrate the mRBCs in less than 5 min, even in cases of very low parasitemia (1–10 parasites/µL of blood), with the blood sample volumes of around 3 µL employed today. - Highlights: • A simple and cost-effective design is presented to improve the diagnosis of malaria. • The design is studied using a computational model. • It is possible to concentrate malaria-infected cells in a small area. • This can improve slide-examination and the efficiency of microscopists. • This can improve diagnosis of low-parasitemia and asymptomatic malaria.
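
    The trajectory model lends itself to a compact sketch: under Stokes drag, the magnetic force on an mRBC balances viscous drag, giving first-order ODEs for position. The Python version below (the paper uses a multistep solver in MATLAB) invents all physical parameters, including the exponential force profile, purely for illustration.

        # 2D mRBC trajectory under flow plus a wall-directed magnetic force.
        import numpy as np
        from scipy.integrate import solve_ivp

        eta = 1e-3                      # fluid viscosity (Pa*s), assumed
        r = 3e-6                        # effective cell radius (m), assumed
        drag = 6 * np.pi * eta * r      # Stokes drag coefficient
        v_flow = 1e-4                   # axial flow speed (m/s), assumed
        F0, decay = 5e-12, 2e-5         # force scale (N) and decay length (m)

        def rhs(t, s):
            x, y = s                    # x: along channel, y: distance from wire
            f_mag = F0 * np.exp(-y / decay)   # force pulling the cell to y = 0
            return [v_flow, -f_mag / drag]

        def hit_wall(t, s):
            return s[1]                 # zero when the cell reaches the wire plane
        hit_wall.terminal = True

        sol = solve_ivp(rhs, (0, 300), [0.0, 5e-5], events=hit_wall, max_step=0.5)
        print("cell reaches the wall after %.1f s" % sol.t[-1])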

  5. Improving Visual Threat Detection: Research to Validate the Threat Detection Skills Trainer

    2013-08-01

    Threat Detection and Mitigation Strategies...quicker when identifying threats in relevant locations. This task utilized the Flicker paradigm (Rensink, O'Regan, & Clark, 1997; Scholl, 2000)...the meaning and implication of threats, why cues were relevant, strategies used to detect and mitigate threats, and challenges when attempting to...

  6. A Novel Method for Detecting and Computing Univolatility Curves in Ternary Mixtures

    Shcherbakov, Nataliya; Rodriguez-Donis, Ivonne; Abildskov, Jens

    2017-01-01

    Residue curve maps (RCMs) and univolatility curves are crucial tools for analysis and design of distillation processes. Even in the case of ternary mixtures, the topology of these maps is highly non-trivial. We propose a novel method allowing detection and computation of univolatility curves...... of the generalized univolatility and unidistribution curves in the three dimensional composition – temperature state space lead to a simple and efficient algorithm of computation of the univolatility curves. Two peculiar ternary systems, namely diethylamine – chloroform – methanol and hexane – benzene...

  7. Computed tomography lymphography for the detection of sentinel nodes in patients with gastric carcinoma

    Tsujimoto, Hironori; Yaguchi, Yoshihisa; Sakamoto, Naoko

    2010-01-01

    The sentinel node (SN) concept has been found to be feasible in gastric cancer. However, the lymphatic network of gastric cancer may be more complex, and it may be difficult to visualize all the SN distributed in unexpected areas by conventional modalities. In this study, we evaluate the feasibility and efficacy of CT lymphography for the detection of SN in gastric cancer. A total of 24 patients with early gastric cancer were enrolled in the study. Three modalities (CT lymphography, dye, and radioisotope [RI] methods) were used for the detection of SN. The CT lymphography images were obtained 10 min after injection of the contrast agents. The SN were successfully identified by CT lymphography in 83.3% of patients; detection rates by the dye and RI methods were 95% and 100%, respectively. Most patients in whom SN were successfully detected by CT lymphography had positive results at 5 min after injection of the contrast material. The SN stations detected by CT lymphography were consistent with or included those detected by the dye and/or RI methods. In conclusion, CT lymphography for the detection of SN in gastric cancer is feasible and has several advantages. However, based on this initial experience, CT lymphography had a relatively low detection rate compared with conventional methods, and further efforts will be necessary to improve the detection rate and widen the clinical application of CT lymphography for the detection of SN in gastric cancer. (author)

  8. Automated image-based colon cleansing for laxative-free CT colonography computer-aided polyp detection

    Linguraru, Marius George; Panjwani, Neil; Fletcher, Joel G.; Summers, Ronald M.

    2011-01-01

    Purpose: To evaluate the performance of a computer-aided detection (CAD) system for detecting colonic polyps at noncathartic computed tomography colonography (CTC) in conjunction with an automated image-based colon cleansing algorithm. Methods: An automated colon cleansing algorithm was designed to detect and subtract tagged-stool, accounting for heterogeneity and poor tagging, to be used in conjunction with a colon CAD system. The method is locally adaptive and combines intensity, shape, and texture analysis with probabilistic optimization. CTC data from cathartic-free bowel preparation were acquired for testing and training the parameters. Patients underwent various colonic preparations with barium or Gastroview in divided doses over 48 h before scanning. No laxatives were administered and no dietary modifications were required. Cases were selected from a polyp-enriched cohort and included scans in which at least 90% of the solid stool was visually estimated to be tagged and each colonic segment was distended in either the prone or supine view. The CAD system was run comparatively with and without the stool subtraction algorithm. Results: The dataset comprised 38 CTC scans from prone and/or supine scans of 19 patients containing 44 polyps larger than 10 mm (22 unique polyps, if matched between prone and supine scans). The results are robust on fine details around folds, thin-stool linings on the colonic wall, near polyps and in large fluid/stool pools. The sensitivity of the CAD system is 70.5% per polyp at a rate of 5.75 false positives/scan without using the stool subtraction module. This detection improved significantly (p = 0.009) after automated colon cleansing on cathartic-free data to 86.4% true positive rate at 5.75 false positives/scan. Conclusions: An automated image-based colon cleansing algorithm designed to overcome the challenges of the noncathartic colon significantly improves the sensitivity of colon CAD by approximately 15%.

  9. Development of a Wireless Computer Vision Instrument to Detect Biotic Stress in Wheat

    Joaquin J. Casanova

    2014-09-01

    Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications.
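
    The segmentation step can be illustrated with a two-component Gaussian mixture fitted by expectation maximization, as below; the hue distributions, the component count, and the rule that vegetation has the higher mean hue are our assumptions, not the paper's calibrated values.

        # EM segmentation of pixels into soil and vegetation by hue.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(7)
        soil_hue = rng.normal(30, 8, size=5000)    # synthetic soil pixels
        veg_hue = rng.normal(115, 6, size=5000)    # synthetic wheat pixels
        hues = np.concatenate([soil_hue, veg_hue]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(hues)
        veg_comp = int(np.argmax(gmm.means_))      # vegetation: higher hue
        labels = gmm.predict(hues)
        veg_pixels = hues[labels == veg_comp]
        print("vegetation cover: %.2f, mean hue: %.1f"
              % (veg_pixels.size / hues.size, veg_pixels.mean()))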

  10. Improvements in or relating to radiation detection arrangements

    Davis, G.P.

    1977-01-01

    A radiation detection arrangement is described that comprises a number of scintillator devices and a single multi-channel photomultiplier tube. Light from the scintillator devices is incident on the photocathode through an entrance window in the tube, and multiplier entrance separating means are provided whereby light from each of the devices is made incident upon the channel entrances of the photomultiplier tube. Various geometrical forms for the scintillator devices are described. This arrangement avoids the use of a large number of small photomultiplier tubes, which is expensive and gives rise to difficulties in stacking the tubes in closely spaced side-by-side relationship. (U.K.)

  11. Improved Pulse Detection from Head Motions Using DCT

    Irani, Ramin; Nasrollahi, Kamal; Moeslund, Thomas B.

    2014-01-01

    The heart pulsation sends the blood throughout the body. The rate at which the heart performs this vital task, the heartbeat rate, is of crucial importance to the body. Therefore, measuring the heartbeat rate, a.k.a. pulse detection, is very important in many applications, especially medical ones. To measure it, physicians traditionally either sense the pulsations of some blood vessels or install sensors on the body. In either case, there is a need for physical contact between the sensor and the body to obtain the heartbeat rate. This might not always be feasible, for example, for applications...

  12. Anomaly Detection for Aviation Safety Based on an Improved KPCA Algorithm

    Xiaoyu Zhang

    2017-01-01

    Thousands of flight datasets must be analyzed per day even for a moderately sized fleet; flight datasets are therefore very large. In this paper, an improved kernel principal component analysis (KPCA) method is proposed to search for signatures of anomalies in flight datasets through the squared prediction error statistics, in which the number of principal components and the confidence for the confidence limit are automatically determined by an OpenMP-based K-fold cross-validation algorithm, and the parameter in the radial basis function (RBF) is optimized by a GPU-based kernel learning method. Performed on an Nvidia GeForce GTX 660, the computation of the proposed GPU-based RBF parameter is 112.9 times (average 82.6 times) faster than sequential CPU task execution. The OpenMP-based K-fold cross-validation process for training the KPCA anomaly detection model becomes 2.4 times (average 1.5 times) faster than sequential CPU task execution. Experiments show that the proposed approach can effectively detect anomalies with an accuracy of 93.57% and a false positive alarm rate of 1.11%.
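
    A sketch of the core detection statistic follows: a KPCA model is fitted on normal data, each sample is projected and reconstructed, and samples whose squared prediction error (SPE) exceeds an empirical confidence limit are flagged. The component count, RBF gamma, and percentile limit are fixed by hand here, whereas the paper tunes them with OpenMP-based K-fold cross-validation and GPU-based kernel learning.

        # KPCA anomaly detection via the squared prediction error (SPE).
        import numpy as np
        from sklearn.decomposition import KernelPCA

        rng = np.random.default_rng(3)
        normal = rng.normal(0, 1, size=(500, 10))   # stand-in "normal" flights
        anomaly = rng.normal(4, 1, size=(5, 10))    # stand-in anomalies

        kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.05,
                         fit_inverse_transform=True).fit(normal)

        def spe(X):
            recon = kpca.inverse_transform(kpca.transform(X))
            return np.sum((X - recon) ** 2, axis=1)

        limit = np.percentile(spe(normal), 99)      # empirical confidence limit
        print("flagged:", int(np.sum(spe(anomaly) > limit)), "of", len(anomaly))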

  13. Diagnostic accuracy of multi-slice computed tomographic angiography in the detection of cerebral aneurysms

    Haghighatkhah, H. R.; Sabouri, S.; Borzouyeh, F.; Bagherzadeh, M. H.; Bakhshandeh, H.; Jalali, A. H.

    2008-01-01

    Multislice computed tomographic angiography is a rapid and minimally invasive method for the detection of intracranial aneurysms. The purpose of this study was to compare multislice computed tomographic angiography with digital subtraction angiography in the diagnosis of cerebral aneurysms. Patients and Methods: In this cross-sectional study, we evaluated 111 consecutive patients [42 (37.8%) male and 69 (62.2%) female] who were admitted with clinical symptoms and signs suggestive of harboring an intracranial aneurysm, using a four-detector multislice computed tomographic angiography scanner. We then compared the results of multislice computed tomographic angiography with digital subtraction angiography results as the gold standard. Digital subtraction angiography was performed by bilateral selective common carotid artery injections and either unilateral or bilateral vertebral artery injections, as necessary. Multislice computed tomographic angiography images were interpreted by one radiologist, and digital subtraction angiography was performed by another radiologist who was blinded to the interpretation of the multislice computed tomographic angiograms. Results: The mean ± SD age of the patients was 49.1 ± 13.6 years (range: 12-84 years). We performed multislice computed tomographic angiography in 111 patients and digital subtraction angiography in 85 patients. The sensitivity, specificity, positive predictive value, negative predictive value, and positive and negative likelihood ratios of multislice computed tomographic angiography, compared with digital subtraction angiography as the gold standard, were 100%, 90%, 87.5%, 100%, 10, and 0, respectively. Conclusion: Multislice computed tomographic angiography seems to be an accurate and noninvasive imaging modality in the diagnosis of intracranial aneurysms.
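
    The reported figures follow directly from a 2x2 confusion table. The counts below (TP = 35, FP = 5, FN = 0, TN = 45 among the 85 patients who underwent digital subtraction angiography) are hypothetical, chosen only because they reproduce the quoted statistics exactly.

        # Diagnostic accuracy metrics from a 2x2 table.
        def diagnostic_metrics(tp, fp, fn, tn):
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            ppv = tp / (tp + fp)
            npv = tn / (tn + fn)
            lr_pos = sens / (1 - spec)        # positive likelihood ratio
            lr_neg = (1 - sens) / spec        # negative likelihood ratio
            return sens, spec, ppv, npv, lr_pos, lr_neg

        # -> 100%, 90%, 87.5%, 100%, 10 and 0 (up to floating point), as reported.
        print(diagnostic_metrics(35, 5, 0, 45))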

  14. The efficacy of using computer-aided detection (CAD) for detection of breast cancer in mammography screening: a systematic review.

    Henriksen, Emilie L; Carlsen, Jonathan F; Vejborg, Ilse Mm; Nielsen, Michael B; Lauridsen, Carsten A

    2018-01-01

    Background: Early detection of breast cancer (BC) is crucial in lowering mortality. Purpose: To present an overview of studies concerning computer-aided detection (CAD) in screening mammography for early detection of BC and to compare the diagnostic accuracy and recall rates (RR) of single reading (SR) with SR + CAD and of double reading (DR) with SR + CAD. Material and Methods: The PRISMA guidelines were used as a review protocol. Articles on clinical trials concerning CAD for detection of BC in a screening population were included. The literature search resulted in 1522 records; 1491 records were excluded by abstract and 18 by full-text reading, leaving 13 articles for inclusion. Results: All but two studies from the SR vs. SR + CAD group showed an increased sensitivity and/or cancer detection rate (CDR) when adding CAD. The DR vs. SR + CAD group showed no significant differences in sensitivity and CDR. Adding CAD to SR increased the RR and decreased the specificity in all but one study. For the DR vs. SR + CAD group, only one study reported a significant difference in RR. Conclusion: All but two studies showed an increase in RR, sensitivity, and CDR when adding CAD to SR. Compared to DR, no statistically significant differences in sensitivity or CDR were reported. Additional studies based on organized population-based screening programs, with longer follow-up time, high-volume readers, and digital mammography, are needed to evaluate the efficacy of CAD.

  15. CINDA-3G: Improved Numerical Differencing Analyzer Program for Third-Generation Computers

    Gaski, J. D.; Lewis, D. R.; Thompson, L. R.

    1970-01-01

    The goal of this work was to develop a new and versatile program to supplement or replace the original Chrysler Improved Numerical Differencing Analyzer (CINDA) thermal analyzer program in order to take advantage of the improved systems software and machine speeds of the third-generation computers.

  16. Detection of small traumatic hemorrhages using a computer-generated average human brain CT.

    Afzali-Hashemi, Liza; Hazewinkel, Marieke; Tjepkema-Cloostermans, Marleen C; van Putten, Michel J A M; Slump, Cornelis H

    2018-04-01

    Computed tomography is a standard diagnostic imaging technique for patients with traumatic brain injury (TBI). A limitation is its poor-to-moderate sensitivity for small traumatic hemorrhages. A pilot study using an automatic method to detect hemorrhages [Formula: see text] in diameter in patients with TBI is presented. We created an average image from 30 normal noncontrast CT scans that were automatically aligned using deformable image registration as implemented in the Elastix software. Subsequently, the average image was aligned to the scans of TBI patients, and the hemorrhages were detected by a voxelwise subtraction of the average image from the CT scans of nine TBI patients. An experienced neuroradiologist and a radiologist in training assessed the presence of hemorrhages in the final images and determined the false positives and false negatives. The nine CT scans contained 67 small hemorrhages, of which 97% were correctly detected by our system. The neuroradiologist detected three false positives, and the radiologist in training found two false positives. For one patient, our method showed a hemorrhagic contusion that was originally missed. Comparing individual CT scans with a computed average may assist physicians in detecting small traumatic hemorrhages in patients with TBI.

  17. COMPUTING

    I. Fisk

    2013-01-01

    Computing activity ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: Since the beginning of 2013, the Computing Operations team has successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data). In LS1, our emphasis is on increasing the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  18. Noninvasive detection of macrophages in atherosclerotic lesions by computed tomography enhanced with PEGylated gold nanoparticles

    Qin J

    2014-12-01

    Jinbao Qin,1,* Chen Peng,2,* Binghui Zhao,2,* Kaichuang Ye,1 Fukang Yuan,1 Zhiyou Peng,1 Xinrui Yang,1 Lijia Huang,1 Mier Jiang,1 Qinghua Zhao,3 Guangyu Tang,2 Xinwu Lu1,4 1Department of Vascular Surgery, Shanghai Ninth People’s Hospital Affiliated to Shanghai JiaoTong University, School of Medicine; 2Department of Radiology, Shanghai Tenth People’s Hospital Affiliated to Tongji University, School of Medicine; 3Department of Orthopaedics, Shanghai First People’s Hospital, School of Medicine, Shanghai Jiao Tong University; 4Vascular Center of Shanghai JiaoTong University, Shanghai, People’s Republic of China *These authors contributed equally to this work Abstract: Macrophages are becoming increasingly significant in the progression of atherosclerosis (AS). Molecular imaging of macrophages may improve the detection and characterization of AS. In this study, dendrimer-entrapped gold nanoparticles (Au DENPs) with polyethylene glycol (PEG) and fluorescein isothiocyanate (FI) coatings were designed, tested, and applied as contrast agents for the enhanced computed tomography (CT) imaging of macrophages in atherosclerotic lesions. Cell counting kit-8 assay, fluorescence microscopy, silver staining, and transmission electron microscopy revealed that the FI-functionalized Au DENPs are noncytotoxic at high concentrations (3.0 µM) and can be efficiently taken up by murine macrophages in vitro. These nanoparticles were administered to apolipoprotein E knockout mice as AS models, which demonstrated that the macrophage burden in atherosclerotic areas can be tracked noninvasively, dynamically, and three-dimensionally in live animals using micro-CT. Our findings suggest that the designed PEGylated gold nanoparticles are promising biocompatible nanoprobes for the CT imaging of macrophages in atherosclerotic lesions and will provide new insights into the pathophysiology of AS and other inflammatory diseases. Keywords: atherosclerosis, CT, in vivo

  19. Computational cost for detecting inspiralling binaries using a network of laser interferometric detectors

    Pai, Archana; Bose, Sukanta; Dhurandhar, Sanjeev

    2002-01-01

    We extend a coherent network data-analysis strategy developed earlier for detecting Newtonian waveforms to the case of post-Newtonian (PN) waveforms. Since the PN waveform depends on the individual masses of the inspiralling binary, the parameter-space dimension increases by one from that of the Newtonian case. We obtain the number of templates and estimate the computational costs for PN waveforms: for a lower mass limit of 1 M⊙, for LIGO-I noise, and with a 3% maximum mismatch, the online computational speed requirement for a single detector is a few Gflops; for a two-detector network it is hundreds of Gflops, and for a three-detector network it is tens of Tflops. Apart from idealistic networks, we obtain results for realistic networks comprising LIGO and VIRGO. Finally, we compare costs incurred in a coincidence detection strategy with those incurred in the coherent strategy detailed above.

  20. SYN Flood Attack Detection in Cloud Computing using Support Vector Machine

    Zerina Mašetić

    2017-11-01

    Cloud computing is a trending technology, as it reduces the cost of running a business. However, many companies are skeptical about moving towards the cloud due to security concerns. Based on the Cloud Security Alliance report, Denial of Service (DoS) attacks are among the top 12 attacks in cloud computing. Therefore, it is important to develop a mechanism for the detection and prevention of these attacks. The aim of this paper is to evaluate the Support Vector Machine (SVM) algorithm in creating a model for the classification of DoS attacks and normal network behaviors. The study was performed in several phases: (a) attack simulation, (b) data collection, (c) feature selection, and (d) classification. The proposed model achieved 100% classification accuracy with a true positive rate (TPR) of 100%. SVM showed outstanding performance in DoS attack detection and proved that it can serve as a valuable asset in the network security area.
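
    A hedged sketch of the classification phase follows; the per-window features are synthetic stand-ins for the collected flow statistics, and the parameter choices are ours, not the paper's.

        # SVM separation of SYN-flood windows from normal traffic windows.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)
        # Assumed features per window: SYN rate, SYN/ACK ratio, unique source IPs.
        normal = rng.normal([20, 1.0, 15], [5, 0.1, 4], size=(400, 3))
        flood = rng.normal([900, 8.0, 300], [100, 1.0, 50], size=(400, 3))
        X = np.vstack([normal, flood])
        y = np.array([0] * 400 + [1] * 400)

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        svm = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)
        print("accuracy:", svm.score(Xte, yte))   # near-perfect on separable data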

  2. Improving Automation Routines for Automatic Heating Load Detection in Buildings

    Stephen Timlin

    2012-11-01

    Energy managers use weather compensation data and heating system cut-off routines to reduce heating energy consumption in buildings and improve user comfort. These routines are traditionally based on the calculation of an estimated building load that is inferred from the external dry bulb temperature at any point in time. While this method does reduce heating energy consumption and accidental overheating, it can be inaccurate under some weather conditions and therefore has limited effectiveness. There remains considerable scope to improve on the accuracy and relevance of the traditional method by expanding the calculations used to include a larger range of environmental metrics. It is proposed that the weather compensation and automatic shut-off routines in common use could be improved notably, at little additional cost, by the inclusion of additional weather metrics. This paper examines the theoretical relationship between various external metrics and building heating loads. Results of the application of an advanced routine to a recently constructed building are examined, and estimates are made of the potential savings that can be achieved through the use of the proposed routines.

  3. Fine focal spot size improves image quality in computed tomography abdomen and pelvis

    Goh, Yin P.; Low, Keat; Kuganesan, Ahilan [Monash Health, Diagnostic Imaging Department, 246, Clayton Road, Clayton, Victoria (Australia); Lau, Kenneth K. [Monash Health, Diagnostic Imaging Department, 246, Clayton Road, Clayton, Victoria (Australia); Monash University, Department of Medicine, Faculty of Medicine, Nursing and Health Sciences, Victoria (Australia); Buchan, Kevin [Philips Healthcare, Clinical Science, PO Box 312, Mont Albert, Victoria (Australia); Oh, Lawrence Chia Wei [Flinders Medical Centre, Division of Medical Imaging, Bedford Park South (Australia); Huynh, Minh [Swinburne University, Department of Statistics, Data Science and Epidemiology, School of Health Sciences, Faculty of Health, Arts and Design, Hawthorn (Australia)

    2016-12-15

    To compare the image quality between fine focal spot size (FFSS) and standard focal spot size (SFSS) in computed tomography of the abdomen and pelvis (CTAP). This retrospective review included all consecutive adult patients undergoing contrast-enhanced CTAP between June and September 2014. Two blinded radiologists assessed the margin clarity of the abdominal viscera and the detected lesions using a five-point grading scale. Cohen's kappa test was used to examine the inter-observer reliability between the two reviewers for organ margin clarity. Mann-Whitney U testing was utilised to assess the statistical difference in organ and lesion margin clarity. 100 consecutive CTAPs were recruited: 52 were examined with an SFSS of 1.1 x 1.2 mm and 48 with an FFSS of 0.6 x 0.7 mm. There was substantial agreement for organ margin clarity (mean κ = 0.759, p < 0.001) among the reviewers. FFSS produced images with clearer organ margins (U = 76194.0, p < 0.001, r = 0.523) and clearer lesion margins (U = 239, p = 0.052, r = 0.269). FFSS CTAP improves image quality in terms of better organ and lesion margin clarity. Fine-focus CT scanning is a novel technique that may be applied in routine CTAP imaging. (orig.)

  4. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    Guo, Yumeng; Zeng, Li

    2017-01-01

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.

  5. Incorporating modern neuroscience findings to improve brain-computer interfaces: tracking auditory attention.

    Wronkiewicz, Mark; Larson, Eric; Lee, Adrian Kc

    2016-10-01

    Brain-computer interface (BCI) technology allows users to generate actions based solely on their brain signals. However, current non-invasive BCIs generally classify brain activity recorded from surface electroencephalography (EEG) electrodes, which can hinder the application of findings from modern neuroscience research. In this study, we use source imaging, a neuroimaging technique that projects EEG signals onto the surface of the brain, in a BCI classification framework. This allowed us to incorporate prior research from functional neuroimaging to target activity from a cortical region involved in auditory attention. Classifiers trained to detect attention switches performed better with source imaging projections than with EEG sensor signals. Within source imaging, including subject-specific anatomical MRI information (instead of using a generic head model) further improved classification performance. This source-based strategy also reduced accuracy variability across three dimensionality reduction techniques, a major design choice in most BCIs. Our work shows that source imaging provides clear quantitative and qualitative advantages to BCIs and highlights the value of incorporating modern neuroscience knowledge and methods into BCI systems.

  7. Improved algorithm for quantum separability and entanglement detection

    Ioannou, L.M.; Ekert, A.K.; Travaglione, B.C.; Cheung, D.

    2004-01-01

    Determining whether a quantum state is separable or entangled is a problem of fundamental importance in quantum information science. It has recently been shown that this problem is NP-hard, suggesting that an efficient, general solution does not exist. There is a highly inefficient 'basic algorithm' for solving the quantum separability problem which follows from the definition of a separable state. By exploiting specific properties of the set of separable states, we introduce a classical algorithm that solves the problem significantly faster than the 'basic algorithm', allowing a feasible separability test where none previously existed, e.g., in 3x3-dimensional systems. Our algorithm also provides a unique tool in the experimental detection of entanglement

  8. Management algorithm for images of hepatic incidentalomas, renal and adrenal detected by computed tomography

    Montero Gonzalez, Allan

    2012-01-01

    A literature review was carried out of diagnostic and monitoring imaging algorithms for incidentalomas of solid abdominal organs (liver, kidney, and adrenal glands) detected by computed tomography (CT). The criteria have been unified and updated for an effective diagnosis. The proposed algorithms are presented in simplified form. The imaging techniques are specified for each pathology, showing the advantages and disadvantages of their use and justifying their application in daily practice.

  9. Detection of Malware and Kernel-Level Rootkits in Cloud Computing Environments

    Win, Thu Yein; Tianfield, Huaglory; Mair, Quentin

    2016-01-01

    Cyberattacks targeted at the virtualization infrastructure underlying cloud computing services have become increasingly sophisticated. This paper presents a novel malware and rootkit detection system which protects the guests against different attacks. It combines system call monitoring and system call hashing on the guest kernel together with Support Vector Machines (SVM)-based external monitoring on the host. We demonstrate the effectiveness of our solution by evaluating it against well-known use...

  10. Progress of computer-aided detection/diagnosis (CAD) in dentistry

    Akitoshi Katsumata

    2014-08-01

    CAD is also useful in the detection and evaluation of dental and maxillofacial lesions. Identifying alveolar bone resorption due to periodontitis and detecting radiolucent jaw lesions (such as radicular and dentigerous cysts) are important goals for CAD. CAD can be applied not only to panoramic radiography but also to dental cone-beam computed tomography (CBCT) images. Linking of CAD and teleradiology will be an important issue.

  11. Noninvasive Characterization of Indeterminate Pulmonary Nodules Detected on Chest High-Resolution Computed Tomography

    2017-10-01

    Report-documentation fragments only (grant number W81XWH-15-1-0110; author: Fabien Maldonado). The recoverable content includes case selection flowcharts: screen-detected lung cancers (N = 649), adenocarcinomas (N = 353), squamous cell carcinomas (N = 136).

  12. Proposed Network Intrusion Detection System ‎Based on Fuzzy c Mean Algorithm in Cloud ‎Computing Environment

    Shawq Malik Mehibs

    2017-12-01

    Nowadays, cloud computing has become an integral part of the IT industry; it provides a working environment that allows users to share data and resources over the internet. Because cloud computing is a virtual grouping of resources offered over the internet, it raises various matters related to security and privacy. This makes intrusion detection very important for detecting outside and inside intruders, with a high detection rate and a low false-positive alarm rate, in the cloud environment. This work proposes a network intrusion detection module based on the fuzzy c-means algorithm. The KDD99 dataset was used for the experiments. The proposed system is characterized by a high detection rate with a low false-positive alarm rate.
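
    A generic fuzzy c-means implementation (a sketch, not the paper's exact module) is shown below: each sample receives a graded membership in every cluster, and flows falling in a cluster dominated by known-attack records can then be flagged as intrusions.

        # Minimal NumPy fuzzy c-means.
        import numpy as np

        def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)      # memberships sum to 1
            for _ in range(iters):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
                U = 1.0 / (d + 1e-10) ** (2 / (m - 1))
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (200, 4)),    # stand-in normal flows
                       rng.normal(6, 1, (50, 4))])    # stand-in attack flows
        centers, U = fuzzy_c_means(X)
        print("cluster sizes:", np.bincount(U.argmax(axis=1)))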

  13. Quantification, improvement, and harmonization of small lesion detection with state-of-the-art PET

    Vos, Charlotte S. van der [Radboud University Medical Centre, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); University of Twente, MIRA Institute for Biomedical Technology and Technical Medicine, Enschede (Netherlands); Koopman, Danielle [University of Twente, MIRA Institute for Biomedical Technology and Technical Medicine, Enschede (Netherlands); Isala Hospital, Department of Nuclear Medicine, Zwolle (Netherlands); Rijnsdorp, Sjoerd; Arends, Albert J. [Catharina Hospital, Department of Medical Physics, Eindhoven (Netherlands); Boellaard, Ronald [University of Groningen, University Medical Centre Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands); VU University Medical Center, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); Dalen, Jorn A. van [Isala Hospital, Department of Nuclear Medicine, Zwolle (Netherlands); Isala, Department of Medical Physics, Zwolle (Netherlands); Lubberink, Mark [Uppsala University, Department of Surgical Sciences, Uppsala (Sweden); Uppsala University Hospital, Department of Medical Physics, Uppsala (Sweden); Willemsen, Antoon T.M. [University of Groningen, University Medical Centre Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands); Visser, Eric P. [Radboud University Medical Centre, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands)

    2017-08-15

    In recent years, there have been multiple advances in positron emission tomography/computed tomography (PET/CT) that improve cancer imaging. The present generation of PET/CT scanners introduces new hardware, software, and acquisition methods. This review describes these new developments, which include time-of-flight (TOF), point-spread-function (PSF), maximum-a-posteriori (MAP) based reconstruction, smaller voxels, respiratory gating, metal artefact reduction, and administration of quadratic weight-dependent ¹⁸F-fluorodeoxyglucose (FDG) activity. Also, hardware developments such as continuous bed motion (CBM), (digital) solid-state photodetectors and combined PET and magnetic resonance (MR) systems are explained. These novel techniques have a significant impact on cancer imaging, as they result in better image quality, improved small lesion detectability, and more accurate quantification of radiopharmaceutical uptake. This influences cancer diagnosis and staging, as well as therapy response monitoring and radiotherapy planning. Finally, the possible impact of these developments on the European Association of Nuclear Medicine (EANM) guidelines and EANM Research Ltd. (EARL) accreditation for FDG-PET/CT tumor imaging is discussed. (orig.)

  15. Computer-aided detection system for masses in automated whole breast ultrasonography: development and evaluation of the effectiveness

    Kim, Jeoung Hyun [Dept. of Radiology, Ewha Womans University Mokdong Hospital, Ewha Womans University School of Medicine, Seoul (Korea, Republic of); Cha, Joo Hee; Kim, Nam Kug; Chang, Young Jun; Kim, Hak Hee [Dept. of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Ko, Myung Su [Health Screening and Promotion Center, Asan Medical Center, Seoul (Korea, Republic of); Choi, Young Wook [Korea Electrotechnology Research Institute, Ansan (Korea, Republic of)

    2014-04-15

    The aim of this study was to evaluate the performance of a proposed computer-aided detection (CAD) system in automated breast ultrasonography (ABUS). Eighty-nine two-dimensional images (20 cysts, 42 benign lesions, and 27 malignant lesions) were obtained from 47 patients who underwent ABUS (ACUSON S2000). After boundary detection and removal, we detected mass candidates by using the proposed adjusted Otsu's threshold; the threshold was adaptive to the variations of pixel intensities in an image. Then, the detected candidates were segmented. Features of the segmented objects were extracted and used for training/testing in the classification. In our study, a support vector machine classifier was adopted. Eighteen features were used to determine whether the candidates were true lesions or not. A five-fold cross validation was repeated 20 times for the performance evaluation. The sensitivity and the false positive rate per image were calculated, and the classification accuracy was evaluated for each feature. In the classification step, the sensitivity of the proposed CAD system was 82.67% (SD, 0.02%). The false positive rate was 0.26 per image. In the detection/segmentation step, the sensitivities for benign and malignant mass detection were 90.47% (38/42) and 92.59% (25/27), respectively. In the five-fold cross-validation, the standard deviation of pixel intensities for the mass candidates was the most frequently selected feature, followed by the vertical position of the centroids. In the univariate analysis, each feature had 50% or higher accuracy. The proposed CAD system can be used for lesion detection in ABUS and may be useful in improving the screening efficiency.
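
    An illustrative sketch of the candidate-detection step follows; the paper's own adjustment rule is not reproduced here, so the Otsu threshold is simply shifted by a term proportional to the image's intensity spread to mimic adaptive behaviour, and the test image is synthetic.

        # Candidate detection with an (illustratively) adjusted Otsu threshold.
        import numpy as np
        from skimage.filters import threshold_otsu

        def adjusted_otsu_candidates(img, k=0.25):
            t = threshold_otsu(img)
            t_adj = t - k * img.std()    # hypothetical intensity-adaptive shift
            return img < t_adj           # masses appear hypoechoic (dark) in ABUS

        rng = np.random.default_rng(2)
        img = rng.normal(0.6, 0.1, (256, 256))
        img[100:130, 100:140] = rng.normal(0.2, 0.05, (30, 40))  # dark "mass"
        mask = adjusted_otsu_candidates(img)
        print("candidate pixels:", int(mask.sum()))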

  16. Improving EWMA Plans for Detecting Unusual Increases in Poisson Counts

    R. S. Sparks

    2009-01-01

    An adaptive exponentially weighted moving average (EWMA) plan is developed for signalling unusually high incidence when monitoring a time series of nonhomogeneous daily disease counts. A Poisson transitional regression model is used to fit the background/expected trend in counts and provides "one-day-ahead" forecasts of the next day's count. Departures of counts from their forecasts are monitored. The paper outlines an approach for improving early outbreak signals by dynamically adjusting the exponential weights to be efficient at signalling local persistent high-side changes. We emphasise outbreak signals in steady-state situations; that is, changes that occur after the EWMA statistic has run through several in-control counts.
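
    The monitoring recursion can be sketched as follows; this is a generic EWMA on Poisson-standardized forecast residuals with a fixed weight, whereas the paper's plan adjusts the exponential weights dynamically, so treat the parameter values as placeholders.

        # EWMA monitoring of daily counts against model forecasts.
        import numpy as np

        def ewma_monitor(counts, forecasts, lam=0.2, h=3.0):
            z, signals = 0.0, []
            for t, (y, mu) in enumerate(zip(counts, forecasts)):
                resid = (y - mu) / np.sqrt(mu)        # Poisson-standardized residual
                z = lam * resid + (1 - lam) * z       # EWMA recursion
                limit = h * np.sqrt(lam / (2 - lam))  # asymptotic control limit
                if z > limit:
                    signals.append(t)
            return signals

        rng = np.random.default_rng(4)
        mu = 10 + 3 * np.sin(np.arange(120) / 10)     # expected daily counts
        y = rng.poisson(mu).astype(float)
        y[100:] += 8                                  # injected outbreak
        print("signal days:", ewma_monitor(y, mu)[:3])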

  17. Computer-aided pulmonary nodule detection. Performance of two CAD systems at different CT dose levels

    Hein, Patrick Alexander; Rogalla, P.; Klessen, C.; Lembcke, A.; Romano, V.C.

    2009-01-01

    Purpose: To evaluate the impact of dose reduction on the performance of computer-aided lung nodule detection systems (CAD) of two manufacturers by comparing the respective CAD results on ultra-low-dose computed tomography (ULD-CT) and standard-dose CT (SD-CT). Materials and Methods: Multi-slice computed tomography (MSCT) data sets of 26 patients (13 male and 13 female, aged 31-74 years) were retrospectively selected for CAD analysis. The indication for CT examination was staging of a known primary malignancy or suspected pulmonary malignancy. CT images were consecutively acquired at 5 mAs (ULD-CT) and 75 mAs (SD-CT) with 120 kV tube voltage (1 mm slice thickness). The standard of reference was determined by three experienced readers in consensus. CAD reading algorithms (pre-commercial CAD system, Philips, Netherlands: CAD-1; LungCARE, Siemens, Germany: CAD-2) were applied to the CT data sets. Results: Consensus reading identified 253 nodules on SD-CT and ULD-CT. Nodules ranged in diameter between 2 and 41 mm (mean diameter 4.8 mm). Detection rates were 72% and 62% (CAD-1 vs. CAD-2) for SD-CT and 73% and 56% for ULD-CT. Median false-positive rates per patient were 6 and 5 (CAD-1 vs. CAD-2) for SD-CT and 8 and 3 for ULD-CT. After separate statistical analysis of nodules with diameters of 5 mm and greater, the detection rates increased to 83% and 61% for SD-CT and to 89% and 67% for ULD-CT (CAD-1 vs. CAD-2). For both CAD systems there were no significant differences between the detection rates for standard and ultra-low-dose data sets (p>0.05). Conclusion: Dose reduction of the underlying CT scan did not significantly influence the nodule detection performance of the tested CAD systems. (orig.)

  18. Computer Literacy Improvement Needs: Physicians' Self Assessment in the Makkah Region

    Hani Abdulsattar Shaker

    2013-11-01

    Full Text Available Objective: A confidential inquiry by the Directorate General of Health Affairs, Makkah region, Saudi Arabia, found physicians were resistant to entering patient-related information in the electronic medical records system at different hospitals. This study aims to highlight their computer literacy needs. Methods: This cross-sectional survey was conducted on physicians using a structured questionnaire bearing nine questions/stems with dichotomous answers (i.e., yes/no) that was distributed among physicians at six different Ministry of Health hospitals in the Makkah Region, Saudi Arabia, between May and August 2009. The results for future needs in computer skills were categorized as "none" if the rate of answer "yes" to any stem was 0-25%, "little" if 26-50%, "some" if 51-75% and "substantial" if >75% rated "yes". Results: A response rate of 82% of the determined sample size (n = 451) was attained. Computer literacy improvement elements (CLIE), i.e., "word processing software skills (MS Word)", "presentation software skills (PowerPoint)", "internet search skills", "medical database search skills", "spreadsheet software skills (Excel)" and "advanced e-mail management skills", were in "substantial" need of improvement among the majority of settings and categories. All other computer literacy improvement elements were in "some" need of improvement. Conclusion: The overall outcome of this study indicates that physicians need further computer literacy improvements.

  19. Improving the psychometric properties of dot-probe attention measures using response-based computation.

    Evans, Travis C; Britton, Jennifer C

    2018-09-01

    Abnormal threat-related attention in anxiety disorders is most commonly assessed and modified using the dot-probe paradigm; however, poor psychometric properties of reaction-time measures may contribute to inconsistencies across studies. Typically, standard attention measures are derived using average reaction-times obtained in experimentally-defined conditions. However, current approaches based on experimentally-defined conditions are limited. In this study, the psychometric properties of a novel response-based computation approach to analyze dot-probe data are compared to standard measures of attention. 148 adults (19.19 ± 1.42 years, 84 women) completed a standardized dot-probe task including threatening and neutral faces. We generated both standard and response-based measures of attention bias, attentional orientation, and attentional disengagement. We compared overall internal consistency, number of trials necessary to reach internal consistency, test-retest reliability (n = 72), and criterion validity obtained using each approach. Compared to standard attention measures, response-based measures demonstrated uniformly high levels of internal consistency with relatively few trials and varying improvements in test-retest reliability. Additionally, response-based measures demonstrated specific evidence of anxiety-related associations above and beyond both standard attention measures and other confounds. Future studies are necessary to validate this approach in clinical samples. Response-based attention measures demonstrate superior psychometric properties compared to standard attention measures, which may improve the detection of anxiety-related associations and treatment-related changes in clinical samples. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Improving the Accuracy of Planet Occurrence Rates from Kepler Using Approximate Bayesian Computation

    Hsu, Danley C.; Ford, Eric B.; Ragozzine, Darin; Morehead, Robert C.

    2018-05-01

    We present a new framework to characterize the occurrence rates of planet candidates identified by Kepler based on hierarchical Bayesian modeling, approximate Bayesian computing (ABC), and sequential importance sampling. For this study, we adopt a simple 2D grid in planet radius and orbital period as our model and apply our algorithm to estimate occurrence rates for Q1–Q16 planet candidates orbiting solar-type stars. We arrive at significantly increased planet occurrence rates for small planet candidates at long orbital periods (P > 80 days) compared to the rates estimated by the more common inverse detection efficiency method (IDEM). Our improved methodology estimates that the occurrence rate density of small planet candidates in the habitable zone of solar-type stars is 1.6 (+1.2/-0.5) per factor of 2 in planet radius and orbital period. Additionally, we observe a local minimum in the occurrence rate of planet candidates, marginalized over orbital period, between 1.5 and 2 R⊕ that is consistent with previous studies. For future improvements, the forward modeling approach of ABC is ideally suited to incorporating multiple populations, such as planets, astrophysical false positives, and pipeline false alarms, to provide accurate planet occurrence rates and uncertainties. Furthermore, ABC provides a practical statistical framework for answering complex questions (e.g., frequency of different planetary architectures) and providing sound uncertainties, even in the face of complex selection effects, observational biases, and follow-up strategies. In summary, ABC offers a powerful tool for accurately characterizing a wide variety of astrophysical populations.
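
    A toy sketch of the ABC idea for a single radius-period cell: draw a rate from the prior, forward-simulate a catalog, and keep draws whose simulated candidate count matches the observed one within a tolerance. The survey size, detection efficiency, tolerance, and uniform prior are invented, and the paper's hierarchical model with sequential importance sampling is far richer than this rejection sampler.

      import numpy as np

      rng = np.random.default_rng(1)
      n_stars, eff, n_obs = 10_000, 0.3, 45   # hypothetical survey and cell

      def simulate(rate):
          """Forward model: each star hosts a planet in the cell with
          probability `rate`; each planet is detected with probability eff."""
          n_planets = rng.binomial(n_stars, rate)
          return rng.binomial(n_planets, eff)

      prior = rng.uniform(0.0, 0.05, size=100_000)   # occurrence-rate prior
      posterior = np.array([r for r in prior
                            if abs(simulate(r) - n_obs) <= 2])
      print(posterior.mean(), np.percentile(posterior, [16, 84]))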

  1. Towards Improved Airborne Fire Detection Systems Using Beetle Inspired Infrared Detection and Fire Searching Strategies

    Herbert Bousack

    2015-06-01

    Full Text Available Every year forest fires cause severe financial losses in many countries of the world. Additionally, lives of humans as well as of countless animals are often lost. Due to global warming, the problem of wildfires is getting out of control, and the burning of thousands of hectares is increasing. Most important, therefore, is the early detection of an emerging fire before its intensity becomes too high. More than ever, there is a need for early warning systems capable of detecting small fires from as large a distance as possible. A look at nature shows that pyrophilous “fire beetles” of the genus Melanophila can be regarded as natural airborne fire detection systems because their larvae can only develop in the wood of fire-killed trees. There is evidence that Melanophila beetles can detect large fires from distances of more than 100 km by visual and infrared cues. In a biomimetic approach, a concept has been developed to use the surveying strategy of the “fire beetles” for the reliable detection of a smoke plume of a fire from large distances by means of a basal infrared emission zone. Future infrared sensors necessary for this ability are also inspired by the natural infrared receptors of Melanophila beetles.

  2. ATLAS Distributed Computing Operations: Experience and improvements after 2 full years of data-taking

    Jézéquel, S; Stewart, G

    2012-01-01

    This paper summarizes operational experience and improvements in ATLAS computing infrastructure in 2010 and 2011. ATLAS has had 2 periods of data taking, with many more events recorded in 2011 than in 2010. It ran 3 major reprocessing campaigns. The activity in 2011 was similar to 2010, but scalability issues had to be addressed due to the increase in luminosity and trigger rate. Based on improved monitoring of ATLAS Grid computing, the evolution of computing activities (data/group production, their distribution and grid analysis) over time is presented. The main changes in the implementation of the computing model that will be shown are: the optimization of data distribution over the Grid, according to effective transfer rate and site readiness for analysis; the progressive dismantling of the cloud model, for data distribution and data processing; software installation migration to cvmfs; changing database access to a Frontier/squid infrastructure.

  3. Communicative interactions improve visual detection of biological motion.

    Valeria Manera

    Full Text Available BACKGROUND: In the context of interacting activities requiring close-body contact, such as fighting or dancing, the actions of one agent can be used to predict the actions of the second agent. In the present study, we investigated whether interpersonal predictive coding extends to interactive activities--such as communicative interactions--in which no physical contingency is implied between the movements of the interacting individuals. METHODOLOGY/PRINCIPAL FINDINGS: Participants observed point-light displays of two agents (A and B) performing separate actions. In the communicative condition, the action performed by agent B responded to a communicative gesture performed by agent A. In the individual condition, agent A's communicative action was substituted with a non-communicative action. Using a simultaneous masking detection task, we demonstrate that observing the communicative gesture performed by agent A enhanced visual discrimination of agent B. CONCLUSIONS/SIGNIFICANCE: Our finding complements and extends previous evidence for interpersonal predictive coding, suggesting that the communicative gestures of one agent can serve as a predictor for the expected actions of the respondent, even if no physical contact between agents is implied.

  4. CT colonography: computer-aided detection of morphologically flat T1 colonic carcinoma

    Taylor, Stuart A.; Iinuma, Gen; Saito, Yutaka; Zhang, Jie; Halligan, Steve

    2008-01-01

    The purpose was to evaluate the ability of computer-aided detection (CAD) software to detect morphologically flat early colonic carcinoma using CT colonography (CTC). Twenty-four stage T1 colonic carcinomas endoscopically classified as flat (width over twice height) were accrued from patients undergoing staging CTC. Tumor location was annotated by three experienced radiologists in consensus, aided by the endoscopic report. CAD software was then applied at three settings of sphericity (0, 0.75, and 1). Computer prompts were categorized as either true positive (overlapping the tumour boundary) or false positive. True positives were subclassified as focal or non-focal. The 24 cancers were endoscopically classified as type IIa (n=11) and type IIa+IIc (n=13). Mean size (range) was 27 mm (7-70 mm). CAD detected 20 (83.3%), 17 (70.8%), and 13 (54.1%) of the 24 cancers at filter settings of 0, 0.75, and 1, respectively, with 3, 4, and 8 missed cancers of type IIa, respectively. The mean total number of false-positive CAD marks per patient at each filter setting was 36.5, 21.1, and 9.5, respectively, excluding polyps. At all settings, >96.1% of CAD true positives were classified as focal. CAD may be effective for the detection of morphologically flat cancer, although minimally raised laterally spreading tumors remain problematic. (orig.)

  5. Improvements in diagnostic tools for early detection of psoriatic arthritis.

    D'Angelo, Salvatore; Palazzi, Carlo; Gilio, Michele; Leccese, Pietro; Padula, Angela; Olivieri, Ignazio

    2016-11-01

    Psoriatic arthritis (PsA) is a heterogeneous chronic inflammatory disease characterized by a wide clinical spectrum. The early diagnosis of PsA is currently a challenging topic. Areas covered: The literature was extensively reviewed for studies addressing the topic area "diagnosis of psoriatic arthritis". This review will summarize improvements in diagnostic tools, especially referral to the rheumatologist, the role of patient history and clinical examination, laboratory tests, and imaging techniques in getting an early and correct diagnosis of PsA. Expert commentary: Due to the heterogeneity of its expression, PsA may be easily either overdiagnosed or underdiagnosed. A diagnosis of PsA should be taken into account every time a patient with psoriasis or a family history of psoriasis shows peripheral arthritis, especially if oligoarticular or involving the distal interphalangeal joints, enthesitis or dactylitis. Magnetic resonance imaging and ultrasonography are useful for diagnosing PsA early, particularly when isolated enthesitis or inflammatory spinal pain occur.

  6. A comparative study of computed tomographic techniques for the detection of emphysema in middle-aged and older patient populations

    Tanino, Michie; Nishimura, Masaharu; Betsuyaku, Tomoko; Takeyabu, Kimihiro; Tanino, Yoshinori; Kawakami, Yoshikazu; Miyamoto, Kenji

    2000-01-01

    Helical-scan computed tomography (CT) is now widely utilized as a mass screening procedure for lung cancer. By adding 3 slices of high-resolution CT (HRCT) to the standard screening procedure, we were able to compare the efficacy of helical-scan CT and HRCT in detecting pulmonary emphysema. Additionally, the prevalence of emphysema detected by HRCT was examined as a function of patient age and smoking history. The subjects (106 men and 28 women) were all community-based middle-aged and older volunteers who participated in a mass lung cancer screening program. Based on visual assessments of the CT films, emphysema was detected in 29 subjects (22%) by HRCT, but in only 4 (3%) by helical-scan CT. Although the prevalence of emphysema was higher among subjects with a higher smoking index, no correlations with age were observed. We concluded that the efficacy of helical scan CT in detecting pulmonary emphysema can be significantly improved with the inclusion of 3 slices of HRCT, and confirmed that cigarette smoking is linked to the development of pulmonary emphysema. (author)

  7. Detection Performance of Packet Arrival under Downclocking for Mobile Edge Computing

    Zhimin Wang

    2018-01-01

    Full Text Available Mobile edge computing (MEC) enables battery-powered mobile nodes to acquire information technology services at the network edge. These nodes desire to enjoy their service under power saving. The sampling rate invariant detection (SRID) is the first downclocking WiFi technique that can achieve this objective. With SRID, a node detects one packet arrival at a downclocked rate. Upon a successful detection, the node reverts to a full-clocked rate to receive the packet immediately. To ensure that a node acquires its service immediately, the detection performance (namely, the miss-detection probability and the false-alarm probability) of SRID is of importance. This paper is the first to theoretically study the crucial impact of SRID attributes (e.g., tolerance threshold, correlation threshold, and energy ratio threshold) on the packet detection performance. Extensive Monte Carlo experiments show that our theoretical model is very accurate. This study can help system developers set reasonable system parameters for WiFi downclocking.
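
    The SRID algorithm itself is not spelled out in the abstract; the sketch below only illustrates the generic trade-off it analyzes, estimating miss-detection and false-alarm probabilities for a simple normalized-correlation detector by Monte Carlo. The preamble, SNR, and threshold are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      preamble = rng.choice([-1.0, 1.0], size=64)   # known training sequence

      def detect(samples, threshold):
          """Declare a packet when the normalized correlation of the
          received samples with the preamble exceeds the threshold."""
          c = samples @ preamble
          c /= np.linalg.norm(samples) * np.linalg.norm(preamble)
          return c > threshold

      def monte_carlo(snr_db, threshold, trials=20_000):
          sigma = 10.0 ** (-snr_db / 20.0)
          miss = false_alarm = 0
          for _ in range(trials):
              if not detect(preamble + rng.normal(0, sigma, preamble.size),
                            threshold):
                  miss += 1          # packet present but not detected
              if detect(rng.normal(0, sigma, preamble.size), threshold):
                  false_alarm += 1   # noise only, spurious detection
          return miss / trials, false_alarm / trials

      print(monte_carlo(snr_db=0.0, threshold=0.5))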

  8. Dual-energy bone removal computed tomography (BRCT): preliminary report of efficacy of acute intracranial hemorrhage detection.

    Naruto, Norihito; Tannai, Hidenori; Nishikawa, Kazuma; Yamagishi, Kentaro; Hashimoto, Masahiko; Kawabe, Hideto; Kamisaki, Yuichi; Sumiya, Hisashi; Kuroda, Satoshi; Noguchi, Kyo

    2018-02-01

    One of the major applications of dual-energy computed tomography (DECT) is automated bone removal (BR). We hypothesized that the visualization of acute intracranial hemorrhage could be improved on BRCT by removing bone as it has the highest density tissue in the head. This preliminary study evaluated the efficacy of a DE BR algorithm for the head CT of trauma patients. Sixteen patients with acute intracranial hemorrhage within 1 day after head trauma were enrolled in this study. All CT examinations were performed on a dual-source dual-energy CT scanner. BRCT images were generated using the Bone Removal Application. Simulated standard CT and BRCT images were visually reviewed in terms of detectability (presence or absence) of acute hemorrhagic lesions. DECT depicted 28 epidural/subdural hemorrhages, 17 contusional hemorrhages, and 7 subarachnoid hemorrhages. In detecting epidural/subdural hemorrhage, BRCT [28/28 (100%)] was significantly superior to simulated standard CT [17/28 (61%)] (p = .001). In detecting contusional hemorrhage, BRCT [17/17 (100%)] was also significantly superior to simulated standard CT [11/17 (65%)] (p = .0092). BRCT was superior to simulated standard CT in detecting acute intracranial hemorrhage. BRCT could improve the detection of small intracranial hemorrhages, particularly those adjacent to bone, by removing bone that can interfere with the visualization of small acute hemorrhage. In an emergency such as head trauma, BRCT can be used as support imaging in combination with simulated standard CT and bone scale CT, although BRCT cannot replace a simulated standard CT.

  9. Evidence-based investigation of the influence of computer-aided detection of polyps on screening of colon cancer with CT colonography

    Yoshida, Hiroyuki

    2008-01-01

    Computed tomographic colonography (CTC), also known as virtual colonoscopy, is a CT examination of the colon for colorectal neoplasms. Recent large-scale clinical trials have demonstrated that CTC yields sensitivity comparable to optical colonoscopy in the detection of clinically significant polyps in a screening population, making CTC a promising technique for screening of colon cancer. For CTC to be a clinically practical means of screening, it must reliably and consistently detect polyps with high accuracy. However, high-level expertise is required to interpret the resulting CT images to find polyps, resulting in variable diagnostic accuracy among radiologists in the detection of polyps. A key technology to overcome this problem and to bring CTC to prime time for screening of colorectal cancer is computer-aided detection (CAD) of polyps. CAD automatically detects the locations of suspicious polyps in CTC images and presents them to radiologists. CAD has the potential to increase diagnostic performance in the detection of polyps as well as to reduce variability of the diagnostic accuracy among radiologists. This paper presents an evidence-based investigation of the influence of CAD on screening of colon cancer with CTC by describing the benefits of using CAD in the diagnosis of CTC, the fundamental CAD scheme for the detection of polyps in CTC, its detection performance, the effect on the improvement of detection performance, as well as the current and future challenges in CAD. (author)

  10. Computer-aided Detection Fidelity of Pulmonary Nodules in Chest Radiograph

    Nikolaos Dellios

    2017-01-01

    Full Text Available Aim: The most ubiquitous chest diagnostic method is the chest radiograph. A common radiographic finding, quite often incidental, is the nodular pulmonary lesion. The detection of small lesions within a complex parenchymal structure is a daily clinical challenge. In this study, we investigate the efficacy of the computer-aided detection (CAD) software package SoftView™ 2.4A for bone suppression and OnGuard™ 5.2 (Riverain Technologies, Miamisburg, OH, USA) for automated detection of pulmonary nodules in chest radiographs. Subjects and Methods: We retrospectively evaluated a dataset of 100 posteroanterior chest radiographs with pulmonary nodular lesions ranging from 5 to 85 mm. All nodules were confirmed with a subsequent computed tomography scan and histologically classified; 75% were malignant. The number of lesions detected by observation in unprocessed images was compared to the number and dignity of CAD-detected lesions in bone-suppressed images (BSIs). Results: SoftView™ BSI does not affect the objective lesion-to-background contrast. OnGuard™ has a stand-alone sensitivity of 62% and specificity of 58% for nodular lesion detection in chest radiographs. The false-positive rate is 0.88/image and the false-negative (FN) rate is 0.35/image. Of the true-positive lesions, 20% were proven benign and 80% were malignant. FN lesions were 47% benign and 53% malignant. Conclusion: We conclude that CAD does not qualify as a stand-alone standard of diagnosis. The use of CAD accompanied by a critical radiological assessment of the software-suggested pattern appears more realistic. Accordingly, it is essential to focus on studies assessing the quality-time-cost profile of real-time (as opposed to retrospective) CAD implementation in clinical diagnostics.

  11. Is Neural Activity Detected by ERP-Based Brain-Computer Interfaces Task Specific?

    Markus A Wenzel

    Full Text Available Brain-computer interfaces (BCIs) that are based on event-related potentials (ERPs) can estimate to which stimulus a user pays particular attention. In typical BCIs, the user silently counts the selected stimulus (which is repeatedly presented among other stimuli) in order to focus the attention. The stimulus of interest is then inferred from the electroencephalogram (EEG). Detecting attention allocation implicitly could also be beneficial for human-computer interaction (HCI), because it would allow software to adapt to the user's interest. However, a counting task would be inappropriate for the envisaged implicit application in HCI. Therefore, the question was addressed whether the detectable neural activity is specific to silent counting, or whether it can be evoked also by other tasks that direct the attention to certain stimuli. Thirteen people performed a silent counting, an arithmetic and a memory task. The tasks required the subjects to pay particular attention to target stimuli of a random color. The stimulus presentation was the same in all three tasks, which allowed a direct comparison of the experimental conditions. Classifiers that were trained to detect the targets in one task, according to patterns present in the EEG signal, could detect targets in all other tasks (irrespective of some task-related differences in the EEG). The neural activity detected by the classifiers is not strictly task specific but can be generalized over tasks and is presumably a result of the attention allocation or of the augmented workload. The results may hold promise for the transfer of classification algorithms from BCI research to implicit relevance detection in HCI.

  12. Improved pulsed photoacoustic detection by means of an adapted filter

    González, M.; Santiago, G.; Peuriot, A.; Slezak, V.; Mosquera, C.

    2005-06-01

    We present a numerical and experimental study of two adapted filters devised for the quantitative analysis of weak photoacoustic signals. The first is a simple convolution-type filter and the other is based on neural networks of the multilayer perceptron type. The theoretical signal used as one of the inputs to both filters is derived from the solution of the transient response of the acoustic cell, modeled with a simple transmission-line analogue. The filters were tested numerically by using the theoretical signal corrupted with white noise. After 500 iterations it was possible to define an average error for the returned value of each filter. Since the neural network outperformed the convolution-type filter, we assessed its performance by measuring SF6 traces diluted in N2 and excited by a tuned TEA CO2 laser. The results show that the use of the neural network filter allows recovery of a signal with a poor signal-to-noise ratio without resorting to extensive averaging, thus reducing the acquisition time while improving the precision of the measurement.
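
    A minimal sketch of the convolution-type adapted filter: correlate the measured trace with the theoretical cell response and read the peak position and height. The damped-sine template, amplitudes, and noise level are toy stand-ins for the transmission-line-model solution.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy stand-in for the theoretical transient response of the cell
      t = np.arange(512)
      template = np.exp(-t / 150.0) * np.sin(2 * np.pi * t / 64.0)
      template /= np.linalg.norm(template)       # unit-energy template

      # Synthetic photoacoustic trace: weak response buried in white noise
      trace = rng.normal(0.0, 0.1, 4096)
      trace[1000:1512] += 0.5 * template

      # Adapted (matched) filtering = correlation with the template;
      # the peak recovers arrival time and amplitude despite poor SNR
      out = np.correlate(trace, template, mode="valid")
      print(out.argmax(), out.max())   # ~1000 and ~0.5, plus noise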

  13. Improvement of the computing speed of the FBR fuel pin bundle deformation analysis code 'BAMBOO'

    Ito, Masahiro; Uwaba, Tomoyuki

    2005-04-01

    JNC has developed a coupled analysis system of a fuel pin bundle deformation analysis code, 'BAMBOO', and a thermal hydraulics analysis code, 'ASFRE-IV', for the purpose of evaluating the integrity of a subassembly under the BDI condition. This coupled analysis took much computation time because it needs convergent calculations to obtain numerically stationary solutions for thermal and mechanical behaviors. We improved the computation time of the BAMBOO code analysis to make the coupled analysis practicable. BAMBOO is a FEM code, and as such its matrix calculations consume a large memory area to temporarily store intermediate results in the solution of simultaneous linear equations. The code used the hard disk drive (HDD) as virtual memory to save the random access memory (RAM) of the computer. However, the use of the HDD increased the computation time because input/output (I/O) processing with the HDD took much time in data accesses. We improved the code so that it conducts I/O processing only with the RAM in matrix calculations and runs on high-performance computers. This improvement considerably increased the CPU occupation rate during the simulation and reduced the total simulation time of the BAMBOO code to about one-seventh of that before the improvement. (author)

  14. Deep learning of contrast-coated serrated polyps for computer-aided detection in CT colonography

    Näppi, Janne J.; Pickhardt, Perry; Kim, David H.; Hironaka, Toru; Yoshida, Hiroyuki

    2017-03-01

    Serrated polyps were previously believed to be benign lesions with no cancer potential. However, recent studies have revealed a novel molecular pathway by which serrated polyps, too, can develop into colorectal cancer. CT colonography (CTC) can detect serrated polyps using the radiomic biomarker of contrast coating, but this requires expertise from the reader, and current computer-aided detection (CADe) systems have not been designed to detect the contrast coating. The purpose of this study was to develop a novel CADe method that makes use of deep learning to detect serrated polyps based on their contrast-coating biomarker in CTC. In the method, volumetric shape-based features are used to detect polyp sites over the soft-tissue and fecal-tagging surfaces of the colon. The detected sites are imaged using multi-angular 2D image patches. A deep convolutional neural network (DCNN) is used to review the image patches for the presence of polyps. The DCNN-based polyp-likelihood estimates are merged into an aggregate likelihood index, where the highest values indicate the presence of a polyp. For pilot evaluation, the proposed DCNN-CADe method was evaluated with a 10-fold cross-validation scheme using 101 colonoscopy-confirmed cases with 144 biopsy-confirmed serrated polyps from a CTC screening program, where the patients had been prepared for CTC with saline laxative and fecal tagging by barium and iodine-based diatrizoate. The average per-polyp sensitivity for serrated polyps >=6 mm in size was 93 +/- 7% at 0.8 +/- 1.8 false positives per patient on average. The detection accuracy was substantially higher than that of a conventional CADe system. Our results indicate that serrated polyps can be detected automatically at high accuracy in CTC.
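
    The abstract does not state the exact merging rule behind the aggregate likelihood index; the sketch below assumes, purely for illustration, a mean over the top-k multi-angular patch scores produced by the DCNN.

      import numpy as np

      def aggregate_likelihood(patch_probs, top_k=3):
          """Merge per-patch polyp likelihoods for one detected site into
          a single index (top-k mean is an assumed rule, not the paper's)."""
          p = np.sort(np.asarray(patch_probs, dtype=float))[::-1]
          return float(p[:top_k].mean())

      # One candidate site imaged from several view angles:
      site_scores = [0.91, 0.84, 0.77, 0.40, 0.12]  # hypothetical DCNN outputs
      print(aggregate_likelihood(site_scores))      # high value => likely polyp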

  15. Improved Instrument for Detecting Water and Ice in Soil

    Buehler, Martin; Chin, Keith; Keymeulen, Didler; McCann, Timothy; Seshadri, Suesh; Anderson, Robert

    2009-01-01

    An instrument measures electrical properties of relatively dry soils to determine their liquid water and/or ice contents. Designed as a prototype of instruments for measuring the liquid-water and ice contents of lunar and planetary soils, the apparatus could also be utilized for similar purposes in research and agriculture involving terrestrial desert soils and sands, and perhaps for measuring ice buildup on aircraft surfaces. This instrument is an improved version of the apparatus described in "Measuring Low Concentrations of Liquid Water and Ice in Soil" (NPO-41822), NASA Tech Briefs, Vol. 33, No. 2 (February 2009), page 22. The designs of both versions are based on the fact that the electrical behavior of a typical soil sample is well approximated by a network of resistors and capacitors in which resistances decrease and capacitances increase (and the magnitude and phase angle of impedance changes accordingly) with increasing water content. The previous version included an impedance spectrometer and a jar into which a sample of soil was placed. Four stainless-steel screws at the bottom of the jar were used as electrodes of a four-point impedance probe connected to the spectrometer. The present instrument does not include a sample jar and can be operated without acquiring or handling samples. Its impedance probe consists of a compact assembly of electrodes housed near the tip of a cylinder. The electrodes protrude slightly from the cylinder (see Figure 1). In preparation for measurements, the cylinder is simply pushed into the ground to bring the soil into contact with the electrodes.
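
    The resistor-capacitor picture reduces, in its simplest form, to the impedance of one parallel RC element, Z = R / (1 + j*omega*R*C); wetter soil means lower R and higher C, shifting both magnitude and phase. The component values below are illustrative only.

      import numpy as np

      def soil_impedance(freq_hz, R, C):
          """Impedance (magnitude, phase in degrees) of a parallel
          resistor-capacitor element of the soil-equivalent network."""
          omega = 2.0 * np.pi * freq_hz
          Z = R / (1.0 + 1j * omega * R * C)
          return np.abs(Z), np.degrees(np.angle(Z))

      # Dry vs. wet soil (values invented for the example)
      for label, R, C in [("dry", 1e6, 50e-12), ("wet", 1e4, 5e-9)]:
          mag, phase = soil_impedance(1e3, R, C)
          print(f"{label}: |Z| = {mag:.3g} ohm, phase = {phase:.1f} deg")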

  16. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    Katz, Jonathan E

    2017-01-01

    Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstalling is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.

  17. Statistical control chart and neural network classification for improving human fall detection

    Harrou, Fouzi; Zerrouki, Nabil; Sun, Ying; Houacine, Amrane

    2017-01-01

    This paper proposes a statistical approach to detect and classify human falls based on both visual data from a camera and accelerometric data captured by an accelerometer. Specifically, we first use a Shewhart control chart to detect the presence of potential falls by using the accelerometric data. Unfortunately, this chart cannot distinguish real falls from fall-like actions, such as lying down. To bypass this difficulty, a neural network classifier is then applied, only on the detected cases, through visual data. To assess the performance of the proposed method, experiments are conducted on a publicly available fall detection database: the University of Rzeszow's fall detection (URFD) dataset. Results demonstrate that the detection phase plays a key role in reducing the number of sequences used as input to the neural network classifier, significantly reducing the computational burden and achieving better accuracy.

  18. Statistical control chart and neural network classification for improving human fall detection

    Harrou, Fouzi

    2017-01-05

    This paper proposes a statistical approach to detect and classify human falls based on both visual data from a camera and accelerometric data captured by an accelerometer. Specifically, we first use a Shewhart control chart to detect the presence of potential falls by using the accelerometric data. Unfortunately, this chart cannot distinguish real falls from fall-like actions, such as lying down. To bypass this difficulty, a neural network classifier is then applied, only on the detected cases, through visual data. To assess the performance of the proposed method, experiments are conducted on a publicly available fall detection database: the University of Rzeszow's fall detection (URFD) dataset. Results demonstrate that the detection phase plays a key role in reducing the number of sequences used as input to the neural network classifier, significantly reducing the computational burden and achieving better accuracy.
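
    A minimal sketch of the first (Shewhart) stage, assuming a mu +/- L*sigma band estimated from fall-free accelerometer data; the neural-network second stage on the video data is only indicated in a comment, and all signal values are synthetic.

      import numpy as np

      def shewhart_flags(stream, baseline, L=3.0):
          """Stage 1: flag samples whose acceleration magnitude leaves the
          mu +/- L*sigma band estimated from fall-free baseline data."""
          mu, sigma = baseline.mean(), baseline.std()
          return np.abs(stream - mu) > L * sigma

      rng = np.random.default_rng(7)
      baseline = rng.normal(9.81, 0.4, 5000)            # walking, in m/s^2
      stream = rng.normal(9.81, 0.4, 1000)
      stream[600:605] = [25.0, 2.0, 30.0, 4.0, 18.0]    # fall-like impact

      candidates = np.flatnonzero(shewhart_flags(stream, baseline))
      # Stage 2 (not shown): pass the video frames at `candidates` to a
      # neural-network classifier to separate real falls from lying down.
      print(candidates)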

  19. Computer-aided detection of pulmonary embolism: Influence on radiologists' detection performance with respect to vessel segments

    Das, Marco; Muehlenbruch, Georg; Helm, Anita; Guenther, Rolf W.; Wildberger, Joachim E.; Bakai, Annemarie; Salganicoff, Marcos; Liang, Jianming; Wolf, Matthias; Stanzel, Sven

    2008-01-01

    The purpose was to assess the sensitivity of a CAD software prototype for the detection of pulmonary embolism in MDCT chest examinations with regard to vessel level and to assess the influence on radiologists' detection performance. Forty-three patients with suspected PE were included in this retrospective study. MDCT chest examinations with a standard PE protocol were acquired at a 16-slice MDCT. All patient data were read by three radiologists (R1, R2, R3), and all thrombi were marked. A CAD prototype software was applied to all datasets, and each finding of the software was analyzed with regard to vessel level. The standard of reference was assessed in a consensus read. Sensitivity for the radiologists and CAD software was assessed. Thirty-three patients were positive for PE, with a total of 215 thrombi. The mean overall sensitivity for the CAD software alone was 83% (specificity, 80%). Radiologist sensitivity was 77% = R3, 82% = R2, and R1 = 87%. With the aid of the CAD software, sensitivities increased to 98% (R1), 93% (R2), and 92% (R3) (p<0.0001). CAD performance at the lobar level was 87%, at the segmental 90% and at the subsegmental 77%. With the use of CAD for PE, the detection performance of radiologists can be improved. (orig.)

  20. Pulmonary Emphysema in Cystic Fibrosis Detected by Densitometry on Chest Multidetector Computed Tomography

    Wielpütz, Mark O.; Weinheimer, Oliver; Eichinger, Monika; Wiebel, Matthias; Biederer, Jürgen; Kauczor, Hans-Ulrich; Heußel, Claus P.

    2013-01-01

    Background Histopathological studies on lung specimens from patients with cystic fibrosis (CF) and recent results from a mouse model indicate that emphysema may contribute to CF lung disease. However, little is known about the relevance of emphysema in patients with CF. In the present study, we used computationally generated density masks based on multidetector computed tomography (MDCT) of the chest for non-invasive characterization and quantification of emphysema in CF. Methods Volumetric MDCT scans were acquired in parallel to pulmonary function testing in 41 patients with CF (median age 20.1 years; range 7-66 years) and 21 non-CF controls (median age 30.4 years; range 4-68 years), and subjected to dedicated software. The lung was segmented, low attenuation volumes below a threshold of -950 Hounsfield units were assigned to emphysema volume (EV), and the emphysema index (EI) was computed. Results were correlated with forced expiratory volume in 1 s percent predicted (FEV1%), residual volume (RV), and RV/total lung capacity (RV/TLC). Results We show that EV was significantly increased in CF (457±530 ml) compared to non-CF controls (78±90 ml). Emphysema in CF was detected from early adolescence (~13 years) and increased with age (rs=0.67). Conclusions Emphysema detected by densitometry on chest MDCT is a characteristic pathology that contributes to airflow limitation and may serve as a novel endpoint for monitoring lung disease in CF. PMID:23991177
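
    The density-mask computation itself reduces to counting segmented lung voxels below the -950 HU threshold; a sketch with a synthetic volume (the toy HU values and "emphysematous" region are invented):

      import numpy as np

      def emphysema_index(hu_volume, lung_mask, threshold=-950):
          """Density mask: percentage of segmented lung voxels whose
          attenuation falls below the threshold (Hounsfield units)."""
          lung = hu_volume[lung_mask]
          return 100.0 * np.count_nonzero(lung < threshold) / lung.size

      rng = np.random.default_rng(0)
      vol = rng.normal(-850, 30, (64, 64, 64))                 # normal lung
      vol[:16, :16, :16] = rng.normal(-980, 15, (16, 16, 16))  # emphysema
      mask = np.ones(vol.shape, dtype=bool)                    # toy segmentation
      print(f"EI = {emphysema_index(vol, mask):.1f}%")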

  1. The use of computers for the registration, presentation and analysis of operation data and for the improvement of internal communication

    Lamarre, J.C.; Depigny-Huet, C.; Ghertman, F.

    1988-01-01

    Internal communication is discussed. It is shown that its improvement leads to a quality improvement throughout the whole company. Improved communication is possible through the use of computers; the organization of security rounds is given as an example.

  2. Computer-aided detection in CT colonography: initial clinical experience using a prototype system

    Graser, A.; Geisbuesch, S.; Reiser, M.F.; Becker, C.R.; Kolligs, F.T.; Schaefer, C.; Mang, T.

    2007-01-01

    Computer-aided detection (CAD) algorithms help to detect colonic polyps at CT colonography (CTC). The purpose of this study was to evaluate the accuracy of CAD versus an expert reader in CTC. One hundred forty individuals (67 men, 73 women; mean age, 59 years) underwent screening 64-MDCT colonography after full cathartic bowel cleansing without fecal tagging. One expert reader interpreted supine and prone scans using a 3D workstation with integrated CAD used as "second reader". The system's sensitivity for the detection of polyps, the number of false-positive findings, and its running time were evaluated. Polyps were classified as small (≤5 mm), medium (6-9 mm), and large (≥10 mm). A total of 118 polyps (small, 85; medium, 19; large, 14) were found in 56 patients. CAD detected 72 polyps (61%) with an average of 2.2 false positives. Sensitivity was 51% (43/85) for small, 90% (17/19) for medium, and 86% (12/14) for large polyps. For all polyps, per-patient sensitivity was 89% (50/56) for the radiologist and 73% (41/56) for CAD. For large and medium polyps, per-patient sensitivity was 100% for the radiologist, and 96% for CAD. In conclusion, CAD shows high sensitivity in the detection of clinically significant polyps with acceptable false-positive rates. (orig.)

  3. Toward the automatic detection of coronary artery calcification in non-contrast computed tomography data.

    Brunner, Gerd; Chittajallu, Deepak R; Kurkure, Uday; Kakadiaris, Ioannis A

    2010-10-01

    Measurements related to coronary artery calcification (CAC) offer significant predictive value for coronary artery disease (CAD). In current medical practice, CAC scoring is a labor-intensive task. The objective of this paper is the development and evaluation of a family of coronary artery region (CAR) models applied to the detection of CACs in coronary artery zones and sections. Thirty patients underwent non-contrast electron-beam computed tomography scanning. Coronary artery trajectory points as presented in the University of Houston heart-centered coordinate system were utilized to construct the CAR models, which automatically detect coronary artery zones and sections. On a per-patient and per-zone basis, the proposed CAR models detected CACs with a sensitivity, specificity and accuracy of 85.56 (± 15.80)%, 93.54 (± 1.98)%, and 85.27 (± 14.67)%, respectively, while the corresponding values in the zone- and section-based case were 77.94 (± 7.78)%, 96.57 (± 4.90)%, and 73.58 (± 8.96)%, respectively. The results of this study suggest that the family of CAR models provides an effective method to detect different regions of the coronaries. Further, the CAR classifiers are able to detect CACs with a mean sensitivity and specificity of 86.33 and 93.78%, respectively.

  4. Automated detection of heuristics and biases among pathologists in a computer-based system.

    Crowley, Rebecca S; Legowski, Elizabeth; Medvedeva, Olga; Reitmeyer, Kayse; Tseytlin, Eugene; Castine, Melissa; Jukic, Drazen; Mello-Thoms, Claudia

    2013-08-01

    The purpose of this study is threefold: (1) to develop an automated, computer-based method to detect heuristics and biases as pathologists examine virtual slide cases, (2) to measure the frequency and distribution of heuristics and errors across three levels of training, and (3) to examine relationships of heuristics to biases, and biases to diagnostic errors. The authors conducted the study using a computer-based system to view and diagnose virtual slide cases. The software recorded participant responses throughout the diagnostic process, and automatically classified participant actions based on definitions of eight common heuristics and/or biases. The authors measured frequency of heuristic use and bias across three levels of training. Biases studied were detected at varying frequencies, with availability and search satisficing observed most frequently. There were few significant differences by level of training. For representativeness and anchoring, the heuristic was used appropriately as often or more often than it was used in biased judgment. Approximately half of the diagnostic errors were associated with one or more biases. We conclude that heuristic use and biases were observed among physicians at all levels of training using the virtual slide system, although their frequencies varied. The system can be employed to detect heuristic use and to test methods for decreasing diagnostic errors resulting from cognitive biases.

  5. Detection of Steganography-Producing Software Artifacts on Crime-Related Seized Computers

    Asawaree Kulkarni

    2009-06-01

    Full Text Available Steganography is the art and science of hiding information within information so that an observer does not know that communication is taking place. Bad actors passing information using steganography are of concern to the national security establishment and law enforcement. An attempt was made to determine if steganography was being used by criminals to communicate information. Web crawling technology was used and images were downloaded from Web sites that were considered likely candidates for containing information hidden using steganographic techniques. A detection tool was used to analyze these images. The research failed to demonstrate that steganography was prevalent on the public Internet. The probable reasons included the growth and availability of a large number of steganography-producing tools and the limited capacity of the detection tools to cope with them. Thus, a redirection was introduced in the methodology and the detection focus was shifted from the analysis of the ‘product’ of the steganography-producing software, viz. the images, to the ‘artifacts’ left by the steganography-producing software while it is being used to generate steganographic images. This approach was based on the concept of a ‘Stego-Usage Timeline’. As a proof of concept, a sample set of criminal computers was scanned for the remnants of steganography-producing software. The results demonstrated that the problem of ‘the detection of the usage of steganography’ could be addressed by the approach adopted after the research redirection and that certain steganographic software was popular among the criminals. Thus, the contribution of the research was in demonstrating that the limitations of tools based on the signature detection of steganographically altered images can be overcome by focusing the detection effort on detecting the artifacts of the steganography-producing tools.

  6. Automatic epileptic seizure detection in EEGs using MF-DFA, SVM based on cloud computing.

    Zhang, Zhongnan; Wen, Tingxi; Huang, Wei; Wang, Meihong; Li, Chunfeng

    2017-01-01

    Epilepsy is a chronic disease with transient brain dysfunction that results from the sudden abnormal discharge of neurons in the brain. Since the electroencephalogram (EEG) is a harmless and noninvasive detection method, it plays an important role in the detection of neurological diseases. However, the process of analyzing EEG to detect neurological diseases is often difficult because the brain electrical signals are random, non-stationary and nonlinear. In order to overcome this difficulty, this study aims to develop a new computer-aided scheme for automatic epileptic seizure detection in EEGs based on multi-fractal detrended fluctuation analysis (MF-DFA) and support vector machine (SVM). The new scheme first extracts features from the EEG by MF-DFA. Then, the scheme applies a genetic algorithm (GA) to calculate the parameters used in the SVM and classifies the training data according to the selected features. Finally, the trained SVM classifier is exploited to detect neurological diseases. The algorithm utilizes MLlib, the machine learning library of Spark, and runs on a cloud platform. Applied to a public dataset for experiments, the study results show that the new feature extraction method and scheme can detect signals with fewer features, and the classification accuracy reached up to 99%. MF-DFA is a promising approach to extracting features for analyzing EEG because of its simple algorithmic procedure and few parameters. The features obtained by MF-DFA can represent samples as well as the traditional wavelet transform and Lyapunov exponents. GA can always find useful parameters for the SVM given enough execution time. The results illustrate that the classification model can achieve comparable accuracy, which means that it is effective in epileptic seizure detection.
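
    A compact, simplified MF-DFA sketch: build the profile, detrend it piecewise with polynomial fits, form the q-order fluctuation functions, and take the log-log slopes h(q) that would serve as the SVM feature vector. The scale and q choices are ours, and the GA/Spark machinery is omitted.

      import numpy as np

      def mfdfa(x, scales, qs, order=1):
          """Return h(q): slopes of log F_q(s) versus log s."""
          y = np.cumsum(x - np.mean(x))                  # profile
          Fq = np.zeros((len(qs), len(scales)))
          for j, s in enumerate(scales):
              n_seg = len(y) // s
              segs = y[: n_seg * s].reshape(n_seg, s)
              t = np.arange(s)
              f2 = np.array([np.mean((seg - np.polyval(
                  np.polyfit(t, seg, order), t)) ** 2) for seg in segs])
              for k, q in enumerate(qs):
                  if q == 0:                             # limiting formula
                      Fq[k, j] = np.exp(0.5 * np.mean(np.log(f2)))
                  else:
                      Fq[k, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
          return np.array([np.polyfit(np.log(scales), np.log(Fq[k]), 1)[0]
                           for k in range(len(qs))])

      # White noise should give h(q) close to 0.5 for all q
      x = np.random.default_rng(5).standard_normal(4096)
      print(mfdfa(x, scales=[16, 32, 64, 128, 256], qs=[-2, 0, 2]))

    The resulting h(q) values would then be passed to an SVM classifier whose hyperparameters the paper tunes with a genetic algorithm.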

  7. Disk brake design for cooling improvement using Computational Fluid Dynamics (CFD)

    Munisamy, Kannan M; Shafik, Ramel

    2013-01-01

    The car disk brake design is improved by evaluating two different blade designs against the baseline blade design. The two designs were simulated using computational fluid dynamics (CFD) to obtain heat transfer properties such as the Nusselt number and the heat transfer coefficient, which were compared against the baseline design. The improved shape has the highest heat transfer performance; the curved design is inferior to the baseline design.

  8. Computational intelligence for qualitative coaching diagnostics: Automated assessment of tennis swings to improve performance and safety

    Bačić, Boris; Hume, Patria

    2017-01-01

    Coaching technology, wearables and exergames can provide quantitative feedback based on measured activity, but there is little evidence of qualitative feedback to aid technique improvement. To achieve personalised qualitative feedback, we demonstrated a proof-of-concept prototype combining kinesiology and computational intelligence that could help improve tennis swing technique. Three-dimensional tennis motion data were acquired from multi-camera video (22 backhands and 21 forehands, includ...

  9. Disk brake design for cooling improvement using Computational Fluid Dynamics (CFD)

    Munisamy, Kannan M.; Shafik, Ramel

    2013-06-01

    The car disk brake design is improved by evaluating two different blade designs against the baseline blade design. The two designs were simulated using computational fluid dynamics (CFD) to obtain heat transfer properties such as the Nusselt number and the heat transfer coefficient, which were compared against the baseline design. The improved shape has the highest heat transfer performance; the curved design is inferior to the baseline design.

  10. Computer navigation experience in hip resurfacing improves femoral component alignment using a conventional jig.

    Morison, Zachary; Mehra, Akshay; Olsen, Michael; Donnelly, Michael; Schemitsch, Emil

    2013-11-01

    The use of computer navigation has been shown to improve the accuracy of femoral component placement compared to conventional instrumentation in hip resurfacing. Whether exposure to computer navigation improves accuracy when the procedure is subsequently performed with conventional instrumentation without navigation has not been explored. We examined whether femoral component alignment utilizing a conventional jig improves following experience with the use of imageless computer navigation for hip resurfacing. Between December 2004 and December 2008, 213 consecutive hip resurfacings were performed by a single surgeon. The first 17 (Cohort 1) and the last 9 (Cohort 2) hip resurfacings were performed using a conventional guidewire alignment jig. In 187 cases, the femoral component was implanted using the imageless computer navigation. Cohorts 1 and 2 were compared for femoral component alignment accuracy. All components in Cohort 2 achieved the position determined by the preoperative plan. The mean deviation of the stem-shaft angle (SSA) from the preoperatively planned target position was 2.2° in Cohort 2 and 5.6° in Cohort 1 (P = 0.01). Four implants in Cohort 1 were positioned at least 10° varus compared to the target SSA position and another four were retroverted. Femoral component placement utilizing conventional instrumentation may be more accurate following experience using imageless computer navigation.

  11. Computer navigation experience in hip resurfacing improves femoral component alignment using a conventional jig

    Zachary Morison

    2013-01-01

    Full Text Available Background: The use of computer navigation has been shown to improve the accuracy of femoral component placement compared to conventional instrumentation in hip resurfacing. Whether exposure to computer navigation improves accuracy when the procedure is subsequently performed with conventional instrumentation without navigation has not been explored. We examined whether femoral component alignment utilizing a conventional jig improves following experience with the use of imageless computer navigation for hip resurfacing. Materials and Methods: Between December 2004 and December 2008, 213 consecutive hip resurfacings were performed by a single surgeon. The first 17 (Cohort 1) and the last 9 (Cohort 2) hip resurfacings were performed using a conventional guidewire alignment jig. In 187 cases, the femoral component was implanted using the imageless computer navigation. Cohorts 1 and 2 were compared for femoral component alignment accuracy. Results: All components in Cohort 2 achieved the position determined by the preoperative plan. The mean deviation of the stem-shaft angle (SSA) from the preoperatively planned target position was 2.2° in Cohort 2 and 5.6° in Cohort 1 (P = 0.01). Four implants in Cohort 1 were positioned at least 10° varus compared to the target SSA position and another four were retroverted. Conclusions: Femoral component placement utilizing conventional instrumentation may be more accurate following experience using imageless computer navigation.

  12. Improved detection limits for phthalates by selective solid-phase micro-extraction

    Zia, Asif I.; Afsarimanesh, Nasrin; Xie, Li; Nag, Anindya; Al-Bahadly, I. H.; Yu, P. L.; Kosel, Jürgen

    2016-01-01

    The presented research reports on an improved method and enhanced limits of detection for phthalates, a hazardous additive used in the production of plastics, by solid-phase micro-extraction (SPME) polymer in comparison to molecularly imprinted solid

  13. Modern Approaches to the Computation of the Probability of Target Detection in Cluttered Environments

    Meitzler, Thomas J.

    The field of computer vision interacts with fields such as psychology, vision research, machine vision, psychophysics, mathematics, physics, and computer science. The focus of this thesis is new algorithms and methods for the computation of the probability of detection (Pd) of a target in a cluttered scene. The scene can be either a natural visual scene such as one sees with the naked eye (visual), or a scene displayed on a monitor with the help of infrared sensors. The relative clutter and the temperature difference between the target and background (ΔT) are defined and then used to calculate a relative signal-to-clutter ratio (SCR) from which the Pd is calculated for a target in a cluttered scene. It is shown how this definition can include many previous definitions of clutter and ΔT. Next, fuzzy and neural-fuzzy techniques are used to calculate the Pd, and it is shown how these methods can give results that correlate well with experiment. The experimental design for actually measuring the Pd of a target by observers is described. Finally, wavelets are applied to the calculation of clutter, and it is shown how this new definition of clutter based on wavelets can be used to compute the Pd of a target.

  14. Diagnostic ability of Barrett's index to detect dysthyroid optic neuropathy using multidetector computed tomography

    Monteiro, Mario L.R.; Goncalves, Allan C.P.; Silva, Carla T.M.; Moura, Janete P.; Ribeiro, Carolina S.; Gebrim, Eloisa M.M.S.; Universidade de Sao Paulo; Universidade de Sao Paulo

    2008-01-01

    Objectives: The objective of this study was to evaluate the ability of a muscular index (Barrett's index), calculated with multidetector computed tomography, to detect dysthyroid optic neuropathy in patients with Graves' orbitopathy. Methods: Thirty-six patients with Graves' orbitopathy were prospectively studied and submitted to neuro-ophthalmic evaluation and multidetector computed tomography scans of the orbits. Orbits were divided into two groups: those with and without dysthyroid optic neuropathy. Barrett's index was calculated as the percentage of the orbit occupied by muscles. Sensitivity and specificity were determined for several index values. Results: Sixty-four orbits (19 with and 45 without dysthyroid optic neuropathy) met the inclusion criteria for the study. The mean Barrett's index values (±SD) were 64.47% ± 6.06% and 49.44% ± 10.94% in the groups with and without dysthyroid optic neuropathy, respectively; the difference was statistically significant. Orbits with a Barrett's index above 60% should be carefully examined and followed for the development of dysthyroid optic neuropathy. (author)
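
    The index itself is a one-line computation; the muscle and orbit areas below are invented purely to show the 60% follow-up threshold in use.

      def barrett_index(muscle_area, orbit_area):
          """Barrett's index: percentage of the orbit occupied by
          extraocular muscle on the chosen CT section."""
          return 100.0 * muscle_area / orbit_area

      # Hypothetical measurements (same units, e.g., mm^2):
      bi = barrett_index(640.0, 1000.0)
      print(bi, bi > 60.0)   # 64.0, True -> follow for optic neuropathy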

  15. Computer-controlled detection system for high-precision isotope ratio measurements

    McCord, B.R.; Taylor, J.W.

    1986-01-01

    In this paper the authors describe a detection system for high-precision isotope ratio measurements. In this new system, the requirement for a ratioing digital voltmeter has been eliminated, and a standard digital voltmeter interfaced to a computer is employed. Instead of measuring the ratio of the two steadily increasing output voltages simultaneously, the digital voltmeter alternately samples the outputs at a precise rate over a certain period of time. The data are sent to the computer which calculates the rate of charge of each amplifier and divides the two rates to obtain the isotopic ratio. These results simulate a coincident measurement of the output of both integrators. The charge rate is calculated by using a linear regression method, and the standard error of the slope gives a measure of the stability of the system at the time the measurement was taken
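
    A sketch of the described computation: fit each integrator ramp with a straight line, divide the two charge rates (slopes), and use each slope's standard error as the stability measure. The simulated voltages and noise level are illustrative.

      import numpy as np

      def isotope_ratio(t, v_major, v_minor):
          """Slope of each ramp = charge rate; their quotient is the
          isotopic ratio, with stability from the slope standard errors."""
          (m1, _), cov1 = np.polyfit(t, v_major, 1, cov=True)
          (m2, _), cov2 = np.polyfit(t, v_minor, 1, cov=True)
          ratio = m2 / m1
          rel = np.hypot(np.sqrt(cov1[0, 0]) / m1, np.sqrt(cov2[0, 0]) / m2)
          return ratio, ratio * rel

      # Simulated alternating samples of two steadily rising outputs
      rng = np.random.default_rng(2)
      t = np.linspace(0.0, 10.0, 200)
      v_major = 1.000 * t + rng.normal(0, 0.002, t.size)
      v_minor = 0.0112 * t + rng.normal(0, 0.002, t.size)
      print(isotope_ratio(t, v_major, v_minor))   # ~(0.0112, small error)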

  16. Detecting Mental States by Machine Learning Techniques: The Berlin Brain-Computer Interface

    Blankertz, Benjamin; Tangermann, Michael; Vidaurre, Carmen; Dickhaus, Thorsten; Sannelli, Claudia; Popescu, Florin; Fazli, Siamac; Danóczy, Márton; Curio, Gabriel; Müller, Klaus-Robert

    The Berlin Brain-Computer Interface (BBCI) uses a machine learning approach to extract user-specific patterns from high-dimensional EEG features optimized for revealing the user's mental state. Classical BCI applications are brain-actuated tools for patients such as prostheses (see Section 4.1) or mental text entry systems ([1] and see [2-5] for an overview of BCI). In these applications, the BBCI uses natural motor skills of the users and specifically tailored pattern recognition algorithms for detecting the user's intent. But beyond rehabilitation, there is a wide range of possible applications in which BCI technology is used to monitor other mental states, often even covert ones (see also [6] in the fMRI realm). While this field is still largely unexplored, two examples from our studies are exemplified in Sections 4.3 and 4.4.

  17. COMPUTING

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  18. COMPUTING

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  19. COMPUTING

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid-year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2s are associated with Physics Groups. Such associations are decided twice per ye...

  20. Single reading with computer-aided detection performed by selected radiologists in a breast cancer screening program

    Bargalló, Xavier, E-mail: xbarga@clinic.cat [Department of Radiology (CDIC), Hospital Clínic de Barcelona, C/ Villarroel, 170, 08036 Barcelona (Spain); Santamaría, Gorane; Amo, Montse del; Arguis, Pedro [Department of Radiology (CDIC), Hospital Clínic de Barcelona, C/ Villarroel, 170, 08036 Barcelona (Spain); Ríos, José [Biostatistics and Data Management Core Facility, IDIBAPS, (Hospital Clinic) C/ Mallorca, 183. Floor -1. Office #60. 08036 Barcelona (Spain); Grau, Jaume [Preventive Medicine and Epidemiology Unit, Hospital Clínic de Barcelona, C/ Villarroel, 170, 08036 Barcelona (Spain); Burrel, Marta; Cores, Enrique; Velasco, Martín [Department of Radiology (CDIC), Hospital Clínic de Barcelona, C/ Villarroel, 170, 08036 Barcelona (Spain)

    2014-11-15

    Highlights: • The cancer detection rate of the screening program improved using a single reading protocol by experienced radiologists assisted by CAD. • The cancer detection rate improved at the cost of an increased recall rate. • CAD, used by breast radiologists, did not help to detect more cancers. - Abstract: Objectives: To assess the impact of shifting from a standard double reading plus arbitration protocol to a single reading by experienced radiologists assisted by computer-aided detection (CAD) in a breast cancer screening program. Methods: This was a prospective study approved by the ethics committee. Data from 21,321 consecutive screening mammograms in incident rounds (2010–2012) were read following a single reading plus CAD protocol and compared with data from 47,462 consecutive screening mammograms in incident rounds (2004–2010) that were interpreted following a double reading plus arbitration protocol. For the single reading, radiologists were selected on the basis of the appraisal of their previous performance. Results: Period 2010–2012 vs. period 2004–2010: Cancer detection rate (CDR): 6.1‰ (95% confidence interval: 5.1–7.2) vs. 5.25‰; Recall rate (RR): 7.02% (95% confidence interval: 6.7–7.4) vs. 7.24% (selected readers before arbitration) and vs. 3.94% (all readers after arbitration); Positive predictive value of recall: 8.69% vs. 13.32%. Average size of invasive cancers: 14.6 ± 9.5 mm vs. 14.3 ± 9.5 mm. Stage: 0 (22.3/26.1%); I (59.2/50.8%); II (19.2/17.1%); III (3.1/3.3%); IV (0/1.9%). Specialized breast radiologists performed better than general radiologists. Conclusions: The cancer detection rate of the screening program improved using a single reading protocol by experienced radiologists assisted by CAD, at the cost of a moderate increase of the recall rate mainly related to the lack of arbitration.

  1. ICARE improves antinuclear antibody detection by overcoming the barriers preventing accreditation.

    Bertin, Daniel; Mouhajir, Yassin; Bongrand, Pierre; Bardin, Nathalie

    2016-02-15

    Antinuclear antibodies (ANA) are useful biomarkers for the diagnosis and the monitoring of rheumatic diseases. The American College of Rheumatology has stated that indirect immunofluorescence (IIF) analysis remains the gold standard for ANA screening. However, IIF is time consuming, subjective, not fully standardized and presents several issues for accreditation, which is the process leading to ISO 15189 certification for medical laboratories. We propose an innovative tool for accreditation by using the quantitative evaluation of the automated image capture and analysis "ICARE" (Immunofluorescence for Computed Antinuclear antibody Rational Evaluation). We established the optimal screening dilution (1:160) and a fluorescence index (FI) cutoff for ICARE on a cohort of 91 healthy blood donors. Then, we evaluated performance of ICARE on a routine cohort of 236 patients. Precision parameters of ANA detection by IIF were evaluated according to ISO 15189. ICARE showed an excellent concordance with visual evaluation (88%, Kappa=0.76) and significantly discriminated between weak to moderate (1:160-1:320 titers) and high (>1:320 titers) ANA levels. A significant correlation was found between FI and ANA titers (Spearman's ρ=0.67). ICARE thus fits into the process of continuous improvement of the quality of clinical laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Improving model construction of profile HMMs for remote homology detection through structural alignment

    Zaverucha Gerson

    2007-11-01

    Background: Remote homology detection is a challenging problem in Bioinformatics. Arguably, profile Hidden Markov Models (pHMMs) are one of the most successful approaches in addressing this important problem. pHMM packages present a relatively small computational cost, and perform particularly well at recognizing remote homologies. This raises the question of whether structural alignments could impact the performance of pHMMs trained from proteins in the Twilight Zone, as structural alignments are often more accurate than sequence alignments at identifying motifs and functional residues. Next, we assess the impact of using structural alignments in pHMM performance. Results: We used the SCOP database to perform our experiments. Structural alignments were obtained using the 3DCOFFEE and MAMMOTH-mult tools; sequence alignments were obtained using CLUSTALW, TCOFFEE, MAFFT and PROBCONS. We performed leave-one-family-out cross-validation over super-families. Performance was evaluated through ROC curves and a paired two-tailed t-test. Conclusion: We observed that pHMMs derived from structural alignments performed significantly better than pHMMs derived from sequence alignments in low-identity regions, mainly below 20%. We believe this is because structural alignment tools are better at focusing on the important patterns that are more often conserved through evolution, resulting in higher quality pHMMs. On the other hand, the sensitivity of these tools is still quite low for these low-identity regions. Our results suggest a number of possible directions for improvements in this area.

  3. Improving model construction of profile HMMs for remote homology detection through structural alignment.

    Bernardes, Juliana S; Dávila, Alberto M R; Costa, Vítor S; Zaverucha, Gerson

    2007-11-09

    Remote homology detection is a challenging problem in Bioinformatics. Arguably, profile Hidden Markov Models (pHMMs) are one of the most successful approaches in addressing this important problem. pHMM packages present a relatively small computational cost, and perform particularly well at recognizing remote homologies. This raises the question of whether structural alignments could impact the performance of pHMMs trained from proteins in the Twilight Zone, as structural alignments are often more accurate than sequence alignments at identifying motifs and functional residues. Next, we assess the impact of using structural alignments in pHMM performance. We used the SCOP database to perform our experiments. Structural alignments were obtained using the 3DCOFFEE and MAMMOTH-mult tools; sequence alignments were obtained using CLUSTALW, TCOFFEE, MAFFT and PROBCONS. We performed leave-one-family-out cross-validation over super-families. Performance was evaluated through ROC curves and a paired two-tailed t-test. We observed that pHMMs derived from structural alignments performed significantly better than pHMMs derived from sequence alignments in low-identity regions, mainly below 20%. We believe this is because structural alignment tools are better at focusing on the important patterns that are more often conserved through evolution, resulting in higher quality pHMMs. On the other hand, the sensitivity of these tools is still quite low for these low-identity regions. Our results suggest a number of possible directions for improvements in this area.
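
    The evaluation step named in both records reduces to a paired two-tailed t-test over matched cross-validation scores. A minimal sketch with hypothetical per-family AUC values:

        import numpy as np
        from scipy import stats

        # Hypothetical AUCs per held-out family, structural vs. sequence pHMMs.
        auc_structural = np.array([0.91, 0.84, 0.88, 0.79, 0.93])
        auc_sequence = np.array([0.85, 0.80, 0.86, 0.71, 0.90])

        t, p = stats.ttest_rel(auc_structural, auc_sequence)  # paired, two-tailed
        print(f"t = {t:.2f}, p = {p:.3f}")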

  4. Knowledge Graphs as Context Models: Improving the Detection of Cross-Language Plagiarism with Paraphrasing

    Franco-Salvador, Marc; Gupta, Parth; Rosso, Paolo

    2013-01-01

    Cross-language plagiarism detection attempts to automatically identify and extract plagiarism among documents in different languages. Plagiarized fragments can be verbatim translated copies, or their structure may be altered to hide the copying, which is known as paraphrasing and is more difficult to detect. In order to improve paraphrasing detection, we use a knowledge graph-based approach to obtain and compare context models of document fragments in different languages. Experimental results i...

  5. A new approach to develop computer-aided detection schemes of digital mammograms

    Tan, Maxine; Qian, Wei; Pu, Jiantao; Liu, Hong; Zheng, Bin

    2015-06-01

    The purpose of this study is to develop a new global mammographic image feature analysis based computer-aided detection (CAD) scheme and evaluate its performance in detecting positive screening mammography examinations. A dataset that includes images acquired from 1896 full-field digital mammography (FFDM) screening examinations was used in this study. Among them, 812 cases were positive for cancer and 1084 were negative or benign. After segmenting the breast area, a computerized scheme was applied to compute 92 global mammographic tissue density based features on each of four mammograms of the craniocaudal (CC) and mediolateral oblique (MLO) views. After adding three existing popular risk factors (woman’s age, subjectively rated mammographic density, and family breast cancer history) into the initial feature pool, we applied a sequential forward floating selection feature selection algorithm to select relevant features from the bilateral CC and MLO view images separately. The selected CC and MLO view image features were used to train two artificial neural networks (ANNs). The results were then fused by a third ANN to build a two-stage classifier to predict the likelihood of the FFDM screening examination being positive. CAD performance was tested using a ten-fold cross-validation method. The computed area under the receiver operating characteristic curve was AUC = 0.779 ± 0.025 and the odds ratio monotonically increased from 1 to 31.55 as CAD-generated detection scores increased. The study demonstrated that this new global image feature based CAD scheme had a relatively higher discriminatory power to cue the FFDM examinations with high risk of being positive, which may provide a new CAD-cueing method to assist radiologists in reading and interpreting screening mammograms.
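
    The two-stage classifier can be sketched as below. This is not the paper's implementation: the data are synthetic stand-ins for the selected CC- and MLO-view features, scikit-learn's MLP replaces the original ANNs, and the sequential forward floating selection is omitted. In practice the fusion network would be trained on cross-validated stage-one scores to avoid leakage.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        x_cc, x_mlo = rng.normal(size=(200, 10)), rng.normal(size=(200, 10))
        y = rng.integers(0, 2, size=200)  # 1 = positive examination

        # Stage one: one ANN per view on the selected features.
        ann_cc = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(x_cc, y)
        ann_mlo = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(x_mlo, y)

        # Stage two: a third ANN fuses the two view-level scores.
        stage1 = np.column_stack([ann_cc.predict_proba(x_cc)[:, 1],
                                  ann_mlo.predict_proba(x_mlo)[:, 1]])
        fuser = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000).fit(stage1, y)
        risk = fuser.predict_proba(stage1)[:, 1]  # likelihood of a positive exam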

  6. A pragmatic approach to measuring, monitoring and evaluating interventions for improved tuberculosis case detection

    Blok, Lucie; Creswell, Jacob; Stevens, Robert; Brouwer, Miranda; Ramis, Oriol; Weil, Olivier; Klatser, Paul; Sahu, Suvanand; Bakker, Mirjam I.

    2014-01-01

    The inability to detect all individuals with active tuberculosis has led to a growing interest in new approaches to improve case detection. Policy makers and program staff face important challenges measuring effectiveness of newly introduced interventions and reviewing feasibility of scaling-up

  7. A pragmatic approach to measuring, monitoring and evaluating interventions for improved tuberculosis case detection.

    Blok, L; Creswell, J; Stevens, R.; Brouwer, M; Ramis, O; Weil, O; Klatser, P.R.; Sahu, S; Bakker, M.I.

    2014-01-01

    The inability to detect all individuals with active tuberculosis has led to a growing interest in new approaches to improve case detection. Policy makers and program staff face important challenges measuring effectiveness of newly introduced interventions and reviewing feasibility of scaling-up

  8. Opportunities and Challenges of Cloud Computing to Improve Health Care Services

    2011-01-01

    Cloud computing is a new way of delivering computing resources and services. Many managers and experts believe that it can improve health care services, benefit health care research, and change the face of health information technology. However, as with any innovation, cloud computing should be rigorously evaluated before its widespread adoption. This paper discusses the concept and its current place in health care, and uses 4 aspects (management, technology, security, and legal) to evaluate the opportunities and challenges of this computing model. Strategic planning that could be used by a health organization to determine its direction, strategy, and resource allocation when it has decided to migrate from traditional to cloud-based health services is also discussed. PMID:21937354

  9. Improved look-up table method of computer-generated holograms.

    Wei, Hui; Gong, Guanghong; Li, Ni

    2016-11-10

    Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.
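
    The core of any LUT scheme is that the expensive propagation term is tabulated off-line and only looked up on-line. The sketch below illustrates that idea in NumPy under strong assumptions (radial indexing, an arbitrary pixel pitch, wavelength, and depth grid); the paper's distance-factor definition and GPU kernel are not reproduced.

        import numpy as np

        wavelength = 633e-9
        k = 2 * np.pi / wavelength
        pitch = 8e-6  # hologram pixel pitch (assumed)

        # Off-line: tabulate propagation phases over radial pixel offsets
        # for a fixed grid of object depths.
        max_offset = 256
        depths = np.linspace(0.05, 0.20, 64)
        r = np.arange(max_offset) * pitch
        lut = np.exp(1j * k * np.sqrt(r[None, :] ** 2 + depths[:, None] ** 2))

        def add_point(holo, px, py, depth_idx, amp=1.0):
            # On-line: accumulate one object point using table look-ups only.
            ny, nx = holo.shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            offset = np.round(np.hypot(yy - py, xx - px)).astype(int)
            holo += amp * lut[depth_idx, np.clip(offset, 0, max_offset - 1)]

        holo = np.zeros((512, 512), dtype=complex)
        add_point(holo, 256, 256, depth_idx=10)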

  10. An improved, computer-based, on-line gamma monitor for plutonium anion exchange process control

    Pope, N.G.; Marsh, S.F.

    1987-06-01

    An improved, low-cost, computer-based system has replaced a previously developed on-line gamma monitor. Both instruments continuously profile uranium, plutonium, and americium in the nitrate anion exchange process used to recover and purify plutonium at the Los Alamos Plutonium Facility. The latest system incorporates a personal computer that provides full-feature multichannel analyzer (MCA) capabilities by means of a single-slot, plug-in integrated circuit board. In addition to controlling all MCA functions, the computer program continuously corrects for gain shift and performs all other data processing functions. This Plutonium Recovery Operations Gamma Ray Energy Spectrometer System (PROGRESS) provides on-line process operational data essential for efficient operation. By identifying abnormal conditions in real time, it allows operators to take corrective actions promptly. The decision-making capability of the computer will be of increasing value as we implement automated process-control functions in the future. 4 refs., 6 figs

  11. Opportunities and challenges of cloud computing to improve health care services.

    Kuo, Alex Mu-Hsing

    2011-09-21

    Cloud computing is a new way of delivering computing resources and services. Many managers and experts believe that it can improve health care services, benefit health care research, and change the face of health information technology. However, as with any innovation, cloud computing should be rigorously evaluated before its widespread adoption. This paper discusses the concept and its current place in health care, and uses 4 aspects (management, technology, security, and legal) to evaluate the opportunities and challenges of this computing model. Strategic planning that could be used by a health organization to determine its direction, strategy, and resource allocation when it has decided to migrate from traditional to cloud-based health services is also discussed.

  12. The Comparison of Computed Tomography Perfusion, Contrast-Enhanced Computed Tomography and Positron-Emission Tomography/Computed Tomography for the Detection of Primary Esophageal Carcinoma.

    Genc, Berhan; Kantarci, Mecit; Sade, Recep; Orsal, Ebru; Ogul, Hayri; Okur, Aylin; Aydin, Yener; Karaca, Leyla; Eroğlu, Atilla

    2016-01-01

    The purpose of this study was to investigate the efficiency of computed tomography perfusion (CTP), contrast-enhanced computed tomography (CECT) and 18F-fluoro-2-deoxy-D-glucose (18F-FDG) positron-emission tomography (PET/CT) in the diagnosis of esophageal cancer. This prospective study consisted of 33 patients with pathologically confirmed esophageal cancer, 2 of whom had an esophageal abscess. All the patients underwent CTP, CECT and PET/CT imaging and the imaging findings were evaluated. Sensitivity, specificity and positive and negative predictive values were calculated for each of the 3 imaging modalities relative to the histological diagnosis. Thirty-three tumors were visualized on CTP, 29 on CECT and 27 on PET/CT. Six tumors were stage 1, and 2 and 4 of these tumors were missed on CECT and PET/CT, respectively. Significant differences between CTP and CECT (p = 0.02), and between CTP and PET/CT (p = 0.04) were found for stage 1 tumors. Values for the sensitivity, specificity and positive and negative predictive values on CTP were 100, 100, 100 and 100%, respectively. Corresponding values on CECT were 93.94, 0, 93.94 and 0%, respectively, and those on PET/CT were 87.88, 0, 93.55 and 0%, respectively. Hence, the sensitivity, specificity and positive and negative predictive values of CTP were better than those of CECT and PET/CT. CTP had an advantage over CECT and PET/CT in detecting small lesions. CTP was valuable, especially in detecting stage 1 tumors. © 2016 S. Karger AG, Basel.
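
    The four reported metrics follow directly from the 2x2 contingency counts; a small helper for computing them (illustrative, not from the paper):

        def diagnostic_metrics(tp, fp, tn, fn):
            # Standard screening metrics, in percent; denominators of zero
            # are returned as NaN rather than raising.
            sens = 100.0 * tp / (tp + fn)
            spec = 100.0 * tn / (tn + fp) if (tn + fp) else float("nan")
            ppv = 100.0 * tp / (tp + fp)
            npv = 100.0 * tn / (tn + fn) if (tn + fn) else float("nan")
            return sens, spec, ppv, npv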

  13. Chrysler improved numerical differencing analyzer for third generation computers CINDA-3G

    Gaski, J. D.; Lewis, D. R.; Thompson, L. R.

    1972-01-01

    A new and versatile method has been developed to supplement or replace the use of the original CINDA thermal analyzer program in order to take advantage of the improved systems software and machine speeds of third generation computers. The CINDA-3G program options offer a variety of methods for the solution of thermal analog models presented in network format.

  14. Computer-Based Instruction for Improving Student Nurses' General Numeracy: Is It Effective? Two Randomised Trials

    Ainsworth, Hannah; Gilchrist, Mollie; Grant, Celia; Hewitt, Catherine; Ford, Sue; Petrie, Moira; Torgerson, Carole J.; Torgerson, David J.

    2012-01-01

    In response to concern over the numeracy skills deficit displayed by student nurses, an online computer programme, "Authentic World[R]", which aims to simulate a real-life clinical environment and improve the medication dosage calculation skills of users, was developed (Founded in 2004 Authentic World Ltd is a spin out company of…

  15. Spatial image modulation to improve performance of computed tomography imaging spectrometer

    Bearman, Gregory H. (Inventor); Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor)

    2010-01-01

    Computed tomography imaging spectrometers (CTISs) having patterns for imposing spatial structure are provided. The pattern may be imposed either directly on the object scene being imaged or at the field stop aperture. The use of the pattern improves the accuracy of the captured spatial and spectral information.

  16. Play for Performance: Using Computer Games to Improve Motivation and Test-Taking Performance

    Dennis, Alan R.; Bhagwatwar, Akshay; Minas, Randall K.

    2013-01-01

    The importance of testing, especially certification and high-stakes testing, has increased substantially over the past decade. Building on the "serious gaming" literature and the psychology "priming" literature, we developed a computer game designed to improve test-taking performance using psychological priming. The game primed…

  17. Autogenic Feedback Training (Body Fortran) with Biofeedback and the Computer for Self-Improvement and Change.

    Cassel, Russell N.; Sumintardja, Elmira Nasrudin

    1983-01-01

    Describes autogenic feedback training, which provides the basis whereby an individual is able to improve well-being through the use of a technique described as "body fortran," implying that one programs oneself as one programs a computer. Necessary requisites are described, including relaxation training and the management of stress. (JAC)

  18. A 3D edge detection technique for surface extraction in computed tomography for dimensional metrology applications

    Yagüe-Fabra, J.A.; Ontiveros, S.; Jiménez, R.

    2013-01-01

    Many factors influence the measurement uncertainty when using computed tomography for dimensional metrology applications. One of the most critical steps is the surface extraction phase. An incorrect determination of the surface may significantly increase the measurement uncertainty. This paper presents an edge detection method for the surface extraction based on a 3D Canny algorithm with sub-voxel resolution. The advantages of this method are shown in comparison with the most commonly used technique nowadays, i.e. the local threshold definition. Both methods are applied to reference standards...
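
    The sub-voxel refinement step can be illustrated with a one-dimensional profile: locate the gradient-magnitude maxima and refine each with a three-point parabolic fit. This is a simplification, assuming axis-aligned profiles and an ad-hoc threshold, not the paper's full 3D Canny implementation.

        import numpy as np
        from scipy import ndimage

        def subvoxel_edges_1d(profile):
            # Edges = local maxima of |gradient|, refined by fitting a
            # parabola through the three samples around each maximum.
            g = np.abs(np.gradient(profile.astype(float)))
            strong = g.mean() + 2.0 * g.std()  # illustrative threshold
            edges = []
            for i in range(1, len(g) - 1):
                if g[i] > strong and g[i] >= g[i - 1] and g[i] > g[i + 1]:
                    denom = g[i - 1] - 2.0 * g[i] + g[i + 1]
                    shift = 0.5 * (g[i - 1] - g[i + 1]) / denom if denom else 0.0
                    edges.append(i + shift)  # sub-sample edge position
            return edges

        def surface_points(volume, sigma=1.0):
            # Smooth the CT volume, then collect sub-voxel surface crossings
            # along every x-aligned line (a full 3D method would also use the
            # y and z directions plus non-maximum suppression along the
            # gradient direction).
            smoothed = ndimage.gaussian_filter(volume.astype(float), sigma)
            return [(z, y, x)
                    for z in range(smoothed.shape[0])
                    for y in range(smoothed.shape[1])
                    for x in subvoxel_edges_1d(smoothed[z, y])]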

  19. Stream computing for biomedical signal processing: A QRS complex detection case-study.

    Murphy, B M; O'Driscoll, C; Boylan, G B; Lightbody, G; Marnane, W P

    2015-01-01

    Recent developments in "Big Data" have brought significant gains in the ability to process large amounts of data on commodity server hardware. Stream computing is a relatively new paradigm in this area, addressing the need to process data in real time with very low latency. While this approach has been developed for dealing with large scale data from the world of business, security and finance, there is a natural overlap with clinical needs for physiological signal processing. In this work we present a case study of streams processing applied to a typical physiological signal processing problem: QRS detection from ECG data.
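
    As a self-contained stand-in for the streams pipeline, a minimal Pan-Tompkins-style QRS detector (band-pass, differentiate, square, integrate, threshold) is sketched below; the thresholds and window lengths are illustrative choices, not taken from the paper.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def detect_qrs(ecg, fs):
            # Band-pass 5-15 Hz to emphasize the QRS complex.
            b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, ecg)
            # Differentiate, square, and integrate over a 150 ms window.
            win = int(0.15 * fs)
            energy = np.convolve(np.diff(filtered) ** 2,
                                 np.ones(win) / win, mode="same")
            thresh = 0.3 * energy.max()  # illustrative fixed threshold
            refractory = int(0.25 * fs)  # 250 ms lockout between beats
            peaks, last = [], -refractory
            for i in range(1, len(energy) - 1):
                is_peak = energy[i - 1] <= energy[i] > energy[i + 1]
                if is_peak and energy[i] > thresh and i - last > refractory:
                    peaks.append(i)
                    last = i
            return peaks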

  20. Computed tomographic detection of sinusitis responsible for intracranial and extracranial infections

    Carter, B.L.; Bankoff, M.S.; Fisk, J.D.

    1983-01-01

    Computed tomography (CT) is now used extensively for the evaluation of orbital, facial, and intracranial infections. Nine patients are presented to illustrate the importance of detecting underlying and unsuspected sinusitis. Prompt treatment of the sinusitis is essential to minimize the morbidity and mortality associated with complications such as brain abscess, meningitis, orbital cellulitis, and osteomyelitis. A review of the literature documents the persistence of these complications despite the widespread use of antibiotic therapy. Recognition of the underlying sinusitis is now possible with CT if the region of the sinuses is included and bone-window settings are used during the examination of patients with orbital and intracranial infection

  1. Detection algorithm of infrared small target based on improved SUSAN operator

    Liu, Xingmiao; Wang, Shicheng; Zhao, Jing

    2010-10-01

    The detection of small moving targets in infrared image sequences containing moving nuisance objects and background noise is analyzed in this paper, and a novel infrared small target detection algorithm based on an improved SUSAN operator is put forward. The algorithm selects two templates for infrared small target detection: one larger than the small target and one equal to it in size. First, the algorithm uses the large template to calculate the USAN of each pixel in the image and detects small targets together with image edges and isolated noise pixels. Then it uses the smaller template to recalculate the USAN of the pixels detected in the first step, with the SUSAN principle modified around the characteristics of small targets, so that the algorithm detects only small targets and is not sensitive to image edge pixels or isolated noise pixels. The interference from image edges and isolated noise points is thus removed and the candidate target points are identified. Finally, targets are confirmed by utilizing the continuity and consistency of target movement. The experimental results indicate that the improved SUSAN detection algorithm can quickly and effectively detect infrared small targets.
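
    The USAN computation at the heart of the method can be sketched as follows; the template radii, brightness threshold, and USAN cutoffs are illustrative, and the paper's movement-based confirmation stage is omitted.

        import numpy as np

        def usan(image, radius, t=25):
            # USAN value per pixel: number of neighbours inside a circular
            # template whose gray value is within t of the nucleus.
            # (np.roll wraps at the borders; a real implementation would pad.)
            img = image.astype(float)
            out = np.zeros_like(img)
            span = range(-radius, radius + 1)
            for dy in span:
                for dx in span:
                    if 0 < dy * dy + dx * dx <= radius * radius:
                        shifted = np.roll(np.roll(img, dy, 0), dx, 1)
                        out += np.abs(shifted - img) < t
            return out

        def candidates(image, big=3, small=1, t=25):
            # Two-template scheme: a large template flags small targets along
            # with edges and isolated noise (low USAN); re-testing those
            # pixels with a target-sized template keeps target-like points.
            first = usan(image, big, t) < 0.3 * (2 * big + 1) ** 2
            second = usan(image, small, t) < 0.5 * (2 * small + 1) ** 2
            return np.argwhere(first & second)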

  2. COMPUTING

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  3. Improving the Dictation in Attention Deficit Hyperactivity Disorder by Using Computer Based Interventions: A Clinical Trial

    Mahdi Tehranidoost

    2006-07-01

    Objective: The aim of the current study was to assess the impact of computer games and computer-assisted typing instruction on the dictation scores of elementary school children with attention deficit-hyperactivity disorder (ADHD). Method: In this single-blind clinical trial, 37 elementary school children with ADHD, selected by convenience sampling, were divided into group I (n=17) and group II (n=20) and underwent eight one-hour sessions (3 sessions per week) of intervention by computer games versus computer-assisted typing instruction, respectively. Twelve school dictation scores were considered: 4 scores pre-intervention, 4 scores during the interventions, and 4 scores post-intervention. A dictation test was taken during each session. Data were analyzed using repeated measures ANOVA. Results: The two groups were matched for age, gender, school grade, medication, IQ, parents' and teacher's Conners' scale scores, having a computer at home, history of working with a computer, and mean dictation scores. There was no significant difference in dictation scores before and after the interventions, nor between the study groups. The improvement in school dictation scores had no significant correlation with age, gender, Ritalin use, owning a computer at home, past history of computer work, baseline dictation scores, Ritalin dose, educational status, IQ, or the total score of the parents' and teacher's Conners' rating scale. Conclusion: The absence of significant improvement in dictation scores in the study groups may be due to the confounding effect of other variables with known impact on dictation scores. Further studies in this field should also assess changes in attention and memory.

  4. DESIGN AND DEVELOP A COMPUTER AIDED DESIGN FOR AUTOMATIC EXUDATES DETECTION FOR DIABETIC RETINOPATHY SCREENING

    C. A. SATHIYAMOORTHY

    2016-04-01

    Diabetic retinopathy is a severe and widely spread eye disease which can lead to blindness. One of the main symptoms preceding vision loss is exudates, and blindness could be prevented by applying an early screening process. In existing systems, a fuzzy c-means clustering technique is used for detecting the exudates for analysis. The main objective of this paper is to improve the efficiency of exudate detection in diabetic retinopathy images. To do this, a Three-Stage (TS) approach is introduced for detecting and extracting the exudates automatically from retinal images for screening diabetic retinopathy. TS operates on the image at three levels: pre-processing the image, enhancing the image, and detecting the exudates accurately. After successful detection, the detected exudates are classified using the GLCM method for finding the accuracy. The TS approach is implemented in MATLAB software, and its performance is evaluated by comparing the results with those of the existing approach and with hand-drawn ground truth images from an expert ophthalmologist.
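
    The GLCM classification step named above can be sketched with scikit-image; the feature set and parameters are illustrative, and the three-stage pre-processing is not reproduced here.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(patch):
            # Texture descriptors of a detected exudate region from a
            # gray-level co-occurrence matrix, averaged over two directions.
            patch = np.asarray(patch, dtype=np.uint8)
            glcm = graycomatrix(patch, distances=[1],
                                angles=[0, np.pi / 2], levels=256,
                                symmetric=True, normed=True)
            return [graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "correlation",
                                 "energy", "homogeneity")]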

  5. A method of detection to the grinding wheel layer thickness based on computer vision

    Ji, Yuchen; Fu, Luhua; Yang, Dujuan; Wang, Lei; Liu, Changjie; Wang, Zhong

    2018-01-01

    This paper proposes a method of detecting the grinding wheel layer thickness based on computer vision. A camera is used to capture images of the grinding wheel layer around the whole circle. Forward lighting and back lighting are used to enable clear images to be acquired. Image processing is then executed on the captured images, consisting of image preprocessing, binarization and subpixel subdivision. The aim of binarization is to help locate a chord and the corresponding ring width. After subpixel subdivision, the thickness of the grinding layer can finally be calculated. Compared with methods usually used to detect grinding wheel wear, the method in this paper obtains the thickness information directly and quickly. The eccentricity error and the error of the pixel equivalent are also discussed in this paper.

  6. Detection of abdominal lymph node metastases from esophageal and cardia cancer by computed tomography

    Shima, S; Sugiura, Y; Yonekawa, H; Ogata, T [National Defence Medical Coll., Tokorosawa, Saitama (Japan)

    1982-03-01

    In order to evaluate the sensitivity of computed tomography (CT) in detecting abdominal lymph node metastases, preoperative CT was performed in 16 patients with carcinoma of the esophagus and gastric cardia. Ten patients (62.5%) had pathological evidence of lymph node metastases in the abdominal cavity, and 4 of them were identified as involving the para-aortic nodes. CT correctly demonstrated the lymph node metastases in the para-aortic and celiac axis areas, but failed to detect other abdominal lymph node involvements, which were small enough to be excised at operation. The para-aortic nodes on CT showed the following two features: one was a nodular mass, which did not obscure the aorta or inferior vena cava, and the other was a conglomerated mass, which was difficult to distinguish from the aorta. The former was resectable and the latter was not.

  7. Computer-assisted detection of colonic polyps with CT colonography using neural networks and binary classification trees

    Jerebko, Anna K.; Summers, Ronald M.; Malley, James D.; Franaszek, Marek; Johnson, C. Daniel

    2003-01-01

    Detection of colonic polyps in CT colonography is problematic due to complexities of polyp shape and the surface of the normal colon. Published results indicate the feasibility of computer-aided detection of polyps but better classifiers are needed to improve specificity. In this paper we compare the classification results of two approaches: neural networks and recursive binary trees. As our starting point we collect surface geometry information from three-dimensional reconstruction of the colon, followed by a filter based on selected variables such as region density, Gaussian and average curvature and sphericity. The filter returns sites that are candidate polyps, based on earlier work using detection thresholds, to which the neural nets or the binary trees are applied. A data set of 39 polyps from 3 to 25 mm in size was used in our investigation. For both neural net and binary trees we use tenfold cross-validation to better estimate the true error rates. The backpropagation neural net with one hidden layer trained with Levenberg-Marquardt algorithm achieved the best results: sensitivity 90% and specificity 95% with 16 false positives per study
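
    A minimal scaffold for the neural-network arm of this comparison is shown below, with synthetic stand-in candidate features; note that scikit-learn trains its MLP with lbfgs or adam rather than the Levenberg-Marquardt algorithm the paper used.

        import numpy as np
        from sklearn.model_selection import cross_val_predict
        from sklearn.neural_network import MLPClassifier

        # Hypothetical per-candidate features (density, curvatures,
        # sphericity, ...) and labels (1 = true polyp).
        rng = np.random.default_rng(1)
        x, y = rng.normal(size=(500, 5)), rng.integers(0, 2, size=500)

        net = MLPClassifier(hidden_layer_sizes=(10,), solver="lbfgs",
                            max_iter=5000)
        scores = cross_val_predict(net, x, y, cv=10,
                                   method="predict_proba")[:, 1]
        sensitivity = (scores[y == 1] > 0.5).mean()
        specificity = (scores[y == 0] <= 0.5).mean()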

  8. Benefit of computer-aided detection analysis for the detection of subsolid and solid lung nodules on thin- and thick-section CT.

    Godoy, Myrna C B; Kim, Tae Jung; White, Charles S; Bogoni, Luca; de Groot, Patricia; Florin, Charles; Obuchowski, Nancy; Babb, James S; Salganicoff, Marcos; Naidich, David P; Anand, Vikram; Park, Sangmin; Vlahos, Ioannis; Ko, Jane P

    2013-01-01

    The objective of our study was to evaluate the impact of computer-aided detection (CAD) on the identification of subsolid and solid lung nodules on thin- and thick-section CT. For 46 chest CT examinations with ground-glass opacity (GGO) nodules, CAD marks computed using thin data were evaluated in two phases. First, four chest radiologists reviewed thin sections (reader(thin)) for nodules and subsequently CAD marks (reader(thin) + CAD(thin)). After 4 months, the same cases were reviewed on thick sections (reader(thick)) and subsequently with CAD marks (reader(thick) + CAD(thick)). Sensitivities were evaluated. Additionally, reader(thick) sensitivity with assessment of CAD marks on thin sections was estimated (reader(thick) + CAD(thin)). For 155 nodules (mean, 5.5 mm; range, 4.0-27.5 mm) - 74 solid, 22 part-solid, and 59 GGO nodules - CAD stand-alone sensitivity was 80%, 95%, and 71%, respectively, with three false-positives on average (range, 0-12) per CT study. Reader(thin) + CAD(thin) sensitivities were higher than reader(thin) sensitivities for solid nodules (82% vs 57%). For thick-section interpretation, sensitivity was 40% for reader(thick) and improved to 58% with reader(thick) + CAD(thick); false-positive rates were 1.17, 1.19, and 1.26 per case for reader(thick), reader(thick) + CAD(thick), and reader(thick) + CAD(thin), respectively. Detection of GGO nodules and solid nodules is significantly improved with CAD. When interpretation is performed on thick sections, the benefit is greater when CAD marks are reviewed on thin rather than thick sections.

  9. Clinical application of low-dose CT combined with computer-aided detection in lung cancer screening

    Xu Zushan; Hou Hongjun; Xu Yan; Ma Daqing

    2010-01-01

    Objective: To investigate the clinical value of chest low-dose CT (LDCT) combined with a computer-aided detection (CAD) system for lung cancer screening in a high-risk population. Methods: Two hundred and nineteen healthy candidates underwent 64-slice LDCT scans. All images were reviewed in consensus by two radiologists with 15 years of thoracic CT diagnosis experience. Then the image data were analyzed with CAD alone. Finally, images were reviewed by two radiologists with 5 years of CT diagnosis experience, with and without the CT Viewer software. The sensitivity and false-positive rate of CAD for pulmonary nodule detection were calculated. SPSS 11.5 software and the Chi-square test were used for the statistics. Results: Of 219 candidates, 104 (47.5%) were detected with lung nodules. There were 366 true nodules confirmed by the senior radiologists. The CAD system detected 271 (74.0%) true nodules and 424 false-positive nodules. The false-positive rate was 1.94 per case. The two junior radiologists identified 292 (79.8%) and 286 (78.1%) nodules without CAD, and 336 (91.8%) and 333 (91.0%) nodules with CAD, respectively. There were significant differences for radiologists in identifying nodules with or without the CAD system (P<0.01). Conclusions: CAD is more sensitive than radiologists for identifying nodules in the central area or in the hilar region of the lung, while radiologists are more sensitive for peripheral and sub-pleural nodules, ground-glass opacity nodules, and nodules smaller than 4 mm. CAD cannot be used alone. The detection rate can be improved by combining radiologists and CAD in LDCT screening. (authors)

  10. An automated and fast approach to detect single-trial visual evoked potentials with application to brain-computer interface.

    Tu, Yiheng; Hung, Yeung Sam; Hu, Li; Huang, Gan; Hu, Yong; Zhang, Zhiguo

    2014-12-01

    This study aims (1) to develop an automated and fast approach for detecting visual evoked potentials (VEPs) in single trials and (2) to apply the single-trial VEP detection approach in designing a real-time and high-performance brain-computer interface (BCI) system. The single-trial VEP detection approach uses common spatial pattern (CSP) as a spatial filter and wavelet filtering (WF) as a temporal-spectral filter to jointly enhance the signal-to-noise ratio (SNR) of single-trial VEPs. The performance of the joint spatial-temporal-spectral filtering approach was assessed in a four-command VEP-based BCI system. The offline classification accuracy of the BCI system was significantly improved from 67.6±12.5% (raw data) to 97.3±2.1% (data filtered by CSP and WF). The proposed approach was successfully implemented in an online BCI system, where subjects could make 20 decisions in one minute with a classification accuracy of 90%. The proposed single-trial detection approach is able to obtain robust and reliable VEP waveforms in an automatic and fast way, and it is applicable in VEP-based online BCI systems. This approach provides a real-time and automated solution for single-trial detection of evoked potentials or event-related potentials (EPs/ERPs) in various paradigms, which could benefit many applications such as BCI and intraoperative monitoring. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
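
    The CSP spatial-filtering step is standard enough to sketch directly: the filters come from a generalized eigendecomposition of the class-covariance matrices. The wavelet-filtering stage and the classifier are omitted here.

        import numpy as np
        from scipy import linalg

        def csp_filters(x1, x2, n_pairs=2):
            # x1, x2: (trials, channels, samples) arrays for two conditions.
            # Returns spatial filters (rows) maximizing the variance ratio
            # between conditions; apply with filtered = w @ trial.
            def mean_cov(x):
                return np.mean([t @ t.T / np.trace(t @ t.T) for t in x], axis=0)
            c1, c2 = mean_cov(x1), mean_cov(x2)
            vals, vecs = linalg.eigh(c1, c1 + c2)  # generalized eigenproblem
            order = np.argsort(vals)
            picks = np.r_[order[:n_pairs], order[-n_pairs:]]
            return vecs[:, picks].T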

  11. Computer-aided detection (CAD) of lung nodules in CT scans: radiologist performance and reading time with incremental CAD assistance

    Roos, Justus E.; Paik, David; Olsen, David; Liu, Emily G.; Leung, Ann N.; Mindelzun, Robert; Choudhury, Kingshuk R.; Napel, Sandy; Rubin, Geoffrey D.; Chow, Lawrence C.; Naidich, David P.

    2010-01-01

    The diagnostic performance of radiologists using incremental CAD assistance for lung nodule detection on CT, and their temporal variation in performance during CAD evaluation, was assessed. CAD was applied to 20 chest multidetector-row computed tomography (MDCT) scans containing 190 non-calcified ≥3-mm nodules. After free search, three radiologists independently evaluated up to 50 CAD detections/patient. Multiple free-response ROC curves were generated for free search and successive CAD evaluation, by incrementally adding CAD detections one at a time to the radiologists' performance. The sensitivity for free search was 53% (range, 44%-59%) at 1.15 false positives (FP)/patient and increased with CAD to 69% (range, 59%-82%) at 1.45 FP/patient. CAD evaluation initially resulted in a sharp rise in sensitivity of 14% with a minimal increase in FP over a time period of 100 s, followed by flattening of the sensitivity increase to only 2%. This transition resulted from a greater prevalence of true positive (TP) versus FP detections at early CAD evaluation, not from a temporal change in readers' performance. The time spent for TP (9.5 s ± 4.5 s) and false negative (FN) (8.4 s ± 6.7 s) detections was similar; FP decisions took two to three times longer (14.4 s ± 8.7 s) than true negative (TN) decisions (4.7 s ± 1.3 s). When CAD output is ordered by CAD score, an initial period of rapid performance improvement slows significantly over time because of non-uniformity in the distribution of TP CAD output, not because of a change in reader performance over time. (orig.)

  12. Computer-aided detection of breast carcinoma in standard mammographic projections with digital mammography

    Destounis, S.; Hanson, S.

    2007-01-01

    This study was conducted to retrospectively evaluate a computer-aided detection system's ability to detect breast carcinoma in multiple standard mammographic projections. Forty-five lesions in 44 patients who were imaged with digital mammography (Selenia registered, Hologic, Bedford, MA; Senographe registered, GE, Milwaukee, WI) and had computer-aided detection (CAD, ImageChecker registered V 8.3.15, Hologic/R2, Santa Clara, CA) applied at the time of examination were identified for review; all were subsequently recommended for biopsy, where cancer was revealed. These lesions were determined by the study radiologist to be visible in both standard mammographic images (mediolateral oblique, MLO; craniocaudal, CC). For each patient, case data included patient age, tissue density, lesion type, BIRADS registered assessment, lesion size, lesion visibility (visible on MLO and/or CC view), ability of CAD to correctly mark the cancerous lesion, number of CAD marks per image, needle core biopsy results and surgical pathologic correlation. For this study cohort, CAD lesion/case sensitivity was 87% (n = 39), and image sensitivity was 69% (n = 31) for the MLO view and 78% (n = 35) for the CC view. Cases presented with a median of four marks per case (range 0-13). Eighty-four percent (n = 38) of lesions proceeded to excision; initial needle biopsy pathology was upgraded at surgical excision from in situ disease to invasive for 24% (n = 9) of lesions. CAD has demonstrated the potential to detect mammographically visible cancers in multiple standard mammographic projections in all categories of lesions in this study cohort. (orig.)

  13. Usefulness of Cone-Beam Computed Tomography and Automatic Vessel Detection Software in Emergency Transarterial Embolization

    Carrafiello, Gianpaolo, E-mail: gcarraf@gmail.com; Ierardi, Anna Maria, E-mail: amierardi@yahoo.it; Duka, Ejona, E-mail: ejonaduka@hotmail.com [Insubria University, Department of Radiology, Interventional Radiology (Italy); Radaelli, Alessandro, E-mail: alessandro.radaelli@philips.com [Philips Healthcare (Netherlands); Floridi, Chiara, E-mail: chiara.floridi@gmail.com [Insubria University, Department of Radiology, Interventional Radiology (Italy); Bacuzzi, Alessandro, E-mail: alessandro.bacuzzi@ospedale.varese.it [University of Insubria, Anaesthesia and Palliative Care (Italy); Bucourt, Maximilian de, E-mail: maximilian.de-bucourt@charite.de [Charité - University Medicine Berlin, Department of Radiology (Germany); Marchi, Giuseppe De, E-mail: giuseppedemarchi@email.it [Insubria University, Department of Radiology, Interventional Radiology (Italy)

    2016-04-15

    Background: This study was designed to evaluate the utility of dual phase cone beam computed tomography (DP-CBCT) and automatic vessel detection (AVD) software to guide transarterial embolization (TAE) of angiographically challenging arterial bleedings in emergency settings. Methods: Twenty patients with an arterial bleeding at computed tomography angiography and an inconclusive identification of the bleeding vessel at the initial 2D angiographic series were included. Accuracy of DP-CBCT and AVD software were defined as the ability to detect the bleeding site and the culprit arterial bleeder, respectively. Technical success was defined as the correct positioning of the microcatheter using AVD software. Clinical success was defined as successful embolization. Total volume of iodinated contrast medium and overall procedure time were registered. Results: The bleeding site was not detected by the initial angiogram in 20% of cases, while the impossibility of identifying the bleeding vessel was the reason for inclusion in the remaining cases. The bleeding site was detected by DP-CBCT in 19 of 20 (95%) patients; in one case CBCT-CT fusion was required. AVD software identified the culprit arterial branch in 18 of 20 (90%) cases. In two cases, vessel tracking required manual marking of the candidate arterial bleeder. Technical success was 95%. Successful embolization was achieved in all patients. Mean contrast volume injected for each patient was 77.5 ml, and mean overall procedural time was 50 min. Conclusions: C-arm CBCT and AVD software during TAE of angiographically challenging arterial bleedings is feasible and may facilitate successful embolization. Staff training in CBCT imaging and software manipulation is necessary.

  14. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution can not easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators and experimentation was also performed with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA and PSO optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful problem specific parameter sets.
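
    A plain global-best PSO over box-bounded MSER parameters looks as follows; the paper's additions (velocity decay, the "execution best" particle, the attractive-force term) are deliberately left out, and the fitness function is supplied by the caller, e.g. the overlap of detected regions with annotated stop signs.

        import numpy as np

        def pso(fitness, bounds, n_particles=20, iters=50,
                w=0.7, c1=1.5, c2=1.5):
            # bounds: list of (low, high) per MSER parameter, e.g. delta,
            # min_area, max_area. fitness(params) returns a score to maximize.
            lo, hi = np.array(bounds, dtype=float).T
            pos = lo + np.random.rand(n_particles, len(lo)) * (hi - lo)
            vel = np.zeros_like(pos)
            pbest = pos.copy()
            pbest_f = np.array([fitness(p) for p in pos])
            gbest = pbest[pbest_f.argmax()].copy()
            for _ in range(iters):
                r1, r2 = np.random.rand(2, *pos.shape)
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo, hi)
                f = np.array([fitness(p) for p in pos])
                better = f > pbest_f
                pbest[better], pbest_f[better] = pos[better], f[better]
                gbest = pbest[pbest_f.argmax()].copy()
            return gbest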

  15. Computer aided detection of brain micro-bleeds in traumatic brain injury

    van den Heuvel, T. L. A.; Ghafoorian, M.; van der Eerden, A. W.; Goraj, B. M.; Andriessen, T. M. J. C.; ter Haar Romeny, B. M.; Platel, B.

    2015-03-01

    Brain micro-bleeds (BMBs) are used as surrogate markers for detecting diffuse axonal injury in traumatic brain injury (TBI) patients. The location and number of BMBs have been shown to influence the long-term outcome of TBI. To further study the importance of BMBs for prognosis, accurate localization and quantification are required. The task of annotating BMBs is laborious, complex and prone to error, resulting in a high inter- and intra-reader variability. In this paper we propose a computer-aided detection (CAD) system to automatically detect BMBs in MRI scans of moderate to severe neuro-trauma patients. Our method consists of four steps. Step one: preprocessing of the data. Both susceptibility (SWI) and T1 weighted MRI scans are used. The images are co-registered, a brain-mask is generated, the bias field is corrected, and the image intensities are normalized. Step two: initial candidates for BMBs are selected as local minima in the processed SWI scans. Step three: feature extraction. BMBs appear as round or ovoid signal hypo-intensities on SWI. Twelve features are computed to capture these properties of a BMB. Step four: Classification. To identify BMBs from the set of local minima using their features, different classifiers are trained on a database of 33 expert annotated scans and 18 healthy subjects with no BMBs. Our system uses a leave-one-out strategy to analyze its performance. With a sensitivity of 90% and 1.3 false positives per BMB, our CAD system shows superior results compared to state-of-the-art BMB detection algorithms (developed for non-trauma patients).
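
    Step two of the pipeline (candidate selection as local minima) is easy to illustrate; the neighbourhood size and the hypo-intensity percentile below are assumptions, and the feature-extraction and classification steps are not shown.

        import numpy as np
        from scipy import ndimage

        def bmb_candidates(swi, brain_mask, size=5, pct=10):
            # Candidate micro-bleeds: voxels inside the brain mask that are
            # local minima of the SWI volume and hypo-intense overall.
            local_min = swi == ndimage.minimum_filter(swi, size=size)
            dark = swi < np.percentile(swi[brain_mask], pct)
            return np.argwhere(local_min & dark & brain_mask)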

  16. Comparison of computer workstation with light box for detecting setup errors from portal images

    Boxwala, Aziz A.; Chaney, Edward L.; Fritsch, Daniel S.; Raghavan, Suraj; Coffey, Christopher S.; Major, Stacey A.; Muller, Keith E.

    1999-01-01

    Purpose: Observer studies were conducted to test the hypothesis that radiation oncologists using a computer workstation for portal image analysis can detect setup errors at least as accurately as when following standard clinical practice of inspecting portal films on a light box. Methods and Materials: In a controlled observer study, nine radiation oncologists used a computer workstation, called PortFolio, to detect setup errors in 40 realistic digitally reconstructed portal radiograph (DRPR) images. PortFolio is a prototype workstation for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools for image enhancement; alignment of crosshairs, field edges, and anatomic structures on reference and acquired images; measurement of distances and angles; and viewing registered images superimposed on one another. The test DRPRs contained known in-plane translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Test images used in the study were also printed on film for observers to view on a light box and interpret using standard clinical practice. The mean accuracy for error detection for each approach was measured and the results were compared using repeated measures analysis of variance (ANOVA) with the Geisser-Greenhouse test statistic. Results: The results indicate that radiation oncologists participating in this study could detect and quantify in-plane rotation and translation errors more accurately with PortFolio compared to standard clinical practice. Conclusions: Based on the results of this limited study, it is reasonable to conclude that workstations similar to PortFolio can be used efficaciously in clinical practice

  17. Designing of a Computer Software for Detection of Approximal Caries in Posterior Teeth

    Valizadeh, Solmaz; Goodini, Mostafa; Ehsani, Sara; Mohseni, Hadis; Azimi, Fateme; Bakhshandeh, Hooman

    2015-01-01

    Radiographs, as an adjunct to clinical examination, are always valuable complementary methods for dental caries detection. Recently, progress in digital imaging systems has provided the possibility of designing software for automatic dental caries detection. The aim of this study was to develop and assess the function of diagnostic computer software designed for the evaluation of approximal caries in posterior teeth. This software should be able to indicate the depth and location of caries on digital radiographic images. Digital radiographs were obtained of 93 teeth including 183 proximal surfaces. These images were used as a database for designing the software and training the software designer. In the design phase, considering the summed density of pixels in rows and columns of the images, the teeth were separated from each other and the unnecessary regions, for example, the root area in the alveolar bone, were eliminated. Therefore, based on summed intensities, each image was segmented such that each segment contained only one tooth. Subsequently, based on fuzzy logic, a well-known data-clustering algorithm named fuzzy c-means (FCM) was applied to the images to cluster or segment each tooth. This algorithm is referred to as a soft clustering method, which assigns data elements to one or more clusters with a specific membership function. Using the extracted clusters, the tooth border was determined and assessed for cavities. The results of histological analysis were used as the gold standard for comparison with the results obtained from the software. The depth of caries was measured, and finally the Intraclass Correlation Coefficient (ICC) and a Bland-Altman plot were used to show the agreement between the methods. The software diagnosed 60% of enamel caries. The ICC (for detection of enamel caries) between the computer software and histological analysis results was determined as 0.609 (95% confidence interval [CI] = 0.159-0.849) (P = 0.006). Also, the computer program diagnosed 97% of
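
    Since the record names FCM explicitly, a compact NumPy implementation of the standard algorithm is given below (the random initialization, stopping rule, and fuzzifier m = 2 are conventional choices, not taken from the paper).

        import numpy as np

        def fuzzy_c_means(x, c=2, m=2.0, iters=100, eps=1e-5):
            # x: (n_samples, n_features). Returns cluster centers and the
            # fuzzy membership matrix u of shape (n_samples, c).
            u = np.random.dirichlet(np.ones(c), size=len(x))
            for _ in range(iters):
                um = u ** m
                centers = (um.T @ x) / um.sum(axis=0)[:, None]
                d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
                d = np.fmax(d, 1e-12)
                # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
                u_new = 1.0 / ((d[:, :, None] / d[:, None, :])
                               ** (2.0 / (m - 1.0))).sum(axis=2)
                if np.abs(u_new - u).max() < eps:
                    return centers, u_new
                u = u_new
            return centers, u

        # E.g. cluster a grayscale tooth image by intensity:
        # centers, u = fuzzy_c_means(img.reshape(-1, 1).astype(float), c=3)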

  18. Problems of detection method of coronary arterial stenosis on cineangiograms by computer image processing

    Sugahara, Tetsuo; Yanagihara, Yoshio; Sugimoto, Naozou; Uyama, Chikao; Maeda, Hirofumi.

    1988-01-01

    For the evaluation of coronary arterial stenosis (CAS), detection methods for CAS on coronary cineangiograms by computer image processing were assessed. To evaluate the accuracy of diameter measurement, diameters were measured on vessel model images with resolutions of 30 and 4 μm/pixel using a sum of first and second differentials method. For a 3 mm vessel diameter, the measurement accuracy at resolutions of 30 and 4 μm/pixel was 4.7% and 2.3%, respectively. A threshold method was used for the detection of the arterial wall on the subtraction images. For the detection of CAS, the measurement method for branch segments and the method for determining the radius and the normal vessel diameter were evaluated. A matter of special importance is the method for determining the normal diameter: it introduced error into the measurement of percent stenosis and stenotic length. This showed that the detection of CAS depends not only on the accuracy of vessel diameter measurement but also on the method used to determine the normal diameter. (author)

  19. Comparison of low contrast detectability of computed radiography and screen/film mammography systems

    Noriah Jamal; Kwan Hoong Ng; McLean, D.

    2006-01-01

    The objective of this study was to compare the low contrast detectability of computed radiography (CR) and screen/film (SF) mammography systems. The Nijmegen contrast detail test object (CDMAM type 3.4) was imaged at 28 kV, in automatic exposure control mode, separately on each system. Six medical imaging physicists read each CDMAM phantom image. Contrast detail curves were plotted to compare the low contrast detectability of the CR (soft copy and hard copy) and SF mammography systems. The effects of varying exposure parameters, namely kV, object position inside the breast phantom, and entrance surface exposure (ESE), on the contrast detail curve were also investigated using soft copy CR. The significance of the difference in contrast between CR and SF, and for each exposure parameter, was tested using the non-parametric Kruskal-Wallis test. We found that the low contrast detectability of the CR (soft copy and hard copy) system is not significantly different from that of the SF system (p>0.05, Kruskal-Wallis test). For CR soft copy, no significant relationship (p>0.05, Kruskal-Wallis test) was seen for variation of kV, object position inside the breast phantom, and ESE. This indicates that CR is comparable with SF for the useful detection and visualization of low contrast objects such as small low contrast areas corresponding to breast pathology.

  20. Comparison of low-contrast detectability of computed radiography and screen/film mammography systems

    Noriah Jamal; Kwan-Hoong Ng; McLean, D.

    2008-01-01

    The objective of this study is to compare the low-contrast detectability of computed radiography (CR) and screen/film (SF) mammography systems. The Nijmegen contrast detail test object (CDMAM type 3.4) was imaged at 28 kV in automatic exposure control mode on each system. Six medical imaging physicists read each CDMAM phantom image. Contrast detail curves were plotted to compare the low-contrast detectability of the CR (soft copy and hard copy) and SF mammography systems. The effects of varying exposure parameters, namely kV, object position inside the breast phantom, and entrance surface exposure (ESE), on the contrast detail curve were also investigated using soft copy CR. The significance of the difference in contrast between CR and SF, and for each exposure parameter, was tested using the non-parametric Kruskal-Wallis test. The low-contrast detectability of the CR (soft copy and hard copy) system was found to be not significantly different from that of the SF system (p>0.05, Kruskal-Wallis test). For CR soft copy, no significant relationship (p>0.05, Kruskal-Wallis test) was seen for variation of kV, object position inside the breast phantom, or ESE. This indicates that CR is comparable with SF for useful detection and visualization of low-contrast objects such as small low-contrast areas corresponding to breast pathology. (Author)
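    In sketch form, the reported statistical comparison is a Kruskal-Wallis test across reader scores per modality. The readings below are invented for illustration; only the shape of the analysis follows the abstract.

```python
from scipy.stats import kruskal

# Hypothetical threshold-contrast readings (lower = better detectability)
# from six readers for each presentation mode; values are illustrative only.
cr_soft = [1.10, 1.05, 1.20, 1.15, 1.08, 1.12]
cr_hard = [1.12, 1.09, 1.22, 1.18, 1.11, 1.14]
sf      = [1.08, 1.07, 1.19, 1.16, 1.10, 1.13]

stat, p = kruskal(cr_soft, cr_hard, sf)
print(f"H = {stat:.3f}, p = {p:.3f}")   # p > 0.05 would mirror the reported finding
```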

  1. Accuracy of digital periapical radiography and cone-beam computed tomography in detecting external root resorption

    Creanga, Adriana Gabriela [Division of Dental Diagnostic Science, Rutgers School of Dental Medicine, Newark (United States); Geha, Hassem; Sankar, Vidya; Mcmahan, Clyde Alex; Noujeim, Marcel [University of Texas Health Science Center San Antonio, San Antonio (United States); Teixeira, Fabrico B. [Dept. of Endodontics, University of Iowa, Iowa City (United States)

    2015-09-15

    The purpose of this study was to evaluate and compare the efficacy of cone-beam computed tomography (CBCT) and digital intraoral radiography in diagnosing simulated small external root resorption cavities. Cavities were drilled in 159 roots using a small spherical bur at different root levels and on all surfaces. The teeth were imaged both with intraoral digital radiography using image plates and with CBCT. Two sets of intraoral images were acquired per tooth: orthogonal (PA), the conventional periapical radiograph, and mesioangulated (SET). Four readers were asked to rate their confidence level in detecting and locating the lesions. Receiver operating characteristic (ROC) analysis was performed to assess the accuracy of each modality in detecting the presence of lesions, the affected surface, and the affected level. Analysis of variance was used to compare the results, and kappa analysis was used to evaluate interobserver agreement. A significant difference in the area under the ROC curves was found among the three modalities (P=0.0002), with CBCT (0.81) having a significantly higher value than PA (0.71) or SET (0.71). PA was slightly more accurate than SET, but the difference was not statistically significant. CBCT was also superior in locating the affected surface and level. CBCT has already proven its superiority in detecting multiple dental conditions, and this study shows it to likewise be superior in detecting and locating incipient external root resorption.
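    A minimal sketch of the per-modality ROC comparison, using scikit-learn's roc_auc_score on reader confidence ratings. The labels and scores below are invented for illustration, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical ground truth (1 = cavity present) and reader confidence
# ratings (0-100) for two modalities; values are illustrative only.
truth       = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
scores_cbct = np.array([90, 20, 75, 80, 30, 15, 85, 40, 70, 25])
scores_pa   = np.array([70, 35, 55, 60, 45, 30, 65, 50, 40, 38])

print("CBCT AUC:", roc_auc_score(truth, scores_cbct))
print("PA   AUC:", roc_auc_score(truth, scores_pa))
```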

  2. Agreement between ultrasonography and computed tomography in detecting intracranial calcifications in congenital toxoplasmosis

    Lago, E.G. [Department of Pediatrics, Pontificia Universidade Catolica do Rio Grande do Sul School of Medicine, Sao Lucas Hospital, Porto Alegre (Brazil)], E-mail: eglago@pucrs.br; Baldisserotto, M.; Hoefel Filho, J.R.; Santiago, D. [Department of Radiology, Pontificia Universidade Catolica do Rio Grande do Sul School of Medicine, Sao Lucas Hospital, Porto Alegre (Brazil); Jungblut, R. [Department of Pediatrics, Pontificia Universidade Catolica do Rio Grande do Sul School of Medicine, Sao Lucas Hospital, Porto Alegre (Brazil)

    2007-10-15

    Aim: To evaluate the agreement between ultrasound (US) and computed tomography (CT) in detecting intracranial calcification in infants with congenital toxoplasmosis. Materials and methods: Forty-four infants referred for investigation of congenital toxoplasmosis were prospectively evaluated, and the diagnosis was confirmed or ruled out by serological testing and by follow-up in the first year of life. The investigation protocol included cranial US and cranial CT, and examinations were conducted and interpreted by two radiologists blinded to the results of the other imaging test and to the diagnostic confirmation. Results: The diagnosis of congenital toxoplasmosis was confirmed in 33 patients, and agreement between US and CT findings was found in 31 of these cases. Both methods detected calcifications in 18 patients, and neither detected calcifications in 13 patients. Overall agreement was 94% and the kappa coefficient was 0.88 (95% confidence interval: 0.71, 1; p < 0.001), which revealed almost perfect agreement between the two diagnostic methods. Conclusion: In this study, US and CT demonstrated equal sensitivity in the detection of intracranial calcification in infants with congenital toxoplasmosis.
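    The reported agreement statistic can be reproduced, in sketch form, from the counts given in the abstract: 18 cases with calcifications on both US and CT, 13 with neither, and 2 discordant (their 1/1 split below is an assumption). The print statement yields 0.88, matching the reported kappa.

```python
def cohens_kappa(table):
    """Cohen's kappa from a 2x2 agreement table:
    table[i][j] = cases rated i by method A and j by method B."""
    total = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(2)) / total                 # observed agreement
    pa = [(table[i][0] + table[i][1]) / total for i in range(2)]    # A's marginals
    pb = [(table[0][j] + table[1][j]) / total for j in range(2)]    # B's marginals
    pe = sum(pa[i] * pb[i] for i in range(2))                       # chance agreement
    return (po - pe) / (1 - pe)

# Both detect calcifications: 18; neither: 13; discordant: 2 (split 1/1).
print(round(cohens_kappa([[18, 1], [1, 13]]), 2))
```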

  3. Comparison of digital tomosynthesis and computed tomography for lung nodule detection in SOS screening program.

    Grosso, Maurizio; Priotto, Roberto; Ghirardo, Donatella; Talenti, Alberto; Roberto, Emanuele; Bertolaccini, Luca; Terzi, Alberto; Chauvie, Stéphane

    2017-08-01

    To compare lung nodule detection by digital tomosynthesis (DTS) and computed tomography (CT) in the context of the SOS (Studio OSservazionale) prospective lung cancer screening program. One hundred and thirty-two of the 1843 subjects enrolled in the SOS study underwent CT because DTS showed non-calcified nodules with diameters larger than 5 mm and/or multiple nodules. Two expert radiologists reviewed the examinations, classifying the nodules by radiological appearance and size. The LUNG-RADS classification was applied to compare receiver operating characteristic curves between CT and DTS with respect to the final diagnosis, with CT as the gold standard. DTS and CT detected 208 and 179 nodules in the 132 subjects, respectively. Of these 208 nodules, 189 (91%) were solid, partially solid, or ground-glass opacities. CT confirmed 140/189 (74%) of these nodules but found 4 nodules that were not detected by DTS. DTS and CT were concordant in 62% of cases on the 5-point LUNG-RADS scale; concordance rose to 86% on a suspicious/non-suspicious binary scale. The areas under the receiver operating characteristic curves were 0.89 (95% CI 0.83-0.94) for CT and 0.80 (95% CI 0.72-0.89) for DTS. The mean effective dose was 0.09 ± 0.04 mSv for DTS and 4.90 ± 1.20 mSv for CT. Using a common classification for nodule detection in DTS and CT aids comparison of the two technologies. DTS detected and correctly classified 74% of the nodules seen on CT but missed 4 nodules identified by CT. Concordance between DTS and CT rose to 86% when LUNG-RADS was considered on a binary scale.

  4. Computer-assisted detection of pulmonary embolism: performance evaluation in consensus with experienced and inexperienced chest radiologists

    Engelke, Christoph; Marten, Katharina; Schmidt, Stephan; Auer, Florian; Bakai, Annemarie

    2008-01-01

    The value of a computer-aided detection (CAD) tool as a second reader, in combination with experienced and inexperienced radiologists, for the diagnosis of acute pulmonary embolism (PE) was assessed prospectively. Computed tomographic angiography (CTA) scans (64 x 0.6 mm collimation; 61.4 mm/rot table feed) of 56 patients (31 women; 34-89 years, mean = 66 years) with suspected PE were analysed by two experienced (R1, R2) and two inexperienced (R3, R4) radiologists for the presence and distribution of emboli using a five-point confidence rating, and by CAD. Informed consent was obtained from all patients. Results were compared with an independent reference standard. Inter-observer agreement was calculated by kappa; confidence was assessed by ROC analysis. A total of 1,116 emboli [within mediastinal (n = 72), lobar (n = 133), segmental (n = 465) and subsegmental arteries (n = 455)] were included. CAD detected 343 emboli (sensitivity = 30.74%; correct-positive rate = 6.13/patient; false-positive rate = 4.1/patient). Inter-observer agreement was good (R1, R2: κ = 0.84, 95% CI = 0.81-0.87; R3, R4: κ = 0.79, 95% CI = 0.76-0.81). Extended inter-observer agreement was higher in mediastinal and lobar than in segmental and subsegmental arteries (κ = 0.84-0.86 and κ = 0.51-0.58, respectively; P < 0.05). Inexperienced readers in particular benefit from consensus with CAD data, greatly improving detection of segmental and subsegmental emboli. This system is advocated as a second reader. (orig.)

  5. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System.

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-02-20

    In this paper, an improved azimuth angle estimation method using a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly intended for active sonar detection systems. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulations and lake experiments indicate that this method can estimate the azimuth angle with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance; moreover, it does not require complex operations in the frequency domain, reducing computational complexity.

  6. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System

    Anbang Zhao

    2017-02-01

    In this paper, an improved azimuth angle estimation method using a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly intended for active sonar detection systems. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulations and lake experiments indicate that this method can estimate the azimuth angle with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance; moreover, it does not require complex operations in the frequency domain, reducing computational complexity.
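    A minimal sketch of the underlying idea, assuming the usual plane-wave AVS model (vx = p·cosθ, vy = p·sinθ): matched-filter each channel with the transmitted replica, then take the azimuth from the averaged p·vx and p·vy intensity terms around the echo peak. This is a generic illustration of the principle, not the authors' algorithm; all names and values are assumptions.

```python
import numpy as np

def azimuth_from_avs(p, vx, vy, replica):
    """Azimuth estimate from one AVS: matched-filter each channel with the
    transmitted replica, then form averaged p*vx and p*vy intensity terms
    in a small window around the matched-filter peak."""
    mf = lambda s: np.convolve(s, replica[::-1], mode="same")   # matched filter
    mp, mx, my = mf(p), mf(vx), mf(vy)
    k = np.argmax(np.abs(mp))                    # strongest echo sample
    w = slice(max(k - 32, 0), k + 32)            # window around the peak
    ix = np.sum(mp[w] * mx[w])                   # averaged intensity, x component
    iy = np.sum(mp[w] * my[w])                   # averaged intensity, y component
    return np.degrees(np.arctan2(iy, ix))

# Illustrative use: a chirp echo arriving from ~40 degrees in noise
t = np.arange(0, 0.1, 1e-4)
replica = np.sin(2 * np.pi * (500 + 2000 * t) * t)
theta = np.radians(40.0)
noise = lambda: 0.1 * np.random.randn(t.size)
p  = replica + noise()
vx = np.cos(theta) * replica + noise()
vy = np.sin(theta) * replica + noise()
print(round(azimuth_from_avs(p, vx, vy, replica), 1))   # approximately 40
```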

  7. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D tensor invariants for optimal synthetic representation, description and learning of n-dimensional shape/texture, was presented at previous conferences. Improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space are presented here, together with a demo application, and progressive automatic model generation is discussed. GEOGINE can serve as an efficient computational kernel for fast, reliable application development and delivery, mainly in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval, and data mining. An ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world object. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" for reliably computing optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape-invariant characteristics. Consequently, any highly sophisticated application needing effective, robust capture and parameterization of geometric/colour-invariant object attributes, for reliable automated object learning and discrimination, can benefit greatly from the GEOGINE progressive automated model generation computational kernel. Main operational advantages over previous

  8. Computer aided detection system for Osteoporosis using low dose thoracic 3D CT images

    Tsuji, Daisuke; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Harada, Masafumi; Kusumoto, Masahiko; Tsuchida, Takaaki; Eguchi, Kenji; Kaneko, Masahiro

    2018-02-01

    About 13 million people in Japan have osteoporosis, making it a major health problem in an aging society. Early-stage detection and treatment are necessary to prevent osteoporosis. Multi-slice CT technology has been improving three-dimensional (3D) image analysis, offering higher resolution and shorter scan times. 3D image analysis of the thoracic vertebrae can be used to support the diagnosis of osteoporosis, and the same scans can be used for lung cancer detection. We developed a method for detecting osteoporosis based on shape analysis and the CT values of spongy bone. Osteoporosis and lung cancer screening showed a high extraction rate on the thoracic vertebral evaluation CT images. In addition, we created standard patterns of CT values per thoracic vertebra for male age groups using 298 low-dose datasets.

  9. Performance improvement of multi-class detection using greedy algorithm for Viola-Jones cascade selection

    Tereshin, Alexander A.; Usilin, Sergey A.; Arlazarov, Vladimir V.

    2018-04-01

    This paper studies the problem of multi-class object detection in a video stream with Viola-Jones cascades. An adaptive algorithm for selecting a Viola-Jones cascade, based on a greedy choice strategy for the N-armed bandit problem, is proposed. The efficiency of the algorithm is demonstrated on the problem of detecting and recognizing bank card logos in a video stream. The proposed algorithm can be used effectively for document localization and identification, recognition of road scene elements, localization and tracking of elongated objects, and other problems of rigid object detection in heterogeneous data flows. The computational efficiency of the algorithm makes it possible to use it both on personal computers and on mobile devices based on low-power processors.
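    The cascade-selection idea can be sketched as an N-armed bandit. Below is an epsilon-greedy variant; the class name, the reward definition (1 if the chosen cascade fired), and the epsilon value are assumptions, not the paper's exact strategy.

```python
import random

class GreedyCascadeSelector:
    """Epsilon-greedy choice of which Viola-Jones cascade to run first.

    Each cascade is an arm of an N-armed bandit; its value estimate is
    the running mean of a reward (1 if that cascade detected an object)."""

    def __init__(self, n_cascades, epsilon=0.1):
        self.counts = [0] * n_cascades
        self.values = [0.0] * n_cascades
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:               # explore other classes
            return random.randrange(len(self.values))
        return max(range(len(self.values)),              # exploit the best arm
                   key=self.values.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental running mean of rewards for this cascade
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

    On each frame, choose() returns the cascade to try first (e.g., the logo class seen most recently) and update() feeds back whether it fired; the exploration term keeps the other cascades alive when the object class in the stream changes.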

  10. A strategy for improved computational efficiency of the method of anchored distributions

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
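    A toy one-dimensional illustration of the bundling idea as defined above: parametrizations within a chosen similarity radius form a bundle, pool their forward-model runs, and share one likelihood value instead of each requiring its own set of runs. All names, tolerances, and the 1-D similarity rule are hypothetical; MAD itself operates on much richer parametrizations.

```python
import numpy as np

def bundled_likelihood(samples, forward_model, measured,
                       bundle_radius, tol, n_runs=50):
    """Toy 'bundling': group similar 1-D parametrizations, run the
    (stochastic) forward model once per bundle representative, and share
    the resulting likelihood value across all bundle members."""
    samples = np.asarray(samples, dtype=float)
    order = np.argsort(samples)                  # 1-D case: sort, then chunk
    likelihood = np.empty(len(samples))
    i = 0
    while i < len(order):
        j = i                                    # grow bundle while members are similar
        while (j < len(order) and
               samples[order[j]] - samples[order[i]] <= bundle_radius):
            j += 1
        bundle = order[i:j]
        rep = samples[bundle].mean()             # bundle representative
        sims = np.array([forward_model(rep) for _ in range(n_runs)])  # pooled runs
        # shared likelihood: fraction of runs replicating the measurement
        likelihood[bundle] = np.mean(np.abs(sims - measured) < tol)
        i = j
    return likelihood
```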

  11. Confabulation Based Real-time Anomaly Detection for Wide-area Surveillance Using Heterogeneous High Performance Computing Architecture

    2015-06-01

    ...processors including graphic processor units (GPUs) and Intel Xeon Phi processors. Experimental results showed significant speedups, which can enable

  12. Do ergonomics improvements increase computer workers' productivity?: an intervention study in a call centre.

    Smith, Michael J; Bayehi, Antoinette Derjani

    2003-01-15

    This paper examines whether improving physical ergonomics working conditions affects worker productivity in a call centre with computer-intensive work. A field study was conducted at a catalogue retail service organization to explore the impact of ergonomics improvements on worker production. There were three levels of ergonomics interventions, each adding incrementally to the previous one. The first level was ergonomics training for all computer users accompanied by workstation ergonomics analysis leading to specific customized adjustments to better fit each worker (Group C). The second level added specific workstation accessories to improve the worker fit if the ergonomics analysis indicated a need for them (Group B). The third level met Group B requirements plus an improved chair (Group A). Productivity data was gathered from 72 volunteer participants who received ergonomics improvements to their workstations and 370 control subjects working in the same departments. Daily company records of production outputs for each worker were taken before ergonomics intervention (baseline) and 12 months after ergonomics intervention. Productivity improvement from baseline to 12 months post-intervention was examined across all ergonomics conditions combined, and also compared to the control group. The findings showed that worker performance increased for 50% of the ergonomics improvement participants and decreased for 50%. Overall, there was a 4.87% output increase for the ergonomics improvement group as compared to a 3.46% output decrease for the control group. The level of productivity increase varied by the type of the ergonomics improvements with Group C showing the best improvement (9.43%). Even though the average production improved, caution must be used in interpreting the findings since the ergonomics interventions were not successful for one-half of the participants.

  13. Sensitivity of the improved Dutch tube diffusion test for detection of ...

    The sensitivity of the improved two-tube test for detection of antimicrobial residues in Kenyan milk was investigated by comparison with the commercial Delvo test SP. Suspect positive milk samples (n = 244) from five milk collection centers were analyzed with the improved two-tube and the commercial Delvo SP test as per ...

  14. Improved explosive collection and detection with rationally assembled surface sampling materials

    Chouyyok, Wilaiwan; Bays, J. Timothy; Gerasimenko, Aleksandr A.; Cinson, Anthony D.; Ewing, Robert G.; Atkinson, David A.; Addleman, R. Shane

    2016-01-01

    Sampling and detection of trace explosives is a key analytical process in modern transportation safety. In this work we explored some of the fundamental analytical processes for the collection and detection of trace-level explosives on surfaces with the most widely utilized system, thermal desorption IMS. The performance of the standard muslin swipe material was compared with chemically modified fiberglass cloth whose surface was functionalized with phenyl groups. Compared to standard muslin, the phenyl-functionalized fiberglass sampling material showed better analyte release from the sampling material as well as improved response and repeatability over multiple uses of the same swipe. The improved sample release of the functionalized fiberglass swipes resulted in a significant increase in sensitivity. Various physical and chemical properties were systematically explored to determine optimal performance. The results herein have relevance to improving the detection of other explosive compounds and potentially to a wide range of other chemical sampling and field detection challenges.

  15. Improvement in Limit of Detection of Enzymatic Biogas Sensor Utilizing Chromatography Paper for Breath Analysis.

    Motooka, Masanobu; Uno, Shigeyasu

    2018-02-02

    Breath analysis is considered to be an effective method for point-of-care diagnosis due to its noninvasiveness, quickness, and simplicity. Gas sensors for breath analysis require the detection of low-concentration substances. In this paper, we propose that reducing the background current improves the limit of detection of enzymatic biogas sensors utilizing chromatography paper. After clarifying the cause of the background current, we reduced it by improving the fabrication process of the paper-based sensors. Finally, we evaluated the limit of detection of the sensor with sample vapor of ethanol gas. The experiment showed about a 50% reduction in the limit of detection compared to a previously reported sensor. This result suggests the sensor could be applied in diagnosis, for example of diabetes, by further lowering the limit of detection.

  16. Psychomotor Impairment Detection via Finger Interactions with a Computer Keyboard During Natural Typing

    Giancardo, L.; Sánchez-Ferro, A.; Butterworth, I.; Mendoza, C. S.; Hooker, J. M.

    2015-04-01

    Modern digital devices and appliances are capable of monitoring the timing of button presses, or finger interactions in general, with sub-millisecond accuracy. However, the massive amount of high-resolution temporal information that these devices could collect is currently being discarded. Multiple studies have shown that the act of pressing a button triggers well-defined brain areas which are known to be affected by motor-compromised conditions. In this study, we demonstrate that daily interaction with a computer keyboard can be employed as a means to observe and potentially quantify psychomotor impairment. We induced psychomotor impairment via a sleep inertia paradigm in 14 healthy subjects, which our classifier detects with an area under the ROC curve (AUC) of 0.93/0.91. The detection relies on novel features derived from key-hold times acquired on standard computer keyboards during an uncontrolled typing task. These features correlate with the progression to psychomotor impairment (p < 0.001) regardless of the content and language of the text typed, and perform consistently with different keyboards. The ability to acquire longitudinal measurements of subtle motor changes from a digital device without altering its functionality may allow for early screening and follow-up of motor-compromised neurodegenerative conditions, psychological disorders, or intoxication at a negligible cost in the general population.
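    A sketch of the kind of summary features one could derive from key-hold times; the feature set below is illustrative, not the paper's novel features.

```python
import numpy as np

def key_hold_features(hold_times_ms):
    """Summary features of key-hold times (press-to-release intervals)
    from one typing session; the feature set here is illustrative."""
    h = np.asarray(hold_times_ms, dtype=float)
    return {
        "median": np.median(h),
        "iqr": np.percentile(h, 75) - np.percentile(h, 25),   # spread
        "cv": h.std() / h.mean(),                             # relative variability
        "p95": np.percentile(h, 95),                          # slow-tail behaviour
    }

# Features from labelled alert vs. sleep-inertia sessions could then be
# fed to any standard classifier and scored with ROC AUC, as in the study.
```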

  17. Automated detection of lung nodules in low-dose computed tomography

    Cascio, D.; Cheran, S.C.; Chincarini, A.; De Nunzio, G.; Delogu, P.; Fantacci, M.E.; Gargano, G.; Gori, I.; Retico, A.; Masala, G.L.; Preite Martinez, A.; Santoro, M.; Spinelli, C.; Tarantino, T.

    2007-01-01

    A computer-aided detection (CAD) system for the identification of pulmonary nodules in low-dose multi-detector computed tomography (CT) images has been developed in the framework of the MAGIC-5 Italian project. One of the main goals of this project is to build a distributed database of lung CT scans in order to enable automated image analysis through a data and CPU GRID infrastructure. The basic modules of our lung-CAD system, consisting of a 3D dot-enhancement filter for nodule detection and a neural classifier for false-positive reduction, are described. The system was designed and tested for both internal and sub-pleural nodules. The database used in this study consists of 17 low-dose CT scans reconstructed with thin slice thickness (≈300 slices/scan). The preliminary results are shown in terms of FROC analysis, reporting a good sensitivity (85% range) for both internal and sub-pleural nodules at an acceptable level of false-positive findings (1-9 FP/scan); the sensitivity remains very high (75% range) even at 1-6 FP/scan. (orig.)
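    The 3D dot-enhancement idea can be sketched as a single-scale Hessian-eigenvalue blob filter, a common formulation in the nodule-CAD literature; the project's actual filter and parameters may differ, and the response formula below is one standard choice.

```python
import numpy as np
from scipy import ndimage

def dot_enhancement_3d(volume, sigma=2.0):
    """Single-scale Hessian dot filter: responds to bright blob-like
    structures (nodules), suppresses line-like structures (vessels)."""
    v = ndimage.gaussian_filter(volume.astype(float), sigma)
    grads = np.gradient(v)
    hessian = np.empty(v.shape + (3, 3))        # per-voxel 3x3 Hessian
    for i in range(3):
        for j in range(3):
            hessian[..., i, j] = np.gradient(grads[i], axis=j)
    lam = np.linalg.eigvalsh(hessian)           # eigenvalues, ascending
    idx = np.argsort(-np.abs(lam), axis=-1)     # reorder: |l1| >= |l2| >= |l3|
    lam = np.take_along_axis(lam, idx, axis=-1)
    l1, l3 = lam[..., 0], lam[..., 2]
    # dot response |l3|^2/|l1| where all eigenvalues are negative (bright blob)
    return np.where((lam < 0).all(axis=-1),
                    l3 ** 2 / (np.abs(l1) + 1e-12), 0.0)
```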

  18. A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance.

    von Trapp, Gardiner; Buran, Bradley N; Sen, Kamal; Semple, Malcolm N; Sanes, Dan H

    2016-10-26

    The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability
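    In signal-detection terms, the argument reduces to a d-prime over spike counts: with mean rates fixed, halving the trial-to-trial standard deviation roughly doubles sensitivity. A minimal sketch with illustrative distributions only:

```python
import numpy as np

def neural_dprime(stim_counts, catch_counts):
    """Unit sensitivity: separation of spike-count distributions for
    stimulus vs. no-stimulus trials, in pooled-standard-deviation units.
    Lower variability raises d-prime even at unchanged mean rates."""
    s = np.asarray(stim_counts, dtype=float)
    c = np.asarray(catch_counts, dtype=float)
    pooled_sd = np.sqrt((s.var(ddof=1) + c.var(ddof=1)) / 2)
    return (s.mean() - c.mean()) / pooled_sd

# Same mean rates, halved variability -> roughly doubled d-prime
rng = np.random.default_rng(0)
quiet = neural_dprime(rng.normal(12, 4, 200), rng.normal(10, 4, 200))
task  = neural_dprime(rng.normal(12, 2, 200), rng.normal(10, 2, 200))
print(round(quiet, 2), round(task, 2))
```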

  19. Improved CT-detection of acute bowel ischemia using frequency selective non-linear image blending.

    Schneeweiss, Sven; Esser, Michael; Thaiss, Wolfgang; Boesmueller, Hans; Ditt, Hendrik; Nikolau, Konstantin; Horger, Marius

    2017-07-01

    Computed tomography (CT), as a fast and reliable diagnostic technique, is the imaging modality of choice for acute bowel ischemia. However, diagnosis is often difficult, mainly because of the low attenuation differences between ischemic and perfused segments. The aim was to compare the diagnostic efficacy of a new post-processing tool based on frequency-selective non-linear blending with that of conventional linearly blended contrast-enhanced CT (CECT) images for the detection of bowel ischemia. Twenty-seven consecutive patients (19 women; mean age = 73.7 years, age range = 50-94 years) with acute bowel ischemia were scanned using multidetector CT (120 kV; 100-200 mAs). Pre-contrast and portal venous scans (65-70 s delay) were acquired. All patients underwent surgery for acute bowel ischemia, and the intraoperative diagnosis together with histologic evaluation of the explanted bowel segments was considered the gold standard. Two radiologists first read the conventional CECT images, with linear blending adapted for optimal contrast, and three weeks later the frequency-selective non-linear blending (F-NLB) images. Attenuation values were compared in both involved and non-involved bowel segments, creating ratios between unenhanced and CECT images. The mean attenuation difference between ischemic and non-ischemic wall in the portal venous scan was 69.54 HU (reader 2 = 69.01 HU) higher for F-NLB than for conventional CECT. The attenuation ratio between contrast-enhanced and pre-contrast CT data for the non-ischemic walls was also significantly higher for F-NLB (CECT: reader 1 = 2.11, reader 2 = 3.36; F-NLB: reader 1 = 4.46, reader 2 = 4.98). Sensitivity in detecting ischemic areas increased significantly for both readers using F-NLB (CECT: reader 1/2 = 53%/65% versus F-NLB: reader 1/2 = 62%/75%). Frequency-selective non-linear blending improves detection of bowel ischemia compared with conventional CECT by increasing
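    The vendor implementation of F-NLB is not described in the abstract, but the general idea of frequency-selective non-linear blending can be sketched: expand the contrast of the low-spatial-frequency band around a chosen attenuation level with a sigmoid-like curve, then add the detail band back unchanged. Every parameter value and name below is an invented illustration, not the vendor's algorithm.

```python
import numpy as np
from scipy import ndimage

def fs_nonlinear_blend(image_hu, center=90.0, width=40.0, boost=2.0, sigma=3.0):
    """Generic frequency-selective non-linear blending sketch.

    The low-spatial-frequency band is passed through a tanh curve centred
    on `center` HU, so attenuation differences near that level are expanded
    by roughly `boost`; the high-frequency band (edges, texture) is added
    back unchanged. All parameters are illustrative."""
    low = ndimage.gaussian_filter(image_hu.astype(float), sigma)   # low-pass band
    high = image_hu - low                                          # detail band
    stretched = center + width * boost * np.tanh((low - center) / width)
    return stretched + high
```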

  20. Hepatic changes caused by exposure to telecobalt rays as detected by scintigraphy and computed tomography

    Lueth, I.

    1987-01-01

    Hepatic scintiscans obtained in a cohort of 111 patients subjected to partial irradiation of the liver using telecobalt showed low-density spots in 53 of those individuals. Comparative assessments in a control group showed the liver's accumulation behaviour to be unrelated to factors such as age, sex, and administered dose. The liver is only to a very limited extent capable of recovering from radiation damage severe enough to be detected by scintigraphy or computed tomography: in the group examined here, spontaneous recovery was seen in no more than 7.5% of cases. Long-term plotting of the hepatic radioactivity levels seen in scintigrams showed these to be reduced for periods of up to nine years. Such pathological changes were observed at radiation doses as low as 12 Gy, even though a definite dose-dependency of defective accumulation, as shown by scintigraphy or computed tomography, could not be established. Notably, the losses of activity seen in hepatic scintiscans were not necessarily confirmed by pathological findings revealed at the same time by computed tomography. Liver function tests in the serum permitted no links to be established between the occurrence of low-density spots in the scintiscans or tomograms and typical enzyme patterns suggestive of radiation injury. (orig.)