WorldWideScience

Sample records for semi-automatic image analysis

  1. Evaluation of ventricular dysfunction using semi-automatic longitudinal strain analysis of four-chamber cine MR imaging.

    Science.gov (United States)

    Kawakubo, Masateru; Nagao, Michinobu; Kumazawa, Seiji; Yamasaki, Yuzo; Chishaki, Akiko S; Nakamura, Yasuhiko; Honda, Hiroshi; Morishita, Junji

    2016-02-01

    The aim of this study was to evaluate ventricular dysfunction using longitudinal strain analysis in 4-chamber (4CH) cine MR imaging, and to investigate the agreement between the semi-automatic and manual measurements in the analysis. Fifty-two consecutive patients with ischemic or non-ischemic cardiomyopathy, or repaired tetralogy of Fallot, who underwent cardiac MR examination incorporating cine MR imaging were retrospectively enrolled. The LV and RV longitudinal strain values were obtained both semi-automatically and manually. Receiver operating characteristic (ROC) analysis was performed to determine the optimal cutoff of the minimum longitudinal strain value for the detection of patients with cardiac dysfunction. The correlations between manual and semi-automatic measurements for the LV and RV walls were analyzed by Pearson coefficient analysis. ROC analysis demonstrated that the optimal cutoffs of the minimum longitudinal strain values (εL_min) diagnosed LV and RV dysfunction with high accuracy (LV εL_min = -7.8 %: area under the curve, 0.89; sensitivity, 83 %; specificity, 91 %; RV εL_min = -15.7 %: area under the curve, 0.82; sensitivity, 92 %; specificity, 68 %). Excellent correlations between manual and semi-automatic measurements for the LV and RV free walls were observed (LV, r = 0.97, p < ...). Semi-automatic longitudinal strain analysis of 4CH cine MR imaging can evaluate LV and RV dysfunction with simple and easy measurements. The strain analysis could have extensive application in cardiac imaging for various clinical cases.
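    The record derives its εL_min cutoffs from ROC analysis but does not state the selection criterion. Below is a minimal sketch of one common choice, Youden's J statistic, assuming dysfunction is indicated by a higher (less negative) minimum longitudinal strain; this criterion is an assumption, not confirmed by the source.

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_strain_cutoff(strain_min, has_dysfunction):
    """Pick the strain cutoff maximizing Youden's J = sensitivity +
    specificity - 1. Youden's criterion is assumed; the record does
    not say how its optimal cutoff was chosen."""
    # Higher (less negative) minimum longitudinal strain is scored
    # as more indicative of dysfunction.
    fpr, tpr, thresholds = roc_curve(has_dysfunction, strain_min)
    j = tpr - fpr                      # Youden's J at each candidate cutoff
    return thresholds[np.argmax(j)]    # strain value maximizing J
```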

  2. Semi-automatic system for UV images analysis of historical musical instruments

    Science.gov (United States)

    Dondi, Piercarlo; Invernizzi, Claudia; Licchelli, Maurizio; Lombardi, Luca; Malagodi, Marco; Rovetta, Tommaso

    2015-06-01

    The selection of representative areas to be analyzed is a common problem in the study of Cultural Heritage items. UV fluorescence photography is an extensively used technique to highlight specific surface features that cannot be observed in visible light (e.g. restored parts or areas treated with different materials), and it proves to be very effective in the study of historical musical instruments. In this work we propose a new semi-automatic solution for selecting areas with the same perceived color (a simple clue of similar materials) on UV photos, using a specifically designed interactive tool. The proposed method works in two steps: (i) the user selects a small rectangular area of the image; (ii) the program automatically highlights all the areas that have the same color as the selected input. The identification is made by analyzing the image in the HSV color model, the one closest to human perception. The achievable result is more accurate than a manual selection, because the tool can also detect points that users do not recognize as similar due to perceptual illusions. The application was developed following usability guidelines, and its human-computer interface was improved after a series of tests performed by expert and non-expert users. All the experiments were performed on UV imagery of the Stradivari violin collection held by the "Museo del Violino" in Cremona.
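    A minimal sketch of the two-step interaction with OpenCV in Python: the mean HSV color of the user's seed rectangle defines a tolerance window, and cv2.inRange produces the highlighted mask. The tolerance values are illustrative assumptions, not the authors' settings, and hue wrap-around is ignored for brevity.

```python
import cv2
import numpy as np

def select_same_color(image_bgr, rect, tol_h=8, tol_s=40, tol_v=40):
    """Highlight all pixels whose HSV color matches the mean color of a
    user-selected rectangle (x, y, w, h). Tolerances are assumptions."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    x, y, w, h = rect
    mean = cv2.mean(hsv[y:y + h, x:x + w])[:3]   # mean H, S, V of the seed patch
    lower = np.array([max(mean[0] - tol_h, 0),
                      max(mean[1] - tol_s, 0),
                      max(mean[2] - tol_v, 0)], dtype=np.uint8)
    upper = np.array([min(mean[0] + tol_h, 179),
                      min(mean[1] + tol_s, 255),
                      min(mean[2] + tol_v, 255)], dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)        # binary mask of matching areas
```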

  3. Semi-automatic image analysis methodology for the segmentation of bubbles and drops in complex dispersions occurring in bioreactors

    Science.gov (United States)

    Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.

    2006-09-01

    Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of the diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm, which was tested in two-, three- and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no statistically significant differences. The method reduced the total processing time for the measurement of bubbles and drops in the different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
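    The authors use an improved Hough transform; as a rough stand-in, OpenCV's plain circular Hough transform can recover approximately circular bubble and drop contours from a pre-smoothed image. The file name and all parameter values below are hypothetical.

```python
import cv2
import numpy as np

# Plain circular Hough transform as a stand-in for the authors'
# improved algorithm; parameters are illustrative only.
img = cv2.imread("dispersion.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
smooth = cv2.medianBlur(img, 5)          # pre-smoothing; Canny runs inside HoughCircles
circles = cv2.HoughCircles(smooth, cv2.HOUGH_GRADIENT, dp=1.2, minDist=10,
                           param1=100, param2=30, minRadius=3, maxRadius=60)
if circles is not None:
    diameters_px = 2 * circles[0, :, 2]  # radii are the third column
    print("mean diameter [px]:", float(np.mean(diameters_px)))
```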

  4. Semi-automatic measures of activity in selected south polar regions of Mars using morphological image analysis

    Science.gov (United States)

    Aye, Klaus-Michael; Portyankina, Ganna; Pommerol, Antoine; Thomas, Nicolas

    We present results of these semi-automatically determined seasonal fan count evolutions for the Inca City, Ithaca and Manhattan ROIs, compare these evolutionary patterns with each other, and compare them with the surface reflectance evolutions from both HiRISE and CRISM for the same locations. References: Aye, K.-M. et al. (2010), LPSC 2010, 2707; Hansen, C. et al. (2010), Icarus 205(1), 283-295; Kieffer, H.H. (2007), JGR 112; Portyankina, G. et al. (2010), Icarus 205(1), 311-320; Thomas, N. et al. (2009), EPSC2009-478, Vol. 4.

  5. Semi-automatic motion compensation of contrast-enhanced ultrasound images from abdominal organs for perfusion analysis

    Czech Academy of Sciences Publication Activity Database

    Schafer, S.; Nylund, K.; Saevik, F.; Engjom, T.; Mézl, M.; Jiřík, Radovan; Dimcevski, G.; Gilja, O.H.; Tönnies, K.

    2015-01-01

    Vol. 63, Aug 1 (2015), pp. 229-237. ISSN 0010-4825. R&D Projects: GA ČR GAP102/12/2380. Institutional support: RVO:68081731. Keywords: ultrasonography; motion analysis; motion compensation; registration; CEUS; contrast-enhanced ultrasound; perfusion; perfusion modeling. Subject RIV: FS - Medical Facilities; Equipment. Impact factor: 1.521, year: 2015

  6. Semi-automatic construction of reference standards for evaluation of image registration

    NARCIS (Netherlands)

    Murphy, K.; Ginneken, van B.; Klein, S.; Staring, M.; Hoop, de B.J.; Viergever, M.A.; Pluim, J.P.W.

    2011-01-01

    Quantitative evaluation of image registration algorithms is a difficult and under-addressed issue, owing to the lack of a reference standard in most registration problems. In this work a method is presented whereby detailed reference standard data may be constructed in an efficient semi-automatic manner.

  7. Semi-Automatic Image Labelling Using Depth Information

    Directory of Open Access Journals (Sweden)

    Mostafa Pordel

    2015-05-01

    Image labeling tools help to extract objects within images to be used as ground truth for learning and testing in object detection processes. The inputs for such tools are usually RGB images. However, with new widely available low-cost sensors like the Microsoft Kinect, it is possible to use depth images in addition to RGB images. Despite many existing powerful tools for image labeling, there is a need for RGB-depth adapted tools. We present a new interactive labeling tool that partially automates image labeling, with two major contributions. First, the method extends the concept of image segmentation from RGB to RGB-depth using Fuzzy C-Means clustering, connected component labeling and superpixels, and generates bounding pixels to extract the desired objects. Second, it minimizes the interaction time needed for object extraction by doing an efficient segmentation in RGB-depth space. Very few clicks are needed for the entire procedure compared to existing tools. When the desired object is the closest object to the camera, which is often the case in robotics applications, no clicks at all are required to accurately extract the object.
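    For the zero-click case described above (the target is the closest object to the camera), a depth threshold followed by connected component labeling is already enough. A sketch under that assumption, with an assumed depth tolerance; the record's full pipeline (Fuzzy C-Means, superpixels) is omitted.

```python
import numpy as np
from scipy import ndimage

def closest_object_mask(depth, tol=0.05):
    """Extract the connected region nearest the camera from a depth
    image in meters; `tol` is an assumed depth tolerance."""
    valid = depth > 0                          # Kinect reports 0 for missing depth
    near = depth[valid].min()                  # nearest measured depth
    candidate = valid & (depth < near + tol)   # pixels within tol of the nearest depth
    labels, n = ndimage.label(candidate)       # connected component labeling
    sizes = ndimage.sum(candidate, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)    # keep the largest nearby component
```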

  8. Semi-automatic watershed medical image segmentation methods for customized cancer radiation treatment planning simulation

    International Nuclear Information System (INIS)

    Kum Oyeon; Kim Hye Kyung; Max, N.

    2007-01-01

    A cancer radiation treatment planning simulation requires image segmentation to define the gross tumor volume, clinical target volume, and planning target volume. Manual segmentation, which is usual in clinical settings, depends on the operator's experience and may, in addition, change with every trial by the same operator. To overcome this difficulty, we developed semi-automatic watershed medical image segmentation tools using both the top-down watershed algorithm in the Insight Segmentation and Registration Toolkit (ITK) and Vincent-Soille's bottom-up watershed algorithm with region merging. We applied our algorithms to segment two- and three-dimensional head phantom CT data and to find the pixel (or voxel) numbers for each segmented area, which are needed for radiation treatment optimization. A semi-automatic method is useful to avoid errors incurred by both human and machine sources, and provides clear and visible information for pedagogical purposes. (orig.)
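    A hedged sketch with OpenCV's marker-based watershed standing in for the ITK and Vincent-Soille implementations used in the record; it also returns per-label pixel counts, the quantity the record needs for treatment-plan optimization.

```python
import cv2
import numpy as np

def watershed_segments(image_bgr, markers):
    """Marker-based watershed; `markers` is an int32 label image seeded
    by the operator (0 = unknown region). Returns the label map and
    per-label pixel counts. A stand-in, not the authors' exact tools."""
    labels = cv2.watershed(image_bgr, markers.copy())   # -1 marks boundaries
    ids, counts = np.unique(labels[labels > 0], return_counts=True)
    return labels, dict(zip(ids.tolist(), counts.tolist()))
```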

  9. Semi-automatic ROI placement system for analysis of brain PET images based on elastic model. Application to diagnosis of Alzheimer's disease

    International Nuclear Information System (INIS)

    Ohyama, Masashi; Mishina, Masahiro; Kitamura, Shin; Katayama, Yasuo; Senda, Michio; Tanizaki, Naoki; Ishii, Kenji

    2000-01-01

    PET with 18F-fluorodeoxyglucose (FDG) is a useful technique to image cerebral glucose metabolism and to detect patients with Alzheimer's disease in the early stage, in which characteristic temporoparietal hypometabolism is visualized. We have developed a new system in which a standard brain ROI atlas made of networks of segments is elastically transformed to match the subject's brain images, so that standard ROIs defined on the segments are placed on the individual brain images and are used to measure radioactivity over each brain region. We applied this method to Alzheimer's disease, using the images of 10 normal subjects (ages 55 +/- 12) and 21 patients clinically diagnosed with Alzheimer's disease (ages 61 +/- 10). The FDG uptake reflecting glucose metabolism was evaluated with the SUV, i.e. decay-corrected radioactivity divided by injected dose per body weight, in (Bq/ml)/(Bq/g). The system worked well in every subject, including those with extensive hypometabolism. Alzheimer patients showed markedly lower uptake in the parietal cortex (4.0-4.1). When the threshold value of FDG uptake in the parietal lobe was set at 5 (Bq/ml)/(Bq/g), we could discriminate the patients with Alzheimer's disease from the normal subjects, with a sensitivity of 86% and a specificity of 90%. This system can assist the diagnosis of FDG images and may be useful for handling data from a large number of subjects, e.g. when PET is applied to health screening. (author)
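    The SUV definition quoted in the record translates directly into code; a one-function sketch:

```python
def suv(tissue_activity_bq_per_ml: float,
        injected_dose_bq: float,
        body_weight_g: float) -> float:
    """Standardised uptake value as defined in the record:
    decay-corrected tissue radioactivity divided by injected dose
    per body weight, in (Bq/ml)/(Bq/g)."""
    return tissue_activity_bq_per_ml / (injected_dose_bq / body_weight_g)
```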

  10. Efficient Semi-Automatic 3D Segmentation for Neuron Tracing in Electron Microscopy Images

    Science.gov (United States)

    Jones, Cory; Liu, Ting; Cohan, Nathaniel Wood; Ellisman, Mark; Tasdizen, Tolga

    2015-01-01

    Background: In the area of connectomics, there is a significant gap between the time required for data acquisition and dense reconstruction of the neural processes contained in the same dataset. Automatic methods are able to eliminate this timing gap, but the state-of-the-art accuracy so far is insufficient for use without user corrections. If completed naively, this process of correction can be tedious and time consuming. New Method: We present a new semi-automatic method that can be used to perform 3D segmentation of neurites in EM image stacks. It utilizes an automatic method that creates a hierarchical structure for recommended merges of superpixels. The user is then guided through each predicted region to quickly identify errors and establish correct links. Results: We tested our method on three datasets with both novice and expert users. Accuracy and timing were compared with published automatic, semi-automatic, and manual results. Comparison with Existing Methods: Post-automatic correction methods have also been used in [1] and [2]. These methods do not provide navigation or suggestions in the manner we present. Other semi-automatic methods require user input prior to the automatic segmentation, such as [3] and [4], and are inherently different from our method. Conclusion: Using this method on the three datasets, novice users achieved accuracy exceeding state-of-the-art automatic results, and expert users achieved accuracy on par with full manual labeling but with a 70% time improvement when compared with other examples in publication. PMID:25769273

  11. Quantitative analysis of the patellofemoral motion pattern using semi-automatic processing of 4D CT data.

    Science.gov (United States)

    Forsberg, Daniel; Lindblom, Maria; Quick, Petter; Gauffin, Håkan

    2016-09-01

    To present a semi-automatic method with minimal user interaction for quantitative analysis of the patellofemoral motion pattern. 4D CT data capturing the patellofemoral motion pattern during continuous flexion and extension were collected for five patients prone to patellar luxation, both pre- and post-surgically. In the proposed method, an observer places landmarks in a single 3D volume, which are then automatically propagated to the other volumes in the time sequence. From the landmarks in each volume, the measures patellar displacement, patellar tilt and angle between femur and tibia were computed. Evaluation of the observer variability showed the proposed semi-automatic method to be favorable over a fully manual counterpart, with an observer variability of approximately 1.5° for the angle between femur and tibia, 1.5 mm for the patellar displacement, and 4.0°-5.0° for the patellar tilt. The proposed method showed that surgery reduced the patellar displacement and tilt at maximum extension by approximately 10-15 mm and 15°-20° for three patients, with less evident differences for the other two patients. A semi-automatic method suitable for quantification of the patellofemoral motion pattern as captured by 4D CT data has been presented. Its observer variability is on par with that of other methods, but with the distinct advantage of supporting continuous motions during image acquisition.
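    As an illustration of the measures computed from the propagated landmarks, the femur-tibia angle reduces to the angle between two landmark-defined axis vectors. A sketch; the choice of landmark pairs is an assumption, not the authors' protocol.

```python
import numpy as np

def joint_angle_deg(femur_axis_pts, tibia_axis_pts):
    """Angle between femur and tibia from two (proximal, distal)
    3D landmark pairs; a simplified stand-in for the record's measure."""
    v1 = np.asarray(femur_axis_pts[1]) - np.asarray(femur_axis_pts[0])
    v2 = np.asarray(tibia_axis_pts[1]) - np.asarray(tibia_axis_pts[0])
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```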

  12. A semi-automatic method for extracting thin line structures in images as rooted tree network

    Energy Technology Data Exchange (ETDEWEB)

    Brazzini, Jacopo [Los Alamos National Laboratory]; Dillard, Scott [Los Alamos National Laboratory]; Soille, Pierre [EC - JRC]

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method, derived from morphological and hydrological concepts, consisting of minimum-cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
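    A sketch of the structure-tensor ingredient: smoothing the gradient outer products and eigen-decomposing the tensor yields a per-pixel orientation and coherence, from which an anisotropic metric like the paper's could be assembled. The smoothing scale is an assumed parameter; the metric design itself is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def structure_tensor_orientation(image, sigma=2.0):
    """Per-pixel dominant orientation and coherence from the
    gradient structure tensor; sigma is an assumed smoothing scale."""
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    jxx = ndimage.gaussian_filter(gx * gx, sigma)
    jxy = ndimage.gaussian_filter(gx * gy, sigma)
    jyy = ndimage.gaussian_filter(gy * gy, sigma)
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)      # dominant orientation
    tmp = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    l1, l2 = (jxx + jyy + tmp) / 2, (jxx + jyy - tmp) / 2  # eigenvalues
    coherence = np.where(l1 + l2 > 0, (l1 - l2) / (l1 + l2 + 1e-12), 0.0)
    return theta, coherence
```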

  13. Semi-automatic breast ultrasound image segmentation based on mean shift and graph cuts.

    Science.gov (United States)

    Zhou, Zhuhuang; Wu, Weiwei; Wu, Shuicai; Tsui, Po-Hsiang; Lin, Chung-Chih; Zhang, Ling; Wang, Tianfu

    2014-10-01

    Computerized tumor segmentation on breast ultrasound (BUS) images remains a challenging task. In this paper, we propose a new method for semi-automatic tumor segmentation on BUS images using Gaussian filtering, histogram equalization, mean shift, and graph cuts. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on an input image. The ROI image was shrunken by a factor of 2 using bicubic interpolation to reduce computation time. The shrunken image was smoothed by a Gaussian filter and then contrast-enhanced by histogram equalization. Next, the enhanced image was filtered by pyramid mean shift to improve homogeneity. The object and background seeds for graph cuts were automatically generated on the filtered image. Using these seeds, the filtered image was then segmented by graph cuts into a binary image containing the object and background. Finally, the binary image was expanded by a factor of 2 using bicubic interpolation, and the expanded image was processed by morphological opening and closing to refine the tumor contour. The method was implemented with OpenCV 2.4.3 and Visual Studio 2010 and tested on 38 BUS images with benign tumors and 31 BUS images with malignant tumors from different ultrasound scanners. Experimental results showed that our method had a true positive (TP) rate of 91.7%, a false positive (FP) rate of 11.9%, and a similarity (SI) rate of 85.6%. The mean run time on an Intel Core 2.66 GHz CPU with 4 GB RAM was 0.49 ± 0.36 s. The experimental results indicate that the proposed method may be useful in BUS image segmentation. © The Author(s) 2014.
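    The pipeline is described step by step, so it can be sketched almost literally with OpenCV. In the sketch below, cv2.grabCut stands in for the authors' automatically seeded graph cut, and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def segment_bus_roi(roi_gray):
    """Sketch of the described pipeline: shrink, Gaussian filter,
    histogram equalization, pyramid mean shift, graph cut (GrabCut
    stand-in), expand, morphological opening and closing."""
    small = cv2.resize(roi_gray, None, fx=0.5, fy=0.5,
                       interpolation=cv2.INTER_CUBIC)       # shrink by factor 2
    smooth = cv2.GaussianBlur(small, (5, 5), 0)
    enh = cv2.equalizeHist(smooth)
    bgr = cv2.cvtColor(enh, cv2.COLOR_GRAY2BGR)
    ms = cv2.pyrMeanShiftFiltering(bgr, sp=10, sr=20)       # improve homogeneity
    mask = np.zeros(ms.shape[:2], np.uint8)
    rect = (2, 2, ms.shape[1] - 4, ms.shape[0] - 4)         # assume tumor inside ROI
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(ms, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    binary = np.where((mask == 1) | (mask == 3), 255, 0).astype(np.uint8)
    binary = cv2.resize(binary, roi_gray.shape[::-1],
                        interpolation=cv2.INTER_CUBIC)      # expand back by factor 2
    binary = (binary > 127).astype(np.uint8) * 255          # re-binarize after resize
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # refined contour
```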

  14. Semi-automatic mapping for identifying complex geobodies in seismic images

    Science.gov (United States)

    Domínguez-C, Raymundo; Romero-Salcedo, Manuel; Velasquillo-Martínez, Luis G.; Shemeretov, Leonid

    2017-03-01

    Seismic images are composed of positive and negative seismic wave traces with different amplitudes (Robein 2010 Seismic Imaging: A Review of the Techniques, their Principles, Merits and Limitations (Houten: EAGE)). The association of these amplitudes with a color palette forms complex visual patterns. The color intensity of such patterns is directly related to impedance contrasts: the higher the contrast, the higher the color intensity. Generally speaking, low impedance contrasts are depicted with low-tone colors, creating zones with different patterns whose features are not evident to the automated 3D mapping options available in commercial software. In this work, a workflow for semi-automatic mapping of seismic images, focused on areas with low-intensity colored zones that may be associated with geobodies of petroleum interest, is proposed. The CIE L*a*b* color space was used to perform the seismic image processing, which helped find small but significant differences between pixel tones. This process generated binary masks that bound color regions to low-intensity colors. The three-dimensional mask projection allowed the construction of 3D structures for such zones (geobodies). The proposed method was applied to a set of digital images from a seismic cube and tested on four representative study cases. The obtained results are encouraging because interesting geobodies are obtained with a minimum of information.
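    A minimal sketch of the masking step in CIE L*a*b*: dark, low-chroma pixels are bounded into a binary mask, the 2D building block of the 3D geobody masks. The threshold values, and the specific definition of "low-intensity color" as low lightness plus low chroma, are assumptions.

```python
import cv2
import numpy as np

def low_intensity_mask(seismic_slice_bgr, l_max=60, ab_tol=12):
    """Binary mask bounding low-intensity-color zones in CIE L*a*b*.
    Thresholds are illustrative assumptions, not the paper's values."""
    lab = cv2.cvtColor(seismic_slice_bgr, cv2.COLOR_BGR2LAB)
    L, a, b = cv2.split(lab)
    near_neutral = (np.abs(a.astype(int) - 128) < ab_tol) & \
                   (np.abs(b.astype(int) - 128) < ab_tol)   # low chroma
    dark = L < l_max                                        # low lightness
    return ((near_neutral & dark) * 255).astype(np.uint8)
```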

  15. Semi-automatic analysis of standard uptake values in serial PET/CT studies in patients with lung cancer and lymphoma

    Directory of Open Access Journals (Sweden)

    Ly John

    2012-04-01

    Background: Changes in maximum standardised uptake values (SUVmax) between serial PET/CT studies are used to determine disease progression or regression in oncologic patients. Measuring these changes manually can be time consuming in clinical routine. A semi-automatic method for calculation of SUVmax in serial PET/CT studies was developed and compared to a conventional manual method. The semi-automatic method first aligns the serial PET/CT studies based on the CT images. Thereafter, the reader selects an abnormal lesion in one of the PET studies. After this manual step, the program automatically detects the corresponding lesion in the other PET study, segments the two lesions, and calculates the SUVmax in both studies as well as the difference between the SUVmax values. The results of the semi-automatic analysis were compared to those of a manual SUVmax analysis using a Philips PET/CT workstation. Three readers did the SUVmax readings with both methods. Sixteen patients with lung cancer or lymphoma who had undergone two PET/CT studies were included, with a total of 26 lesions. Results: Linear regression analysis of changes in SUVmax shows that intercepts and slopes are close to the line of identity for all readers (reader 1: intercept = 1.02, R2 = 0.96; reader 2: intercept = 0.97, R2 = 0.98; reader 3: intercept = 0.99, R2 = 0.98). The manual and semi-automatic methods agreed in all cases on whether SUVmax had increased or decreased between the serial studies. The average time to measure SUVmax changes in two serial PET/CT examinations was four to five times longer for the manual method than for the semi-automatic method for all readers (reader 1: 53.7 vs. 10.5 s; reader 2: 27.3 vs. 6.9 s; reader 3: 47.5 vs. 9.5 s; p < ...). Conclusions: Good agreement was shown in the assessment of SUVmax changes between the manual and semi-automatic methods. The semi-automatic analysis was four to five times faster to perform than the manual analysis.

  16. Segmentation of Multi-Isotope Imaging Mass Spectrometry Data for Semi-Automatic Detection of Regions of Interest

    Science.gov (United States)

    Poczatek, J. Collin; Turck, Christoph W.; Lechene, Claude

    2012-01-01

    Multi-isotope imaging mass spectrometry (MIMS) associates secondary ion mass spectrometry (SIMS) with the detection of several atomic masses, the use of stable isotopes as labels, and affiliated quantitative image-analysis software. By associating image and measurement, MIMS allows one to obtain quantitative information about biological processes in sub-cellular domains. MIMS can be applied to a wide range of biomedical problems, in particular metabolism and cell fate [1], [2], [3]. In order to obtain morphologically pertinent data from MIMS images, we have to define regions of interest (ROIs). ROIs are drawn by hand, a tedious and time-consuming process. We have developed and successfully applied a support vector machine (SVM) for segmentation of MIMS images that allows fast, semi-automatic boundary detection of regions of interest. Using the SVM, high-quality ROIs (as compared to an expert's manual delineation) were obtained for two types of images derived from unrelated data sets. This automation simplifies, accelerates and improves the post-processing analysis of MIMS images. This approach has been integrated into "Open MIMS," an ImageJ plugin for comprehensive analysis of MIMS images that is available online at http://www.nrims.hms.harvard.edu/NRIMS_ImageJ.php. PMID:22347386
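    A sketch of the SVM segmentation idea with scikit-learn: each pixel becomes a feature vector across the co-registered mass images, the expert labels a subset, and the trained classifier labels the rest. The feature layout, label encoding, and kernel choice are assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn import svm

def segment_with_svm(intensity_stack, labels):
    """Pixel-wise SVM segmentation sketch. `intensity_stack` has shape
    (n_masses, H, W); `labels` has shape (H, W) with 1 = ROI,
    0 = background, and -1 marking unlabeled pixels (assumed encoding)."""
    X = intensity_stack.reshape(intensity_stack.shape[0], -1).T  # (pixels, masses)
    y = labels.ravel()
    known = y >= 0                                  # train only on labeled pixels
    clf = svm.SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X[known], y[known])
    return clf.predict(X).reshape(labels.shape)     # full-image segmentation
```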

  17. Semi-automatic delineation using weighted CT-MRI registered images for radiotherapy of nasopharyngeal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Fitton, I. [European Georges Pompidou Hospital, Department of Radiology, 20 rue Leblanc, 75015, Paris (France); Cornelissen, S. A. P. [Image Sciences Institute, UMC, Department of Radiology, P.O. Box 85500, 3508 GA Utrecht (Netherlands); Duppen, J. C.; Rasch, C. R. N.; Herk, M. van [The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Department of Radiotherapy, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Steenbakkers, R. J. H. M. [University Medical Center Groningen, Department of Radiation Oncology, Hanzeplein 1, 9713 GZ Groningen (Netherlands); Peeters, S. T. H. [UZ Gasthuisberg, Herestraat 49, 3000 Leuven, Belgique (Belgium); Hoebers, F. J. P. [Maastricht University Medical Center, Department of Radiation Oncology (MAASTRO clinic), GROW School for Oncology and Development Biology Maastricht, 6229 ET Maastricht (Netherlands); Kaanders, J. H. A. M. [UMC St-Radboud, Department of Radiotherapy, Geert Grooteplein 32, 6525 GA Nijmegen (Netherlands); Nowak, P. J. C. M. [ERASMUS University Medical Center, Department of Radiation Oncology,Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2011-08-15

    Purpose: To develop a delineation tool that refines physician-drawn contours of the gross tumor volume (GTV) in nasopharynx cancer, using combined pixel value information from x-ray computed tomography (CT) and magnetic resonance imaging (MRI) during delineation. Methods: Operator-guided delineation assisted by a so-called "snake" algorithm was applied on weighted CT-MRI registered images. The physician delineates a rough tumor contour that is continuously adjusted by the snake algorithm using the underlying image characteristics. The algorithm was evaluated on five nasopharyngeal cancer patients. Different linear weightings of CT and MRI were tested as input for the snake algorithm and compared according to contrast and tumor-to-noise ratio (TNR). The semi-automatic delineation was compared with manual contouring by seven experienced radiation oncologists. Results: A good compromise for TNR and contrast was obtained by weighting CT twice as strongly as MRI. The new algorithm did not notably reduce interobserver variability; it did, however, reduce the average delineation time by 6 min per case. Conclusions: The authors developed a user-driven tool for delineation and correction based on a snake algorithm and registered weighted CT and MR images. The algorithm adds morphological information from CT during the delineation on MRI and accelerates the delineation task.
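    The reported 2:1 CT:MRI weighting is a simple voxel-wise linear blend; a sketch assuming the two volumes are already registered and intensity-normalized (the normalization is an assumption, not stated in the record):

```python
import numpy as np

def weighted_ct_mri(ct, mri, w_ct=2.0, w_mri=1.0):
    """Linear weighting of registered CT and MRI used as the snake's
    input; the 2:1 ratio is the compromise the study reports as best
    for TNR and contrast. Assumes volumes normalized to [0, 1]."""
    ct = np.asarray(ct, dtype=float)
    mri = np.asarray(mri, dtype=float)
    return (w_ct * ct + w_mri * mri) / (w_ct + w_mri)
```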

  18. Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation

    Science.gov (United States)

    Lu, Kongkuo; Hall, Christopher S.

    2014-03-01

    Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been used as an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, but the task is quite challenging in US images, due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application dependent, while manual tracing methods are extremely time consuming and have a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to counterbalance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach to improve the boundary-searching ability of the live-wire method, reducing the necessary user interaction while maintaining segmentation performance. Based on the results of segmenting 50 2D breast lesions in US images, less user interaction is required to achieve the desired accuracy, i.e. > 80%, when auto-enhancement is applied for live-wire segmentation.

  19. SEMI-AUTOMATIC BUILDING MODELS AND FAÇADE TEXTURE MAPPING FROM MOBILE PHONE IMAGES

    Directory of Open Access Journals (Sweden)

    J. Jeong

    2016-06-01

    Research on 3D urban modelling has been actively carried out for a long time. Recently, the need for 3D urban modelling research has increased rapidly due to improved geo-web services and the popularity of smart devices. Nowadays, 3D urban models such as those provided by Google Earth use aerial photos for 3D urban modelling, but there are some limitations: immediate updates for changed building models are difficult, many buildings lack a 3D model and texture, and large resources for maintenance and updating are inevitable. To resolve the limitations mentioned above, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images, and analyze the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated by this method were compared with actual measurements of real buildings by comparing the ratios of edge lengths between models and measurements. The results showed an average length-ratio error of 5.8%. With this method, we could generate a simple building model with fine façade textures without expensive dedicated tools and datasets.

  20. Application of a semi-automatic ROI setting system for brain PET images to animal PET studies

    International Nuclear Information System (INIS)

    Kuge, Yuji; Akai, Nobuo; Tamura, Koji

    1998-01-01

    ProASSIST, a semi-automatic ROI (region of interest) setting system for human brain PET images, has been modified for use with the canine brain, and the performance of the resulting system was evaluated by comparing the operational simplicity of ROI setting and the consistency of the ROI values with those obtained by a conventional manual procedure. Namely, we created segment maps for the canine brain with reference to the coronal section atlas of the canine brain by Lim et al., and incorporated them into the ProASSIST system. For the performance test, CBF (cerebral blood flow) and CMRglc (cerebral metabolic rate of glucose) images in dogs with or without focal cerebral ischemia were used. In ProASSIST, brain contours were defined semi-automatically. In the ROI analysis of the test images, manual modification of the contour was necessary in half of the cases examined (8/16). However, the operation was simple enough that the operation time per brain section was significantly shorter than that of the manual operation. The ROI values determined by the system were comparable with those of the manual procedure, confirming the applicability of the system to these animal studies. The use of a system like the present one would also allow more objective data acquisition for quantitative ROI analysis, because no manual procedure except for the specification of some anatomical features is required for ROI setting. (author)

  1. Semi-Automatic Removal of Foreground Stars from Images of Galaxies

    Science.gov (United States)

    Frei, Zsolt

    1996-07-01

    A new procedure, designed to remove foreground stars from galaxy profiles, is presented here. Although several programs exist for stellar and faint-object photometry, none of them treat star removal from the images very carefully. I present my attempt to develop such a system, and briefly compare the performance of my software to one of the well-known stellar photometry packages, DAOPhot (Stetson 1987). Major steps in my procedure are: (1) automatic construction of an empirical 2D point spread function from well-separated stars that are situated off the galaxy; (2) automatic identification of those peaks that are likely to be foreground stars, scaling the PSF and removing these stars, and patching residuals (in the automatically determined smallest possible area where residuals are truly significant); and (3) cosmetic fixing of remaining degradations in the image. The algorithm and software presented here are significantly better for automatic removal of foreground stars from images of galaxies than DAOPhot or similar packages, since: (a) the most suitable stars are selected automatically from the image for the PSF fit; (b) after star removal, an intelligent automatic procedure removes any possible residuals; (c) an unlimited number of images can be cleaned in one run without any user interaction whatsoever. (SECTION: Computing and Data Analysis)
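    Step (2), scaling the PSF to a detected peak and subtracting it, amounts to a least-squares amplitude fit. A minimal sketch that ignores sub-pixel centering, residual patching, and image borders (all of which the full procedure handles):

```python
import numpy as np

def remove_star(image, psf, x, y):
    """Scale the empirical PSF to the peak at integer position (x, y)
    and subtract it in place. Assumes the star is far enough from the
    image border that the PSF stamp fits entirely inside the image."""
    h, w = psf.shape
    y0, x0 = y - h // 2, x - w // 2
    patch = image[y0:y0 + h, x0:x0 + w]
    scale = (patch * psf).sum() / (psf * psf).sum()   # least-squares amplitude
    image[y0:y0 + h, x0:x0 + w] = patch - scale * psf
    return image
```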

  2. Semi-automatic geographic atrophy segmentation for SD-OCT images

    OpenAIRE

    Chen, Qiang; de Sisternes, Luis; Leng, Theodore; Zheng, Luoluo; Kutzscher, Lauren; Rubin, Daniel L.

    2013-01-01

    Geographic atrophy (GA) is a condition that is associated with retinal thinning and loss of the retinal pigment epithelium (RPE) layer. It appears in advanced stages of non-exudative age-related macular degeneration (AMD) and can lead to vision loss. We present a semi-automated GA segmentation algorithm for spectral-domain optical coherence tomography (SD-OCT) images. The method first identifies and segments a surface between the RPE and the choroid to generate retinal projection images in which the projection region is restricted to a sub-volume of the retina where the presence of GA can be identified.

  3. Semi-automatic geographic atrophy segmentation for SD-OCT images.

    Science.gov (United States)

    Chen, Qiang; de Sisternes, Luis; Leng, Theodore; Zheng, Luoluo; Kutzscher, Lauren; Rubin, Daniel L

    2013-01-01

    Geographic atrophy (GA) is a condition that is associated with retinal thinning and loss of the retinal pigment epithelium (RPE) layer. It appears in advanced stages of non-exudative age-related macular degeneration (AMD) and can lead to vision loss. We present a semi-automated GA segmentation algorithm for spectral-domain optical coherence tomography (SD-OCT) images. The method first identifies and segments a surface between the RPE and the choroid to generate retinal projection images in which the projection region is restricted to a sub-volume of the retina where the presence of GA can be identified. Subsequently, a geometric active contour model is employed to automatically detect and segment the extent of GA in the projection images. Two image data sets, consisting of 55 SD-OCT scans from twelve eyes in eight patients with GA and 56 SD-OCT scans from 56 eyes in 56 patients with GA, respectively, were utilized to qualitatively and quantitatively evaluate the proposed GA segmentation method. Experimental results suggest that the proposed algorithm can achieve high segmentation accuracy. The mean GA overlap ratios between our proposed method and outlines drawn in the SD-OCT scans, our method and outlines drawn in the fundus auto-fluorescence (FAF) images, and the commercial software (Carl Zeiss Meditec proprietary software, Cirrus version 6.0) and outlines drawn in FAF images were 72.60%, 65.88% and 59.83%, respectively.
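    The paper reports overlap ratios between segmentations; the exact definition is not given in this record, so the sketch below assumes intersection-over-union:

```python
import numpy as np

def overlap_ratio(mask_a, mask_b):
    """Overlap ratio between two binary GA segmentations; the
    intersection-over-union definition is an assumption."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0
```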

  4. Semi-Automatic Classification Of Histopathological Images: Dealing With Inter-Slide Variations

    Directory of Open Access Journals (Sweden)

    Michael Gadermayr

    2016-06-01

    In the case of 50 available labelled sample patches of a certain whole slide image, the overall classification rate increased from 92 % to 98 % through including the interactive labelling step. Even with only 20 labelled patches, accuracy already increased to 97 %. Without a pre-trained model, if training is performed on target domain data only, 88 % (20 labelled samples) and 95 % (50 labelled samples) accuracy, respectively, were obtained. If enough target domain data were available (about 20 images), the amount of source domain data was of minor relevance. The difference in outcome between a source domain training data set containing 100 patches from one whole slide image and a set containing 700 patches from seven images was lower than 1 %. Contrarily, without target domain data, the difference in accuracy was 10 % (82 % compared to 92 %) between these two settings. The execution runtime between two interaction steps is significantly below one second (0.23 s), which is an important usability criterion. It proved to be beneficial to select specific target domain data in an active learning sense, based on the currently available trained model. While experimental evaluation provided strong empirical evidence for increased classification performance with the proposed method, the additional manual effort can be kept at a low level. The labelling of e.g. 20 images per slide is surely less time consuming than the validation of a complete whole slide image processed with a fully automatic, but less reliable, segmentation approach. Finally, it should be highlighted that the proposed interaction protocol could easily be adapted to other histopathological classification or segmentation tasks, also for implementation in a clinical system.

  5. A semi-automatic procedure for texturing of laser scanning point clouds with google streetview images

    NARCIS (Netherlands)

    Lichtenauer, J.F.; Sirmacek, B.

    2015-01-01

    We introduce a method to texture 3D urban models with photographs that works even for Google Streetview images and can be carried out with currently available free software. This allows realistic texturing, even when it is not possible or cost-effective to (re)visit a scanned site to take textured scans.

  6. Examination of the semi-automatic calculation technique of vegetation cover rate by digital camera images.

    Science.gov (United States)

    Takemine, S.; Rikimaru, A.; Takahashi, K.

    Rice is one of the staple foods in the world. High-quality rice production requires periodically collecting rice growth data to control the growth of the rice. The height of the plant, the number of stems, and the color of the leaves are well-known parameters indicating rice growth, and a rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey requires a lot of labor and time. Recently, a labor-saving method for rice growth diagnosis has been proposed, based on the vegetation cover rate of rice. The vegetation cover rate of rice is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image is done by automatic binarization processing. However, with a vegetation cover rate calculation method that depends on automatic binarization alone, there is a possibility that the calculated vegetation cover rate decreases even as the rice grows. In this paper, a calculation method for the vegetation cover rate is proposed that is based on automatic binarization and additionally refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, and the vegetation cover rate of both methods was compared with a reference value obtained by visual interpretation. As a result of the comparison, the accuracy of discriminating rice plant areas was increased by the proposed method.
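    A generic sketch of cover-rate computation by automatic binarization, here Otsu's threshold on an excess-green index; the proposed method additionally consults growth-hysteresis information, which is omitted. The index choice and thresholding scheme are assumptions, not the record's exact procedure.

```python
import numpy as np
from skimage.filters import threshold_otsu

def vegetation_cover_rate(image_rgb):
    """Cover rate from a nadir photo: binarize an excess-green index
    with Otsu's threshold and report the fraction of plant pixels."""
    rgb = image_rgb.astype(float)
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]  # excess-green index
    plant = exg > threshold_otsu(exg)                  # automatic binarization
    return plant.mean()                                # fraction of plant pixels
```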

  7. A deformable-model approach to semi-automatic segmentation of CT images demonstrated by application to the spinal canal

    International Nuclear Information System (INIS)

    Burnett, Stuart S.C.; Starkschall, George; Stevens, Craig W.; Liao Zhongxing

    2004-01-01

    Because of the importance of accurately defining the target in radiation treatment planning, we have developed a deformable-template algorithm for the semi-automatic delineation of normal tissue structures on computed tomography (CT) images. We illustrate the method by applying it to the spinal canal. Segmentation is performed in three steps: (a) partial delineation of the anatomic structure is obtained by wavelet-based edge detection; (b) a deformable-model template is fitted to the edge set by chamfer matching; and (c) the template is relaxed away from its original shape into its final position. Appropriately chosen ranges for the model parameters limit the deformations of the template, accounting for interpatient variability. Our approach differs from those used in other deformable models in that it does not inherently require the modeling of forces. Instead, the spinal canal was modeled using Fourier descriptors derived from four sets of manually drawn contours. Segmentation was carried out, without manual intervention, on five CT data sets, and the algorithm's performance was judged subjectively by two radiation oncologists. Two assessments were considered: in the first, segmentation on a random selection of 100 axial CT images was compared with the corresponding contours drawn manually by one of six dosimetrists, also chosen randomly; in the second assessment, the segmentation of each image in the five evaluable CT sets (a total of 557 axial images) was rated as either successful, unsuccessful, or requiring further editing. Contours generated by the algorithm were more likely than manually drawn contours to be considered acceptable by the oncologists. The mean proportions of acceptable contours were 93% (automatic) and 69% (manual). Automatic delineation of the spinal canal was deemed successful on 91% of the images, unsuccessful on 2%, and requiring further editing on 7%. Our deformable-template algorithm thus provides a robust semi-automatic method for delineating the spinal canal on CT images.

  8. SMASH - semi-automatic muscle analysis using segmentation of histology: a MATLAB application.

    Science.gov (United States)

    Smith, Lucas R; Barton, Elisabeth R

    2014-01-01

    Histological assessment of skeletal muscle tissue is commonly applied in many areas of skeletal muscle physiological research. Histological parameters including fiber distribution, fiber type, centrally nucleated fibers, and capillary density are all frequently quantified measures of skeletal muscle. These parameters reflect functional properties of muscle and undergo adaptation in many muscle diseases and injuries. While standard operating procedures have been developed to guide the analysis of many of these parameters, software to freely, efficiently, and consistently analyze them is not readily available. In order to provide this service to the muscle research community, we developed an open-source MATLAB script to analyze immunofluorescent muscle sections, incorporating user controls for muscle histological analysis. The software consists of multiple functions designed to provide tools for the selected analysis. Initial segmentation and fiber-filter functions segment the image and remove non-fiber elements based on user-defined parameters to create a fiber mask. Using parameters set by the user, the software outputs data on fiber size and type, centrally nucleated fibers, and other structures. These functions were evaluated on stained soleus muscle sections from 1-year-old wild-type and mdx mice, a model of Duchenne muscular dystrophy. In accordance with previously published data, fiber size was not different between groups, but mdx muscles had much higher fiber size variability. The mdx muscle had a significantly greater proportion of type I fibers, but type I fibers did not change in size relative to type II fibers. Centrally nucleated fibers were highly prevalent in mdx muscle and were significantly larger than peripherally nucleated fibers. The MATLAB code described and provided along with this manuscript is designed for image processing of skeletal muscle immunofluorescent histological sections. The program allows for semi-automated fiber detection.

  9. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure.

    Science.gov (United States)

    Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-07-28

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects into three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a finely discretized geometry with a reduced amount of time, ready to be used in structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
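    The core of the geometry construction, turning a stack of point sections into a solid of voxel elements, can be sketched as an occupancy-grid voxelization; meshing the occupied voxels for the FE solver is omitted. A minimal sketch, with the voxel size as a free parameter:

```python
import numpy as np

def voxelize(points, voxel_size):
    """Turn a point cloud (N x 3 array) into a boolean occupancy grid
    of voxel elements, plus the grid origin in world coordinates."""
    pts = np.asarray(points, dtype=float)
    origin = pts.min(axis=0)
    idx = np.floor((pts - origin) / voxel_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[tuple(idx.T)] = True        # mark every voxel containing a point
    return grid, origin
```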

  10. SplitRacer - a semi-automatic tool for the analysis and interpretation of teleseismic shear-wave splitting

    Science.gov (United States)

    Reiss, Miriam Christina; Rümpker, Georg

    2017-04-01

    We present a semi-automatic, graphical-user-interface tool for the analysis and interpretation of teleseismic shear-wave splitting in MATLAB. Shear-wave splitting analysis is a standard tool to infer seismic anisotropy, which is often interpreted as due to lattice-preferred orientation of e.g. mantle minerals or shape-preferred orientation caused by cracks or alternating layers in the lithosphere, and hence provides a direct link to the earth's kinematic processes. The increasing number of permanent stations and temporary experiments results in comprehensive studies of seismic anisotropy world-wide. Their successive comparison with a growing number of global models of mantle flow further advances our understanding of the earth's interior. However, increasingly large data sets pose the inevitable question as to how to process them. Well-established routines and programs are accurate but often slow and impractical for analyzing a large amount of data. Additionally, shear-wave splitting results are seldom evaluated using the same quality criteria, which complicates a straightforward comparison. SplitRacer consists of several processing steps: i) download of data via FDSNWS; ii) direct reading of miniSEED files and an initial screening and categorizing of XKS waveforms using a pre-set SNR threshold; iii) analysis of the particle motion of selected phases and successive correction of the sensor misalignment based on the long axis of the particle motion; iv) splitting analysis of selected events: seismograms are first rotated into radial and transverse components, then the energy-minimization method is applied, which provides the polarization and delay time of the phase; to estimate errors, the analysis is done for different randomly chosen time windows; v) joint splitting analysis of all events for one station, where the energy content of all phases is inverted simultaneously, which decreases the influence of noise and increases the robustness of the measurement.
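    Step iv) begins by rotating the horizontal components into the radial/transverse frame. A sketch using the standard back-azimuth convention (as used e.g. in ObsPy); the rest of the splitting analysis is not reproduced here.

```python
import numpy as np

def rotate_rt(north, east, back_azimuth_deg):
    """Rotate N/E seismogram components into radial and transverse
    components, the first step of the splitting analysis above."""
    ba = np.radians(back_azimuth_deg)
    radial = -north * np.cos(ba) - east * np.sin(ba)
    transverse = north * np.sin(ba) - east * np.cos(ba)
    return radial, transverse
```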

  11. Stiffness analysis of spring mechanism for semi automatic gripper motion of tendon driven remote manipulator

    International Nuclear Information System (INIS)

    Yu, Seung Nam; Lee, Jong Kwang

    2012-01-01

    Remote handling manipulators are widely used for performing hazardous tasks, and it is essential to ensure the reliable performance of such systems. Toward this end, tendon-driven mechanisms are adopted in such systems to reduce the weight of the distal parts of the manipulator while maintaining the handling performance. In this study, several approaches for the design of a gripper system for a tendon-driven remote handling system are introduced. Basically, this gripper has an underactuated spring mechanism that is combined with a slave manipulator triggered by a master operator. Based on the requirements of the specified tendon-driven mechanism, analyses of the connecting position of the spring system on the gripper mechanism and of the kinematic influence coefficients (KIC) are performed. As a result, a suitable combination of components for the proper design of the target system is presented and verified.

  12. Semi-automatized segmentation method using image-based flow cytometry to study sperm physiology: the case of capacitation-induced tyrosine phosphorylation.

    Science.gov (United States)

    Matamoros-Volante, Arturo; Moreno-Irusta, Ayelen; Torres-Rodriguez, Paulina; Giojalas, Laura; Gervasi, María G; Visconti, Pablo E; Treviño, Claudia L

    2018-02-01

    Is image-based flow cytometry a useful tool to study intracellular events in human sperm such as protein tyrosine phosphorylation or signaling processes? Image-based flow cytometry is a powerful tool to study intracellular events in a relevant number of sperm cells, which enables a robust statistical analysis providing spatial resolution in terms of the specific subcellular localization of the labeling. Sperm capacitation is required for fertilization. During this process, spermatozoa undergo numerous physiological changes, via activation of different signaling pathways, which are not completely understood. Classical approaches for studying sperm physiology include conventional microscopy, flow cytometry and Western blotting. These techniques present disadvantages for obtaining detailed subcellular information on signaling pathways in a relevant number of cells. This work describes a new semi-automatized analysis using image-based flow cytometry which enables the study, at the subcellular and population levels, of different sperm parameters associated with signaling. The increase in protein tyrosine phosphorylation during capacitation is presented as an example. Sperm cells were isolated from seminal plasma by the swim-up technique. We evaluated the intensity and distribution of protein tyrosine phosphorylation in sperm incubated in non-capacitation and capacitation-supporting media for 1 and 18 h under different experimental conditions. We used an antibody against FER kinase and pharmacological inhibitors in an attempt to identify the kinases involved in protein tyrosine phosphorylation during human sperm capacitation. Semen samples from normospermic donors were obtained by masturbation after 2-3 days of sexual abstinence. We used the innovative technique of image-based flow cytometry and image analysis tools to segment individual images of spermatozoa. We evaluated and quantified the regions of the sperm where protein tyrosine phosphorylation takes place at the subcellular level.

  13. Semi-automatic fluoroscope

    International Nuclear Information System (INIS)

    Tarpley, M.W.

    1976-10-01

    Extruded aluminum-clad uranium-aluminum alloy fuel tubes must pass many quality control tests before irradiation in Savannah River Plant nuclear reactors. Nondestructive test equipment has been built to automatically detect high and low density areas in the fuel tubes using x-ray absorption techniques with a video analysis system. The equipment detects areas as small as 0.060-in. dia with 2 percent penetrameter sensitivity. These areas are graded as to size and density by an operator using electronic gages. Video image enhancement techniques permit inspection of ribbed cylindrical tubes and make possible the testing of areas under the ribs. Operation of the testing machine, the special low light level television camera, and analysis and enhancement techniques are discussed

  14. Semi-automatic segmentation of gated blood pool emission tomographic images by watersheds: application to the determination of right and left ejection fractions

    International Nuclear Information System (INIS)

    Mariano-Goulart, D.; Collet, H.; Kotzki, P.-O.; Zanca, M.; Rossi, M.

    1998-01-01

    Tomographic multi-gated blood pool scintigraphy (TMUGA) is a widely available method which permits simultaneous assessment of right and left ventricular ejection fractions. However, the widespread clinical use of this technique is impeded by the lack of segmentation methods dedicated to automatic analysis of ventricular activities. In this study we evaluated how a watershed algorithm succeeds in providing semi-automatic segmentation of ventricular activities in order to measure right and left ejection fractions by TMUGA. The left ejection fractions of 30 patients were evaluated both with TMUGA and with planar multi-gated blood pool scintigraphy (PMUGA). Likewise, the right ejection fractions of 25 patients were evaluated with first-pass scintigraphy (FP) and with TMUGA. The watershed algorithm was applied to the reconstructed slices in order to group together the voxels whose activity came from one specific cardiac cavity. First, the results of the watershed algorithm were compared with manual drawing around the left and right ventricles. Left ejection fractions evaluated by TMUGA with the watershed procedure were not significantly different from those with manual outlines (p=0.30), whereas a small but significant difference was found for right ejection fractions (p=0.004). Then, right and left ejection fractions evaluated by TMUGA (with the semi-automatic segmentation procedure) were compared with the results obtained by FP or PMUGA. Left ventricular ejection fractions evaluated by TMUGA showed an excellent correlation with those evaluated by PMUGA (r=0.93; SEE=5.93%; slope=0.99; intercept=4.17%). The measurements of these ejection fractions were significantly higher with TMUGA than with PMUGA (p<0.01). The interoperator variability for the measurement of left ejection fractions by TMUGA was 4.6%. Right ventricular ejection fractions evaluated by TMUGA showed a good correlation with those evaluated by FP (r=0.81; SEE=6.68%; slope=1.00; intercept=0.85%) and were not significantly different between the two methods.

  15. A new generic method for the semi-automatic extraction of river and road networks in low and mid-resolution satellite images

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory]; Dillard, Scott [PNNL]; Soille, Pierre [EC JRC]

    2010-10-21

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts arising from mathematical morphology and hydrology. The method exploits both geometrical and topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general assumptions, which are the minimum conditions for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have a similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow compared with other objects in the image. While this approach fully exploits local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is incorporated. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given network seed with this metric is combined with hydrological operators for overland flow simulation to extract the paths which contain most line evidence and to identify them with the target network.

  16. Performance Evaluation of Three Different High Resolution Satellite Images in Semi-Automatic Urban Illegal Building Detection

    Science.gov (United States)

    Khalilimoghadam, N.; Delavar, M. R.; Hanachi, P.

    2017-09-01

    The problem of overcrowding in mega cities has become more pronounced in recent years. To meet the housing needs of this increased population, which is of great importance in mega cities, a huge number of buildings are constructed annually. With the ever-increasing trend of building construction, we are faced with a growing number of building infractions and illegal buildings (IBs). Acquiring multi-temporal satellite images and using change detection techniques is one of the most suitable methods of IB monitoring. The choice of satellite images, with their different spatial and spectral resolutions, has always been an issue in the efficient detection of building changes. In this research, three bi-temporal high-resolution satellite images from the IRS-P5, GeoEye-1 and QuickBird sensors, acquired over the west of the metropolitan area of Tehran, capital of Iran, were used, in addition to city maps and a municipality property database, to detect buildings under construction with improved performance and accuracy. Furthermore, determining which of the employed bi-temporal satellite images provides better performance and accuracy for IB detection is the other purpose of this research. Kappa coefficients of 70 %, 64 %, and 68 % were obtained for the change image maps produced using the GeoEye-1, IRS-P5, and QuickBird satellite images, respectively. In addition, overall accuracies of 100 %, 6 %, and 83 % were achieved for IB detection using these satellite images, respectively. These accuracies substantiate the fact that the GeoEye-1 satellite images had the best performance among the employed images in producing a change image map and detecting IBs.

  17. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant property, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects included a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums.

  18. ROADS CENTRE-AXIS EXTRACTION IN AIRBORNE SAR IMAGES: AN APPROACH BASED ON ACTIVE CONTOUR MODEL WITH THE USE OF SEMI-AUTOMATIC SEEDING

    Directory of Open Access Journals (Sweden)

    R. G. Lotte

    2013-05-01

    Full Text Available Research on computational methods for road extraction has increased considerably in the last two decades. This procedure is usually performed on imagery from optical or microwave (radar) sensors. Radar images offer advantages over optical ones, for they allow the acquisition of scenes regardless of atmospheric and illumination conditions and make it possible, among other things, to survey regions where the terrain is hidden by the vegetation canopy. Cartographic mapping based on these images is often accomplished manually, requiring considerable time and effort from the human interpreter. Maps for detecting new roads or updating the existing road network are among the most important cartographic products to date. There are currently many studies involving the extraction of roads by means of automatic or semi-automatic approaches. Each of them presents different solutions to different problems, so that this task remains an open scientific issue. One of the preliminary steps for road extraction can be the seeding of points belonging to roads, which can be done using different methods with diverse levels of automation. The identified seed points are interpolated to form the initial road network, and are then used as input for the extraction method proper. The present work introduces an innovative hybrid method for the extraction of road centre-axes in a synthetic aperture radar (SAR) airborne image. Initially, candidate points are seeded fully automatically using Self-Organizing Maps (SOM), followed by a pruning process based on specific metrics. The centre-axes are then detected by an open-curve active contour model (snakes). The obtained results were evaluated for quality with respect to completeness, correctness and redundancy.
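
    The refinement stage pairs seed interpolation with an open-curve snake. A minimal sketch of that step follows, using scikit-image's active_contour as a stand-in for the authors' snake implementation (assuming a recent scikit-image; the input file, seed coordinates and energy weights are illustrative):

    ```python
    import numpy as np
    from skimage import io, filters
    from skimage.segmentation import active_contour

    sar = io.imread("sar_scene.png", as_gray=True)   # hypothetical SAR amplitude image
    smoothed = filters.gaussian(sar, sigma=2)        # mildly suppress speckle

    # Initial polyline interpolated between two seed points (row, col), as could
    # be produced by the SOM seeding and pruning stage
    rows = np.linspace(120, 380, 200)
    cols = np.linspace(100, 420, 200)
    init = np.stack([rows, cols], axis=1)

    # The 'fixed' boundary condition keeps the curve open with pinned endpoints,
    # so the snake relaxes onto the road axis between the two seeds
    axis = active_contour(smoothed, init, alpha=0.01, beta=1.0,
                          w_line=0, w_edge=1, boundary_condition="fixed")
    ```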

  19. The Reliability of Technical and Tactical Tagging Analysis Conducted by a Semi-Automatic VTS in Soccer.

    Science.gov (United States)

    Beato, Marco; Jamil, Mikael; Devereux, Gavin

    2018-06-01

    The Video Tracking multiple-camera system (VTS) is a technology that records two-dimensional position data (x and y) at high sampling rates (over 25 Hz). The VTS is of great interest because it can record external load variables as well as collect technical and tactical parameters. Performance analysis has mainly focused on physical demands, and less attention has been given to technical and tactical factors. Digital.Stadium® VTS is a performance analysis device widely used at national and international levels (e.g. Italian Serie A, Euro 2016), and the reliability evaluation of its technical tagging analysis (e.g. shots, passes, assists, set pieces) could be paramount for its application in elite-level competitions, as well as in research studies. Two professional soccer teams, with 30 male players (age 23 ± 5 years, body mass 78.3 ± 6.9 kg, body height 1.81 ± 0.06 m), were monitored in the 2016 season during a friendly match, and data analysis was performed immediately after the game ended. This process was then replicated a week later (four operators conducted the data analysis in each week). This study reports a near perfect relationship between Match and its Replication: R2 coefficients (relationships between Match and Replication) were highly significant for each of the technical variables considered (p < 0.05), indicating that the Digital.Stadium® VTS records technical tagging data accurately.

  20. A semi-automatic technique for measurement of arterial wall from black blood MRI

    International Nuclear Information System (INIS)

    Ladak, Hanif M.; Thomas, Jonathan B.; Mitchell, J. Ross; Rutt, Brian K.; Steinman, David A.

    2001-01-01

    Black blood magnetic resonance imaging (MRI) has become a popular technique for imaging the artery wall in vivo. Its noninvasiveness and high resolution make it ideal for studying the progression of early atherosclerosis in normal volunteers or asymptomatic patients with mild disease. However, the operator variability inherent in the manual measurement of vessel wall area from MR images hinders the reliable detection of relatively small changes in the artery wall over time. In this paper we present a semi-automatic method for segmenting the inner and outer boundaries of the artery wall, and evaluate its operator variability using analysis of variance (ANOVA). In our approach, a discrete dynamic contour is approximately initialized by an operator, deformed to the inner boundary, dilated, and then deformed to the outer boundary. A group of four operators performed repeated measurements on 12 images from normal human subjects using both our semi-automatic technique and a manual approach. Results from the ANOVA indicate that the inter-operator standard error of measurement (SEM) of total wall area decreased from 3.254 mm² (manual) to 1.293 mm² (semi-automatic), and the intra-operator SEM decreased from 3.005 mm² to 0.958 mm². Operator reliability coefficients increased from less than 69% to more than 91% (inter-operator) and 95% (intra-operator). The minimum detectable change in wall area improved from more than 8.32 mm² (intra-operator, manual) to less than 3.59 mm² (inter-operator, semi-automatic), suggesting that it is better to have multiple operators measure wall area with our semi-automatic technique than to have a single operator make repeated measurements manually. Similar improvements in wall thickness and lumen radius measurements were also recorded. Since the semi-automatic technique has effectively ruled out the effect of the operator on these measurements, it may be possible to use such techniques to expand prospective studies of atherogenesis to multiple centres.

  1. Semi-automatic drawings surveying system

    International Nuclear Information System (INIS)

    Andriamampianina, Lala

    1983-01-01

    A system for the semi-automatic surveying of drawings is presented. Its design is oriented toward reducing the stored information required for drawing reproduction. The equipment consists mainly of a plotter driven by a micro-computer, with the pen of the plotter replaced by a circular photodiode array. Line drawings are first viewed as a concatenation of vectors, with a constant angle between consecutive vectors, and then divided into arcs of circles and line segments. A dynamic analysis of line intersections with the circular sensor permits the identification of starting points and end points of a line, so that connected lines in the drawing can be followed automatically. The advantage of the described method is that precision depends practically only on the plotter performance, the sensor resolution mattering only for the thickness of strokes and the distance between two strokes. (author) [fr

  2. Semi-automatic film processing unit

    International Nuclear Information System (INIS)

    Mohamad Annuar Assadat Husain; Abdul Aziz Bin Ramli; Mohd Khalid Matori

    2005-01-01

    The design concept applied in the development of a semi-automatic film processing unit requires creativity and user support in channelling the required information to select materials and an operating system that suit the design produced. Low cost and efficient operation are challenges that must be met while keeping abreast of rapid technological advancement. In producing this processing unit, a few elements need to be considered in order to produce a high-quality image. Consistent movement and correct time coordination for developing and drying are two elements which need to be controlled. Other elements which need serious attention are temperature, liquid density and the amount of time for the chemical liquids to react. The subsequent chemical reactions that take place cause the liquid chemicals to age, and this adversely affects the quality of the image produced. The unit is also equipped with a liquid chemical drainage system and a disposal chemical tank. This unit would be useful in GP clinics, especially in rural areas, which practice a manual system for developing and require low operational cost. (Author)

  3. Semi-automatic logarithmic converter of logs

    International Nuclear Information System (INIS)

    Gol'dman, Z.A.; Bondar's, V.V.

    1974-01-01

    An original semi-automatic converter was developed for converting BK resistance logging charts and the acoustic-log interval transit time, ΔT, from a linear to a logarithmic scale with a specific ratio, for subsequent combination with neutron-gamma logging charts in the operative interpretation of logging data by a normalization method. The converter increases productivity compared with manual, pointwise processing of the curves. The equipment operates reliably and is simple to use. (author)

  4. A Supporting Platform for Semi-Automatic Hyoid Bone Tracking and Parameter Extraction from Videofluoroscopic Images for the Diagnosis of Dysphagia Patients.

    Science.gov (United States)

    Lee, Jun Chang; Nam, Kyoung Won; Jang, Dong Pyo; Paik, Nam Jong; Ryu, Ju Seok; Kim, In Young

    2017-04-01

    Conventional kinematic analysis of videofluoroscopic (VF) swallowing images, the most popular approach for dysphagia diagnosis, requires time-consuming and repetitive manual extraction of diagnostic information from the multiple images representing one swallowing period. This results in a heavy workload for clinicians and excessive hospital visits for patients to receive counseling and prescriptions. In this study, a software platform was developed that can assist in the VF diagnosis of dysphagia by automatically extracting the two-dimensional moving trajectory of the hyoid bone as well as 11 temporal and kinematic parameters. Fifty VF swallowing videos containing both non-mandible-overlapped and mandible-overlapped cases from eight patients with dysphagia of various etiologies, and 19 videos from ten healthy controls, were utilized for performance verification. Percent errors of hyoid bone tracking were 1.7 ± 2.1% for non-overlapped images and 4.2 ± 4.8% for overlapped images. Correlation coefficients between manually and automatically extracted moving trajectories of the hyoid bone were 0.986 ± 0.017 (X-axis) and 0.992 ± 0.006 (Y-axis) for non-overlapped images, and 0.988 ± 0.009 (X-axis) and 0.991 ± 0.006 (Y-axis) for overlapped images. Based on the experimental results, we believe that the proposed platform has the potential to improve the satisfaction of both clinicians and patients with dysphagia.

  5. Semi-Automatic Rename Refactoring for JavaScript

    DEFF Research Database (Denmark)

    Feldthaus, Asger; Møller, Anders

    2013-01-01

    Modern IDEs support automated refactoring for many programming languages, but support for JavaScript is still primitive. To perform renaming, which is one of the fundamental refactorings, there is often no practical alternative to simple syntactic search-and-replace. Although more sophisticated alternatives have been developed, they are limited by whole-program assumptions and poor scalability. We propose a technique for semi-automatic refactoring for JavaScript, with a focus on renaming. Unlike traditional refactoring algorithms, semi-automatic refactoring works by a combination of static analysis and interaction with the programmer. With this pragmatic approach, we can provide scalable and effective refactoring support for real-world code, including libraries and incomplete applications. Through a series of experiments that estimate how much manual effort our technique demands from the programmer, we show...

  6. Accessories for Enhancement of the Semi-Automatic Welding Processes

    National Research Council Canada - National Science Library

    Wheeler, Douglas M; Sawhill, James M

    2000-01-01

    The project's objective is to identify specific areas of the semi-automatic welding operation that is performed with the major semi-automatic processes, which would be more productive if a suitable...

  7. SU-F-BRA-01: A Procedure for the Fast Semi-Automatic Localization of Catheters Using An Electromagnetic Tracker (EMT) for Image-Guided Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Damato, A; Viswanathan, A; Cormack, R [Brigham and Women’s Hospital, Boston, MA (United States)

    2015-06-15

    Purpose: To evaluate the feasibility of brachytherapy catheter localization through use of an EMT and a 3D image set. Methods: A 15-catheter phantom mimicking an interstitial implantation was built and CT-scanned. Baseline catheter reconstruction was performed manually. An EMT was used to acquire the catheter coordinates in the EMT frame of reference. N user-identified catheter tips, without catheter number associations, were used to establish registration with the CT frame of reference. Two algorithms were investigated: brute-force registration (BFR), in which all possible permutations of the N identified tips with the EMT tips were evaluated; and signature-based registration (SBR), in which a distance matrix was used to generate a list of matching signatures describing possible N-point matches with the registration points. Digitization error (average of the distance between corresponding EMT and baseline dwell positions; average, standard deviation, and worst-case scenario over all possible registration-point selections) and algorithm inefficiency (maximum number of rigid registrations required to find the matching fusion for all possible selections of registration points) were calculated. Results: Digitization errors on average <2 mm were observed for N ≥ 5, with standard deviation <2 mm for N ≥ 6, and worst-case scenario error <2 mm for N ≥ 11. Algorithm inefficiencies were: N = 5, 32,760 (BFR) and 9,900 (SBR); N = 6, 360,360 (BFR) and 21,660 (SBR); N = 11, 5.45×10¹⁰ (BFR) and 12 (SBR). Conclusion: A procedure was proposed for catheter reconstruction using an EMT that only requires user identification of catheter tips, without catheter number associations. Digitization errors <2 mm were observed on average with 5 or more registration points, and in any scenario with 11 or more points. The inefficiency for N = 11 was 9 orders of magnitude lower for SBR than for BFR. Funding: Kaye Family Award.
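
    The gap between BFR and SBR trial counts can be illustrated with a toy experiment: brute force enumerates ordered assignments of the N EMT tips to the reconstructed tips, while a rigid-motion-invariant distance signature prunes the candidates first. The sketch below uses synthetic points and a simplified signature (sorted pairwise distances), so its counts will not match the abstract's exactly:

    ```python
    import itertools
    import math
    import numpy as np

    rng = np.random.default_rng(0)
    ct_tips = rng.uniform(0.0, 100.0, size=(15, 3))       # reconstructed tips (CT frame)
    N = 6
    emt_tips = ct_tips[rng.choice(15, N, replace=False)]  # the N tips seen by the EMT

    def signature(points):
        """Sorted pairwise distances: invariant under rigid motion."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        return np.sort(d[np.triu_indices(len(points), k=1)])

    # BFR-style count: ordered assignments of N EMT tips to 15 CT tips
    bfr_trials = math.perm(15, N)                          # 3,603,600 for N = 6

    # SBR-style count: only N-subsets whose distance signature matches
    target = signature(emt_tips)
    sbr_trials = sum(
        np.allclose(signature(ct_tips[list(idx)]), target, atol=1e-6)
        for idx in itertools.combinations(range(15), N)
    )
    print(bfr_trials, sbr_trials)                          # millions vs. a handful
    ```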

  8. Diagnostic accuracy of semi-automatic quantitative metrics as an alternative to expert reading of CT myocardial perfusion in the CORE320 study.

    Science.gov (United States)

    Ostovaneh, Mohammad R; Vavere, Andrea L; Mehra, Vishal C; Kofoed, Klaus F; Matheson, Matthew B; Arbab-Zadeh, Armin; Fujisawa, Yasuko; Schuijf, Joanne D; Rochitte, Carlos E; Scholte, Arthur J; Kitagawa, Kakuya; Dewey, Marc; Cox, Christopher; DiCarli, Marcelo F; George, Richard T; Lima, Joao A C

    2018-04-03

    To determine the diagnostic accuracy of semi-automatic quantitative metrics compared to expert reading for the interpretation of computed tomography perfusion (CTP) imaging. The CORE320 multicenter diagnostic accuracy clinical study enrolled patients between 45 and 85 years of age who were clinically referred for invasive coronary angiography (ICA). Computed tomography angiography (CTA), CTP, single photon emission computed tomography (SPECT), and ICA images were interpreted manually in blinded core laboratories by two experienced readers. Additionally, eight quantitative CTP metrics as continuous values were computed semi-automatically from myocardial and blood attenuation and were combined using logistic regression to derive a final quantitative CTP metric score. For the reference standard, hemodynamically significant coronary artery disease (CAD) was defined as a quantitative ICA stenosis of 50% or greater and a corresponding perfusion defect by SPECT. Diagnostic accuracy was determined by the area under the receiver operating characteristic curve (AUC). Of the 377 included patients, 66% were male, the median age was 62 (IQR: 56, 68) years, and 27% had prior myocardial infarction. In the patient-based analysis, the AUC (95% CI) for combined CTA-CTP expert reading and combined CTA-CTP semi-automatic quantitative metrics was 0.87 (0.84-0.91) and 0.86 (0.83-0.90), respectively. In the vessel-based analyses the AUCs were 0.85 (0.82-0.88) and 0.84 (0.81-0.87), respectively. No significant difference in AUC was found between combined CTA-CTP expert reading and CTA-CTP semi-automatic quantitative metrics in patient-based or vessel-based analyses (p > 0.05 for all). Combined CTA-CTP semi-automatic quantitative metrics are as accurate as CTA-CTP expert reading for detecting hemodynamically significant CAD. Copyright © 2018 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
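
    The combination step described above (several continuous metrics fused by logistic regression, scored by AUC) can be sketched as follows with simulated data; the metric values, labels and in-sample scoring are placeholders, not the CORE320 pipeline:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n_patients = 377
    X = rng.normal(size=(n_patients, 8))       # 8 continuous CTP metrics (simulated)
    y = rng.integers(0, 2, size=n_patients)    # reference: significant CAD yes/no

    model = LogisticRegression(max_iter=1000).fit(X, y)
    combined_metric = model.predict_proba(X)[:, 1]   # the combined CTP metric score
    print(f"in-sample AUC = {roc_auc_score(y, combined_metric):.2f}")
    ```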

  9. A web based semi automatic frame work for astrobiological researches

    Directory of Open Access Journals (Sweden)

    P.V. Arun

    2013-12-01

    Full Text Available Astrobiology addresses the possibility of extraterrestrial life and explores measures towards its recognition. Research in this context is founded upon the premise that indicators of life encountered in space will be recognizable. However, effective recognition can be accomplished through a universal adaptation of life signatures, without restricting them solely to those attributes that represent local solutions to the challenges of survival. The life indicators should be modelled with reference to the temporal and environmental variations specific to each planet and time. In this paper, we investigate a semi-automatic open-source framework for the accurate detection and interpretation of life signatures by facilitating public participation, in a similar way to that adopted by the SETI@home project. The involvement of the public in identifying patterns can give a thrust to the mission, and is implemented using a semi-automatic framework. Different advanced intelligent methodologies may augment the integration of this human-machine analysis. Automatic and manual evaluations, along with a dynamic learning strategy, have been adopted to provide accurate results. The system also helps to provide a deeper public understanding of space agencies' work and facilitates mass involvement in astrobiological studies. It will surely help to motivate young, eager minds to pursue a career in this field.

  10. Myocardial blood flow quantification by Rb-82 cardiac PET/CT: A detailed reproducibility study between two semi-automatic analysis programs.

    OpenAIRE

    Dunet, V.; Klein, R.; Allenbach, G.; Renaud, J.; deKemp, R.A.; Prior, J.O.

    2016-01-01

    Background Several analysis software packages for myocardial blood flow (MBF) quantification from cardiac PET studies exist, but they have not been compared using concordance analysis, which can characterize precision and bias separately. Reproducible measurements are needed for quantification to fully develop its clinical potential. Methods Fifty-one patients underwent dynamic Rb-82 PET at rest and during adenosine stress. Data were processed with PMOD and FlowQuant (Lortie model). MBF and myocardial flow reserve (MFR) polar maps were quantified and analyzed using a 17-segment model.

  11. A semi-automatic traffic sign detection, classification and positioning system

    NARCIS (Netherlands)

    Creusen, I.M.; Hazelhoff, L.; de With, P.H.N.; Said, A.; Guleryuz, O.G.; Stevenson, R.L.

    2012-01-01

    The availability of large-scale databases containing street-level panoramic images offers the possibility to perform semi-automatic surveying of real-world objects such as traffic signs. These inventories can be performed significantly more efficiently than using conventional methods. Governmental

  12. Comparison of manual and semi-automatic measuring techniques in MSCT scans of patients with lymphoma: a multicentre study

    Energy Technology Data Exchange (ETDEWEB)

    Hoeink, A.J.; Wessling, J.; Schuelke, C.; Kohlhase, N.; Wassenaar, L.; Heindel, W.; Buerke, B. [University Hospital Muenster, Department of Clinical Radiology, Muenster (Germany); Koch, R. [University of Muenster, Institute of Biostatistics and Clinical Research (IBKF), Muenster (Germany); Mesters, R.M. [University Hospital Muenster, Department of Haematology and Oncology, Muenster (Germany); D' Anastasi, M.; Graser, A.; Karpitschka, M. [University Hospital Muenchen (LMU), Institute of Clinical Radiology, Muenchen (Germany); Fabel, M.; Wulff, A. [University Hospital Kiel, Department of Clinical Radiology, Kiel (Germany); Pinto dos Santos, D. [University Hospital Mainz, Department of Diagnostic and Interventional Radiology, Mainz (Germany); Kiessling, A. [University Hospital Marburg, Department of Diagnostic and Interventional Radiology, Marburg (Germany); Dicken, V.; Bornemann, L. [Institute of Medical Imaging Computing, Fraunhofer MeVis, Bremen (Germany)

    2014-11-15

    Multicentre evaluation of the precision of semi-automatic 2D/3D measurements in comparison to manual, linear measurements of lymph nodes, regarding their inter-observer variability, in multi-slice CT (MSCT) of patients with lymphoma. MSCT data of 63 patients were interpreted before and after chemotherapy by one or two radiologists in five university hospitals. In 307 lymph nodes, short-axis (SAD) and long-axis (LAD) diameters and the WHO area were determined manually and semi-automatically. Volume was calculated semi-automatically only. To determine the precision of the individual parameters, a mean was calculated for every lymph node and parameter. The deviation of the measured parameters from this mean was evaluated separately. Statistical analysis entailed intraclass correlation coefficients (ICC) and Kruskal-Wallis tests. Median relative deviations of semi-automatic parameters were smaller than the deviations of manually assessed parameters, e.g. semi-automatic SAD 5.3 % vs. manual 6.5 %. Median variations among the different study sites were smaller if the measurement was conducted semi-automatically, e.g. manual LAD 5.7/4.2 % vs. semi-automatic 3.4/3.4 %. Semi-automatic volumetry was superior to the other parameters (2.8 %). Semi-automatic determination of different lymph node parameters is, compared to manually assessed parameters, associated with a slightly greater precision and a marginally lower inter-observer variability. These results are of importance with regard to the increasing mobility of patients among different medical centres and to the quality management of multicentre trials. (orig.)

  13. Myocardial blood flow quantification by Rb-82 cardiac PET/CT: A detailed reproducibility study between two semi-automatic analysis programs.

    Science.gov (United States)

    Dunet, Vincent; Klein, Ran; Allenbach, Gilles; Renaud, Jennifer; deKemp, Robert A; Prior, John O

    2016-06-01

    Several analysis software packages for myocardial blood flow (MBF) quantification from cardiac PET studies exist, but they have not been compared using concordance analysis, which can characterize precision and bias separately. Reproducible measurements are needed for quantification to fully develop its clinical potential. Fifty-one patients underwent dynamic Rb-82 PET at rest and during adenosine stress. Data were processed with PMOD and FlowQuant (Lortie model). MBF and myocardial flow reserve (MFR) polar maps were quantified and analyzed using a 17-segment model. Comparisons used Pearson's correlation ρ (measuring precision), Bland-Altman limits of agreement and Lin's concordance correlation ρc = ρ·Cb (Cb measuring systematic bias). Lin's concordance and Pearson's correlation values were very similar, suggesting no systematic bias between software packages, with an excellent precision ρ for MBF (ρ = 0.97, ρc = 0.96, Cb = 0.99) and good precision for MFR (ρ = 0.83, ρc = 0.76, Cb = 0.92). On a per-segment basis, no mean bias was observed on Bland-Altman plots, although PMOD provided slightly higher values than FlowQuant at higher MBF and MFR values (P < .0001). Concordance between software packages was excellent for MBF and MFR, despite higher values by PMOD at higher MBF values. Both software packages can be used interchangeably for quantification in the daily practice of Rb-82 cardiac PET.
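
    The concordance statistics used here separate precision from bias: Pearson's ρ measures precision, the bias-correction factor Cb measures systematic shift and scale differences, and Lin's ρc = ρ·Cb combines the two. A small self-contained sketch (with made-up MBF values) is:

    ```python
    import numpy as np

    def lin_ccc(x, y):
        """Pearson's rho, Lin's rho_c, and the bias-correction factor C_b."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()                    # population variances
        sxy = ((x - mx) * (y - my)).mean()
        rho = sxy / np.sqrt(vx * vy)                 # precision
        rho_c = 2.0 * sxy / (vx + vy + (mx - my) ** 2)
        return rho, rho_c, rho_c / rho               # C_b = rho_c / rho

    # e.g. stress MBF (mL/min/g) for the same segments from two packages
    pmod      = np.array([0.8, 1.1, 2.3, 3.0, 1.7, 2.6])
    flowquant = np.array([0.9, 1.0, 2.2, 3.1, 1.6, 2.4])
    print(lin_ccc(pmod, flowquant))
    ```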

  14. Semi-automatic laser beam microdissection of the Y chromosome and analysis of Y chromosome DNA in a dioecious plant, Silene latifolia

    International Nuclear Information System (INIS)

    Matsunaga, S.; Kawano, S.; Michimoto, T.; Higashiyama, T.; Nakao, S.; Sakai, A.; Kuroiwa, T.

    1999-01-01

    Silene latifolia has heteromorphic sex chromosomes, the X and Y chromosomes. The Y chromosome, which is thought to carry the male-determining gene, was isolated by UV laser microdissection and amplified by degenerate oligonucleotide-primed PCR. In situ chromosome suppression of the amplified Y chromosome DNA in the presence of female genomic DNA as a competitor showed that the microdissected Y chromosome DNA did not specifically hybridize to the Y chromosome, but hybridized to all chromosomes. This result suggests that the Y chromosome does not contain Y chromosome-enriched repetitive sequences. A repetitive sequence in the microdissected Y chromosome, RMY1, was isolated while screening repetitive sequences in the amplified Y chromosome DNA. Part of the nucleotide sequence shared similarity with that of X-43.1, which was isolated from microdissected X chromosomes. Since fluorescence in situ hybridization analysis with RMY1 demonstrated that RMY1 was localized at the ends of the chromosomes, RMY1 may be a subtelomeric repetitive sequence. Regarding the sex chromosomes, RMY1 was detected at both ends of the X chromosome and at one end, near the pseudoautosomal region, of the Y chromosome. The different localization of RMY1 on the sex chromosomes provides a clue to the problem of how the sex chromosomes arose from autosomes.

  15. Semi-Automatic Rating Method for Neutrophil Alkaline Phosphatase Activity.

    Science.gov (United States)

    Sugano, Kanae; Hashi, Kotomi; Goto, Misaki; Nishi, Kiyotaka; Maeda, Rie; Kono, Keigo; Yamamoto, Mai; Okada, Kazunori; Kaga, Sanae; Miwa, Keiko; Mikami, Taisei; Masauzi, Nobuo

    2017-01-01

    The neutrophil alkaline phosphatase (NAP) score is a valuable test for the diagnosis of myeloproliferative neoplasms, but it is still rated manually. Therefore, we developed a semi-automatic rating method using Photoshop® and Image-J, called NAP-PS-IJ. Neutrophil alkaline phosphatase staining was performed with Tomonaga's method on peripheral blood films taken from three healthy volunteers. At least 30 neutrophils with NAP scores from 0 to 5+ were observed and imaged, and the area outside each neutrophil was removed with Image-J. The images were binarized with two different procedures (P1 and P2) using Photoshop®. The NAP-positive area (NAP-PA) was measured and the NAP-positive granule count (NAP-PGC) determined with Image-J. The NAP-PA in images binarized with P1 differed significantly (P < 0.05) between images with NAP scores from 0 to 3+ (group 1) and those from 4+ to 5+ (group 2). The original images in group 1 were binarized with P2. Their NAP-PGC differed significantly (P < 0.05) among all four NAP score groups. The mean NAP-PGC with NAP-PS-IJ indicated a good correlation (r = 0.92, P < 0.001) with the results obtained by human examiners. The sensitivity and specificity of NAP-PS-IJ were 60% and 92%, so it might be considered a prototype method for fully automatic NAP score rating. © 2016 Wiley Periodicals, Inc.
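
    The core of such a rating pipeline is binarization followed by area measurement and granule counting. A rough Python equivalent of those two measurements is sketched below; Otsu thresholding stands in for the two Photoshop® procedures, and the input file name is hypothetical:

    ```python
    import numpy as np
    from skimage import io, filters, measure

    cell = io.imread("neutrophil.png", as_gray=True)   # hypothetical stained-cell image
    binary = cell > filters.threshold_otsu(cell)       # NAP-positive pixels

    nap_pa = int(binary.sum())                         # NAP-positive area, in pixels
    labels = measure.label(binary)                     # connected components
    nap_pgc = int(labels.max())                        # NAP-positive granule count
    print(nap_pa, nap_pgc)
    ```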

  16. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    Science.gov (United States)

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-10-01

    Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time-consuming and laborious task. Thus, automatic methods are needed for faster and more reproducible segmentation. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage, which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device, and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thickness and volume produced by these two methods were compared. Furthermore, the influence of possible geometrical differences on cartilage stresses and strains in the knee was evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between the segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.

  17. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    Science.gov (United States)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    The increased availability of high-resolution spatial data, such as high-resolution satellite or Unmanned Aerial Vehicle (UAV) images of Earth, as well as High Resolution Imaging Science Experiment (HiRISE) images of Mars, increases the need for automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation with repeat imagery in environmental management studies, such as those of climate-related change, together with increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self-Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints in satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high-resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis and to improve the accuracy of the results. An overall accuracy of about 98% and a quantization error of 0.001 in the recognition of small linear-trending bedforms demonstrate a promising framework.
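
    A minimal sketch of the SOM step, using the MiniSom package as a generic SOM implementation (not necessarily the one used in the study; the pixel features and map size are illustrative), could look like this:

    ```python
    import numpy as np
    from minisom import MiniSom

    rng = np.random.default_rng(2)
    pixels = rng.random((10000, 4))          # e.g. 4 spectral bands per pixel

    som = MiniSom(8, 8, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=2)
    som.train_random(pixels, 5000)           # competitive learning

    print("quantization error:", som.quantization_error(pixels))
    winning_nodes = np.array([som.winner(p) for p in pixels])  # cluster per pixel
    ```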

  18. Semi-automatic Data Integration using Karma

    Science.gov (United States)

    Garijo, D.; Kejriwal, M.; Pierce, S. A.; Houser, P. I. Q.; Peckham, S. D.; Stanko, Z.; Hardesty Lewis, D.; Gil, Y.; Pennington, D. D.; Knoblock, C.

    2017-12-01

    Data integration applications are ubiquitous in scientific disciplines. A state-of-the-art data integration system accepts both a set of data sources and a target ontology as input, and semi-automatically maps the data sources in terms of concepts and relationships in the target ontology. Mappings can be both complex and highly domain-specific. Once such a semantic model, expressing the mapping using community-wide standard, is acquired, the source data can be stored in a single repository or database using the semantics of the target ontology. However, acquiring the mapping is a labor-prone process, and state-of-the-art artificial intelligence systems are unable to fully automate the process using heuristics and algorithms alone. Instead, a more realistic goal is to develop adaptive tools that minimize user feedback (e.g., by offering good mapping recommendations), while at the same time making it intuitive and easy for the user to both correct errors and to define complex mappings. We present Karma, a data integration system that has been developed over multiple years in the information integration group at the Information Sciences Institute, a research institute at the University of Southern California's Viterbi School of Engineering. Karma is a state-of-the-art data integration tool that supports an interactive graphical user interface, and has been featured in multiple domains over the last five years, including geospatial, biological, humanities and bibliographic applications. Karma allows a user to import their own ontology and datasets using widely used formats such as RDF, XML, CSV and JSON, can be set up either locally or on a server, supports a native backend database for prototyping queries, and can even be seamlessly integrated into external computational pipelines, including those ingesting data via streaming data sources, Web APIs and SQL databases. We illustrate a Karma workflow at a conceptual level, along with a live demo, and show use cases of

  19. Colon wall motility: comparison of novel quantitative semi-automatic measurements using cine MRI.

    Science.gov (United States)

    Hoad, C L; Menys, A; Garsed, K; Marciani, L; Hamy, V; Murray, K; Costigan, C; Atkinson, D; Major, G; Spiller, R C; Taylor, S A; Gowland, P A

    2016-03-01

    Recently, cine magnetic resonance imaging (MRI) has shown promise for visualizing movement of the colonic wall, although assessment of the data has been subjective and observer dependent. This study aimed to develop an objective and semi-automatic imaging metric of ascending colonic wall movement, using image registration techniques. Cine balanced turbo field echo MRI images of ascending colonic motility were acquired over 2 min from 23 healthy volunteers (HVs) at baseline and following two different macrogol stimulus drinks (11 HVs drank 1 L and 12 HVs drank 2 L). Motility metrics derived from large-scale geometric and small-scale pixel-movement parameters following image registration were developed using the post-ingestion data and compared with observer grading of wall motion. Inter- and intra-observer variability in the highest-correlating metric was assessed using Bland-Altman analysis calculated from two separate observations on a subset of data. All the metrics tested showed significant correlation with the observer rating scores. Line analysis (LA) produced the highest correlation coefficient of 0.74 (95% CI: 0.55-0.86), p < 0.001. Analysis of registered cine MRI data provides a quick, accurate and non-invasive method to detect wall motion within the ascending colon following a colonic stimulus in the form of a macrogol drink. © 2015 John Wiley & Sons Ltd.

  20. Semi-automatic assessment of skin capillary density: proof of principle and validation.

    Science.gov (United States)

    Gronenschild, E H B M; Muris, D M J; Schram, M T; Karaca, U; Stehouwer, C D A; Houben, A J H M

    2013-11-01

    Skin capillary density and recruitment have been proven to be relevant measures of microvascular function. Unfortunately, the assessment of skin capillary density from movie files is very time-consuming, since it is done manually. This impedes the use of this technique in large-scale studies. We aimed to develop a (semi-)automated assessment of skin capillary density. CapiAna (Capillary Analysis) is a newly developed semi-automatic image analysis application. The technique involves four steps: 1) movement correction, 2) selection of the frame range and positioning of the region of interest (ROI), 3) automatic detection of capillaries, and 4) manual correction of detected capillaries. To gain insight into the performance of the technique, skin capillary density was measured in twenty participants (ten women; mean age 56.2 [42-72] years). To investigate the agreement between CapiAna and the classic manual counting procedure, we used weighted Deming regression and Bland-Altman analyses. In addition, intra- and inter-observer coefficients of variation (CVs) and differences in analysis time were assessed. We found a good agreement between CapiAna and the classic manual method, with a Pearson's correlation coefficient (r) of 0.95 (P < 0.001) and no significant differences between the two methods, with an intercept of the Deming regression of 1.75 (-6.04; 9.54), while the Bland-Altman analysis showed a mean difference (bias) of 2.0 (-13.5; 18.4) capillaries/mm². The intra- and inter-observer CVs of CapiAna were 2.5% and 5.6% respectively, while for the classic manual counting procedure these were 3.2% and 7.2%, respectively. Finally, the analysis time for CapiAna ranged between 25 and 35 min, versus 80 and 95 min for the manual counting procedure. We have developed a semi-automatic image analysis application (CapiAna) for the assessment of skin capillary density, which agrees well with the classic manual counting procedure, is time-saving, and has better reproducibility as compared to the manual procedure.
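
    The Bland-Altman part of such a validation reduces to the mean difference (bias) and the 95% limits of agreement between the two methods. A minimal sketch with synthetic capillary counts:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    manual = rng.normal(60.0, 10.0, size=20)            # capillaries/mm^2, 20 subjects
    capiana = manual + rng.normal(2.0, 8.0, size=20)    # semi-automatic counts

    diff = capiana - manual
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)                # 95% limits of agreement
    print(f"bias = {bias:.1f} ({bias - half_width:.1f}; {bias + half_width:.1f})")
    ```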

  1. A dorsolateral prefrontal cortex semi-automatic segmenter

    Science.gov (United States)

    Al-Hakim, Ramsey; Fallon, James; Nain, Delphine; Melonakos, John; Tannenbaum, Allen

    2006-03-01

    Structural, functional, and clinical studies in schizophrenia have, for several decades, consistently implicated dysfunction of the prefrontal cortex in the etiology of the disease. Functional and structural imaging studies, combined with clinical, psychometric, and genetic analyses in schizophrenia, have confirmed the key roles played by the prefrontal cortex and closely linked "prefrontal system" structures such as the striatum, amygdala, mediodorsal thalamus, substantia nigra-ventral tegmental area, and anterior cingulate cortices. The nodal structure of the prefrontal system circuit is the dorsolateral prefrontal cortex (DLPFC), or Brodmann area 46, which also appears to be the most commonly studied and cited brain area with respect to schizophrenia [1-4]. In 1986, Weinberger et al. tied cerebral blood flow in the DLPFC to schizophrenia [1]. In 2001, Perlstein et al. demonstrated that DLPFC activation is essential for working memory tasks commonly deficient in schizophrenia [2]. More recently, groups have linked morphological changes due to gene deletion and increased DLPFC glutamate concentration to schizophrenia [3, 4]. Despite the experimental and clinical focus on the DLPFC in structural and functional imaging, the variability of the location of this area, differences in opinion on exactly what constitutes the DLPFC, and inherent difficulties in segmenting this highly convoluted cortical region have contributed to a lack of widely used standards for manual or semi-automated segmentation programs. Given these implications, we developed a semi-automatic tool to segment the DLPFC from brain MRI scans in a reproducible way, in order to conduct further morphological and statistical studies. The segmenter is based on expert neuroanatomist rules (Fallon-Kindermann rules), inspired by cytoarchitectonic data and reconstructions presented by Rajkowska and Goldman-Rakic [5]. It is semi-automated to provide essential user interactivity. We present our results and provide details on

  2. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds.

    Science.gov (United States)

    Cristiano, Bárbara F G; Delgado, José Ubiratan; da Silva, José Wanderley S; de Barros, Pedro D; de Araújo, Radier M S; Dias, Fábio C; Lopes, Ricardo T

    2012-09-01

    The potentiometric titration method was used for characterization of uranium compounds to be applied in intercomparison programs. The method is applied with traceability assured using a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and the operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with those obtained by manual techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Evaluation of Semi-Automatic Metadata Generation Tools: A Survey of the Current State of the Art

    Directory of Open Access Journals (Sweden)

    Jung-ran Park

    2015-09-01

    Full Text Available Assessment of the current landscape of semi-automatic metadata generation tools is particularly important considering the rapid development of digital repositories and the recent explosion of big data. Utilization of (semi)automatic metadata generation is critical in addressing these environmental changes and may be unavoidable in the future, considering the costly and complex operation of manual metadata creation. To address such needs, this study examines the range of semi-automatic metadata generation tools (n=39) while providing an analysis of their techniques, features, and functions. The study focuses on open-source tools that can be readily utilized in libraries and other memory institutions. The challenges and current barriers to implementation of these tools were identified. The greatest area of difficulty lies in the fact that the piecemeal development of most semi-automatic generation tools only addresses part of the issue of semi-automatic metadata generation, providing solutions for one or a few metadata elements but not the full range of elements. This indicates that significant local effort will be required to integrate the various tools into a coherent working whole. Suggestions toward such efforts are presented for future developments that may assist information professionals with the incorporation of semi-automatic tools within their daily workflows.

  4. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds

    International Nuclear Information System (INIS)

    Cristiano, Bárbara F.G.; Delgado, José Ubiratan; Wanderley S da Silva, José; Barros, Pedro D. de; Araújo, Radier M.S. de; Dias, Fábio C.; Lopes, Ricardo T.

    2012-01-01

    The potentiometric titration method was used for characterization of uranium compounds to be applied in intercomparison programs. The method is applied with traceability assured using a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and the operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with those obtained by manual techniques. - Highlights: ► A semi-automatic potentiometric titration method was developed for U characterization. ► K₂Cr₂O₇ was the only certified reference material used. ► Values obtained for U₃O₈ samples were consistent with certified values. ► An uncertainty of 0.01% was useful for characterization and intercomparison programs.

  5. Semi-Automatic Detection of Indigenous Settlement Features on Hispaniola through Remote Sensing Data

    Directory of Open Access Journals (Sweden)

    Till F. Sonnemann

    2017-12-01

    Full Text Available Satellite imagery has had limited application in the analysis of pre-colonial settlement archaeology in the Caribbean; visible evidence of wooden structures perishes quickly in tropical climates. Only slight topographic modifications remain, typically associated with middens. Nonetheless, surface scatters, as well as the soil characteristics they produce, can serve as quantifiable indicators of an archaeological site, detectable by analyzing remote sensing imagery. A variety of pre-processed, very diverse data sets went through a process of image registration, with the intention of combining multispectral bands to feed two different semi-automatic direct detection algorithms: a posterior probability approach and a frequentist approach. Two 5 × 5 km² areas in the northwestern Dominican Republic with diverse environments, sufficient imagery coverage, and a representative number of known indigenous site locations each served for one approach. Buffers around the locations of known sites, as well as areas with no likely archaeological evidence, were used as samples. The resulting maps offer quantifiable statistical outcomes for locations with pixel value combinations similar to those of the identified sites, indicating a higher probability of archaeological evidence. These trials are still experimental and largely unvalidated, as they have not been subsequently ground-truthed, but they show the variable potential of this method in diverse environments.

  6. Evaluation of semi-automatic arterial stenosis quantification

    International Nuclear Information System (INIS)

    Hernandez Hoyos, M.; Universite Claude Bernard Lyon 1, 69 - Villeurbanne; Univ. de los Andes, Bogota; Serfaty, J.M.; Douek, P.C.; Universite Claude Bernard Lyon 1, 69 - Villeurbanne; Hopital Cardiovasculaire et Pneumologique L. Pradel, Bron; Maghiar, A.; Mansard, C.; Orkisz, M.; Magnin, I.; Universite Claude Bernard Lyon 1, 69 - Villeurbanne

    2006-01-01

    Objective: To assess the accuracy and reproducibility of semi-automatic vessel axis extraction and stenosis quantification in 3D contrast-enhanced magnetic resonance angiography (CE-MRA) of the carotid arteries (CA). Materials and methods: A total of 25 MRA datasets was used: 5 phantoms with known stenoses, and 20 patients (40 CAs) drawn from a multicenter trial database. Maracas software extracted vessel centerlines and quantified the stenoses, based on boundary detection in planes perpendicular to the centerline. Centerline accuracy was visually scored. Semi-automatic measurements were compared with: (1) theoretical phantom morphometric values, and (2) stenosis degrees evaluated by two independent radiologists. Results: Exploitable centerlines were obtained in 97% of CAs and in all phantoms. In phantoms, the software achieved better agreement with theoretical stenosis degrees (weighted kappa κw = 0.91) than the radiologists (κw = 0.69). In patients, agreement between software and radiologists varied from κw = 0.67 to 0.90. In both, Maracas was substantially more reproducible than the readers. Mean operating time was within 1 min/CA. Conclusion: Maracas software generates accurate 3D centerlines of vascular segments with minimal user intervention. Semi-automatic quantification of CA stenosis is also accurate, except in very severe stenoses that cannot be segmented. It substantially reduces the inter-observer variability. (orig.)
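
    The agreement statistic used above, linearly weighted kappa, is available off the shelf; a small sketch with invented stenosis grades (the category cutoffs are illustrative) is:

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Stenosis grade per artery: 0 = <30%, 1 = 30-49%, 2 = 50-69%, 3 = >=70%
    software = [0, 1, 2, 3, 2, 1, 0, 3, 2, 2]
    reader   = [0, 1, 2, 3, 1, 1, 0, 3, 3, 2]

    kappa_w = cohen_kappa_score(software, reader, weights="linear")
    print(f"weighted kappa = {kappa_w:.2f}")
    ```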

  7. Comparison Of Semi-Automatic And Automatic Slick Detection Algorithms For Jiyeh Power Station Oil Spill, Lebanon

    Science.gov (United States)

    Osmanoglu, B.; Ozkan, C.; Sunar, F.

    2013-10-01

    After air strikes on July 14 and 15, 2006, the Jiyeh Power Station started leaking oil into the eastern Mediterranean Sea. The power station is located about 30 km south of Beirut, and the slick covered about 170 km of coastline, threatening the neighboring countries Turkey and Cyprus. Due to the ongoing conflict between Israel and Lebanon, cleaning efforts could not start immediately, resulting in 12,000 to 15,000 tons of fuel oil leaking into the sea. In this paper we compare results from automatic and semi-automatic slick detection algorithms. The automatic detection method combines the probabilities calculated for each pixel from each image to obtain a joint probability, minimizing the adverse effects of the atmosphere on oil spill detection. The method can readily utilize X-, C- and L-band data where available. Furthermore, wind and wave speed observations can be used for a more accurate analysis. For this study, we utilize Envisat ASAR ScanSAR data. A probability map is generated based on the radar backscatter, the effect of wind, and the dampening value. The semi-automatic algorithm is based on supervised classification. As a classifier, an Artificial Neural Network Multilayer Perceptron (ANN MLP) is used, since it is more flexible and efficient than the conventional maximum likelihood classifier for multisource and multi-temporal data. The Levenberg-Marquardt (LM) algorithm is chosen to train the ANN MLP. Training and test data for supervised classification are composed from the textural information derived from the SAR images. This approach is semi-automatic because tuning the parameters of the classifier and composing the training data require human interaction. We point out the similarities and differences between the two methods and their results, as well as their advantages and disadvantages. Due to the lack of ground truth data, we compare the obtained results to each other, as well as to other published oil slick area assessments.
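
    A hedged sketch of the supervised branch follows: an MLP trained on per-pixel texture features to separate slick from open water. The features and labels are simulated, and scikit-learn's MLPClassifier (which trains with LBFGS/SGD/Adam rather than Levenberg-Marquardt) stands in for the paper's LM-trained network:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(4)
    features = rng.normal(size=(2000, 6))     # e.g. per-pixel GLCM texture measures
    labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)  # 1 = slick

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=4)
    clf.fit(features[:1500], labels[:1500])
    print("held-out accuracy:", clf.score(features[1500:], labels[1500:]))
    ```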

  8. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    Science.gov (United States)

    Vho, Alice; Bistacchi, Andrea

    2015-04-01

    A quantitative analysis of fault-rock distribution is of paramount importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation along faults at depth. Here we present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM). This workflow has been developed on a real case study: the strike-slip Gole Larghe Fault Zone (GLFZ). It is a fault zone exhumed from ca. 10 km depth, hosted in granitoid rocks of the Adamello batholith (Italian Southern Alps). Individual seismogenic slip surfaces generally show green cataclasites (cemented by the precipitation of epidote and K-feldspar from hydrothermal fluids) and more or less well-preserved pseudotachylytes (black when well preserved, greenish to white when altered). First, a digital model of the outcrop is reconstructed with photogrammetric techniques, using a large number of high-resolution digital photographs processed with the VisualSFM software. By using high-resolution photographs, the DOM can have a much higher resolution than with LIDAR surveys, up to 0.2 mm/pixel. Then, image processing is performed to map the fault-rock distribution with the ImageJ-Fiji package. Green cataclasites and epidote/K-feldspar veins can be separated from the host rock (tonalite) quite easily using spectral analysis. In particular, band ratio and principal component analysis have been tested successfully. The mapping of black pseudotachylyte veins is trickier because the differences between the pseudotachylyte and biotite spectral signatures are not appreciable. For this reason we have tested different morphological processing tools aimed at identifying (and subtracting) the tiny biotite grains. We propose a solution based on binary images involving a combination of size and circularity thresholds. Comparing the results with manually segmented images, we noticed that major problems occur only when pseudotachylyte veins are very thin and discontinuous. After
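
    The spectral step (band ratios and principal component analysis on the orthophoto pixels) can be sketched as follows; the file name, the green-band ratio, and the percentile threshold are illustrative assumptions, not the published workflow:

    ```python
    import numpy as np
    from skimage import io
    from sklearn.decomposition import PCA

    rgb = io.imread("outcrop_ortho.png")[..., :3].astype(float)  # hypothetical DOM texture
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Band ratio emphasizing green cataclasite/epidote against grey tonalite
    greenness = g / (r + b + 1e-6)
    cataclasite_mask = greenness > np.percentile(greenness, 95)

    # PCA on the flattened pixels: later components often isolate subtle contrasts
    pcs = PCA(n_components=3).fit_transform(rgb.reshape(-1, 3))
    pc_images = pcs.reshape(rgb.shape[:2] + (3,))
    ```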

  9. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    International Nuclear Information System (INIS)

    Mazzurana, M; Sandrini, L; Vaccari, A; Malacarne, C; Cristoforetti, L; Pontalti, R

    2003-01-01

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire the images. A new technique was employed to correct nonuniformities in the images, and frequency-dependent transfer functions were used to correlate image intensity with complex permittivity. The proposed method provides frequency-dependent models in which permittivity and conductivity vary continuously, even within the same tissue, reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite-difference time-domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole-body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight.

  10. Bouncy knee in a semi-automatic knee lock prosthesis.

    Science.gov (United States)

    Fisher, L D; Lord, M

    1986-04-01

    The Bouncy Knee concept has previously proved of value when fitted to stabilised knee units of active amputees. The stance phase flex-extend action afforded by a Bouncy Knee increased the symmetry of gait and also gave better tolerance to slopes and uneven ground. A bouncy function has now been incorporated into a knee of the semi-automatic knee lock design in a pilot laboratory trial involving six patients. These less active patients did not show consistent changes in symmetry of gait, but demonstrated an improved ability to walk on slopes and increased their walking range. Subjective response was positive, as noted in the previous trials.

  11. Implementation of a microcontroller-based semi-automatic coagulator.

    Science.gov (United States)

    Chan, K; Kirumira, A; Elkateeb, A

    2001-01-01

    The coagulator is an instrument used in hospitals to detect clot formation as a function of time. Generally, these coagulators are very expensive and therefore not affordable for doctors' offices and small clinics. The objective of this project is to design and implement a low-cost semi-automatic coagulator (SAC) prototype. The SAC is capable of assaying up to 12 samples and can perform the following tests: prothrombin time (PT), activated partial thromboplastin time (APTT), and PT/APTT combination. The prototype has been tested successfully.

  12. Semi-automatic quantitative measurements of intracranial internal carotid artery stenosis and calcification using CT angiography

    International Nuclear Information System (INIS)

    Bleeker, Leslie; Berg, Rene van den; Majoie, Charles B.; Marquering, Henk A.; Nederkoorn, Paul J.

    2012-01-01

    Intracranial carotid artery atherosclerotic disease is an independent predictor of recurrent stroke. However, its quantitative assessment is not routinely performed in clinical practice. In this diagnostic study, we present and evaluate a novel semi-automatic application to quantitatively measure the intracranial internal carotid artery (ICA) degree of stenosis and calcium volume in CT angiography (CTA) images. In this retrospective study involving CTA images of 88 consecutive patients, intracranial ICA stenosis was quantitatively measured by two independent observers. Stenoses were categorized with cutoff values of 30% and 50%. Calcification in the intracranial ICA was qualitatively categorized as absent, mild, moderate, or severe, and quantitatively measured using the semi-automatic application. Linearly weighted kappa values were calculated to assess the interobserver agreement of the stenosis and calcium categorization. The average and standard deviation of the quantitative calcium volume were calculated for the calcium categories. For the stenosis measurements, the CTA images of 162 arteries yielded an interobserver correlation of 0.78 (P < 0.001). Kappa values of the categorized stenosis measurements were moderate: 0.45 and 0.58 for cutoff values of 30% and 50%, respectively. The kappa value for the calcium categorization was 0.62, with good agreement between the qualitative and quantitative calcium assessment. Quantitative degree-of-stenosis measurement of the intracranial ICA on CTA is feasible with good interobserver agreement. Qualitative calcium categorization agrees well with quantitative measurements. (orig.)
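
    The underlying degree-of-stenosis measure is a diameter ratio between the narrowest lumen and a reference lumen; a NASCET-style formula is a common choice, shown here as a sketch with illustrative diameters (the paper may define its reference differently):

    ```python
    def degree_of_stenosis(d_stenosis_mm: float, d_reference_mm: float) -> float:
        """Percent stenosis from the minimal and reference lumen diameters."""
        return 100.0 * (1.0 - d_stenosis_mm / d_reference_mm)

    print(degree_of_stenosis(1.4, 3.5))   # 60.0 -> above the 50% cutoff
    ```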

  13. Development and evaluation of new semi-automatic TLD reader software

    International Nuclear Information System (INIS)

    Pathan, M.S.; Pradhan, S.M.; Palani Selvam, T.; Datta, D.

    2018-01-01

    Nowadays, technological advancement is primarily focused on creating a user-friendly environment for operating any machine and on minimizing human error through the automation of procedures. The present study describes the development and evaluation of new software for the semi-automatic TLD badge reader (TLDBR-7B). The software provides an interactive interface and is compatible with the latest Windows OS as well as with USB data communication. Important new features of the software are automatic glow curve analysis for identifying any abnormality, an event log register, user-defined limits on TL count and on the time of temperature stabilization for readout interruption, and automatic reading resumption options.

  14. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds

    International Nuclear Information System (INIS)

    Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da; Lopes, Ricardo T.

    2011-01-01

    The method of high precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a primary standard of potassium dichromate. The combined standard uncertainty in the determination of the total uranium concentration was of the order of 0.01%, which compares favourably with the roughly 0.1% of the methods traditionally used by nuclear installations.

  15. Semi Automatic Ontology Instantiation in the domain of Risk Management

    Science.gov (United States)

    Makki, Jawad; Alquier, Anne-Marie; Prince, Violaine

    One of the challenging tasks in the context of Ontological Engineering is to automatically or semi-automatically support the process of Ontology Learning and Ontology Population from semi-structured documents (texts). In this paper we describe a Semi-Automatic Ontology Instantiation method from natural language text, in the domain of Risk Management. The method is composed of three steps: 1) annotation with part-of-speech tags, 2) extraction of semantic relation instances, and 3) ontology instantiation. It is based on combined NLP techniques, with human intervention between steps 2 and 3 for control and validation. Since it relies heavily on linguistic knowledge, it is not domain dependent, which is a good feature for portability between the different fields of risk management application. The proposed methodology uses the ontology of the PRIMA project (supported by the European Community) as a generic domain ontology and populates it via an available corpus. A first validation of the approach is done through an experiment with Chemical Fact Sheets from the Environmental Protection Agency.

  16. Semi-automatic parking slot marking recognition for intelligent parking assist systems

    Directory of Open Access Journals (Sweden)

    Ho Gi Jung

    2014-01-01

    This paper proposes a semi-automatic, parking-slot-marking-based target position designation method for parking assist systems in cases where the parking slot markings are of a rectangular type, together with an efficient implementation for real-time operation. The driver observes, through a touchscreen-based human-machine interface, a rearview image captured by a camera installed at the rear of the vehicle, and designates a target parking position by touching the inside of a parking slot. To ensure that the proposed method operates in real time in an embedded environment, access to the bird's-eye view image is made efficient: image-wise batch transformation is replaced with pixel-wise instantaneous transformation. The proposed method showed a 95.5% recognition rate in 378 test cases with 63 test images. Additionally, experiments confirmed that the pixel-wise instantaneous transformation reduced execution time by 92%.
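
    The pixel-wise instantaneous transformation contrasted above with image-wise batch warping amounts to inverse-mapping each requested bird's-eye-view pixel through a ground-to-image homography on demand. The sketch below is an assumption-laden illustration, with a made-up homography standing in for the paper's camera calibration.

```python
import numpy as np

# assumed 3x3 homography from bird's-eye-view ground coordinates (u, v)
# to rearview image coordinates; in practice it comes from calibration
H = np.array([[1.2, 0.1, 30.0],
              [0.0, 1.5, 10.0],
              [0.0, 0.002, 1.0]])

def bev_pixel(rear_image, u, v):
    """Fetch one bird's-eye-view pixel on demand by projecting (u, v) into
    the rearview image, instead of batch-warping the whole image."""
    x, y, w = H @ np.array([u, v, 1.0])
    px, py = int(round(x / w)), int(round(y / w))
    height, width = rear_image.shape[:2]
    if 0 <= px < width and 0 <= py < height:
        return rear_image[py, px]
    return 0  # outside the camera's field of view

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in rearview frame
print(bev_pixel(frame, 100, 200))
```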

  17. A novel region-growing based semi-automatic segmentation protocol for three-dimensional condylar reconstruction using cone beam computed tomography (CBCT).

    Directory of Open Access Journals (Sweden)

    Tong Xi

    OBJECTIVE: To present and validate a semi-automatic segmentation protocol enabling an accurate 3D reconstruction of the mandibular condyles using cone beam computed tomography (CBCT). MATERIALS AND METHODS: Approval from the regional medical ethics review board was obtained for this study. Bilateral mandibular condyles in ten CBCT datasets of patients were segmented using the proposed semi-automatic segmentation protocol, which combines 3D region-growing and local thresholding algorithms. The segmentation of a total of twenty condyles was performed by two observers. The Dice coefficient and distance map calculations were used to evaluate the accuracy and reproducibility of the segmented and 3D-rendered condyles. RESULTS: The mean inter-observer Dice coefficient was 0.98 (range 0.95-0.99). An average 90th-percentile distance of 0.32 mm was found, indicating an excellent inter-observer similarity of the segmented and 3D-rendered condyles. No systematic errors were observed in the proposed segmentation protocol. CONCLUSION: The novel semi-automatic segmentation protocol is an accurate and reproducible tool to segment and render condyles in 3D. The implementation of this protocol in clinical practice allows CBCT to be used as an imaging modality for the quantitative analysis of condylar morphology.
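
    The Dice coefficient used above as the agreement measure is simple to compute from two binary voxel masks; a minimal sketch with synthetic masks (not study data) follows.

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice overlap between two binary segmentations (True = condyle voxel)."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# two observers' hypothetical voxel masks of the same condyle
obs1 = np.zeros((50, 50, 50), bool); obs1[10:40, 10:40, 10:40] = True
obs2 = np.zeros((50, 50, 50), bool); obs2[11:40, 10:41, 10:40] = True
print(dice_coefficient(obs1, obs2))  # approaches 1.0 for high agreement
```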

  18. A semi-automatic method for positioning a femoral bone reconstruction for strict view generation.

    Science.gov (United States)

    Milano, Federico; Ritacco, Lucas; Gomez, Adrian; Gonzalez Bernaldo de Quiros, Fernan; Risk, Marcelo

    2010-01-01

    In this paper we present a semi-automatic method for femoral bone positioning after 3D image reconstruction from Computed Tomography images. This serves as grounding for the definition of strict axial, longitudinal and anterior-posterior views, overcoming the problem of patient positioning biases in 2D femoral bone measuring methods. After the bone reconstruction is aligned to a standard reference frame, new tomographic slices can be generated, on which unbiased measures may be taken. This could allow not only accurate inter-patient comparisons but also intra-patient comparisons, i.e., comparisons of images of the same patient taken at different times. This method could enable medical doctors to diagnose and follow up several bone deformities more easily.
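
    The paper's alignment procedure is not spelled out in this abstract; a PCA-based alignment is one plausible way to realize a standard reference frame of the kind described, so the sketch below is an illustration of the idea rather than the authors' method.

```python
import numpy as np

def align_to_principal_axes(points):
    """Rotate an N x 3 vertex array so its principal axes coincide with the
    coordinate axes, yielding a posture-independent frame in which strict
    axial, longitudinal and anterior-posterior slices can be generated."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(eigvals)[::-1]   # longest axis (femoral shaft) first
    return centered @ eigvecs[:, order]

# elongated random point cloud standing in for a reconstructed femur
verts = np.random.randn(1000, 3) * np.array([50.0, 10.0, 10.0])
aligned = align_to_principal_axes(verts)
```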

  19. Micronuclei frequency in circulating erythrocytes from rainbow trout (Oncorhynchus mykiss) subjected to radiation, an image analysis and flow cytometric study

    International Nuclear Information System (INIS)

    Schultz, N.; Norrgren, L.; Grawe, J.; Johannisson, A.; Medhage, O.

    1993-01-01

    Rainbow trout (Oncorhynchus mykiss) were exposed to a single X-ray dose of 4 Gy. The frequency of micronuclei in the peripheral erythrocytes was investigated at regular intervals up to 58 days after the exposure. A flow cytometric method and a semi-automatic image analysis method were used to estimate the micronuclei frequency. The results show that both methods can detect an increased frequency of micronuclei in peripheral erythrocytes from exposed fish; however, the semi-automatic image analysis method was the more stable and sensitive of the two. (Author)

  20. Semi-automatic synthesis and biological evaluation of 18F-FCH as an oncologic PET tracer

    International Nuclear Information System (INIS)

    Wu Zhanhong; Wang Shizhen; Zhou Qian; Fu Zhe; Qiu Feichan; Huo Li

    2005-01-01

    18F-fluoromethylcholine (18F-FCH) was synthesized as a PET tracer. The semi-automatic synthesis assembly for 18F-FCH was modified from the CPCU (CTI). The radiochemical purity was measured by analytical HPLC. The radiochemical yield and radiochemical purity of 18F-FCH were 15% and >99%, respectively, and the total radiosynthesis time was 55 min after EOB. The biodistribution in normal mice and the toxicity were studied; the labeled product exhibited low toxicity. PET imaging with 18F-FCH was performed on a tumor xenograft murine model. The semi-automatic synthesis assembly is promising for routine clinical radiopharmaceutical preparation, and this preliminary study has shown the usefulness of 18F-FCH as an oncologic PET tracer. (authors)

  1. Neuromantic - from semi-manual to semi-automatic reconstruction of neuron morphology

    Directory of Open Access Journals (Sweden)

    Darren Myatt

    2012-03-01

    The ability to create accurate geometric models of neuronal morphology is important for understanding the role of shape in information processing. Despite a significant amount of research on automating neuron reconstructions from image stacks obtained via microscopy, in practice most data are still collected manually. This paper describes Neuromantic, an open source system for three-dimensional digital tracing of neurites. Neuromantic reconstructions are comparable in quality to those of existing commercial and freeware systems while balancing speed and accuracy of manual reconstruction. The combination of semi-automatic tracing, intuitive editing, and the ability to visualise large image stacks on standard computing platforms provides a versatile tool that can help address the reconstruction availability bottleneck. Practical considerations for reducing the computational time and space requirements of the extended algorithm are also discussed.

  2. The Semi-automatic Synthesis of 18F-fluoroethyl-choline by Domestic FDG Synthesizer

    Directory of Open Access Journals (Sweden)

    ZHOU Ming

    2016-02-01

    As an important complementary imaging agent to 18F-FDG, 18F-fluoroethylcholine (18F-FECH) has been shown to be promising in brain and prostate cancer imaging. Using the domestic PET-FDG-IT-I CPCU synthesizer, 18F-FECH was synthesized with different reagents and consumable supplies. A C18 column was added before the product collection bottle to remove K2.2.2. 18F-FECH was synthesized efficiently by the PET-FDG-IT-I synthesizer in about 30 minutes, with a radiochemical yield of 42.0% (no decay correction, n=5), and the radiochemical purity was still more than 99.0% after 6 hours. The results showed that the domestic PET-FDG-IT-I synthesizer can semi-automatically synthesize injectable 18F-FECH with high efficiency and radiochemical purity.

  3. Usefulness of semi-automatic volumetry compared to established linear measurements in predicting lymph node metastases in MSCT

    Energy Technology Data Exchange (ETDEWEB)

    Buerke, Boris; Puesken, Michael; Heindel, Walter; Wessling, Johannes (Dept. of Clinical Radiology, Univ. of Muenster (Germany)), email: buerkeb@uni-muenster.de; Gerss, Joachim (Dept. of Medical Informatics and Biomathematics, Univ. of Muenster (Germany)); Weckesser, Matthias (Dept. of Nuclear Medicine, Univ. of Muenster (Germany))

    2011-06-15

    Background: Volumetry of lymph nodes potentially reflects asymmetric size alterations better than metric parameters (e.g. long-axis diameter), independently of lymph node orientation. Purpose: To distinguish between benign and malignant lymph nodes by comparing 2D and semi-automatic 3D measurements in MSCT. Material and Methods: FDG-18 PET-CT was performed in 33 patients prior to therapy for malignant melanoma at stage III/IV. One hundred and eighty-six cervico-axillary, abdominal and inguinal lymph nodes were evaluated independently by two radiologists, both manually and with the use of semi-automatic segmentation software. Long-axis diameter (LAD), short-axis diameter (SAD), maximal 3D diameter, volume and elongation were obtained. PET-CT, PET-CT follow-up and/or histology served as a combined reference standard. Statistics encompassed intra-class correlation coefficients and ROC curves. Results: Compared to manual assessment, semi-automatic inter-observer variability was found to be lower, e.g. at 2.4% (95% CI 0.05-4.8) for LAD. The standard of reference revealed metastases in 90 (48%) of 186 lymph nodes. Semi-automatic prediction of lymph node metastases revealed the highest areas under the ROC curves for volume (reader 1: 0.77, 95% CI 0.64-0.90; reader 2: 0.76, 95% CI 0.59-0.86) and SAD (reader 1: 0.76, 95% CI 0.64-0.88; reader 2: 0.75, 95% CI 0.62-0.89). The findings for LAD (reader 1: 0.73, 95% CI 0.60-0.86; reader 2: 0.71, 95% CI 0.57-0.85) and maximal 3D diameter (reader 1: 0.70, 95% CI 0.53-0.86; reader 2: 0.76, 95% CI 0.50-0.80) were substantially lower, and for elongation (reader 1: 0.65, 95% CI 0.50-0.79; reader 2: 0.66, 95% CI 0.52-0.81) significantly lower (p < 0.05). Conclusion: Semi-automatic analysis of lymph nodes in malignant melanoma is supported by high segmentation quality and reproducibility. As compared to established SAD, semi-automatic lymph node volumetry does not have an additive role for categorizing lymph nodes as normal or metastatic in malignant melanoma.

  4. Semi-automatic identification of punching areas for tissue microarray building: the tubular breast cancer pilot study

    Directory of Open Access Journals (Sweden)

    Beltrame Francesco

    2010-11-01

    Background: Tissue MicroArray technology aims to perform immunohistochemical staining on hundreds of different tissue samples simultaneously. It allows faster analysis, considerably reducing the costs incurred in staining. A time-consuming phase of the methodology is the selection of tissue areas within the paraffin blocks: no utilities have been developed for the identification of the areas to be punched from the donor block and assembled in the recipient block. Results: The presented work supports, in the specific case of a primary subtype of breast cancer (tubular breast cancer), the semi-automatic discrimination and localization of normal and pathological regions within the tissues. The diagnosis is performed by analysing specific morphological features of the sample, such as the absence of a double layer of cells around the lumen and the decay of a regular glands-and-lobules structure. These features are analysed using an algorithm which extracts morphological parameters from images and compares them to experimentally validated threshold values. Results are satisfactory, since in most of the cases the automatic diagnosis matches the response of the pathologists. In particular, on a total of 1296 sub-images showing normal and pathological areas of breast specimens, algorithm accuracy, sensitivity and specificity are 89%, 84% and 94%, respectively. Conclusions: The proposed work is a first attempt to demonstrate that automation in the Tissue MicroArray field is feasible and that it can represent an important tool for scientists to cope with this high-throughput technique.

  5. A semi-automatic annotation tool for cooking video

    Science.gov (United States)

    Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe

    2013-03-01

    In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges, such as frequent occlusions and changes in food appearance. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools, and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.

  6. Resolving Carbonate Platform Geometries on the Island of Bonaire, Caribbean Netherlands through Semi-Automatic GPR Facies Classification

    Science.gov (United States)

    Bowling, R. D.; Laya, J. C.; Everett, M. E.

    2018-05-01

    The study of exposed carbonate platforms provides observational constraints on regional tectonics and sea-level history. In this work, Miocene-aged carbonate platform units of the Seroe Domi Formation are investigated on the island of Bonaire in the southern Caribbean. Ground penetrating radar (GPR) was used to probe near-surface structural geometries associated with these lithologies. The single cross-island transect described herein allowed for continuous mapping of geologic structures on kilometer length scales. Numerical analysis was applied to the data in the form of k-means clustering of structure-parallel vectors derived from image structure tensors. This methodology enables radar facies along the survey transect to be semi-automatically mapped. The results provide subsurface evidence to support previous surficial and outcrop observations, and reveal complex stratigraphy within the platform. From the GPR data analysis, progradational clinoform geometries were observed on the northeast side of the island, which is consistent with the tectonic and depositional trends of the region. Furthermore, several leeward-side radar facies are identified which correlate to environments of deposition conducive to dolomitization via reflux mechanisms.
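
    A minimal sketch of the pipeline named above, structure-tensor orientations fed to k-means, might look as follows; the smoothing scale, the use of coherence as an extra feature and k = 3 are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def radar_facies_labels(section, sigma=2.0, k=3):
    """Cluster GPR pixels into k facies from structure-tensor orientations."""
    gy, gx = np.gradient(section.astype(float))
    # smoothed structure tensor components
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # local orientation and how strongly it is expressed (coherence)
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
    coherence = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2) / (jxx + jyy + 1e-12)
    # cos/sin of the doubled angle handles the 180-degree periodicity
    feats = np.stack([np.cos(2 * theta).ravel(),
                      np.sin(2 * theta).ravel(),
                      coherence.ravel()], axis=1)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    return labels.reshape(section.shape)

section = np.random.rand(128, 512)   # stand-in radargram
facies = radar_facies_labels(section)
```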

  7. Semi-automatic dimension and density measuring system for UO2 pellets

    Energy Technology Data Exchange (ETDEWEB)

    Subramanian, K S; Shyam, P G; Muralidhara Rao, J V; Laxminarayana, B; Suryaprakash, M [Nuclear Fuel Complex, Hyderabad (India)

    1994-12-31

    Parameters such as the diameter, length, L/D ratio and sintered density of cylindrical UO2 pellets are critical in both PHWR and BWR fuels. A semi-automatic system was developed by interfacing a laser micrometer and a digital electronic balance with a PC-XT and incorporating menu-driven, user-friendly software developed in-house. The advantages are data storage and acquisition, statistical analysis with histograms, and print-out of acquired and computed values with the respective set-up limits, along with production details such as lot number, press number and furnace number. This paper describes the details of the above system and the software. 3 figs., 2 ills.

  8. Semi-automatic creation and exploitation of competence ontologies for trend aware profiling, matching and planning

    Directory of Open Access Journals (Sweden)

    H. Ulrich Hoppe

    2013-03-01

    Human resource managers are confronted with the problem that they have to fulfil the enterprise's competence needs either by developing their current staff or by recruiting new employees. In both cases, decisions must be made about whom to select for the new position and, more importantly, about which competences are crucial for future success. This is especially true for highly dynamic industries like the IT industry. This article presents our work from the KoPIWA project in the Digital Economy. Our approach is based on a conceptual model that encompasses the market level, the social context and the relations between competences. This model is the foundation for the ontology-based decision support system for human resource managers presented in this article. To semi-automatically create and update the competence ontology, methods from data mining, social network analysis and information retrieval are employed. The results of these methods with regard to recruiting and learning processes are presented.

  9. Semi-automatic detection and correction of body organ motion, particularly cardiac motion in SPECT studies

    International Nuclear Information System (INIS)

    Quintana, J.C.; Caceres, F.; Vargas, P.

    2002-01-01

    patient and artificially imposed). The method is fast (<20 s) and robust as compared with manual or other semi-automatic detection of body organ motions in nuclear medicine studies. Conclusion: A fast and robust semi-automatic patient motion detection and correction method for SPECT studies has been developed.

  10. Semi-automatic classification of skeletal morphology in genetically altered mice using flat-panel volume computed tomography.

    Directory of Open Access Journals (Sweden)

    Christian Dullin

    2007-07-01

    Rapid progress in exploring the human and mouse genomes has resulted in the generation of a multitude of mouse models to study gene functions in their biological context. However, effective screening methods that allow rapid noninvasive phenotyping of transgenic and knockout mice are still lacking. To identify murine models with bone alterations in vivo, we used flat-panel volume computed tomography (fpVCT) for high-resolution 3-D imaging and developed an algorithm with a computational intelligence system. First, we tested the accuracy and reliability of this approach by imaging discoidin domain receptor 2 (DDR2)-deficient mice, which display distinct skull abnormalities as shown by comparative landmark-based analysis. High-contrast fpVCT data of the skull with 200 µm isotropic resolution and 8-s scan time allowed segmentation and computation of significant shape features as well as visualization of morphological differences. The application of a trained artificial neural network to these datasets permitted a semi-automatic and highly accurate phenotype classification of DDR2-deficient compared to C57BL/6 wild-type mice. Even heterozygous DDR2 mice with only subtle phenotypic alterations were correctly classified by fpVCT imaging and identified as a new class. In addition, we successfully applied the algorithm to classify knockout mice lacking the DDR1 gene, which have no apparent skull deformities. Thus, this new method seems to be a potential tool to identify novel mouse phenotypes with skull changes from transgenic and knockout mice generated by random mutagenesis as well as from genetic models; for this purpose, however, new neural networks have to be created and trained. In summary, the combination of fpVCT images with artificial neural networks provides a reliable, novel, rapid, cost-effective, and noninvasive primary screening tool to detect skeletal phenotypes in mice.

  11. A simple semi-automatic approach for land cover classification from multispectral remote sensing imagery.

    Directory of Open Access Journals (Sweden)

    Dong Jiang

    Land cover data represent a fundamental data source for various types of scientific research. The classification of land cover based on satellite data is a challenging task, and an efficient classification method is needed. In this study, an automatic scheme is proposed for the classification of land use using multispectral remote sensing images, based on change detection and a semi-supervised classifier. The satellite image can be automatically classified using only the prior land cover map and existing images; human involvement is therefore reduced to a minimum, ensuring the operability of the method. The method was tested in the Qingpu District of Shanghai, China. Using Environment Satellite 1 (HJ-1) images of 2009 with 30 m spatial resolution, the areas were classified into five main types of land cover based on previous land cover data and spectral features. The results agreed well with validation land cover maps, with a Kappa value of 0.79 and statistical area biases in proportion of less than 6%. This study proposes a simple semi-automatic approach for land cover classification using prior maps with satisfactory accuracy, integrating the accuracy of visual interpretation and the performance of automatic classification methods. The method can conveniently be used for land cover mapping in areas lacking ground reference information or for identifying rapid variation of land cover regions (such as rapid urbanization).
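
    A toy version of the prior-map-plus-change-detection idea may clarify the workflow; the spectral-distance test, its threshold and the nearest-centroid labelling below are simplifying assumptions standing in for the paper's semi-supervised classifier.

```python
import numpy as np

def update_land_cover(prior_labels, img_old, img_new, change_thresh=0.15):
    """Pixels whose spectra changed little keep their prior label and act
    as training samples; changed pixels get the label of the nearest class
    centroid. Threshold and distance measure are illustrative assumptions."""
    diff = np.linalg.norm(img_new - img_old, axis=2)   # per-pixel spectral change
    unchanged = diff < change_thresh
    labels = prior_labels.copy()
    classes = np.unique(prior_labels)
    # class centroids estimated from the reliable (unchanged) pixels
    centroids = np.stack([img_new[unchanged & (prior_labels == c)].mean(axis=0)
                          for c in classes])
    changed_pix = img_new[~unchanged]                  # (n_changed, n_bands)
    d = np.linalg.norm(changed_pix[:, None, :] - centroids[None], axis=2)
    labels[~unchanged] = classes[np.argmin(d, axis=1)]
    return labels

old = np.random.rand(100, 100, 4)                      # stand-in 4-band image, date 1
new = old + np.random.rand(100, 100, 4) * 0.1          # date 2, mild changes
prior = np.random.randint(0, 5, (100, 100))            # prior land cover map
updated = update_land_cover(prior, old, new)
```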

  12. Analysis of carotid artery plaque and wall boundaries on CT images by using a semi-automatic method based on level set model

    International Nuclear Information System (INIS)

    Saba, Luca; Sannia, Stefano; Ledda, Giuseppe; Gao, Hao; Acharya, U.R.; Suri, Jasjit S.

    2012-01-01

    The purpose of this study was to evaluate the potential of a semi-automated technique for the detection and measurement of carotid artery plaque. Twenty-two consecutive patients (18 males, 4 females; mean age 62 years) examined with MDCTA from January 2011 to March 2011 were included in this retrospective study. The carotid arteries were examined with a 16-multi-detector-row CT system, and for each patient the most diseased carotid was selected. In the first phase, the carotid plaque was identified and one experienced radiologist manually traced the inner and outer boundaries using the polyline and radial distance methods (PDM and RDM, respectively). In the second phase, the carotid inner and outer boundaries were traced with an automated algorithm, the level set method (LSM). Data were compared using Pearson rho correlation, Bland-Altman analysis, and regression. A total of 715 slices were analyzed. The mean thickness of the plaque using the reference PDM was 1.86 mm versus 1.96 mm for the LSM-PDM; using the reference RDM it was 2.06 mm versus 2.03 mm for the LSM-RDM. The correlation values between the references, the LSM, the PDM and the RDM were 0.8428, 0.9921, 0.745 and 0.6425. Bland-Altman analysis demonstrated a very good agreement, in particular with the RDM. The results of our study indicate that the LSM can automatically measure the thickness of the plaque and that the best results are obtained with the RDM. Our results suggest that advanced computer-based algorithms can identify and trace the plaque boundaries like an experienced human reader. (orig.)
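
    To make the level-set idea concrete, the following is a deliberately minimal Chan-Vese-style evolution; the LSM used in the study includes regularization and stopping criteria omitted here, so this is only an illustration of how a contour settles on intensity boundaries.

```python
import numpy as np

def chan_vese_level_set(image, iterations=200, dt=0.5):
    """Minimal Chan-Vese-style level-set evolution (region term only, no
    curvature regularization). phi < 0 marks the inside of the contour."""
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    # initialize phi as the signed distance to a centered circle
    phi = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2) - min(h, w) / 4
    for _ in range(iterations):
        inside, outside = phi < 0, phi >= 0
        c1 = image[inside].mean() if inside.any() else 0.0    # mean inside
        c2 = image[outside].mean() if outside.any() else 0.0  # mean outside
        # pixels resembling c1 are pushed inside, those resembling c2 outside
        phi += dt * ((image - c1) ** 2 - (image - c2) ** 2)
    return phi < 0

img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0   # stand-in bright structure
mask = chan_vese_level_set(img + 0.05 * np.random.rand(64, 64))
```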

  13. Analysis of carotid artery plaque and wall boundaries on CT images by using a semi-automatic method based on level set model

    Energy Technology Data Exchange (ETDEWEB)

    Saba, Luca; Sannia, Stefano; Ledda, Giuseppe [University of Cagliari - Azienda Ospedaliero Universitaria di Cagliari, Department of Radiology, Monserrato, Cagliari (Italy); Gao, Hao [University of Strathclyde, Signal Processing Centre for Excellence in Signal and Image Processing, Department of Electronic and Electrical Engineering, Glasgow (United Kingdom); Acharya, U.R. [Ngee Ann Polytechnic University, Department of Electronics and Computer Engineering, Clementi (Singapore); Suri, Jasjit S. [Biomedical Technologies Inc., Denver, CO (United States); Idaho State University (Aff.), Pocatello, ID (United States)

    2012-11-15

    The purpose of this study was to evaluate the potential of a semi-automated technique for the detection and measurement of carotid artery plaque. Twenty-two consecutive patients (18 males, 4 females; mean age 62 years) examined with MDCTA from January 2011 to March 2011 were included in this retrospective study. The carotid arteries were examined with a 16-multi-detector-row CT system, and for each patient the most diseased carotid was selected. In the first phase, the carotid plaque was identified and one experienced radiologist manually traced the inner and outer boundaries using the polyline and radial distance methods (PDM and RDM, respectively). In the second phase, the carotid inner and outer boundaries were traced with an automated algorithm, the level set method (LSM). Data were compared using Pearson rho correlation, Bland-Altman analysis, and regression. A total of 715 slices were analyzed. The mean thickness of the plaque using the reference PDM was 1.86 mm versus 1.96 mm for the LSM-PDM; using the reference RDM it was 2.06 mm versus 2.03 mm for the LSM-RDM. The correlation values between the references, the LSM, the PDM and the RDM were 0.8428, 0.9921, 0.745 and 0.6425. Bland-Altman analysis demonstrated a very good agreement, in particular with the RDM. The results of our study indicate that the LSM can automatically measure the thickness of the plaque and that the best results are obtained with the RDM. Our results suggest that advanced computer-based algorithms can identify and trace the plaque boundaries like an experienced human reader. (orig.)

  14. Validation of a semi-automatic protocol for the assessment of the tear meniscus central area based on open-source software

    Science.gov (United States)

    Pena-Verdeal, Hugo; Garcia-Resua, Carlos; Yebra-Pimentel, Eva; Giraldez, Maria J.

    2017-08-01

    Purpose: Different lower tear meniscus parameters can be clinically assessed in dry eye diagnosis. The aim of this study was to propose and analyse the variability of a semi-automatic method for measuring the lower tear meniscus central area (TMCA) using open source software. Material and methods: In a group of 105 subjects, one video of the lower tear meniscus after fluorescein instillation was generated by a digital camera attached to a slit lamp. A short light beam (3x5 mm) with moderate illumination in the central portion of the meniscus (6 o'clock) was used. Images were extracted from each video by a masked observer. Using open source software based on Java (NIH ImageJ), a further observer measured, in a masked and randomized order, the TMCA in the area illuminated by the short light beam by two methods: (1) a manual method, in which the TMCA was measured by hand on the images; (2) a semi-automatic method, in which the TMCA images were converted to 8-bit binary images, holes inside the resulting shape were filled, and the area of the isolated shape was obtained. Finally, the manual and semi-automatic measurements were compared. Results: A paired t-test showed no statistical difference between the results of the two techniques (p = 0.102). Pearson correlation between techniques showed a significant, near-perfect positive correlation (r = 0.99; p ...). Conclusions: This study presented a useful tool to objectively measure the frontal central area of the meniscus in photographs with free open source software.
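
    The described binarize / fill-holes / measure-area pipeline maps naturally onto a few scipy.ndimage calls; in the sketch below the threshold and the pixel-to-mm2 calibration are placeholders, not the study's values.

```python
import numpy as np
from scipy import ndimage

def tear_meniscus_area(gray, threshold=128, mm2_per_pixel=0.0004):
    """Binarize, fill holes, and measure the largest isolated shape,
    mirroring the described ImageJ workflow."""
    binary = gray > threshold                     # 8-bit-style binarization
    filled = ndimage.binary_fill_holes(binary)    # fill holes inside the shape
    labels, n = ndimage.label(filled)             # isolate connected shapes
    if n == 0:
        return 0.0
    sizes = ndimage.sum(filled, labels, range(1, n + 1))
    return sizes.max() * mm2_per_pixel            # area of the isolated shape

frame = np.random.randint(0, 256, (300, 400), dtype=np.uint8)  # stand-in still
print(tear_meniscus_area(frame), "mm^2")
```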

  15. Conceptual design of semi-automatic wheelbarrow to overcome ergonomics problems among palm oil plantation workers

    Science.gov (United States)

    Nawik, N. S. M.; Deros, B. M.; Rahman, M. N. A.; Sukadarin, E. H.; Nordin, N.; Tamrin, S. B. M.; Bakar, S. A.; Norzan, M. L.

    2015-12-01

    An ergonomics problem is one of the main issues faced by palm oil plantation workers, especially during the harvesting and collecting of fresh fruit bunches (FFB). The intensive manual handling and labour activities involved have been associated with a high prevalence of musculoskeletal disorders (MSDs) among palm oil plantation workers. New and safe technology in the machines and equipment used in palm oil plantations is very important to help workers reduce risks and injuries while working. The aim of this research is to improve the design of a wheelbarrow suitable for workers in small oil palm plantations. The wheelbarrow design was drawn using CATIA ergonomic features. The ergonomics assessment was performed by comparison with the existing wheelbarrow design. A conceptual design was developed based on the problems reported by workers. The analysis of these problems finally resulted in a concept design for an ergonomic semi-automatic wheelbarrow that is safe and suitable for palm oil plantation workers.

  16. Construction and use of an optical semi-automatic titrator employing the technique of reflectance photometry

    International Nuclear Information System (INIS)

    Hwang, Hoon

    2001-01-01

    An optical semi-automatic titrator was constructed employing the technique of reflectance spectrometry and was tested for the determination of the end points of acid-base, precipitation, and EDTA titrations. Since the titrator, built on the principle of reflectance spectrometry, could be successfully used even for determining the end point in precipitation titrations, where solid particles are formed during the titration process, it appears feasible to design and build a completely automated optical titrator based on the current findings.

  17. Research on Semi-automatic Bomb Fetching for an EOD Robot

    Directory of Open Access Journals (Sweden)

    Qian Jun

    2008-11-01

    An EOD robot system, SUPER-PLUS, which has a novel semi-automatic bomb fetching function, is presented in this paper. With limited human support, SUPER-PLUS scans the cluttered environment with a wrist-mounted laser distance sensor and plans a collision-free path for the manipulator to fetch the bomb. The modeling of the manipulator, bomb and environment, the C-space map, path planning and the operation procedure are introduced in detail. The semi-automatic bomb fetching function has greatly improved the operation performance of the EOD robot.

  18. Research on Semi-Automatic Bomb Fetching for an EOD Robot

    Directory of Open Access Journals (Sweden)

    Zeng Jian-Jun

    2007-06-01

    An EOD robot system, SUPER-PLUS, which has a novel semi-automatic bomb fetching function, is presented in this paper. With limited human support, SUPER-PLUS scans the cluttered environment with a wrist-mounted laser distance sensor and plans a collision-free path for the manipulator to fetch the bomb. The modeling of the manipulator, bomb and environment, the C-space map, path planning and the operation procedure are introduced in detail. The semi-automatic bomb fetching function has greatly improved the operation performance of the EOD robot.

  19. Accuracy and reproducibility of a novel semi-automatic segmentation technique for MR volumetry of the pituitary gland

    International Nuclear Information System (INIS)

    Renz, Diane M.; Hahn, Horst K.; Rexilius, Jan; Schmidt, Peter; Lentschig, Markus; Pfeil, Alexander; Sauner, Dieter; Fitzek, Clemens; Mentzel, Hans-Joachim; Kaiser, Werner A.; Reichenbach, Juergen R.; Boettcher, Joachim

    2011-01-01

    Although several reports on volumetric determination of the pituitary gland exist, volumetry has so far been performed by indirect measurements or by manual tracing of the gland's boundaries. The purpose of this study was to evaluate the accuracy and reproducibility of a novel semi-automatic MR-based segmentation technique. In an initial technical investigation, T1-weighted 3D native magnetization prepared rapid gradient echo sequences (1.5 T) with 1 mm isotropic voxel size achieved high reliability and were utilised in different in vitro and in vivo studies. The computer-assisted segmentation technique was based on an interactive watershed transform after resampling and gradient computation. Volumetry was performed by three observers with different software and neuroradiologic experience, evaluating phantoms of known volume (0.3, 0.9 and 1.62 ml) and healthy subjects (26 to 38 years; overall 135 volumetries). High accuracy of the volumetry was shown by phantom analysis; measurement errors were <4% with a mean error of 2.2%. In vitro, reproducibility was also promising, with intra-observer variability of 0.7% for observer 1 and 0.3% for observers 2 and 3; mean inter-observer variability was 1.2% in vitro. In vivo, scan-rescan, intra-observer and inter-observer variability showed mean values of 3.2%, 1.8% and 3.3%, respectively. Unifactorial analysis of variance demonstrated no significant differences between pituitary volumes for various MR scans or software calculations in the healthy study groups (p > 0.05). The analysed semi-automatic MR volumetry of the pituitary gland is a valid, reliable and fast technique. Possible clinical applications are hyperplasia or atrophy of the gland in pathological circumstances, either in a single assessment or by monitoring in follow-up studies. (orig.)

  20. Semi-Automatic Construction of Skeleton Concept Maps from Case Judgments

    NARCIS (Netherlands)

    Boer, A.; Sijtsma, B.; Winkels, R.; Lettieri, N.

    2014-01-01

    This paper proposes an approach to generating Skeleton Conceptual Maps (SCM) semi-automatically from legal case documents provided by the United Kingdom's Supreme Court. SCM are incomplete knowledge representations for the purpose of scaffolding learning. The proposed system intends to provide students with a tool to pre-process text and to extract knowledge from documents in a time-saving manner.

  1. A Semi-Automatic, Remote-Controlled Video Observation System for Transient Luminous Events

    DEFF Research Database (Denmark)

    Allin, Thomas Højgaard; Neubert, Torsten; Laursen, Steen

    2003-01-01

    In support of global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed...

  2. Method and apparatus for mounting or dismounting a semi-automatic twist-lock

    NARCIS (Netherlands)

    Klein Breteler, A.J.; Tekeli, G.

    2001-01-01

    The invention relates to a method for mounting or dismounting a semi-automatic twistlock at a corner of a deck container, wherein the twistlock is mounted or dismounted on a quayside where a ship may be docked for loading or unloading, in a loading or unloading terminal installed on the quayside,

  3. Semi-automatic ultrasonic inspection of PWR upper internal immersed components

    International Nuclear Information System (INIS)

    Dombret, P.; Coquette, A.; Cermak, J.; Verspeelt, D.

    1985-01-01

    The present paper describes the characteristics of a semi-automatic ultrasonic inspection system. The components inspected are the so-called flexures, small pins located at the upper part of the control rod tube-guides, some of which happened to break in a few Westinghouse-type PWRs. Inspection results and other system capabilities are also mentioned.

  4. Semi-automatic bubble counting system for superheated droplet detectors

    International Nuclear Information System (INIS)

    Reina, Luiz C.; Bellido, Luis F.; Ramos, Paulo R.; Silva, Ademir X. da; Facure, Alessandro; Dantas, Jose E.R.

    2009-01-01

    Neutron dose rate measurements are normally performed by means of PADC, CR-39 and TLD detectors. However, none of these devices can give an instant reading of the neutron dose. Recently, a new kind of detector has been developed, based on the formation of tiny drops in a superheated liquid suspended in a polymer or gel solution, called the superheated droplet detector (SDD) or bubble detector (BD), which has no response to gamma radiation. This work describes the experimental setup and the procedures developed for acquiring and processing digital images obtained with the bubble detector spectrometer (BDS), developed by Bubble Technology Industries, for personal neutron dosimetry and/or neutron energy fluence measurements in nuclear facilities. The results of the neutron measurements obtained during F-18 production at the RDS-111 cyclotron are presented. These were the first neutron measurements with this type of BDS detector at a particle accelerator facility in Brazil, and they were very important for estimating the neutron dose rate received by occupationally exposed individuals. (author)
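
    Counting drops in such images typically reduces to thresholding plus connected-component labelling; the following sketch is a generic illustration (the threshold and size cutoff are assumptions, not the BDS processing chain).

```python
import numpy as np
from scipy import ndimage

def count_bubbles(gray, thresh=80, min_pixels=20):
    """Count bubble-like dark blobs in a detector image: threshold,
    label connected components, discard speckle below a size cutoff."""
    binary = gray < thresh                       # bubbles darker than the gel
    labels, n = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    return int((sizes >= min_pixels).sum())

img = np.full((200, 200), 200, dtype=np.uint8)
img[50:60, 50:60] = 30; img[120:128, 90:98] = 20   # two synthetic bubbles
print(count_bubbles(img))                          # -> 2
```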

  5. Preliminary Investigation on the Effects of Shockwaves on Water Samples Using a Portable Semi-Automatic Shocktube

    Science.gov (United States)

    Wessley, G. Jims John

    2017-10-01

    The propagation of shock waves through any medium results in an instantaneous increase in pressure and temperature behind the shock wave. The scope for utilizing this sudden rise in pressure and temperature in new industrial, biological and commercial areas has been explored, and the opportunities are tremendous. This paper presents the design and testing of a portable semi-automatic shock tube on water samples mixed with salt. The preliminary analysis shows encouraging results, as the salinity of the water samples was reduced by up to 5% when bombarded with 250 shocks generated using a pressure ratio of 2.5. Paper used for normal printing served as the diaphragm to generate the shocks. The impact of shocks of much higher intensity, obtained using different diaphragms, should lead to a greater reduction in the salinity of the sea water, thus leading to the production of potable water from saline water, which is the need of the hour.

  6. Design and development of semi-automatic radiation test and calibration facility

    International Nuclear Information System (INIS)

    Yadav, Ashok Kumar; Chouhan, V.K.; Narayan, Pradeep

    2008-01-01

    A semi-automatic gamma radiation test and calibration facility has been designed, developed and commissioned at the Defence Laboratory Jodhpur (DLJ). The facility comprises a medium and high dose rate range setup using a 30 Ci Cobalt-60 source in a portable, remotely operated Techops camera and a 15000 Ci 60Co source in a tele-therapy machine. The radiation instruments can be positioned at any desired location using a computer-controlled positioner having three translational motions and one rotational motion. User-friendly software helps in positioning the Device Under Test (DUT) at any desired dose rate or distance and in acquiring the data automatically. The servo- and stepper-motor-controlled positioner helps in achieving the precision and accuracy required for the radiation calibration of the instruments. This paper describes the semi-automatic radiation test and calibration facility commissioned at DLJ. (author)

  7. Accuracy and reproducibility of a novel semi-automatic segmentation technique for MR volumetry of the pituitary gland

    Energy Technology Data Exchange (ETDEWEB)

    Renz, Diane M. [Charite University Medicine Berlin, Campus Virchow Clinic, Department of Radiology, Berlin (Germany); Hahn, Horst K.; Rexilius, Jan [Institute for Medical Image Computing, Fraunhofer MEVIS, Bremen (Germany); Schmidt, Peter [Friedrich-Schiller-University, Jena University Hospital, Institute of Diagnostic and Interventional Radiology, Department of Neuroradiology, Jena (Germany); Lentschig, Markus [MR- and PET/CT Centre Bremen, Bremen (Germany); Pfeil, Alexander [Friedrich-Schiller-University, Jena University Hospital, Department of Internal Medicine III, Jena (Germany); Sauner, Dieter [St. Georg Clinic Leipzig, Hospital Hubertusburg, Department of Radiology, Wermsdorf (Germany); Fitzek, Clemens [Asklepios Clinic Brandenburg, Department of Radiology and Neuroradiology, Brandenburg an der Havel (Germany); Mentzel, Hans-Joachim [Friedrich-Schiller-University, Jena University Hospital, Institute of Diagnostic and Interventional Radiology, Department of Pediatric Radiology, Jena (Germany); Kaiser, Werner A. [Friedrich-Schiller-University, Jena University Hospital, Institute of Diagnostic and Interventional Radiology, Jena (Germany); Reichenbach, Juergen R. [Friedrich-Schiller-University, Jena University Hospital, Medical Physics Group, Institute of Diagnostic and Interventional Radiology, Jena (Germany); Boettcher, Joachim [SRH Clinic Gera, Institute of Diagnostic and Interventional Radiology, Gera (Germany)

    2011-04-15

    Although several reports on volumetric determination of the pituitary gland exist, volumetry has so far been performed by indirect measurements or by manual tracing of the gland's boundaries. The purpose of this study was to evaluate the accuracy and reproducibility of a novel semi-automatic MR-based segmentation technique. In an initial technical investigation, T1-weighted 3D native magnetization prepared rapid gradient echo sequences (1.5 T) with 1 mm isotropic voxel size achieved high reliability and were utilised in different in vitro and in vivo studies. The computer-assisted segmentation technique was based on an interactive watershed transform after resampling and gradient computation. Volumetry was performed by three observers with different software and neuroradiologic experience, evaluating phantoms of known volume (0.3, 0.9 and 1.62 ml) and healthy subjects (26 to 38 years; overall 135 volumetries). High accuracy of the volumetry was shown by phantom analysis; measurement errors were <4% with a mean error of 2.2%. In vitro, reproducibility was also promising, with intra-observer variability of 0.7% for observer 1 and 0.3% for observers 2 and 3; mean inter-observer variability was 1.2% in vitro. In vivo, scan-rescan, intra-observer and inter-observer variability showed mean values of 3.2%, 1.8% and 3.3%, respectively. Unifactorial analysis of variance demonstrated no significant differences between pituitary volumes for various MR scans or software calculations in the healthy study groups (p > 0.05). The analysed semi-automatic MR volumetry of the pituitary gland is a valid, reliable and fast technique. Possible clinical applications are hyperplasia or atrophy of the gland in pathological circumstances, either in a single assessment or by monitoring in follow-up studies. (orig.)
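
    The core of the described technique, gradient computation followed by a marker-based (interactive) watershed, can be sketched with scikit-image; the hard-coded seed coordinates below stand in for the operator's interactive input.

```python
import numpy as np
from skimage.segmentation import watershed

def segment_from_seeds(volume, gland_seed, background_seed):
    """Marker-based watershed on the gradient magnitude, in the spirit of
    the interactive watershed transform described above."""
    grads = np.gradient(volume.astype(float))            # gradient computation
    gradient_mag = np.linalg.norm(np.stack(grads), axis=0)
    markers = np.zeros(volume.shape, dtype=int)
    markers[gland_seed] = 1                              # operator seed: gland
    markers[background_seed] = 2                         # operator seed: background
    labels = watershed(gradient_mag, markers)
    return labels == 1                                   # boolean gland mask

vol = np.random.rand(32, 64, 64)                         # stand-in 1 mm isotropic block
mask = segment_from_seeds(vol, (16, 32, 32), (0, 0, 0))
print(mask.sum(), "voxels, i.e. ~mm^3 at 1 mm isotropic resolution")
```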

  8. Semi-Automatic Construction of Skeleton Concept Maps from Case Judgments

    OpenAIRE

    Boer, A.; Sijtsma, B.; Winkels, R.; Lettieri, N.

    2014-01-01

    This paper proposes an approach to generating Skeleton Conceptual Maps (SCM) semi-automatically from legal case documents provided by the United Kingdom's Supreme Court. SCM are incomplete knowledge representations for the purpose of scaffolding learning. The proposed system intends to provide students with a tool to pre-process text and to extract knowledge from documents in a time-saving manner. A combination of natural language processing methods and proposition extraction algorithms are u...

  9. Hera-FFX: a Firefox add-on for Semi-automatic Web Accessibility Evaluation

    OpenAIRE

    Fuertes Castro, José Luis; González, Ricardo; Gutiérrez, Emmanuelle; Martínez Normand, Loïc

    2009-01-01

    Website accessibility evaluation is a complex task requiring a combination of human expertise and software support. There are several online and offline tools to support the manual web accessibility evaluation process. However, they all have some weaknesses because none of them includes all the desired features. In this paper we present Hera-FFX, an add-on for the Firefox web browser that supports semi-automatic web accessibility evaluation.

  10. Semi-automatic tool to ease the creation and optimization of GPU programs

    DEFF Research Database (Denmark)

    Jepsen, Jacob

    2014-01-01

    We present a tool that reduces the development time of GPU-executable code. We implement a catalogue of common optimizations specific to the GPU architecture. Through the tool, the programmer can semi-automatically transform a computationally-intensive code section into GPU-executable form ... of the transformations can be performed automatically, which makes the tool usable for both novices and experts in GPU programming.

  11. Semi-Automatic Electronic Stent Register: a novel approach to preventing ureteric stents lost to follow up.

    Science.gov (United States)

    Macneil, James W H; Michail, Peter; Patel, Manish I; Ashbourne, Julie; Bariol, Simon V; Ende, David A; Hossack, Tania A; Lau, Howard; Wang, Audrey C; Brooks, Andrew J

    2017-10-01

    Ureteric stents are indispensable tools in modern urology; however, the risk of them not being followed up once inserted poses medical and medico-legal risks. Stent registers are a common solution to mitigate this risk, but manual registers are logistically challenging, especially for busy units. Western Sydney Local Health District developed a novel Semi-Automatic Electronic Stent Register (SAESR) utilizing billing information to track stent insertions. To determine the utility of this system, an audit was conducted comparing the 6 months before the introduction of the register to the first 6 months of the register. In the first 6 months of the register, 457 stents were inserted. At the time of writing, two of these are severely delayed for removal, representing a rate of 0.4%. In the 6 months immediately preceding the introduction of the register, 497 stents were inserted, and six were either missed completely or severely delayed in their removal, representing a rate of 1.2%. A non-inferiority analysis found the register period to be no worse than the results achieved before the introduction of the register. The SAESR allowed us to improve upon our better-than-expected rate of stents lost to follow-up or severely delayed. We demonstrated non-inferiority in the rate of lost or severely delayed stents, and a number of other advantages, including savings in personnel costs. The semi-automatic register represents an effective way of reducing the risk associated with a common urological procedure, and we believe this methodology could be implemented elsewhere. © 2017 Royal Australasian College of Surgeons.

  12. Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans

    International Nuclear Information System (INIS)

    Lassen, B C; Kuhnigk, J-M; Van Ginneken, B; Van Rikxoort, E M; Jacobs, C

    2015-01-01

    The malignancy of lung nodules is most often assessed by analyzing changes of the nodule diameter in follow-up scans. A recent study showed that comparing the volume or the mass of a nodule over time is much more significant than comparing the diameter. Since the survival rate is higher when the disease is still at an early stage, it is important to detect the growth rate as soon as possible; manual segmentation of a volume, however, is time-consuming. Whereas several well-evaluated methods exist for the segmentation of solid nodules, less work has been done on subsolid nodules, which actually show a higher malignancy rate than solid nodules. In this work we present a fast, semi-automatic method for the segmentation of subsolid nodules. As minimal user interaction, the method expects a user-drawn stroke on the largest diameter of the nodule. First, a threshold-based region growing is performed based on intensity analysis of the nodule region and the surrounding parenchyma. In the next step, the chest wall is removed by a combination of connected-component analysis and convex hull calculation. Finally, attached vessels are detached by morphological operations. The method was evaluated on all nodules of the publicly available LIDC/IDRI database that were manually segmented and rated as non-solid or part-solid by four radiologists (Dataset 1) and three radiologists (Dataset 2). For these 59 nodules, the Jaccard index for the agreement of the proposed method with the manual reference segmentations was 0.52/0.50 (Dataset 1/Dataset 2), compared to an inter-observer agreement of the manual segmentations of 0.54/0.58 (Dataset 1/Dataset 2). Furthermore, the inter-observer agreement using the proposed method (i.e. different input strokes) was analyzed and gave a Jaccard index of 0.74/0.74 (Dataset 1/Dataset 2). The presented method provides satisfactory segmentation results with minimal observer effort in minimal time and can reduce the inter-observer variability for the segmentation of subsolid nodules.

  13. Robust semi-automatic segmentation of pulmonary subsolid nodules in chest computed tomography scans

    Science.gov (United States)

    Lassen, B. C.; Jacobs, C.; Kuhnigk, J.-M.; van Ginneken, B.; van Rikxoort, E. M.

    2015-02-01

    The malignancy of lung nodules is most often assessed by analyzing changes of the nodule diameter in follow-up scans. A recent study showed that comparing the volume or the mass of a nodule over time is much more significant than comparing the diameter. Since the survival rate is higher when the disease is still at an early stage, it is important to detect the growth rate as soon as possible; manual segmentation of a volume, however, is time-consuming. Whereas several well-evaluated methods exist for the segmentation of solid nodules, less work has been done on subsolid nodules, which actually show a higher malignancy rate than solid nodules. In this work we present a fast, semi-automatic method for the segmentation of subsolid nodules. As minimal user interaction, the method expects a user-drawn stroke on the largest diameter of the nodule. First, a threshold-based region growing is performed based on intensity analysis of the nodule region and the surrounding parenchyma. In the next step, the chest wall is removed by a combination of connected-component analysis and convex hull calculation. Finally, attached vessels are detached by morphological operations. The method was evaluated on all nodules of the publicly available LIDC/IDRI database that were manually segmented and rated as non-solid or part-solid by four radiologists (Dataset 1) and three radiologists (Dataset 2). For these 59 nodules, the Jaccard index for the agreement of the proposed method with the manual reference segmentations was 0.52/0.50 (Dataset 1/Dataset 2), compared to an inter-observer agreement of the manual segmentations of 0.54/0.58 (Dataset 1/Dataset 2). Furthermore, the inter-observer agreement using the proposed method (i.e. different input strokes) was analyzed and gave a Jaccard index of 0.74/0.74 (Dataset 1/Dataset 2). The presented method provides satisfactory segmentation results with minimal observer effort in minimal time and can reduce the inter-observer variability for the segmentation of subsolid nodules.
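
    A stripped-down version of the stroke-seeded pipeline described in the two records above might look as follows; chest-wall removal is omitted and the intensity tolerance is an assumed parameter, so this is a sketch of the approach rather than the evaluated method.

```python
import numpy as np
from collections import deque
from scipy import ndimage

def grow_nodule(volume, stroke_voxels, tol=150):
    """Threshold-based region growing seeded from a user stroke, followed
    by a morphological opening to detach thin vessels. The intensity band
    is estimated from the stroke itself (an assumption)."""
    seeds = list(stroke_voxels)
    mean_hu = np.mean([volume[s] for s in seeds])   # intensity analysis of the stroke
    lo, hi = mean_hu - tol, mean_hu + tol
    mask = np.zeros(volume.shape, bool)
    queue = deque(seeds)
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x] or not (lo <= volume[z, y, x] <= hi):
            continue
        mask[z, y, x] = True
        # 6-connected neighbours
        for dz, dy, dx in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            nz, ny, nx = z + dz, y + dy, x + dx
            if 0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1] \
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]:
                queue.append((nz, ny, nx))
    return ndimage.binary_opening(mask, iterations=1)  # detach attached vessels

ct = np.random.randint(-1000, 200, (40, 128, 128)).astype(np.int16)  # stand-in CT
nodule = grow_nodule(ct, [(20, 64, 60), (20, 64, 68)])               # user stroke
```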

  14. User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy.

    Science.gov (United States)

    Ramkumar, Anjana; Dolz, Jose; Kirisli, Hortense A; Adebahr, Sonja; Schimek-Jasch, Tanja; Nestle, Ursula; Massoptier, Laurent; Varga, Edit; Stappers, Pieter Jan; Niessen, Wiro J; Song, Yu

    2016-04-01

    Accurate segmentation of organs at risk is an important step in radiotherapy planning. Since manual segmentation is a tedious procedure and prone to inter- and intra-observer variability, there is growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians' expertise and computers' potential. This study evaluates two semi-automatic segmentation methods with different types of user interaction, named the "strokes" and the "contour" methods, to provide insights into the role and impact of human-computer interaction. Two physicians participated in the experiment. In total, 42 case studies were carried out on five different types of organs at risk. For each case study, both the human-computer interaction process and the quality of the segmentation results were measured subjectively and objectively. Furthermore, different measures of the process and the results were correlated. A total of 36 quantifiable and ten non-quantifiable correlations were identified for each type of interaction. Among those pairs of measures, 20 for the contour method and 22 for the strokes method were strongly or moderately correlated, either directly or inversely. Based on those correlated measures, it is concluded that: (1) in the design of semi-automatic segmentation methods, user interactions need to be less cognitively challenging; (2) based on the observed workflows and preferences of physicians, there is a need for flexibility in the interface design; and (3) the correlated measures provide insights that can be used in improving user interaction design.

  15. Comparison of acute and chronic traumatic brain injury using semi-automatic multimodal segmentation of MR volumes.

    Science.gov (United States)

    Irimia, Andrei; Chambers, Micah C; Alger, Jeffry R; Filippou, Maria; Prastawa, Marcel W; Wang, Bo; Hovda, David A; Gerig, Guido; Toga, Arthur W; Kikinis, Ron; Vespa, Paul M; Van Horn, John D

    2011-11-01

    Although neuroimaging is essential for prompt and proper management of traumatic brain injury (TBI), there is a regrettable and acute lack of robust methods for the visualization and assessment of TBI pathophysiology, especially for the purpose of improving clinical outcome metrics. Until now, the application of automatic segmentation algorithms to TBI in a clinical setting has remained an elusive goal because existing methods have, for the most part, been insufficiently robust to faithfully capture TBI-related changes in brain anatomy. This article introduces and illustrates the combined use of multimodal TBI segmentation and time-point comparison using 3D Slicer, a widely used software environment whose TBI data processing solutions are openly available. For three representative TBI cases, semi-automatic tissue classification and 3D model generation are performed to enable intra-patient time-point comparison of TBI using multimodal volumetrics and clinical atrophy measures. Identification and quantitative assessment of extra- and intra-cortical bleeding, lesions, edema, and diffuse axonal injury are demonstrated. The proposed tools allow cross-correlation of multimodal metrics from structural imaging (e.g., structural volume, atrophy measurements) with clinical outcome variables and other potential factors predictive of recovery. In addition, the workflows described are suitable for TBI clinical practice and patient monitoring, particularly for assessing damage extent and for the measurement of neuroanatomical change over time. With knowledge of general location, extent, and degree of change, such metrics can be associated with clinical measures and subsequently used to suggest viable treatment options.

  16. A New Approach for the Semi-Automatic Texture Generation of Building Facades from Terrestrial Laser Scanner Data

    Science.gov (United States)

    Oniga, E.

    2012-07-01

    The result of terrestrial laser scanning is an impressive number of spatial points, each characterized by its position (X, Y and Z coordinates), its laser reflectance value and its real color, expressed as RGB (Red, Green, Blue) values. The color code for each LIDAR point is taken from the georeferenced digital images, acquired with a high-resolution panoramic camera incorporated in the scanner system. In this article I propose a new algorithm for semi-automatic texture generation, using the color information, i.e. the RGB values of every point acquired by terrestrial laser scanning technology, and the 3D surfaces defining the building facades, generated with the Leica Cyclone software. In the first step, the operator defines the limiting value, i.e. the minimum distance between a point and the closest surface. In the second step, the distances, i.e. the perpendiculars drawn from each point to the closest surface, are calculated. In the third step, the points whose 3D coordinates are known are associated with a surface, depending on the limiting value. In the fourth step, the Voronoi diagram is computed for the points that belong to each surface. The final step automatically associates the RGB value of the color code with the corresponding polygon of the Voronoi diagram. The advantage of using this algorithm is that a photorealistic 3D model of the building can be obtained in a semi-automatic manner.
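
    The core of steps one to three is a point-to-plane distance test. The following sketch illustrates the idea under simplifying assumptions (planar facade surfaces given as an origin point plus a unit normal; all function and variable names are invented for illustration and are not part of the published algorithm):

```python
import numpy as np

def assign_points_to_surfaces(points, surf_origins, surf_normals, limit):
    """points: (N, 3) scanned points; surf_origins/surf_normals: (M, 3)
    planar facade surfaces (unit normals); limit: operator-chosen distance."""
    # Perpendicular distance of every point to every plane: |(p - o) . n|
    diffs = points[:, None, :] - surf_origins[None, :, :]          # (N, M, 3)
    dists = np.abs(np.einsum('nmk,mk->nm', diffs, surf_normals))   # (N, M)
    nearest = dists.argmin(axis=1)                  # index of closest surface
    accepted = dists[np.arange(len(points)), nearest] <= limit    # step three
    return nearest, accepted

# Two parallel facade planes and three colored points
pts = np.array([[0.0, 0.1, 1.0], [5.0, 0.4, 2.0], [2.5, 3.0, 1.5]])
origins = np.array([[0.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
normals = np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
idx, ok = assign_points_to_surfaces(pts, origins, normals, limit=0.5)
print(idx, ok)   # surface index per point, and whether it passes the limit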

  17. Automatic analysis of microscopic images of red blood cell aggregates

    Science.gov (United States)

    Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.

    2015-06-01

    Red blood cell (RBC) aggregation is one of the most important factors in blood viscosity at stasis or at very low flow rates. The basic structure of aggregates is a linear array of cells commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Frequently, image processing and analysis for the characterization of RBC aggregation have been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be adapted as a routine in hemorheological and clinical biochemistry laboratories, because this automatic method is rapid, efficient and economical, and at the same time independent of the user performing the analysis (repeatability of the analysis).
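
    A minimal sketch of such a pipeline, assuming scikit-image is available; the Otsu threshold, the minimum object size and the elongation cut-off separating rouleaux from other clumps are illustrative assumptions, not the authors' published parameters:

```python
import numpy as np
from skimage import filters, measure, morphology

def characterize_aggregates(gray):
    # Cells are darker than the background in this synthetic setting.
    mask = gray < filters.threshold_otsu(gray)
    mask = morphology.remove_small_objects(mask, min_size=50)
    stats = []
    for r in measure.regionprops(measure.label(mask)):
        elongation = r.major_axis_length / max(r.minor_axis_length, 1e-6)
        kind = 'rouleaux' if elongation > 2.5 else 'clump'  # assumed cut-off
        stats.append((r.label, r.area, round(elongation, 2), kind))
    return stats

# Demo on a synthetic image: dark blobs on a bright background
img = np.full((64, 64), 200, dtype=np.uint8)
img[10:14, 5:40] = 50      # an elongated "rouleau"
img[40:52, 40:52] = 60     # a roundish clump
for row in characterize_aggregates(img):
    print(row)
```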

  18. Derivation of groundwater flow-paths based on semi-automatic extraction of lineaments from remote sensing data

    Directory of Open Access Journals (Sweden)

    U. Mallast

    2011-08-01

    In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially suitable in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxiliary information and finally evaluated in terms of hydro-geological significance. Using the example of the western catchment of the Dead Sea (Israel/Palestine), the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. We demonstrate that a strong correlation between lineaments and structural features exists. Using Euclidean distances between lineaments and wells provides an assessment criterion to evaluate the hydraulic significance of detected lineaments. Based on this analysis, we suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus provides significant information on groundwater movements. To validate the flow-paths we compare them to existing results of groundwater models that are based on well data.

  19. Automatic, semi-automatic and manual validation of urban drainage data.

    Science.gov (United States)

    Branisavljević, N; Prodanović, D; Pavlović, D

    2010-01-01

    Advances in sensor technology and the possibility of automated long-distance data transmission have made continuous measurements the preferred way of monitoring urban drainage processes. Usually, the collected data have to be processed by an expert in order to detect and mark wrong data, remove them and replace them with interpolated data. In general, the first step, detecting wrong or anomalous data, is called data quality assessment or data validation. Data validation consists of three parts: data preparation, validation score generation and score interpretation. This paper presents an overall framework for a data quality improvement system, suitable for automatic, semi-automatic or manual operation. The first two steps of the validation process are explained in more detail, using several validation methods on the same set of real-case data from the Belgrade sewer system. The final part of the validation process, the interpretation of the scores, needs to be investigated further on the developed system.
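
    The score-generation step might look like the following sketch; the three checks, the thresholds and the min-combination rule are assumptions for illustration, not the framework's actual API:

```python
import numpy as np

def range_score(x, lo, hi):
    # Logical/physical range check: 1.0 if plausible, 0.0 otherwise.
    return ((x >= lo) & (x <= hi)).astype(float)

def persistence_score(x, max_flat=5):
    # Flag runs of identical values longer than max_flat (stuck sensor).
    score = np.ones(len(x))
    run = 1
    for i in range(1, len(x)):
        run = run + 1 if x[i] == x[i - 1] else 1
        if run > max_flat:
            score[i - run + 1:i + 1] = 0.0
    return score

def temporal_score(x, max_step=10.0):
    # Penalize jumps between consecutive samples larger than max_step.
    s = np.ones(len(x))
    s[1:] = (np.abs(np.diff(x)) <= max_step).astype(float)
    return s

levels = np.array([1.2, 1.3, 1.3, 1.3, 1.3, 1.3, 1.3, 9.9, 1.4])
combined = np.minimum.reduce([range_score(levels, 0.0, 5.0),
                              persistence_score(levels),
                              temporal_score(levels, 2.0)])
print(combined)   # per-sample validation score in [0, 1]
```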

  20. Sherlock: A Semi-automatic Framework for Quiz Generation Using a Hybrid Semantic Similarity Measure.

    Science.gov (United States)

    Lin, Chenghua; Liu, Dong; Pang, Wei; Wang, Zhe

    In this paper, we present a semi-automatic system (Sherlock) for quiz generation using linked data and textual descriptions of RDF resources. Sherlock is distinguished from existing quiz generation systems by its generic framework for domain-independent quiz generation and by its ability to control the difficulty level of the generated quizzes. Difficulty scaling is non-trivial and fundamentally related to cognitive science. We approach the problem from a new angle, treating the level of knowledge difficulty as a similarity-measure problem, and propose a novel hybrid semantic similarity measure using linked data. Extensive experiments show that the proposed semantic similarity measure outperforms four strong baselines with more than 47 % gain in clustering accuracy. In addition, a human quiz test showed that model accuracy indeed correlates strongly with pairwise quiz similarity.

  1. A semi-automatic method for peak and valley detection in free-breathing respiratory waveforms

    International Nuclear Information System (INIS)

    Lu Wei; Nystrom, Michelle M.; Parikh, Parag J.; Fooshee, David R.; Hubenschmidt, James P.; Bradley, Jeffrey D.; Low, Daniel A.

    2006-01-01

    Existing commercial software often inadequately determines respiratory peaks for patients in respiration-correlated computed tomography. A semi-automatic method was developed for peak and valley detection in free-breathing respiratory waveforms. First, the waveform is separated into breath cycles by identifying intercepts of a moving average curve with the inspiration and expiration branches of the waveform. Peaks and valleys are then defined, respectively, as the maximum and minimum between pairs of alternating inspiration and expiration intercepts. Finally, automatic corrections and manual user interventions are employed. On average for each of the 20 patients, 99% of 307 peaks and valleys were automatically detected in 2.8 s. This method was robust for bellows waveforms with large variations.
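
    A minimal sketch of the described peak/valley logic, with an assumed moving-average window and synthetic data (the original method additionally applies automatic corrections and manual interventions):

```python
import numpy as np

def detect_peaks_valleys(y, window=50):
    kernel = np.ones(window) / window
    avg = np.convolve(y, kernel, mode='same')           # moving-average curve
    above = y > avg
    crossings = np.flatnonzero(np.diff(above.astype(int))) + 1
    peaks, valleys = [], []
    for a, b in zip(crossings[:-1], crossings[1:]):
        seg = y[a:b]
        if above[a]:                                    # inspiration branch
            peaks.append(a + int(np.argmax(seg)))
        else:                                           # expiration branch
            valleys.append(a + int(np.argmin(seg)))
    return np.array(peaks), np.array(valleys)

t = np.linspace(0, 30, 1500)                            # 30 s of breathing
breathing = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(t.size)
p, v = detect_peaks_valleys(breathing, window=100)
print(len(p), 'peaks,', len(v), 'valleys')
```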

  2. Response evaluation of malignant liver lesions after TACE/SIRT. Comparison of manual and semi-automatic measurement of different response criteria in multislice CT

    International Nuclear Information System (INIS)

    Hoeink, Anna Janina

    2017-01-01

    To compare measurement precision and interobserver variability in the evaluation of hepatocellular carcinoma (HCC) and liver metastases in MSCT before and after transarterial local ablative therapies. Retrospective study of 72 patients with malignant liver lesions (42 metastases; 30 HCCs) before and after therapy (43 SIRT procedures; 29 TACE procedures). Established (LAD; SAD; WHO) and vitality-based parameters (mRECIST; mLAD; mSAD; EASL) were assessed manually and semi-automatically by two readers. The relative interobserver difference (RID) and intraclass correlation coefficient (ICC) were calculated. The median RID for vitality-based parameters was lower for semi-automatic than for manual measurement of mLAD (manual 12.5 %; semi-automatic 3.4 %), mSAD (manual 12.7 %; semi-automatic 5.7 %) and EASL (manual 10.4 %; semi-automatic 1.8 %). The difference in established parameters was not statistically significant (p > 0.05). The ICCs of LAD (manual 0.984; semi-automatic 0.982), SAD (manual 0.975; semi-automatic 0.958) and WHO (manual 0.984; semi-automatic 0.978) are high, both in manual and semi-automatic measurements. The ICCs of manual measurements of mLAD (0.897), mSAD (0.844) and EASL (0.875) are lower. This decrease is not found in semi-automatic measurements of mLAD (0.997), mSAD (0.992) and EASL (0.998). Conclusion: Vitality-based tumor measurements of HCC and metastases after transarterial local therapies should be performed semi-automatically due to greater measurement precision, thus increasing the reproducibility and in turn the reliability of therapeutic decisions.
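
    The two agreement statistics used in this study are easy to reproduce; a hedged sketch, assuming paired measurements from two readers and using the standard Shrout-Fleiss ICC(2,1) formula (the abstract does not state which ICC variant was used):

```python
import numpy as np

def median_rid(a, b):
    # Median relative interobserver difference in percent, paired readings.
    return 100.0 * np.median(np.abs(a - b) / ((a + b) / 2.0))

def icc_2_1(x):
    # Two-way random, single-measure ICC(2,1); x has shape (targets, raters).
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # targets
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    sse = ((x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

reader1 = np.array([42.0, 30.5, 18.2, 25.9])   # e.g. mLAD in mm, reader 1
reader2 = np.array([40.8, 31.0, 17.5, 26.4])   # the same lesions, reader 2
print(median_rid(reader1, reader2), icc_2_1(np.column_stack([reader1, reader2])))
```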

  3. Response evaluation of malignant liver lesions after TACE/SIRT. Comparison of manual and semi-automatic measurement of different response criteria in multislice CT

    Energy Technology Data Exchange (ETDEWEB)

    Hoeink, Anna Janina [Univ. Hospital Cologne (Germany). Diagnostic and Interventional Radiology; Schuelke, Christoph; Loehnert, Annika; Kammerer, Sara; Fortkamp, Rasmus; Heindel, Walter; Buerke, Boris [Univ. Hospital Muenster (UKM), Muenster (Germany). Dept. of Clinical Radiology; Koch, Raphael [Univ. Hospital Muenster (UKM), Muenster (Germany). Inst. of Biostatistics and Clinical Research (IBKF)

    2017-11-15

    To compare measurement precision and interobserver variability in the evaluation of hepatocellular carcinoma (HCC) and liver metastases in MSCT before and after transarterial local ablative therapies. Retrospective study of 72 patients with malignant liver lesions (42 metastases; 30 HCCs) before and after therapy (43 SIRT procedures; 29 TACE procedures). Established (LAD; SAD; WHO) and vitality-based parameters (mRECIST; mLAD; mSAD; EASL) were assessed manually and semi-automatically by two readers. The relative interobserver difference (RID) and intraclass correlation coefficient (ICC) were calculated. The median RID for vitality-based parameters was lower for semi-automatic than for manual measurement of mLAD (manual 12.5 %; semi-automatic 3.4 %), mSAD (manual 12.7 %; semi-automatic 5.7 %) and EASL (manual 10.4 %; semi-automatic 1.8 %). The difference in established parameters was not statistically significant (p > 0.05). The ICCs of LAD (manual 0.984; semi-automatic 0.982), SAD (manual 0.975; semi-automatic 0.958) and WHO (manual 0.984; semi-automatic 0.978) are high, both in manual and semi-automatic measurements. The ICCs of manual measurements of mLAD (0.897), mSAD (0.844) and EASL (0.875) are lower. This decrease is not found in semi-automatic measurements of mLAD (0.997), mSAD (0.992) and EASL (0.998). Conclusion: Vitality-based tumor measurements of HCC and metastases after transarterial local therapies should be performed semi-automatically due to greater measurement precision, thus increasing the reproducibility and in turn the reliability of therapeutic decisions.

  4. Image analysis

    International Nuclear Information System (INIS)

    Berman, M.; Bischof, L.M.; Breen, E.J.; Peden, G.M.

    1994-01-01

    This paper provides an overview of modern image analysis techniques pertinent to materials science. The usual approach in image analysis contains two basic steps: first, the image is segmented into its constituent components (e.g. individual grains), and second, measurement and quantitative analysis are performed. Usually, the segmentation part of the process is the harder of the two. Consequently, much of the paper concentrates on this aspect, reviewing both fundamental segmentation tools (commonly found in commercial image analysis packages) and more advanced segmentation tools. There is also a review of the most widely used quantitative analysis methods for measuring the size, shape and spatial arrangements of objects. Many of the segmentation and analysis methods are demonstrated using complex real-world examples. Finally, there is a discussion of hardware and software issues. 42 refs., 17 figs

  5. A Semi-automatic Algorithm for Preliminary Assessment of Labial Gingiva and Alveolar Bone Thickness of Maxillary Anterior Teeth.

    Science.gov (United States)

    Chen, Sheng-Hong; Chan, Hsun-Liang; Lu, Yongning; Ong, Sim-Heng; Wang, Hom-Lay; Ko, Eng Hong; Chang, Po-Chun

    Soft and hard tissue volumes are critical for implant placement and long-term stability. Although the literature has adequately addressed tissue biotypes of Western populations, pertinent information about Asian populations is limited. This study aimed to evaluate the soft and hard tissue profiles of the maxillary anterior teeth of the Taiwanese population using a semi-automatic algorithm. Cone beam computed tomography images of 11 adults with well-aligned maxillary anterior teeth were overlaid with those of cast models, based on the tooth crowns manually outlined by two independent observers. Each tooth was digitally trisected mesiodistally and apicocoronally. The thicknesses of the labial gingiva and alveolar bone were measured using a customized software program. No obvious difference between the observers was noted regarding the dimension of tooth crowns. The average thicknesses of the labial gingiva, the labial alveolar bone, and the palatal alveolar bone were 1.76 ± 0.11 mm, 1.02 ± 0.12 mm, and 1.80 ± 0.31 mm, respectively, with no significant differences between teeth. All parameters were thicker in the apical region than in the cervical region, and the alveolar bone was thinner in the midlabial region of incisors than in the interproximal regions. The thinnest areas were the midcervical compartment of the right central incisor (0.53 ± 0.33 mm) for the labial gingiva, the midcervical compartment of the right lateral incisor (0.23 ± 0.10 mm) for the labial alveolar bone, and the mesiocervical compartment of the left central incisor (0.33 ± 0.09 mm) for the palatal alveolar bone. This study presents an objective and comprehensive methodology for evaluating the soft and hard tissue profiles of maxillary anterior teeth and may be of value for presurgical planning for immediate implant placement. The results suggest that profiles of the Taiwanese subjects are similar to profiles of Western populations.

  6. Semi-automatic characterization and simulation of VCSEL devices for high speed VSR communications

    Science.gov (United States)

    Pellevrault, S.; Toffano, Z.; Destrez, A.; Pez, M.; Quentel, F.

    2006-04-01

    Very short range (VSR) high-bit-rate optical fiber communications are an emerging market dedicated to local area networks, digital displays and board-to-board interconnects within real-time computers. In this technology, a very fast, low-cost way to exchange data with high noise immunity is needed. Optical multimode graded-index fibers are used here because they have electrical noise immunity and are easier to handle than monomode fibers. 850 nm VCSELs are used in VSR communications because of their low cost, direct on-wafer testing, and the possibility of manufacturing VCSEL arrays very easily compared to classical optical transceivers using edge-emitting laser diodes. Although much research has been carried out on temperature modeling of VCSEL emitters, few studies have been devoted to characterizations over a very broad range of temperatures. Nowadays, VCSEL VSR communications tend to be used in severe environments such as space, avionics and military equipment. Therefore, a simple way to characterize VCSEL emitters over a broad range of temperatures is required. In this paper, we propose a complete characterization of the emitter part of 2.5 Gb/s opto-electrical transceiver modules operating from -40°C to +120°C using 850 nm VCSELs. Our method uses simple and semi-automatic measurements of a given set of chosen device parameters in order to make fast and efficient simulations.

  7. Semi-Automatic Operational Service for Drought Monitoring and Forecasting in the Tuscany Region

    Directory of Open Access Journals (Sweden)

    Ramona Magno

    2018-02-01

    A drought-monitoring and forecasting system developed for the Tuscany region was improved in order to provide a semi-automatic, more detailed, timely and comprehensive operational service for decision makers, water authorities, researchers and general stakeholders. Ground-based and satellite data from different sources (the regional meteorological station network, the MODIS Terra satellite and the CHIRPS/CRU precipitation datasets) are integrated through an open-source, interoperable SDI (spatial data infrastructure) based on PostgreSQL/PostGIS to produce vegetation and precipitation indices that allow the occurrence and evolution of a drought event to be followed. The SDI allows the dissemination of comprehensive, up-to-date and customizable information suitable for different end-users through different channels, from a web page and monthly bulletins to interoperable web services and a comprehensive climate service. The web services allow geospatial elaborations on the fly, and the geo-database can be extended with new input/output data to respond to specific requests or to increase the spatial resolution.

  8. A semi-automatic calibration method for seismic arrays applied to an Alaskan array

    Science.gov (United States)

    Lindquist, K. G.; Tibuleac, I. M.; Hansen, R. A.

    2001-12-01

    Well-calibrated, small (less than 22 km) aperture seismic arrays are of great importance for event location and characterization. We have implemented the cross-correlation method of Tibuleac and Herrin (Seis. Res. Lett. 1997) as a semi-automatic procedure, applicable to any seismic array. With this we are able to process thousands of phases in several days of computer time on a Sun Blade 1000 workstation. Complicated geology beneath the array elements and elevation differences among the array stations made station corrections necessary. 328 core phases (including PcP, PKiKP, PKP, PKKP) were used to determine the static corrections. To demonstrate this application and method, we analyzed P and PcP arrivals at the ILAR array (Eielson, Alaska) between 1995 and 2000. The arrivals were picked by the PIDC for events (mb > 4.0) well located by the USGS. We calculated backazimuth and horizontal velocity residuals for all events and observed large backazimuth residuals for regional and near-regional phases. We discuss the possibility of a dipping Moho (strike E-W, dip N) beneath the array versus other local structure that would produce the residuals.

  9. Evaluation of a semi-automatic radioimmunoassay for hepatitis B surface antigen (HBsAg)

    International Nuclear Information System (INIS)

    Vries, J. de; Kruining, J.; Heijtink, R.A.

    1983-01-01

    The recently developed semi-automatic Hepatube system was evaluated in comparison with another radioimmunoassay for the detection of hepatitis B surface antigen (HBsAg), the manual Ausria II-125 test. After incubation of serum in anti-HBs coated tubes, the Hepatube system uses a machine to wash the tubes and to add tracer. After a second incubation, tubes are washed again in the machine and are manually transferred to the β-counter. Two machines were used. Machine 1 had an undefined defect. Of 1490 samples tested, 69 (4.6%) gave false-positive results versus 11 (0.7%) in the Ausria II-125 test. Machine 2 had one false-positive result among 920 samples versus 5 in the Ausria II-125 test. The sensitivity was measured with reference panels from Wellcome and Abbott as well as in titration series. The Hepatube system was found to be a factor of three less sensitive than the Ausria II-125 test. The Hepatube processor is easy to handle; radioactive material can be kept at a distance during the whole procedure; waste material is limited and less voluminous than in the Ausria II-125 test. (Auth.)

  10. Development of Semi-Automatic Lathe by using Intelligent Soft Computing Technique

    Science.gov (United States)

    Sakthi, S.; Niresh, J.; Vignesh, K.; Anand Raj, G.

    2018-03-01

    This paper discusses the enhancement of a conventional lathe machine into a semi-automated lathe machine by implementing a soft computing method. In the present scenario, the lathe machine plays a vital role in the engineering division of the manufacturing industry. While manual lathe machines are economical, their accuracy and efficiency are not up to the mark. On the other hand, CNC machines provide the desired accuracy and efficiency, but require a huge capital investment. In order to overcome this situation, a semi-automated approach to the conventional lathe machine is developed by fitting stepper motors to the horizontal and vertical drives, controlled by an Arduino UNO microcontroller. Based on the input parameters of the lathe operation, the Arduino code is generated and transferred to the UNO board. Thus, upgrading from manual to semi-automatic lathe machines can significantly increase accuracy and efficiency while, at the same time, keeping a check on investment cost, and consequently provide a much-needed boost to the manufacturing industry.

  11. WiseScaffolder: an algorithm for the semi-automatic scaffolding of Next Generation Sequencing data.

    Science.gov (United States)

    Farrant, Gregory K; Hoebeke, Mark; Partensky, Frédéric; Andres, Gwendoline; Corre, Erwan; Garczarek, Laurence

    2015-09-03

    The sequencing depth provided by high-throughput sequencing technologies has allowed a rise in the number of de novo sequenced genomes that could potentially be closed without further sequencing. However, genome scaffolding and closure require costly human supervision that often results in genomes being published as drafts. A number of automatic scaffolders were recently released, which improved the global quality of genomes published in the last few years. Yet, none of them reaches the efficiency of manual scaffolding. Here, we present an innovative semi-automatic scaffolder that additionally helps with chimera resolution and generates valuable contig maps and outputs for manual improvement of the automatic scaffolding. This software was tested on the newly sequenced marine cyanobacterium Synechococcus sp. WH8103 as well as two reference datasets used in previous studies, Rhodobacter sphaeroides and Homo sapiens chromosome 14 (http://gage.cbcb.umd.edu/). The quality of the resulting scaffolds was compared to that of three other stand-alone scaffolders: SSPACE, SOPRA and SCARPA. For all three model organisms, WiseScaffolder produced better results than the other scaffolders in terms of contiguity statistics (number of genome fragments, N50, LG50, etc.) and, in the case of WH8103, the reliability of the scaffolds was confirmed by whole-genome alignment against a closely related reference genome. We also propose an efficient computer-assisted strategy for manual improvement of the scaffolding, using outputs generated by WiseScaffolder, as well as for genome finishing, which in our hands led to the circularization of the WH8103 genome. Altogether, WiseScaffolder proved more efficient than three other scaffolders for both prokaryotic and eukaryotic genomes and is thus likely applicable to most genome projects. The scaffolding pipeline described here should be of particular interest to biologists wishing to take advantage of the high added value of complete genomes.
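
    For reference, the contiguity statistics quoted above can be computed in a few lines; this sketch computes N50 and L50 from contig lengths (NG50/LG50 replace the total assembly size with an estimated genome size):

```python
def n50_l50(contig_lengths):
    """N50: length at which contigs of that length or longer cover half the
    assembly; L50: how many of the longest contigs are needed to reach it."""
    lengths = sorted(contig_lengths, reverse=True)
    half, running = sum(lengths) / 2.0, 0
    for i, length in enumerate(lengths, start=1):
        running += length
        if running >= half:
            return length, i

print(n50_l50([500, 400, 300, 200, 100]))   # -> (400, 2)
```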

  12. SplitRacer - a new Semi-Automatic Tool to Quantify And Interpret Teleseismic Shear-Wave Splitting

    Science.gov (United States)

    Reiss, M. C.; Rumpker, G.

    2017-12-01

    We have developed a semi-automatic, MATLAB-based GUI that combines standard seismological tasks for the analysis and interpretation of teleseismic shear-wave splitting. Shear-wave splitting analysis is widely used to infer seismic anisotropy, which can be interpreted in terms of lattice-preferred orientation of mantle minerals or shape-preferred orientation caused by fluid-filled cracks or alternating layers. Seismic anisotropy provides a unique link between directly observable surface structures and the more elusive dynamic processes in the mantle below. Thus, resolving the seismic anisotropy of the lithosphere/asthenosphere is of particular importance for geodynamic modeling and interpretations. The increasing number of seismic stations from temporary experiments and permanent installations creates a new basis for comprehensive studies of seismic anisotropy world-wide. However, the increasingly large data sets pose new challenges for the rapid and reliable analysis of teleseismic waveforms and for the interpretation of the measurements. Well-established routines and programs are available but are often impractical for analyzing large data sets from hundreds of stations. Additionally, shear-wave splitting results are seldom evaluated using the same well-defined quality criteria, which may complicate comparison with results from different studies. SplitRacer has been designed to overcome these challenges by incorporating the following processing steps: i) downloading of waveform data from multiple stations in mseed format using FDSNWS tools; ii) automated initial screening and categorizing of XKS waveforms using a pre-set SNR threshold; iii) particle-motion analysis of selected phases at longer periods to detect and correct for sensor misalignment; iv) splitting analysis of selected phases based on transverse-energy minimization for multiple, randomly selected, relevant time windows; v) one- and two-layer joint-splitting analysis for all phases at one station by
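
    The transverse-energy minimization in step iv can be sketched as a grid search over the fast-axis angle and delay time; the rotation conventions, names and parameters below are assumptions, and real tools add analysis windows, error estimates and quality control:

```python
import numpy as np

def splitting_parameters(north, east, back_azimuth_deg, dt_max, fs):
    """Grid-search the fast axis (deg) and delay (s) that minimize the
    energy on the transverse component after correcting for splitting."""
    baz = np.deg2rad(back_azimuth_deg)
    best = (0.0, 0.0, np.inf)
    for phi_deg in range(-90, 90):
        phi = np.deg2rad(phi_deg)
        # Rotate the horizontals into the trial fast/slow frame.
        fast = north * np.cos(phi) + east * np.sin(phi)
        slow = -north * np.sin(phi) + east * np.cos(phi)
        for shift in range(1, int(dt_max * fs)):
            slow_c = np.roll(slow, -shift)            # undo the trial delay
            # Rotate back to N/E, then project onto the transverse direction.
            n = fast * np.cos(phi) - slow_c * np.sin(phi)
            e = fast * np.sin(phi) + slow_c * np.cos(phi)
            transverse = -n * np.sin(baz) + e * np.cos(baz)
            energy = float(np.sum(transverse ** 2))
            if energy < best[2]:
                best = (float(phi_deg), shift / fs, energy)
    return best   # (fast-axis angle, delay time, residual transverse energy)
```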

  13. Semi-automatic supervised classification of minerals from x-ray mapping images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Flesche, Harald; Larsen, Rasmus

    1998-01-01

    ... is obtained by excluding observations that have high Mahalanobis distances to the training class mean. Spatial closeness is obtained by requiring connectivity. The marginal effects of changes in the parameters that are input to the seed-growing algorithm are evaluated. Initially, the seed is expanded to a small area in order to allow for the estimation of a variance-covariance matrix. This expansion is controlled by upper limits for the spatial and Euclidean spectral distances from the seed point. Second, after this initial expansion, the growing of the training set is controlled by an upper limit for the Mahalanobis distance to the current estimate of the class centre. Also, the estimates of class centres and covariance matrices may be continuously updated, or the initial estimates may be used. Finally, the effect of the operator's choice of seed among a number of potential seeding points is evaluated. After ...
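
    A hedged sketch of the seed-growing idea described above, assuming a multi-channel image and 4-connectivity; the fixed initial 5x5 expansion and the single Mahalanobis threshold are illustrative stand-ins for the paper's spatially and spectrally limited expansion:

```python
import numpy as np
from collections import deque

def grow_training_set(image, seed, d_max=3.0):
    """image: (H, W, C) array; seed: (row, col). Returns a boolean mask."""
    h, w, c = image.shape
    ys, xs = seed
    # Initial spatial expansion around the seed, just large enough to
    # estimate a mean vector and an (invertible) covariance matrix.
    y0, y1 = max(ys - 2, 0), min(ys + 3, h)
    x0, x1 = max(xs - 2, 0), min(xs + 3, w)
    init = image[y0:y1, x0:x1].reshape(-1, c)
    mean = init.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(init.T))
    member = np.zeros((h, w), dtype=bool)
    member[ys, xs] = True
    queue = deque([(ys, xs)])
    while queue:                  # connectivity keeps the set spatially close
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not member[ny, nx]:
                d = image[ny, nx] - mean
                if d @ cov_inv @ d <= d_max ** 2:   # squared Mahalanobis test
                    member[ny, nx] = True
                    queue.append((ny, nx))
    return member

rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, (60, 60, 3))
img[20:40, 20:40] += 40.0         # a spectrally distinct mineral phase
print(grow_training_set(img, seed=(30, 30), d_max=3.5).sum())
```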

  14. Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data

    Science.gov (United States)

    Thiele, Samuel T.; Grose, Lachlan; Samsu, Anindita; Micklethwaite, Steven; Vollgger, Stefan A.; Cruden, Alexander R.

    2017-12-01

    The advent of large digital datasets from unmanned aerial vehicle (UAV) and satellite platforms now challenges our ability to extract information across multiple scales in a timely manner, often meaning that the full value of the data is not realised. Here we adapt a least-cost-path solver and specially tailored cost functions to rapidly interpolate structural features between manually defined control points in point cloud and raster datasets. We implement the method in the geographic information system QGIS and the point cloud and mesh processing software CloudCompare. Using these implementations, the method can be applied to a variety of three-dimensional (3-D) and two-dimensional (2-D) datasets, including high-resolution aerial imagery, digital outcrop models, digital elevation models (DEMs) and geophysical grids. We demonstrate the algorithm with four diverse applications in which we extract (1) joint and contact patterns in high-resolution orthophotographs, (2) fracture patterns in a dense 3-D point cloud, (3) earthquake surface ruptures of the Greendale Fault associated with the Mw7.1 Darfield earthquake (New Zealand) from high-resolution light detection and ranging (lidar) data, and (4) oceanic fracture zones from bathymetric data of the North Atlantic. The approach improves the consistency of the interpretation process while retaining expert guidance and achieves significant improvements (35-65 %) in digitisation time compared to traditional methods. Furthermore, it opens up new possibilities for data synthesis and can quantify the agreement between datasets and an interpretation.
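
    The underlying idea can be illustrated with scikit-image's least-cost-path routine; the gradient-based cost function below is a generic assumption, whereas the published method uses specially tailored cost functions for each data type:

```python
import numpy as np
from skimage import filters, graph

def trace_feature(image, pick_a, pick_b):
    """Interpolate a fracture/contact trace between two (row, col) picks."""
    edges = filters.sobel(image.astype(float))    # structure-sensitive layer
    cost = 1.0 / (edges + 1e-6)                   # cheap to walk along edges
    path, total_cost = graph.route_through_array(
        cost, pick_a, pick_b, fully_connected=True, geometric=True)
    return np.array(path), total_cost             # pixel chain between picks

# Synthetic example: a bright diagonal lineament in a noisy image
img = np.random.rand(100, 100) * 0.1
rows = np.arange(100)
img[rows, rows] = 1.0
trace, cost = trace_feature(img, (0, 0), (99, 99))
print(trace.shape, cost)
```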

  15. Derivation of groundwater flow-paths based on semi-automatic extraction of lineaments from remote sensing data

    OpenAIRE

    U. Mallast; R. Gloaguen; S. Geyer; T. Rödiger; C. Siebert

    2011-01-01

    In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially adequate in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxili...

  16. Gray-Matter Volume Estimate Score: A Novel Semi-Automatic Method Measuring Early Ischemic Change on CT

    OpenAIRE

    Song, Dongbeom; Lee, Kijeong; Kim, Eun Hye; Kim, Young Dae; Lee, Hye Sun; Kim, Jinkwon; Song, Tae-Jin; Ahn, Sung Soo; Nam, Hyo Suk; Heo, Ji Hoe

    2015-01-01

    Background and Purpose We developed a novel method named Gray-matter Volume Estimate Score (GRAVES), measuring early ischemic changes on Computed Tomography (CT) semi-automatically by computer software. This study aimed to compare GRAVES and Alberta Stroke Program Early CT Score (ASPECTS) with regards to outcome prediction and inter-rater agreement. Methods This was a retrospective cohort study. Among consecutive patients with ischemic stroke in the anterior circulation who received intra-art...

  17. ¹⁷⁷Lutetium-DOTATATE peptide radio-receptor therapy for patients with neuroendocrine neoplasms and the individualized, semi-automatic dosimetry. A retrospective analysis

    Energy Technology Data Exchange (ETDEWEB)

    Loeser, Anastassia

    2016-09-28

    ¹⁷⁷Lutetium-DOTATATE peptide radio-receptor therapy is a promising approach for the palliative treatment of patients with inoperable neuroendocrine neoplasms. The individually variable biological distribution and tumor uptake, together with the protection of critical organs, require precise and reliable organ and tumor dosimetry. The HERMES Hybrid dosimetry module has proved to be a reliable and user-friendly tool for clinical application. The next step is supposed to be the complete integration of 3D SPECT imaging.

  18. Semi-automatic handling of meteorological ground measurements using WeatherProg: prospects and practical implications

    Science.gov (United States)

    Langella, Giuliano; Basile, Angelo; Bonfante, Antonello; De Mascellis, Roberto; Manna, Piero; Terribile, Fabio

    2016-04-01

    WeatherProg is a computer program for the semi-automatic handling of data measured at ground stations within a climatic network. The program performs a set of tasks ranging from gathering raw point-based sensor measurements to the production of digital climatic maps. Originally the program was developed as the baseline asynchronous engine for weather records management within the SOILCONSWEB Project (LIFE08 ENV/IT/000408), in which daily and hourly data were used to run the water balance in the soil-plant-atmosphere continuum or pest simulation models. WeatherProg can be configured to automatically perform the following main operations: 1) data retrieval; 2) data decoding and ingestion into a database (e.g. SQL based); 3) data checking to recognize missing and anomalous values (using a set of differently combined checks, including logical, climatological, spatial, temporal and persistence checks); 4) infilling of data flagged as missing or anomalous (deterministic or statistical methods); 5) spatial interpolation based on alternative/comparative methods such as inverse distance weighting, iterative regression kriging, and weighted least squares regression (based on physiography), using an approach similar to PRISM; 6) data ingestion into a geodatabase (e.g. PostgreSQL+PostGIS or rasdaman). There is an increasing demand for digital climatic maps both for research and development (there is a gap between the majority of scientific modelling approaches, which require digital climate maps, and the available gauged measurements) and for practical applications (e.g. the need to improve the management of weather records, which in turn improves the support provided to farmers). The demand is particularly burdensome considering the requirement to handle climatic data at the daily (e.g. in soil hydrological modelling) or even the hourly time step (e.g. risk modelling in phytopathology). The key advantage of WeatherProg is the ability to perform all the required operations and
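
    As an illustration of operation 5, a compact inverse-distance-weighting interpolator; the names and the power parameter are assumptions, and WeatherProg also offers regression-kriging and PRISM-like regression alternatives:

```python
import numpy as np

def idw(station_xy, values, grid_xy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate of grid values from stations."""
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power        # eps avoids division by zero
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
temps = np.array([12.5, 14.1, 11.0])             # daily means at the stations
grid = np.array([[2.0, 1.0], [7.0, 4.0]])        # two grid nodes to estimate
print(idw(stations, temps, grid))
```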

  19. Image Analysis

    DEFF Research Database (Denmark)

    The 19th Scandinavian Conference on Image Analysis was held at the IT University of Copenhagen in Denmark during June 15-17, 2015. The SCIA conference series has been an ongoing biannual event for more than 30 years and over the years it has nurtured a world-class regional research and development area within the four participating Nordic countries. It is a regional meeting of the International Association for Pattern Recognition (IAPR). We would like to thank all authors who submitted works to this year's SCIA, the invited speakers, and our Program Committee. In total 67 papers were submitted. The topics of the accepted papers range from novel applications of vision systems, pattern recognition, machine learning, feature extraction, segmentation, 3D vision, to medical and biomedical image analysis. The papers originate from all the Scandinavian countries and several other European countries.

  20. Assessment of hydrocephalus in children based on digital image processing and analysis

    Directory of Open Access Journals (Sweden)

    Fabijańska Anna

    2014-06-01

    Hydrocephalus is a pathological condition of the central nervous system which often affects neonates and young children. It manifests itself as an abnormal accumulation of cerebrospinal fluid within the ventricular system of the brain, with subsequent progression. One of the most important diagnostic methods for identifying hydrocephalus is computed tomography (CT). The enlarged ventricular system is clearly visible on CT scans. However, the assessment of disease progress usually relies on the radiologist's judgment and manual measurements, which are subjective, cumbersome and of limited accuracy. Therefore, this paper addresses the problem of semi-automatic assessment of hydrocephalus using image processing and analysis algorithms. In particular, automated determination of popular indices of disease progress is considered. Algorithms for the detection, semi-automatic segmentation and numerical description of the lesion are proposed. Specifically, the disease progress is determined using shape analysis algorithms. Numerical results provided by the introduced methods are presented and compared with those calculated manually by a radiologist and a trained operator. The comparison confirms the correctness of the introduced approach.
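
    One widely used progress index of this kind is an Evans-like ratio of maximal ventricular width to maximal inner skull width; the following sketch, with assumed binary masks from a segmented axial CT slice, is illustrative and not the paper's exact index definition:

```python
import numpy as np

def max_row_width(mask):
    # Widest left-right extent of the structure over all image rows.
    return max(np.flatnonzero(row).ptp() + 1 for row in mask if row.any())

def evans_like_index(ventricle_mask, inner_skull_mask):
    return max_row_width(ventricle_mask) / max_row_width(inner_skull_mask)

vent = np.zeros((10, 10), dtype=bool); vent[4:6, 3:7] = True    # width 4
skull = np.zeros((10, 10), dtype=bool); skull[1:9, 1:9] = True  # width 8
print(evans_like_index(vent, skull))                            # -> 0.5
```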

  1. Building a semi-automatic ontology learning and construction system for geosciences

    Science.gov (United States)

    Babaie, H. A.; Sunderraman, R.; Zhu, Y.

    2013-12-01

    We are developing an ontology learning and construction framework that allows continuous, semi-automatic knowledge extraction, verification, validation, and maintenance by a potentially very large group of collaborating domain experts in any geosciences field. The system brings geoscientists from the sidelines to the center stage of ontology building, allowing them to collaboratively construct and enrich new ontologies, and to merge, align, and integrate existing ontologies and tools. These constantly evolving ontologies can more effectively address the community's interests, purposes, tools, and change. The goal is to minimize the cost and time of building ontologies, and to maximize the quality, usability, and adoption of ontologies by the community. Our system will be a domain-independent ontology learning framework that applies natural language processing, allowing users to enter their ontology in a semi-structured form, and a combined Semantic Web and Social Web approach that allows direct participation of geoscientists who have no expertise in the design and development of their domain ontologies. A controlled natural language (CNL) interface and an integrated authoring and editing tool automatically convert syntactically correct CNL text into formal OWL constructs. The WebProtege-based system will allow a potentially large group of geoscientists, from multiple domains, to crowd-source and participate in the structuring of their knowledge model by sharing their knowledge through critiquing, testing, verifying, adopting, and updating of the concept models (ontologies). We will use cloud storage for all data and knowledge base components of the system, such as users, domain ontologies, discussion forums, and semantic wikis that can be accessed and queried by geoscientists in each domain. We will use NoSQL databases such as MongoDB as a service in the cloud environment. MongoDB uses the lightweight JSON format, which makes it convenient and easy to build Web applications using

  2. Parallelization of the AliRoot event reconstruction by performing a semi- automatic source-code transformation

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    side bus or processor interconnections. Parallelism can only result in performance gain if the memory usage is optimized, memory locality is improved and the communication between threads is minimized. But the domain of concurrent programming has become a field for highly skilled experts, as the implementation of multithreading is difficult, error-prone and labor-intensive. A full re-implementation for parallel execution of existing offline frameworks, like AliRoot in ALICE, is thus unaffordable. An alternative method is to use a semi-automatic source-to-source transformation to obtain a simple parallel design with almost no interference between threads. This reduces the need of rewriting the develop...

  3. Speaker diarization and speech recognition in the semi-automatization of audio description: An exploratory study on future possibilities?

    Directory of Open Access Journals (Sweden)

    Héctor Delgado

    2015-12-01

    This article presents an overview of the technological components used in the process of audio description, and suggests a new scenario in which speech recognition, machine translation, and text-to-speech, with the corresponding human revision, could be used to increase audio description provision. The article focuses on a process in which both speaker diarization and speech recognition are used in order to obtain a semi-automatic transcription of the audio description track. The technical process is presented and experimental results are summarized.

  4. Speaker diarization and speech recognition in the semi-automatization of audio description: An exploratory study on future possibilities?

    Directory of Open Access Journals (Sweden)

    Héctor Delgado

    2015-06-01

    This article presents an overview of the technological components used in the process of audio description, and suggests a new scenario in which speech recognition, machine translation, and text-to-speech, with the corresponding human revision, could be used to increase audio description provision. The article focuses on a process in which both speaker diarization and speech recognition are used in order to obtain a semi-automatic transcription of the audio description track. The technical process is presented and experimental results are summarized.

  5. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds

    Energy Technology Data Exchange (ETDEWEB)

    Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da, E-mail: barbara@ird.gov.b, E-mail: fabio@ird.gov.b, E-mail: pedrodio@ird.gov.b, E-mail: radier@ird.gov.b, E-mail: delgado@ird.gov.b, E-mail: wanderley@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Lopes, Ricardo T., E-mail: ricardo@lin.ufrj.b [Universidade Federal do Rio de Janeiro (LIN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Lab. de Instrumentacao Nuclear

    2011-10-26

    The method of high-precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a primary standard of potassium dichromate. The combined standard uncertainty in the determination of the total uranium concentration was of the order of 0.01%, which compares favourably with the methods traditionally used by nuclear installations, whose uncertainty is of the order of 0.1%.

  6. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    Science.gov (United States)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted more attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is more desirable owing to its advantage of balancing labor cost and 3D effects. The location of the key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity more reliable and reduce the depth propagation errors caused by occlusion. The potential key-frames are localized in terms of clustered color variation and motion intensity. The distance of the key-frame interval is also taken into account to keep the accumulated propagation errors under control and guarantee minimal user interaction. Once their depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom in/out effects, a bi-directional depth propagation scheme is adopted in which a non-key-frame is interpolated from two adjacent key-frames. The experimental results show that the proposed scheme performs better than existing 2D-to-3D schemes with a fixed key-frame interval.
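
    The depth-propagation step can be sketched as a cross-bilateral weighting of key-frame depths; the kernel size and sigmas below are assumptions, and the motion shift of the published "shifted" variant is ignored for brevity:

```python
import numpy as np

def propagate_depth(key_rgb, key_depth, cur_rgb, radius=5,
                    sigma_s=3.0, sigma_c=10.0):
    """Estimate a depth map for the current frame from one key frame."""
    h, w, _ = cur_rgb.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
            x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial closeness to the target pixel...
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # ...and colour similarity between key-frame neighbours and the
            # current pixel (a cross/joint bilateral weight).
            diff = key_rgb[y0:y1, x0:x1].astype(float) - cur_rgb[y, x].astype(float)
            w_c = np.exp(-(diff ** 2).sum(axis=2) / (2 * sigma_c ** 2))
            weights = w_s * w_c
            out[y, x] = (weights * key_depth[y0:y1, x0:x1]).sum() / weights.sum()
    return out
```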

  7. Comparison of 2D radiography and a semi-automatic CT-based 3D method for measuring change in dorsal angulation over time in distal radius fractures

    Energy Technology Data Exchange (ETDEWEB)

    Christersson, Albert; Larsson, Sune [Uppsala University, Department of Orthopaedics, Uppsala (Sweden); Nysjoe, Johan; Malmberg, Filip; Sintorn, Ida-Maria; Nystroem, Ingela [Uppsala University, Centre for Image Analysis, Uppsala (Sweden); Berglund, Lars [Uppsala University, Uppsala Clinical Research Centre, UCR Statistics, Uppsala (Sweden)

    2016-06-15

    The aim of the present study was to compare the reliability and agreement between a computed tomography-based method (CT) and digitalised 2D radiographs (XR) when measuring change in dorsal angulation over time in distal radius fractures. Radiographs from 33 distal radius fractures treated with external fixation were retrospectively analysed. All fractures had been examined using both XR and CT at six times over 6 months postoperatively. The changes in dorsal angulation between the first reference images and the following examinations in every patient were calculated from 133 follow-up measurements by two assessors and repeated at two different time points. The measurements were analysed using Bland-Altman plots, comparing intra- and inter-observer agreement within and between XR and CT. The mean differences in intra- and inter-observer measurements for XR, CT, and between XR and CT were close to zero, implying equal validity. The average intra- and inter-observer limits of agreement for XR, CT, and between XR and CT were ± 4.4°, ± 1.9° and ± 6.8°, respectively. For scientific purposes, the reliability of XR seems unacceptably low when measuring changes in dorsal angulation in distal radius fractures, whereas the reliability of the semi-automatic CT-based method was higher and is therefore preferable when a more precise method is required. (orig.)

  8. Creation of individual ideally shaped stents using multi-slice CT: in vitro results from the semi-automatic virtual stent (SAVS) designer

    International Nuclear Information System (INIS)

    Hyodoh, Hideki; Katagiri, Yoshimi; Hyodoh, Kazusa; Akiba, Hidenari; Hareyama, Masato; Sakai, Toyohiko

    2005-01-01

    To plan stent-grafting for thoracic aortic aneurysm with complicated morphology, we created a virtual stent-grafting program [Semi Automatic Virtual Stent (SAVS) designer] using three-dimensional CT data. The usefulness of the SAVS designer was evaluated by measurement of transformed anatomical and straight stents. Curved model images (source, multi-planer reconstruction and volume rendering) were created, and a hollow virtual stent was produced by the SAVS designer. A straight Nitinol stent was transformed to match the curved configuration of the virtual stent. The accuracy of the anatomical stent was evaluated by experimental strain phantom studies in comparison with the straight stent. Mean separation length was 0 mm in the anatomical stent [22 mm outer diameter (OD)] and 5 mm in the straight stent (22 mm OD). The straight stent strain voltage was four times that of the anatomical stent at the stent end. The anatomical stent is useful because it fits the curved structure of the aorta and reduces the strain force compared to the straight stent. The SAVS designer can help to design and produce the anatomical stent. (orig.)

  9. SU-E-J-275: Review - Computerized PET/CT Image Analysis in the Evaluation of Tumor Response to Therapy

    International Nuclear Information System (INIS)

    Lu, W; Wang, J; Zhang, H

    2015-01-01

    Purpose: To review the literature on using computerized PET/CT image analysis for the evaluation of tumor response to therapy. Methods: We reviewed and summarized more than 100 papers that used computerized image analysis techniques for the evaluation of tumor response with PET/CT. This review mainly covered four aspects: image registration, tumor segmentation, image feature extraction, and response evaluation. Results: Although rigid image registration is straightforward, it has been shown to achieve good alignment between baseline and evaluation scans. Deformable image registration has been shown to improve the alignment when complex deformable distortions occur due to tumor shrinkage, weight loss or gain, and motion. Many semi-automatic tumor segmentation methods have been developed on PET. A comparative study revealed benefits of high levels of user interaction with simultaneous visualization of CT images and PET gradients. On CT, semi-automatic methods have been developed only for tumors that show a marked difference in CT attenuation between the tumor and the surrounding normal tissues. Quite a few multi-modality segmentation methods have been shown to improve accuracy compared to single-modality algorithms. Advanced PET image features considering spatial information, such as tumor volume, tumor shape, total glycolytic volume, histogram distance, and texture features, have been found more informative than the traditional SUVmax for the prediction of tumor response. Advanced CT features, including volumetric, attenuation, morphologic, structure, and texture descriptors, have also been found advantageous over the traditional RECIST and WHO criteria in certain tumor types. Predictive models based on machine learning techniques have been constructed to correlate selected image features with response. These models showed improved performance compared to current methods that use a cutoff value of a single measurement for tumor response. Conclusion: This review showed that
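
    A few of the PET features discussed above are simple to compute from an SUV volume and a tumor mask; a hedged sketch, with an assumed voxel volume and invented names:

```python
import numpy as np

def pet_features(suv, tumor_mask, voxel_ml=0.064):
    """suv: 3-D SUV volume; tumor_mask: boolean mask from segmentation."""
    vals = suv[tumor_mask]
    mtv_ml = float(tumor_mask.sum() * voxel_ml)   # metabolic tumor volume
    suv_mean = float(vals.mean())
    return {
        'SUVmax': float(vals.max()),
        'SUVmean': suv_mean,
        'MTV_ml': mtv_ml,
        'TLG': suv_mean * mtv_ml,                 # total lesion glycolysis
    }

# Demo on synthetic data: a hot cubic "lesion" in a cold background
suv = np.full((20, 20, 20), 0.5)
mask = np.zeros_like(suv, dtype=bool)
mask[8:12, 8:12, 8:12] = True
suv[mask] = 6.0
print(pet_features(suv, mask))
```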

  10. SU-E-J-275: Review - Computerized PET/CT Image Analysis in the Evaluation of Tumor Response to Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Lu, W; Wang, J; Zhang, H [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: To review the literature on using computerized PET/CT image analysis for the evaluation of tumor response to therapy. Methods: We reviewed and summarized more than 100 papers that used computerized image analysis techniques for the evaluation of tumor response with PET/CT. This review mainly covered four aspects: image registration, tumor segmentation, image feature extraction, and response evaluation. Results: Although rigid image registration is straightforward, it has been shown to achieve good alignment between baseline and evaluation scans. Deformable image registration has been shown to improve the alignment when complex deformable distortions occur due to tumor shrinkage, weight loss or gain, and motion. Many semi-automatic tumor segmentation methods have been developed on PET. A comparative study revealed benefits of high levels of user interaction with simultaneous visualization of CT images and PET gradients. On CT, semi-automatic methods have been developed only for tumors that show a marked difference in CT attenuation between the tumor and the surrounding normal tissues. Quite a few multi-modality segmentation methods have been shown to improve accuracy compared to single-modality algorithms. Advanced PET image features considering spatial information, such as tumor volume, tumor shape, total glycolytic volume, histogram distance, and texture features, have been found more informative than the traditional SUVmax for the prediction of tumor response. Advanced CT features, including volumetric, attenuation, morphologic, structure, and texture descriptors, have also been found advantageous over the traditional RECIST and WHO criteria in certain tumor types. Predictive models based on machine learning techniques have been constructed to correlate selected image features with response. These models showed improved performance compared to current methods that use a cutoff value of a single measurement for tumor response. Conclusion: This review showed that

  11. Predicting new-onset of postoperative atrial fibrillation in patients undergoing cardiac surgery using semi-automatic reading of perioperative electrocardiograms

    DEFF Research Database (Denmark)

    Gu, Jiwei; Graff, Claus; Melgaard, Jacob

    2015-01-01

    P10 Predicting new-onset postoperative atrial fibrillation in patients undergoing cardiac surgery using semi-automatic reading of perioperative electrocardiograms. Jiwei Gu, Claus Graff, Jacob Melgaard, Søren Lundbye-Christensen, Erik Berg Schmidt, Christian Torp-Pedersen, Kristinn Thorsteinsson, Jan Jesper Andreasen. Aalborg, Denmark. Background: Postoperative new-onset atrial fibrillation (POAF) is the most common arrhythmia after cardiac surgery. The aim of this study was to evaluate whether semi-automatic readings of perioperative electrocardiograms (ECGs) are of any value in predicting POAF after... ECG monitoring. A semi-automatic machine capable of reading different parameters of digitalized ECGs was used to read both lead-specific (P/QRS/T amplitudes/intervals) and global measurements (P-duration/QRS-duration/PR-interval/QT/heart rate/hypertrophy). Results: We divided the patients into two...

  12. Semi-Automatic Evaluation of Intrasubject and Inter-session Variability of Cerebral Activation Areas by Functional Magnetic Resonance Imaging (fMRI)

    International Nuclear Information System (INIS)

    Rascovsky, Simon; Delgado, Jorge Andres; Sanz, Alexander

    2008-01-01

    To verify the reproducibility of word generation, text comprehension, antonym generation and motor/somatosensory fMRI protocols in a test-retest evaluation through a semi-automatic stereotactic localization method for activation comparison. Methods: Word generation, text comprehension, antonym generation and motor/somatosensory fMRI paradigms were applied to 8 healthy subjects in two separate sessions, and inter-session activations were evaluated through conjunction and cluster analysis. Results: Activations according to Brodmann areas were reproducible in 50%, 62.5% and 75% of cases for word generation, text comprehension and antonym generation, respectively. For the motor paradigms, conjoined right motor activations were found in 86% of subjects, and conjoined left activations in 100% of subjects. Conclusions: The semi-automatic method of determining inter-session areas of common activation allows functional cytoarchitectonic localization of fMRI activations with minimal intervention, and can be used as a quality control measure for the different paradigms used in fMRI, minimizing observer bias.

  13. Semi-automatic surface sediment sampling system - A prototype to be implemented in bivalve fishing surveys

    Science.gov (United States)

    Rufino, Marta M.; Baptista, Paulo; Pereira, Fábio; Gaspar, Miguel B.

    2018-01-01

    In the current work we propose a new method to sample surface sediment during bivalve fishing surveys. Fishing institutes all around the world carry out regular surveys with the aim of monitoring the stocks of commercial species. These surveys often comprise more than one hundred sampling stations and cover large geographical areas. Although superficial sediment grain sizes are among the main drivers of benthic communities and provide crucial information for studies on coastal dynamics, overall there is a strong lack of this type of data, possibly because traditional surface sediment sampling methods use grabs that require considerable time and effort to be deployed on a regular basis or over large areas. In view of these aspects, we developed an easy and inexpensive method to sample superficial sediments during bivalve fisheries monitoring surveys, without increasing survey time or human resources. The method was successfully evaluated and validated during a typical bivalve survey carried out on the northwest coast of Portugal, confirming that it did not interfere with the survey objectives. Furthermore, the method was validated by collecting samples with a traditional Van Veen grab (traditional method), which showed a similar grain size composition to the samples collected by the new method at the same localities. We recommend that the procedure be implemented on regular bivalve fishing surveys, together with an image analysis system to analyse the collected samples. The new method will provide a substantial quantity of data on surface sediment in coastal areas in an inexpensive and efficient manner, with high potential application in different fields of research.

  14. Graphical user interface (GUIDE) and semi-automatic system for the acquisition of anaglyphs

    Science.gov (United States)

    Canchola, Marco A.; Arízaga, Juan A.; Cortés, Obed; Tecpanecatl, Eduardo; Cantero, Jose M.

    2013-09-01

    Diverse educational experiences have shown greater acceptance of ideas related to science among children than among adults. That fact, together with their great curiosity, makes scientific outreach efforts aimed at children likely to succeed. Moreover, 3D digital images have become a topic of growing importance in various areas, mainly entertainment, film and video games, but also in areas such as medical practice, where they are essential for disease detection. This article presents a system model for 3D images for educational purposes that allows students of various grade levels, school and college, to gain a first approach to image processing, explaining the use of filters for stereoscopic images that give the brain the impression of depth. The system is based on two elements: hardware centered on an Arduino board, and software based on Matlab. The paper presents the design and construction of each of the elements, information on the images obtained, and finally how users can interact with the device.
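
    The kind of filtering this record describes can be illustrated compactly. Below is a minimal sketch of red-cyan anaglyph composition from a stereo pair, assuming Python with NumPy and Pillow rather than the record's Matlab setup; the file names are hypothetical. The red channel is taken from the left-eye image and green/blue from the right-eye image, so tinted glasses deliver a slightly different view to each eye.

```python
# Minimal sketch of red-cyan anaglyph composition (illustrative stand-in for
# the Matlab-based system in the record). File names are hypothetical.
import numpy as np
from PIL import Image

def make_anaglyph(left_path: str, right_path: str) -> Image.Image:
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    anaglyph = np.zeros_like(left)
    anaglyph[..., 0] = left[..., 0]     # red channel from the left-eye image
    anaglyph[..., 1:] = right[..., 1:]  # green and blue from the right-eye image
    return Image.fromarray(anaglyph)

# make_anaglyph("left.png", "right.png").save("anaglyph.png")
```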

  15. Spinal imaging and image analysis

    CERN Document Server

    Yao, Jianhua

    2015-01-01

    This book is instrumental in building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications. Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities, is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician’s ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience, as contributions were written by both clinicians and researchers, which reflects the inte...

  16. Retinal Imaging and Image Analysis

    Science.gov (United States)

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:22275207

  17. 10 CFR Appendix J1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Science.gov (United States)

    2010-01-01

    Title 10 (Energy), 2010 edition: Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes Washers. The provisions of this appendix J1... means for determining the energy consumption of a clothes washer with an adaptive control system...

  18. RFA-cut: Semi-automatic segmentation of radiofrequency ablation zones with and without needles via optimal s-t-cuts.

    Science.gov (United States)

    Egger, Jan; Busse, Harald; Brandmaier, Philipp; Seider, Daniel; Gawlitza, Matthias; Strocka, Steffen; Voglreiter, Philip; Dokter, Mark; Hofmann, Michael; Kainz, Bernhard; Chen, Xiaojun; Hann, Alexander; Boechat, Pedro; Yu, Wei; Freisleben, Bernd; Alhonnoro, Tuomas; Pollari, Mika; Moche, Michael; Schmalstieg, Dieter

    2015-01-01

    In this contribution, we present a semi-automatic segmentation algorithm for radiofrequency ablation (RFA) zones via optimal s-t-cuts. Our interactive graph-based approach builds upon a polyhedron to construct the graph and was specifically designed for computed tomography (CT) acquisitions from patients that had RFA treatments of Hepatocellular Carcinomas (HCC). For evaluation, we used twelve post-interventional CT datasets from the clinical routine and as evaluation metric we utilized the Dice Similarity Coefficient (DSC), which is commonly accepted for judging computer aided medical segmentation tasks. Compared with pure manual slice-by-slice expert segmentations from interventional radiologists, we were able to achieve a DSC of about eighty percent, which is sufficient for our clinical needs. Moreover, our approach was able to handle images containing (DSC=75.9%) and not containing (78.1%) the RFA needles still in place. Additionally, we found no statistically significant difference (p<;0.423) between the segmentation results of the subgroups for a Mann-Whitney test. Finally, to the best of our knowledge, this is the first time a segmentation approach for CT scans including the RFA needles is reported and we show why another state-of-the-art segmentation method fails for these cases. Intraoperative scans including an RFA probe are very critical in the clinical practice and need a very careful segmentation and inspection to avoid under-treatment, which may result in tumor recurrence (up to 40%). If the decision can be made during the intervention, an additional ablation can be performed without removing the entire needle. This decreases the patient stress and associated risks and costs of a separate intervention at a later date. Ultimately, the segmented ablation zone containing the RFA needle can be used for a precise ablation simulation as the real needle position is known.
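
    The Dice Similarity Coefficient (DSC) reported above is simple to reproduce; a minimal sketch in Python/NumPy, with synthetic masks standing in for real segmentations:

```python
# Minimal sketch of the Dice Similarity Coefficient used to compare
# algorithmic and manual segmentations; masks are boolean arrays.
import numpy as np

def dice_coefficient(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Example with two overlapping synthetic 2D masks:
a = np.zeros((100, 100), bool); a[20:60, 20:60] = True
b = np.zeros((100, 100), bool); b[30:70, 30:70] = True
print(f"DSC = {dice_coefficient(a, b):.3f}")
```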

  19. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action.

    Science.gov (United States)

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan

    2018-01-01

    Computer-assisted technologies based on algorithmic software segmentation are an increasing topic of interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach to be used in clinical practice. In this retrospective, randomized, controlled trial, the accuracy and accordance of the open-source based segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy, using 10 CT lower jaw datasets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance. Overall, semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values of over 85% and Hausdorff distances below 33.5 voxels could be achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p > 0.94) for any of the comparisons made between the two groups. Completely functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed by the presented interactive open-source based approach. In the cranio-maxillofacial complex the used method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Due to its open-source basis the used method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with a...
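
    The record's second metric, the Hausdorff distance (reported above in voxels), can be sketched along similar lines; this uses SciPy's directed Hausdorff distance on voxel coordinates and is an illustration, not the study's own implementation:

```python
# Minimal sketch of the symmetric Hausdorff distance (in voxels) between two
# binary segmentation masks, via SciPy's directed Hausdorff on point sets.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_voxels(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    pts_a = np.argwhere(mask_a)  # voxel coordinates of segmentation A
    pts_b = np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```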

  20. Semi-Automatic Science Workflow Synthesis for High-End Computing on the NASA Earth Exchange

    Data.gov (United States)

    National Aeronautics and Space Administration — Enhance capabilities for collaborative data analysis and modeling in Earth sciences. Develop components for automatic workflow capture, archiving and management....

  1. ACCELERATION OF TOPOGRAPHIC MAP PRODUCTION USING SEMI-AUTOMATIC DTM FROM DSM RADAR DATA

    Directory of Open Access Journals (Sweden)

    A. Rizaldy

    2016-06-01

    Full Text Available Badan Informasi Geospasial (BIG) is the government institution in Indonesia responsible for providing topographic maps at several map scales. For medium map scales, e.g. 1:25,000 or 1:50,000, a DSM from radar data is a very good solution, since radar is able to penetrate the clouds that usually cover tropical areas in Indonesia. The radar DSM is produced using radargrammetry and interferometry techniques. The conventional method of DTM production uses a “stereo-mate”, the stereo image created from the radar DSM and the ORRI (Ortho-Rectified Radar Image), and a human operator digitizes mass points and breaklines manually using a digital stereoplotter workstation. This technique is accurate but very costly and time-consuming, and it also needs a large number of human operators. Since the DSMs are already generated, it is possible to filter the DSM to a DTM using several techniques. This paper studies the possibility of DSM-to-DTM filtering using techniques usually applied in LIDAR point cloud filtering. The accuracy of this method is also calculated using a sufficient number of check points. If the accuracy meets the requirement, this method has great potential to accelerate the production of topographic maps in Indonesia.
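
    One classic ground-filtering idea from LIDAR processing of the kind the paper evaluates is grayscale morphological opening, which removes off-terrain objects narrower than a structuring window. A minimal sketch, assuming a DSM held as a NumPy raster; the window size and height tolerance are assumed parameters, not values from the paper:

```python
# Minimal sketch of DSM-to-DTM filtering by grayscale morphological opening.
# Window size (pixels) and height tolerance (metres) are assumed parameters.
import numpy as np
from scipy.ndimage import grey_opening

def dsm_to_dtm(dsm: np.ndarray, window: int = 21, tol: float = 0.5) -> np.ndarray:
    # Opening suppresses objects (buildings, trees) narrower than the window.
    ground_estimate = grey_opening(dsm, size=(window, window))
    dtm = dsm.copy()
    # Replace cells standing higher than the ground estimate by more than tol.
    off_ground = dsm - ground_estimate > tol
    dtm[off_ground] = ground_estimate[off_ground]
    return dtm
```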

  2. Semi-automatic Term Extraction for an isiZulu Linguistic Terms ...

    African Journals Online (AJOL)

    user

    This paper advances the use of frequency analysis and keyword analysis as strategies to extract terms for the compilation of a dictionary of isiZulu linguistic terms. The study uses the isiZulu National Corpus (INC) of about 1.2 million tokens as a reference corpus, as well as an LSP corpus of about 100,000 tokens as a ...
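
    Keyword analysis of the kind described here typically ranks words by the log-likelihood keyness of their frequency in the special-purpose (LSP) corpus against the reference corpus. A minimal sketch, with tiny hypothetical word lists standing in for the INC and the LSP corpus:

```python
# Minimal sketch of log-likelihood keyness (Rayson & Garside style) for term
# extraction; the two toy corpora below are hypothetical stand-ins.
import math
from collections import Counter

def log_likelihood(f_study, n_study, f_ref, n_ref):
    # Expected frequencies under equal relative frequency in both corpora.
    e1 = n_study * (f_study + f_ref) / (n_study + n_ref)
    e2 = n_ref * (f_study + f_ref) / (n_study + n_ref)
    ll = 0.0
    if f_study:
        ll += f_study * math.log(f_study / e1)
    if f_ref:
        ll += f_ref * math.log(f_ref / e2)
    return 2.0 * ll

study = Counter("ibizo isenzo ibizo isichasiso ibizo".split())  # LSP corpus
ref = Counter("umuntu ibizo umuntu indlu".split())              # reference
n_study, n_ref = sum(study.values()), sum(ref.values())
for word in study:
    print(word, round(log_likelihood(study[word], n_study, ref.get(word, 0), n_ref), 2))
```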

  3. SU-C-201-04: Quantification of Perfusion Heterogeneity Based On Texture Analysis for Fully Automatic Detection of Ischemic Deficits From Myocardial Perfusion Imaging

    International Nuclear Information System (INIS)

    Fang, Y; Huang, H; Su, T

    2015-01-01

    Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts of applying such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring the image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard of coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under the curve (AUC). Those indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve a good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and specificity of 77%. Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination...
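
    A minimal sketch of the core idea, GLCM texture features scored by ROC analysis, is given below. It assumes scikit-image (0.19 or later, which renamed greycomatrix to graycomatrix) and scikit-learn, and uses synthetic patches and labels in place of the Tl-201 data; it illustrates the technique, not CGITA itself:

```python
# Minimal sketch: GLCM contrast as a heterogeneity score, evaluated by AUC.
# Patches and labels are synthetic stand-ins for clinical data.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.metrics import roc_auc_score

def glcm_contrast(patch: np.ndarray) -> float:
    # 8-bit patch -> grey-level co-occurrence matrix -> contrast feature
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return float(graycoprops(glcm, "contrast")[0, 0])

rng = np.random.default_rng(0)
homogeneous = [rng.integers(100, 110, (32, 32), dtype=np.uint8) for _ in range(20)]
heterogeneous = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
scores = [glcm_contrast(p) for p in homogeneous + heterogeneous]
labels = [0] * 20 + [1] * 20   # 1 = "heterogeneous/ischemic" stand-in
print("AUC =", roc_auc_score(labels, scores))
```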

  4. Retinal imaging and image analysis

    NARCIS (Netherlands)

    Abramoff, M.D.; Garvin, Mona K.; Sonka, Milan

    2010-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world, the review is devoted to retinal imaging and image analysis methods and their clinical implications.

  5. Semi-automatic registration of 3D orthodontics models from photographs

    Science.gov (United States)

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin

    2013-03-01

    In orthodontics, a common practice used to diagnose and plan treatment is the dental cast. After digitization by a CT scan or a laser scanner, the obtained 3D surface models can feed orthodontic numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of the dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration based on photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation to apply to the mandible to bring it into contact with the maxilla may be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in the 2D images. Concurrently, curvature information is used to detect the corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.
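
    The registration criterion described above, minimizing reprojection errors over a rigid transformation, can be sketched briefly. The example below uses a 2D rotation plus translation for brevity (the paper estimates a 3D rigid transform together with a projection matrix) and hypothetical matched point pairs:

```python
# Minimal sketch of rigid registration by least-squares minimization of
# point reprojection errors; 2D for brevity, matched points are hypothetical.
import numpy as np
from scipy.optimize import least_squares

model_pts = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]])
image_pts = np.array([[2.1, 1.0], [11.9, 2.7], [11.0, 7.6], [1.2, 5.9]])

def residuals(params):
    angle, tx, ty = params
    c, s = np.cos(angle), np.sin(angle)
    rotation = np.array([[c, -s], [s, c]])
    transformed = model_pts @ rotation.T + [tx, ty]
    return (transformed - image_pts).ravel()  # stacked x/y reprojection errors

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0])
print("angle, tx, ty =", np.round(fit.x, 3))
```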

  6. A SEMI-AUTOMATIC RULE SET BUILDING METHOD FOR URBAN LAND COVER CLASSIFICATION BASED ON MACHINE LEARNING AND HUMAN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    H. Y. Gu

    2017-09-01

    Full Text Available A classification rule set, which refers to features and decision rules, is important for land cover classification. The selection of features and decision rules is based on an iterative trial-and-error approach often utilized in GEOBIA; however, this is time-consuming and has poor versatility. This study puts forward a rule set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets effectively, overcoming the iterative trial-and-error approach. Human knowledge is used to address the insufficient use of prior knowledge by existing machine learning methods and to improve the versatility of the rule sets. A two-step workflow is introduced: first, an initial rule set is built based on Random Forest and a CART decision tree; second, the initial rule set is analyzed and validated based on human knowledge, where we use statistical confidence intervals to determine its thresholds. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method can determine a rule set for land cover classification semi-automatically, and that there are stable features for the different land cover classes.
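
    The first step of the workflow, learning an initial rule set with a CART decision tree and exposing it for expert review, can be sketched as follows; the feature names and training samples are hypothetical stand-ins for the object features used in the paper:

```python
# Minimal sketch of deriving an initial, human-readable rule set from a CART
# decision tree; features and samples are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[0.42, 12.0], [0.05, 2.3], [0.38, 9.5], [0.08, 1.1]]   # [ndvi, height]
y = ["tree", "road", "tree", "road"]
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Printed rules can then be inspected, thresholded and refined by an expert.
print(export_text(tree, feature_names=["ndvi", "height"]))
```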

  7. Semi-Automatic Registration of Airborne and Terrestrial Laser Scanning Data Using Building Corner Matching with Boundaries as Reliability Check

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    2013-11-01

    Full Text Available Data registration is a prerequisite for the integration of multi-platform laser scanning in various applications. A new approach is proposed for the semi-automatic registration of airborne and terrestrial laser scanning data with buildings without eaves. Firstly, an automatic calculation procedure for thresholds in density of projected points (DoPP method is introduced to extract boundary segments from terrestrial laser scanning data. A new algorithm, using a self-extending procedure, is developed to recover the extracted boundary segments, which then intersect to form the corners of buildings. The building corners extracted from airborne and terrestrial laser scanning are reliably matched through an automatic iterative process in which boundaries from two datasets are compared for the reliability check. The experimental results illustrate that the proposed approach provides both high reliability and high geometric accuracy (average error of 0.44 m/0.15 m in horizontal/vertical direction for corresponding building corners for the final registration of airborne laser scanning (ALS and tripod mounted terrestrial laser scanning (TLS data.

  8. From Point Clouds to Building Information Models: 3D Semi-Automatic Reconstruction of Indoors of Existing Buildings

    Directory of Open Access Journals (Sweden)

    Hélène Macher

    2017-10-01

    Full Text Available The creation of as-built Building Information Models requires the acquisition of the as-is state of existing buildings. Laser scanners are widely used to achieve this goal, since they permit the collection of information about object geometry in the form of point clouds and provide a large amount of accurate data in a very fast way and with a high level of detail. Unfortunately, the scan-to-BIM (Building Information Model) process currently remains largely a manual process, which is time-consuming and error-prone. In this paper, a semi-automatic approach is presented for the 3D reconstruction of the indoors of existing buildings from point clouds. Several segmentations are performed so that point clouds corresponding to grounds, ceilings and walls are extracted. Based on these point clouds, walls and slabs of buildings are reconstructed and described in the IFC format in order to be integrated into BIM software. The approach is assessed using two datasets. The evaluation items are the degree of automation, the transferability of the approach and the geometric quality of the results of the 3D reconstruction. Additionally, quality indexes are introduced to inspect the results and detect potential reconstruction errors.

  9. Integrating different tracking systems in football: multiple camera semi-automatic system, local position measurement and GPS technologies.

    Science.gov (United States)

    Buchheit, Martin; Allen, Adam; Poon, Tsz Kit; Modonutti, Mattia; Gregson, Warren; Di Salvo, Valter

    2014-12-01

    During the past decade substantial development of computer-aided tracking technology has occurred. Therefore, we aimed to provide calibration equations to allow the interchangeability of different tracking technologies used in soccer. Eighty-two highly trained soccer players (U14-U17) were monitored during training and one match. Player activity was collected simultaneously with a semi-automatic multiple-camera system (Prozone), local position measurement (LPM) technology (Inmotio) and two global positioning systems (GPSports and VX). Data were analysed with respect to three different field dimensions (small, medium and large). The systems were compared, and calibration equations (linear regression models) between each system were calculated for each field dimension. Most metrics differed between the four systems, with the magnitude of the differences dependent on both pitch size and the variable of interest. Trivial-to-small between-system differences in total distance were noted. However, high-intensity running distance (>14.4 km·h⁻¹) was slightly-to-moderately greater when tracked with Prozone, and accelerations small-to-very-largely greater with LPM. For most of the equations, the typical error of the estimate was of a moderate magnitude. Interchangeability of the different tracking systems is possible with the provided equations, but care is required given their moderate typical error of the estimate.
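
    A calibration equation of the kind the paper provides is an ordinary least-squares fit mapping one system's measure onto another's. A minimal sketch, with paired distances invented purely for illustration:

```python
# Minimal sketch of a between-system calibration equation (linear regression).
# The paired total-distance values below are hypothetical, not study data.
import numpy as np

gps = np.array([5213.0, 4870.0, 6102.0, 5555.0])      # system A distances (m)
camera = np.array([5330.0, 4945.0, 6240.0, 5690.0])   # same drills, system B (m)

slope, intercept = np.polyfit(gps, camera, deg=1)
print(f"camera ≈ {slope:.3f} * gps + {intercept:.1f}")
# Interchange data by applying the fitted equation to new GPS measurements.
```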

  10. Semi-Automatic Modelling of Building FAÇADES with Shape Grammars Using Historic Building Information Modelling

    Science.gov (United States)

    Dore, C.; Murphy, M.

    2013-02-01

    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.

  11. Chromosome painting in biological dosimetry: Semi-automatic system to score stable chromosome aberrations

    International Nuclear Information System (INIS)

    Garcia-Sagredo, J.M.; Vallcorba, I.; Sanchez-Hombre, M.C.; Ferro, M.T.; San Roman Cos-Gayon, C.; Santos, A.; Malpica, N.; Ortiz, C.

    1997-01-01

    From the beginning of the description of the procedure of chromosome painting by fluorescence in situ hybridization (FISH), its possible application to scoring induced chromosomal aberrations after radiation exposure was considered. With chromosome painting it is possible to detect exchanges between chromosomes, which has been validated for radiation exposure. Translocation scoring by FISH, in contrast to scoring of unstable dicentrics, mainly detects stable chromosome aberrations that do not disappear, and therefore makes it possible to quantify delayed acute exposures or chronic cumulative exposures. The large number of cells that have to be analyzed for high accuracy, especially when dealing with low radiation doses, makes it almost imperative to use an automatic analysis system. After validating translocation scoring by FISH in our laboratory, we evaluated the ability and sensitivity to detect chromosomal aberrations by chromosome painting with the different paint probes used, showing that any combination of paint probes can be used to score induced chromosomal aberrations. Our group has developed a FISH analysis system that is currently being adapted for translocation scoring analysis. It includes systematic error correction and internal control probes. The performance tests carried out show that 9,000 cells can be analyzed in 10 h using a Sparc 4/370. Although a higher throughput is expected with a faster computer, for large population screening or very low radiation doses this performance still has to be improved. (author)

  12. Hyperspectral classification of grassland species: towards a UAS application for semi-automatic field surveys

    Science.gov (United States)

    Lopatin, Javier; Fassnacht, Fabian E.; Kattenborn, Teja; Schmidtlein, Sebastian

    2017-04-01

    Grasslands are among the ecosystems that have been most strongly altered by anthropogenic impacts during the past decades, affecting their structural and functional composition. To monitor the spatial and/or temporal changes of these environments, a reliable field survey is first needed. As quality relevés are usually expensive and time-consuming, the amount of information available is usually poor or not well spatially distributed at the regional scale. In the present study, we investigate the possibility of a semi-automated method for repeated surveys of monitoring sites. We analyze the applicability of very high spatial resolution hyperspectral data to classify grassland species at the level of individuals. The AISA+ imaging spectrometer mounted on a scaffold was used to scan 1 m² grassland plots and assess the impact of four sources of variation on the predicted species cover: (1) the spatial resolution of the scans, (2) the species number and structural diversity, (3) the species cover, and (4) the species functional types (bryophytes, forbs and graminoids). We found that the spatial resolution and the diversity level (mainly structural diversity) were the most important sources of variation for the proposed approach. A spatial resolution below 1 cm produced relatively high model performance, while predictions with pixel sizes over that threshold produced inadequate results. Areas with low interspecies overlap reached median classification values of 0.8 (kappa). In contrast, results were not satisfactory in plots with frequent interspecies overlap in multiple layers. By means of a bootstrapping procedure, we found that areas with shadows and mixed pixels introduce uncertainties into the classification. We conclude that the application of very high resolution hyperspectral remote sensing as a robust alternative or supplement to field surveys is possible for environments with low structural heterogeneity. This study presents the first try of a...

  13. Semi-automatic building extraction in informal settlements from high-resolution satellite imagery

    Science.gov (United States)

    Mayunga, Selassie David

    The extraction of man-made features from digital remotely sensed images is considered an important step underpinning the management of human settlements in any country. Man-made features, and buildings in particular, are required for a variety of applications such as urban planning, the creation of geographic information system (GIS) databases, and urban city models. Traditional man-made feature extraction methods are very expensive in terms of equipment, are labour-intensive, need well-trained personnel, and cannot cope with changing environments, particularly in dense urban settlement areas. This research presents an approach for extracting buildings in dense informal settlement areas using high-resolution satellite imagery. The proposed system uses a novel strategy of extracting a building by measuring a single point at the approximate centre of the building. The fine measurement of the building outline is then effected using a modified snake model. The original snake model on which this framework is based incorporates an external constraint energy term which is tailored to preserving the convergence properties of the snake model; its application to unstructured objects would negatively affect their actual shapes. The external constraint energy term was therefore removed from the original snake model formulation, giving the model the ability to cope with the high variability of building shapes in informal settlement areas. The proposed building extraction system was tested on two areas with different characteristics. The first area was Tungi in Dar es Salaam, Tanzania, where three sites were tested. This area is characterized by informal settlements, which are formed illegally within the city boundaries. The second area was Oromocto in New Brunswick, Canada, where two sites were tested. The Oromocto area is mostly flat and the buildings are constructed using similar materials. Qualitative and quantitative measures were employed to evaluate the accuracy of the results as well as the performance...
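
    The refinement step rests on the snake (active contour) model. Below is a minimal sketch using scikit-image's standard active contour on a synthetic bright blob, initialized as a circle around a single centre point; note that the thesis uses a modified snake without the external constraint energy term, which this off-the-shelf call does not reproduce:

```python
# Minimal sketch of outline refinement with an active contour (snake) around
# a single seed point; synthetic image, scikit-image implementation.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

image = np.zeros((100, 100))
image[30:70, 30:70] = 1.0                      # synthetic "building" blob
image = gaussian(image, sigma=3, preserve_range=True)

# Initialize the snake as a circle around the measured centre point (50, 50).
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([50 + 35 * np.sin(theta), 50 + 35 * np.cos(theta)])

snake = active_contour(image, init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)   # (200, 2): refined outline coordinates (row, col)
```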

  14. An interactive tool for semi-automatic feature extraction of hyperspectral data

    Science.gov (United States)

    Kovács, Zoltán; Szabó, Szilárd

    2016-09-01

    The spectral reflectance of the surface provides valuable information about the environment, which can be used to identify objects (e.g. land cover classification) or to estimate quantities of substances (e.g. biomass). We aimed to develop an MS Excel add-in - Hyperspectral Data Analyst (HypDA) - for multipurpose quantitative analysis of spectral data, written in the VBA programming language. HypDA was designed to calculate spectral indices from spectral data with user-defined formulas (in all possible combinations involving a maximum of 4 bands) and to find the best correlations with the quantitative attribute data of the same object. Different types of regression models reveal the relationships, and the best results are saved in a worksheet. Qualitative variables can also be involved in the analysis, carried out with separability and hypothesis testing, i.e. to find the wavelengths responsible for separating data into predefined groups. HypDA can be used both with hyperspectral imagery and with spectrometer measurements. This bivariate approach requires significantly fewer observations than popular multivariate methods; it can therefore be applied to a wide range of research areas.
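
    HypDA's core loop, forming band-combination indices and ranking them by correlation with an attribute, can be sketched outside Excel as well. Below is a minimal Python illustration restricted to two-band normalized-difference indices; the reflectance matrix and target attribute are hypothetical:

```python
# Minimal sketch of exhaustive two-band normalized-difference index search,
# ranked by correlation with an attribute; data are synthetic stand-ins.
import itertools
import numpy as np

rng = np.random.default_rng(1)
reflectance = rng.random((30, 50))   # 30 samples x 50 spectral bands
attribute = reflectance[:, 40] * 2 + rng.normal(0, 0.05, 30)  # toy target

def nd_index(b1, b2):
    return (reflectance[:, b1] - reflectance[:, b2]) / (
        reflectance[:, b1] + reflectance[:, b2] + 1e-12)

best = max(
    itertools.combinations(range(reflectance.shape[1]), 2),
    key=lambda bands: abs(np.corrcoef(nd_index(*bands), attribute)[0, 1]),
)
print("best band pair:", best)
```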

  15. Development of a semi-automatic alignment tool for accelerated localization of the prostate

    International Nuclear Information System (INIS)

    Hua Chiaho; Lovelock, D. Michael; Mageras, Gikas S.; Katz, Matthew S.; Mechalakos, James; Lief, Eugene P.; Hollister, Timothy; Lutz, Wendell R.; Zelefsky, Michael J.; Ling, Clifton C.

    2003-01-01

    ...-fitting correction, 97% of the scans showed that the entire posterior prostate gland was covered by the PTV given a margin of 6 mm at the rectum-prostate interface and 10 mm elsewhere. In comparison, only 74% and 65% could be achieved by corrections based on daily and weekly bony matching on portal images, respectively. Conclusions: Results show that daily CT extent fitting provides a precise correction of prostate position in terms of CoG. Identifying prostate extents on five axial CT slices at the CT console is less time-consuming compared with daily contouring of the prostate on many slices. Taking advantage of the prostate curvature in the longitudinal direction, this method also eliminates the necessity of identifying the prostate base and apex. Therefore, it is clinically feasible and should provide accelerated localization of the prostate immediately before daily treatment.

  16. Semi-Automatic Mapping of Tidal Cracks in the Fast Ice Region near Zhongshan Station in East Antarctica Using Landsat-8 OLI Imagery

    Directory of Open Access Journals (Sweden)

    Fengming Hui

    2016-03-01

    Full Text Available Tidal cracks are linear features that appear parallel to coastlines in fast ice regions due to the action of periodic and non-periodic sea level oscillations. They can influence energy and heat exchange between the ocean, ice, and atmosphere, as well as human activities. In this paper, the LINE module of the Geomatica 2015 software was used to automatically extract tidal cracks in fast ice regions near the Chinese Zhongshan Station in East Antarctica from Landsat-8 Operational Land Imager (OLI) data with resolutions of 15 m (panchromatic band 8) and 30 m (multispectral bands 1-7). The detected tidal cracks were determined based on matching between the output from the LINE module and manually interpreted tidal cracks in the OLI images. The ratio of the length of detected tidal cracks to the total length of interpreted cracks was used to evaluate the automated detection method. Results show that the vertical direction gradient is a better input to the LINE module than the top-of-atmosphere (TOA) reflectance input for estimating the presence of cracks, regardless of the examined resolution. Data with a resolution of 15 m also give better results in crack detection than data with a resolution of 30 m. The statistics also show that, in the results from the 15-m-resolution data, Band 8 performed best, with values of the above-mentioned ratio of 50.92 and 31.38 percent using the vertical gradient and the TOA reflectance methods, respectively. On the other hand, in the results from the 30-m-resolution data, Band 5 performed best, with ratios of 47.43 and 17.8 percent using the same methods, respectively. This implies that Band 8 was better for tidal crack detection than the multispectral fusion data (Bands 1-7), and Band 5 with a resolution of 30 m was best among the multispectral data. The semi-automatic mapping of tidal cracks will improve the safety of vehicle travel in fast ice regions.

  17. LTRsift: a graphical user interface for semi-automatic classification and postprocessing of de novo detected LTR retrotransposons

    Directory of Open Access Journals (Sweden)

    Steinbiss Sascha

    2012-11-01

    Full Text Available Abstract Background Long terminal repeat (LTR) retrotransposons are a class of eukaryotic mobile elements characterized by a distinctive sequence similarity-based structure. Hence they are well suited for computational identification. Current software allows for a comprehensive genome-wide de novo detection of such elements. The obvious next step is the classification of newly detected candidates resulting in (super-)families. Such a de novo classification approach based on sequence-based clustering of transposon features has been proposed before, resulting in a preliminary assignment of candidates to families as a basis for subsequent manual refinement. However, such a classification workflow is typically split across a heterogeneous set of glue scripts and generic software (for example, spreadsheets), making it tedious for a human expert to inspect, curate and export the putative families produced by the workflow. Results We have developed LTRsift, an interactive graphical software tool for semi-automatic postprocessing of de novo predicted LTR retrotransposon annotations. Its user-friendly interface offers customizable filtering and classification functionality, displaying the putative candidate groups, their members and their internal structure in a hierarchical fashion. To ease manual work, it also supports graphical user interface-driven reassignment, splitting and further annotation of candidates. Export of grouped candidate sets in standard formats is possible. In two case studies, we demonstrate how LTRsift can be employed in the context of a genome-wide LTR retrotransposon survey effort. Conclusions LTRsift is a useful and convenient tool for semi-automated classification of newly detected LTR retrotransposons based on their internal features. Its efficient implementation allows for convenient and seamless filtering and classification in an integrated environment. Developed for life scientists, it is helpful in postprocessing and refining the output of software for predicting LTR retrotransposons.

  18. LTRsift: a graphical user interface for semi-automatic classification and postprocessing of de novo detected LTR retrotransposons.

    Science.gov (United States)

    Steinbiss, Sascha; Kastens, Sascha; Kurtz, Stefan

    2012-11-07

    Long terminal repeat (LTR) retrotransposons are a class of eukaryotic mobile elements characterized by a distinctive sequence similarity-based structure. Hence they are well suited for computational identification. Current software allows for a comprehensive genome-wide de novo detection of such elements. The obvious next step is the classification of newly detected candidates resulting in (super-)families. Such a de novo classification approach based on sequence-based clustering of transposon features has been proposed before, resulting in a preliminary assignment of candidates to families as a basis for subsequent manual refinement. However, such a classification workflow is typically split across a heterogeneous set of glue scripts and generic software (for example, spreadsheets), making it tedious for a human expert to inspect, curate and export the putative families produced by the workflow. We have developed LTRsift, an interactive graphical software tool for semi-automatic postprocessing of de novo predicted LTR retrotransposon annotations. Its user-friendly interface offers customizable filtering and classification functionality, displaying the putative candidate groups, their members and their internal structure in a hierarchical fashion. To ease manual work, it also supports graphical user interface-driven reassignment, splitting and further annotation of candidates. Export of grouped candidate sets in standard formats is possible. In two case studies, we demonstrate how LTRsift can be employed in the context of a genome-wide LTR retrotransposon survey effort. LTRsift is a useful and convenient tool for semi-automated classification of newly detected LTR retrotransposons based on their internal features. Its efficient implementation allows for convenient and seamless filtering and classification in an integrated environment. Developed for life scientists, it is helpful in postprocessing and refining the output of software for predicting LTR retrotransposons.

  19. Treatment choice for osteoarthritis of the knee joint according to semi-automatic MRI based assessment of disease severity

    International Nuclear Information System (INIS)

    Sasho, Takahisa; Suzuki, Masahiko; Nakagawa, Koichi; Ochiai, Nobuyasu; Matsuki, Megumi; Takahashi, Kazuhisa; Moriya, Hideshige

    2008-01-01

    Objective assessment of the disease severity of osteoarthritis of the knee joint (knee OA) is fundamental to establishing an adequate treatment system; regrettably, no such reliable system exists. Grading systems based upon X-ray findings or measurement of joint space narrowing are widely used for this purpose, but they are still far from satisfactory. Our previous study elucidated that measuring the irregularity of the contour of the femoral condyle on MRI (irregularity index) using newly developed software enabled us to assess the disease severity of OA objectively. The advantages of this system are that it expresses severity as a metric variable and that it is semi-automatic. In the present study, we examined the relationship between treatment selection and the irregularity index. Sixty-one medial-type OA knees that received total knee arthroplasty (TKA), arthroscopic surgery (AS), or conservative treatment (CT) were involved. Their X-ray grading and irregularity index were recorded at the time of the corresponding treatment, and the irregularity indices of the groups were compared. For the AS group, pre- and postoperative knee scores employing the JOA score were also examined to study the relationship between the irregularity index and improvement of the knee score. All four parameters that represent the irregularity of the femoral condyle were significantly higher in the TKA group than in the AS group, whereas no significant difference was observed between the AS and CT groups. A negative correlation was observed between the irregularity index and the improvement of the knee score after arthroscopic surgery. Although treatment selection was determined by a skilled knee surgeon in this series, the irregularity index could indicate the adequate timing of TKA. It also served as an indicator to predict the outcome of arthroscopic surgery, and could be used to show the limitations of arthroscopic surgery. Our new system for assessing the disease severity of knee OA can serve as an index to determine treatment options. (author)

  20. Counting radon tracks in Makrofol detectors with the 'image reduction and analysis facility' (IRAF) software package

    International Nuclear Information System (INIS)

    Hernandez, F.; Gonzalez-Manrique, S.; Karlsson, L.; Hernandez-Armas, J.; Aparicio, A.

    2007-01-01

    Makrofol detectors are commonly used for long-term radon (²²²Rn) measurements in houses, schools and workplaces. The use of this type of passive detector for the determination of radon concentrations requires the counting of the nuclear tracks produced by alpha particles on the detecting material. The 'image reduction and analysis facility' (IRAF) software package is commonly used in astronomical applications. It allows detailed counting and mapping of sky sections where stars are grouped very closely, even forming clusters. In order to count the nuclear tracks in our Makrofol radon detectors, we have developed an interdisciplinary application that takes advantage of the similarity that exists between counting stars in a dark sky and counting tracks in a track-etch detector. Thus, a low-cost semi-automatic system has been set up in our laboratory which utilises a commercially available desktop scanner and the IRAF software package. The proposed semi-automatic method and its performance, in comparison to ocular counting, are described in detail here. In addition, the calibration factor for this procedure, 2.97 ± 0.07 kBq m⁻³ h track⁻¹ cm², has been calculated based on the results obtained from exposing 46 detectors to certified radon concentrations. Furthermore, the results of a preliminary radon survey carried out in 62 schools on Tenerife island (Spain), using Makrofol detectors counted with the mentioned procedure, are briefly presented. The results reported here indicate that the developed procedure permits a fast, accurate and unbiased determination of the radon tracks in a large number of detectors. The measurements carried out in the schools showed that the radon concentrations in at least 12 schools were above 200 Bq m⁻³ and, in two of them, above 400 Bq m⁻³. Further studies should be performed at those schools following the European Union recommendations about radon concentrations in buildings.
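
    Applying the reported calibration factor is a one-line conversion from counted track density and exposure time to a mean radon concentration. A minimal sketch; the example inputs are hypothetical:

```python
# Minimal sketch applying the calibration factor reported above
# (2.97 kBq m^-3 h per track cm^-2) to a counted track density.
def radon_concentration(tracks_per_cm2: float, exposure_hours: float,
                        cf: float = 2.97) -> float:
    """Mean 222Rn concentration in Bq/m3 over the exposure period."""
    kbq_per_m3 = cf * tracks_per_cm2 / exposure_hours
    return kbq_per_m3 * 1000.0  # kBq/m3 -> Bq/m3

# e.g. 150 tracks/cm2 accumulated over a ~3-month (2160 h) exposure:
print(f"{radon_concentration(150, 2160):.0f} Bq/m3")
```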

  1. Fusion of dynamic contrast-enhanced magnetic resonance mammography at 3.0 T with X-ray mammograms: Pilot study evaluation using dedicated semi-automatic registration software

    Energy Technology Data Exchange (ETDEWEB)

    Dietzel, Matthias, E-mail: dietzelmatthias2@hotmail.com [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University Jena, Erlanger Allee 101, D-07740 Jena (Germany); Hopp, Torsten; Ruiter, Nicole [Karlsruhe Institute of Technology (KIT), Institute for Data Processing and Electronics, Postfach 3640, D-76021 Karlsruhe (Germany); Zoubi, Ramy [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University Jena, Erlanger Allee 101, D-07740 Jena (Germany); Runnebaum, Ingo B. [Clinic of Gynecology and Obstetrics, Friedrich-Schiller-University Jena, Bachstrasse 18, D-07743 Jena (Germany); Kaiser, Werner A. [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University Jena, Erlanger Allee 101, D-07740 Jena (Germany); Medical School, University of Harvard, 25 Shattuck Street, Boston, MA 02115 (United States); Baltzer, Pascal A.T. [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University Jena, Erlanger Allee 101, D-07740 Jena (Germany)

    2011-08-15

    Rationale and objectives: To evaluate the semi-automatic image registration accuracy of X-ray mammography (XR-M) with high-resolution high-field (3.0 T) MR mammography (MR-M) in an initial pilot study. Material and methods: MR-M was acquired on a high-field clinical scanner at 3.0 T (T1-weighted 3D VIBE ± Gd). XR-M was obtained with state-of-the-art full-field digital systems. Seven patients with clearly delineable mass lesions >10 mm in both XR-M and MR-M were enrolled (exclusion criteria: previous breast surgery; surgical intervention between XR-M and MR-M). XR-M and MR-M were matched using a dedicated image-registration algorithm allowing semi-automatic non-linear deformation of MR-M based on finite-element modeling. To identify registration errors (RE), a virtual craniocaudal 2D mammogram was calculated by the software from MR-M (with and without Gadodiamide/Gd) and matched with the corresponding XR-M. To quantify REs, the geometric centers of the lesions in the virtual vs. conventional mammograms were subtracted. The robustness of registration was quantified by registration of XR-Ms to both MR-Ms, with and without Gadodiamide. Results: Image registration was performed successfully for all patients. Overall RE was 8.2 mm (1 min after Gd; confidence interval/CI: 2.0-14.4 mm, standard deviation/SD: 6.7 mm) vs. 8.9 mm (no Gd; CI: 4.0-13.9 mm, SD: 5.4 mm). The mean difference between pre- vs. post-contrast was 0.7 mm (SD: 1.9 mm). Conclusion: Image registration of high-field 3.0 T MR mammography with X-ray mammography is feasible. For this study, applying a high-resolution protocol at 3.0 T, the registration was robust and the overall registration error was sufficient for clinical application.

  2. Fusion of dynamic contrast-enhanced magnetic resonance mammography at 3.0 T with X-ray mammograms: Pilot study evaluation using dedicated semi-automatic registration software

    International Nuclear Information System (INIS)

    Dietzel, Matthias; Hopp, Torsten; Ruiter, Nicole; Zoubi, Ramy; Runnebaum, Ingo B.; Kaiser, Werner A.; Baltzer, Pascal A.T.

    2011-01-01

    Rationale and objectives: To evaluate the semi-automatic image registration accuracy of X-ray mammography (XR-M) with high-resolution high-field (3.0 T) MR mammography (MR-M) in an initial pilot study. Material and methods: MR-M was acquired on a high-field clinical scanner at 3.0 T (T1-weighted 3D VIBE ± Gd). XR-M was obtained with state-of-the-art full-field digital systems. Seven patients with clearly delineable mass lesions >10 mm in both XR-M and MR-M were enrolled (exclusion criteria: previous breast surgery; surgical intervention between XR-M and MR-M). XR-M and MR-M were matched using a dedicated image-registration algorithm allowing semi-automatic non-linear deformation of MR-M based on finite-element modeling. To identify registration errors (RE), a virtual craniocaudal 2D mammogram was calculated by the software from MR-M (with and without Gadodiamide/Gd) and matched with the corresponding XR-M. To quantify REs, the geometric centers of the lesions in the virtual vs. conventional mammograms were subtracted. The robustness of registration was quantified by registration of XR-Ms to both MR-Ms, with and without Gadodiamide. Results: Image registration was performed successfully for all patients. Overall RE was 8.2 mm (1 min after Gd; confidence interval/CI: 2.0-14.4 mm, standard deviation/SD: 6.7 mm) vs. 8.9 mm (no Gd; CI: 4.0-13.9 mm, SD: 5.4 mm). The mean difference between pre- vs. post-contrast was 0.7 mm (SD: 1.9 mm). Conclusion: Image registration of high-field 3.0 T MR mammography with X-ray mammography is feasible. For this study, applying a high-resolution protocol at 3.0 T, the registration was robust and the overall registration error was sufficient for clinical application.

  3. Transferability of Object-Oriented Image Analysis Methods for Slum Identification

    Directory of Open Access Journals (Sweden)

    Alfred Stein

    2013-08-01

    Full Text Available Updated spatial information on the dynamics of slums can be helpful to measure and evaluate the progress of policies. Earlier studies have shown that semi-automatic detection of slums using remote sensing can be challenging, considering the large variability in their definition and appearance. In this study, we explored the potential of an object-oriented image analysis (OOA) method to detect slums, using very high resolution (VHR) imagery. This method integrated expert knowledge in the form of a local slum ontology. A set of image-based parameters was identified and used for differentiating slums from non-slum areas in an OOA environment. The method was implemented on three subsets of the city of Ahmedabad, India. Results show that textural features such as entropy and contrast derived from a grey-level co-occurrence matrix (GLCM) and the size of image segments are stable parameters for the classification of built-up areas and the identification of slums. Relations with classified slum objects, in terms of being enclosed by slums and of relative border with slums, were used to refine the classification. The analysis of the three different subsets showed final accuracies ranging from 47% to 68%. We conclude that our method produces useful results, as it allows location-specific adaptation, whereas generically applicable rule sets for slums are still to be developed.

  4. Counting radon tracks in Makrofol detectors with the 'image reduction and analysis facility' (IRAF) software package

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, F. [Laboratorio de Fisica Medica y Radioactividad Ambiental, Departamento de Medicina Fisica y Farmacologia, Universidad de La Laguna, 38320 La Laguna, Tenerife (Spain)]. E-mail: fimerall@ull.es; Gonzalez-Manrique, S. [Laboratorio de Fisica Medica y Radioactividad Ambiental, Departamento de Medicina Fisica y Farmacologia, Universidad de La Laguna, 38320 La Laguna, Tenerife (Spain); Karlsson, L. [Laboratorio de Fisica Medica y Radioactividad Ambiental, Departamento de Medicina Fisica y Farmacologia, Universidad de La Laguna, 38320 La Laguna, Tenerife (Spain); Hernandez-Armas, J. [Laboratorio de Fisica Medica y Radioactividad Ambiental, Departamento de Medicina Fisica y Farmacologia, Universidad de La Laguna, 38320 La Laguna, Tenerife (Spain); Aparicio, A. [Instituto de Astrofisica de Canarias, 38200 La Laguna, Tenerife (Spain); Departamento de Astrofisica, Universidad de La Laguna. Avenida. Astrofisico Francisco Sanchez s/n, 38071 La Laguna, Tenerife (Spain)

    2007-03-15

    Makrofol detectors are commonly used for long-term radon (²²²Rn) measurements in houses, schools and workplaces. The use of this type of passive detector for the determination of radon concentrations requires the counting of the nuclear tracks produced by alpha particles on the detecting material. The 'image reduction and analysis facility' (IRAF) software package is commonly used in astronomical applications. It allows detailed counting and mapping of sky sections where stars are grouped very closely, even forming clusters. In order to count the nuclear tracks in our Makrofol radon detectors, we have developed an interdisciplinary application that takes advantage of the similarity that exists between counting stars in a dark sky and counting tracks in a track-etch detector. Thus, a low-cost semi-automatic system has been set up in our laboratory which utilises a commercially available desktop scanner and the IRAF software package. The proposed semi-automatic method and its performance, in comparison to ocular counting, are described in detail here. In addition, the calibration factor for this procedure, 2.97 ± 0.07 kBq m⁻³ h track⁻¹ cm², has been calculated based on the results obtained from exposing 46 detectors to certified radon concentrations. Furthermore, the results of a preliminary radon survey carried out in 62 schools on Tenerife island (Spain), using Makrofol detectors counted with the mentioned procedure, are briefly presented. The results reported here indicate that the developed procedure permits a fast, accurate and unbiased determination of the radon tracks in a large number of detectors. The measurements carried out in the schools showed that the radon concentrations in at least 12 schools were above 200 Bq m⁻³ and, in two of them, above 400 Bq m⁻³. Further studies should be performed at those schools following the European Union recommendations about radon concentrations in buildings.

  5. Appropriate threshold levels of cardiac beat-to-beat variation in semi-automatic analysis of equine ECG recordings

    DEFF Research Database (Denmark)

    Madsen, Mette Flethøj; Kanters, Jørgen K.; Pedersen, Philip Juul

    2016-01-01

    ...considerably with heart rate (HR), and an adaptable model consisting of three different HR ranges with separate threshold levels of maximum acceptable RR deviation was consequently defined. For resting HRs...

  6. Oncological image analysis.

    Science.gov (United States)

    Brady, Sir Michael; Highnam, Ralph; Irving, Benjamin; Schnabel, Julia A

    2016-10-01

    Cancer is one of the world's major healthcare challenges and, as such, an important application of medical image analysis. After a brief introduction to cancer, we summarise some of the major developments in oncological image analysis over the past 20 years, concentrating on those in the authors' laboratories, and then outline opportunities and challenges for the next decade.

  7. Soft tissue segmentation and 3D display from computerized tomography and magnetic resonance imaging

    International Nuclear Information System (INIS)

    Fan, R.T.; Trivedi, S.S.; Fellingham, L.L.; Gamboa-Aldeco, A.; Hedgcock, M.W.

    1987-01-01

    Volume calculation and 3D display of human anatomy facilitate a physician's diagnosis, treatment, and evaluation. Accurate segmentation of soft tissue structures is a prerequisite for such volume calculations and 3D displays, but segmentation by hand-outlining structures is often tedious and time-consuming. In this paper, methods based on the analysis of image gray-level statistics are applied to the segmentation of soft tissue in medical images, with the goal of making segmentation automatic or semi-automatic. The resulting segmented images, volume calculations, and 3D displays are analyzed and compared with results based on physician-drawn outlines as well as actual volume measurements.
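
    Gray-level-statistics-based segmentation in this spirit is commonly illustrated with Otsu's method, which picks the threshold that best separates two intensity populations. A minimal sketch on synthetic intensities (a modern stand-in, not the authors' 1987 algorithm):

```python
# Minimal sketch of gray-level-statistics segmentation via Otsu thresholding;
# the two intensity populations below are synthetic stand-ins for CT/MR voxels.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(3)
voxels = np.concatenate([rng.normal(60, 10, 5000),     # background voxels
                         rng.normal(140, 12, 3000)])   # soft-tissue voxels

t = threshold_otsu(voxels)
mask = voxels > t
print(f"threshold = {t:.1f}, segmented fraction = {mask.mean():.2f}")
```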

  8. Gabor Analysis for Imaging

    DEFF Research Database (Denmark)

    Christensen, Ole; Feichtinger, Hans G.; Paukner, Stephan

    2015-01-01

    ...it characterizes a function by its transform over phase space, which is the time–frequency plane (TF-plane) in a musical context or the location–wave-number domain in the context of image processing. Since the transition from the signal domain to the phase space domain introduces an enormous amount of data... of the generalities relevant for an understanding of Gabor analysis of functions on Rd. We pay special attention to the case d = 2, which is the most important case for image processing and image analysis applications. The chapter is organized as follows. Section 2 presents central tools from functional analysis..., the application of Gabor expansions to image representation is considered in Sect. 6.

  9. A medical software system for volumetric analysis of cerebral pathologies in magnetic resonance imaging (MRI) data.

    Science.gov (United States)

    Egger, Jan; Kappus, Christoph; Freisleben, Bernd; Nimsky, Christopher

    2012-08-01

    In this contribution, a medical software system for volumetric analysis of different cerebral pathologies in magnetic resonance imaging (MRI) data is presented. The software system is based on a semi-automatic segmentation algorithm and helps to overcome the time-consuming process of volume determination during the monitoring of a patient. After imaging, the parameter settings (including a seed point) are set up in the system and an automatic segmentation is performed by a novel graph-based approach. Manually reviewing the result leads to reseeding, adding seed points, or automatic surface mesh generation. The mesh is saved for monitoring the patient and for comparisons with follow-up scans. Based on the mesh, the system performs a voxelization and volume calculation, which supports diagnosis and therefore further treatment decisions. The overall system has been tested with different cerebral pathologies (glioblastoma multiforme, pituitary adenomas and cerebral aneurysms) and evaluated against manual expert segmentations using the Dice Similarity Coefficient (DSC). Additionally, intra-physician segmentations have been performed to provide a quality measure for the presented system.
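
    The final volume calculation step reduces to counting voxels in the binary mask produced by voxelization and scaling by the voxel volume taken from the image header. A minimal sketch; the spacing values and mask are hypothetical:

```python
# Minimal sketch of volume calculation from a voxelized segmentation mask;
# voxel spacing values are hypothetical stand-ins for MRI header data.
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0   # mm^3 -> millilitres

tumor = np.zeros((64, 64, 64), bool)
tumor[20:40, 20:40, 20:40] = True            # 20^3 segmented voxels
print(f"{mask_volume_ml(tumor, (0.5, 0.5, 1.0)):.1f} ml")  # 8000 * 0.25 mm^3 = 2.0 ml
```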

  10. Digital curation: a proposal of a semi-automatic digital object selection-based model for digital curation in Big Data environments

    Directory of Open Access Journals (Sweden)

    Moisés Lima Dutra

    2016-08-01

    Introduction: This work presents a new approach to digital curation from a Big Data perspective. Objective: The objective is to propose digital curation techniques for selecting and evaluating digital objects that take into account the volume, velocity, variety, veracity, and value of the data collected from multiple knowledge domains. Methodology: This is exploratory research of an applied nature, which addresses the research problem in a qualitative way. Heuristics allow this semi-automatic process to be performed either by human curators or by software agents. Results: As a result, a model is proposed for searching, processing, evaluating and selecting digital objects to be processed by digital curation. Conclusions: It is possible to use Big Data environments as a source of information resources for digital curation; moreover, Big Data techniques and tools can support the search and selection of information resources by digital curation.

  11. A novel semi-automatic snake robot for natural orifice transluminal endoscopic surgery: preclinical tests in animal and human cadaver models (with video).

    Science.gov (United States)

    Son, Jaebum; Cho, Chang Nho; Kim, Kwang Gi; Chang, Tae Young; Jung, Hyunchul; Kim, Sung Chun; Kim, Min-Tae; Yang, Nari; Kim, Tae-Yun; Sohn, Dae Kyung

    2015-06-01

    Natural orifice transluminal endoscopic surgery (NOTES) is an emerging surgical technique. We aimed to design, create, and evaluate a new semi-automatic snake robot for NOTES. The snake robot employs the characteristics of both a manual endoscope and a multi-segment snake robot. This robot is inserted and retracted manually, like a classical endoscope, while its shape is controlled using embedded robot technology. The feasibility of a prototype robot for NOTES was evaluated in animals and human cadavers. The transverse stiffness and maneuverability of the snake robot appeared satisfactory. It could be advanced through the anus as far as the peritoneal cavity without any injury to adjacent organs. Preclinical tests showed that the device could navigate the peritoneal cavity. The snake robot has advantages of high transverse force and intuitive control. This new robot may be clinically superior to conventional tools for transanal NOTES.

  12. Digital image analysis

    DEFF Research Database (Denmark)

    Riber-Hansen, Rikke; Vainer, Ben; Steiniche, Torben

    2012-01-01

    Digital image analysis (DIA) is increasingly implemented in histopathological research to facilitate truly quantitative measurements, decrease inter-observer variation and reduce hands-on time. Originally, efforts were made to enable DIA to reproduce manually obtained results on histological slides … reproducibility, application of stereology-based quantitative measurements, time consumption, optimization of histological slides, regions of interest selection and recent developments in staining and imaging techniques.

  13. Complications in CT-guided, semi-automatic coaxial core biopsy of potentially malignant pulmonary lesions; Komplikationen bei CT-gesteuerter, koaxialer Stanzbiopsie malignomverdaechtiger Lungenherde in halbautomatischer Technik

    Energy Technology Data Exchange (ETDEWEB)

    Schulze, R. [Klinik Loewenstein (Germany). Dept. of Radiology; Seebacher, G.; Enderes, B.; Kugler, G.; Graeter, T.P. [Klinik Loewenstein (Germany). Dept. of Thoracic and Vascular Surgery; Fischer, J.R. [Klinik Loewenstein (Germany). Dept. of Oncology

    2015-08-15

    Histological verification of pulmonary lesions is important to ensure correct treatment. Computed tomography (CT)-guided transthoracic core biopsy is a well-established procedure for this. Comparison of the available studies is difficult, though, as technical and patient characteristics vary. Using a standardized biopsy technique, we evaluated our results for CT-guided coaxial core biopsy performed semi-automatically. Within 2 years, 664 consecutive transpulmonary biopsies were analyzed retrospectively. All interventions were performed using a 17/18G semi-automatic core biopsy system (4 to 8 specimens). The incidence of complications and the technical and patient-dependent risk factors were evaluated. Comparing the histology with the final diagnosis, the sensitivity was 96.3% and the specificity was 100%. 24 procedures were not diagnostic; in all others immunohistological staining was possible. The main complication was pneumothorax (PT, 21.7%), with chest tube insertion in 6% of the procedures (n = 40). Bleeding without therapeutic consequences was seen in 43 patients. There was no patient mortality. The rate of PT with chest tube insertion was 9.6% in emphysema patients and 2.8% in patients without emphysema (p = 0.001). Smokers with emphysema had a 5 times higher risk of developing PT (p = 0.001). The correlation of tumor size or biopsy angle with the risk of PT was not significant. The risk of developing a PT was associated with an increasing intrapulmonary depth of the lesion (p = 0.001). CT-guided, semi-automatic coaxial core biopsy of the lung is a safe diagnostic procedure. The rate of major complications is low, and the sensitivity and specificity of the procedure are high. Smokers with emphysema are at a significantly higher risk of developing pneumothorax and should be monitored accordingly.

  14. Design and development of a prototypical software for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small- and medium-sized enterprises (SME)

    Science.gov (United States)

    Möller, Thomas; Bellin, Knut; Creutzburg, Reiner

    2015-03-01

    The aim of this paper is to show the recent progress in the design and prototypical development of a software suite Copra Breeder* for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small and medium-sized enterprises.

  15. Reproducibility of a semi-automatic method for 6-point vertebral morphometry in a multi-centre trial

    International Nuclear Information System (INIS)

    Guglielmi, Giuseppe; Stoppino, Luca Pio; Placentino, Maria Grazia; D'Errico, Francesco; Palmieri, Francesco

    2009-01-01

    Purpose: To evaluate the reproducibility of a semi-automated system for vertebral morphometry (MorphoXpress) in a large multi-centre trial. Materials and methods: The study involved 132 clinicians (none of them radiologists) with different levels of experience across 20 osteo-centres in Italy. All had received training in using MorphoXpress. An expert radiologist was also involved, providing data used as the standard of reference. The test images originated from normal clinical activity and represented a variety of normal, under- and over-exposed films, depicting both normal anatomy and vertebral deformities. Each image was presented twice to the clinicians in a random order. Using the software, the clinicians initially marked the midpoints of the upper and lower vertebrae to include as many of the vertebrae (T5-L4) as practical within each given image. MorphoXpress performs the localisation of all morphometric points based on a statistical model-based vision system. Intra-operator as well as inter-operator agreement was quantified using the coefficient of variation and the mean and standard deviation of the difference between two measurements. Results: The overall intra-operator mean difference in vertebral heights is 1.61 ± 4.27% (1 S.D.). The overall intra-operator coefficient of variation is 3.95%. The overall inter-operator mean difference in vertebral heights is 2.93 ± 5.38% (1 S.D.). The overall inter-operator coefficient of variation is 6.89%. Conclusions: The technology tested here can facilitate reproducible quantitative morphometry suitable for large studies of vertebral deformities.
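
    A minimal sketch of the agreement statistics reported above - the mean ± SD of paired percent differences and a within-subject coefficient of variation - assuming NumPy arrays of repeated height measurements (names are hypothetical):

        import numpy as np

        def agreement_stats(m1: np.ndarray, m2: np.ndarray):
            """Agreement between two repeated measurements of the same vertebral heights."""
            pair_mean = (m1 + m2) / 2.0
            diff_pct = 100.0 * (m1 - m2) / pair_mean        # paired percent differences
            mean_diff, sd_diff = diff_pct.mean(), diff_pct.std(ddof=1)
            # within-subject coefficient of variation (root-mean-square form)
            within_sd = np.abs(m1 - m2) / np.sqrt(2.0)
            cv = 100.0 * np.sqrt(np.mean((within_sd / pair_mean) ** 2))
            return mean_diff, sd_diff, cv

        # heights_run1, heights_run2: the same vertebrae measured in the two presentations
        # mean_d, sd_d, cv = agreement_stats(heights_run1, heights_run2)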

  16. Semi-automatic spray pyrolysis deposition of thin, transparent, titania films as blocking layers for dye-sensitized and perovskite solar cells.

    Science.gov (United States)

    Krýsová, Hana; Krýsa, Josef; Kavan, Ladislav

    2018-01-01

    For proper function of the negative electrode of dye-sensitized and perovskite solar cells, the deposition of a nonporous blocking film is required on the surface of F-doped SnO 2 (FTO) glass substrates. Such a blocking film can minimise undesirable parasitic processes; for example, the back reaction of photoinjected electrons with the oxidized form of the redox mediator or with the hole-transporting medium can be avoided. In the present work, thin, transparent, blocking TiO 2 films are prepared by semi-automatic spray pyrolysis of precursors consisting of titanium diisopropoxide bis(acetylacetonate) as the main component. The variation in the layer thickness of the sprayed films is achieved by varying the number of spray cycles. The parameters investigated in this work were deposition temperature (150, 300 and 450 °C), number of spray cycles (20-200), precursor composition (with/without deliberately added acetylacetone), concentration (0.05 and 0.2 M) and subsequent post-calcination at 500 °C. The photo-electrochemical properties were evaluated in aqueous electrolyte solution under UV irradiation. The blocking properties were tested by cyclic voltammetry with a model redox probe with a simple one-electron-transfer reaction. Semi-automatic spraying resulted in the formation of transparent, homogeneous TiO 2 films, and the technique allows for easy upscaling to large electrode areas. A deposition temperature of 450 °C was necessary for the fabrication of highly photoactive TiO 2 films. The blocking properties of the as-deposited TiO 2 films (at 450 °C) were impaired by post-calcination at 500 °C, but this problem could be addressed by increasing the number of spray cycles. The modification of the precursor by adding acetylacetone resulted in the fabrication of TiO 2 films exhibiting perfect blocking properties that were not influenced by post-calcination. These results will surely find use in the fabrication of large-scale dye-sensitized and perovskite solar cells.

  17. Image sequence analysis

    CERN Document Server

    1981-01-01

    The processing of image sequences has a broad spectrum of important applications including target tracking, robot navigation, bandwidth compression of TV conferencing video signals, studying the motion of biological cells using microcinematography, cloud tracking, and highway traffic monitoring. Image sequence processing involves a large amount of data. However, because of the progress in computer, LSI, and VLSI technologies, we have now reached a stage when many useful processing tasks can be done in a reasonable amount of time. As a result, research and development activities in image sequence analysis have recently been growing at a rapid pace. An IEEE Computer Society Workshop on Computer Analysis of Time-Varying Imagery was held in Philadelphia, April 5-6, 1979. A related special issue of the IEEE Transactions on Pattern Analysis and Machine Intelligence was published in November 1980. The IEEE Computer magazine has also published a special issue on the subject in 1981. The purpose of this book ...

  18. A semi-automatic method to determine electrode positions and labels from gel artifacts in EEG/fMRI-studies

    NARCIS (Netherlands)

    de Munck, J.C.; van Houdt, P.J.; Verdaasdonk, R.; Ossenblok, P.P.W.

    2012-01-01

    The analysis of simultaneous EEG and fMRI data is generally based on the extraction of regressors of interest from the EEG, which are correlated to the fMRI data in a general linear model setting. In more advanced approaches, the spatial information of EEG is also exploited by assuming underlying

  19. Comparison of a semi-automatic annotation tool and a natural language processing application for the generation of clinical statement entries.

    Science.gov (United States)

    Lin, Ching-Heng; Wu, Nai-Yuan; Lai, Wei-Shao; Liou, Der-Ming

    2015-01-01

    Electronic medical records with encoded entries should enhance the semantic interoperability of document exchange. However, it remains a challenge to encode the narrative concept and to transform the coded concepts into a standard entry-level document. This study aimed to use a novel approach for the generation of entry-level interoperable clinical documents. Using HL7 clinical document architecture (CDA) as the example, we developed three pipelines to generate entry-level CDA documents. The first approach was a semi-automatic annotation pipeline (SAAP), the second was a natural language processing (NLP) pipeline, and the third merged the above two pipelines. We randomly selected 50 test documents from the i2b2 corpora to evaluate the performance of the three pipelines. The 50 randomly selected test documents contained 9365 words, including 588 Observation terms and 123 Procedure terms. For the Observation terms, the merged pipeline had a significantly higher F-measure than the NLP pipeline (0.89 vs 0.80, p < 0.05) for generating entry-level interoperable clinical documents. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  20. Interactive facades analysis and synthesis of semi-regular facades

    KAUST Repository

    AlHalawani, Sawsan; Yang, Yongliang; Liu, Han; Mitra, Niloy J.

    2013-01-01

    Urban facades regularly contain interesting variations due to allowed deformations of repeated elements (e.g., windows in different open or close positions) posing challenges to state-of-the-art facade analysis algorithms. We propose a semi-automatic framework to recover both repetition patterns of the elements and their individual deformation parameters to produce a factored facade representation. Such a representation enables a range of applications including interactive facade images, improved multi-view stereo reconstruction, facade-level change detection, and novel image editing possibilities. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.

  2. Automatic Synthetic Aperture Radar based oil spill detection and performance estimation via a semi-automatic operational service benchmark.

    Science.gov (United States)

    Singha, Suman; Vespe, Michele; Trieschmann, Olaf

    2013-08-15

    Today the health of the ocean is endangered as never before, mainly due to man-made pollution. Operational activities show the regular occurrence of accidental and deliberate oil spills in European waters. Since the areas covered by oil spills are usually large, satellite remote sensing, particularly Synthetic Aperture Radar (SAR), represents an effective option for operational oil spill detection. This paper describes the development of a fully automated approach for oil spill detection from SAR. A total of 41 feature parameters were extracted from each segmented dark spot for oil spill and 'look-alike' classification and ranked according to their importance. The classification algorithm is based on a two-stage processing that combines classification tree analysis and fuzzy logic. An initial evaluation of this methodology on a large dataset has been carried out, and the degree of agreement between the results from the proposed algorithm and a human analyst was estimated at 85% and 93% for ENVISAT and RADARSAT, respectively. Copyright © 2013 Elsevier Ltd. All rights reserved.
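
    A minimal sketch of the first stage described above, ranking dark-spot features with a classification tree; scikit-learn is assumed and the feature matrix and labels are random placeholders:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        # X: one row per segmented dark spot, 41 geometric/radiometric features
        # y: 1 = verified oil spill, 0 = look-alike
        X = np.random.rand(500, 41)          # placeholder data
        y = np.random.randint(0, 2, 500)     # placeholder labels

        tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

        # Rank features by their contribution to the tree's splits
        ranking = np.argsort(tree.feature_importances_)[::-1]
        for idx in ranking[:10]:
            print(f"feature {idx}: importance {tree.feature_importances_[idx]:.3f}")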

  3. Adaptive neuro-fuzzy inference systems for semi-automatic discrimination between seismic events: a study in Tehran region

    Science.gov (United States)

    Vasheghani Farahani, Jamileh; Zare, Mehdi; Lucas, Caro

    2012-04-01

    This article presents an adaptive neuro-fuzzy inference system (ANFIS) for the classification of low-magnitude seismic events reported in Iran by the network of the Tehran Disaster Mitigation and Management Organization (TDMMO). ANFIS classifiers were used to detect seismic events using six inputs that define the events. Neuro-fuzzy coding was applied using the six extracted features as ANFIS inputs. Two types of events were defined: weak earthquakes and mining blasts. The data comprised 748 events (6289 signals) ranging from magnitude 1.1 to 4.6 recorded at 13 seismic stations between 2004 and 2009, of which some 223 earthquakes have M ≤ 2.2. Data sets from the south, east, and southeast of the city of Tehran were used to evaluate the best short-period seismic discriminants; features such as origin time of the event, source-to-station distance, latitude and longitude of the epicenter, magnitude, and spectral content (fc of the Pg wave) were used as inputs, increasing the rate of correct classification and decreasing the confusion rate between weak earthquakes and quarry blasts. The performance of the ANFIS model was evaluated for training and classification accuracy. The results confirmed that the proposed ANFIS model has good potential for discriminating seismic events.

  4. Improved regional-scale Brazilian cropping systems' mapping based on a semi-automatic object-based clustering approach

    Science.gov (United States)

    Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha

    2018-06-01

    Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a prior stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large-area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
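
    A minimal sketch of the unsupervised clustering step described above, assuming scikit-learn and hypothetical object-level NDVI time series:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # One row per image object, one column per 16-day NDVI composite (23 per year)
        ndvi_series = np.random.rand(10000, 23)   # placeholder object statistics

        # Standardize so clustering reflects temporal profile shape, not amplitude only
        X = StandardScaler().fit_transform(ndvi_series)

        # Hyperclustering: deliberately over-segment into many clusters, which an
        # analyst then labels and merges into cropping-system classes
        kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X)
        labels = kmeans.labels_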

  5. Semi-automatic engineering and tailoring of high-efficiency Bragg-reflection waveguide samples for quantum photonic applications

    Science.gov (United States)

    Pressl, B.; Laiho, K.; Chen, H.; Günthner, T.; Schlager, A.; Auchter, S.; Suchomel, H.; Kamp, M.; Höfling, S.; Schneider, C.; Weihs, G.

    2018-04-01

    Semiconductor alloys of aluminum gallium arsenide (AlGaAs) exhibit strong second-order optical nonlinearities. This makes them prime candidates for the integration of devices for classical nonlinear optical frequency conversion or photon-pair production, for example, through the parametric down-conversion (PDC) process. Within this material system, Bragg-reflection waveguides (BRW) are a promising platform, but the specifics of the fabrication process and the peculiar optical properties of the alloys require careful engineering. Previously, BRW samples have been mostly derived analytically from design equations using a fixed set of aluminum concentrations. This approach limits the variety and flexibility of the device design. Here, we present a comprehensive guide to the design and analysis of advanced BRW samples and show how to automatize these tasks. Then, nonlinear optimization techniques are employed to tailor the BRW epitaxial structure towards a specific design goal. As a demonstration of our approach, we search for the optimal effective nonlinearity and mode overlap which indicate an improved conversion efficiency or PDC pair production rate. However, the methodology itself is much more versatile as any parameter related to the optical properties of the waveguide, for example the phasematching wavelength or modal dispersion, may be incorporated as design goals. Further, we use the developed tools to gain a reliable insight in the fabrication tolerances and challenges of real-world sample imperfections. One such example is the common thickness gradient along the wafer, which strongly influences the photon-pair rate and spectral properties of the PDC process. Detailed models and a better understanding of the optical properties of a realistic BRW structure are not only useful for investigating current samples, but also provide important feedback for the design and fabrication of potential future turn-key devices.
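
    A minimal sketch of such a tailoring step, assuming SciPy; the figure of merit and the layer parameterization are hypothetical stand-ins for the actual waveguide model:

        import numpy as np
        from scipy.optimize import minimize

        def conversion_merit(layer_params: np.ndarray) -> float:
            """Placeholder figure of merit: in practice this would evaluate the
            effective nonlinearity / mode overlap from a mode solver."""
            target = np.linspace(0.1, 0.9, layer_params.size)
            return float(np.sum((layer_params - target) ** 2))

        x0 = np.full(8, 0.5)                      # initial Al concentrations per layer
        bounds = [(0.0, 1.0)] * x0.size           # physically admissible range

        result = minimize(conversion_merit, x0, method="L-BFGS-B", bounds=bounds)
        print("optimized layer parameters:", result.x)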

  6. Development of new quantitative mass spectrometry and semi-automatic isofocusing methods for the determination of Apolipoprotein E typing.

    Science.gov (United States)

    Hirtz, Christophe; Vialaret, Jerome; Nouadje, Georges; Schraen, Susanna; Benlian, Pascale; Mary, Sandrine; Philibert, Pascal; Tiers, Laurent; Bros, Pauline; Delaby, Constance; Gabelle, Audrey; Lehmann, Sylvain

    2016-02-15

    Apolipoprotein E (Apo E) is a 36 kDa glycoprotein involved in lipid transport. It exists in 3 major isoforms: E2, E3 and E4. ApoE status is known to be a major risk factor for late-onset Alzheimer's and cardiovascular diseases. Genotyping is commonly used to obtain ApoE status but can show technical issues with ambiguous determinations. Phenotyping can be an alternative, not requiring genetic material. We evaluated the ability to accurately type ApoE isoforms by 2 phenotyping tests in comparison with genotyping. Two phenotyping techniques were used: (1) LC-MS/MS detection of 4 ApoE-specific peptides (Agilent 6490 triple quadrupole): after denaturation, serum was either reduced and alkylated, or only diluted, and then trypsin digested. Before analysis, desalting, evaporation and resuspension were performed. (2) Isoelectric focusing and immunoprecipitation: serum samples were neuraminidase digested, delipidated and electrophoresed on Hydragel ApoE (Sebia agarose gel) using the Hydrasys 2 Scan instrument (Sebia, Lisses, France). ApoE isoform bands were directly immunofixed in the gel using a polyclonal anti-human ApoE antibody. Then, incubation of the gel with an HRP secondary antibody followed by TTF1/TTF2 substrate allowed the visualization of ApoE bands. The results of the two techniques were compared to genotyping. Sera from 35 patients previously genotyped were analyzed with the 2 phenotyping techniques. 100% concordance between both phenotyping assays was obtained for the tested phenotypes (E2/E2, E2/E3, E2/E4, E3/E3, E3/E4, E4/E4). When compared to genotyping, 3 samples were discordant. After reanalyzing them by both phenotyping tests and DNA sequencing, 2/3 discrepancies were confirmed. These can be explained by variants or rare ApoE alleles or by unidentified technical issues. 102 additional samples were then tested on LC-MS/MS only and compared to genotyping. The data showed 100% concordance. Our 2 phenotyping methods represent a valuable alternative to genotyping.

  7. Digi-Clima Grid: image processing and distributed computing for recovering historical climate data

    Directory of Open Access Journals (Sweden)

    Sergio Nesmachnow

    2015-12-01

    This article describes the Digi-Clima Grid project, whose main goals are to design and implement semi-automatic techniques for digitalizing and recovering historical climate records by applying parallel computing techniques over distributed computing infrastructures. The specific tool developed for image processing is described, and the implementation over grid and cloud infrastructures is reported. An experimental analysis over institutional and volunteer-based grid/cloud distributed systems demonstrates that the proposed approach is an efficient tool for recovering historical climate data. The parallel implementations allow the processing load to be distributed, achieving accurate speedup values.
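
    A minimal sketch of distributing an image-processing load across workers, using the Python standard library; the per-scan routine and directory are hypothetical placeholders for the digitalization step:

        from multiprocessing import Pool
        from pathlib import Path

        def process_scan(path: Path) -> str:
            # Placeholder: load the scanned climate record and extract the traces.
            # In the real pipeline this is the semi-automatic digitalization routine.
            return f"processed {path.name}"

        if __name__ == "__main__":
            scans = sorted(Path("scans").glob("*.png"))
            with Pool(processes=8) as pool:          # one worker per core
                for message in pool.imap_unordered(process_scan, scans):
                    print(message)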

  8. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans

    Directory of Open Access Journals (Sweden)

    Alizé Lacoste Jeanson

    2017-05-01

    Background: Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. Methods: We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). Results and Discussion: The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results.

  9. Conceptual Design and Simulation of a Semi-Automatic Cell for the Washing and Preparation of a Corpse Prior to an Islamic Burial

    Directory of Open Access Journals (Sweden)

    A. Meghdari

    2012-07-01

    Washing the corpse and dressing the body prior to burial is an act of love and necessity in many religions. Applying robotics and automation technologies to the washing and preparation of a deceased Muslim in accordance with the Islamic Shari'at laws has been the challenging foundation of this research. With an increasing annual population growth resulting in an increase in the number of deaths (historically and/or immediately after a national disaster), automating part of this procedure to increase the speed of operation, reduce the health risks to the personnel of washing rooms ("Ghassalkhaneh") at the cemeteries and enhance their quality of life have been the primary objectives of this project. We have named and patented this semi-automated corpse preparation machine the "PaakShooy" ("پاک شوی" in Persian (Farsi)), which means purifying the deceased. The whole process is composed of three operational units lined up in series: the automatic washing chamber, the drying cell and the semi-automatic shrouding table. This paper covers an introductory concept of the subject in Islam, a conceptual design of various machines and mechanisms to automate the important tasks in accordance with Islamic laws, and the final detailed design, graphic simulation and animation of the PaakShooy machine. In doing so, consultation with Islamic scholars has been a priority from the beginning of the project to the end, and a few fatwas have been issued by some high-ranking Ayatollahs in support of the project. With a few modifications, the semi-automated PaakShooy machine may be updated to conform to other religions/customs.

  10. Body composition estimation from selected slices: equations computed from a new semi-automatic thresholding method developed on whole-body CT scans.

    Science.gov (United States)

    Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara; Brůžek, Jaroslav

    2017-01-01

    Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results.
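
    A minimal sketch of the two regression families named above, assuming scikit-learn; the slice areas and whole-body volumes are random placeholders:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.svm import SVR

        # X: AT area (cm^2) measured in the L4-L5 slice, one row per subject
        # y: whole-body AT volume (liters) from the full CT segmentation
        X = np.random.rand(41, 1) * 400
        y = 0.05 * X[:, 0] + np.random.randn(41) * 1.5

        ols = LinearRegression().fit(X, y)          # single-slice OLS equation
        svr = SVR(kernel="rbf", C=10.0).fit(X, y)   # support vector machine regression

        new_area = np.array([[250.0]])
        print("OLS estimate (L):", ols.predict(new_area)[0])
        print("SVR estimate (L):", svr.predict(new_area)[0])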

  11. An investigation into the factors that influence toolmark identifications on ammunition discharged from semi-automatic pistols recovered from car fires.

    Science.gov (United States)

    Collender, Mark A; Doherty, Kevin A J; Stanton, Kenneth T

    2017-01-01

    Following a shooting incident where a vehicle is used to convey the culprits to and from the scene, both the getaway car and the firearm are often deliberately burned in an attempt to destroy any forensic evidence which may be subsequently recovered. Here we investigate the factors that influence the ability to make toolmark identifications on ammunition discharged from pistols recovered from such car fires. This work was carried out by conducting a number of controlled furnace tests in conjunction with real car fire tests in which three 9 mm semi-automatic pistols were burned. Comparisons between pre-burn and post-burn test-fired ammunition discharged from these pistols were then performed to establish whether identifications were still possible. The surfaces of the furnace-heated samples and car fire samples were examined following heating/burning to establish what factors had influenced their surface morphology. The primary influence on the surfaces of the furnace-heated and car fire samples was the formation of oxide layers. The car fire samples were altered to a greater extent than the furnace-heated samples. Identifications were still possible between pre- and post-burn discharged cartridge cases, but this was not the case for the discharged bullets. It is suggested that the reason for this is a difference between the types of firearms discharge-generated toolmarks impressed onto the base of cartridge cases compared to those striated along the surfaces of bullets. It was also found that the temperatures recorded in the front foot wells were considerably lower than those recorded on top of the rear seats during the car fires. These factors should be assessed by forensic firearms examiners when performing casework involving pistols recovered from car fires. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.

  12. Medical image registration for analysis

    International Nuclear Information System (INIS)

    Petrovic, V.

    2006-01-01

    Full text: Image registration techniques represent a rich family of image processing and analysis tools that aim to provide spatial correspondences across sets of medical images of similar and disparate anatomies and modalities. Image registration is a fundamental and usually the first step in medical image analysis, and this paper presents a number of advanced techniques as well as demonstrating some of the advanced medical image analysis tasks they make possible. A number of rigid and non-rigid medical image alignment algorithms, for equivalent and merely consistent anatomical structures respectively, are presented. The algorithms are compared in terms of their practical aims, inputs, computational complexity and level of operator (e.g. diagnostician) interaction. In particular, the focus of the discussion is placed on the applications and practical benefits of medical image registration. Results of medical image registration on a number of different imaging modalities and anatomies are presented, demonstrating the accuracy and robustness of their application. Medical image registration is quickly becoming ubiquitous in medical imaging departments, with the results of such algorithms increasingly used in complex medical image analysis and diagnostics. This paper aims to demonstrate at least part of the reason why
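
    A minimal sketch of a rigid registration pipeline of the kind surveyed above, assuming SimpleITK; the file names are hypothetical:

        import SimpleITK as sitk

        fixed = sitk.ReadImage("fixed_ct.nii.gz", sitk.sitkFloat32)
        moving = sitk.ReadImage("moving_mr.nii.gz", sitk.sitkFloat32)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # cross-modality metric
        reg.SetOptimizerAsRegularStepGradientDescent(
            learningRate=1.0, minStep=1e-4, numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(
            sitk.CenteredTransformInitializer(
                fixed, moving, sitk.Euler3DTransform(),
                sitk.CenteredTransformInitializerFilter.GEOMETRY),
            inPlace=False)

        transform = reg.Execute(fixed, moving)   # rigid (6-DOF) alignment
        aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)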

  13. Mapping landslide source and transport areas in VHR images with Object-Based Analysis and Support Vector Machines

    Science.gov (United States)

    Heleno, Sandra; Matias, Magda; Pina, Pedro

    2015-04-01

    Visual interpretation of satellite imagery remains extremely demanding in terms of resources and time, especially when dealing with numerous multi-scale landslides affecting wide areas, as is the case for rainfall-induced shallow landslides. Applying automated methods can contribute to more efficient landslide mapping and updating of existing inventories, and in recent years the number and variety of approaches has been increasing rapidly. Very High Resolution (VHR) images, acquired by space-borne sensors with sub-metric precision, such as Ikonos, Quickbird, GeoEye and Worldview, are increasingly being considered the best option for landslide mapping, but these new levels of spatial detail also present new challenges to state-of-the-art image analysis tools, calling for automated methods specifically suited to mapping landslide events on VHR optical images. In this work we develop and test a methodology for semi-automatic landslide recognition and mapping of landslide source and transport areas. The method combines object-based image analysis and a Support Vector Machine supervised learning algorithm, and was tested using a GeoEye-1 multispectral image, sensed 3 days after a damaging landslide event in Madeira Island, together with a pre-event LiDAR DEM. Our approach proved successful in the recognition of landslides over a 15 km²-wide study area, with 81 out of 85 landslides detected in its validation regions. The classifier also showed reasonable performance in the internal mapping of landslide source and transport areas (true positive rate above 60% and false positive rate below 36% in both validation regions), in particular on the sunnier east-facing slopes. In the less illuminated areas the classifier is still able to accurately map the source areas, but performs poorly in the mapping of landslide transport areas.
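
    A minimal sketch of the supervised classification step described above, assuming scikit-learn; the per-object features and labels are random placeholders:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: one row per image object (mean band values, NDVI, slope, ...);
        # y: 0 = background, 1 = landslide source, 2 = transport area
        X = np.random.rand(2000, 8)
        y = np.random.randint(0, 3, 2000)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        clf.fit(X_tr, y_tr)
        print("holdout accuracy:", clf.score(X_te, y_te))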

  14. Quality of Life Assessment Based on Spatial and Temporal Analysis of the Vegetation Area Derived from Satellite Images

    Directory of Open Access Journals (Sweden)

    MARIA IOANA VLAD

    2011-01-01

    The quality of life in urban areas is a function of many parameters, among which a highly important one is the number and quality of green areas for people and wildlife to thrive in. Quality of life is also a political concept, often used to describe citizen satisfaction within different residential locations. In the last decades green areas have suffered a progressive decrease in quality, pointing to an ecological urban risk with a negative impact on the standard of living and population health status. This paper presents the evolution of green areas in the cities of South-Eastern Romania over the last 20 years and assesses the current state of quality of life from the perspective of vegetation cover. By using state-of-the-art processing tools applied to high-resolution satellite images, we have derived knowledge about the spatial and temporal expansion of urbanized regions. Our semi-automatic technologies for the analysis of remote sensing data such as Landsat 7 ETM+, correlated with statistical information inferred from urban charts, demonstrate a negative trend in the distribution of green areas within the analyzed cities, with long-term implications for multiple areas of our lives.
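
    A minimal sketch of the standard vegetation index underlying such analyses, assuming NumPy; the band arrays stand in for Landsat 7 ETM+ red and near-infrared reflectances:

        import numpy as np

        def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
            """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
            red = red.astype(np.float64)
            nir = nir.astype(np.float64)
            denom = nir + red
            return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)

        band_red = np.random.rand(512, 512)   # placeholder reflectance bands
        band_nir = np.random.rand(512, 512)
        green_mask = ndvi(band_red, band_nir) > 0.4   # simple vegetation threshold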

  15. Quantitative analysis of receptor imaging

    International Nuclear Information System (INIS)

    Fu Zhanli; Wang Rongfu

    2004-01-01

    Model-based methods for the quantitative analysis of receptor imaging, including kinetic, graphical and equilibrium methods, are introduced in detail. Some technical problems facing the quantitative analysis of receptor imaging, such as the correction for in vivo metabolism of the tracer, the radioactivity contribution from blood volume within the ROI, and the estimation of the non-displaceable ligand concentration, are also reviewed briefly.

  16. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    The aim of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  17. Production optimization of {sup 99}Mo/{sup 99m}Tc zirconium molybate gel generators at semi-automatic device: DISIGEG

    Energy Technology Data Exchange (ETDEWEB)

    Monroy-Guzman, F., E-mail: fabiola.monroy@inin.gob.mx [Instituto Nacional de Investigaciones Nucleares, Carretera Mexico-Toluca S/N, La Marquesa, Ocoyoacac, 52750, Estado de Mexico (Mexico); Rivero Gutierrez, T., E-mail: tonatiuh.rivero@inin.gob.mx [Instituto Nacional de Investigaciones Nucleares, Carretera Mexico-Toluca S/N, La Marquesa, Ocoyoacac, 52750, Estado de Mexico (Mexico); Lopez Malpica, I.Z.; Hernandez Cortes, S.; Rojas Nava, P.; Vazquez Maldonado, J.C. [Instituto Nacional de Investigaciones Nucleares, Carretera Mexico-Toluca S/N, La Marquesa, Ocoyoacac, 52750, Estado de Mexico (Mexico); Vazquez, A. [Instituto Mexicano del Petroleo, Eje Central Norte Lazaro Cardenas 152, Col. San Bartolo Atepehuacan, 07730, Mexico D.F. (Mexico)

    2012-01-15

    DISIGEG is a synthesis installation of zirconium ⁹⁹Mo-molybdate gels for ⁹⁹Mo/⁹⁹ᵐTc generator production, which has been designed, built and installed at the ININ. The device consists of a synthesis reactor and five systems controlled via keyboard: (1) raw material access, (2) chemical air stirring, (3) gel drying by air and infrared heating, (4) moisture removal and (5) gel extraction. DISIGEG operation is described, and the effects of the drying conditions of zirconium ⁹⁹Mo-molybdate gels on ⁹⁹Mo/⁹⁹ᵐTc generator performance were evaluated, as well as some physical-chemical properties of these gels. The results reveal that the temperature, time and air flow applied during the drying process directly affect zirconium ⁹⁹Mo-molybdate gel generator performance. All gels prepared have a similar chemical structure, probably constituted by a three-dimensional network based on zirconium pentagonal bipyramids and molybdenum octahedra. Basic structural variations cause a change in gel porosity and permeability, favouring or inhibiting ⁹⁹ᵐTcO₄⁻ diffusion into the matrix. The ⁹⁹ᵐTcO₄⁻ eluates produced by ⁹⁹Mo/⁹⁹ᵐTc zirconium ⁹⁹Mo-molybdate gel generators prepared in DISIGEG, air dried at 80 °C for 5 h using an air flow of 90 mm, satisfied all the Pharmacopoeias' regulations: ⁹⁹ᵐTc yield between 70-75%, ⁹⁹Mo breakthrough less than 3 × 10⁻³%, radiochemical purities about 97%, and sterile and pyrogen-free eluates with a pH of 6. - Highlights: • ⁹⁹Mo/⁹⁹ᵐTc generators based on ⁹⁹Mo-molybdate gels were synthesized at a semi-automatic device. • Generator performances depend on the synthesis conditions of the zirconium ⁹⁹Mo-molybdate gel. • ⁹⁹ᵐTcO₄⁻ diffusion and yield into the generator depend on gel porosity and permeability.

  18. Hyperspectral image analysis. A tutorial

    DEFF Research Database (Denmark)

    Amigo Rubio, Jose Manuel; Babamoradi, Hamid; Elcoroaristizabal Martin, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case.

  19. Hyperspectral image analysis. A tutorial

    International Nuclear Information System (INIS)

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-01-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics like hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing will be exposed, and some guidelines given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper has focused on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using Near infrared hyperspectral imaging and Partial Least Squares – Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case. - Highlights: • Comprehensive tutorial of Hyperspectral Image analysis. • Hierarchical discrimination of six classes of plastics containing flame retardant. • Step by step guidelines to perform class-modeling on hyperspectral images. • Fusion of multivariate data analysis and digital image processing methods. • Promising methodology for real-time detection of plastics containing flame retardant.
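
    A minimal sketch of the PLS-DA step named above, assuming scikit-learn; the pixel spectra and class labels are random placeholders:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # X: one NIR spectrum per pixel; y: integer plastic class per pixel
        X = np.random.rand(1000, 200)
        y = np.random.randint(0, 6, 1000)

        # PLS-DA = PLS regression against a one-hot encoding of the classes
        Y = np.eye(6)[y]
        pls = PLSRegression(n_components=10).fit(X, Y)

        # Predicted class = column with the largest predicted response
        y_pred = np.argmax(pls.predict(X), axis=1)
        print("training accuracy:", (y_pred == y).mean())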

  20. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purposes. Two main frameworks are described: marked point process and random closed set models. We derive the main issues for defining an appropriate model. The algorithms for sampling and optimizing the models, as well as for estimating parameters, are reviewed. Numerous applications, covering remote sensing images, biological and medical imaging, are detailed. This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  1. Multimodality image analysis work station

    International Nuclear Information System (INIS)

    Ratib, O.; Huang, H.K.

    1989-01-01

    The goal of this project is to design and implement a PACS (picture archiving and communication system) workstation for the quantitative analysis of multimodality images. The Macintosh II personal computer was selected for its friendly user interface, its popularity among the academic and medical community, and its low cost. The Macintosh operates as a stand-alone workstation where images are imported from a central PACS server through a standard Ethernet network and saved on a local magnetic or optical disk. A video digitizer board allows for the direct acquisition of images from sonograms or from digitized cine angiograms. The authors have focused their project on the exploration of new means of communicating quantitative data and information through the use of an interactive and symbolic user interface. The software developed includes a variety of image analysis algorithms for digitized angiograms, sonograms, scintigraphic images, MR images, and CT scans.

  2. Automated Glacier Mapping using Object Based Image Analysis. Case Studies from Nepal, the European Alps and Norway

    Science.gov (United States)

    Vatle, S. S.

    2015-12-01

    Frequent and up-to-date glacier outlines are needed for many applications of glaciology: not only glacier area change analysis, but also masks in volume or velocity analyses, the estimation of water resources, and model input data. Remote sensing offers a good option for creating glacier outlines over large areas, but manual correction is frequently necessary, especially in areas containing supraglacial debris. We show three different workflows for mapping clean ice and debris-covered ice within Object Based Image Analysis (OBIA). By working at the object level as opposed to the pixel level, OBIA facilitates using contextual, spatial and hierarchical information when assigning classes, and additionally permits the handling of multiple data sources. Our first example shows mapping debris-covered ice in the Manaslu Himalaya, Nepal. SAR coherence data are used in combination with optical and topographic data to classify debris-covered ice, obtaining an accuracy of 91%. Our second example uses a high-resolution LiDAR-derived DEM over the Hohe Tauern National Park in Austria. Breaks in surface morphology are used in creating image objects; debris-covered ice is then classified using a combination of spectral, thermal and topographic properties. Lastly, we show a completely automated workflow for mapping glacier ice in Norway. The NDSI and the NIR/SWIR band ratio are used to map clean ice over the entire country, but the thresholds are calculated automatically based on a histogram of each image subset. This means that in theory any Landsat scene can be input and the clean ice extracted automatically. Debris-covered ice can be included semi-automatically using contextual and morphological information.
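
    A minimal sketch of the automatic thresholding idea described above, assuming NumPy and scikit-image; the band arrays are random placeholders:

        import numpy as np
        from skimage.filters import threshold_otsu

        def ndsi(green: np.ndarray, swir: np.ndarray) -> np.ndarray:
            """Normalized Difference Snow Index: (green - SWIR) / (green + SWIR)."""
            green = green.astype(np.float64)
            swir = swir.astype(np.float64)
            denom = green + swir
            return np.divide(green - swir, denom, out=np.zeros_like(denom), where=denom != 0)

        band_green = np.random.rand(512, 512)   # placeholder reflectance bands
        band_swir = np.random.rand(512, 512)

        index = ndsi(band_green, band_swir)
        t = threshold_otsu(index)                # scene-specific threshold from the histogram
        clean_ice = index > t                    # binary clean-ice mask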

  3. CONTEXT BASED FOOD IMAGE ANALYSIS

    OpenAIRE

    He, Ye; Xu, Chang; Khanna, Nitin; Boushey, Carol J.; Delp, Edward J.

    2013-01-01

    We are developing a dietary assessment system that records daily food intake through the use of food images. Recognizing food in an image is difficult due to large visual variance with respect to eating or preparation conditions. This task becomes even more challenging when different foods have similar visual appearance. In this paper we propose to incorporate two types of contextual dietary information, food co-occurrence patterns and personalized learning models, in food image analysis to r...

  4. Multispectral analysis of multimodal images

    Energy Technology Data Exchange (ETDEWEB)

    Kvinnsland, Yngve; Brekke, Njaal (Dept. of Surgical Sciences, Univ. of Bergen, Bergen (Norway)); Taxt, Torfinn M.; Gruener, Renate (Dept. of Biomedicine, Univ. of Bergen, Bergen (Norway))

    2009-02-15

    An increasing number of multimodal images represents a valuable increase in available image information, but at the same time it complicates the extraction of diagnostic information across the images. Multispectral analysis (MSA) has the potential to simplify this problem substantially, as an unlimited number of images can be combined and tissue properties across the images can be extracted automatically. Materials and methods. We have developed a software solution for MSA containing two algorithms for unsupervised classification, an EM-algorithm finding multinormal class descriptions and the k-means clustering algorithm, and two for supervised classification, a Bayesian classifier using multinormal class descriptions and a kNN-algorithm. The software has an efficient user interface for the creation and manipulation of class descriptions, and it has proper tools for displaying the results. Results. The software has been tested on different sets of images. One application is to segment cross-sectional images of brain tissue (T1- and T2-weighted MR images) into its main normal tissues and brain tumors. Another interesting set of images are the perfusion maps and diffusion maps derived from raw MR images. The software returns segmentations that seem to be sensible. Discussion. The MSA software appears to be a valuable tool for image analysis with multimodal images at hand. It readily gives a segmentation of image volumes that visually seems sensible. However, to really learn how to use MSA, it will be necessary to gain more insight into what tissues the different segments contain, and the upcoming work will therefore be focused on examining the tissues through, for example, histological sections.
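
    A minimal sketch of the unsupervised EM approach described above, assuming scikit-learn; the co-registered T1/T2 volumes are random placeholders:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Two co-registered MR volumes flattened into per-voxel feature vectors
        t1 = np.random.rand(64, 64, 24)
        t2 = np.random.rand(64, 64, 24)
        features = np.stack([t1.ravel(), t2.ravel()], axis=1)

        # EM fits one multinormal class description per tissue class
        gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
        labels = gmm.fit_predict(features).reshape(t1.shape)   # per-voxel tissue label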

  5. Imaging mass spectrometry statistical analysis.

    Science.gov (United States)

    Jones, Emrys A; Deininger, Sören-Oliver; Hogendoorn, Pancras C W; Deelder, André M; McDonnell, Liam A

    2012-08-30

    Imaging mass spectrometry is increasingly used to identify new candidate biomarkers. This clinical application of imaging mass spectrometry is highly multidisciplinary: expertise in mass spectrometry is necessary to acquire high-quality data, histology is required to accurately label the origin of each pixel's mass spectrum, disease biology is necessary to understand the potential meaning of the imaging mass spectrometry results, and statistics is needed to assess the confidence of any findings. Imaging mass spectrometry data analysis is further complicated because of the unique nature of the data (within the mass spectrometry field); several of the assumptions implicit in the analysis of LC-MS/profiling datasets are not applicable to imaging. The very large size of imaging datasets and the reporting of many data analysis routines, combined with inadequate training and few accessible reviews, have exacerbated this problem. In this paper we provide an accessible review of the nature of imaging data and the different strategies by which the data may be analyzed. Particular attention is paid to the assumptions of the data analysis routines to ensure that the reader is apprised of their correct usage in imaging mass spectrometry research. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. UV imaging in pharmaceutical analysis

    DEFF Research Database (Denmark)

    Østergaard, Jesper

    2018-01-01

    UV imaging provides spatially and temporally resolved absorbance measurements, which are highly useful in pharmaceutical analysis. Commercial UV imaging instrumentation was originally developed as a detector for separation sciences, but the main use is in the area of in vitro dissolution...

  7. Development of a Semi-Automatic Technique for Flow Estimation using Optical Flow Registration and k-means Clustering on Two Dimensional Cardiovascular Magnetic Resonance Flow Images

    DEFF Research Database (Denmark)

    Brix, Lau; Christoffersen, Christian P. V.; Kristiansen, Martin Søndergaard

    was then categorized into groups by the k-means clustering method. Finally, the cluster containing the vessel under investigation was selected manually with a single mouse click. All calculations were performed on an Nvidia 8800 GTX graphics card using the Compute Unified Device Architecture (CUDA) extension to the C...
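
    A minimal sketch of the flow-then-cluster idea described above, assuming OpenCV and scikit-learn; the two frames are random placeholders for consecutive cine images:

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        frame0 = np.random.randint(0, 255, (256, 256), dtype=np.uint8)  # placeholder frames
        frame1 = np.random.randint(0, 255, (256, 256), dtype=np.uint8)

        # Dense optical flow registration between consecutive cine frames
        flow = cv2.calcOpticalFlowFarneback(frame0, frame1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

        # Cluster per-pixel motion vectors; one cluster should isolate the vessel
        vectors = flow.reshape(-1, 2)
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
        label_image = labels.reshape(flow.shape[:2])   # the user then picks the vessel cluster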

  8. Supervised Mineral Classification with Semi-automatic Training and Validation Set Generation in Scanning Electron Microscope Energy Dispersive Spectroscopy Images of Thin Sections

    DEFF Research Database (Denmark)

    Flesche, Harald; Nielsen, Allan Aasbjerg; Larsen, Rasmus

    2000-01-01

    This paper addresses the problem of classifying minerals common in siliciclastic and carbonate rocks. Twelve chemical elements are mapped from thin sections by energy dispersive spectroscopy in a scanning electron microscope (SEM). Extensions to traditional multivariate statistical methods...

  9. Methods in quantitative image analysis.

    Science.gov (United States)

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    The main steps of image analysis are image capturing, image storage (compression), correction of imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is performed by a camera. The most modern types include a frame-grabber, converting the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, named a pixel. The information is stored in bits. Eight bits are summarised in one byte. Therefore, grey values can take 256 different values (2⁸). The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination in the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel by pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values existing within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value
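
    A minimal sketch of the shading correction described above, assuming NumPy; the image and background arrays are random placeholders:

        import numpy as np

        image = np.random.rand(512, 512) * 200 + 20       # placeholder acquisition
        background = np.random.rand(512, 512) * 10 + 180  # empty-field (white) image

        # Subtractive shading correction: remove the illumination pattern pixel by pixel
        corrected_sub = image - background + background.mean()

        # Multiplicative variant: divide by the background image and rescale
        eps = 1e-9
        corrected_div = image / (background + eps) * background.mean()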

  10. Image formation and image analysis in electron microscopy

    International Nuclear Information System (INIS)

    Heel, M. van.

    1981-01-01

    This thesis covers various aspects of image formation and image analysis in electron microscopy. The imaging of relatively strong objects in partially coherent illumination, the coherence properties of thermionic emission sources and the detection of objects in quantum noise limited images are considered. IMAGIC, a fast, flexible and friendly image analysis software package is described. Intelligent averaging of molecular images is discussed. (C.F.)

  11. Image analysis enhancement and interpretation

    International Nuclear Information System (INIS)

    Glauert, A.M.

    1978-01-01

    The necessary practical and mathematical background is provided for the analysis of an electron microscope image in order to extract the maximum amount of structural information. Instrumental methods of image enhancement are described, including the use of the energy-selecting electron microscope and the scanning transmission electron microscope. The problems of image interpretation are considered with particular reference to the limitations imposed by radiation damage and specimen thickness. A brief survey is given of the methods for producing a three-dimensional structure from a series of two-dimensional projections, although the emphasis is on the analysis, processing and interpretation of the two-dimensional projection of a structure. (Auth.)

  12. Image Analysis of Eccentric Photorefraction

    Directory of Open Access Journals (Sweden)

    J. Dušek

    2004-01-01

    This article deals with image and data analysis of recorded video-sequences of strabismic infants. It describes a unique noninvasive measuring system for infants based on two measuring methods (the position of the first Purkinje image relative to the centre of the lens, and eccentric photorefraction). The whole process is divided into three steps. The aim of the first step is to obtain video sequences on our special system (Eye Movement Analyser). Image analysis of the recorded sequences is then performed in order to obtain curves of basic eye reactions (accommodation and convergence). The last step is to calibrate these curves to the corresponding units (diopters and degrees of movement).

  13. Introduction to Medical Image Analysis

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Moeslund, Thomas B.

    This book is a result of a collaboration between DTU Informatics at the Technical University of Denmark and the Laboratory of Computer Vision and Media Technology at Aalborg University. It is partly based on the book "Image and Video Processing", second edition, by Thomas Moeslund. The aim of the book is to present the fascinating world of medical image analysis in an easy and interesting way. Compared to many standard books on image analysis, the approach we have chosen is less mathematical and more casual. Some of the key algorithms are exemplified in C-code. Please note that the code...

  14. Shape analysis in medical image analysis

    CERN Document Server

    Tavares, João

    2014-01-01

    This book contains thirteen contributions from invited experts of international recognition addressing important issues in shape analysis in medical image analysis, including techniques for image segmentation, registration, modelling and classification, and applications in biology, as well as in cardiac, brain, spine, chest, lung and clinical practice. This volume treats topics such as, anatomic and functional shape representation and matching; shape-based medical image segmentation; shape registration; statistical shape analysis; shape deformation; shape-based abnormity detection; shape tracking and longitudinal shape analysis; machine learning for shape modeling and analysis; shape-based computer-aided-diagnosis; shape-based medical navigation; benchmark and validation of shape representation, analysis and modeling algorithms. This work will be of interest to researchers, students, and manufacturers in the fields of artificial intelligence, bioengineering, biomechanics, computational mechanics, computationa...

  15. Artificial intelligence and medical imaging. Expert systems and image analysis

    International Nuclear Information System (INIS)

    Wackenheim, A.; Zoellner, G.; Horviller, S.; Jacqmain, T.

    1987-01-01

    This paper gives an overview of the existing systems for automated image analysis and interpretation in medical imaging, especially in radiology. The example of ORFEVRE, the system for the analysis of CAT-scan images of the cervical triplet (C3-C5) by image analysis and a subsequent expert system, is given and discussed in detail. Possible extensions are described. [fr]

  16. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    Presenting a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and making suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  17. Multifractal-based nuclei segmentation in fish images.

    Science.gov (United States)

    Reljin, Nikola; Slavkovic-Ilic, Marijeta; Tapia, Coya; Cihoric, Nikola; Stankovic, Srdjan

    2017-09-01

    A method for nuclei segmentation in fluorescence in-situ hybridization (FISH) images, based on inverse multifractal analysis (IMFA), is proposed. From the blue channel of the FISH image in RGB format, the matrix of Hölder exponents, in one-to-one correspondence with the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Hölder exponents by applying a predefined hard threshold; the user then evaluates the result and can refine the segmentation by changing the threshold, if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) score can be determined in the usual way: by counting red and green dots within the segmented nuclei and finding their ratio. The IMFA segmentation method was tested on 100 clinical cases, evaluated by a skilled pathologist. Testing results show that the new method has advantages compared to previously reported methods.
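
    A hedged sketch of the semi-automatic loop this record describes: threshold the Hölder-exponent matrix, show the result, and let the user adjust the threshold. The array `holder`, the default threshold, and the convention that low exponents mark nuclei are assumptions, not details from the paper.

    ```python
    # Illustrative threshold-and-refine loop for a Hölder-exponent matrix.
    import numpy as np

    def segment_nuclei(holder, threshold):
        return holder <= threshold                   # binary nuclei mask (assumed polarity)

    def refine_interactively(holder, initial_threshold=1.0, step=0.05):
        t = initial_threshold
        while True:
            mask = segment_nuclei(holder, t)
            print(f"threshold={t:.2f}, nuclei pixels={int(mask.sum())}")
            answer = input("accept (a), lower (l), higher (h)? ")
            if answer == "a":
                return mask
            t += step if answer == "h" else -step    # user-driven refinement
    ```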

  18. Pocket pumped image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kotov, I.V., E-mail: kotov@bnl.gov [Brookhaven National Laboratory, Upton, NY 11973 (United States); O' Connor, P. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Murray, N. [Centre for Electronic Imaging, Open University, Milton Keynes, MK7 6AA (United Kingdom)

    2015-07-01

    The pocket pumping technique is used to detect small electron trap sites. These traps, if present, degrade CCD charge transfer efficiency. To reveal traps in the active area, a CCD is illuminated with a flat field and, before the image is read out, the accumulated charges are moved back and forth a number of times in the parallel direction. As charges are moved over a trap, an electron is removed from the original pocket and re-emitted into the following pocket. As the process repeats, one pocket becomes depleted and the neighboring pocket accumulates excess charge. As a result, a “dipole” signal appears on the otherwise flat background level. The amplitude of the dipole signal depends on the trap pumping efficiency. This paper is focused on the trap identification technique and particularly on new methods developed for this purpose. A sensor with bad segments was deliberately chosen for algorithm development and to demonstrate the sensitivity and power of the new methods in uncovering sensor defects.
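
    As an illustration of the dipole signal described above, the sketch below flags pixel pairs along the parallel-transfer axis where one pixel shows a deficit and its neighbour an excess relative to the flat field; the detection rule and the n-sigma cut are assumed simplifications, not the paper's algorithm.

    ```python
    # Illustrative dipole finder for pocket-pumped flat-field images.
    import numpy as np

    def find_dipoles(image, flat_level, n_sigma=5.0):
        resid = image.astype(float) - flat_level     # departure from flat field
        sigma = resid.std()
        deficit = resid[:-1, :] < -n_sigma * sigma   # depleted pocket
        excess = resid[1:, :] > n_sigma * sigma      # overfilled neighbour
        rows, cols = np.nonzero(deficit & excess)    # adjacent deficit/excess pair
        return list(zip(rows.tolist(), cols.tolist()))  # candidate trap sites
    ```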

  19. 32P-postlabeling assay for carcinogen-DNA adducts: description of beta shielding apparatus and semi-automatic spotting and washing devices that facilitate the handling of multiple samples

    International Nuclear Information System (INIS)

    Reddy, M.V.; Blackburn, G.R.

    1990-01-01

    The utilization of the 32P-postlabeling assay in combination with TLC for the sensitive detection and estimation of aromatic DNA adducts has been increasing. The procedure consists of 32P-labeling of carcinogen-adducted 3'-nucleotides in the DNA digests using γ-32P ATP and polynucleotide kinase, separation of the 32P-labeled adducts by TLC, and their detection by autoradiography. During both 32P-labeling and the initial phases of TLC, a relatively high amount of γ-32P ATP is handled when 30 samples are processed simultaneously. We describe the design of acrylic shielding apparatus, semi-automatic TLC spotting devices, and devices for the development and washing of multiple TLC plates, which not only provide substantial protection from exposure to 32P beta radiation, but also allow quick and easy handling of a large number of samples. Specifically, the equipment includes: (i) a multi-tube carousel rack having 15 wells to hold capless Eppendorf tubes and a rotatable lid with an aperture to access individual tubes; (ii) a pipette shielder; (iii) two semi-automatic spotting devices to apply radioactive solutions to TLC plates; (iv) a multi-plate holder for TLC plates; and (v) a mechanical device for washing multiple TLC plates. Item (i) is small enough to be held in one hand, vortexed, and centrifuged to mix the solutions in each tube while beta radiation is shielded. Items (iii) to (v) aid in the automation of the assay. (author)

  20. Imaging software accuracy for 3-dimensional analysis of the upper airway.

    Science.gov (United States)

    Weissheimer, André; Menezes, Luciane Macedo de; Sameshima, Glenn T; Enciso, Reyes; Pham, John; Grauer, Dan

    2012-12-01

    The aim of this study was to compare the precision and accuracy of 6 imaging software programs for measuring upper airway volumes in cone-beam computed tomography data. The sample consisted of 33 growing patients and an oropharynx acrylic phantom, scanned with an i-CAT scanner (Imaging Sciences International, Hatfield, Pa). The known oropharynx acrylic phantom volume was used as the gold standard. Semi-automatic segmentations with interactive and fixed threshold protocols of the patients' oropharynx and oropharynx acrylic phantom were performed by using Mimics (Materialise, Leuven, Belgium), ITK-Snap (www.itksnap.org), OsiriX (Pixmeo, Geneva, Switzerland), Dolphin3D (Dolphin Imaging & Management Solutions, Chatsworth, Calif), InVivo Dental (Anatomage, San Jose, Calif), and Ondemand3D (CyberMed, Seoul, Korea) software programs. The intraclass correlation coefficient was used for the reliability tests. A repeated measurements analysis of variance (ANOVA) test and post-hoc tests (Bonferroni) were used to compare the software programs. The reliability was high for all programs. With the interactive threshold protocol, the oropharynx acrylic phantom segmentations with Mimics, Dolphin3D, OsiriX, and ITK-Snap showed less than 2% errors in volumes compared with the gold standard. Ondemand3D and InVivo Dental had more than 5% errors compared with the gold standard. With the fixed threshold protocol, the volume errors were similar (-11.1% to -11.7%) among the programs. In the oropharynx segmentation with the interactive protocol, ITK-Snap, Mimics, OsiriX, and Dolphin3D were statistically significantly different (P < 0.05); no statistically significant difference (P > 0.05) was found between InVivo Dental and OnDemand3D. All 6 imaging software programs were reliable but had errors in the volume segmentations of the oropharynx. Mimics, Dolphin3D, ITK-Snap, and OsiriX were similar and more accurate than InVivo Dental and Ondemand3D for upper airway assessment. Copyright © 2012 American Association of Orthodontists.

  1. Signal and image multiresolution analysis

    CERN Document Server

    Ouahabi, Abdeldjalil

    2012-01-01

    Multiresolution analysis using the wavelet transform has received considerable attention in recent years by researchers in various fields. It is a powerful tool for efficiently representing signals and images at multiple levels of detail with many inherent advantages, including compression, level-of-detail display, progressive transmission, level-of-detail editing, filtering, modeling, fractals and multifractals, etc. This book aims to provide a simple formalization and new clarity on multiresolution analysis, rendering accessible obscure techniques, and merging, unifying or completing

  2. Teaching image analysis at DIKU

    DEFF Research Database (Denmark)

    Johansen, Peter

    2010-01-01

    The early development of computer vision at Department of Computer Science at University of Copenhagen (DIKU) is briefly described. The different disciplines in computer vision are introduced, and the principles for teaching two courses, an image analysis course, and a robot lab class are outlined....

  3. Automating the segmentation of medical images for the production of voxel tomographic computational models

    International Nuclear Information System (INIS)

    Caon, M.

    2001-01-01

    Radiation dosimetry for the diagnostic medical imaging procedures performed on humans requires anatomically accurate, computational models. These may be constructed from medical images as voxel-based tomographic models. However, they are time consuming to produce and as a consequence, there are few available. This paper discusses the emergence of semi-automatic segmentation techniques and describes an application (iRAD) written in Microsoft Visual Basic that allows the bitmap of a medical image to be segmented interactively and semi-automatically while displayed in Microsoft Excel. iRAD will decrease the time required to construct voxel models. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine

  4. Application of a quantitative image analysis to the evaluation of microbial process parameters of biogas plants; Einsatz einer quantitativen Bildanalyse zur Beurteilung mikrobieller Prozessparameter von Biogasanlagen

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yong Sung; Scherer, Paul [Hamburg Univ. of Applied Sciences, Hamburg-Bergedorf (Germany). Faculty of Life Sciences

    2013-10-01

    To evaluate the efficiency and microbial activity of biogas or anaerobic waste water treatment plants, the number of microorganisms per gram or per milliliter is considered a crucial factor. For several years our research group at HAW-Hamburg has worked to develop a simple and rapid technique for the quantification and classification of environmental microbes with a semi-automatic digital image analysis. Methanogens were detected via the fluorescent coenzyme F420, whose signal reflects their vitality. Furthermore, this technique has been supplemented with morphological classification, resulting in Quantitative Microscopic Fingerprinting (QMF). The technique makes it possible to identify microbial causes of reactor performance problems. In addition, QMF allows differentiating between H2/CO2- and acetate-consuming methanogens according to their morphology. At the moment, this study focuses on relationships between QMF and plant operating parameters. As an example, a thermophilic biogas plant fed with 65% liquid cow manure, maize silage, grass silage and solid cow manure was analyzed for more than 22 weeks. Basic background and methodological procedures are presented, together with a validation of the technique. (orig.)

  5. Astronomical Image and Data Analysis

    CERN Document Server

    Starck, J.-L

    2006-01-01

    With information and scale as central themes, this comprehensive survey explains how to handle real problems in astronomical data analysis using a modern arsenal of powerful techniques. It treats those innovative methods of image, signal, and data processing that are proving to be both effective and widely relevant. The authors are leaders in this rapidly developing field and draw upon decades of experience. They have been playing leading roles in international projects such as the Virtual Observatory and the Grid. The book addresses not only students and professional astronomers and astrophysicists, but also serious amateur astronomers and specialists in earth observation, medical imaging, and data mining. The coverage includes chapters or appendices on: detection and filtering; image compression; multichannel, multiscale, and catalog data analytical methods; wavelets transforms, Picard iteration, and software tools. This second edition of Starck and Murtagh's highly appreciated reference again deals with to...

  6. Identification of Forested Landslides Using LiDar Data, Object-based Image Analysis, and Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Xianju Li

    2015-07-01

    For identification of forested landslides, most studies focus on knowledge-based and pixel-based analysis (PBA) of LiDar data, while few studies have examined (semi-)automated methods and object-based image analysis (OBIA). Moreover, most of them are focused on soil-covered areas with gentle hillslopes. In bedrock-covered mountains with steep and rugged terrain, it is so difficult to identify landslides that there is currently no research on whether combining semi-automated methods and OBIA with only LiDar derivatives could be more effective. In this study, a semi-automatic object-based landslide identification approach was developed and implemented in a forested area, the Three Gorges of China. Comparisons of OBIA and PBA, two different machine learning algorithms and their respective sensitivity to feature selection (FS), were first investigated. Based on the classification result, the landslide inventory was finally obtained according to (1) inclusion of holes encircled by the landslide body; (2) removal of isolated segments; and (3) delineation of closed envelope curves for landslide objects by manual digitizing. The proposed method achieved the following: (1) the filter features of surface roughness were first applied for calculating object features, and proved useful; (2) FS improved classification accuracy and reduced features; (3) the random forest algorithm achieved higher accuracy and was less sensitive to FS than a support vector machine; (4) compared to PBA, OBIA was more sensitive to FS, remarkably reduced computing time, and depicted more contiguous terrain segments; (5) based on the classification result with an overall accuracy of 89.11% ± 0.03%, the obtained inventory map was consistent with the reference landslide inventory map, with a position mismatch value of 9%. The outlined approach would be helpful for forested landslide identification in steep and rugged terrain.
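
    Steps (1) and (2) of the inventory post-processing above can be sketched with scipy.ndimage; the minimum-segment size is an assumed parameter, not a value from the study.

    ```python
    # Sketch of the mask clean-up: fill holes enclosed by the landslide body,
    # then drop isolated segments.
    import numpy as np
    from scipy import ndimage

    def clean_landslide_mask(mask, min_pixels=50):
        filled = ndimage.binary_fill_holes(mask)         # step (1): include holes
        labels, n = ndimage.label(filled)                # connected components
        sizes = ndimage.sum(filled, labels, index=range(1, n + 1))
        keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
        return np.isin(labels, keep)                     # step (2): drop small segments
    ```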

  7. Image analysis for material characterisation

    Science.gov (United States)

    Livens, Stefan

    In this thesis, a number of image analysis methods are presented as solutions to two applications concerning the characterisation of materials. Firstly, we deal with the characterisation of corrosion images, which is handled using a multiscale texture analysis method based on wavelets. We propose a feature transformation that deals with the problem of rotation invariance. Classification is performed with a Learning Vector Quantisation neural network and with combination of outputs. In an experiment, 86.2% of the images showing either pit formation or cracking are correctly classified. Secondly, we develop an automatic system for the characterisation of silver halide microcrystals. These are flat crystals with a triangular or hexagonal base and a thickness in the 100 to 200 nm range. A light microscope is used to image them. A novel segmentation method is proposed, which allows agglomerated crystals to be separated. For the measurement of shape, the ratio between the largest and the smallest radius yields the best results. The thickness measurement is based on the interference colours that appear for light reflected at the crystals. The mean colour of different thickness populations is determined, from which a calibration curve is derived. With this, the thickness of new populations can be determined accurately.

  8. Planning applications in image analysis

    Science.gov (United States)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  9. Quantitative image analysis of synovial tissue

    NARCIS (Netherlands)

    van der Hall, Pascal O.; Kraan, Maarten C.; Tak, Paul Peter

    2007-01-01

    Quantitative image analysis is a form of imaging that includes microscopic histological quantification, video microscopy, image analysis, and image processing. Hallmarks are the generation of reliable, reproducible, and efficient measurements via strict calibration and step-by-step control of the

  10. Mesh Processing in Medical Image Analysis

    DEFF Research Database (Denmark)

    The following topics are dealt with: mesh processing; medical image analysis; interactive freeform modeling; statistical shape analysis; clinical CT images; statistical surface recovery; automated segmentation; cerebral aneurysms; and real-time particle-based representation....

  11. Automated image analysis of uterine cervical images

    Science.gov (United States)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

    Cervical Cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated by 40 human subjects' data and demonstrates high correlation with experts' annotations.

  12. Development and Evaluation of a Semi-automated Segmentation Tool and a Modified Ellipsoid Formula for Volumetric Analysis of the Kidney in Non-contrast T2-Weighted MR Images.

    Science.gov (United States)

    Seuss, Hannes; Janka, Rolf; Prümmer, Marcus; Cavallaro, Alexander; Hammon, Rebecca; Theis, Ragnar; Sandmair, Martin; Amann, Kerstin; Bäuerle, Tobias; Uder, Michael; Hammon, Matthias

    2017-04-01

    Volumetric analysis of the kidney parenchyma provides additional information for the detection and monitoring of various renal diseases. The purposes of the study were therefore to develop and evaluate a semi-automated segmentation tool and a modified ellipsoid formula for volumetric analysis of the kidney in non-contrast T2-weighted magnetic resonance (MR) images. Three readers performed semi-automated segmentation of the total kidney volume (TKV) in axial, non-contrast-enhanced T2-weighted MR images of 24 healthy volunteers (48 kidneys) twice. A semi-automated threshold-based segmentation tool was developed to segment the kidney parenchyma. Furthermore, the three readers measured renal dimensions (length, width, depth) and applied different formulas to calculate the TKV. Manual segmentation served as a reference volume. Volumes of the different methods were compared and the time required was recorded. There was no significant difference between the semi-automatically and manually segmented TKV (p = 0.31). The difference in mean volumes was 0.3 ml (95% confidence interval (CI), -10.1 to 10.7 ml). Semi-automated segmentation was significantly faster than manual segmentation, with a mean difference of 188 s (220 vs. 408 s; p < 0.05). Semi-automated segmentation of the kidney in non-contrast T2-weighted MR data delivers accurate and reproducible results and was significantly faster than manual segmentation. Applying a modified ellipsoid formula quickly provides an accurate kidney volume.
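
    The abstract does not give the exact form of the modified ellipsoid formula; the sketch below implements the standard ellipsoid estimate V = (π/6)·L·W·D, with a hypothetical correction factor as a placeholder for the study's modification.

    ```python
    # Standard ellipsoid kidney-volume estimate from length, width, depth.
    import math

    def ellipsoid_kidney_volume(length_cm, width_cm, depth_cm, correction=1.0):
        # correction is a placeholder for the study's (unstated) modification
        return math.pi / 6.0 * length_cm * width_cm * depth_cm * correction

    # e.g. an 11 x 5 x 4 cm kidney -> about 115 ml before any correction
    print(ellipsoid_kidney_volume(11, 5, 4))
    ```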

  13. Image Analysis for X-ray Imaging of Food

    DEFF Research Database (Denmark)

    Einarsdottir, Hildur

    X-ray imaging systems are increasingly used for quality and safety evaluation both within food science and production. They offer non-invasive and nondestructive penetration capabilities to image the inside of food. This thesis presents applications of a novel grating-based X-ray imaging technique ... for quality and safety evaluation of food products. In this effort the fields of statistics, image analysis and statistical learning are combined, to provide analytical tools for determining the aforementioned food traits. The work demonstrated includes a quantitative analysis of heat induced changes ... and defect detection in food. Compared to the complex three dimensional analysis of microstructure, here two dimensional images are considered, making the method applicable for an industrial setting. The advantages obtained by grating-based imaging are compared to conventional X-ray imaging, for both foreign...

  14. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research.

    Science.gov (United States)

    Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R

    2016-01-01

    Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions

  15. Ultrasonic image analysis and image-guided interventions.

    Science.gov (United States)

    Noble, J Alison; Navab, Nassir; Becher, H

    2011-08-06

    The fields of medical image analysis and computer-aided interventions deal with reducing the large volume of digital images (X-ray, computed tomography, magnetic resonance imaging (MRI), positron emission tomography and ultrasound (US)) to more meaningful clinical information using software algorithms. US is a core imaging modality employed in these areas, both in its own right and used in conjunction with the other imaging modalities. It is receiving increased interest owing to the recent introduction of three-dimensional US, significant improvements in US image quality, and better understanding of how to design algorithms which exploit the unique strengths and properties of this real-time imaging modality. This article reviews the current state of the art in US image analysis and its application in image-guided interventions. The article concludes by giving a perspective from clinical cardiology, which is one of the most advanced areas of clinical application of US image analysis, and describing some probable future trends in this important area of ultrasonic imaging research.

  16. Vaccine Images on Twitter: Analysis of What Images are Shared.

    Science.gov (United States)

    Chen, Tao; Dredze, Mark

    2018-04-03

    Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet's textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. ©Tao Chen, Mark Dredze. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.04.2018.

  17. Vaccine Images on Twitter: Analysis of What Images are Shared

    Science.gov (United States)

    Dredze, Mark

    2018-01-01

    Background Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. Objective The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. Methods We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Results Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet’s textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. Conclusions We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. PMID:29615386

  18. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, M; Woo, B; Kim, J [Seoul National University, Seoul (Korea, Republic of); Jamshidi, N; Kuo, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficient of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, and the grow cut method ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficient of variation for especially important features which were previously reported as predictive of patient survival were: 3.4% with deformable model and 7.4% with grow cut method for the proportion of contrast enhanced tumor region; 5.5% with deformable model and 25.7% with grow cut method for the proportion of necrosis; and 2.1% with deformable model and 4.4% with grow cut method for edge sharpness of tumor on CE-T1W1. Conclusion: Comparison of two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric Brain MRI.
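
    The coefficient of variation reported in this record is straightforward to compute; the sketch below shows the standard formula, with sample values invented purely for illustration.

    ```python
    # Coefficient of variation (percent) of repeated feature measurements.
    import numpy as np

    def coefficient_of_variation(values):
        values = np.asarray(values, dtype=float)
        return 100.0 * values.std(ddof=1) / values.mean()   # percent

    # e.g. one imaging feature measured over repeat segmentations
    print(coefficient_of_variation([0.51, 0.54, 0.49, 0.52]))
    ```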

  19. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    International Nuclear Information System (INIS)

    Lee, M; Woo, B; Kim, J; Jamshidi, N; Kuo, M

    2015-01-01

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficient of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, and the grow cut method ranged from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficient of variation for especially important features which were previously reported as predictive of patient survival were: 3.4% with deformable model and 7.4% with grow cut method for the proportion of contrast enhanced tumor region; 5.5% with deformable model and 25.7% with grow cut method for the proportion of necrosis; and 2.1% with deformable model and 4.4% with grow cut method for edge sharpness of tumor on CE-T1W1. Conclusion: Comparison of two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric Brain MRI

  20. Introduction to the Multifractal Analysis of Images

    OpenAIRE

    Lévy Véhel, Jacques

    1998-01-01

    After a brief review of some classical approaches in image segmentation, the basics of multifractal theory and its application to image analysis are presented. Practical methods for multifractal spectrum estimation are discussed and some experimental results are given.

  1. Tolerance analysis through computational imaging simulations

    Science.gov (United States)

    Birch, Gabriel C.; LaCasse, Charles F.; Stubbs, Jaclynn J.; Dagel, Amber L.; Bradley, Jon

    2017-11-01

    The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.

  2. ImageParser: a tool for finite element generation from three-dimensional medical images

    Directory of Open Access Journals (Sweden)

    Yamada T

    2004-10-01

    Background: The finite element method (FEM) is a powerful mathematical tool to simulate and visualize the mechanical deformation of tissues and organs during medical examinations or interventions. It is yet a challenge to build up an FEM mesh directly from a volumetric image, partially because the regions (or structures) of interest (ROIs) may be irregular and fuzzy. Methods: A software package, ImageParser, is developed to generate an FEM mesh from 3-D tomographic medical images. This software uses a semi-automatic method to detect ROIs from the context of the image, including neighboring tissues and organs, completes segmentation of different tissues, and meshes the organ into elements. Results: The ImageParser is shown to build up an FEM model for simulating the mechanical responses of the breast based on 3-D CT images. The breast is compressed by two plate paddles under an overall displacement as large as 20% of the initial distance between the paddles. The strain and tangential Young's modulus distributions are specified for the biomechanical analysis of breast tissues. Conclusion: The ImageParser can successfully extract the geometry of ROIs from a complex medical image and generate the FEM mesh with customer-defined segmentation information.

  3. Similarity analysis between quantum images

    Science.gov (United States)

    Zhou, Ri-Gui; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-06-01

    Similarity analysis between quantum images is essential in quantum image processing, providing a foundation for other fields such as quantum image matching and quantum pattern recognition. In this paper, a quantum scheme based on a novel quantum image representation and the quantum amplitude amplification algorithm is proposed. At the end of the paper, three examples and simulation experiments show that the measurement result must be 0 when the two images are the same, and has a high probability of being 1 when the two images are different.

  4. Image registration with uncertainty analysis

    Science.gov (United States)

    Simonson, Katherine M [Cedar Crest, NM

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
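
    A minimal sketch of the edge-matching search described in this record; the gradient-percentile edge detector and the search radius are assumptions standing in for whatever detector and search region the patented method uses.

    ```python
    # Score translations by the percentage of second-image edge pixels that
    # coincide with (shifted) first-image edges; keep the best-scoring shift.
    import numpy as np

    def edge_map(img, percentile=90):
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        return mag > np.percentile(mag, percentile)      # binary edge image

    def best_translation(edges_first, edges_second, search=10):
        best, best_score = (0, 0), -1.0
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                shifted = np.roll(np.roll(edges_first, dy, axis=0), dx, axis=1)
                score = (edges_second & shifted).sum() / max(edges_second.sum(), 1)
                if score > best_score:
                    best, best_score = (dy, dx), score
        return best, best_score                          # best registration point
    ```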

  5. Microscopic validation of whole mouse micro-metastatic tumor imaging agents using cryo-imaging and sliding organ image registration

    Science.gov (United States)

    Liu, Yiqiao; Zhou, Bo; Qutaish, Mohammed; Wilson, David L.

    2016-03-01

    We created a metastasis imaging and analysis platform, consisting of software and a multi-spectral cryo-imaging system, suitable for evaluating emerging imaging agents targeting micro-metastatic tumors. We analyzed CREKA-Gd in MRI, followed by cryo-imaging, which repeatedly sectioned and tiled microscope images of the tissue block face, providing anatomical bright field and molecular fluorescence and enabling 3D microscopic imaging of the entire mouse with single metastatic cell sensitivity. To register MRI volumes to the cryo bright field reference, we used our standard mutual-information, non-rigid registration, which proceeded: preprocess --> affine --> B-spline non-rigid 3D registration. In this report, we created two modified approaches: mask, where we registered locally over a smaller rectangular solid, and sliding organ. Briefly, in sliding organ, we segmented the organ, registered the organ and body volumes separately, and combined the results. Though sliding organ required manual annotation, it provided the best result, serving as a standard against which to measure the other registration methods. Regularization parameters for the standard and mask methods were optimized in a grid search. Evaluations consisted of Dice overlap and visual scoring of a checkerboard display. The standard method was accurate to within 2 voxels in all regions except near the kidney, where there was 5 voxels of sliding. After mask and sliding organ correction, kidney sliding was within 2 voxels, and Dice overlap increased 4%-10% in mask compared to standard. Mask generated results comparable with sliding organ and allowed a semi-automatic process.
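
    The Dice overlap used for evaluation above reduces to a one-liner; the mask names are illustrative.

    ```python
    # Dice overlap: 2|A∩B| / (|A| + |B|); 1.0 means identical masks.
    import numpy as np

    def dice(mask_a, mask_b):
        a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
        return 2.0 * (a & b).sum() / max(a.sum() + b.sum(), 1)
    ```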

  6. Transfer function analysis of radiographic imaging systems

    International Nuclear Information System (INIS)

    Metz, C.E.; Doi, K.

    1979-01-01

    The theoretical and experimental aspects of the techniques of transfer function analysis used in radiographic imaging systems are reviewed. The mathematical principles of transfer function analysis are developed for linear, shift-invariant imaging systems, for the relation between object and image and for the image due to a sinusoidal plane wave object. The other basic mathematical principle discussed is 'Fourier analysis' and its application to an input function. Other aspects of transfer function analysis included are alternative expressions for the 'optical transfer function' of imaging systems and expressions are derived for both serial and parallel transfer image sub-systems. The applications of transfer function analysis to radiographic imaging systems are discussed in relation to the linearisation of the radiographic imaging system, the object, the geometrical unsharpness, the screen-film system unsharpness, other unsharpness effects and finally noise analysis. It is concluded that extensive theoretical, computer simulation and experimental studies have demonstrated that the techniques of transfer function analysis provide an accurate and reliable means for predicting and understanding the effects of various radiographic imaging system components in most practical diagnostic medical imaging situations. (U.K.)
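
    For a serial chain of linear, shift-invariant subsystems, the cascade rule this record refers to is that component MTFs multiply at each spatial frequency; the Gaussian component shapes and cut-off values below are illustrative assumptions, not values from the paper.

    ```python
    # Serial cascade: system MTF is the product of component MTFs.
    import numpy as np

    freqs = np.linspace(0.0, 5.0, 51)             # spatial frequency, cycles/mm
    mtf_focal_spot = np.exp(-(freqs / 4.0) ** 2)  # assumed component MTFs
    mtf_screen_film = np.exp(-(freqs / 2.5) ** 2)
    mtf_system = mtf_focal_spot * mtf_screen_film # combined system response
    print(mtf_system[:5])
    ```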

  7. Automated object-based classification of rain-induced landslides with VHR multispectral images in Madeira Island

    Science.gov (United States)

    Heleno, S.; Matias, M.; Pina, P.; Sousa, A. J.

    2015-09-01

    A method for semi-automatic landslide detection, with the ability to separate source and run-out areas, is presented in this paper. It combines object-based image analysis and a Support Vector Machine classifier on a GeoEye-1 multispectral image, sensed 3 days after the major damaging landslide event that occurred on Madeira Island (20 February 2010), with a pre-event LIDAR Digital Elevation Model. The testing is developed in a 15 km² study area, where 95% of the landslide scars are detected by this supervised approach. The classifier presents a good performance in the delineation of the overall landslide area. In addition, fair results are achieved in the separation of the source from the run-out landslide areas, although on less illuminated slopes this discrimination is less effective than on sunnier, east-facing slopes.

  8. Microscopy image segmentation tool: Robust image data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Valmianski, Ilya, E-mail: ivalmian@ucsd.edu; Monton, Carlos; Schuller, Ivan K. [Department of Physics and Center for Advanced Nanoscience, University of California San Diego, 9500 Gilman Drive, La Jolla, California 92093 (United States)

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  9. Microscopy image segmentation tool: Robust image data analysis

    Science.gov (United States)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  10. Microscopy image segmentation tool: Robust image data analysis

    International Nuclear Information System (INIS)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-01-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy

  11. Information granules in image histogram analysis.

    Science.gov (United States)

    Wieclawek, Wojciech

    2018-04-01

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this term in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). As the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensity and is controlled by two parameters. Performance is tested on anonymous clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    Science.gov (United States)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling, and through simulation to predict the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn in developing a large-scale, detailed simulation for the analysis and design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling, namely the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at the appropriate level of fidelity, require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's use of modeling and simulation in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment that lets programmers easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model should also be constructed for evaluation of this technology's use over the next two to three years.

  13. System for objective assessment of image differences in digital cinema

    Science.gov (United States)

    Fliegel, Karel; Krasula, Lukáš; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek

    2014-09-01

    There is high demand for quick digitization and subsequent image restoration of archived film records. Digitization is very urgent in many cases because various invaluable pieces of cultural heritage are stored on aging media. Only selected records can be reconstructed perfectly using painstaking manual or semi-automatic procedures. This paper aims to determine the quality requirements that the restoration process must meet in order for the digitally restored film to be perceived as acceptably close to the original analog film copy. This knowledge is very important for preserving the original artistic intention of the movie producers. A subjective experiment with artificially distorted images was conducted to assess the visual impact of common image distortions in digital cinema. Typical color and contrast distortions were introduced and test images were presented to viewers using a digital projector. Based on the outcome of this subjective evaluation, a system for objective assessment of image distortions has been developed and its performance tested. The system utilizes a calibrated digital single-lens reflex camera and subsequent analysis of suitable features of images captured from the projection screen. The evaluation of captured image data has been optimized to predict differences between the reference and distorted images while achieving high correlation with the results of the subjective assessment. The system can be used to objectively determine the difference between analog film and digital cinema images on the projection screen.

  14. Analysis of 3-D images

    Science.gov (United States)

    Wani, M. Arif; Batchelor, Bruce G.

    1992-03-01

    Deriving a generalized representation of 3-D objects for analysis and recognition is a very difficult task. Three types of representations, chosen according to the type of object, are used in this paper. Objects which have well-defined geometrical shapes are segmented using a fast edge-region-based segmentation technique. The segmented image is represented by the plan and elevation of each part of the object if the object parts are symmetrical about their central axis. The plan-and-elevation concept enables such objects to be represented and analyzed quickly and efficiently. The second type of representation is used for objects having parts which are not symmetrical about their central axis. The segmented surface patches of such objects are represented by the 3-D boundary and the surface features of each segmented surface. Finally, the third type of representation is used for objects which do not have well-defined geometrical shapes (for example, a loaf of bread). These objects are represented and analyzed through features derived using a multiscale contour-based technique. An anisotropic Gaussian smoothing technique is introduced to segment the contours at various scales of smoothing. A new merging technique is used which yields the current best estimate of break points at each scale. This new technique eliminates the loss of localization accuracy at coarser scales without using a scale-space tracking approach.

  15. Performance testing of a semi-automatic card punch system, using direct STR profiling of DNA from blood samples on FTA™ cards.

    Science.gov (United States)

    Ogden, Samantha J; Horton, Jeffrey K; Stubbs, Simon L; Tatnell, Peter J

    2015-01-01

    The 1.2 mm Electric Coring Tool (e-Core™) was developed to increase the throughput of FTA™ sample collection cards used during forensic workflows and is similar to a 1.2 mm Harris manual micro-punch for sampling dried blood spots. Direct short tandem repeat (STR) DNA profiling was used to compare samples taken by the e-Core tool with those taken by the manual micro-punch. The performance of the e-Core device was evaluated using a commercially available PowerPlex™ 18D STR System. In addition, an analysis was performed that investigated the potential carryover of DNA via the e-Core punch from one FTA disc to another. This contamination study was carried out using Applied Biosystems AmpFlSTR™ Identifiler™ Direct PCR Amplification kits. The e-Core instrument does not contaminate FTA discs when a cleaning punch is used following excision of discs containing samples, and generates STR profiles that are comparable to those generated by the manual micro-punch. © 2014 American Academy of Forensic Sciences.

  16. Applications of stochastic geometry in image analysis

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Kendall, W.S.; Molchanov, I.S.

    2009-01-01

    A discussion is given of various stochastic geometry models (random fields, sequential object processes, polygonal field models) which can be used in intermediate and high-level image analysis. Two examples are presented of actual image analysis problems (motion tracking in video,

  17. Solar Image Analysis and Visualization

    CERN Document Server

    Ireland, J

    2009-01-01

    This volume presents a selection of papers on the state of the art of image enhancement, automated feature detection, machine learning, and visualization tools in support of solar physics that focus on the challenges presented by new ground-based and space-based instrumentation. The articles and topics were inspired by the Third Solar Image Processing Workshop, held at Trinity College Dublin, Ireland, but contributions from other experts have been included as well. This book is mainly aimed at researchers and graduate students working on image processing and computer vision in astronomy and solar physics.

  18. Multi-Source Image Analysis.

    Science.gov (United States)

    1979-12-01

    These collections were taken to show the advantages made available to the interpreter. In a military operation, however, often little or no in-situ ...The large body of water labeled "W" on each image represents the Agua Hedionda lagoon. East of the lagoon the area is primarily agricultural with a...power plant located in the southeast corner of the image. West of the Agua Hedionda lagoon is Carlsbad, California. Damp ground is labelled "Dg" on the

  19. Semi-automatic aircraft control system

    Science.gov (United States)

    Gilson, Richard D. (Inventor)

    1978-01-01

    A flight control type system which provides a tactile readout to the hand of a pilot for directing elevator control during both approach to flare-out and departure maneuvers. For altitudes above flare-out, the system sums the instantaneous coefficient of lift signals of a lift transducer with a generated signal representing ideal coefficient of lift for approach to flare-out, i.e., a value of about 30% below stall. Error signals resulting from the summation are read out by the noted tactile device. Below flare altitude, an altitude responsive variation is summed with the signal representing ideal coefficient of lift to provide error signal readout.

  20. Objective analysis of image quality of video image capture systems

    Science.gov (United States)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images have been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast™ image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give
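
    As a hedged illustration of the slew-rate test described above, this sketch (with invented sizes and a simple min/max contrast score; it is not the original test-image specification) generates the strip-plus-line pattern and measures how much line modulation survives capture.

      import numpy as np

      def line_pair_pattern(height=256, width=64, strip=10):
          # Ten-pixel black and white equilibration strips, then alternating
          # one-pixel black/white lines down the rest of each column.
          pairs = (height - 2 * strip) // 2
          column = np.concatenate([np.zeros(strip), np.ones(strip),
                                   np.tile([0.0, 1.0], pairs)])
          return np.tile(column[:, None], (1, width))

      def line_modulation(captured, strip=10):
          # Near 1.0 when the capture system resolves the single-pixel lines,
          # near 0.0 when a slow slew rate blurs them to an average gray.
          lines = captured[2 * strip:, :]
          return float(lines.max() - lines.min())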

  1. Advanced spot quality analysis in two-colour microarray experiments

    Directory of Open Access Journals (Sweden)

    Vetter Guillaume

    2008-09-01

    Full Text Available Abstract Background Image analysis of microarrays and, in particular, spot quantification and spot quality control, is one of the most important steps in statistical analysis of microarray data. Recent methods of spot quality control are still at an early stage of development, often leading to underestimation of true positive microarray features and, consequently, to loss of important biological information. Therefore, improving and standardizing the statistical approaches of spot quality control is essential to facilitate the overall analysis of microarray data and subsequent extraction of biological information. Findings We evaluated the performance of two image analysis packages, MAIA and GenePix (GP), using two complementary experimental approaches with a focus on the statistical analysis of spot quality factors. First, we developed control microarrays with a priori known fluorescence ratios to verify the accuracy and precision of the ratio estimation of signal intensities. Next, we developed advanced semi-automatic protocols of spot quality evaluation in MAIA and GP and compared their performance with available facilities of spot quantitative filtering in GP. We evaluated these algorithms for standardised spot quality analysis in a whole-genome microarray experiment assessing well-characterised transcriptional modifications induced by the transcription regulator SNAI1. Using a set of RT-PCR or qRT-PCR validated microarray data, we found that the semi-automatic protocol of spot quality control we developed with MAIA allowed recovering approximately 13% more spots and 38% more differentially expressed genes (at FDR = 5%) than GP with default spot filtering conditions. Conclusion Careful control of spot quality characteristics with advanced spot quality evaluation can significantly increase the amount of confident and accurate data, resulting in more meaningful biological conclusions.

  2. Forensic Analysis of Digital Image Tampering

    Science.gov (United States)

    2004-12-01

    analysis of when each method fails, which Chapter 4 discusses. Finally, a test image containing an invisible watermark embedded with LSB steganography is ... The software used to embed the hidden watermark is Steganography Software F5 version 11+, discussed in Section 2.2. (Figure examples in the source include an invisible watermark embedded with Steganography Software F5 and a copy-move image forgery [12].)

  3. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more of the truth about plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the perspective of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated using scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations, and the difference between imaging analysis methods based on geometric optics and physical optics is also shown in simulations. (paper)

  4. Breast cancer histopathology image analysis : a review

    NARCIS (Netherlands)

    Veta, M.; Pluim, J.P.W.; Diest, van P.J.; Viergever, M.A.

    2014-01-01

    This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology slide digitization.

  5. Multiplicative calculus in biomedical image analysis

    NARCIS (Netherlands)

    Florack, L.M.J.; Assen, van H.C.

    2011-01-01

    We advocate the use of an alternative calculus in biomedical image analysis, known as multiplicative (a.k.a. non-Newtonian) calculus. It provides a natural framework for problems in which positive images or positive definite matrix fields and positivity-preserving operators are of interest. Indeed,

  6. Image analysis in x-ray cinefluorography

    Energy Technology Data Exchange (ETDEWEB)

    Ikuse, J; Yasuhara, H; Sugimoto, H [Toshiba Corp., Kawasaki, Kanagawa (Japan)

    1979-02-01

    For the cinefluorographic image in the cardiovascular diagnostic system, the image quality is evaluated by means of the MTF (Modulation Transfer Function), and object contrast is evaluated by introducing the concept of X-ray spectrum analysis. On the basis of these results, further investigation is made of the optimum X-ray exposure factors for cinefluorography in the cardiovascular diagnostic system.

  7. An Imaging And Graphics Workstation For Image Sequence Analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  8. Pseudo colour visualization of fused multispectral laser scattering images for optical diagnosis of rheumatoid arthritis

    Science.gov (United States)

    Zabarylo, U.; Minet, O.

    2010-01-01

    Investigations into the application of optical procedures for the diagnosis of rheumatism using scattered-light images are only at the beginning, both in terms of new image-processing methods and subsequent clinical application. For semi-automatic diagnosis using laser light, the multispectral scattered-light images are registered and overlaid into pseudo-coloured images, which depict diagnostically essential content by visually highlighting pathological changes.

  9. Facial Image Analysis in Anthropology: A Review

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2011-01-01

    Roč. 49, č. 2 (2011), s. 141-153 ISSN 0323-1119 Institutional support: RVO:67985807 Keywords: face * computer-assisted methods * template matching * geometric morphometrics * robust image analysis Subject RIV: IN - Informatics, Computer Science

  10. Optimization of shearography image quality analysis

    International Nuclear Information System (INIS)

    Rafhayudi Jamro

    2005-01-01

    Shearography is an optical technique based on speckle patterns for measuring the deformation of an object surface, in which the fringe pattern is obtained through correlation analysis of the speckle patterns. Analysis of the fringe pattern for engineering applications is limited to qualitative measurement. Therefore, for further analysis leading to quantitative data, a series of image processing mechanisms is involved. In this paper, the fringe pattern for qualitative analysis is discussed. The principal field of application is qualitative non-destructive testing, such as detecting discontinuities and defects in the material structure, locating fatigue zones, etc., all of which require image processing. In order to perform image optimisation successfully, the noise in the fringe pattern must be minimised and the fringe pattern itself must be maximised. This can be achieved by applying a filtering method with a kernel size ranging from 2 x 2 to 7 x 7 pixels and also applying an equalizer in the image processing. (Author)
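
    As a rough sketch of the enhancement chain described above (filtering with kernel sizes between 2 x 2 and 7 x 7, followed by an equalizer), the snippet below uses a median filter and histogram equalization; the original work does not name these exact operators, so treat the choices as assumptions.

      import numpy as np
      from scipy.ndimage import median_filter
      from skimage import exposure

      def enhance_fringe_pattern(fringe, kernel_size=5):
          # Suppress speckle noise with a median filter (kernel sizes of
          # 2 x 2 up to 7 x 7 are worth trying), then equalize the histogram
          # so the fringe pattern itself is maximised.
          smoothed = median_filter(np.asarray(fringe, dtype=float),
                                   size=kernel_size)
          return exposure.equalize_hist(smoothed)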

  11. Structural analysis in medical imaging

    International Nuclear Information System (INIS)

    Dellepiane, S.; Serpico, S.B.; Venzano, L.; Vernazza, G.

    1987-01-01

    The conventional techniques in Pattern Recognition (PR) have been greatly improved by the introduction of Artificial Intelligence (AI) approaches, in particular for knowledge representation, inference mechanism and control structure. The purpose of this paper is to describe an image understanding system, based on the integrated approach (AI - PR), developed in the author's Department to interpret Nuclear Magnetic Resonance (NMR) images. The system is characterized by a heterarchical control structure and a blackboard model for the global data-base. The major aspects of the system are pointed out, with particular reference to segmentation, knowledge representation and error recovery (backtracking). The eye slices obtained in the case of two patients have been analyzed and the related results are discussed

  12. MSCT follow-up in malignant lymphoma. Comparison of manual linear measurements with semi-automated lymph node analysis for therapy response classification

    International Nuclear Information System (INIS)

    Wessling, J.; Puesken, M.; Kohlhase, N.; Persigehl, T.; Mesters, R.; Heindel, W.; Buerke, B.; Koch, R.

    2012-01-01

    Purpose: Assessment of semi-automated lymph node analysis compared to manual measurements for therapy response classification of malignant lymphoma in MSCT. Materials and Methods: MSCT scans of 63 malignant lymphoma patients before and after 2 cycles of chemotherapy (307 target lymph nodes) were evaluated. The long axis diameter (LAD), short axis diameter (SAD) and bi-dimensional WHO were determined manually and semi-automatically. The time for manual and semi-automatic segmentation was evaluated. The reference standard response was defined as the mean relative change across all manual and semi-automatic measurements (mean manual/semi-automatic LAD, SAD, semi-automatic volume). Statistical analysis encompassed the t-test and McNemar's test for clustered data. Results: Response classification per lymph node revealed semi-automated volumetry and bi-dimensional WHO to be significantly more accurate than manual linear metric measurements. Response classification per patient based on RECIST revealed more patients to be correctly classified by semi-automatic measurements, e.g. 96.0 %/92.9 % (WHO bi-dimensional/volume) compared to 85.7 %/84.1 % for manual LAD and SAD, respectively (mean reduction in misclassified patients of 9.95 %). Considering the use of correction tools, the time expenditure for lymph node segmentation (29.7 ± 17.4 sec) was the same as with the manual approach (29.1 ± 14.5 sec). Conclusion: Semi-automatically derived 'lymph node volume' and 'bi-dimensional WHO' significantly reduce the number of misclassified patients in the CT follow-up of malignant lymphoma by at least 10 %. However, lymph node volumetry does not outperform bi-dimensional WHO. (orig.)

  13. MSCT follow-up in malignant lymphoma. Comparison of manual linear measurements with semi-automated lymph node analysis for therapy response classification

    Energy Technology Data Exchange (ETDEWEB)

    Wessling, J.; Puesken, M.; Kohlhase, N.; Persigehl, T.; Mesters, R.; Heindel, W.; Buerke, B. [Muenster Univ. (Germany). Dept. of Clinical Radiology; Koch, R. [Muenster Univ. (Germany). Inst. of Biostatistics and Clinical Research

    2012-09-15

    Purpose: Assessment of semi-automated lymph node analysis compared to manual measurements for therapy response classification of malignant lymphoma in MSCT. Materials and Methods: MSCT scans of 63 malignant lymphoma patients before and after 2 cycles of chemotherapy (307 target lymph nodes) were evaluated. The long axis diameter (LAD), short axis diameter (SAD) and bi-dimensional WHO were determined manually and semi-automatically. The time for manual and semi-automatic segmentation was evaluated. The reference standard response was defined as the mean relative change across all manual and semi-automatic measurements (mean manual/semi-automatic LAD, SAD, semi-automatic volume). Statistical analysis encompassed the t-test and McNemar's test for clustered data. Results: Response classification per lymph node revealed semi-automated volumetry and bi-dimensional WHO to be significantly more accurate than manual linear metric measurements. Response classification per patient based on RECIST revealed more patients to be correctly classified by semi-automatic measurements, e.g. 96.0 %/92.9 % (WHO bi-dimensional/volume) compared to 85.7 %/84.1 % for manual LAD and SAD, respectively (mean reduction in misclassified patients of 9.95 %). Considering the use of correction tools, the time expenditure for lymph node segmentation (29.7 ± 17.4 sec) was the same as with the manual approach (29.1 ± 14.5 sec). Conclusion: Semi-automatically derived 'lymph node volume' and 'bi-dimensional WHO' significantly reduce the number of misclassified patients in the CT follow-up of malignant lymphoma by at least 10 %. However, lymph node volumetry does not outperform bi-dimensional WHO. (orig.)

  14. Malware Analysis Using Visualized Image Matrices

    Directory of Open Access Journals (Sweden)

    KyoungSoo Han

    2014-01-01

    Full Text Available This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are applicable to packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples, and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively.

  15. Malware analysis using visualized image matrices.

    Science.gov (United States)

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are applicable to packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples, and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively.
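
    A minimal sketch of the idea described in these two records: opcode subsequences become colored pixels on a fixed-size matrix, and similarity is measured over the resulting images. The hash-based coordinate/color mapping and the pixel-match similarity below are invented stand-ins, not the paper's exact scheme.

      import hashlib
      import numpy as np

      def opcode_image_matrix(opcodes, width=32):
          # Map each opcode 3-gram to a pixel position and an RGB color taken
          # from a hash digest (an illustrative mapping, not the paper's own).
          matrix = np.zeros((width, width, 3), dtype=np.uint8)
          for i in range(len(opcodes) - 2):
              digest = hashlib.md5(" ".join(opcodes[i:i + 3]).encode()).digest()
              x, y = digest[0] % width, digest[1] % width
              matrix[y, x] = list(digest[2:5])   # RGB from three hash bytes
          return matrix

      def matrix_similarity(a, b):
          # Similarity as the fraction of pixel positions with identical colors.
          return float(np.mean(np.all(a == b, axis=-1)))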

  16. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
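
    To make the idea concrete, here is a toy example (not from the book) of an ANOVA-style test for structure in a noisy image window: each row is treated as a group, and the one-way F statistic indicates whether between-row variation dominates the pixel noise.

      import numpy as np
      from scipy.stats import f_oneway

      def row_effect(window):
          # One-way ANOVA with each image row as a group; a large F statistic
          # means between-row variation dominates the pixel noise, hinting at
          # a horizontal line or edge crossing the window.
          rows = list(np.asarray(window, dtype=float))
          return f_oneway(*rows)   # returns (F statistic, p-value)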

  17. Image processing and analysis software development

    International Nuclear Information System (INIS)

    Shahnaz, R.

    1999-01-01

    The work presented in this project is aimed at developing a software package, 'IMAGE GALLERY', to investigate various image processing and analysis techniques. The work was divided into two parts, namely image processing techniques and pattern recognition, the latter comprising character and face recognition. Various image enhancement techniques including negative imaging, contrast stretching, dynamic range compression, neon, diffuse, emboss etc. have been studied. Segmentation techniques including point detection, line detection and edge detection have been studied. Also some of the smoothing and sharpening filters have been investigated. All these imaging techniques have been implemented in a window-based computer program written in Visual Basic. Neural network techniques based on the Perceptron model have been applied for face and character recognition. (author)

  18. From Digital Imaging to Computer Image Analysis of Fine Art

    Science.gov (United States)

    Stork, David G.

    An expanding range of techniques from computer vision, pattern recognition, image analysis, and computer graphics are being applied to problems in the history of art. The success of these efforts is enabled by the growing corpus of high-resolution multi-spectral digital images of art (primarily paintings and drawings), sophisticated computer vision methods, and most importantly the engagement of some art scholars who bring questions that may be addressed through computer methods. This paper outlines some general problem areas and opportunities in this new inter-disciplinary research program.

  19. Breast cancer histopathology image analysis: a review.

    Science.gov (United States)

    Veta, Mitko; Pluim, Josien P W; van Diest, Paul J; Viergever, Max A

    2014-05-01

    This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology slide digitization, and which aim at replacing the optical microscope as the primary tool used by pathologists. Breast cancer is the most prevalent form of cancer among women, and image analysis methods that target this disease have a huge potential to reduce the workload in a typical pathology lab and to improve the quality of the interpretation. This paper is meant as an introduction for nonexperts. It starts with an overview of the tissue preparation, staining and slide digitization processes, followed by a discussion of the different image processing techniques and applications, ranging from analysis of tissue staining to computer-aided diagnosis and prognosis of breast cancer patients.

  20. Some developments in multivariate image analysis

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    Multivariate image analysis (MIA), one of the successful chemometric applications, is now used widely in different areas of science and industry. Introduced in the late 80s, it became very popular with hyperspectral imaging, where MIA is one of the most efficient tools for exploratory analysis and the number of pixels can be up to several million. The main MIA tool for exploratory analysis is the score density plot – all pixels are projected into principal component space and the corresponding score plots are colorized according to their density (how many pixels are crowded in the unit area of the plot). Looking for and analyzing patterns on these plots and the original image allows interactive analysis, extraction of hidden information, building a supervised classification model, and much more. In the present work several alternative methods to the original principal component analysis (PCA) for building the projection ...

  1. Document image analysis: A primer

    Indian Academy of Sciences (India)


    (1) Typical documents in today's office are computer-generated, but even so, inevitably by different computers and ... different sizes, from a business card to a large engineering drawing. Document analysis ... Whether global or adaptive ...

  2. Development of High Speed Imaging and Analysis Techniques Compressible Dynamics Stall

    Science.gov (United States)

    Chandrasekhara, M. S.; Carr, L. W.; Wilder, M. C.; Davis, Sanford S. (Technical Monitor)

    1996-01-01

    parameters on the dynamic stall process. When interferograms can be captured in real time, the potential for real-time mapping of a developing unsteady flow such as dynamic stall becomes a possibility. This has been achieved in the present case through the use of a high-speed drum camera combined with electronic circuitry which has resulted in a series of interferograms obtained during a single cycle of dynamic stall; images obtained at the rate of 20 kHz will be presented as a part of the formal presentation. Interferometry has been available for a long time; however, most of its use has been limited to visualization. The present research has focused on the use of interferograms for quantitative mapping of the flow over oscillating airfoils. Instantaneous pressure distributions can now be obtained semi-automatically, making practical the analysis of the thousands of interferograms that are produced in this research. A review of the techniques that have been developed as part of this research effort will be presented in the final paper.

  3. Traffic analysis and control using image processing

    Science.gov (United States)

    Senthilkumar, K.; Ellappan, Vijayan; Arun, A. R.

    2017-11-01

    This paper reviews work on traffic analysis and control to date and presents an approach to regulating traffic using image processing and MATLAB. The concept uses computed images that are compared with reference images of the street in order to determine the traffic level percentage and set the timing of the traffic signal accordingly, which reduces stoppage at traffic lights. The concept proposes to solve real-life scenarios in the streets by enriching the traffic lights with image receivers such as HD cameras and image processors. The input is then imported into MATLAB to be used as a method for calculating the traffic on roads. The results are computed in order to adjust the traffic light timings on a particular street, in line with other similar proposals but with the added value of solving a real, large instance.
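
    The record describes a MATLAB workflow; the sketch below restates the core comparison step in Python, assuming the traffic level is the fraction of pixels that differ markedly from a reference image of the empty street. The threshold and timing constants are invented for illustration.

      import numpy as np

      def traffic_level_percent(reference_gray, current_gray, threshold=30):
          # Fraction of pixels that differ markedly from a reference image of
          # the empty street (both 8-bit grayscale arrays of the same shape).
          diff = np.abs(current_gray.astype(int) - reference_gray.astype(int))
          return 100.0 * float((diff > threshold).mean())

      def green_time_seconds(level_percent, base=10.0, per_percent=0.5, cap=60.0):
          # Map the traffic level to a green-signal duration (invented numbers).
          return min(base + per_percent * level_percent, cap)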

  4. Development of Image Analysis Software of MAXI

    Science.gov (United States)

    Eguchi, S.; Ueda, Y.; Hiroi, K.; Isobe, N.; Sugizaki, M.; Suzuki, M.; Tomida, H.; Maxi Team

    2010-12-01

    Monitor of All-sky X-ray Image (MAXI) is an X-ray all-sky monitor, attached to the Japanese experiment module Kibo on the International Space Station. The main scientific goals of the MAXI mission include the discovery of X-ray novae followed by prompt alerts to the community (Negoro et al., in this conference), and production of X-ray all-sky maps and new source catalogs with unprecedented sensitivities. To extract the best capabilities of the MAXI mission, we are working on the development of detailed image analysis tools. We utilize maximum likelihood fitting to a projected sky image, where we take account of the complicated detector responses, such as the background and point spread functions (PSFs). The modeling of PSFs, which strongly depend on the orbit and attitude of MAXI, is a key element in the image analysis. In this paper, we present the status of our software development.
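
    A heavily simplified sketch of the maximum-likelihood fitting mentioned above, assuming Poisson counts, a known normalized PSF image and a known background image; the real MAXI analysis folds in orbit- and attitude-dependent detector responses that are not modeled here.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def fit_source_flux(counts, psf, background):
          # Maximum-likelihood flux of a point source in a counts image, given
          # a normalized PSF image and an expected background image, assuming
          # independent Poisson statistics in every pixel.
          def neg_log_like(flux):
              model = flux * psf + background
              return float(np.sum(model - counts * np.log(model + 1e-30)))
          res = minimize_scalar(neg_log_like,
                                bounds=(0.0, float(counts.sum())),
                                method="bounded")
          return res.x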

  5. Digital image analysis of NDT radiographs

    International Nuclear Information System (INIS)

    Graeme, W.A. Jr.; Eizember, A.C.; Douglass, J.

    1989-01-01

    Prior to the introduction of Charge Coupled Device (CCD) detectors, the majority of image analysis performed on NDT radiographic images was done visually in the analog domain. While some film digitization was being performed, the process was often unable to capture all the usable information on the radiograph or was too time consuming. CCD technology now provides a timely method to digitize radiographic film images without losing the useful information captured in the original radiograph. Incorporating that technology into a complete digital radiographic workstation allows analog radiographic information to be processed, providing additional information to the radiographer. Once in the digital domain, that data can be stored, and fused with radioscopic and other forms of digital data. The result is more productive analysis and management of radiographic inspection data. The principal function of the NDT Scan IV digital radiography system is the digitization, enhancement and storage of radiographic images.

  6. Mathematical foundations of image processing and analysis

    CERN Document Server

    Pinoli, Jean-Charles

    2014-01-01

    Mathematical Imaging is currently a rapidly growing field in applied mathematics, with an increasing need for theoretical mathematics. This book, the second of two volumes, emphasizes the role of mathematics as a rigorous basis for imaging sciences. It provides a comprehensive and convenient overview of the key mathematical concepts, notions, tools and frameworks involved in the various fields of gray-tone and binary image processing and analysis, by proposing a large, but coherent, set of symbols and notations, a complete list of subjects and a detailed bibliography. It establishes a bridge ...

  7. Chromatic Image Analysis For Quantitative Thermal Mapping

    Science.gov (United States)

    Buck, Gregory M.

    1995-01-01

    Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.
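
    A minimal sketch of the two-wavelength principle described above: temperature is recovered from the ratio of brightness in the two spectral images through a calibration function. The linear calibration is a made-up example; a real system would use a measured lookup.

      import numpy as np

      def temperature_map(image_w1, image_w2, calibration):
          # Temperature from the relative brightness of the phosphor emission
          # at two wavelengths; `calibration` converts the intensity ratio to
          # temperature and would come from a measured lookup.
          ratio = image_w1 / np.maximum(image_w2, 1e-6)
          return calibration(ratio)

      def linear_cal(ratio):
          # Hypothetical calibration T = a * ratio + b; coefficients invented.
          return 180.0 * ratio + 290.0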

  8. Laue image analysis. Pt. 2

    International Nuclear Information System (INIS)

    Greenhough, T.J.; Shrive, A.K.

    1994-01-01

    Many Laue diffraction patterns from crystals of particular biological or chemical interest are of insufficient quality for their analysis to be feasible. In many cases, this is because of pronounced streaking of the spots owing to either large mosaic spread or disorder introduced during reactions in the crystal. Methods for the analysis of exposures exhibiting radial or near-radial streaking are described, along with their application in Laue diffraction studies of form-II crystals of Met-tRNA synthetase and a photosynthetic reaction centre from Rhodobacter sphaeroides. In both cases, variable elliptical radial masking has led to significant improvements in data quality and quantity and exposures that previously were too streaked to process may now be analysed. These masks can also provide circular profiles as a special case for processing high-quality Laue exposures and spatial-overlap deconvolution may be performed using the elliptical or circular masks. (orig.)

  9. Multisource Images Analysis Using Collaborative Clustering

    Directory of Open Access Journals (Sweden)

    Pierre Gançarski

    2008-04-01

    Full Text Available The development of very high-resolution (VHR) satellite imagery has produced a huge amount of data. The multiplication of satellites which embed different types of sensors provides a lot of heterogeneous images. Consequently, the image analyst often has many different images available, representing the same area of the Earth's surface. These images can be from different dates, produced by different sensors, or even at different resolutions. The lack of machine learning tools using all these representations in an overall process constrains the analyst to a sequential analysis of these various images. In order to use all the information available simultaneously, we propose a framework where different algorithms can use different views of the scene. Each one works on a different remotely sensed image and thus produces different and useful information. These algorithms work together in a collaborative way through an automatic and mutual refinement of their results, so that all the results have almost the same number of clusters, which are statistically similar. Finally, a unique result is produced, representing a consensus among the information obtained by each clustering method on its own image. The unified result and the complementarity of the single results (i.e., the agreement between the clustering methods as well as the disagreement) lead to a better understanding of the scene. The experiments carried out on multispectral remote sensing images have shown that this method is efficient for extracting relevant information and improving scene understanding.

  10. RootAnalyzer: A Cross-Section Image Analysis Tool for Automated Characterization of Root Cells and Tissues.

    Directory of Open Access Journals (Sweden)

    Joshua Chopin

    Full Text Available The morphology of plant root anatomical features is a key factor in effective water and nutrient uptake. Existing techniques for phenotyping root anatomical traits are often based on manual or semi-automatic segmentation and annotation of microscopic images of root cross sections. In this article, we propose a fully automated tool, hereinafter referred to as RootAnalyzer, for efficiently extracting and analyzing anatomical traits from root cross-section images. Using a range of image processing techniques such as local thresholding and nearest-neighbor identification, RootAnalyzer segments the plant root from the image's background, classifies and characterizes the cortex, stele, endodermis and epidermis, and subsequently produces statistics about the morphological properties of the root cells and tissues. We use RootAnalyzer to analyze 15 images of wheat plants and one maize plant image and evaluate its performance against manually-obtained ground truth data. The comparison shows that RootAnalyzer can fully characterize most root tissue regions with over 90% accuracy.
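
    As an illustration of the local-thresholding step named in the abstract (a sketch only; RootAnalyzer's actual parameters and classification rules are richer), a pixel can be marked as tissue when it is darker than its neighborhood mean by some offset:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_threshold(image, window=51, offset=0.02):
          # Mark a pixel as root tissue when it is darker than its local
          # neighborhood mean by at least `offset` (image as float in [0, 1]).
          local_mean = uniform_filter(np.asarray(image, dtype=float),
                                      size=window)
          return image < (local_mean - offset)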

  11. Biologically inspired EM image alignment and neural reconstruction.

    Science.gov (United States)

    Knowles-Barley, Seymour; Butcher, Nancy J; Meinertzhagen, Ian A; Armstrong, J Douglas

    2011-08-15

    Three-dimensional reconstruction of consecutive serial-section transmission electron microscopy (ssTEM) images of neural tissue currently requires many hours of manual tracing and annotation. Several computational techniques have already been applied to ssTEM images to facilitate 3D reconstruction and ease this burden. Here, we present an alternative computational approach for ssTEM image analysis. We have used biologically inspired receptive fields as a basis for a ridge detection algorithm to identify cell membranes, synaptic contacts and mitochondria. Detected line segments are used to improve alignment between consecutive images and we have joined small segments of membrane into cell surfaces using a dynamic programming algorithm similar to the Needleman-Wunsch and Smith-Waterman DNA sequence alignment procedures. A shortest path-based approach has been used to close edges and achieve image segmentation. Partial reconstructions were automatically generated and used as a basis for semi-automatic reconstruction of neural tissue. The accuracy of partial reconstructions was evaluated and 96% of membrane could be identified at the cost of 13% false positive detections. An open-source reference implementation is available in the Supplementary information. Supplementary data are available at Bioinformatics online.

  12. Applications Of Binary Image Analysis Techniques

    Science.gov (United States)

    Tropf, H.; Enderle, E.; Kammerer, H. P.

    1983-10-01

    After discussing the conditions where binary image analysis techniques can be used, three new applications of the fast binary image analysis system S.A.M. (Sensorsystem for Automation and Measurement) are reported: (1) The human view direction is measured at TV frame rate while the subject's head is freely movable. (2) Industrial parts hanging on a moving conveyor are classified prior to spray painting by robot. (3) In automotive wheel assembly, the eccentricity of the wheel is minimized by turning the tyre relative to the rim in order to balance the eccentricity of the components.

  13. Fourier analysis: from cloaking to imaging

    Science.gov (United States)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent works applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constructive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and create illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.

  14. Fourier analysis: from cloaking to imaging

    International Nuclear Information System (INIS)

    Wu, Kedi; Ping Wang, Guo; Cheng, Qiluan

    2016-01-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent works applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constructive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and create illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers. (review)
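
    In sampled form, the "synthesizing different transfer functions" idea reduces to multiplication in the Fourier domain. The sketch below is only this generic filtering skeleton, with a trivial all-pass mask standing in for a synthesized cloaking transfer function.

      import numpy as np

      def apply_transfer_function(field, transfer):
          # Propagate a sampled 2-D optical field through a device described
          # by a transfer function defined on the FFT frequency grid.
          return np.fft.ifft2(np.fft.fft2(field) * transfer)

      # Toy stand-in for a synthesized cloaking transfer function: an all-pass
      # mask that leaves the field unchanged.
      identity_transfer = np.ones((256, 256), dtype=complex)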

  15. Quantitative Image Simulation and Analysis of Nanoparticles

    DEFF Research Database (Denmark)

    Madsen, Jacob; Hansen, Thomas Willum

    High Resolution Transmission Electron Microscopy (HRTEM) has become a routine analysis tool for structural characterization at atomic resolution, and with the recent development of in-situ TEMs, it is now possible to study catalytic nanoparticles under reaction conditions. However, the connection between an experimental image and the underlying physical phenomena or structure is not always straightforward. The aim of this thesis is to use image simulation to better understand observations from HRTEM images. Surface strain is known to be important for the performance of nanoparticles. Using simulation, we estimate the precision and accuracy of strain measurements from TEM images, and investigate the stability of these measurements to microscope parameters. This is followed by our efforts toward simulating metal nanoparticles on a metal-oxide support using the Charge Optimized Many Body (COMB) interatomic potential. The simulated interface ...

  16. Hyperspectral Image Analysis of Food Quality

    DEFF Research Database (Denmark)

    Arngren, Morten

    Assessing the quality of food is a vital step in any food processing line to ensure the best food quality and maximum profit for the farmer and food manufacturer. Traditional quality evaluation methods are often destructive and labour-intensive procedures relying on wet chemistry or subjective human inspection. Near-infrared spectroscopy can address these issues by offering a fast and objective analysis of the food quality. A natural extension to these single-spectrum NIR systems is to include image information such that each pixel holds a NIR spectrum. This augmented image information offers several extensions to the analysis of food quality. This dissertation is concerned with hyperspectral image analysis used to assess the quality of single grain kernels. The focus is to highlight the benefits and challenges of using hyperspectral imaging for food quality, presented in two research directions. Initially ...

  17. Deep Learning in Medical Image Analysis.

    Science.gov (United States)

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-06-21

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

  18. Data Analysis Strategies in Medical Imaging.

    Science.gov (United States)

    Parmar, Chintan; Barry, Joseph D; Hosny, Ahmed; Quackenbush, John; Aerts, Hugo Jwl

    2018-03-26

    Radiographic imaging continues to be one of the most effective and clinically useful tools within oncology. Sophistication of artificial intelligence (AI) has allowed for detailed quantification of radiographic characteristics of tissues using predefined engineered algorithms or deep learning methods. Precedents in radiology as well as a wealth of research studies hint at the clinical relevance of these characteristics. However, there are critical challenges associated with the analysis of medical imaging data. While some of these challenges are specific to the imaging field, many others like reproducibility and batch effects are generic and have already been addressed in other quantitative fields such as genomics. Here, we identify these pitfalls and provide recommendations for analysis strategies of medical imaging data including data normalization, development of robust models, and rigorous statistical analyses. Adhering to these recommendations will not only improve analysis quality, but will also enhance precision medicine by allowing better integration of imaging data with other biomedical data sources. Copyright ©2018, American Association for Cancer Research.
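
    One of the recommended steps, data normalization, can be as simple as rescaling scan intensities to zero mean and unit variance inside a region of interest. The sketch below is a generic z-score normalization, not a method prescribed by the paper.

      import numpy as np

      def zscore_normalize(volume, mask=None):
          # Rescale scan intensities to zero mean and unit variance, optionally
          # restricted to a body or organ mask, to reduce scanner batch effects.
          values = volume[mask] if mask is not None else volume
          return (volume - values.mean()) / (values.std() + 1e-12)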

  19. Multispectral Image Analysis for Astaxanthin Coating Classification

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg; Ersbøll, Bjarne Kjær; Nielsen, Michael Engelbrecht

    2012-01-01

    Industrial quality inspection using image analysis on astaxanthin coating in aquaculture feed pellets is of great importance for automatic production control. The pellets were divided into two groups: one with pellets coated using synthetic astaxanthin in fish oil and the other with pellets coated...

  20. A virtual laboratory for medical image analysis

    NARCIS (Netherlands)

    Olabarriaga, Sílvia D.; Glatard, Tristan; de Boer, Piter T.

    2010-01-01

    This paper presents the design, implementation, and usage of a virtual laboratory for medical image analysis. It is fully based on the Dutch grid, which is part of the Enabling Grids for E-sciencE (EGEE) production infrastructure and driven by the gLite middleware. The adopted service-oriented

  1. Scanning transmission electron microscopy imaging and analysis

    CERN Document Server

    Pennycook, Stephen J

    2011-01-01

    Provides the first comprehensive treatment of the physics and applications of this mainstream technique for imaging and analysis at the atomic level. Presents applications of STEM in condensed matter physics, materials science, catalysis, and nanoscience. Suitable for graduate students learning microscopy, researchers wishing to utilize STEM, as well as for specialists in other areas of microscopy. Edited and written by leading researchers and practitioners.

  2. Flame analysis using image processing techniques

    Science.gov (United States)

    Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng

    2018-04-01

    This paper presents image processing techniques with the use of fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics is important in industry for extracting relevant information from flame images. Experimental tests are carried out in a model industrial burner with different flow rates. Flame features such as luminous and spectral parameters are extracted using image processing and the Fast Fourier Transform (FFT). Flame images are acquired using a FLIR infrared camera. Non-linearities such as thermo-acoustic oscillations and background noise affect the stability of the flame. Flame velocity is one of the important characteristics that determines flame stability. In this paper, an image processing method is proposed to determine flame velocity. A power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated. However, a more intelligent diagnostic system is needed to automatically determine flame stability. In this paper, flame features at different flow rates are compared and analyzed. The selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.

  3. Frequency domain analysis of knock images

    Science.gov (United States)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark triggered flame speed, the time when end gas auto-ignition occurs and the end gas flame speed after auto-ignition. This study presents a frequency domain analysis on the knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying fast Fourier transform (FFT) to three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
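
    A rough sketch of the processing chain described above, under the assumption that the high-pass step can be approximated by subtracting a Gaussian-blurred copy of each frame: average the filtered luminosity per color channel, then FFT the resulting time series.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def luminosity_spectra(frames, fps, sigma=5.0):
          # High-pass each RGB frame by subtracting a Gaussian-blurred copy,
          # average the residual over pixels, then FFT the per-channel time
          # series; `frames` has shape (time, height, width, 3).
          frames = np.asarray(frames, dtype=float)
          blurred = np.stack([
              np.stack([gaussian_filter(f[..., c], sigma) for c in range(3)],
                       axis=-1)
              for f in frames
          ])
          signal = (frames - blurred).mean(axis=(1, 2))
          freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
          return freqs, np.abs(np.fft.rfft(signal, axis=0))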

  4. Computed image analysis of neutron radiographs

    International Nuclear Information System (INIS)

    Dinca, M.; Anghel, E.; Preda, M.; Pavelescu, M.

    2008-01-01

    Similar to X-radiography, there is in practice a nondestructive technique, named neutron radiology, that uses neutrons as the penetrating particle. When the information is registered on a film with the help of a conversion foil (with a high cross section for neutrons) that emits secondary radiation (β, γ) creating a latent image, the technique is named neutron radiography. A radiographic industrial film that contains the image of the internal structure of an object, obtained by neutron radiography, must subsequently be analyzed to obtain qualitative and quantitative information about the structural integrity of that object. A computed analysis of a film is possible using a facility with the following main components: an illuminator for film, a CCD video camera and a computer (PC) with suitable software. The qualitative analysis aims to reveal possible anomalies of the structure due to manufacturing processes or induced by working processes (for example, the irradiation activity in the case of nuclear fuel). The quantitative determination is based on measurements of some image parameters: dimensions and optical densities. The illuminator has been built specially for this application but can also be used for simple visual observation. The illuminated area is 9x40 cm. The frame of the system is a comparator of Abbe Carl Zeiss Jena type, which has been adapted for this application. The video camera assures the capture of the image, which is stored and processed by the computer. A special program, SIMAG-NG, has been developed at INR Pitesti which, alongside the program SMTV II of the special acquisition module SM 5010, can analyze the images of a film. The major application of the system was the quantitative analysis of a film containing the images of some nuclear fuel pins beside a dimensional standard. The system was used to measure the length of the pellets of the TRIGA nuclear fuel. (authors)

  5. Web Based Distributed Coastal Image Analysis System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops Web based distributed image analysis system processing the Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  6. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R

    2004-01-01

    Introduction (Todd R. Reed); Content-Based Image Sequence Representation (Pedro M. Q. Aguiar, Radu S. Jasinschi, José M. F. Moura, and Charnchai Pluempitiwiriyawej); The Computation of Motion (Christoph Stiller, Sören Kammel, Jan Horn, and Thao Dang); Motion Analysis and Displacement Estimation in the Frequency Domain (Luca Lucchese and Guido Maria Cortelazzo); Quality of Service Assessment in New Generation Wireless Video Communications (Gaetano Giunta); Error Concealment in Digital Video (Francesco G.B. De Natale); Image Sequence Restoration: A Wider Perspective (Anil Kokaram); Video Summarization (Cuneyt M. Taskiran and Edward ...)

  7. Study of TCP densification via image analysis

    International Nuclear Information System (INIS)

    Silva, R.C.; Alencastro, F.S.; Oliveira, R.N.; Soares, G.A.

    2011-01-01

    Among ceramic materials that mimic human bone, β-type tri-calcium phosphate (β-TCP) has shown appropriate chemical stability and a superior resorption rate when compared to hydroxyapatite. In order to increase its mechanical strength, the material is sintered, under controlled time and temperature conditions, to obtain densification without phase change. In the present work, tablets were produced via uniaxial compression and then sintered at 1150°C for 2 h. The analysis via XRD and FTIR showed that the sintered tablets were composed only of β-TCP. The SEM images were used for quantification of grain size and volume fraction of pores via digital image analysis. The tablets showed a small pore fraction (between 0.67% and 6.38%) and a homogeneous grain size distribution (∼2 μm). Therefore, the analysis method seems viable for quantifying porosity and grain size. (author)
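
    The pore-fraction measurement lends itself to a very small sketch, assuming pores image darker than the sintered grains in the SEM micrographs; Otsu's threshold here is a stand-in for whatever segmentation the authors used.

      import numpy as np
      from skimage.filters import threshold_otsu

      def pore_fraction_percent(sem_image):
          # Pore fraction from an SEM cross-section, assuming pores image
          # darker than the sintered grains; Otsu picks the cut-off.
          pores = sem_image < threshold_otsu(sem_image)
          return 100.0 * float(pores.mean())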

  8. Analysis of renal nuclear medicine images

    International Nuclear Information System (INIS)

    Jose, R.M.J.

    2000-01-01

    Nuclear medicine imaging of the renal system involves producing time-sequential images showing the distribution of a radiopharmaceutical in the renal system. Producing numerical and graphical data from nuclear medicine studies requires defining regions of interest (ROIs) around various organs within the field of view, such as the left kidney, right kidney and bladder. Automating this process has several advantages: a saving of a clinician's time, and enhanced objectivity and reproducibility. This thesis describes the design, implementation and assessment of an automatic ROI generation system. The performance of the system described in this work is assessed by comparing the results to those obtained using manual techniques. Since nuclear medicine images are inherently noisy, the sequence of images is reconstructed using the first few components of a principal components analysis in order to reduce the noise in the images. An image of the summed reconstructed sequence is then formed. This summed image is segmented by using an edge co-occurrence matrix as a feature space for simultaneously classifying regions and locating boundaries. Two methods for assigning the regions of a segmented image to organ class labels are assessed. The first method is based on using Dempster-Shafer theory to combine uncertain evidence from several sources into a single evidence; the second method makes use of a neural network classifier. The use of each technique in classifying the regions of a segmented image is assessed in separate experiments using 40 real patient studies. A comparative assessment of the two techniques shows that the neural network produces more accurate region labels for the kidneys. The optimum neural system is determined experimentally. Results indicate that combining temporal and spatial information with a priori clinical knowledge produces reasonable ROIs. Consistency in the neural network assignment of regions is enhanced by taking account of the contextual
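
    The PCA-based noise reduction step lends itself to a compact sketch: keep only the first few principal components of the frame sequence and sum the reconstruction. The component count and the stand-in data are illustrative assumptions, not the thesis's parameters.

```python
import numpy as np

def pca_denoise_and_sum(frames, n_components=3):
    """frames: (T, H, W) image sequence; returns the summed denoised image."""
    t, h, w = frames.shape
    x = frames.reshape(t, h * w).astype(float)
    mean = x.mean(axis=0)
    u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
    s[n_components:] = 0                      # discard the noisy trailing components
    denoised = u @ np.diag(s) @ vt + mean
    return denoised.reshape(t, h, w).sum(axis=0)

summed = pca_denoise_and_sum(np.random.rand(60, 64, 64))  # stand-in renogram sequence
```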

  9. MRI and image quantitation for drug assessment - growth effects of anabolic steroids and precursors.

    Science.gov (United States)

    Tang, Haiying; Wu, Ed; Vasselli, Joseph

    2005-01-01

    MRI and image quantitation play an expanding role in modern drug research because MRI offers high resolution, is non-invasive, and provides excellent soft tissue contrast. Moreover, with the development of effective image segmentation and analysis methods, in-vivo and serial tissue growth measurements can be assessed. In this study, an MR image acquisition and analysis protocol was established and validated for investigating the effects of anabolic steroids and precursors on muscle growth and body composition in a guinea pig model. Semi-automatic and interactive segmentation methods were developed to accurately label the tissue of interest for tissue volume estimation. In addition, a longitudinal tissue area outlining procedure was proposed for the study of tissue geometric features in relation to tissue growth. Finally, a fully automatic data retrieval and analysis scheme was implemented to handle the large overall amount of image quantitation, statistical analysis, and study group comparisons. As a result, highly significant differences in muscle and organ growth were detected between intact and castrated guinea pigs using the selected anabolic steroids, indicating the viability of employing such a protocol to assess other anabolic steroids. Furthermore, the anabolic potential of selected steroid precursors and their effects on muscle growth, in comparison with that in respective positive control groups of castrated guinea pigs, were evaluated with the proposed protocol.

  10. Rapid Analysis and Exploration of Fluorescence Microscopy Images

    OpenAIRE

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason; Steininger, Robert J; Wu, Lani; Altschuler, Steven

    2014-01-01

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard.

  11. Image analysis for ophthalmological diagnosis image processing of Corvis ST images using Matlab

    CERN Document Server

    Koprowski, Robert

    2016-01-01

    This monograph focuses on the use of analysis and processing methods for images from the Corvis® ST tonometer. The presented analysis is associated with the quantitative, repeatable and fully automatic evaluation of the response of the eye, eyeball and cornea to an air-puff. All the described algorithms were practically implemented in MATLAB®. The monograph also describes and provides the full source code designed to perform the discussed calculations. As a result, this monograph is intended for scientists, graduate students and students of computer science and bioengineering as well as doctors wishing to expand their knowledge of modern diagnostic methods assisted by various image analysis and processing methods.

  12. Image sequence analysis workstation for multipoint motion analysis

    Science.gov (United States)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze motion of objects from video sequences. The system combines the software and hardware environment of a modern graphic-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increase in throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey level image processing and spatial/temporal adaptation of the processing parameters is used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze frame display, and digital image enhancement; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
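
    Centroid tracking of the kind listed under feature 3 can be sketched as follows, assuming one thresholded bright target per frame; the threshold and the synthetic sequence are illustrative stand-ins, not the workstation's actual algorithm.

```python
import numpy as np
from scipy import ndimage

def track_centroid(frames, thresh):
    """Return the intensity-weighted centroid of the largest bright blob per frame."""
    trajectory = []
    for frame in frames:
        labels, n = ndimage.label(frame > thresh)
        if n == 0:
            trajectory.append((np.nan, np.nan))       # target lost in this frame
            continue
        sizes = ndimage.sum(frame > thresh, labels, range(1, n + 1))
        target = int(np.argmax(sizes)) + 1            # label of the largest blob
        trajectory.append(ndimage.center_of_mass(frame, labels, target))
    return trajectory

path = track_centroid(np.random.rand(30, 64, 64), thresh=0.99)  # stand-in sequence
```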

  13. Quantitative Analysis in Nuclear Medicine Imaging

    CERN Document Server

    2006-01-01

    This book provides a review of image analysis techniques as they are applied in the field of diagnostic and therapeutic nuclear medicine. Driven in part by the remarkable increase in computing power and its ready and inexpensive availability, this is a relatively new yet rapidly expanding field. Likewise, although the use of radionuclides for diagnosis and therapy has origins dating back almost to the discovery of natural radioactivity itself, radionuclide therapy and, in particular, targeted radionuclide therapy has only recently emerged as a promising approach for therapy of cancer and, to a lesser extent, other diseases. An effort has, therefore, been made to place the reviews provided in this book in a broader context. The effort to do this is reflected by the inclusion of introductory chapters that address basic principles of nuclear medicine imaging, followed by an overview of issues that are closely related to quantitative nuclear imaging and its potential role in diagnostic and therapeutic applications. ...

  14. Multimodal Imaging Brain Connectivity Analysis (MIBCA toolbox

    Directory of Open Access Journals (Sweden)

    Andre Santos Ribeiro

    2015-07-01

    Full Text Available Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality-specific data procedures. Until now, no single toolbox was able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of diminishing time waste in data processing and to allow an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software such as Freesurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19–73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting state fMRI data, and 10 subjects with 18F-Altanserin PET data. Results. It was observed both a high inter

  15. Semiautomatic digital imaging system for cytogenetic analysis

    International Nuclear Information System (INIS)

    Chaubey, R.C.; Chauhan, P.C.; Bannur, S.V.; Kulgod, S.V.; Chadda, V.K.; Nigam, R.K.

    1999-08-01

    The paper describes a digital image processing system, developed indigenously at BARC, for size measurement of microscopic biological objects such as the cell, nucleus and micronucleus in mouse bone marrow and in cytochalasin-B blocked human lymphocytes in vitro, and for numerical counting and karyotyping of metaphase chromosomes of human lymphocytes. Errors in karyotyping of chromosomes by the imaging system may creep in due to the lack of a well-defined centromere position or extensive bending of chromosomes, which may result from poor preparation quality. Good metaphase preparations are mandatory for precise and accurate analysis by the system. Additional new morphological parameters for each chromosome have to be incorporated to improve the accuracy of karyotyping. Though the experienced cytogeneticist is the final judge, the system assists him/her in carrying out the analysis much faster as compared to manual scoring. Further experimental studies are in progress to validate the different software packages developed for various cytogenetic applications. (author)

  16. Morphometric image analysis of giant vesicles

    DEFF Research Database (Denmark)

    Husen, Peter Rasmussen; Arriaga, Laura; Monroy, Francisco

    2012-01-01

    We have developed a strategy to determine lengths and orientations of tie lines in the coexistence region of liquid-ordered and liquid-disordered phases of cholesterol-containing ternary lipid mixtures. The method combines confocal-fluorescence-microscopy image stacks of giant unilamellar vesicles (GUVs), a dedicated 3D-image analysis, and a quantitative analysis based on equilibrium thermodynamic considerations. This approach was tested in GUVs composed of 1,2-dioleoyl-sn-glycero-3-phosphocholine/1,2-palmitoyl-sn-glycero-3-phosphocholine/cholesterol. In general, our results show a reasonable agreement with previously reported data obtained by other methods. For example, our computed tie lines were found to be nonhorizontal, indicating a difference in cholesterol content in the coexisting phases. This new, to our knowledge, analytical strategy offers a way to further exploit fluorescence...

  17. ODM Data Analysis-A tool for the automatic validation, monitoring and generation of generic descriptive statistics of patient data.

    Science.gov (United States)

    Brix, Tobias Johannes; Bruland, Philipp; Sarfraz, Saad; Ernsting, Jan; Neuhaus, Philipp; Storck, Michael; Doods, Justin; Ständer, Sonja; Dugas, Martin

    2018-01-01

    A required step for presenting results of clinical studies is the declaration of participants' demographic and baseline characteristics, as required by FDAAA 801. The common workflow to accomplish this task is to export the clinical data from the electronic data capture system used and import it into statistical software like SAS software or IBM SPSS. This software requires trained users, who have to implement the analysis individually for each item. These expenditures may become an obstacle for small studies. The objective of this work is to design, implement and evaluate an open source application, called ODM Data Analysis, for the semi-automatic analysis of clinical study data. The system requires clinical data in the CDISC Operational Data Model format. After the file is uploaded, its syntax and the data type conformity of the collected data are validated. The completeness of the study data is determined and basic statistics, including illustrative charts for each item, are generated. Datasets from four clinical studies have been used to evaluate the application's performance and functionality. The system is implemented as an open source web application (available at https://odmanalysis.uni-muenster.de) and also provided as a Docker image, which enables easy distribution and installation on local systems. Study data is only stored in the application as long as the calculations are performed, which is compliant with data protection requirements. Analysis times are below half an hour, even for larger studies with over 6000 subjects. Medical experts have confirmed the usefulness of this application for gaining an overview of their collected study data for monitoring purposes and for generating descriptive statistics without further user interaction. The semi-automatic analysis has its limitations and cannot replace the complex analysis of statisticians, but it can be used as a starting point for their examination and reporting.
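
    The per-item completeness check and descriptive statistics can be approximated in a few lines of pandas on a stand-in table; the column names are illustrative and do not reflect the CDISC ODM schema the tool actually parses.

```python
import pandas as pd

df = pd.DataFrame({
    "age": [34, 51, 29, None, 62],
    "sex": ["F", "M", "F", "F", "M"],
})

for col in df.columns:
    print(f"{col}: {df[col].isna().sum()} missing of {len(df)}")  # completeness check
    if pd.api.types.is_numeric_dtype(df[col]):
        print(df[col].describe())                                  # basic statistics
    else:
        print(df[col].value_counts())                              # category frequencies
```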

  18. Image Analysis for Nail-fold Capillaroscopy

    OpenAIRE

    Vucic, Vladimir

    2015-01-01

    Detection of diseases at an early stage is very important since it can make the treatment of patients easier, safer and more efficient. For the detection of rheumatic diseases, and even prediction of tendencies towards such diseases, capillaroscopy is becoming an increasingly recognized method. Nail-fold capillaroscopy is a non-invasive imaging technique that is used for analysis of microcirculation abnormalities that may lead to diseases like systemic sclerosis, Raynaud's phenomenon and others. ...

  19. Computerized analysis of brain perfusion parameter images

    International Nuclear Information System (INIS)

    Turowski, B.; Haenggi, D.; Wittsack, H.J.; Beck, A.; Aurich, V.

    2007-01-01

    Purpose: The development of a computerized method which allows a direct quantitative comparison of perfusion parameters. The display should allow a clear, direct comparison of brain perfusion parameters in different vascular territories and over the course of time. The analysis is intended to be the basis for further evaluation of cerebral vasospasm after subarachnoid hemorrhage (SAH). The method should permit early diagnosis of cerebral vasospasm. Materials and Methods: The Angiotux 2D-ECCET software was developed in close cooperation between computer scientists and clinicians. Starting from parameter images of brain perfusion, the cortex was marked, segmented and assigned to definite vascular territories. The underlying values were averaged for each segment and displayed in a graph. If a follow-up was available, the mean values of the perfusion parameters were displayed in relation to time. The method was developed using CT perfusion values but is applicable to other methods of perfusion imaging. Results: Computerized analysis of brain perfusion parameter images allows an immediate comparison of these parameters and follow-up of mean values in a clear and concise manner. Values are related to definite vascular territories. The tabular output facilitates further statistical evaluations. The computerized analysis is precisely reproducible, i.e., repetitions result in exactly the same output. (orig.)

  20. Extracting cardiac shapes and motion of the chick embryo heart outflow tract from four-dimensional optical coherence tomography images

    Science.gov (United States)

    Yin, Xin; Liu, Aiping; Thornburg, Kent L.; Wang, Ruikang K.; Rugonyi, Sandra

    2012-09-01

    Recent advances in optical coherence tomography (OCT), and the development of image reconstruction algorithms, have enabled four-dimensional (4-D; three-dimensional imaging over time) imaging of the embryonic heart. To further analyze and quantify the dynamics of cardiac beating, segmentation procedures that can extract the shape of the heart and its motion are needed. Most previous studies analyzed cardiac image sequences using manually extracted shapes and measurements. However, this is time consuming and subject to inter-operator variability. Automated or semi-automated analyses of 4-D cardiac OCT images, although very desirable, are also extremely challenging. This work proposes a robust algorithm to semi-automatically detect and track cardiac tissue layers from 4-D OCT images of early (tubular) embryonic hearts. Our algorithm uses a two-dimensional (2-D) deformable double-line model (DLM) to detect target cardiac tissues. The detection algorithm uses a maximum-likelihood estimator and was successfully applied to 4-D in vivo OCT images of the heart outflow tract of day-three chicken embryos. The extracted shapes captured the dynamics of the chick embryonic heart outflow tract wall, enabling further analysis of cardiac motion.

  1. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g., to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements of clinical routine. In this focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models into the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present the latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body.

  2. The Digital Image Processing And Quantitative Analysis In Microscopic Image Characterization

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    2000-01-01

    Although many electron microscopes produce digital images, not all of them are equipped with a supporting unit to process and analyse image data quantitatively. Generally the analysis of the image has to be made visually and the measurements are made manually. The development of mathematical methods for geometric analysis and pattern recognition allows automatic microscopic image analysis by computer. Image processing programs can be used for analysis of image texture and periodic structure by application of the Fourier transform. With the development of composite materials, Fourier analysis in the frequency domain has become important for measuring crystallographic orientation. Periodic structure analysis and crystal orientation are the key to understanding many material properties such as mechanical strength, stress, heat conductivity, resistance, capacitance and other electric and magnetic properties. This paper shows the application of digital image processing in microscopic image characterization and analysis.

  3. Automatic dirt trail analysis in dermoscopy images.

    Science.gov (United States)

    Cheng, Beibei; Joe Stanley, R; Stoecker, William V; Osterwise, Christopher T P; Stricklin, Sherea M; Hinton, Kristen A; Moss, Randy H; Oliviero, Margaret; Rabinovitz, Harold S

    2013-02-01

    Basal cell carcinoma (BCC) is the most common cancer in the US. Dermatoscopes are devices used by physicians to facilitate the early detection of these cancers based on the identification of skin lesion structures often specific to BCCs. One new lesion structure, referred to as dirt trails, has the appearance of dark gray, brown or black dots and clods of varying sizes distributed in elongated clusters with indistinct borders, often appearing as curvilinear trails. In this research, we explore a dirt trail detection and analysis algorithm for extracting, measuring, and characterizing dirt trails based on size, distribution, and color in dermoscopic skin lesion images. These dirt trails are then used to automatically discriminate BCC from benign skin lesions. For an experimental data set of 35 BCC images with dirt trails and 79 benign lesion images, a neural network-based classifier achieved a 0.902 area under the receiver operating characteristic curve using a leave-one-out approach. Results obtained from this study show that automatic detection of dirt trails in dermoscopic images of BCC is feasible. This is important because of the large number of these skin cancers seen every year and the challenge of discovering these earlier with instrumentation. © 2011 John Wiley & Sons A/S.
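
    The reported evaluation protocol, leave-one-out with a neural network scored by ROC AUC, can be sketched with scikit-learn on random stand-in features; the feature vectors, network size, and seed are illustrative assumptions, not the paper's dirt-trail descriptors.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(114, 10))                # 35 BCC + 79 benign, 10 stand-in features
y = np.array([1] * 35 + [0] * 79)

scores = np.zeros(len(y))
for train, test in LeaveOneOut().split(X):
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X[train], y[train])
    scores[test] = clf.predict_proba(X[test])[:, 1]   # score the held-out image

print("AUC:", roc_auc_score(y, scores))
```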

  4. Remote Sensing Digital Image Analysis An Introduction

    CERN Document Server

    Richards, John A

    2013-01-01

    Remote Sensing Digital Image Analysis provides the non-specialist with a treatment of the quantitative analysis of satellite and aircraft derived remotely sensed data. Since the first edition of the book there have been significant developments in the algorithms used for the processing and analysis of remote sensing imagery; nevertheless many of the fundamentals have substantially remained the same.  This new edition presents material that has retained value since those early days, along with new techniques that can be incorporated into an operational framework for the analysis of remote sensing data. The book is designed as a teaching text for the senior undergraduate and postgraduate student, and as a fundamental treatment for those engaged in research using digital image processing in remote sensing.  The presentation level is for the mathematical non-specialist.  Since the very great number of operational users of remote sensing come from the earth sciences communities, the text is pitched at a leve...

  5. Fast prostate segmentation for brachytherapy based on joint fusion of images and labels

    Science.gov (United States)

    Nouranian, Saman; Ramezani, Mahdi; Mahdavi, S. Sara; Spadinger, Ingrid; Morris, William J.; Salcudean, Septimiu E.; Abolmaesumi, Purang

    2014-03-01

    Brachytherapy, one of the treatment methods for prostate cancer, is performed by implantation of radioactive seeds inside the gland. The standard of care for this treatment procedure is to acquire transrectal ultrasound images of the prostate, which are segmented in order to plan the appropriate seed placement. The segmentation process is usually performed either manually or semi-automatically and is associated with subjective errors because the prostate visibility is limited in ultrasound images. The current segmentation process also limits the possibility of intra-operative delineation of the prostate to perform real-time dosimetry. In this paper, we propose a computationally inexpensive and fully automatic segmentation approach that takes advantage of previously segmented images to form a joint space of images and their segmentations. We utilize the joint Independent Component Analysis method to generate a model which is further employed to produce a probability map of the target segmentation. We evaluate this approach on the transrectal ultrasound volume images of 60 patients using a leave-one-out cross-validation approach. The results are compared with the manually segmented prostate contours that were used by clinicians to plan brachytherapy procedures. We show that the proposed approach is fast, with accuracy and precision comparable to those found in previous studies on TRUS segmentation.
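
    A simplified sketch of the joint image-label modeling idea, substituting plain PCA for the paper's joint ICA: training images and label maps are stacked into one vector space, coefficients for a new image are estimated from its image half alone, and the label half of the reconstruction is read as a probability map. Shapes, data, and the least-squares projection are illustrative assumptions.

```python
import numpy as np

def fit_joint_model(images, labels, k=5):
    """images, labels: (N, D) arrays of flattened training data."""
    joint = np.hstack([images, labels]).astype(float)
    mean = joint.mean(axis=0)
    _, _, vt = np.linalg.svd(joint - mean, full_matrices=False)
    return mean, vt[:k]                       # mean vector and top-k components

def probability_map(image, mean, comps, d):
    """Estimate the label half for a new flattened image of length d."""
    a_img = comps[:, :d]                      # image part of each component
    coeff, *_ = np.linalg.lstsq(a_img.T, image - mean[:d], rcond=None)
    recon = mean + coeff @ comps              # full joint reconstruction
    return np.clip(recon[d:], 0, 1)           # label half read as probabilities

imgs = np.random.rand(20, 100)                # 20 stand-in training images, D = 100
labs = (np.random.rand(20, 100) > 0.5).astype(float)
mean, comps = fit_joint_model(imgs, labs)
pmap = probability_map(np.random.rand(100), mean, comps, d=100)
```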

  6. [Imaging Mass Spectrometry in Histopathologic Analysis].

    Science.gov (United States)

    Yamazaki, Fumiyoshi; Seto, Mitsutoshi

    2015-04-01

    Matrix-assisted laser desorption/ionization (MALDI)-imaging mass spectrometry (IMS) enables visualization of the distribution of a range of biomolecules by integrating biochemical information from mass spectrometry with positional information from microscopy. IMS identifies a target molecule. In addition, IMS enables global analysis of biomolecules containing unknown molecules by detecting the ratio of the molecular weight to electric charge without any target, which makes it possible to identify novel molecules. IMS generates data on the distribution of lipids and small molecules in tissues, which is difficult to visualize with either conventional counter-staining or immunohistochemistry. In this review, we firstly introduce the principle of imaging mass spectrometry and recent advances in the sample preparation method. Secondly, we present findings regarding biological samples, especially pathological ones. Finally, we discuss the limitations and problems of the IMS technique and clinical application, such as in drug development.

  7. Machine Learning Interface for Medical Image Analysis.

    Science.gov (United States)

    Zhang, Yi C; Kagen, Alexander C

    2017-10-01

    TensorFlow is a second-generation open-source machine learning software library with a built-in framework for implementing neural networks in a wide variety of perceptual tasks. Although TensorFlow usage is well established with computer vision datasets, the TensorFlow interface with DICOM formats for medical imaging remains to be established. Our goal is to extend the TensorFlow API to accept raw DICOM images as input; 1513 DaTscan DICOM images were obtained from the Parkinson's Progression Markers Initiative (PPMI) database. DICOM pixel intensities were extracted and shaped into tensors, or n-dimensional arrays, to populate the training, validation, and test input datasets for machine learning. A simple neural network was constructed in TensorFlow to classify images into normal or Parkinson's disease groups. Training was executed over 1000 iterations for each cross-validation set. The gradient descent optimization and Adagrad optimization algorithms were used to minimize cross-entropy between the predicted and ground-truth labels. Cross-validation was performed ten times to produce a mean accuracy of 0.938 ± 0.047 (95% CI 0.908-0.967). The mean sensitivity was 0.974 ± 0.043 (95% CI 0.947-1.00) and mean specificity was 0.822 ± 0.207 (95% CI 0.694-0.950). We extended the TensorFlow API to enable DICOM compatibility in the context of DaTscan image analysis. We implemented a neural network classifier that produces diagnostic accuracies on par with excellent results from previous machine learning models. These results indicate the potential role of TensorFlow as a useful adjunct diagnostic tool in the clinical setting.
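
    The described pipeline, DICOM pixel data reshaped into tensors feeding a small classifier, might look roughly like this with pydicom and Keras; the file paths, normalization, and network are illustrative stand-ins, not the authors' extension of the TensorFlow API.

```python
import numpy as np
import pydicom
import tensorflow as tf

def load_dicom(path):
    """Read a DICOM file and return normalized pixel intensities."""
    px = pydicom.dcmread(path).pixel_array.astype("float32")
    return px / px.max()                      # simple intensity normalization

# Placeholder paths; X: (N, H, W) stack of scans, y: 0 = normal, 1 = Parkinson's
X = np.stack([load_dicom(p) for p in ["scan1.dcm", "scan2.dcm"]])
y = np.array([0, 1])

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=X.shape[1:]),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary diagnosis output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10)
```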

  8. Phase Image Analysis in Conduction Disturbance Patients

    International Nuclear Information System (INIS)

    Kwark, Byeng Su; Choi, Si Wan; Kang, Seung Sik; Park, Ki Nam; Lee, Kang Wook; Jeon, Eun Seok; Park, Chong Hun

    1994-01-01

    It is known that the normal His-Purkinje system provides for nearly synchronous activation of the right (RV) and left (LV) ventricles. When His-Purkinje conduction is abnormal, the resulting sequence of ventricular contraction must be correspondingly abnormal. These abnormal mechanical consequences were difficult to demonstrate because of the complexity and the rapidity of the events. To determine the relationship between the phase changes and the abnormalities of ventricular conduction, we performed phase image analysis of Tc-RBC gated blood pool scintigrams in patients with intraventricular conduction disturbances (24 complete left bundle branch block (C-LBBB), 15 complete right bundle branch block (C-RBBB), 13 Wolff-Parkinson-White syndrome (WPW), 10 controls). The results were as follows: 1) The ejection fraction (EF), peak ejection rate (PER), and peak filling rate (PFR) of LV in gated blood pool scintigraphy (GBPS) were significantly lower in patients with C-LBBB than in controls (44.4 ± 13.9% vs 69.9 ± 4.2%, 2.48 ± 0.98 vs 3.51 ± 0.62, 1.76 ± 0.71 vs 3.38 ± 0.92, respectively, p<0.05). 2) In the phase angle analysis of LV, standard deviation (SD), full width at half maximum of the phase angle (FWHM), and range of phase angle were significantly increased in patients with C-LBBB compared with controls (20.6 ± 18.1 vs 8.6 ± 1.8, 22.5 ± 9.2 vs 16.0 ± 3.9, 95.7 ± 31.7 vs 51.3 ± 5.4, respectively, p<0.05). 3) There was no significant difference in EF, PER, PFR between patients with the Wolff-Parkinson-White syndrome and controls. 4) Standard deviation and range of phase angle were significantly higher in patients with WPW syndrome than in controls (10.6 ± 2.6 vs 8.6 ± 1.8, p<0.05; 69.8 ± 11.7 vs 51.3 ± 5.4, p<0.001, respectively); however, there was no difference between the two groups in full width at half maximum. 5) Phase image analysis revealed relatively uniform phase across both ventricles in patients with normal conduction, but markedly delayed phase in the left ventricle

  9. Phase Image Analysis in Conduction Disturbance Patients

    Energy Technology Data Exchange (ETDEWEB)

    Kwark, Byeng Su; Choi, Si Wan; Kang, Seung Sik; Park, Ki Nam; Lee, Kang Wook; Jeon, Eun Seok; Park, Chong Hun [Chung Nam University Hospital, Daejeon (Korea, Republic of)

    1994-03-15

    It is known that the normal His-Purkinje system provides for nearly synchronous activation of the right (RV) and left (LV) ventricles. When His-Purkinje conduction is abnormal, the resulting sequence of ventricular contraction must be correspondingly abnormal. These abnormal mechanical consequences were difficult to demonstrate because of the complexity and the rapidity of the events. To determine the relationship between the phase changes and the abnormalities of ventricular conduction, we performed phase image analysis of Tc-RBC gated blood pool scintigrams in patients with intraventricular conduction disturbances (24 complete left bundle branch block (C-LBBB), 15 complete right bundle branch block (C-RBBB), 13 Wolff-Parkinson-White syndrome (WPW), 10 controls). The results were as follows: 1) The ejection fraction (EF), peak ejection rate (PER), and peak filling rate (PFR) of LV in gated blood pool scintigraphy (GBPS) were significantly lower in patients with C-LBBB than in controls (44.4 ± 13.9% vs 69.9 ± 4.2%, 2.48 ± 0.98 vs 3.51 ± 0.62, 1.76 ± 0.71 vs 3.38 ± 0.92, respectively, p<0.05). 2) In the phase angle analysis of LV, standard deviation (SD), full width at half maximum of the phase angle (FWHM), and range of phase angle were significantly increased in patients with C-LBBB compared with controls (20.6 ± 18.1 vs 8.6 ± 1.8, 22.5 ± 9.2 vs 16.0 ± 3.9, 95.7 ± 31.7 vs 51.3 ± 5.4, respectively, p<0.05). 3) There was no significant difference in EF, PER, PFR between patients with the Wolff-Parkinson-White syndrome and controls. 4) Standard deviation and range of phase angle were significantly higher in patients with WPW syndrome than in controls (10.6 ± 2.6 vs 8.6 ± 1.8, p<0.05; 69.8 ± 11.7 vs 51.3 ± 5.4, p<0.001, respectively); however, there was no difference between the two groups in full width at half maximum. 5) Phase image analysis revealed relatively uniform phase across both ventricles in patients with normal conduction, but markedly delayed phase in the left ventricle

  10. A report on digital image processing and analysis

    International Nuclear Information System (INIS)

    Singh, B.; Alex, J.; Haridasan, G.

    1989-01-01

    This report presents developments in software connected with digital image processing and analysis at the Centre. In image processing, one resorts either to alteration of grey level values so as to enhance features in the image, or to transform domain operations for restoration or filtering. Typical transform domain operations like Karhunen-Loeve transforms are statistical in nature and are used for good registration of images or template matching. Image analysis procedures segment grey level images into images contained within selectable windows, for the purpose of estimating geometrical features in the image, like area, perimeter, projections, etc. In short, in image processing both the input and output are images, whereas in image analysis the input is an image and the output is a set of numbers and graphs. (author). 19 refs

  11. Uses of software in digital image analysis: a forensic report

    Science.gov (United States)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. Through this paper the authors have tried to explain these tasks, which are described in three categories: image compression, image enhancement & restoration, and measurement extraction, with the help of examples like signature comparison, counterfeit currency comparison and footwear sole impressions using the software Canvas and Corel Draw.

  12. MRI histogram analysis enables objective and continuous classification of intervertebral disc degeneration.

    Science.gov (United States)

    Waldenberg, Christian; Hebelka, Hanna; Brisby, Helena; Lagerstrand, Kerstin Magdalena

    2018-05-01

    Magnetic resonance imaging (MRI) is the best diagnostic imaging method for low back pain. However, the technique is currently not utilized to its full capacity, often failing to depict painful intervertebral discs (IVDs), potentially due to the rough degeneration classification system used clinically today. MR image histograms, which reflect the IVD heterogeneity, may offer sensitive imaging biomarkers for IVD degeneration classification. This study investigates the feasibility of using histogram analysis as a means of objective and continuous grading of IVD degeneration. Forty-nine IVDs in ten low back pain patients (six males, 25-69 years) were examined with MRI (T2-weighted images and T2-maps). Each IVD was semi-automatically segmented on three mid-sagittal slices. Histogram features of the IVD were extracted from the defined regions of interest and correlated to Pfirrmann grade. Both T2-weighted images and T2-maps displayed similar histogram features. Histograms of well-hydrated IVDs displayed two separate peaks, representing the annulus fibrosus and the nucleus pulposus. Degenerated IVDs displayed decreased peak separation, where the separation was shown to correlate strongly with Pfirrmann grade (P histogram appearances. Histogram features correlated well with IVD degeneration, suggesting that IVD histogram analysis is a suitable tool for objective and continuous IVD degeneration classification. As histogram analysis revealed IVD heterogeneity, it may be a clinical tool for characterization of regional IVD degeneration effects. To elucidate the usefulness of histogram analysis in patient management, IVD histogram features between asymptomatic and symptomatic individuals need to be compared.
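
    The two-peak histogram feature can be sketched directly: build a histogram of the IVD region, smooth it, and measure the distance between the two most prominent peaks. The bin count, smoothing width, and prominence threshold are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def peak_separation(roi_values, bins=64):
    """Distance between the two strongest peaks of the intensity histogram."""
    hist, edges = np.histogram(roi_values, bins=bins, density=True)
    smooth = gaussian_filter1d(hist, sigma=2)
    peaks, _ = find_peaks(smooth, prominence=0.001)
    if len(peaks) < 2:
        return 0.0                            # degenerated disc: peaks merged
    centers = (edges[:-1] + edges[1:]) / 2
    top2 = peaks[np.argsort(smooth[peaks])[-2:]]
    return float(abs(centers[top2[0]] - centers[top2[1]]))

values = np.concatenate([np.random.normal(60, 6, 500),    # nucleus-like peak
                         np.random.normal(120, 8, 500)])  # annulus-like peak
print(f"peak separation: {peak_separation(values):.1f} intensity units")
```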

  13. Analysis of image plane's Illumination in Image-forming System

    International Nuclear Information System (INIS)

    Duan Lihua; Zeng Yan'an; Zhang Nanyangsheng; Wang Zhiguo; Yin Shiliang

    2011-01-01

    In the detection of optical radiation, the detecting accuracy is affected to a large extent by the optical power distribution over the detector's surface. In addition, in an image-forming system, the quality of the image is greatly determined by the uniformity of the image's illumination distribution. However, in practical optical systems, affected by factors such as field of view, stray light and off-axis effects, the distribution of the image's illumination tends to be non-uniform, so it is necessary to discuss the image plane's illumination in image-forming systems. In order to analyze the characteristics of the image-forming system over the full range, formulas to calculate the illumination of the image plane have been derived on the basis of photometry. Moreover, the relationship between the horizontal offset of the light source and the illumination of the image has been discussed in detail. After that, the influence of some key factors such as aperture angle, off-axis distance and horizontal offset on the illumination of the image has been brought forward. Through numerical simulation, theoretical curves for those key factors have been given. The results of the numerical simulation show that enlarging the diameter of the exit pupil increases the illumination of the image. The angle of view plays a negative role in the illumination distribution of the image; that is, the uniformity of the illumination distribution can be enhanced by compressing the angle of view. Lastly, it is proved that a telecentric optical design is an effective way to improve the uniformity of the illumination distribution.

  14. Fuzzy logic and image processing techniques for the interpretation of seismic data

    International Nuclear Information System (INIS)

    Orozco-del-Castillo, M G; Ortiz-Alemán, C; Rodríguez-Castellanos, A; Urrutia-Fucugauchi, J

    2011-01-01

    Since interpretation of seismic data is usually a tedious and repetitive task, the ability to do so automatically or semi-automatically has become an important objective of recent research. We believe that the vagueness and uncertainty in the interpretation process make fuzzy logic an appropriate tool for dealing with seismic data. In this work we developed a semi-automated fuzzy inference system to detect the internal architecture of a mass transport complex (MTC) in seismic images. We propose that the observed characteristics of an MTC can be expressed as fuzzy if-then rules consisting of linguistic values associated with fuzzy membership functions. The construction of the fuzzy inference system and various image processing techniques are presented. We conclude that this is a well-suited problem for fuzzy logic, since the application of the proposed methodology yields a semi-automatically interpreted MTC which closely resembles the MTC from expert manual interpretation
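
    One fuzzy if-then rule of the kind described can be sketched with trapezoidal membership functions and a minimum for the AND; the attribute names, breakpoints, and rule are illustrative assumptions, not the published system.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Standard trapezoidal membership function rising over [a, b], falling over [c, d]."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0, 1)

# Hypothetical rule: IF amplitude is low AND texture is chaotic THEN MTC-likelihood is high
amplitude = np.random.rand(128, 128)           # stand-in seismic attribute maps
chaos = np.random.rand(128, 128)

low_amp = trapezoid(amplitude, 0.0, 0.1, 0.3, 0.5)
chaotic = trapezoid(chaos, 0.4, 0.6, 0.9, 1.0)
mtc_likelihood = np.minimum(low_amp, chaotic)  # fuzzy AND as the minimum
```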

  15. Difference Image Analysis of Galactic Microlensing. I. Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Alcock, C.; Allsman, R. A.; Alves, D.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Griest, K. (and others)

    1999-08-20

    This is a preliminary report on the application of Difference Image Analysis (DIA) to Galactic bulge images. The aim of this analysis is to increase the sensitivity of the detection of gravitational microlensing. We discuss how the DIA technique simplifies the process of discovering microlensing events by detecting only objects that have variable flux. We illustrate how the DIA technique is not limited to detection of so-called "pixel lensing" events but can also be used to improve photometry for classical microlensing events by removing the effects of blending. We present a method whereby DIA can be used to reveal the true unblended colors, positions, and light curves of microlensing events. We discuss the need for a technique to obtain accurate microlensing timescales from blended sources and present a possible solution to this problem using the existing Hubble Space Telescope color-magnitude diagrams of the Galactic bulge and LMC. The use of such a solution with both classical and pixel microlensing searches is discussed. We show that one of the major causes of systematic noise in DIA is differential refraction. A technique for removing this systematic effect by registering images to a common air mass is presented. Improvements to commonly used image differencing techniques are discussed. (c) 1999 The American Astronomical Society.
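
    The core of DIA, matching a reference image to each survey image and subtracting so that only variable-flux objects remain, can be sketched as below; real pipelines fit a spatially varying convolution kernel, so the single Gaussian blur and median background here are deliberate simplifications.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_image(reference, survey, kernel_sigma):
    """Degrade the reference to the survey seeing, match background, and subtract."""
    blurred_ref = gaussian_filter(reference, kernel_sigma)
    background = np.median(survey - blurred_ref)
    return survey - blurred_ref - background       # variable objects stand out

ref = np.random.rand(64, 64)                       # stand-in frames
survey = gaussian_filter(ref, 1.2) + 0.01 * np.random.rand(64, 64)
diff = difference_image(ref, survey, kernel_sigma=1.2)
```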

  16. ANALYSIS AND APPLICATION OF LINEAMENTS EXTRACTION USING GF-1 SATELLITE IMAGES IN LOESS COVERED

    Directory of Open Access Journals (Sweden)

    L. Han

    2018-04-01

    Full Text Available Faults, folds and other tectonic features are geologically weak areas that form linear geomorphology as a result of erosion, appearing as lineaments on the earth's surface. Lineaments control the distribution of regional formations, groundwater, geothermal resources, etc., so they are an important indicator for evaluating the strength and stability of a geological structure. Current approaches rely mostly on visual interpretation and semi-automatic computer extraction, which are not only time-consuming but also labour-intensive, and it is difficult to guarantee accuracy because of the dependence on the expert's knowledge and experience and on the computer hardware and software. Therefore, an integrated algorithm is proposed based on GF-1 satellite image data, taking the loess area in the northern part of the Jinlinghe basin as an example. Firstly, the best band composition, 4-3-2, is chosen using the optimum index factor (OIF). Secondly, line edges are highlighted by a Gaussian high-pass filter and tensor voting. Finally, the Hough Transform is used to detect the geologic lineaments. Thematic maps of the geological structure in this area are mapped through the extraction of lineaments. The experimental results show that, influenced by the northern margin of the Qinling Mountains and the subsided Weihe Basin, the lineaments are mostly distributed along the terrain lines, mainly in the NW, NE, NNE, and ENE directions. This provides a reliable basis for analysing tectonic stress trends because of the agreement with the existing regional geological survey. The algorithm is practical and has high robustness, being little disturbed by human factors.

  17. Analysis and Application of Lineaments Extraction Using GF-1 Satellite Images in Loess Covered

    Science.gov (United States)

    Han, L.; Liu, Z.; Zhao, Z.; Ning, Y.

    2018-04-01

    Faults, folds and other tectonic features are geologically weak areas that form linear geomorphology as a result of erosion, appearing as lineaments on the earth's surface. Lineaments control the distribution of regional formations, groundwater, geothermal resources, etc., so they are an important indicator for evaluating the strength and stability of a geological structure. Current approaches rely mostly on visual interpretation and semi-automatic computer extraction, which are not only time-consuming but also labour-intensive, and it is difficult to guarantee accuracy because of the dependence on the expert's knowledge and experience and on the computer hardware and software. Therefore, an integrated algorithm is proposed based on GF-1 satellite image data, taking the loess area in the northern part of the Jinlinghe basin as an example. Firstly, the best band composition, 4-3-2, is chosen using the optimum index factor (OIF). Secondly, line edges are highlighted by a Gaussian high-pass filter and tensor voting. Finally, the Hough Transform is used to detect the geologic lineaments. Thematic maps of the geological structure in this area are mapped through the extraction of lineaments. The experimental results show that, influenced by the northern margin of the Qinling Mountains and the subsided Weihe Basin, the lineaments are mostly distributed along the terrain lines, mainly in the NW, NE, NNE, and ENE directions. This provides a reliable basis for analysing tectonic stress trends because of the agreement with the existing regional geological survey. The algorithm is practical and has high robustness, being little disturbed by human factors.
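
    The edge-enhance-then-Hough step can be sketched with OpenCV, substituting a Canny detector for the paper's Gaussian high-pass filter and tensor voting; the file name and thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

band = cv2.imread("gf1_composite.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
assert band is not None, "provide a band-composite image"

edges = cv2.Canny(band, 50, 150)                     # stand-in for the edge enhancement
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=40, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(band, (x1, y1), (x2, y2), 255, 1)   # overlay detected lineaments
cv2.imwrite("lineaments.png", band)
```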

  18. Construction, implementation and testing of an image identification system using computer vision methods for fruit flies with economic importance (Diptera: Tephritidae).

    Science.gov (United States)

    Wang, Jiang-Ning; Chen, Xiao-Lin; Hou, Xin-Wen; Zhou, Li-Bing; Zhu, Chao-Dong; Ji, Li-Qiang

    2017-07-01

    Many species of Tephritidae are damaging to fruit, which might negatively impact international fruit trade. Automatic or semi-automatic identification of fruit flies is greatly needed for diagnosing causes of damage and quarantine protocols for economically relevant insects. A fruit fly image identification system named AFIS1.0 has been developed using 74 species belonging to six genera, which include the majority of pests in the Tephritidae. The system combines automated image identification and manual verification, balancing operability and accuracy. AFIS1.0 integrates image analysis and an expert system into a content-based image retrieval framework. In the automatic identification module, AFIS1.0 gives candidate identification results. Afterwards users can make a manual selection by comparing unidentified images with a subset of images corresponding to the automatic identification result. The system uses Gabor surface features in automated identification and yielded an overall classification success rate of 87% to the species level in an independent multi-part image automatic identification test. The system is useful for users with or without specific expertise on Tephritidae in the task of rapid and effective identification of fruit flies. It brings the application of computer vision technology to fruit fly recognition much closer to production level. © 2016 Society of Chemical Industry.
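
    Gabor-based feature extraction in the spirit of AFIS1.0's "Gabor surface features" can be sketched with OpenCV's filter bank; the kernel parameters and pooled statistics are illustrative assumptions, not the published configuration.

```python
import cv2
import numpy as np

def gabor_features(gray):
    """Pooled responses of a small Gabor filter bank (4 orientations)."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        # ksize, sigma, theta, lambd, gamma (illustrative values)
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        response = cv2.filter2D(gray, cv2.CV_32F, kernel)
        feats += [response.mean(), response.std()]   # simple pooled statistics
    return np.array(feats)

gray = np.random.rand(64, 64).astype(np.float32)     # stand-in fly image
print(gabor_features(gray))
```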

  19. An expert image analysis system for chromosome analysis application

    International Nuclear Information System (INIS)

    Wu, Q.; Suetens, P.; Oosterlinck, A.; Van den Berghe, H.

    1987-01-01

    This paper reports a recent study on applying a knowledge-based system approach as a new attempt to solve the problem of chromosome classification. A theoretical framework for an expert image analysis system is proposed, based on this study. In this scheme, chromosome classification can be carried out under a hypothesize-and-verify paradigm by integrating a rule-based component, in which the expertise of chromosome karyotyping is formulated, with an existing image analysis system which uses conventional pattern recognition techniques. Results from the existing system can be used to generate hypotheses, and with the rule-based verification and modification procedures, improvement of the classification performance can be expected

  20. The Scientific Image in Behavior Analysis.

    Science.gov (United States)

    Keenan, Mickey

    2016-05-01

    Throughout the history of science, the scientific image has played a significant role in communication. With recent developments in computing technology, there has been an increase in the kinds of opportunities now available for scientists to communicate in more sophisticated ways. Within behavior analysis, though, we are only just beginning to appreciate the importance of going beyond the printing press to elucidate basic principles of behavior. The aim of this manuscript is to stimulate appreciation of both the role of the scientific image and the opportunities provided by a quick response code (QR code) for enhancing the functionality of the printed page. I discuss the limitations of imagery in behavior analysis ("Introduction"), and I show examples of what can be done with animations and multimedia for teaching philosophical issues that arise when teaching about private events ("Private Events 1 and 2"). Animations are also useful for bypassing ethical issues when showing examples of challenging behavior ("Challenging Behavior"). Each of these topics can be accessed only by scanning the QR code provided. This contingency has been arranged to help the reader embrace this new technology. In so doing, I hope to show its potential for going beyond the limitations of the printing press.

  1. Etching and image analysis of the microstructure in marble

    DEFF Research Database (Denmark)

    Alm, Ditte; Brix, Susanne; Howe-Rasmussen, Helle

    2005-01-01

    of grains exposed on that surface are measured on the microscope images using image analysis by the program Adobe Photoshop 7.0 with Image Processing Toolkit 4.0. The parameters measured by the program on microscope images of thin sections of two marble types are used for calculation of the coefficient...

  2. Parallel 3-D image processing for nuclear emulsion

    International Nuclear Information System (INIS)

    Nakano, Toshiyuki

    2001-01-01

    The history of the nuclear plate is explained. In Europe, the first nuclear plates were named pellicles and were covered with 600 μm of emulsion. In Japan, the Emulsion Cloud Chamber (ECC), using thin-emulsion (50 μm) nuclear plates, was developed in 1960. Then the semi-automatic analyzer (1971) and the automatic analyzer (1980), the Track Selector (TS), whose memory stored 16 layer images of 512 x 512 x 16 pixels, were developed. Moreover, the NTS (New Track Selector), a faster analyzer, was produced for the analysis of the results of the CHORUS experiment in 1996. Simultaneous readout of 16 layer images had been carried out, but the UTS (Ultra Track Selector) made possible the progressive processing of 16 layers of data and the determination of tracks at all angles. Direct detection of the tau neutrino (ντ) was studied by DONUT (FNAL E872) using the UTS and nuclear plates. The neutrino beam was produced by an 800 GeV proton beam hitting a fixed target. About 1100 neutrino reaction events were observed during six months of irradiation; 203 events were detected. Four examples are shown in this paper. The OPERA experiment by SK is explained. (S.Y.)

  3. A High-Fidelity Haze Removal Method Based on HOT for Visible Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Hou Jiang

    2016-10-01

    Full Text Available Spatially varying haze is a common feature of most satellite images currently used for land cover classification and mapping and can significantly affect image quality. In this paper, we present a high-fidelity haze removal method based on the Haze Optimized Transformation (HOT), comprising three steps: semi-automatic HOT transform, HOT perfection and percentile-based dark object subtraction (DOS). Since the digital numbers (DNs) of the red and blue bands are highly correlated in clear sky, the R-squared criterion is utilized to automatically search for the relatively clearest regions of the whole scene. After the HOT transform, spurious HOT responses are first masked out and filled by means of a four-direction scan and dynamic interpolation, and then homomorphic filtering is performed to compensate for the loss of HOT in large masked-out regions. To avoid patches and halo artifacts, a procedure called percentile DOS is implemented to eliminate the influence of haze. Scenes including various land cover types are selected to validate the proposed method, and a comparison analysis with HOT and the Background Suppressed Haze Thickness Index (BSHTI) is performed. Three quality assessment indicators are selected to evaluate the haze removal effect on image quality from different perspectives, and band profiles are utilized to analyze the spectral consistency. Experimental results verify the effectiveness of the proposed method for haze removal and its superiority in preserving the natural color of the objects themselves, enhancing local contrast, and maintaining the structural information of the original image.
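
    The percentile-based DOS step can be sketched in a few lines: take a low percentile of each band as the haze-added signal and subtract it. The percentile value here is an illustrative assumption, not the paper's tuned setting.

```python
import numpy as np

def percentile_dos(band, pct=1.0):
    """Subtract a low-percentile 'dark object' value from a band."""
    dark = np.percentile(band, pct)           # assumed haze-added radiance
    return np.clip(band - dark, 0, None)      # keep digital numbers non-negative

band = np.random.rand(100, 100) * 255         # stand-in band
corrected = percentile_dos(band)
```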

  4. Neural Network for Nanoscience Scanning Electron Microscope Image Recognition.

    Science.gov (United States)

    Modarres, Mohammad Hadi; Aversa, Rossella; Cozzini, Stefano; Ciancio, Regina; Leto, Angelo; Brandino, Giuseppe Piero

    2017-10-16

    In this paper we applied transfer learning techniques for image recognition, automatic categorization, and labeling of nanoscience images obtained by scanning electron microscope (SEM). Roughly 20,000 SEM images were manually classified into 10 categories to form a labeled training set, which can be used as a reference set for future applications of deep learning enhanced algorithms in the nanoscience domain. The categories chosen spanned the range of 0-Dimensional (0D) objects such as particles, 1D nanowires and fibres, 2D films and coated surfaces, and 3D patterned surfaces such as pillars. The training set was used to retrain on the SEM dataset and to compare many convolutional neural network models (Inception-v3, Inception-v4, ResNet). We obtained compatible results by performing a feature extraction of the different models on the same dataset. We performed additional analysis of the classifier on a second test set to further investigate the results both on particular cases and from a statistical point of view. Our algorithm was able to successfully classify around 90% of a test dataset consisting of SEM images, while reduced accuracy was found in the case of images at the boundary between two categories or containing elements of multiple categories. In these cases, the image classification did not identify a predominant category with a high score. We used the statistical outcomes from testing to deploy a semi-automatic workflow able to classify and label images generated by the SEM. Finally, a separate training was performed to determine the volume fraction of coherently aligned nanowires in SEM images. The results were compared with what was obtained using the Local Gradient Orientation method. This example demonstrates the versatility and the potential of transfer learning to address specific tasks of interest in nanoscience applications.
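
    The transfer-learning setup, a pretrained convolutional network used as a fixed feature extractor with a small retrained head for the 10 SEM categories, can be sketched with Keras; the choice of ResNet50 here, the input size, and the head are illustrative stand-ins, not the authors' exact configuration.

```python
import tensorflow as tf

# Pretrained backbone kept frozen; only the classification head is trained.
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      pooling="avg", input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 SEM categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # with a labeled SEM dataset
```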

  5. Application of automatic image analysis in wood science

    Science.gov (United States)

    Charles W. McMillin

    1982-01-01

    In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...

  6. Brain-inspired algorithms for retinal image analysis

    NARCIS (Netherlands)

    ter Haar Romeny, B.M.; Bekkers, E.J.; Zhang, J.; Abbasi-Sureshjani, S.; Huang, F.; Duits, R.; Dasht Bozorg, Behdad; Berendschot, T.T.J.M.; Smit-Ockeloen, I.; Eppenhof, K.A.J.; Feng, J.; Hannink, J.; Schouten, J.; Tong, M.; Wu, H.; van Triest, J.W.; Zhu, S.; Chen, D.; He, W.; Xu, L.; Han, P.; Kang, Y.

    2016-01-01

    Retinal image analysis is a challenging problem due to the precise quantification required and the huge numbers of images produced in screening programs. This paper describes a series of innovative brain-inspired algorithms for automated retinal image analysis, recently developed for the RetinaCheck

  7. From Pixels to Geographic Objects in Remote Sensing Image Analysis

    NARCIS (Netherlands)

    Addink, E.A.; Van Coillie, Frieke M.B.; Jong, Steven M. de

    Traditional image analysis methods are mostly pixel-based and use the spectral differences of landscape elements at the Earth surface to classify these elements or to extract element properties from the Earth Observation image. Geographic object-based image analysis (GEOBIA) has received

  8. An image scanner for real time analysis of spark chamber images

    International Nuclear Information System (INIS)

    Cesaroni, F.; Penso, G.; Locci, A.M.; Spano, M.A.

    1975-01-01

    This note describes the semiautomatic scanning system at LNF for the analysis of spark chamber images. From the projection of the images onto the scanner table, the trajectory in real space is reconstructed

  9. Textural features for radar image analysis

    Science.gov (United States)

    Shanmugan, K. S.; Narayanan, V.; Frost, V. S.; Stiles, J. A.; Holtzman, J. C.

    1981-01-01

    Texture is seen as an important spatial feature useful for identifying objects or regions of interest in an image. While textural features have been widely used in analyzing a variety of photographic images, they have not been used in processing radar images. A procedure for extracting a set of textural features for characterizing small areas in radar images is presented, and it is shown that these features can be used in classifying segments of radar images corresponding to different geological formations.
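
    Although the paper predates today's toolkits, gray-level co-occurrence statistics of the kind it describes can be computed with scikit-image; the sketch below (assuming a uint8 image patch) is one plausible rendering of such texture features, not the authors' procedure:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in skimage < 0.19

    def texture_features(patch):
        """Haralick-style features for one small radar image patch (uint8)."""
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}
    ```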

  10. Analysis of RTM extended images for VTI media

    KAUST Repository

    Li, Vladimir; Tsvankin, Ilya; Alkhalifah, Tariq Ali

    2015-01-01

    velocity analysis remain generally valid in the extended image space for complex media. The dependence of RMO on errors in the anisotropy parameters provides essential insights for anisotropic wavefield tomography using extended images.

  11. Direct identification of fungi using image analysis

    DEFF Research Database (Denmark)

    Dørge, Thorsten Carlheim; Carstensen, Jens Michael; Frisvad, Jens Christian

    1999-01-01

    Filamentous fungi have often been characterized, classified or identified with a major emphasis on macromorphological characters, i.e. the size, texture and color of fungal colonies grown on one or more identification media. This approach has been rejected by several taxonomists because of the subjectivity in the visual evaluation and quantification (if any) of such characters and the apparent large variability of the features. We present an image analysis approach for objective identification and classification of fungi. The approach is exemplified by several isolates of nine different species of the genus Penicillium, known to be very difficult to identify correctly. The fungi were incubated on YES and CYA for one week at 25 °C (3-point inoculation) in 9 cm Petri dishes. The cultures are placed under a camera where a digital image of the front of the colonies is acquired under optimal illumination...

  12. Image sequence analysis in nuclear medicine: (1) Parametric imaging using statistical modelling

    International Nuclear Information System (INIS)

    Liehn, J.C.; Hannequin, P.; Valeyre, J.

    1989-01-01

    This is a review of parametric imaging methods in Nuclear Medicine. A parametric image is an image in which each pixel value is a function of the values of the same pixel across an image sequence. The Local Model Method fits each pixel's time-activity curve with a model whose parameter values form the parametric images. The Global Model Method models the changes between two images and is applied to image comparison. For both methods, the different models, the identification criterion, the optimization methods and the statistical properties of the images are discussed. The analysis of one or more parametric images is performed using 1D or 2D histograms. Statistically significant parametric images (images of significant variances, amplitudes and differences) are also proposed.
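
    A minimal sketch of the Local Model Method, assuming a mono-exponential model and SciPy's curve_fit (illustrative only; the review itself discusses several models and identification criteria):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mono_exp(t, a, k):
        return a * np.exp(-k * t)

    def parametric_images(sequence, t):
        """Fit each pixel's time-activity curve; sequence has shape (T, H, W).

        Returns amplitude and rate-constant parametric images.
        """
        T, H, W = sequence.shape
        amp = np.zeros((H, W))
        rate = np.zeros((H, W))
        for i in range(H):
            for j in range(W):
                try:
                    (a, k), _ = curve_fit(mono_exp, t, sequence[:, i, j],
                                          p0=(sequence[0, i, j], 0.1))
                    amp[i, j], rate[i, j] = a, k
                except RuntimeError:
                    pass  # leave zeros where the fit fails to converge
        return amp, rate
    ```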

  13. Biostatistical analysis of quantitative immunofluorescence microscopy images.

    Science.gov (United States)

    Giles, C; Albrecht, M A; Lam, V; Takechi, R; Mamo, J C

    2016-12-01

    Semiquantitative immunofluorescence microscopy has become a key methodology in biomedical research. Typical statistical workflows are considered in the context of avoiding pseudo-replication and marginalising experimental error. However, immunofluorescence microscopy naturally generates hierarchically structured data that can be leveraged to improve statistical power and enrich biological interpretation. Herein, we describe a robust distribution fitting procedure and compare several statistical tests, outlining their potential advantages/disadvantages in the context of biological interpretation. Further, we describe tractable procedures for power analysis that incorporate the underlying distribution, sample size and number of images captured per sample. The procedures outlined have significant potential for increasing understanding of biological processes and decreasing both ethical and financial burden through experimental optimization. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
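
    A random-intercept mixed model is one tractable way to respect the image-within-sample hierarchy the authors describe; the statsmodels sketch below uses hypothetical column and file names:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # df: one row per image; columns intensity (mean fluorescence), treatment,
    # and animal (the sample each image came from) are assumed names.
    df = pd.read_csv("immunofluorescence.csv")

    # A random intercept per animal respects the hierarchy (images nested in
    # samples) and avoids treating images as independent replicates.
    model = smf.mixedlm("intensity ~ treatment", data=df, groups=df["animal"])
    print(model.fit().summary())
    ```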

  14. Computerised image analysis of biocrystallograms originating from agricultural products

    DEFF Research Database (Denmark)

    Andersen, Jens-Otto; Henriksen, Christian B.; Laursen, J.

    1999-01-01

    Procedures are presented for computerised image analysis of biocrystallogram images, originating from biocrystallization investigations of agricultural products. The biocrystallization method is based on the crystallographic phenomenon that when adding biological substances, such as plant extracts ... on up to eight parameters indicated strong relationships, with R2 up to 0.98. It is concluded that the procedures were able to discriminate the seven groups of images, and are applicable for biocrystallization investigations of agricultural products. Perspectives for the application of image analysis...

  15. Image analysis and microscopy: a useful combination

    Directory of Open Access Journals (Sweden)

    Pinotti L.

    2009-01-01

    Full Text Available The TSE Roadmap published in 2005 (DG for Health and Consumer Protection, 2005) suggests that short and medium term (2005-2009) amendments to BSE control policy should include “a relaxation of certain measures of the current total feed ban when certain conditions are met”. The same document noted that “the starting point when revising the current feed ban provisions should be risk-based but at the same time taking into account the control tools in place to evaluate and ensure the proper implementation of this feed ban”. The clear implication is that adequate analytical methods to detect constituents of animal origin in feedstuffs are required. The official analytical method for the detection of constituents of animal origin in feedstuffs is the microscopic examination technique as described in Commission Directive 2003/126/EC of 23 December 2003 [OJ L 339, 24.12.2003, 78]. Although the microscopic method is usually able to distinguish fish from land animal material, it is often unable to distinguish between different terrestrial animals. Fulfilment of the requirements of Regulation 1774/2002/EC, laying down health rules concerning animal by-products not intended for human consumption, clearly implies that it must be possible to identify the origin of animal materials at higher taxonomic levels than in the past. Thus improvements in all methods of detecting constituents of animal origin are required, including the microscopic method. This article examines the problem of meat and bone meal in animal feeds, and the use of microscopic methods in association with computer image analysis to identify the source species of these feedstuff contaminants. Image processing, integrated with morphometric measurements, can provide accurate and reliable results and can be a very useful aid to the analyst in the characterization, analysis and control of feedstuffs.

  16. Forensic image analysis - CCTV distortion and artefacts.

    Science.gov (United States)

    Seckiner, Dilan; Mallett, Xanthé; Roux, Claude; Meuwly, Didier; Maynard, Philip

    2018-04-01

    As a result of the worldwide deployment of surveillance cameras, authorities have gained a powerful tool that captures footage of the activities of people in public areas. Surveillance cameras allow continuous monitoring of the area and allow footage to be obtained for later use if a criminal or other act of interest occurs. Following this, a forensic practitioner or expert witness can be required to analyse the footage of the Person of Interest. The examination ultimately aims at evaluating the strength of evidence at the source and activity levels. In this paper, both source and activity levels are inferred from the trace, obtained in the form of CCTV footage. The source level alludes to features observed within the anatomy and gait of an individual, whilst the activity level relates to the activity undertaken by the individual within the footage. The strength of evidence depends on the value of the information recorded: the activity level is robust, yet the source level requires further development. It is therefore suggested that the camera and the associated distortions should be assessed first and foremost and, where possible, quantified, to determine the level of each type of distortion present within the footage. A review of 'forensic image analysis' is presented here. It outlines the image distortion types and details the limitations of differing surveillance camera systems. The aim is to highlight the various types of distortion present, particularly in surveillance footage, as well as to address gaps in the current literature in relation to the assessment of CCTV distortions in tandem with gait analysis. Future work will consider anatomical assessment from surveillance footage. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    International Nuclear Information System (INIS)

    STOYANOVA, R.S.; OCHS, M.F.; BROWN, T.R.; ROONEY, W.D.; LI, X.; LEE, J.H.; SPRINGER, C.S.

    1999-01-01

    Standard analysis methods for processing inversion recovery MR images traditionally have used single pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They attribute the 3 images to be spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content
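
    PCA of an image series of this kind reduces to an SVD of the pixel-by-time data matrix; a minimal NumPy sketch (not the authors' code) follows:

    ```python
    import numpy as np

    def pca_images(stack, n_components=3):
        """PCA of an image series; stack has shape (n_images, H, W).

        Returns component images (e.g. GM/WM/CSF-like maps) and their scores.
        """
        n, h, w = stack.shape
        X = stack.reshape(n, -1).astype(np.float64)
        X -= X.mean(axis=0)                      # center each pixel's recovery curve
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        components = Vt[:n_components].reshape(n_components, h, w)
        scores = U[:, :n_components] * S[:n_components]
        return components, scores
    ```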

  18. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for detecting pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT) and nuclear medicine (NM) studies. This fluoroscopic examination takes only about 2 seconds for a perfusion study, with only a low radiation dose to the patient, involving no preparation, no radioactive isotopes, and no contrast media.

  19. Image based SAR product simulation for analysis

    Science.gov (United States)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR input image; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages are discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit; the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  20. Machine learning based analysis of cardiovascular images

    NARCIS (Netherlands)

    Wolterink, JM

    2017-01-01

    Cardiovascular diseases (CVDs), including coronary artery disease (CAD) and congenital heart disease (CHD) are the global leading cause of death. Computed tomography (CT) and magnetic resonance imaging (MRI) allow non-invasive imaging of cardiovascular structures. This thesis presents machine

  1. Analysis of Pregerminated Barley Using Hyperspectral Image Analysis

    DEFF Research Database (Denmark)

    Arngren, Morten; Hansen, Per Waaben; Eriksen, Birger

    2011-01-01

    imaging system in a mathematical modeling framework to identify pregerminated barley at an early stage of approximately 12 h of pregermination. Our model only assigns pregermination as the cause for a single kernel’s lack of germination and is unable to identify dormancy, kernel damage etc. The analysis is based on more than 750 Rosalina barley kernels pregerminated for 8 different durations between 0 and 60 h based on the BRF method. Regerminating the kernels reveals a grouping of the pregerminated kernels into three categories: normal, delayed and limited germination. Our model employs a supervised...

  2. Image quality analysis of digital mammographic equipments

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, P.; Pascual, A.; Verdu, G. [Valencia Univ. Politecnica, Chemical and Nuclear Engineering Dept. (Spain); Rodenas, F. [Valencia Univ. Politecnica, Applied Mathematical Dept. (Spain); Campayo, J.M. [Valencia Univ. Hospital Clinico, Servicio de Radiofisica y Proteccion Radiologica (Spain); Villaescusa, J.I. [Hospital Clinico La Fe, Servicio de Proteccion Radiologica, Valencia (Spain)

    2006-07-01

    The image quality assessment of a radiographic phantom image is one of the fundamental points in a complete quality control programme. The end result of the whole process must be an image of appropriate quality to support a reliable diagnosis. Nowadays, digital radiographic equipment is replacing traditional film-screen equipment, and it is necessary to update the parameters that guarantee the quality of the process. Contrast-detail phantoms are applied in digital radiography to study the threshold contrast-detail sensitivity at the operating conditions of the equipment. The phantom studied in this work is C.D.M.A.M. 3.4, which facilitates the evaluation of image contrast and detail resolution. One of the most widely used indexes for measuring image quality in an objective way is the Image Quality Figure (I.Q.F.). This parameter quantifies image quality by taking into account the contrast and detail resolution of the image analysed. The contrast-detail curve is also useful as a measure of image quality, because it is a graphical representation in which the hole thickness and diameter are plotted for each contrast-detail combination detected in the radiographic image of the phantom. It allows the performance of different radiographic image systems to be compared for phantom images acquired under the same exposure conditions. The aim of this work is to study the image quality of different images of the contrast-detail phantom C.D.M.A.M. 3.4, carrying out automatic detection of the contrast-detail combinations and establishing a parameter that characterizes mammographic image quality in an objective way. This is useful for comparing images obtained on different digital mammographic equipment and studying the performance of the equipment. (authors)

  3. Image quality analysis of digital mammographic equipments

    International Nuclear Information System (INIS)

    Mayo, P.; Pascual, A.; Verdu, G.; Rodenas, F.; Campayo, J.M.; Villaescusa, J.I.

    2006-01-01

    The image quality assessment of a radiographic phantom image is one of the fundamental points in a complete quality control programme. The end result of the whole process must be an image of appropriate quality to support a reliable diagnosis. Nowadays, digital radiographic equipment is replacing traditional film-screen equipment, and it is necessary to update the parameters that guarantee the quality of the process. Contrast-detail phantoms are applied in digital radiography to study the threshold contrast-detail sensitivity at the operating conditions of the equipment. The phantom studied in this work is C.D.M.A.M. 3.4, which facilitates the evaluation of image contrast and detail resolution. One of the most widely used indexes for measuring image quality in an objective way is the Image Quality Figure (I.Q.F.). This parameter quantifies image quality by taking into account the contrast and detail resolution of the image analysed. The contrast-detail curve is also useful as a measure of image quality, because it is a graphical representation in which the hole thickness and diameter are plotted for each contrast-detail combination detected in the radiographic image of the phantom. It allows the performance of different radiographic image systems to be compared for phantom images acquired under the same exposure conditions. The aim of this work is to study the image quality of different images of the contrast-detail phantom C.D.M.A.M. 3.4, carrying out automatic detection of the contrast-detail combinations and establishing a parameter that characterizes mammographic image quality in an objective way. This is useful for comparing images obtained on different digital mammographic equipment and studying the performance of the equipment. (authors)
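
    As a rough illustration, one common convention computes an I.Q.F.-style score by summing contrast times the smallest detected detail diameter over the phantom's contrast levels; exact definitions vary between phantoms, so the following is a sketch only:

    ```python
    def image_quality_figure(detected):
        """detected: mapping {contrast (thickness, mm): smallest detected diameter (mm)}.

        One common convention sums contrast x smallest detected diameter over
        the contrast levels; a lower score then means better threshold
        contrast-detail sensitivity. Definitions differ between phantoms, so
        treat this as a sketch, not the paper's exact formula.
        """
        return sum(c * d for c, d in detected.items())

    # Example: three contrast levels of a CDMAM-like phantom (illustrative numbers).
    print(image_quality_figure({0.06: 0.5, 0.13: 0.25, 0.25: 0.16}))
    ```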

  4. Machine learning approaches in medical image analysis

    DEFF Research Database (Denmark)

    de Bruijne, Marleen

    2016-01-01

    Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.

  5. Principal component analysis of psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seems to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...

  6. An application of image processing techniques in computed tomography image analysis

    DEFF Research Database (Denmark)

    McEvoy, Fintan

    2007-01-01

    number of animals and image slices, automation of the process was desirable. The open-source and free image analysis program ImageJ was used. A macro procedure was created that provided the required functionality. The macro performs a number of basic image processing procedures. These include an initial process designed to remove the scanning table from the image and to center the animal in the image. This is followed by placement of a vertical line segment from the mid point of the upper border of the image to the image center. Measurements are made between automatically detected outer and inner boundaries of subcutaneous adipose tissue along this line segment. This process was repeated as the image was rotated (with the line position remaining unchanged) so that measurements around the complete circumference were obtained. Additionally, an image was created showing all detected boundary points so...

  7. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    Science.gov (United States)

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye

    2017-10-01

    Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task in order to quantify arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    Science.gov (United States)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  9. Semi-Automatic Measurement of the Airway Dimension by Computed Tomography Using the Full-Width-Half-Maximum Method: a Study of the Measurement Accuracy according to the Orientation of an Artificial Airway

    International Nuclear Information System (INIS)

    Kim, Nam Kug; Seo, Joon Beom; Song, Koun Sik; Chae, Eun Jin; Kang, Suk Ho

    2008-01-01

    To develop an algorithm to measure the dimensions of an airway oriented obliquely on volumetric CT, and to assess the effect of the imaging parameters on the correct measurement of the airway dimension. An airway phantom with 11 poly-acryl tubes of various lumen diameters and wall thicknesses was scanned using a 16-MDCT (multidetector CT) at various tilt angles (0°, 30°, 45°, and 60°). The CT images were reconstructed with various reconstruction kernels and thicknesses. The axis of each airway was determined using a 3D thinning algorithm, and images perpendicular to the axis were reconstructed. The luminal radius and wall thickness were measured by the full-width-half-maximum method. The influence of the CT parameters, airway size and obliquity on the measured radius and wall thickness was assessed by comparing the actual dimension of each tube with the estimated values. The 3D thinning algorithm correctly determined the axis of the oblique airway in all tubes (mean error: 0.91 ± 0.82°). A sharper reconstruction kernel, a thicker image and a larger tilt angle of the airway axis resulted in a significant decrease of the measured wall thickness and an increase of the measured luminal radius. Use of a standard kernel and a 0.75-mm slice thickness resulted in the most accurate measurement of the airway dimension, independent of obliquity. The airway obliquity and the imaging parameters have a strong influence on the accuracy of the airway wall measurement. For accurate measurement of airway thickness, the CT images should be reconstructed with a standard kernel and a 0.75 mm slice thickness
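
    The full-width-half-maximum measurement on a wall-crossing intensity profile can be sketched as follows (assuming the peak lies inside the profile; not the authors' implementation):

    ```python
    import numpy as np

    def fwhm(profile, spacing=1.0):
        """Full-width-half-maximum of a 1D intensity profile across the airway wall.

        Linear interpolation locates the half-maximum crossings on both sides
        of the peak; spacing converts samples to millimetres.
        """
        profile = np.asarray(profile, dtype=float)
        peak = profile.argmax()
        half = (profile[peak] + profile.min()) / 2.0
        # walk left and right from the peak to the half-maximum crossings
        left = peak
        while left > 0 and profile[left] > half:
            left -= 1
        right = peak
        while right < len(profile) - 1 and profile[right] > half:
            right += 1
        # fractional crossing positions by linear interpolation
        xl = left + (half - profile[left]) / (profile[left + 1] - profile[left])
        xr = right - 1 + (half - profile[right - 1]) / (profile[right] - profile[right - 1])
        return (xr - xl) * spacing
    ```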

  10. Towards automatic quantitative analysis of cardiac MR perfusion images

    NARCIS (Netherlands)

    Breeuwer, M.; Quist, M.; Spreeuwers, Lieuwe Jan; Paetsch, I.; Al-Saadi, N.; Nagel, E.

    2001-01-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and reliable automatic image analysis methods. This paper focuses on the automatic evaluation of

  11. Subsurface offset behaviour in velocity analysis with extended reflectivity images

    NARCIS (Netherlands)

    Mulder, W.A.

    2013-01-01

    Migration velocity analysis with the constant-density acoustic wave equation can be accomplished by the focusing of extended migration images, obtained by introducing a subsurface shift in the imaging condition. A reflector in a wrong velocity model will show up as a curve in the extended image. In

  12. Visual Analytics Applied to Image Analysis : From Segmentation to Classification

    NARCIS (Netherlands)

    Rauber, Paulo

    2017-01-01

    Image analysis is the field of study concerned with extracting information from images. This field is immensely important for commercial and scientific applications, from identifying people in photographs to recognizing diseases in medical images. The goal behind the work presented in this thesis is

  13. Mesh Processing in Medical-Image Analysis-a Tutorial

    DEFF Research Database (Denmark)

    Levine, Joshua A.; Paulsen, Rasmus Reinhold; Zhang, Yongjie

    2012-01-01

    Medical-image analysis requires an understanding of sophisticated scanning modalities, constructing geometric models, building meshes to represent domains, and downstream biological applications. These four steps form an image-to-mesh pipeline. For research in this field to progress, the imaging...

  14. Intrasubject registration for change analysis in medical imaging

    NARCIS (Netherlands)

    Staring, M.

    2008-01-01

    Image matching is important for the comparison of medical images. Comparison is of clinical relevance for the analysis of differences due to changes in the health of a patient. For example, when a disease is imaged at two time points, then one wants to know if it is stable, has regressed, or

  15. Hierarchical layered and semantic-based image segmentation using ergodicity map

    Science.gov (United States)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

    Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior compared to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogenous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects

  16. Image quality preferences among radiographers and radiologists. A conjoint analysis

    International Nuclear Information System (INIS)

    Ween, Borgny; Kristoffersen, Doris Tove; Hamilton, Glenys A.; Olsen, Dag Rune

    2005-01-01

    Purpose: The aim of this study was to investigate the image quality preferences among radiographers and radiologists. The radiographers' preferences are mainly related to technical parameters, whereas radiologists assess image quality based on diagnostic value. Methods: A conjoint analysis was undertaken to survey image quality preferences; the study included 37 respondents: 19 radiographers and 18 radiologists. Digital urograms were post-processed into 8 images with different properties of image quality for 3 different patients. The respondents were asked to rank the images according to their personally perceived subjective image quality. Results: Nearly half of the radiographers and radiologists were consistent in their ranking of the image characterised as 'very best image quality'. The analysis showed, moreover, that chosen filtration level and image intensity were responsible for 72% and 28% of the preferences, respectively. The corresponding figures for each of the two professions were 76% and 24% for the radiographers, and 68% and 32% for the radiologists. In addition, there were larger variations in image preferences among the radiologists, as compared to the radiographers. Conclusions: Radiographers revealed a more consistent preference than the radiologists with respect to image quality. There is a potential for image quality improvement by developing sets of image property criteria

  17. Lexical Sentiment Analysis in Slovenian Texts

    OpenAIRE

    VOLČANŠEK, MATEJA

    2015-01-01

    The goal of this thesis is to create a sentiment dictionary for the Slovenian language which can be used in lexical methods for automatic sentiment analysis. We start from a sentiment dictionary for the English language, translate it semi-automatically to Slovenian and curate its content. We test the performance of using the translated dictionary for automated lexical sentiment analysis on a corpus of 5000 manually annotated Slovenian news articles gathered from the main Slovenian news por...

  18. Convergence analysis in near-field imaging

    International Nuclear Information System (INIS)

    Bao, Gang; Li, Peijun

    2014-01-01

    This paper is devoted to the mathematical analysis of the direct and inverse modeling of the diffraction by a perfectly conducting grating surface in the near-field regime. It is motivated by our effort to analyze recent significant numerical results, in order to solve a class of inverse rough surface scattering problems in near-field imaging. In a model problem, the diffractive grating surface is assumed to be a small and smooth deformation of a plane surface. On the basis of the variational method, the direct problem is shown to have a unique weak solution. An analytical solution is introduced as a convergent power series in the deformation parameter by using the transformed field and Fourier series expansions. A local uniqueness result is proved for the inverse problem where only a single incident field is needed. On the basis of the analytic solution of the direct problem, an explicit reconstruction formula is presented for recovering the grating surface function with resolution beyond the Rayleigh criterion. Error estimates for the reconstructed grating surface are established with fully revealed dependence on such quantities as the surface deformation parameter, measurement distance, noise level of the scattering data, and regularity of the exact grating surface function. (paper)

  19. IMAGE ANALYSIS FOR MODELLING SHEAR BEHAVIOUR

    Directory of Open Access Journals (Sweden)

    Philippe Lopez

    2011-05-01

    Full Text Available Through laboratory research performed over the past ten years, many of the critical links between fracture characteristics and hydromechanical and mechanical behaviour have been established for individual fractures. One of the remaining challenges at the laboratory scale is to directly link fracture morphology to shear behaviour under changes in stress and shear direction. A series of laboratory experiments was performed on cement mortar replicas of a granite sample with a natural fracture perpendicular to the axis of the core. Results show that there is a strong relationship between the fracture's geometry, its mechanical behaviour under shear stress, and the resulting damage. Image analysis, geostatistical, stereological and directional data techniques are applied in combination to the experimental data. The results highlight the role of the geometric characteristics of the fracture surfaces (surface roughness, size, shape, locations and orientations of the asperities to be damaged) in shear behaviour. A notable improvement in the understanding of shear is that shear behaviour is controlled by the apparent dip, in the shear direction, of the elementary facets forming the fracture.

  20. Measure by image analysis of industrial radiographs

    International Nuclear Information System (INIS)

    Brillault, B.

    1988-01-01

    A digital radiographic image processing system for non-destructive testing is intended to provide the expert with a computer tool to precisely quantify radiographic images. The author describes the main problems, from image formation to image characterization, and stresses the need to define a precise procedure in order to automate the system. Some examples illustrate the efficiency of digital processing of radiographic images

  1. MORPHOLOGY BY IMAGE ANALYSIS K. Belaroui and M. N Pons ...

    African Journals Online (AJOL)

    31 Dec. 2012 ... Keywords: Characterization; particle size; morphology; image analysis; porous media. 1. INTRODUCTION. The power of image analysis as ... into a digital image by means of an analog-to-digital (A/D) converter. The image points are arranged on a square grid, ...

  2. PIZZARO: Forensic analysis and restoration of image and video data

    Czech Academy of Sciences Publication Activity Database

    Kamenický, Jan; Bartoš, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozámský, Adam; Saic, Stanislav; Šroubek, Filip; Šorel, Michal; Zita, Aleš; Zitová, Barbara; Šíma, Z.; Švarc, P.; Hořínek, J.

    2016-01-01

    Vol. 264, No. 1 (2016), pp. 153-166 ISSN 0379-0738 R&D Projects: GA MV VG20102013064; GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords: Image forensic analysis * Image restoration * Image tampering detection * Image source identification Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.989, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/kamenicky-0459504.pdf

  3. A segmentation and point-matching enhanced efficient deformable image registration method for dose accumulation between HDR CT images

    International Nuclear Information System (INIS)

    Zhen, Xin; Chen, Haibin; Zhou, Linghong; Yan, Hao; Jiang, Steve; Jia, Xun; Gu, Xuejun; Mell, Loren K; Yashar, Catheryn M; Cervino, Laura

    2015-01-01

    Deformable image registration (DIR) of fractional high-dose-rate (HDR) CT images is challenging due to the presence of applicators in the brachytherapy image. Point-to-point correspondence fails because of the undesired deformation vector fields (DVF) propagated from the applicator region (AR) to the surrounding tissues, which can potentially introduce significant DIR errors in dose mapping. This paper proposes a novel segmentation and point-matching enhanced efficient DIR (named SPEED) scheme to facilitate dose accumulation among HDR treatment fractions. In SPEED, a semi-automatic seed point generation approach is developed to obtain the incremented fore/background point sets that feed the random walks algorithm, which is used to segment and remove the AR, leaving empty AR cavities in the HDR CT images. A feature-based ‘thin-plate-spline robust point matching’ algorithm is then employed for AR cavity surface point matching. From the resulting mapping, a DVF defined on each voxel is estimated by B-spline approximation, which serves as the initial DVF for the subsequent Demons-based DIR between the AR-free HDR CT images. The DVF calculated via Demons, combined with the initial one, serves as the final DVF to map doses between HDR fractions. The segmentation and registration accuracy is quantitatively assessed using nine clinical HDR cases from three gynecological cancer patients. The quantitative analysis and visual inspection of the DIR results indicate that SPEED can suppress the impact of the applicator on DIR and accurately register HDR CT images as well as deform and accumulate interfractional HDR doses. (paper)
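
    The step that turns matched cavity-surface points into a dense initial DVF can be approximated with thin-plate-spline interpolation; the SciPy sketch below is an assumption-laden stand-in for the B-spline approximation used in SPEED, not the authors' code:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator  # SciPy >= 1.7

    def initial_dvf(src_pts, dst_pts, shape):
        """Dense initial DVF from matched cavity-surface points (sketch).

        src_pts, dst_pts: (N, 3) matched point sets; shape: image dimensions.
        Returns a (D, H, W, 3) displacement field by thin-plate-spline
        interpolation. In practice one would evaluate on a coarse grid and
        upsample, since dense evaluation is memory-hungry.
        """
        rbf = RBFInterpolator(src_pts, dst_pts - src_pts,
                              kernel='thin_plate_spline')
        grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                    indexing='ij'), axis=-1)
        return rbf(grid.reshape(-1, 3)).reshape(*shape, 3)
    ```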

  4. A segmentation and point-matching enhanced efficient deformable image registration method for dose accumulation between HDR CT images

    Science.gov (United States)

    Zhen, Xin; Chen, Haibin; Yan, Hao; Zhou, Linghong; Mell, Loren K.; Yashar, Catheryn M.; Jiang, Steve; Jia, Xun; Gu, Xuejun; Cervino, Laura

    2015-04-01

    Deformable image registration (DIR) of fractional high-dose-rate (HDR) CT images is challenging due to the presence of applicators in the brachytherapy image. Point-to-point correspondence fails because of the undesired deformation vector fields (DVF) propagated from the applicator region (AR) to the surrounding tissues, which can potentially introduce significant DIR errors in dose mapping. This paper proposes a novel segmentation and point-matching enhanced efficient DIR (named SPEED) scheme to facilitate dose accumulation among HDR treatment fractions. In SPEED, a semi-automatic seed point generation approach is developed to obtain the incremented fore/background point sets that feed the random walks algorithm, which is used to segment and remove the AR, leaving empty AR cavities in the HDR CT images. A feature-based ‘thin-plate-spline robust point matching’ algorithm is then employed for AR cavity surface point matching. From the resulting mapping, a DVF defined on each voxel is estimated by B-spline approximation, which serves as the initial DVF for the subsequent Demons-based DIR between the AR-free HDR CT images. The DVF calculated via Demons, combined with the initial one, serves as the final DVF to map doses between HDR fractions. The segmentation and registration accuracy is quantitatively assessed using nine clinical HDR cases from three gynecological cancer patients. The quantitative analysis and visual inspection of the DIR results indicate that SPEED can suppress the impact of the applicator on DIR and accurately register HDR CT images as well as deform and accumulate interfractional HDR doses.

  5. New approaches in intelligent image analysis techniques, methodologies and applications

    CERN Document Server

    Nakamatsu, Kazumi

    2016-01-01

    This book presents an Introduction and 11 independent chapters devoted to various new approaches to intelligent image processing and analysis. The book also presents new methods, algorithms and applied systems for intelligent image processing, on the following basic topics: Methods for Hierarchical Image Decomposition; Intelligent Digital Signal Processing and Feature Extraction; Data Clustering and Visualization via Echo State Networks; Clustering of Natural Images in Automatic Image Annotation Systems; Control System for Remote Sensing Image Processing; Tissue Segmentation of MR Brain Images Sequence; Kidney Cysts Segmentation in CT Images; Audio Visual Attention Models in Mobile Robots Navigation; Local Adaptive Image Processing; Learning Techniques for Intelligent Access Control; Resolution Improvement in Acoustic Maps. Each chapter is self-contained with its own references. Some of the chapters are devoted to theoretical aspects while the others present the practical aspects and the...

  6. Analysis of engineering drawings and raster map images

    CERN Document Server

    Henderson, Thomas C

    2013-01-01

    Presents up-to-date methods and algorithms for the automated analysis of engineering drawings and digital cartographic maps Discusses automatic engineering drawing and map analysis techniques Covers detailed accounts of the use of unsupervised segmentation algorithms to map images

  7. ANALYSIS OF SST IMAGES BY WEIGHTED ENSEMBLE TRANSFORM KALMAN FILTER

    OpenAIRE

    Sai , Gorthi; Beyou , Sébastien; Memin , Etienne

    2011-01-01

    This paper presents a novel, efficient scheme for the analysis of Sea Surface Temperature (SST) ocean images. We consider the estimation of the velocity fields and vorticity values from a sequence of oceanic images. The contribution of this paper lies in proposing a novel, robust and simple approach based on the Weighted Ensemble Transform Kalman filter (WETKF) data assimilation technique for the analysis of real SST images, that may contain coast regions or large areas of ...

  8. Validation of an enhanced knowledge-based method for segmentation and quantitative analysis of intrathoracic airway trees from three-dimensional CT images

    International Nuclear Information System (INIS)

    Sonka, M.; Park, W.; Hoffman, E.A.

    1995-01-01

    Accurate assessment of airway physiology, evaluated in terms of geometric changes, is critically dependent upon the accurate imaging and image segmentation of the three-dimensional airway tree structure. The authors have previously reported a knowledge-based method for three-dimensional airway tree segmentation from high resolution CT (HRCT) images. Here, they report a substantially improved version of the method. In the current implementation, the method consists of several stages. First, the lung borders are automatically determined in the three-dimensional set of HRCT data. The primary airway tree is semi-automatically identified. In the next stage, potential airways are determined in individual CT slices using a rule-based system that uses contextual information and a priori knowledge about pulmonary anatomy. Using three-dimensional connectivity properties of the pulmonary airway tree, the three-dimensional tree is constructed from the set of adjacent slices. The method's performance and accuracy were assessed in five 3D HRCT canine images. Computer-identified airways matched 226/258 observer-defined airways (87.6%); the computer method failed to detect the airways in the remaining 32 locations. By visual assessment of rendered airway trees, the experienced observers judged the computer-detected airway trees as highly realistic

  9. An introduction to diffusion tensor image analysis.

    Science.gov (United States)

    O'Donnell, Lauren J; Westin, Carl-Fredrik

    2011-04-01

    Diffusion tensor magnetic resonance imaging (DTI) is a relatively new technology that is popular for imaging the white matter of the brain. This article provides a basic and broad overview of DTI to enable the reader to develop an intuitive understanding of these types of data, and an awareness of their strengths and weaknesses. Copyright © 2011 Elsevier Inc. All rights reserved.
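
    A standard quantity derived from such data is the fractional anisotropy (FA) of the diffusion tensor, computed from its eigenvalues; a minimal NumPy sketch of the standard formula follows:

    ```python
    import numpy as np

    def fractional_anisotropy(evals):
        """FA from the three diffusion-tensor eigenvalues (shape (..., 3))."""
        l1, l2, l3 = np.moveaxis(np.asarray(evals, dtype=float), -1, 0)
        md = (l1 + l2 + l3) / 3.0                    # mean diffusivity
        num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
        den = l1 ** 2 + l2 ** 2 + l3 ** 2
        return np.sqrt(1.5 * num / np.where(den == 0, 1, den))
    ```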

  10. Biomedical Image Analysis: Rapid prototyping with Mathematica

    NARCIS (Netherlands)

    Haar Romenij, ter B.M.; Almsick, van M.A.

    2004-01-01

    Digital acquisition techniques have caused an explosion in the production of medical images, especially with the advent of multi-slice CT and volume MRI. One third of the financial investment in a modern hospital's equipment is dedicated to imaging. Emerging screening programs add to this flood of

  11. Multi-spectral Image Analysis for Astaxanthin Coating Classification

    DEFF Research Database (Denmark)

    Ljungqvist, Martin Georg; Ersbøll, Bjarne Kjær; Nielsen, Michael Engelbrecht

    2011-01-01

    Industrial quality inspection using image analysis on astaxanthin coating in aquaculture feed pellets is of great importance for automatic production control. In this study multi-spectral image analysis of pellets was performed using LDA, QDA, SNV and PCA on pixel level and mean value of pixels...

  12. Geographic Object-Based Image Analysis: Towards a new paradigm

    NARCIS (Netherlands)

    Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.A.|info:eu-repo/dai/nl/224281216; Queiroz Feitosa, R.; van der Meer, F.D.|info:eu-repo/dai/nl/138940908; van der Werff, H.M.A.; van Coillie, F.; Tiede, A.

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature

  13. A short introduction to image analysis - Matlab exercises

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg

    2000-01-01

    This document contains a short introduction to image analysis. In addition, small exercises have been prepared to support the theoretical understanding.

  14. Analysis of licensed South African diagnostic imaging equipment ...

    African Journals Online (AJOL)

    Analysis of licensed South African diagnostic imaging equipment. ... Pan African Medical Journal ... Introduction: Objective: To conduct an analysis of all registered South Africa (SA) diagnostic radiology equipment, assess the number of equipment units per capita by imaging modality, and compare SA figures with published ...

  15. Analysis of sharpness increase by image noise

    Science.gov (United States)

    Kurihara, Takehito; Aoki, Naokazu; Kobayashi, Hiroyuki

    2009-02-01

    Motivated by the reported increase in sharpness by image noise, we investigated how noise affects sharpness perception. We first used natural images of tree bark with different amounts of noise to see whether noise enhances sharpness. Although the result showed that sharpness decreased as the noise amount increased, some observers seemed to perceive more sharpness with increasing noise, while the others did not. We next used 1D and 2D uni-frequency patterns as stimuli in an attempt to reduce such variability in the judgment. The result showed that, for higher-frequency stimuli, sharpness decreased as the noise amount increased, while the sharpness of the lower-frequency stimuli increased at a certain noise level. From this result, we hypothesized that image noise might reduce sharpness at edges but could improve the sharpness of lower-frequency components or texture in an image. To test this prediction, we experimented again with the natural image used in the first experiment. Stimuli were made by applying noise separately to the edge or the texture part of the image. The result showed that noise, when added to the edge region, only decreased sharpness, whereas when added to texture, it could improve sharpness. We think it is the interaction between noise and texture that sharpens an image.

  16. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    A method that incorporates an edge detection technique, Markov Random Fields (MRF), watershed segmentation and merging techniques is presented for performing image segmentation and edge detection tasks. It first applies edge detection to obtain a Difference In Strength (DIS) map. An initial segmentation is obtained based on the K-means clustering technique and the minimum distance. The region process is then modeled by an MRF to obtain an image that contains different intensity regions. Gradient values are calculated, and the watershed technique is applied. The DIS is computed for each pixel to define all the edges (weak or strong) in the image, yielding the DIS map. This serves as prior knowledge about possible region boundaries for the next step (MRF), which gives an image holding all the edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of neighboring pixels. The segmentation results are improved by using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated. The edge map is obtained using a merge process based on averaged intensity mean values. Common edge detectors that work on the MRF-segmented image are used and the results compared. The result of segmentation and edge detection is one closed boundary per actual region in the image.
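
    A modern rendering of the marker-driven part of this pipeline (K-means intensity clustering to seed a watershed on the gradient image) might look as follows in scikit-image; this is a sketch of the general technique, not the authors' implementation:

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage import filters, segmentation
    from sklearn.cluster import KMeans

    def segment(image, n_regions=3):
        """K-means intensity clustering seeds a watershed on the gradient image."""
        labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(
            image.reshape(-1, 1).astype(float)).reshape(image.shape)
        gradient = filters.sobel(image.astype(float))
        # markers: erode each cluster so seeds sit well inside homogeneous regions
        markers = np.zeros(image.shape, dtype=int)
        for k in range(n_regions):
            core = ndi.binary_erosion(labels == k, iterations=3)
            markers[core] = k + 1
        return segmentation.watershed(gradient, markers)
    ```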

  17. Photoacoustic image reconstruction: a quantitative analysis

    Science.gov (United States)

    Sperl, Jonathan I.; Zell, Karin; Menzenbach, Peter; Haisch, Christoph; Ketzer, Stephan; Marquart, Markus; Koenig, Hartmut; Vogel, Mika W.

    2007-07-01

    Photoacoustic imaging is a promising new way to generate unprecedented contrast in ultrasound diagnostic imaging. It differs from other medical imaging approaches in that it provides spatially resolved information about the optical absorption of targeted tissue structures. Because the data acquisition process deviates from standard clinical ultrasound, the choice of the proper image reconstruction method is crucial for successful application of the technique. In the literature, multiple approaches have been advocated, and the purpose of this paper is to compare four reconstruction techniques, focusing on resolution limits, stability, reconstruction speed, and SNR. We generated experimental and simulated data and reconstructed images of the pressure distribution using four different methods: delay-and-sum (DnS), circular backprojection (CBP), generalized 2D Hough transform (HTA), and Fourier transform (FTA). All methods were able to depict the point sources properly. DnS and CBP produce blurred images containing typical superposition artifacts. The HTA provides excellent SNR and allows good point-source separation. The FTA is the fastest and shows the best FWHM. In our study, we found the FTA to show the best overall performance: it allows a very fast and theoretically exact reconstruction. Only a hardware-implemented DnS might be faster and enable real-time imaging. A commercial system may also implement several methods to fully utilize the new contrast mechanism and guarantee optimal resolution and fidelity.
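
    Of the four methods compared, delay-and-sum is the most direct to sketch: each image point accumulates the sensor samples at the acoustic time-of-flight delay. The fragment below is a schematic 2D implementation with assumed array shapes, not the paper's code:

    ```python
    import numpy as np

    def delay_and_sum(signals, sensor_xy, grid_xy, fs, c=1500.0):
        """Delay-and-sum reconstruction of a photoacoustic pressure map (sketch).

        signals: (n_sensors, n_samples) RF data; sensor_xy: (n_sensors, 2) and
        grid_xy: (n_pixels, 2) positions in metres; fs: sampling rate in Hz;
        c: speed of sound in m/s.
        """
        image = np.zeros(len(grid_xy))
        for s, pos in zip(signals, sensor_xy):
            dist = np.linalg.norm(grid_xy - pos, axis=1)
            idx = np.round(dist / c * fs).astype(int)  # time-of-flight sample index
            valid = idx < s.shape[0]
            image[valid] += s[idx[valid]]
        return image
    ```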

  18. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in the analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. The high cost of this equipment places many of these techniques beyond the means of most users. A new, more economical video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. The processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost give the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed, including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat

  19. Image Sharing Technologies and Reduction of Imaging Utilization: A Systematic Review and Meta-analysis

    Science.gov (United States)

    Vest, Joshua R.; Jung, Hye-Young; Ostrovsky, Aaron; Das, Lala Tanmoy; McGinty, Geraldine B.

    2016-01-01

    Introduction: Image sharing technologies may reduce unneeded imaging by improving provider access to imaging information. A systematic review and meta-analysis were conducted to summarize the impact of image sharing technologies on patient imaging utilization. Methods: Quantitative evaluations of the effects of PACS, regional image exchange networks, interoperable electronic health records, tools for importing physical media, and health information exchange systems on utilization were identified through a systematic review of the published and gray English-language literature (2004–2014). Outcomes, standard effect sizes (ESs), settings, technology, populations, and risk of bias were abstracted from each study. The impact of image sharing technologies was summarized with random-effects meta-analysis and meta-regression models. Results: A total of 17 articles were included in the review, with a total of 42 different studies. Image sharing technology was associated with a significant decrease in repeat imaging (pooled effect size [ES] = −0.17; 95% confidence interval [CI] = [−0.25, −0.09]; P < .001), but with a significant increase in any imaging utilization (pooled ES = 0.20; 95% CI = [0.07, 0.32]; P = .002). For all outcomes combined, image sharing technology was not associated with utilization. Most studies were at risk for bias. Conclusions: Image sharing technology was associated with reductions in repeat and unnecessary imaging, in both the overall literature and the most-rigorous studies. Stronger evidence is needed to further explore the role of specific technologies and their potential impact on various modalities, patient populations, and settings. PMID:26614882

  20. Vector sparse representation of color image using quaternion matrix analysis.

    Science.gov (United States)

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing the color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, and a quaternion-based dictionary learning algorithm is presented using the K-QSVD (generalized K-means clustering for quaternion singular value decomposition) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient than current sparse models for image restoration tasks due to the lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in the color image analysis and processing domain.
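
    The representation underlying the model is easy to picture: each RGB pixel becomes a pure quaternion (0, R, G, B). A minimal NumPy sketch of this encoding and of the Hamilton product that quaternion matrix algebra builds on (illustrative only, not the authors' dictionary-learning code):

```python
import numpy as np

def rgb_to_quaternion(img):
    """Encode an HxWx3 RGB image as an HxWx4 array of pure quaternions
    (0, R, G, B), the pixel representation used by quaternion models."""
    h, w, _ = img.shape
    q = np.zeros((h, w, 4))
    q[..., 1:] = img
    return q

def qmul(p, q):
    """Hamilton product of two quaternion arrays of shape (..., 4)."""
    pw, px, py, pz = np.moveaxis(p, -1, 0)
    qw, qx, qy, qz = np.moveaxis(q, -1, 0)
    return np.stack([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ], axis=-1)
```

    Because the product couples all three imaginary components, channel correlations are carried through the algebra instead of being processed independently, which is the point of the vector model.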

  1. Interpretation of medical images by model guided analysis

    International Nuclear Information System (INIS)

    Karssemeijer, N.

    1989-01-01

    Progress in the development of digital pictorial information systems stimulates a growing interest in the use of image analysis techniques in medicine. Especially when precise quantitative information is required, the use of fast and reproducible computer analysis may be more appropriate than relying on visual judgement only. Such quantitative information can be valuable, for instance, in diagnostics or in irradiation therapy planning. As medical images are mostly recorded in a prescribed way, human anatomy guarantees a common image structure for each particular type of exam. In this thesis it is investigated how to make use of this a priori knowledge to guide image analysis. For that purpose, models are developed which are suited to capture common image structure. The first part of this study is devoted to an analysis of nuclear medicine images of myocardial perfusion. In ch. 2 a model of these images is designed in order to represent characteristic image properties. It is shown that for these relatively simple images a compact symbolic description can be achieved without significant loss of several diagnostically important image properties. Possibilities for automatic interpretation of more complex images are investigated in the following chapters. The central topic is segmentation of organs. Two methods are proposed and tested on a set of abdominal X-ray CT scans. Ch. 3 describes a serial approach based on a semantic network and the use of search areas. Relational constraints are used to guide the image processing and to classify detected image segments. In chs. 4 and 5 a more general parallel approach is utilized, based on a Markov random field image model. A stochastic model used to represent prior knowledge about the spatial arrangement of organs is implemented as an external field. (author). 66 refs.; 27 figs.; 6 tabs

  2. Multifractal analysis of three-dimensional histogram from color images

    International Nuclear Information System (INIS)

    Chauveau, Julien; Rousseau, David; Richard, Paul; Chapeau-Blondeau, Francois

    2010-01-01

    Natural images, especially color or multicomponent images, are complex information-carrying signals. To contribute to the characterization of this complexity, we investigate the possibility of multiscale organization in the colorimetric structure of natural images. This is realized by means of a multifractal analysis applied to the three-dimensional histogram of natural color images. The observed behaviors are compared with those of reference models with known multifractal properties. We use for this purpose synthetic random images with trivial monofractal behavior, and multidimensional multiplicative cascades known for their actual multifractal behavior. The behaviors observed on natural images exhibit similarities with those of the multifractal multiplicative cascades and display the signature of elaborate multiscale organization in the histograms of natural color images. This type of characterization of colorimetric properties can be helpful for various tasks of digital image processing, for instance modeling, classification, and indexing.
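
    A minimal sketch of the analysis pipeline described above, assuming 8-bit RGB input: build the three-dimensional color histogram, then evaluate a box-counting partition function from which multifractal exponents can be estimated (illustrative names, not the authors' code):

```python
import numpy as np

def color_histogram_3d(img, bins=64):
    """Normalized 3D histogram of an HxWx3 image: one axis per channel."""
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=bins,
                             range=[(0, 256)] * 3)
    return hist / hist.sum()

def partition_function(hist, q, box):
    """Sum of box measures mu_i^q over cubic boxes of linear size `box`."""
    n = hist.shape[0] // box
    h = hist[:n * box, :n * box, :n * box]
    mu = h.reshape(n, box, n, box, n, box).sum(axis=(1, 3, 5))
    mu = mu[mu > 0]
    return (mu ** q).sum()

# tau(q) is estimated as the log-log slope of the partition function versus
# box size; the generalized dimensions follow as D_q = tau(q)/(q-1), q != 1.
```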

  3. Knowledge-based image analysis: some aspects on the analysis of images using other types of information

    Energy Technology Data Exchange (ETDEWEB)

    Eklundh, J O

    1982-01-01

    The computer vision approach to image analysis is discussed from two aspects. First, this approach is contrasted with the pattern recognition approach. Second, it is discussed how external knowledge, information, and models from other fields of science and engineering can be used for image and scene analysis. In particular, the connections between computer vision and computer graphics are pointed out.

  4. Introducing PLIA: Planetary Laboratory for Image Analysis

    Science.gov (United States)

    Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.

    2005-08-01

    We present a graphical software tool developed under IDL software to navigate, process and analyze planetary images. The software has a complete Graphical User Interface and is cross-platform. It can also run under the IDL Virtual Machine without the need to own an IDL license. The set of tools included allow image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, as well as image treatment under several procedures. Being written in IDL, it is modular and easy to modify and grow for adding new capabilities. We show several examples of the software capabilities with Galileo-Venus observations: Image navigation, photometrical corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.

  5. Urban Boundary Extraction and Urban Sprawl Measurement Using High-Resolution Remote Sensing Images: a Case Study of China's Provincial Capitals

    Science.gov (United States)

    Wang, H.; Ning, X.; Zhang, H.; Liu, Y.; Yu, F.

    2018-04-01

    Urban boundary is an important indicator for urban sprawl analysis. However, methods of urban boundary extraction have been inconsistent, and construction land or urban impervious surfaces were usually used to represent urban areas in coarse-resolution images, resulting in lower precision and incomparable urban boundary products. To solve the above problems, a semi-automatic method of urban boundary extraction was proposed using high-resolution images and geographic information data. Urban landscape and form characteristics and geographical knowledge were combined to generate a series of standardized rules for urban boundary extraction. Urban boundaries of China's 31 provincial capitals in the years 2000, 2005, 2010 and 2015 were extracted with the above-mentioned method. Compared with two other open urban boundary products, the urban boundaries in this study were the most accurate. The urban boundaries, together with other thematic data, were integrated to measure and analyse urban sprawl. Results showed that China's provincial capitals had undergone rapid urbanization from 2000 to 2015, with the total area growing from 6520 square kilometres to 12398 square kilometres. The urban areas of provincial capitals showed remarkable regional differences and a high degree of concentration. Urban land became more intensive in general. Urban sprawl rates were out of step with population growth rates. About sixty percent of the new urban areas came from cultivated land. The paper provides a consistent method of urban boundary extraction and urban sprawl measurement using high-resolution remote sensing images. The results on the urban sprawl of China's provincial capitals provide valuable urbanization information for government and the public.

  6. Automatic segmentation of the bone and extraction of the bone-cartilage interface from magnetic resonance images of the knee

    International Nuclear Information System (INIS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K; Ourselin, Sebastien

    2007-01-01

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis
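
    The Dice similarity coefficient used to report the bone segmentation accuracy above is a standard overlap measure; a minimal NumPy implementation (illustrative, not the authors' code):

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())
```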

  7. Automatic segmentation of the bone and extraction of the bone-cartilage interface from magnetic resonance images of the knee

    Energy Technology Data Exchange (ETDEWEB)

    Fripp, Jurgen [BioMedIA Lab, Autonomous Systems Laboratory, CSIRO ICT Centre, Level 20, 300 Adelaide street, Brisbane, QLD 4001 (Australia); Crozier, Stuart [School of Information Technology and Electrical Engineering, University of Queensland, St Lucia, QLD 4072 (Australia); Warfield, Simon K [Computational Radiology Laboratory, Harvard Medical School, Children's Hospital Boston, 300 Longwood Avenue, Boston, MA 02115 (United States); Ourselin, Sebastien [BioMedIA Lab, Autonomous Systems Laboratory, CSIRO ICT Centre, Level 20, 300 Adelaide street, Brisbane, QLD 4001 (Australia)

    2007-03-21

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.

  8. Applying Image Matching to Video Analysis

    Science.gov (United States)

    2010-09-01

    The image groups, classified by the background scene, are the flag, the kitchen, the telephone, the bookshelf, the title screen, and two maps (image counts: Kitchen 136, Telephone 3, Bookshelf 81, Title Screen 10, Map 1 24, Map 2 16). ... command line. This implementation of a Bloom filter uses two arbitrary... with the Bookshelf images. This scene is a much closer shot than the Kitchen scene, so the host occupies much of the background. Algorithms for face...

  9. Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.

    Science.gov (United States)

    Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos

    2017-11-01

    The study of phenomes, or phenomics, has been a central part of biology. The field of automatic phenotype acquisition technologies based on images has seen important advances in recent years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing with its major issues and of the algorithms that are being used or emerging as useful to obtain data out of images in an automatic fashion. © The Author 2017. Published by Oxford University Press.

  10. Diagnostic imaging analysis of the impacted mesiodens

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Jeong Jun; Choi, Bo Ram; Jeong, Hwan Seok; Huh, Kyung Hoe; Yi, Won Jin; Heo, Min Suk; Lee, Sam Sun; Choi, Soon Chul [School of Dentistry, Seoul National University, Seoul (Korea, Republic of)

    2010-06-15

    This study was performed to predict the three-dimensional relationship between an impacted mesiodens and the maxillary central incisors, and its proximity to anatomic structures, by comparing panoramic images with CT images. Among the patients visiting Seoul National University Dental Hospital from April 2003 to July 2007, those with mesiodens were selected (154 mesiodens in 120 patients). The number, shape, orientation and positional relationship of the mesiodens with the maxillary central incisors were investigated in the panoramic images. The proximity to anatomical structures and complications were investigated in the CT images as well. The sex ratio (M : F) was 2.28 : 1 and the mean number of mesiodens per patient was 1.28. A conical shape was found in 84.4% and an inverted orientation in 51.9%. There were more cases of encroachment on anatomical structures, especially on the nasal floor and nasopalatine duct, when the mesiodens was not superimposed on the central incisor. There were, however, many cases of nasopalatine duct encroachment when the mesiodens was superimposed on the apical 1/3 of the central incisor (52.6%). Delayed eruption (55.6%), crown rotation (66.7%) and crown resorption (100%) were observed when the mesiodens was superimposed on the crown of the central incisor. It is possible to predict the three-dimensional relationship between an impacted mesiodens and the maxillary central incisors in panoramic images, but details should be confirmed by CT images when necessary.

  11. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    Science.gov (United States)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system was developed for real-time imaging of lymphatic propulsion by intradermal injection of a microdose of an NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software is underdeveloped, requiring extensive time and effort to analyze an NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.

  12. Theoretical analysis of radiographic images by nonstationary Poisson processes

    International Nuclear Information System (INIS)

    Tanaka, Kazuo; Uchida, Suguru; Yamada, Isao.

    1980-01-01

    This paper deals with the noise analysis of radiographic images obtained with the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectral densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples for a one-dimensional image are shown, and the results are compared with those obtained under the assumption that the object image is related to the background noise by an additive process. (author)
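
    The underlying noise model can be sketched in a few lines of Python: the local Poisson rate is modulated by the object, and the fluorescent-screen gain adds a second, cascaded Poisson stage. This is a deliberately crude illustration with assumed parameter values, not the paper's full model:

```python
import numpy as np

rng = np.random.default_rng(0)

def screen_film_sim(fluence, gain=20.0):
    """Nonstationary Poisson simulation of a screen-film system: x-ray
    quanta follow the object-modulated fluence, and each detected quantum
    yields a Poisson-distributed burst of fluorescent light."""
    photons = rng.poisson(fluence)        # x-ray quanta per pixel
    light = rng.poisson(gain * photons)   # cascaded fluorescent gain stage
    return light

# Example: a one-dimensional "edge" object modulating a uniform beam.
x = np.linspace(0.0, 1.0, 512)
fluence = 100.0 * np.exp(-3.0 * (x > 0.5))   # attenuated on one side
noisy = screen_film_sim(fluence)
```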

  13. Automated thermal mapping techniques using chromatic image analysis

    Science.gov (United States)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.

  14. Quantitative methods for the analysis of electron microscope images

    DEFF Research Database (Denmark)

    Skands, Peter Ulrik Vallø

    1996-01-01

    The topic of this thesis is a general introduction to quantitative methods for the analysis of digital microscope images. The images presented have primarily been acquired from Scanning Electron Microscopes (SEM) and interferometer microscopes (IFM). The topic is approached through several examples...... foundation of the thesis fall in the areas of: 1) Mathematical Morphology; 2) Distance transforms and applications; and 3) Fractal geometry. Image analysis opens in general the possibility of a quantitative and statistically well-founded measurement of digital microscope images. Herein lies also the conditions...

  15. Methods for processing and analysis of functional and anatomical brain images: computerized tomography, emission tomography and nuclear resonance imaging

    International Nuclear Information System (INIS)

    Mazoyer, B.M.

    1988-01-01

    The various methods for brain image processing and analysis are presented and compared. The following topics are developed: the physical basis of brain image comparison (nature and formation of signals, intrinsic performance of the methods, image characteristics); mathematical methods for image processing and analysis (filtering, functional parameter extraction, morphological analysis, robotics and artificial intelligence); methods for anatomical localization (neuro-anatomy atlas, proportional stereotaxic atlas, numerized atlas); methodology of cerebral image superposition (normalization, retiming); image networks. [fr]

  16. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    Science.gov (United States)

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  17. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    Science.gov (United States)

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  18. 5-ALA induced fluorescent image analysis of actinic keratosis

    Science.gov (United States)

    Cho, Yong-Jin; Bae, Youngwoo; Choi, Eung-Ho; Jung, Byungjo

    2010-02-01

    In this study, we quantitatively analyzed 5-ALA induced fluorescent images of actinic keratosis using digital fluorescent color and hyperspectral imaging modalities. UV-A was utilized to induce fluorescent images, and actinic keratosis (AK) lesions were demarcated from the surrounding normal region with different methods. Eight subjects with AK lesions participated in this study. In the hyperspectral imaging modality, a spectral analysis method was applied to the hyperspectral cube image and AK lesions were demarcated from the normal region. Before image acquisition, we designated the biopsy positions for histopathology of the AK lesion and the surrounding normal region. Erythema index (E.I.) values in both regions were calculated from the spectral cube data. Image analysis of the subjects resulted in two different groups: the first group with a higher fluorescence signal and E.I. on the AK lesion than on the normal region; the second group with a lower fluorescence signal and no marked difference in E.I. between the two regions. In the fluorescent color image analysis of facial AK, E.I. images were calculated on both normal and AK lesions and compared with the results of the hyperspectral imaging modality. The results indicate that the different fluorescence intensities and E.I. among the subjects with AK might be interpreted as different phases of morphological and metabolic changes of AK lesions.

  19. Rapid analysis and exploration of fluorescence microscopy images.

    Science.gov (United States)

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason M; Steininger, Robert J; Wu, Lani F; Altschuler, Steven J

    2014-03-19

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, ensure response to perturbations and check reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first pass analysis for quality control, but also may be used as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image based screens.

  20. Research of second harmonic generation images based on texture analysis

    Science.gov (United States)

    Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan

    2014-09-01

    Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, as a potential noninvasive tool for imaging biological tissues, has been widely used in medicine, with reduced phototoxicity and photobleaching. In this paper, we clarify the principles of texture analysis, including statistical, transform, structural and model-based methods, and give examples of its applications, reviewing studies of the technique. Moreover, we apply texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary patterns (LBP) and the wavelet transform was used to extract texture features of SHG images from collagen in normal and abnormal scars, and the scar SHG images were then classified into normal or abnormal ones. Compared with other texture analysis methods with respect to receiver operating characteristic analysis, LBP combined with the wavelet transform was demonstrated to achieve higher accuracy. It can provide a new way for the clinical diagnosis of scar types. Finally, future developments of texture analysis in SHG images are discussed.
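
    A minimal sketch of the feature-extraction step described above, assuming the scikit-image and PyWavelets libraries are available: an LBP histogram concatenated with first-level wavelet sub-band energies. Parameter choices here are illustrative, not the authors' exact configuration:

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def texture_features(img, P=8, R=1.0, wavelet="db2"):
    """LBP histogram plus wavelet sub-band energies for one grayscale image."""
    # Uniform LBP codes take values in [0, P+1], hence P+2 histogram bins.
    lbp = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    # One-level 2D discrete wavelet transform: approximation + 3 detail bands.
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    energies = [np.mean(c ** 2) for c in (cA, cH, cV, cD)]
    return np.concatenate([hist, energies])
```

    The resulting feature vectors can then be fed to any standard classifier to separate normal from abnormal scar images.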

  1. Uncooled LWIR imaging: applications and market analysis

    Science.gov (United States)

    Takasawa, Satomi

    2015-05-01

    The evolution of infrared (IR) imaging sensor technology for the defense market has played an important role in developing the commercial market, as dual use of the technology has expanded. In particular, technologies for both pixel pitch reduction and vacuum packaging have evolved drastically in the area of uncooled long-wave IR (LWIR; 8-14 μm wavelength region) imaging sensors, increasing the opportunity to create new applications. From a macroscopic point of view, the uncooled LWIR imaging market is divided into two areas. One is a high-end market where uncooled LWIR imaging sensors with sensitivity as close to that of cooled ones as possible are required, while the other is a low-end market driven by miniaturization and price reduction. In the latter case especially, approaches toward the consumer market have recently appeared, such as the application of uncooled LWIR imaging sensors to night vision for automobiles and smart phones. The appearance of such commodities will surely change existing business models. Further technological innovation is necessary for creating a consumer market, and there will be room for other companies, dealing in components and materials such as lens and getter materials, to enter the consumer market.

  2. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I, Introduction to Digital Image Processing and Analysis. Digital Image Processing and Analysis: Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading. Computer Imaging Systems: Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading. Section II, Digital Image Analysis and Computer Vision. Introduction to Digital Image Analysis: Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Reading

  3. Analysis of live cell images: Methods, tools and opportunities.

    Science.gov (United States)

    Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens

    2017-02-15

    Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The potential opportunities provided by recent advances in machine learning, especially deep learning, and computer vision are discussed. This review includes an overview of the different available software packages and toolkits. Copyright © 2017. Published by Elsevier Inc.

  4. Analysis of the gammaholographic image formation

    International Nuclear Information System (INIS)

    Fonroget, J.; Roucayrol, J.C.; Perrin, J.; Belvaux, Y.; Paris-11 Univ., 91 - Orsay

    1975-01-01

    Gammaholography, or coded-aperture gammagraphy, is a new gammagraphic method in which the standard collimators are replaced by one or more modulator screens placed between the detector and the radioactive object. The recording obtained is a coded image, or incoherent hologram, which contains three-dimensional information on the object and can be decoded analogically in a very short time. The formation of the image has been analyzed in the coding and optical decoding phases for the case of a single coding screen modulated according to a Fresnel zone plate. The analytical expression established for the modulation transfer function (MTF) of the system can be used to study, by computerized simulation, the influence of the number of zones on the quality of the image. [fr]

  5. Imaging analysis of dedifferentiated chondrosarcoma of bone

    International Nuclear Information System (INIS)

    Xie Yuanzhong; Kong Qingkui; Wang Xia; Li Changqing

    2004-01-01

    Objective: To analyze the radiological findings of dedifferentiated chondrosarcoma and to explore the imaging features of the dedifferentiated tissue. Methods: The X-ray and CT findings of 13 cases of dedifferentiated chondrosarcoma of bone were analyzed retrospectively and correlated with the clinical course and the corresponding histological changes. Results: Dedifferentiated chondrosarcoma showed not only the radiological findings of typical chondrosarcoma but also the imaging features of the dedifferentiated tissue. In the 13 patients, periosteal reactions were found in 11 cases, ossifications in 8 cases, soft tissue masses in 12 cases, and calcifications in 10 cases; in 8 cases the calcifications were located in the center of the lesion. Conclusion: Dedifferentiated chondrosarcoma shows special imaging features, which include ossification, calcification, periosteal reaction, and soft tissue mass. These features are not found in typical chondrosarcoma. Recognizing these specific features is helpful for the diagnosis of dedifferentiated chondrosarcoma. (author)

  6. System Matrix Analysis for Computed Tomography Imaging

    Science.gov (United States)

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images can be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482
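
    Siddon's method computes, for each ray, its intersection lengths with the pixel grid; these lengths form one sparse row of the system matrix. A simplified 2D sketch under assumed conventions (unit grid origin at (0, 0); illustrative, not the paper's implementation):

```python
import numpy as np

def siddon_row(p0, p1, nx, ny, dx=1.0, dy=1.0):
    """Intersection lengths of the ray p0 -> p1 with an nx-by-ny pixel grid,
    returned as a sparse system-matrix row {(ix, iy): length}."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # Parametric values in (0, 1) where the ray crosses each grid line.
    alphas = [0.0, 1.0]
    for axis, (n, step) in enumerate([(nx, dx), (ny, dy)]):
        if d[axis] != 0.0:
            a = (np.arange(n + 1) * step - p0[axis]) / d[axis]
            alphas.extend(a[(a > 0.0) & (a < 1.0)])
    alphas = np.unique(alphas)
    length = np.linalg.norm(d)
    row = {}
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d        # midpoint identifies the pixel
        ix, iy = int(mid[0] // dx), int(mid[1] // dy)
        if 0 <= ix < nx and 0 <= iy < ny:
            row[(ix, iy)] = (a1 - a0) * length
    return row
```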

  7. Analysis of Non Local Image Denoising Methods

    Science.gov (United States)

    Pardo, Álvaro

    Image denoising is probably one of the most studied problems in the image processing community. Recently, a new paradigm of non-local denoising was introduced. The Non-Local Means method proposed by Buades, Morel and Coll attracted the attention of other researchers, who proposed improvements and modifications to their proposal. In this work we analyze those methods, trying to understand their properties while connecting them to segmentation based on spectral graph properties. We also propose some improvements to automatically estimate the parameters used in these methods.
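
    For reference, the Non-Local Means estimator that these methods build on can be written naively in a few lines: each pixel becomes a weighted average of pixels whose surrounding patches look similar. This is a deliberately slow didactic sketch with assumed parameter values, not an implementation from the paper:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Naive Non-Local Means on a 2D float image. O(search^2 * patch^2)
    per pixel; real implementations use heavy optimizations."""
    pad = patch // 2
    padded = np.pad(img.astype(float), pad + search, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + search, j + pad + search
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            weights, accum = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    # Patch similarity controls the averaging weight.
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                    weights += w
                    accum += w * padded[ni, nj]
            out[i, j] = accum / weights
    return out
```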

  8. Analysis of PETT images in psychiatric disorders

    International Nuclear Information System (INIS)

    Brodie, J.D.; Gomez-Mont, F.; Volkow, N.D.; Corona, J.F.; Wolf, A.P.; Wolkin, A.; Russell, J.A.G.; Christman, D.; Jaeger, J.

    1983-01-01

    A quantitative method is presented for studying the pattern of metabolic activity in a set of Positron Emission Transaxial Tomography (PETT) images. Using complex Fourier coefficients as a feature vector for each image, cluster, principal components, and discriminant function analyses are used to empirically describe metabolic differences between control subjects and patients with DSM III diagnosis for schizophrenia or endogenous depression. We also present data on the effects of neuroleptic treatment on the local cerebral metabolic rate of glucose utilization (LCMRGI) in a group of chronic schizophrenics using the region of interest approach. 15 references, 4 figures, 3 tables

  9. Analysis of PETT images in psychiatric disorders

    Energy Technology Data Exchange (ETDEWEB)

    Brodie, J.D.; Gomez-Mont, F.; Volkow, N.D.; Corona, J.F.; Wolf, A.P.; Wolkin, A.; Russell, J.A.G.; Christman, D.; Jaeger, J.

    1983-01-01

    A quantitative method is presented for studying the pattern of metabolic activity in a set of Positron Emission Transaxial Tomography (PETT) images. Using complex Fourier coefficients as a feature vector for each image, cluster, principal components, and discriminant function analyses are used to empirically describe metabolic differences between control subjects and patients with DSM III diagnosis for schizophrenia or endogenous depression. We also present data on the effects of neuroleptic treatment on the local cerebral metabolic rate of glucose utilization (LCMRGI) in a group of chronic schizophrenics using the region of interest approach. 15 references, 4 figures, 3 tables.

  10. Independent component analysis based filtering for penumbral imaging

    International Nuclear Information System (INIS)

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-01-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and then the noise components are removed by soft thresholding (shrinkage). The proposed filter, which is used as a preprocessing step before reconstruction, has been successfully applied to penumbral imaging. Both simulation results and experimental results show that the reconstructed image is dramatically improved in comparison to that obtained without the noise-removing filter.
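
    A rough analogue of such a filter can be sketched with scikit-learn's FastICA, assuming the image has already been cut into vectorized patches; the shrinkage step is plain soft thresholding. Illustrative only, not the authors' code:

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_soft_threshold(patches, threshold=0.5, n_components=32):
    """Map vectorized image patches (n_patches, patch_size) to an ICA
    domain, shrink the sources by soft thresholding, and map back."""
    ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
    sources = ica.fit_transform(patches)        # patches -> ICA domain
    shrunk = np.sign(sources) * np.maximum(np.abs(sources) - threshold, 0.0)
    return ica.inverse_transform(shrunk)        # back to the pixel domain
```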

  11. Architectural design and analysis of a programmable image processor

    International Nuclear Information System (INIS)

    Siyal, M.Y.; Chowdhry, B.S.; Rajput, A.Q.K.

    2003-01-01

    In this paper we present the architectural design and analysis of a programmable image processor, nicknamed Snake. The processor was designed with a high degree of parallelism to speed up a range of image processing operations. Data parallelism of the kind found in array processors has been included in the architecture of the proposed processor. The implementation of commonly used image processing algorithms and their performance evaluation are also discussed. The performance of Snake is also compared with that of other processor architectures. (author)

  12. GEOPOSITIONING PRECISION ANALYSIS OF MULTIPLE IMAGE TRIANGULATION USING LRO NAC LUNAR IMAGES

    Directory of Open Access Journals (Sweden)

    K. Di

    2016-06-01

    Full Text Available This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang’e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations, with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least-squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precisions using all 9 images are 0.60 m, 0.50 m, and 1.23 m in the along-track, cross-track, and height directions, which are better than those of most combinations of two or more images. However, triangulation with fewer, carefully selected images can produce better precision than using all the images.

  13. Some selected quantitative methods of thermal image analysis in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images. It shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the skin areas of a human foot and face. The full source code of the developed application is also provided as an attachment. A figure shows the main window of the program during dynamic analysis of a foot thermal image. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. ViCAR: An Adaptive and Landmark-Free Registration of Time Lapse Image Data from Microfluidics Experiments

    Directory of Open Access Journals (Sweden)

    Georges Hattab

    2017-05-01

    Full Text Available In order to understand gene function in bacterial life cycles, time lapse bioimaging is applied in combination with different marker protocols in so-called microfluidics chambers (i.e., a multi-well plate). In one experiment, a series of T images is recorded for one visual field, with a pixel resolution of 60 nm/px. Any (semi-)automatic analysis of the data is hampered by strong image noise, low contrast and, last but not least, considerable irregular shifts during the acquisition. Image registration corrects such shifts, enabling the next steps of the analysis (e.g., feature extraction or tracking). Image alignment faces two obstacles in this microscopic context: (a) highly dynamic structural changes in the sample (i.e., colony growth) and (b) an individual, data set-specific sample environment, which makes the application of landmark-based alignments almost impossible. We present a computational image registration solution, which we refer to as ViCAR (Visual Cues based Adaptive Registration), for such microfluidics experiments, consisting of (1) the detection of particular polygons (outlined and segmented ones), referred to as visual cues, (2) the adaptive retrieval of three coordinates throughout different sets of frames, and finally (3) an image registration based on the relation of these points, correcting both rotation and translation. We tested ViCAR with different data sets and have found that it provides an effective spatial alignment, thereby paving the way to extract temporal features pertinent to each resulting bacterial colony. By using ViCAR, we achieved an image registration with 99.9% image closeness, based on an average RMSD of 4×10^-2 pixels, and superior results compared to a state-of-the-art algorithm.
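
    The final registration step, recovering rotation and translation from a few retrieved point coordinates, is a standard least-squares (Kabsch/Procrustes) problem; a minimal 2D NumPy sketch (illustrative, not the ViCAR source):

```python
import numpy as np

def rigid_from_points(src, dst):
    """Least-squares rotation R and translation t mapping src points onto
    dst points (both (n, 2) arrays), via SVD of the covariance matrix."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t   # aligned = (R @ pts.T).T + t
```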

  15. Precision Statistical Analysis of Images Based on Brightness Distribution

    Directory of Open Access Journals (Sweden)

    Muzhir Shaban Al-Ani

    2017-07-01

    Full Text Available The study of image content is an important topic in which reasonable and accurate analysis of images is required. Image analysis has recently become a vital field because of the huge number of images transferred via transmission media in our daily life; these media, crowded with images, have made image analysis a prominent research area. In this paper, the implemented system passes through several steps to compute the statistical measures of standard deviation and mean values of both color and grey images. The last step of the proposed method compares the obtained results across the different cases of the test phase. The statistical parameters are used to characterize the content of an image and its texture. Standard deviation, mean and correlation values are used to study the intensity distribution of the tested images. Reasonable results are obtained for both standard deviation and mean values via the implementation of the system. The major issue addressed in this work is brightness distribution via statistical measures, applied under different types of lighting.
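
    The statistical measures involved are elementary; a minimal NumPy sketch of the mean, standard deviation, and inter-channel correlation computation assumed by such a system (illustrative only):

```python
import numpy as np

def brightness_stats(img):
    """Mean, standard deviation, and (for color input) the 3x3 correlation
    matrix between channels. Grey: HxW array; color: HxWx3 array."""
    img = img.astype(float)
    stats = {"mean": img.mean(), "std": img.std()}
    if img.ndim == 3:
        channels = img.reshape(-1, img.shape[2]).T   # one row per channel
        stats["channel_correlation"] = np.corrcoef(channels)
    return stats
```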

  16. On two methods of statistical image analysis

    NARCIS (Netherlands)

    Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, K.L.

    1999-01-01

    The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition,

  17. Complications of Whipple surgery: imaging analysis.

    Science.gov (United States)

    Bhosale, Priya; Fleming, Jason; Balachandran, Aparna; Charnsangavej, Chuslip; Tamm, Eric P

    2013-04-01

    The purpose of this article is to describe and illustrate anatomic findings after the Whipple procedure, and the appearance of its complications, on imaging. Knowledge of the cross-sectional anatomy following the Whipple procedure, and clinical findings for associated complications, are essential to rapidly and accurately diagnose such complications on postoperative studies in order to optimize treatment.

  18. The cumulative verification image analysis tool for offline evaluation of portal images

    International Nuclear Information System (INIS)

    Wong, John; Yan Di; Michalski, Jeff; Graham, Mary; Halverson, Karen; Harms, William; Purdy, James

    1995-01-01

    Purpose: Daily portal images acquired using electronic portal imaging devices contain important information about the setup variation of the individual patient. The data can be used to evaluate the treatment and to derive corrections for the individual patient. The large volume of images also requires software tools for efficient analysis. This article describes the approach of cumulative verification image analysis (CVIA), specifically designed as an offline tool to extract quantitative information from daily portal images. Methods and Materials: The user interface, image and graphics display, and algorithms of the CVIA tool have been implemented in ANSI C using the X Window graphics standards. The tool consists of three major components: (a) definition of treatment geometry and anatomical information; (b) registration of portal images with a reference image to determine setup variation; and (c) quantitative analysis of all setup variation measurements. The CVIA tool is not automated. User interaction is required and preferred. Successful alignment of anatomies on portal images at present remains mostly dependent on clinical judgment. Predefined templates of block shapes and anatomies are used for image registration to enhance efficiency, taking advantage of the fact that much of the tool's operation is repeated in the analysis of daily portal images. Results: The CVIA tool is portable and has been implemented on workstations with different operating systems. Analysis of 20 sequential daily portal images can be completed in less than 1 h. The temporal information is used to characterize setup variation in terms of its systematic, random and time-dependent components. The cumulative information is used to derive block overlap isofrequency distributions (BOIDs), which quantify the effective coverage of the prescribed treatment area throughout the course of treatment. Finally, a set of software utilities is available to facilitate feedback of the information for
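
    The decomposition of setup variation into systematic and random components mentioned in the results can be sketched in a few lines, assuming per-fraction displacement measurements are available (illustrative, not the CVIA source, which is ANSI C):

```python
import numpy as np

def setup_variation(daily_shifts):
    """Split one patient's daily setup displacements (n_fractions, n_axes)
    into a systematic component (mean shift) and a random component
    (day-to-day standard deviation), as in offline portal-image protocols."""
    shifts = np.asarray(daily_shifts, float)
    systematic = shifts.mean(axis=0)           # reproducible offset
    random_sd = shifts.std(axis=0, ddof=1)     # residual daily variation
    return systematic, random_sd
```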

  19. ANALYSIS OF MULTIPATH PIXELS IN SAR IMAGES

    Directory of Open Access Journals (Sweden)

    J. W. Zhao

    2016-06-01

    Full Text Available As the received radar signal is the sum of signal contributions overlaid in one single pixel regardless of the travel path, the multipath effect should be seriously tackled: multiple-bounce returns are added to direct scatter echoes, which leads to ghost scatterers. Most existing solutions to the multipath problem try to recover the signal propagation path. To facilitate the signal propagation simulation process, many aspects, such as the sensor parameters, the geometry of the objects (shape, location, orientation, mutual position between adjacent buildings) and the physical parameters of the surface (roughness, correlation length, permittivity), which determine the strength of the radar signal backscattered to the SAR sensor, must be given in advance. However, it is not practical to obtain a highly detailed object model of an unfamiliar area by field survey, as this is laborious and time-consuming work. In this paper, SAR imaging simulation based on RaySAR is conducted first, aiming at a basic understanding of multipath effects and at further comparison. Besides the pre-imaging simulation, the post-imaging product, i.e., the radar images, is also taken into consideration. Both Cosmo-SkyMed ascending and descending SAR images of Lupu Bridge in Shanghai are used for the experiment. As a result, the reflectivity maps and signal distribution maps of different bounce levels are simulated and validated against a 3D real model. Statistical indexes such as phase stability, mean amplitude, amplitude dispersion, coherence and mean-sigma ratio in the case of layover are analyzed in combination with the RaySAR output.

  20. Direct identification of pure penicillium species using image analysis

    DEFF Research Database (Denmark)

    Dørge, Thorsten Carlheim; Carstensen, Jens Michael; Frisvad, Jens Christian

    2000-01-01

    This paper presents a method for direct identification of fungal species solely by means of digital image analysis of colonies as seen after growth on a standard medium. The method described is completely automated and hence objective once digital images of the reference fungi have been established...

  1. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, Marlene; Rosenvinge, Flemming Schønning; Spillum, Erik

    2015-01-01

    in systems relying on colorimetry or turbidimetry (such as Vitek-2, Phoenix, MicroScan WalkAway). The objective was to examine an automated image analysis algorithm for quantification of filamentous bacteria using the 3D digital microscopy imaging system, oCelloScope. Results: Three E. coli strains displaying...

  2. [Evaluation of dental plaque by quantitative digital image analysis system].

    Science.gov (United States)

    Huang, Z; Luan, Q X

    2016-04-18

    To analyze plaque staining images using image analysis software, to verify the maneuverability, practicability and repeatability of this technique, and to evaluate the influence of different plaque stains. In the study, 30 volunteers were enrolled from the new dental students of Peking University Health Science Center in accordance with the inclusion criteria. Digital images of the anterior teeth were acquired after plaque staining, following a standardized imaging protocol. The image analysis was performed using Image Pro Plus 7.0, and the Quigley-Hein plaque indexes of the anterior teeth were evaluated. The plaque stain area percentage and the corresponding dental plaque index were highly correlated, with a Spearman correlation coefficient of 0.776 (P<0.01), and the Bland-Altman chart showed only a few spots outside the 95% consistency boundaries. The image analysis results for the different plaque stains showed that the difference in tooth area measurements was not significant, while the difference in plaque area measurements was significant (P<0.01). This method is easy to operate and control, highly correlated with the calculated percentage of plaque area and the traditional plaque index, and has good reproducibility. The choice of plaque staining method has little effect on the image segmentation results. A plaque stain that is sensitive for image analysis is suggested.

  3. Basic strategies for valid cytometry using image analysis

    NARCIS (Netherlands)

    Jonker, A.; Geerts, W. J.; Chieco, P.; Moorman, A. F.; Lamers, W. H.; van Noorden, C. J.

    1997-01-01

    The present review provides a starting point for setting up an image analysis system for quantitative densitometry and absorbance or fluorescence measurements in cell preparations, tissue sections or gels. Guidelines for instrumental settings that are essential for the valid application of image

  4. Subsurface offset behaviour in velocity analysis with extended reflectivity images

    NARCIS (Netherlands)

    Mulder, W.A.

    2012-01-01

    Migration velocity analysis with the wave equation can be accomplished by focusing of extended migration images, obtained by introducing a subsurface offset or shift. A reflector in the wrong velocity model will show up as a curve in the extended image. In the correct model, it should collapse to a

  5. A Survey on Deep Learning in Medical Image Analysis

    NARCIS (Netherlands)

    Litjens, G.J.; Kooi, T.; Ehteshami Bejnordi, B.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Laak, J.A.W.M. van der; Ginneken, B. van; Sanchez, C.I.

    2017-01-01

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared

  6. Analysis of Two-Dimensional Electrophoresis Gel Images

    DEFF Research Database (Denmark)

    Pedersen, Lars

    2002-01-01

    This thesis describes and proposes solutions to some of the currently most important problems in pattern recognition and image analysis of two-dimensional gel electrophoresis (2DGE) images. 2DGE is the leading technique to separate individual proteins in biological samples with many biological...

  7. Occupancy Analysis of Sports Arenas Using Thermal Imaging

    DEFF Research Database (Denmark)

    Gade, Rikke; Jørgensen, Anders; Moeslund, Thomas B.

    2012-01-01

    This paper presents a system for automatic analysis of the occupancy of sports arenas. By using a thermal camera for image capturing the number of persons and their location on the court are found without violating any privacy issues. The images are binarised with an automatic threshold method...
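
    The binarisation step can be sketched as follows; the record only says an automatic threshold method is used, so the choice of Otsu's method here, along with the file name and area cut-off, is an assumption.

        import cv2

        frame = cv2.imread("arena_thermal.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
        # Automatic global threshold (Otsu); warm bodies become foreground.
        _, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # Count connected warm blobs above a minimal area as person candidates.
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
        people = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 50]
        print(f"{len(people)} person candidates on the court")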

  8. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, but noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data
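
    A minimal sketch of the idea, under the common formulation in which each gradient angle is mapped onto the unit circle and PCA is performed on the resulting complex representation; the toy data and component count are assumptions.

        import numpy as np
        from scipy import ndimage

        def orientation_features(images):
            """Map each image to exp(j*phi) of its gradient angles, flattened."""
            feats = []
            for im in images:
                gx = ndimage.sobel(im.astype(float), axis=1)
                gy = ndimage.sobel(im.astype(float), axis=0)
                feats.append(np.exp(1j * np.arctan2(gy, gx)).ravel())
            return np.asarray(feats)

        X = orientation_features(np.random.rand(20, 32, 32))  # toy image stack
        Xc = X - X.mean(axis=0)
        U, s, Vh = np.linalg.svd(Xc, full_matrices=False)     # complex PCA via SVD
        proj = Xc @ Vh[:5].conj().T                           # first 5 components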

  9. Automated Image Analysis Corrosion Working Group Update: February 1, 2018

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-01

    These are slides for the automated image analysis corrosion working group update. The overall goals were: automate the detection and quantification of features in images (faster, more accurate), how to do this (obtain data, analyze data), focus on Laser Scanning Confocal Microscope (LCM) data (laser intensity, laser height/depth, optical RGB, optical plus laser RGB).

  10. On the applicability of numerical image mapping for PIV image analysis near curved interfaces

    International Nuclear Information System (INIS)

    Masullo, Alessandro; Theunissen, Raf

    2017-01-01

    This paper scrutinises the general suitability of image mapping for particle image velocimetry (PIV) applications. Image mapping can improve PIV measurement accuracy by eliminating overlap between the PIV interrogation windows and an interface, as illustrated by some examples in the literature. Image mapping transforms the PIV images using a curvilinear interface-fitted mesh prior to performing the PIV cross correlation. However, degrading effects due to particle image deformation and the Jacobian transformation inherent in the mapping along curvilinear grid lines have never been deeply investigated. Here, the implementation of image mapping from mesh generation to image resampling is presented in detail, and related error sources are analysed. Systematic comparison with standard PIV approaches shows that image mapping is effective only in a very limited set of flow conditions and geometries, and depends strongly on a priori knowledge of the boundary shape and streamlines. In particular, with strongly curved geometries or streamlines that are not parallel to the interface, the image-mapping approach is easily outperformed by more traditional image analysis methodologies invoking suitable spatial relocation of the obtained displacement vector. (paper)

  11. The ImageJ ecosystem: An open platform for biomedical image analysis.

    Science.gov (United States)

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more advanced image processing and analysis techniques. A wide range of software is available, from commercial to academic, special-purpose to Swiss army knife, small to large, but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. © 2015 Wiley Periodicals, Inc.

  12. Determination of fish gender using fractal analysis of ultrasound images

    DEFF Research Database (Denmark)

    McEvoy, Fintan J.; Tomkiewicz, Jonna; Støttrup, Josianne

    2009-01-01

    The gender of cod Gadus morhua can be determined by considering the complexity in their gonadal ultrasonographic appearance. The fractal dimension (DB) can be used to describe this feature in images. B-mode gonadal ultrasound images in 32 cod, where gender was known, were collected. Fractal...... by subjective analysis alone. The mean (and standard deviation) of the fractal dimension DB for male fish was 1.554 (0.073) while for female fish it was 1.468 (0.061); the difference was statistically significant (P=0.001). The area under the ROC curve was 0.84 indicating the value of fractal analysis in gender...... result. Fractal analysis is useful for gender determination in cod. This or a similar form of analysis may have wide application in veterinary imaging as a tool for quantification of complexity in images...
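
    The fractal dimension DB of a binary image can be estimated by box counting, sketched below; the study's exact estimator is not given in this record, so this generic version, with illustrative box sizes and toy data, is an assumption.

        import numpy as np

        def box_count(mask, size):
            """Number of size x size boxes containing any foreground pixel."""
            h, w = mask.shape
            s = mask[:h - h % size, :w - w % size]
            return s.reshape(h // size, size, w // size, size).any(axis=(1, 3)).sum()

        def fractal_dimension(mask, sizes=(2, 4, 8, 16, 32)):
            counts = [box_count(mask, s) for s in sizes]
            # D_B is the slope of log(count) against log(1/size).
            return np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)[0]

        mask = np.random.rand(256, 256) > 0.5                 # toy binary image
        print(f"estimated D_B = {fractal_dimension(mask):.3f}")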

  13. Analysis of PET hypoxia imaging in the quantitative imaging for personalized cancer medicine program

    International Nuclear Information System (INIS)

    Yeung, Ivan; Driscoll, Brandon; Keller, Harald; Shek, Tina; Jaffray, David; Hedley, David

    2014-01-01

    Quantitative imaging is an important tool in clinical trials testing novel agents and strategies for cancer treatment. The Quantitative Imaging Personalized Cancer Medicine Program (QIPCM) provides clinicians and researchers participating in multi-center clinical trials with a central repository for their imaging data. In addition, a set of tools provides standards of practice (SOP) for end-to-end quality assurance of scanners and image analysis. The four components for data archiving and analysis are the Clinical Trials Patient Database, the Clinical Trials PACS, the data analysis engine(s) and the high-speed networks that connect them. The program provides a suite of software able to perform RECIST, dynamic MRI, CT and PET analysis. The imaging data can be accessed securely from remote sites and analyzed by researchers with these software tools, or with tools provided by the users and installed at the server. Alternatively, QIPCM provides a service for analysis of the imaging data according to developed SOPs. As an example, a clinical study in which patients with unresectable pancreatic adenocarcinoma were studied with dynamic PET-FAZA for hypoxia measurement is discussed. We successfully quantified the degree of hypoxia as well as tumor perfusion in a group of 20 patients in terms of SUV and hypoxic fraction, and found no correlation between bulk tumor perfusion and hypoxia status in this cohort. QIPCM also provides end-to-end QA testing of scanners used in multi-center clinical trials. Based on quality-assurance data from multiple CT-PET scanners, we concluded that quality control of imaging was vital to the success of multi-center trials, as different imaging and reconstruction parameters in PET imaging could lead to very different results in hypoxia imaging. (author)

  14. Digital image analysis of X-ray television with an image digitizer

    International Nuclear Information System (INIS)

    Mochizuki, Yasuo; Akaike, Hisahiko; Ogawa, Hitoshi; Kyuma, Yukishige

    1995-01-01

    When video signals from X-ray fluoroscopy were converted from analog to digital form with an image digitizer, their digital characteristic curves, pre-sampling MTFs and digital Wiener spectra could be measured. This method was advantageous in that data sampling could be verified, because the input pixel values could be checked on a CRT. The image analysis system based on this method is inexpensive and effective in evaluating the image quality of digital systems. It is also expected to serve as a tool for learning measurement techniques and the physical characteristics of digital image quality effectively. (author)

  15. Analysis and clinical usefullness of cardiac ECT images

    International Nuclear Information System (INIS)

    Hayashi, Makoto; Kagawa, Masaaki; Yamada, Yukinori

    1983-01-01

    We evaluated myocardial ECT images and ECG-gated cardiac blood-pool ECT images both experimentally and clinically. ROC curves were used to evaluate the accuracy of diagnosing myocardial infarction. Diagnostic accuracy for MI was superior with myocardial ECT images, and ECT assessment requires no special skill or experience. With ECT, the whole extent of an MI defect can be observed better than on planar images. Left-ventricular end-diastolic volumes estimated from ECT agreed with contrast ventriculography volumes, a first step toward automatic analysis of cardiac volume. (author)

  16. Multivariate statistical analysis for x-ray photoelectron spectroscopy spectral imaging: Effect of image acquisition time

    International Nuclear Information System (INIS)

    Peebles, D.E.; Ohlhausen, J.A.; Kotula, P.G.; Hutton, S.; Blomfield, C.

    2004-01-01

    The acquisition of spectral images for x-ray photoelectron spectroscopy (XPS) is a relatively new approach, although it has been used with other analytical spectroscopy tools for some time. This technique provides full spectral information at every pixel of an image, in order to provide a complete chemical mapping of the imaged surface area. Multivariate statistical analysis techniques applied to the spectral image data allow the determination of chemical component species, and their distribution and concentrations, with minimal data acquisition and processing times. Some of these statistical techniques have proven to be very robust and efficient methods for deriving physically realistic chemical components without input by the user other than the spectral matrix itself. The benefits of multivariate analysis of the spectral image data include significantly improved signal to noise, improved image contrast and intensity uniformity, and improved spatial resolution - which are achieved due to the effective statistical aggregation of the large number of often noisy data points in the image. This work demonstrates the improvements in chemical component determination and contrast, signal-to-noise level, and spatial resolution that can be obtained by the application of multivariate statistical analysis to XPS spectral images

  17. Developments in Dynamic Analysis for quantitative PIXE true elemental imaging

    International Nuclear Information System (INIS)

    Ryan, C.G.

    2001-01-01

    Dynamic Analysis (DA) is a method for projecting quantitative major and trace element images from PIXE event data-streams (off-line or on-line) obtained using the Nuclear Microprobe. The method separates full elemental spectral signatures to produce images that strongly reject artifacts due to overlapping elements, detector effects (such as escape peaks and tailing) and background. The images are also quantitative, stored in ppm-charge units, enabling images to be directly interrogated for the concentrations of all elements in areas of the images. Recent advances in the method include the correction for changing X-ray yields due to varying sample compositions across the image area and the construction of statistical variance images. The resulting accuracy of major element concentrations extracted directly from these images is better than 3% relative as determined from comparisons with electron microprobe point analysis. These results are complemented by error estimates derived from the variance images together with detection limits. This paper provides an update of research on these issues, introduces new software designed to make DA more accessible, and illustrates the application of the method to selected geological problems.

  18. ImageJ-MATLAB: a bidirectional framework for scientific image analysis interoperability.

    Science.gov (United States)

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2017-02-15

    ImageJ-MATLAB is a lightweight Java library facilitating bi-directional interoperability between MATLAB and ImageJ. By defining a standard for translation between matrix and image data structures, researchers are empowered to select the best tool for their image-analysis tasks. Freely available extension to ImageJ2 (http://imagej.net/Downloads). Installation and use instructions available at http://imagej.net/MATLAB_Scripting. Tested with ImageJ 2.0.0-rc-54, Java 1.8.0_66 and MATLAB R2015b. eliceiri@wisc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  19. Chemical imaging and solid state analysis at compact surfaces using UV imaging

    DEFF Research Database (Denmark)

    Wu, Jian X.; Rehder, Sönke; van den Berg, Frans

    2014-01-01

    and excipients in a non-invasive way, as well as mapping the glibenclamide solid state form. An exploratory data analysis supported the critical evaluation of the mapping results and the selection of model parameters for the chemical mapping. The present study demonstrated that the multi-wavelength UV imaging......Fast non-destructive multi-wavelength UV imaging together with multivariate image analysis was utilized to visualize distribution of chemical components and their solid state form at compact surfaces. Amorphous and crystalline solid forms of the antidiabetic compound glibenclamide...

  20. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    For fatigue crack history reconstitution, new methods of quantitative microfractography are being developed based on image processing and textural analysis. SEM magnifications between micro- and macrofractography are used. Two image pre-processing operations were suggested and proven to prepare crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form; compared with the generally used equalization, it conserves the shape of the brightness distribution and preserves the character of the texture. 2. Binarization is used to transform the grayscale image into a system of thick fibres; an objective criterion for the threshold brightness was found, namely the value that yields the maximum number of objects. Both methods were successfully applied together with the subsequent textural analysis.
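
    The binarization criterion described above, choosing the threshold that yields the maximum number of objects, can be sketched as follows; the threshold scan range and the toy image are assumptions.

        import numpy as np
        from scipy import ndimage

        def best_threshold(gray, step=8):
            """Grey level whose binarization yields the most connected objects."""
            best_t, best_n = 0, -1
            for t in range(16, 240, step):
                _, n = ndimage.label(gray > t)     # count connected components
                if n > best_n:
                    best_t, best_n = t, n
            return best_t

        gray = (np.random.rand(128, 128) * 255).astype(np.uint8)  # toy image
        fibres = gray > best_threshold(gray)                      # "thick fibres"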

  1. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Microarray studies enable us to obtain hundreds of thousands of gene-expression or genotype measurements at once, and the technology is indispensable for genome research. The first step is the analysis of scanned microarray images, the most important procedure for obtaining biologically reliable data. Currently, most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than commercial tools.

  2. Development of motion image prediction method using principal component analysis

    International Nuclear Information System (INIS)

    Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma

    2012-01-01

    Respiratory motion limits the accuracy of the area irradiated during lung cancer radiation therapy. Many methods have been introduced to minimize the irradiation of healthy tissue caused by lung tumor motion. The purpose of this research is to develop an algorithm that improves image-guided radiation therapy by predicting motion images. We predict the motion images using principal component analysis (PCA) and the multi-channel singular spectrum analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. Implementing this method in real time is believed to be significant for higher-level tumor tracking, including the detection of sudden abdominal changes during radiation therapy. (author)

  3. A software platform for the analysis of dermatology images

    Science.gov (United States)

    Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon

    2017-11-01

    The purpose of this paper is to present a software platform developed in Python programming environment that can be used for the processing and analysis of dermatology images. The platform provides the capability for reading a file that contains a dermatology image. The platform supports image formats such as Windows bitmaps, JPEG, JPEG2000, portable network graphics, TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image. The automated selection of a ROI includes filtering for smoothing the image and thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool to a dermatologist. Furthermore, it could be used to classify images including from other anatomical parts such as breast or lung, after proper re-training of the classification algorithms.

  4. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    Science.gov (United States)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problems of slow computation and low matching accuracy in image registration, a new image registration algorithm based on a parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract feature points from the two images. Secondly, a Normalized Cross Correlation (NCC) function performs approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed with the K-means clustering algorithm, which removes feature point pairs with obvious errors from the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm refines the feature points to obtain the final matching result, realizing fast and accurate image registration. The experimental results show that the proposed algorithm improves matching accuracy while ensuring real-time performance.
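
    A rough OpenCV skeleton of such a pipeline is sketched below. It keeps the Harris-based detection and RANSAC refinement, but substitutes descriptor matching for the paper's NCC matching and omits the K-means parallax filter; the file names are assumptions.

        import cv2
        import numpy as np

        img1 = cv2.imread("a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
        img2 = cv2.imread("b.png", cv2.IMREAD_GRAYSCALE)

        # Keypoints ranked by the Harris corner score, with ORB descriptors.
        orb = cv2.ORB_create(nfeatures=2000, scoreType=cv2.ORB_HARRIS_SCORE)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)

        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # RANSAC rejects remaining mismatches and yields the final transform.
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        print(f"{int(inliers.sum())} of {len(matches)} matches kept")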

  5. High-throughput high-volume nuclear imaging for preclinical in vivo compound screening.

    Science.gov (United States)

    Macholl, Sven; Finucane, Ciara M; Hesterman, Jacob; Mather, Stephen J; Pauplis, Rachel; Scully, Deirdre; Sosabowski, Jane K; Jouannot, Erwan

    2017-12-01

    Preclinical single-photon emission computed tomography (SPECT)/CT imaging studies are hampered by low throughput, hence are found typically within small volume feasibility studies. Here, imaging and image analysis procedures are presented that allow profiling of a large volume of radiolabelled compounds within a reasonably short total study time. Particular emphasis was put on quality control (QC) and on fast and unbiased image analysis. 2-3 His-tagged proteins were simultaneously radiolabelled by 99m Tc-tricarbonyl methodology and injected intravenously (20 nmol/kg; 100 MBq; n = 3) into patient-derived xenograft (PDX) mouse models. Whole-body SPECT/CT images of 3 mice simultaneously were acquired 1, 4, and 24 h post-injection, extended to 48 h and/or by 0-2 h dynamic SPECT for pre-selected compounds. Organ uptake was quantified by automated multi-atlas and manual segmentations. Data were plotted automatically, quality controlled and stored on a collaborative image management platform. Ex vivo uptake data were collected semi-automatically and analysis performed as for imaging data. >500 single animal SPECT images were acquired for 25 proteins over 5 weeks, eventually generating >3500 ROI and >1000 items of tissue data. SPECT/CT images clearly visualized uptake in tumour and other tissues even at 48 h post-injection. Intersubject uptake variability was typically 13% (coefficient of variation, COV). Imaging results correlated well with ex vivo data. The large data set of tumour, background and systemic uptake/clearance data from 75 mice for 25 compounds allows identification of compounds of interest. The number of animals required was reduced considerably by longitudinal imaging compared to dissection experiments. All experimental work and analyses were accomplished within 3 months expected to be compatible with drug development programmes. QC along all workflow steps, blinding of the imaging contract research organization to compound properties and

  6. An approach for quantitative image quality analysis for CT

    Science.gov (United States)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurement of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis tool kit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates PCA components with sparse loadings, used in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.

  7. [Quantitative data analysis for live imaging of bone].

    Science.gov (United States)

    Seno, Shigeto

    Bone is a hard tissue, so it has long been difficult to observe the interior of living bone. With progress in microscopy and fluorescent-probe technology in recent years, it has become possible to observe the various activities of the many cell types that make up bone tissue. On the other hand, the quantitative growth of the data and the diversification and complexity of the images make quantitative analysis by visual inspection difficult, and methodologies for microscopic image processing and data analysis have been awaited. In this article, we introduce bioimage informatics, a research field at the boundary of biology and information science, and then outline basic image processing technology for the quantitative analysis of live imaging data of bone.

  8. Image analysis of multiple moving wood pieces in real time

    Science.gov (United States)

    Wang, Weixing

    2006-02-01

    This paper presents algorithms for image processing and analysis of wood piece materials. The algorithms were designed for automatic detection of wood pieces on a moving conveyor belt or a truck. When the wood objects are moving, the hard task is to trace their contours in an optimal way. To make the algorithms work efficiently in the plant, a flexible online system was designed and developed, consisting mainly of image acquisition, image processing, object delineation and analysis. A number of newly developed algorithms can delineate wood objects with high accuracy and high speed, and in the wood piece analysis part each piece is characterized by a number of visual parameters, which can also be used to construct experimental models directly in the system.

  9. New approach to gallbladder ultrasonic images analysis and lesions recognition.

    Science.gov (United States)

    Bodzioch, Sławomir; Ogiela, Marek R

    2009-03-01

    This paper presents a new approach to gallbladder ultrasound image processing and analysis aimed at detecting disease symptoms in the processed images. First, a new method of extracting gallbladder contours from USG images is presented. A major stage in this filtration is segmenting and sectioning off the areas occupied by the organ; in most cases this procedure is based on filtration, which plays a key role in the process of diagnosing pathological changes. Unfortunately, ultrasound images are among the most troublesome to analyse owing to the echogenic inconsistency of the structures under observation. This paper provides a novel algorithm for the holistic extraction of gallbladder image contours, based on rank filtration and on the analysis of histogram sections of the examined organ. The second part concerns detecting symptoms of gallbladder lesions. Automating a diagnostic process always comes down to developing algorithms that analyse the object of diagnosis and verify the occurrence of symptoms related to a given condition; usually the final stage is to make a diagnosis based on the detected symptoms. This last stage can be carried out either with dedicated expert systems or with a more classic pattern-analysis approach, such as rules that determine the illness from the detected symptoms. This paper discusses pattern-analysis algorithms for gallbladder image interpretation aimed at classifying the most frequent symptoms of disease in this organ.

  10. Advanced Color Image Processing and Analysis

    CERN Document Server

    2013-01-01

    This volume does much more than survey modern advanced color processing. Starting with a historical perspective on ways we have classified color, it sets out the latest numerical techniques for analyzing and processing colors, the leading edge in our search to accurately record and print what we see. The human eye perceives only a fraction of available light wavelengths, yet we live in a multicolor world of myriad shining hues. Colors rich in metaphorical associations make us “purple with rage” or “green with envy” and cause us to “see red.” Defining colors has been the work of centuries, culminating in today’s complex mathematical coding that nonetheless remains a work in progress: only recently have we possessed the computing capacity to process the algebraic matrices that reproduce color more accurately. With chapters on dihedral color and image spectrometers, this book provides technicians and researchers with the knowledge they need to grasp the intricacies of today’s color imaging.

  11. Automated image analysis of atomic force microscopy images of rotavirus particles

    International Nuclear Information System (INIS)

    Venkataraman, S.; Allison, D.P.; Qi, H.; Morrell-Falvey, J.L.; Kallewaard, N.L.; Crowe, J.E.; Doktycz, M.J.

    2006-01-01

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM
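
    The automated extraction of per-particle dimensions described here can be sketched generically with scikit-image; the thresholding rule, file name and reported statistic are assumptions rather than the authors' algorithm.

        import numpy as np
        from skimage import filters, measure

        height = np.load("afm_height.npy")                # hypothetical 2D height map
        mask = height > filters.threshold_otsu(height)    # assumed segmentation rule
        labels = measure.label(mask)                      # one label per particle

        props = measure.regionprops(labels)
        diam = [np.sqrt(4.0 * p.area / np.pi) for p in props]  # equivalent diameters
        print(f"{len(props)} particles, mean diameter {np.mean(diam):.1f} px")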

  12. Automated image analysis of atomic force microscopy images of rotavirus particles

    Energy Technology Data Exchange (ETDEWEB)

    Venkataraman, S. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Allison, D.P. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Department of Biochemistry, Cellular, and Molecular Biology, University of Tennessee, Knoxville, TN 37996 (United States); Molecular Imaging Inc. Tempe, AZ, 85282 (United States); Qi, H. [Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996 (United States); Morrell-Falvey, J.L. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Kallewaard, N.L. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Crowe, J.E. [Vanderbilt University Medical Center, Nashville, TN 37232-2905 (United States); Doktycz, M.J. [Life Sciences Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]. E-mail: doktyczmj@ornl.gov

    2006-06-15

    A variety of biological samples can be imaged by the atomic force microscope (AFM) under environments that range from vacuum to ambient to liquid. Generally imaging is pursued to evaluate structural features of the sample or perhaps identify some structural changes in the sample that are induced by the investigator. In many cases, AFM images of sample features and induced structural changes are interpreted in general qualitative terms such as markedly smaller or larger, rougher, highly irregular, or smooth. Various manual tools can be used to analyze images and extract more quantitative data, but this is usually a cumbersome process. To facilitate quantitative AFM imaging, automated image analysis routines are being developed. Viral particles imaged in water were used as a test case to develop an algorithm that automatically extracts average dimensional information from a large set of individual particles. The extracted information allows statistical analyses of the dimensional characteristics of the particles and facilitates interpretation related to the binding of the particles to the surface. This algorithm is being extended for analysis of other biological samples and physical objects that are imaged by AFM.

  13. Fractal-Based Image Analysis In Radiological Applications

    Science.gov (United States)

    Dellepiane, S.; Serpico, S. B.; Vernazza, G.; Viviani, R.

    1987-10-01

    We present some preliminary results of a study aimed to assess the actual effectiveness of fractal theory and to define its limitations in the area of medical image analysis for texture description, in particular, in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) has been performed on synthetically generated images of known fractal dimensions. Moreover, we analyzed some radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity was related to the D value, i.e., texture behaviour. Results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (i.e., the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values, and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.

  14. CNN-Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for noisy images, which are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods like DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existent details are added. These properties are essential in retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
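
    Zero Component Analysis is taken here to be ZCA whitening of the training patches, which can be sketched as below; the epsilon value and toy patch matrix are illustrative.

        import numpy as np

        def zca_whiten(X, eps=1e-5):
            """X: (n_samples, n_features) matrix of flattened patches."""
            Xc = X - X.mean(axis=0)
            cov = Xc.T @ Xc / Xc.shape[0]
            U, S, _ = np.linalg.svd(cov)
            W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T   # ZCA transform
            return Xc @ W

        patches = np.random.rand(1000, 64)                   # toy 8x8 patches
        white = zca_whiten(patches)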

  15. NEPR Principle Component Analysis - NOAA TIFF Image

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This GeoTiff is a representation of seafloor topography in Northeast Puerto Rico derived from a bathymetry model with a principal component analysis (PCA). The area...

  16. SynPAnal: software for rapid quantification of the density and intensity of protein puncta from fluorescence microscopy images of neurons.

    Directory of Open Access Journals (Sweden)

    Eric Danielson

    Continuous modification of the protein composition at synapses is a driving force for the plastic changes of synaptic strength, and provides the fundamental molecular mechanism of synaptic plasticity and information storage in the brain. Studying synaptic protein turnover is not only important for understanding learning and memory, but also has direct implications for understanding pathological conditions such as aging, neurodegenerative diseases, and psychiatric disorders. Proteins involved in synaptic transmission and synaptic plasticity are typically concentrated at synapses of neurons and thus appear as puncta (clusters) in immunofluorescence microscopy images. Quantitative measurement of changes in the density, intensity, and size of puncta of specific proteins provides valuable information on their function in synaptic transmission, circuit development, synaptic plasticity, and synaptopathy. Unfortunately, puncta quantification is very labor intensive and time consuming. In this article, we describe a software tool designed for the rapid semi-automatic detection and quantification of synaptic protein puncta from 2D immunofluorescence images generated by confocal laser scanning microscopy. The software, dubbed SynPAnal (for Synaptic Puncta Analysis), streamlines quantification of puncta density and average intensity, thereby increasing data-analysis throughput compared to manual methods. SynPAnal is stand-alone software written in the JAVA programming language, and is therefore portable and platform-independent.
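
    A generic sketch of this kind of puncta quantification (not SynPAnal itself, which is written in Java) is shown below with scikit-image; the thresholding rule, minimum object size and input file are assumptions.

        import numpy as np
        from skimage import filters, measure, morphology

        img = np.load("synapse_channel.npy")                 # hypothetical 2D image
        mask = img > filters.threshold_otsu(img)             # assumed detection rule
        mask = morphology.remove_small_objects(mask, min_size=4)

        labels = measure.label(mask)
        props = measure.regionprops(labels, intensity_image=img)
        density = len(props) / mask.size                     # puncta per pixel
        mean_int = np.mean([p.mean_intensity for p in props])
        print(f"{len(props)} puncta, density {density:.2e}/px, "
              f"mean intensity {mean_int:.1f}")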

  17. A parallel solution for high resolution histological image analysis.

    Science.gov (United States)

    Bueno, G; González, R; Déniz, O; García-Rojo, M; González-García, J; Fernández-Carrobles, M M; Vállez, N; Salido, J

    2012-10-01

    This paper describes a general methodology for developing parallel image processing algorithms based on message passing for high resolution images (on the order of several gigabytes). These algorithms have been applied to histological images and must be executed on massively parallel processing architectures. Advances in new technologies for complete slide digitalization in pathology have been combined with developments in biomedical informatics. However, the efficient use of these digital slide systems is still a challenge. The image processing that these slides undergo is still limited both in terms of data processed and processing methods. The work presented here focuses on the need to design and develop parallel image processing tools capable of obtaining and analyzing the entire gamut of information included in digital slides. Tools have been developed to assist pathologists in image analysis and diagnosis, and they cover low- and high-level image processing methods applied to histological images. Code portability, reusability and scalability have been tested on the following parallel computing architectures: distributed memory with massively parallel processors and two networks, INFINIBAND and Myrinet, composed of 17 and 1024 nodes respectively. The proposed parallel framework is a flexible, high-performance solution and shows that efficient processing of digital microscopic images is possible and may offer important benefits to pathology laboratories. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  18. Acne image analysis: lesion localization and classification

    Science.gov (United States)

    Abas, Fazly Salleh; Kaffenberger, Benjamin; Bikowski, Joseph; Gurcan, Metin N.

    2016-03-01

    Acne is a common skin condition present predominantly in the adolescent population, but it may continue into adulthood. Scarring occurs commonly as a sequel to severe inflammatory acne. The presence of acne and the resultant scars are more than cosmetic, with significant potential to alter quality of life and even job prospects. The psychosocial effects of acne and scars can be disturbing and may be a risk factor for serious psychological concerns. Treatment efficacy is generally determined based on an unvalidated gestalt by the physician and patient. However, validated assessment of acne can be challenging and time consuming. Acne can be classified into several morphologies including closed comedones (whiteheads), open comedones (blackheads), papules, pustules, cysts (nodules) and scars. For a validated assessment, the different morphologies need to be counted independently, a method that is far too time consuming given the limited time available for a consultation. However, it is practical to record and analyze images, since dermatologists can validate the severity of acne within seconds after uploading an image. This paper covers the processes of region-of-interest determination using entropy-based filtering and thresholding, as well as acne lesion feature extraction. Feature extraction methods using discrete wavelet frames and the gray-level co-occurrence matrix are presented and their effectiveness in separating the six major acne lesion classes is discussed. Several classifiers were used to test the extracted features. Correct classification accuracy as high as 85.5% was achieved using a binary classification tree with fourteen principal components as descriptors. Further studies are underway to improve the algorithm's performance and validate it on a larger database.
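
    Texture descriptors of the kind named here, grey-level co-occurrence matrix (GLCM) features, can be sketched with scikit-image as follows; the toy region of interest and the choice of distances and angles are illustrative.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        roi = (np.random.rand(64, 64) * 255).astype(np.uint8)   # toy lesion ROI
        glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        # One value per (distance, angle) pair for each texture property.
        feats = {name: graycoprops(glcm, name).ravel()
                 for name in ("contrast", "homogeneity", "energy", "correlation")}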

  19. Antibiogramj: A tool for analysing images from disk diffusion tests.

    Science.gov (United States)

    Alonso, C A; Domínguez, C; Heras, J; Mata, E; Pascual, V; Torres, C; Zarazaga, M

    2017-05-01

    Disk diffusion testing, known as the antibiogram, is widely applied in microbiology to determine the antimicrobial susceptibility of microorganisms. The diameter of the zone of growth inhibition around each antimicrobial disk is frequently measured manually by specialists using a ruler. This is a time-consuming and error-prone task that might be simplified using automated or semi-automated inhibition zone readers. However, most readers are expensive instruments with embedded software that require significant changes in laboratory design and workflow. Based on the workflow employed by specialists to determine the antimicrobial susceptibility of microorganisms, we have designed a software tool that semi-automatises the process from images of disk diffusion tests. Standard computer vision techniques are employed to achieve this automatisation. We present AntibiogramJ, a user-friendly, open-source software tool to semi-automatically determine, measure and categorise inhibition zones in images of disk diffusion tests. AntibiogramJ is implemented in Java and deals with images captured by any device that incorporates a camera, including digital cameras and mobile phones. The fully automatic procedure of AntibiogramJ for measuring inhibition zones achieves an overall agreement of 87% with an expert microbiologist; moreover, AntibiogramJ includes features to easily detect when the automatic reading is incorrect and to fix it manually to obtain the correct result. AntibiogramJ is a user-friendly, platform-independent, open-source and free tool that, to the best of our knowledge, is the most complete software tool for antibiogram analysis that requires no investment in new equipment or changes to the laboratory. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Quantitative analysis and classification of AFM images of human hair.

    Science.gov (United States)

    Gurden, S P; Monteiro, V F; Longo, E; Ferreira, M M C

    2004-07-01

    The surface topography of human hair, as defined by the outer layer of cellular sheets, termed cuticles, largely determines the cosmetic properties of the hair. The condition of the cuticles is of great cosmetic importance and also has the potential to aid diagnosis in the medical and forensic sciences. Atomic force microscopy (AFM) has been demonstrated to offer unique advantages for analysis of the hair surface, mainly due to its high image resolution and the ease of sample preparation. This article presents an algorithm for the automatic analysis of AFM images of human hair. The cuticular structure is characterized using a series of descriptors, such as step height, tilt angle and cuticle density, allowing quantitative analysis and comparison of different images. The usefulness of this approach is demonstrated by a classification study. Thirty-eight AFM images were measured, comprising hair samples from (a) untreated and bleached hair, and (b) the root and distal ends of the hair fibre. The multivariate classification technique partial least squares discriminant analysis was used to test the ability of the algorithm to characterize the images according to the properties of the hair samples. Most of the images (86%) were classified correctly.
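
    The classification step, partial least squares discriminant analysis on cuticle descriptors, is commonly implemented as PLS regression against class labels with a decision threshold; below is a sketch under that assumption, with toy data standing in for the real descriptor matrix and a 19/19 class split assumed.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        X = np.random.rand(38, 6)          # toy descriptors (step height, tilt
                                           # angle, cuticle density, ...)
        y = np.repeat([0, 1], 19)          # toy labels, e.g. untreated vs bleached

        pls = PLSRegression(n_components=2).fit(X, y)
        pred = (pls.predict(X).ravel() > 0.5).astype(int)   # threshold the response
        print(f"training accuracy: {(pred == y).mean():.0%}")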