WorldWideScience

Sample records for robust tissue classification

  1. Robust tissue classification for reproducible wound assessment in telemedicine environments

    Science.gov (United States)

    Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves

    2010-04-01

    In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple hand-held digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under changes in lighting conditions, viewpoint, and camera, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping onto the medical reference developed from image labeling by a college of experts.
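
    The final stage described above, an SVM acting on color features of segmented regions, can be sketched as follows. This is a minimal toy illustration assuming per-region mean RGB features with invented class statistics; it is not the authors' trained model.

```python
# Toy sketch: SVM classification of wound tissue regions from mean-RGB
# features. Class color statistics are invented for illustration.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 0 = granulation (reddish), 1 = slough (yellowish), 2 = necrotic (dark)
granulation = rng.normal([180, 60, 60], 15, size=(40, 3))
slough      = rng.normal([200, 180, 80], 15, size=(40, 3))
necrotic    = rng.normal([50, 40, 40], 15, size=(40, 3))
X = np.vstack([granulation, slough, necrotic])
y = np.repeat([0, 1, 2], 40)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
pred = clf.predict([[185, 55, 65]])   # a reddish, granulation-like region
print(pred[0])
```

In the paper's pipeline the features are richer (color-corrected and texture-based) and the classification is driven by the segmentation, but the classifier stage has this general shape.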

  2. Robust multi-site MR data processing: iterative optimization of bias correction, tissue classification, and registration.

    Science.gov (United States)

    Young Kim, Eun; Johnson, Hans J

    2013-01-01

    A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale, heterogeneous, multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) use of an extended set of tissue definitions, and (4) use of multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated in a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, and it offers a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve the generalizability and robustness of processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human-subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.

  3. Automating the expert consensus paradigm for robust lung tissue classification

    Science.gov (United States)

    Rajagopalan, Srinivasan; Karwoski, Ronald A.; Raghunath, Sushravya; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Clinicians confirm the efficacy of dynamic multidisciplinary interactions in diagnosing lung disease/wellness from CT scans. However, routine clinical practice cannot readily accommodate such interactions. Current schemes for automating lung tissue classification are based on a single elusive disease-differentiating metric; this undermines their reliability in routine diagnosis. We propose a computational workflow that uses a collection (#: 15) of probability density function (pdf)-based similarity metrics to automatically cluster pattern-specific (#patterns: 5) volumes of interest (#VOI: 976) extracted from the lung CT scans of 14 patients. The resultant clusters are refined for intra-partition compactness and subsequently aggregated into a super cluster using a cluster ensemble technique. The super clusters were validated against the consensus agreement of four clinical experts. The aggregations correlated strongly with expert consensus. By effectively mimicking the expertise of physicians, the proposed workflow could make automation of lung tissue classification a clinical reality.

  4. Tissue Classification

    DEFF Research Database (Denmark)

    Van Leemput, Koen; Puonti, Oula

    2015-01-01

    Computational methods for automatically segmenting magnetic resonance images of the brain have seen tremendous advances in recent years. So-called tissue classification techniques, aimed at extracting the three main brain tissue classes (white matter, gray matter, and cerebrospinal fluid), are now...... well established. In their simplest form, these methods classify voxels independently based on their intensity alone, although much more sophisticated models are typically used in practice. This article aims to give an overview of often-used computational techniques for brain tissue classification...

  5. A Dirichlet process mixture model for brain MRI tissue classification.

    Science.gov (United States)

    Ferreira da Silva, Adelino R

    2007-04-01

    Accurate classification of magnetic resonance images according to tissue type or region of interest has become a critical requirement in diagnosis, treatment planning, and cognitive neuroscience. Several authors have shown that finite mixture models give excellent results in the automated segmentation of MR images of the normal human brain. However, the performance and robustness of finite mixture models deteriorate when the models have to deal with a variety of anatomical structures. In this paper, we propose a nonparametric Bayesian model for tissue classification of MR images of the brain. The model, known as the Dirichlet process mixture model, uses Dirichlet process priors to overcome the limitations of current parametric finite mixture models. To validate the accuracy and robustness of our method, we present the results of experiments carried out on simulated MR brain scans, as well as on real MR image data. The results are compared with similar results from other well-known MRI segmentation methods.
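
    The Dirichlet process prior at the heart of this model can be illustrated by its truncated stick-breaking construction, which generates mixture weights without fixing the number of components in advance. A minimal numpy sketch with toy parameters, not the paper's sampler:

```python
# Stick-breaking construction of Dirichlet-process mixture weights,
# truncated at K components: break a unit-length stick into pieces whose
# sizes serve as mixture weights, letting the data decide how many
# components carry appreciable mass.
import numpy as np

def stick_breaking(alpha, K, rng):
    """Draw K truncated stick-breaking weights for DP(alpha)."""
    betas = rng.beta(1.0, alpha, size=K)                    # stick fractions
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining                                # mixture weights

rng = np.random.default_rng(42)
w = stick_breaking(alpha=2.0, K=20, rng=rng)
print(w.sum())   # approaches 1 as K grows
```

Each tissue class would then be one mixture component, with the DP prior letting the effective number of components adapt to the anatomy.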

  6. Robust multi-tissue gene panel for cancer detection

    Directory of Open Access Journals (Sweden)

    Talantov Dmitri

    2010-06-01

    Full Text Available Abstract Background We have identified a set of genes whose relative mRNA expression levels in various solid tumors can be used to robustly distinguish cancer from matching normal tissue. Our current feature set consists of 113 gene probes for 104 unique genes, originally identified as differentially expressed in solid primary tumors in microarray data on the Affymetrix HG-U133A platform in five tissue types: breast, colon, lung, prostate and ovary. For each dataset, we first identified a set of genes significantly differentially expressed in tumor vs. normal tissue at p-value = 0.05 using an experimentally derived error model. Our common cancer gene panel is the intersection of these sets of significantly dysregulated genes and can distinguish tumors from normal tissue in all five tissue types. Methods Frozen tumor specimens were obtained from two commercial vendors, Clinomics (Pittsfield, MA) and Asterand (Detroit, MI). Biotinylated targets were prepared using published methods (Affymetrix, CA) and hybridized to Affymetrix U133A GeneChips (Affymetrix, CA). Expression values for each gene were calculated using the Affymetrix GeneChip analysis software MAS 5.0. We then used a software package called Genes@Work for differential expression discovery, and an SVM light linear kernel for building classification models. Results We validated the predictability of this gene list on several publicly available data sets generated on the same platform. Of note, when analysing the lung cancer data set of Spira et al., using an SVM linear kernel classifier, our gene panel had 94.7% leave-one-out accuracy compared to 87.8% using the gene panel in the original paper. In addition, we performed high-throughput validation on the Dana Farber Cancer Institute GCOD database and several GEO datasets. Conclusions Our results showed the potential of this panel as a robust classification tool for multiple tumor types on the Affymetrix platform, as well as on other whole-genome arrays.

  7. Robust electrocardiogram (ECG) beat classification using discrete wavelet transform

    International Nuclear Information System (INIS)

    Minhas, Fayyaz-ul-Amir Afsar; Arif, Muhammad

    2008-01-01

    This paper presents a robust technique for the classification of six types of heartbeats from an electrocardiogram (ECG). Features extracted from the QRS complex of the ECG using a wavelet transform, along with the instantaneous RR interval, are used for beat classification. The wavelet transform utilized for feature extraction in this paper can also be employed for QRS delineation, leading to a reduction in overall system complexity, as no separate feature extraction stage would be required in a practical implementation of the system. Only 11 features are used for beat classification, with a classification accuracy of ∼99.5% using a KNN classifier. Another main advantage of this method is its robustness to noise, which is illustrated in this paper through experimental results. Furthermore, principal component analysis (PCA) has been used for feature reduction, which reduces the number of features from 11 to 6 while retaining the high beat classification accuracy. Due to its reduced computational complexity (using six features, the time required is ∼4 ms per beat), simple classifier, and noise robustness (95% accuracy at a 10 dB signal-to-noise ratio), this method offers substantial advantages over previous techniques for implementation in a practical ECG analyzer.
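
    The PCA-plus-KNN stage of this pipeline (11 features reduced to 6, then nearest-neighbour voting) can be sketched with synthetic beat features. Everything below is a toy stand-in; the real features are wavelet coefficients of the QRS complex plus the RR interval.

```python
# Sketch: PCA via SVD for feature reduction (11 -> 6 components),
# followed by a simple k-nearest-neighbour majority vote.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 11)),     # class-0 "beats"
               rng.normal(3, 1, (50, 11))])    # class-1 "beats"
y = np.repeat([0, 1], 50)

# PCA: center, then project onto the 6 leading right-singular vectors.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:6].T

def knn_predict(query, Z, y, k=3):
    """Majority vote among the k nearest training beats."""
    d = np.linalg.norm(Z - query, axis=1)
    votes = y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

q = (np.full(11, 3.0) - X.mean(axis=0)) @ Vt[:6].T   # a class-1-like beat
print(knn_predict(q, Z, y))
```

The same two steps, applied to the 11 wavelet/RR features, give the reduced 6-dimensional representation the paper evaluates.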

  8. Quantum Cascade Laser-Based Infrared Microscopy for Label-Free and Automated Cancer Classification in Tissue Sections.

    Science.gov (United States)

    Kuepper, Claus; Kallenbach-Thieltges, Angela; Juette, Hendrik; Tannapfel, Andrea; Großerueschkamp, Frederik; Gerwert, Klaus

    2018-05-16

    A feasibility study using a quantum cascade laser-based infrared microscope for the rapid and label-free classification of colorectal cancer tissues is presented. Infrared imaging is a reliable, robust, automated, and operator-independent tissue classification method that has been used for the differential classification of tissue thin sections, identifying tumorous regions. However, the long acquisition times of the FT-IR-based microscopes used so far have hampered the clinical translation of this technique. Here, the quantum cascade laser-based microscope provides infrared images for precise tissue classification within a few minutes. We analyzed 110 patients with UICC stage II and III colorectal cancer, showing 96% sensitivity and 100% specificity for this label-free method compared to histopathology, the gold standard in routine clinical diagnostics. The main hurdle for the clinical translation of IR imaging is now overcome by the short acquisition time for high-quality diagnostic images, which is in the same time range as frozen sections by pathologists.

  9. A Robust Geometric Model for Argument Classification

    Science.gov (United States)

    Giannone, Cristina; Croce, Danilo; Basili, Roberto; de Cao, Diego

    Argument classification is the task of assigning semantic roles to syntactic structures in natural language sentences. Supervised learning techniques for frame semantics have recently been shown to benefit from rich sets of syntactic features. However, argument classification is also highly dependent on the semantics of the lexical items involved. Empirical studies have shown that the domain dependence of lexical information causes large performance drops in out-of-domain tests. In this paper, a distributional approach is proposed to improve the robustness of the learning model against out-of-domain lexical phenomena.

  10. Learning features for tissue classification with the classification restricted Boltzmann machine

    DEFF Research Database (Denmark)

    van Tulder, Gijs; de Bruijne, Marleen

    2014-01-01

    Performance of automated tissue classification in medical imaging depends on the choice of descriptive features. In this paper, we show how restricted Boltzmann machines (RBMs) can be used to learn features that are especially suited for texture-based tissue classification. We introduce the convo...... outperform conventional RBM-based feature learning, which is unsupervised and uses only a generative learning objective, as well as often-used filter banks. We show that a mixture of generative and discriminative learning can produce filters that give a higher classification accuracy....

  11. Median Robust Extended Local Binary Pattern for Texture Classification.

    Science.gov (United States)

    Liu, Li; Lao, Songyang; Fieguth, Paul W; Guo, Yulan; Wang, Xiaogang; Pietikäinen, Matti

    2016-03-01

    Local binary patterns (LBP) are considered among the most computationally efficient high-performance texture features. However, the LBP method is very sensitive to image noise and is unable to capture macrostructure information. To address these disadvantages, in this paper we introduce a novel descriptor for texture classification, the median robust extended LBP (MRELBP). Different from the traditional LBP and many LBP variants, MRELBP compares regional image medians rather than raw image intensities. A multiscale LBP-type descriptor is computed by efficiently comparing image medians over a novel sampling scheme, which can capture both microstructure and macrostructure texture information. A comprehensive evaluation on benchmark data sets reveals MRELBP's high performance: it is robust to gray-scale variations, rotation changes, and noise, yet has a low computational cost. MRELBP produces the best classification scores of 99.82%, 99.38%, and 99.77% on three popular Outex test suites. More importantly, MRELBP is shown to be highly robust to image noise, including Gaussian noise, Gaussian blur, salt-and-pepper noise, and random pixel corruption.
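
    The core idea, comparing regional medians instead of raw pixel intensities, can be sketched in a few lines of numpy. This is a single-scale toy version of the descriptor (3x3 patch medians, 8 neighbours at radius 2), not the authors' full multiscale MRELBP:

```python
# Toy LBP-style code built from regional medians: each of 8 neighbour-patch
# medians is compared to the centre-patch median, and the comparison bits
# form an 8-bit code. Medians make the comparison tolerant to outlier pixels.
import numpy as np

def patch_median(img, r, c, w=1):
    """Median of the (2w+1)x(2w+1) patch centred at (r, c)."""
    return np.median(img[r - w:r + w + 1, c - w:c + w + 1])

def median_lbp_code(img, r, c, radius=2):
    """8-bit code: neighbour-patch median >= centre-patch median."""
    center = patch_median(img, r, c)
    code = 0
    for k in range(8):
        ang = 2 * np.pi * k / 8
        rr = int(round(r + radius * np.sin(ang)))
        cc = int(round(c + radius * np.cos(ang)))
        code |= int(patch_median(img, rr, cc) >= center) << k
    return code

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16)).astype(float)
print(median_lbp_code(img, 8, 8))   # an 8-bit texture code in [0, 255]
```

The published descriptor additionally pools codes over multiple radii and patch sizes into a joint histogram used for classification.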

  12. Pathological Bases for a Robust Application of Cancer Molecular Classification

    Directory of Open Access Journals (Sweden)

    Salvador J. Diaz-Cano

    2015-04-01

    Full Text Available Any robust classification system depends on its purpose and must refer to accepted standards, its strength relying on predictive values and a careful consideration of known factors that can affect its reliability. In this context, a molecular classification of human cancer must refer to the current gold standard (histological classification) and try to improve it with key prognosticators for metastatic potential, staging and grading. Although organ-specific examples have been published based on proteomics, transcriptomics and genomics evaluations, the most popular approach uses gene expression analysis as a direct correlate of cellular differentiation, which represents the key feature of the histological classification. RNA is a labile molecule that varies significantly according to the preservation protocol, its transcription reflects the adaptation of the tumor cells to the microenvironment, it can be passed between cells through mechanisms of intercellular transference of genetic information (exosomes), and it is exposed to epigenetic modifications. More robust classifications should be based on stable molecules, represented at the genetic level by DNA, to improve reliability, and their analysis must deal with the concept of intratumoral heterogeneity, which is at the origin of tumor progression and is the byproduct of the selection process during the clonal expansion and progression of neoplasms. The simultaneous analysis of multiple DNA targets and next-generation sequencing offer the best practical approach for an analytical genomic classification of tumors.

  13. Support vector machine classification and validation of cancer tissue samples using microarray expression data.

    Science.gov (United States)

    Furey, T S; Cristianini, N; Duffy, N; Bednarski, D W; Schummer, M; Haussler, D

    2000-10-01

    DNA microarray experiments, which generate thousands of gene expression measurements, are being used to gather information from tissue and cell samples regarding gene expression differences that will be useful in diagnosing disease. We have developed a new method to analyse this kind of data using support vector machines (SVMs). This analysis consists of both classification of the tissue samples and an exploration of the data for mis-labeled or questionable tissue results. We demonstrate the method in detail on samples consisting of ovarian cancer tissues, normal ovarian tissues, and other normal tissues. The dataset consists of expression experiment results for 97,802 cDNAs for each tissue. As a result of computational analysis, a tissue sample is discovered and confirmed to be wrongly labeled. Upon correction of this mistake and the removal of an outlier, perfect classification of tissues is achieved, but not with high confidence. We identify and analyse a subset of genes from the ovarian dataset whose expression is highly differentiated between the types of tissues. To show the robustness of the SVM method, two previously published datasets from other types of tissues or cells are analysed. The results are comparable to those previously obtained. We show that other machine learning methods also perform comparably to the SVM on many of those datasets. The SVM software is available at http://www.cs.columbia.edu/~bgrundy/svm.
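
    The evaluation pattern used here, leave-one-out cross-validation of a linear-kernel SVM on a samples-by-genes expression matrix, can be sketched with synthetic data. The expression values below are invented; this is not the ovarian dataset.

```python
# Sketch: leave-one-out (LOO) evaluation of a linear SVM on a toy
# expression matrix (rows = tissue samples, columns = genes).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_genes = 30
tumor  = rng.normal(1.0, 0.5, (10, n_genes))    # up-shifted "tumor" profiles
normal = rng.normal(-1.0, 0.5, (10, n_genes))   # down-shifted "normal" profiles
X = np.vstack([tumor, normal])
y = np.repeat([1, 0], 10)

correct = 0
for i in range(len(y)):                          # hold out one sample per fold
    mask = np.arange(len(y)) != i
    clf = SVC(kernel="linear").fit(X[mask], y[mask])
    correct += clf.predict(X[i:i + 1])[0] == y[i]
print(correct / len(y))                          # LOO accuracy
```

The paper's mislabeled-sample hunt falls out of the same loop: samples that the held-out classifier consistently gets wrong are flagged for inspection.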

  14. Integrating Globality and Locality for Robust Representation Based Classification

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2014-01-01

    Full Text Available The representation-based classification method (RBCM) has shown huge potential for face recognition since it first emerged. The linear regression classification (LRC) method and the collaborative representation classification (CRC) method are two well-known RBCMs. LRC and CRC exploit the training samples of each class and all the training samples, respectively, to represent the testing sample, and subsequently conduct classification on the basis of the representation residual. The LRC method can be viewed as a “locality representation” method because it uses only the training samples of each class to represent the testing sample, and it cannot embody the effectiveness of “globality representation.” Conversely, the CRC method cannot enjoy the locality benefit of the general RBCM. We therefore propose to integrate CRC and LRC to perform more robust representation-based classification. Experimental results on benchmark face databases substantially demonstrate that the proposed method achieves high classification accuracy.

  15. Automated Detection of Connective Tissue by Tissue Counter Analysis and Classification and Regression Trees

    Directory of Open Access Journals (Sweden)

    Josef Smolle

    2001-01-01

    Full Text Available Objective: To evaluate the feasibility of the CART (Classification and Regression Tree) procedure for the recognition of microscopic structures in tissue counter analysis. Methods: Digital microscopic images of H&E-stained slides of normal human skin and of primary malignant melanoma were overlaid with regularly distributed square measuring masks (elements), and grey value, texture and colour features within each mask were recorded. In the learning set, elements were interactively labeled as representing either connective tissue of the reticular dermis, other tissue components, or background. Subsequently, CART models were built on these data sets. Results: Implementation of the CART classification rules in the image analysis program showed that, in an independent test set, 94.1% of elements classified as connective tissue of the reticular dermis were correctly labeled. Automated measurements of the total amount of tissue and of the amount of connective tissue within a slide showed high reproducibility (r=0.97 and r=0.94, respectively; p < 0.001). Conclusions: The CART procedure in tissue counter analysis yields simple and reproducible classification rules for tissue elements.
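
    The element-classification step, a CART model over per-mask grey-value and texture features, can be sketched with scikit-learn's decision tree (which implements CART). The feature distributions below are invented for illustration; the real values were measured from the H&E-stained slides.

```python
# Toy CART model: classify square measuring masks into background,
# connective tissue, or other tissue from two invented features
# (mean grey value, texture contrast).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
# 0 = background (bright, flat), 1 = connective tissue, 2 = other tissue
background = np.column_stack([rng.normal(240, 5, 50),  rng.normal(2, 1, 50)])
connective = np.column_stack([rng.normal(180, 10, 50), rng.normal(25, 5, 50)])
other      = np.column_stack([rng.normal(120, 10, 50), rng.normal(10, 3, 50)])
X = np.vstack([background, connective, other])
y = np.repeat([0, 1, 2], 50)

cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(cart.predict([[185.0, 24.0]])[0])   # a connective-tissue-like mask
```

A shallow tree like this yields the kind of simple, human-readable threshold rules the authors report.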

  16. Robust Semi-Supervised Manifold Learning Algorithm for Classification

    Directory of Open Access Journals (Sweden)

    Mingxia Chen

    2018-01-01

    Full Text Available In recent years, manifold learning methods have been widely used in data classification to tackle the curse-of-dimensionality problem, since they can discover the potential intrinsic low-dimensional structures of high-dimensional data. Given partially labeled data, semi-supervised manifold learning algorithms have been proposed to predict the labels of the unlabeled points, taking label information into account. However, these semi-supervised manifold learning algorithms are not robust against noisy points, especially when the labeled data contain noise. In this paper, we propose a framework for robust semi-supervised manifold learning (RSSML) to address this problem. The noise levels of the labeled points are first predicted, and a regularization term is then constructed to reduce the impact of labeled points containing noise. A new robust semi-supervised optimization model is proposed by adding the regularization term to the traditional semi-supervised optimization model. Numerical experiments are given to show the improvement and efficiency of RSSML on noisy data sets.

  17. Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds

    Science.gov (United States)

    Roynard, X.; Deschaud, J.-E.; Goulette, F.

    2016-06-01

    Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify the changes in car locations. In this paper, we propose a method that performs fast and robust segmentation and classification of urban point clouds, which can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The interest of working on images is that processing is much faster, proven, and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region growing using an octree for the segmentation, and on specific descriptors with Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art and that it gives more robust results in complex 3D cases.

  18. Automated Tissue Classification Framework for Reproducible Chronic Wound Assessment

    Directory of Open Access Journals (Sweden)

    Rashmi Mukherjee

    2014-01-01

    Full Text Available The aim of this paper was to develop a computer-assisted tissue classification (granulation, necrotic, and slough) scheme for chronic wound (CW) evaluation using medical image processing and statistical machine learning techniques. The red-green-blue (RGB) wound images grabbed by a normal digital camera were first transformed into the HSI (hue, saturation, and intensity) color space, and the “S” component of the HSI color channels was subsequently selected as it provided higher contrast. Wound areas from 6 different types of CW were segmented from whole images using fuzzy-divergence-based thresholding by minimizing edge ambiguity. A set of color and textural features describing granulation, necrotic, and slough tissues in the segmented wound area were extracted using various mathematical techniques. Finally, statistical learning algorithms, namely Bayesian classification and the support vector machine (SVM), were trained and tested for wound tissue classification in different CW images. The performance of the wound area segmentation protocol was further validated against ground truth images labeled by clinical experts. It was observed that an SVM with a 3rd-order polynomial kernel provided the highest accuracies, that is, 86.94%, 90.47%, and 75.53% for classifying granulation, slough, and necrotic tissues, respectively. The proposed automated tissue classification technique achieved the highest overall accuracy, that is, 87.61%, with the highest kappa statistic value (0.793).
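
    The first step, converting RGB to HSI and keeping the saturation channel, uses a standard formula and can be sketched directly (the fuzzy-divergence thresholding and feature extraction stages are not reproduced here):

```python
# Sketch: extract the HSI saturation channel from an RGB image,
# S = 1 - 3*min(R,G,B)/(R+G+B). A saturated color gives S near 1,
# a grey pixel gives S near 0.
import numpy as np

def saturation(rgb):
    """S channel of HSI for an (..., 3) RGB array."""
    rgb = rgb.astype(float)
    return 1.0 - 3.0 * rgb.min(axis=-1) / (rgb.sum(axis=-1) + 1e-9)

pix = np.array([[[200, 50, 50],      # saturated red
                 [120, 120, 120]]])  # neutral grey
s = saturation(pix)
print(np.round(s, 2))
```

The red pixel yields S = 0.5 while the grey pixel yields S near 0, which is why the S channel gives the higher wound-versus-skin contrast the paper exploits.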

  19. FAST AND ROBUST SEGMENTATION AND CLASSIFICATION FOR CHANGE DETECTION IN URBAN POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    X. Roynard

    2016-06-01

    Full Text Available Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify the changes in car locations. In this paper, we propose a method that performs fast and robust segmentation and classification of urban point clouds, which can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The interest of working on images is that processing is much faster, proven, and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region growing using an octree for the segmentation, and on specific descriptors with Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art and that it gives more robust results in complex 3D cases.

  20. Towards precise classification of cancers based on robust gene functional expression profiles

    Directory of Open Access Journals (Sweden)

    Zhu Jing

    2005-03-01

    Full Text Available Abstract Background Development of robust and efficient methods for analyzing and interpreting high-dimension gene expression profiles continues to be a focus in computational biology. The accumulated experimental evidence supports the assumption that genes express and perform their functions in a modular fashion in cells. Therefore, there is an open space for development of timely and relevant computational algorithms that use robust functional expression profiles towards precise classification of complex human diseases at the modular level. Results Inspired by the insight that genes act as a module to carry out a highly integrated cellular function, we define a low-dimension functional expression profile for data reduction. After annotating each individual gene to functional categories defined in a proper gene function classification system, such as the Gene Ontology applied in this study, we identify those functional categories enriched with differentially expressed genes. For each functional category, or functional module, we compute a summary measure (s) of the raw expression values of the annotated genes to capture the overall activity level of the module. In this way, we can treat the gene expressions within a functional module as an integrative data point to replace the multiple values of individual genes. We compare the classification performance of decision trees based on functional expression profiles with that based on conventional gene expression profiles using four publicly available datasets; the comparison indicates that precise classification of tumour types and improved interpretation can be achieved with the reduced functional expression profiles. Conclusion This modular approach is demonstrated to be a powerful alternative for analyzing high-dimension microarray data and is robust to the high measurement noise and intrinsic biological variance inherent in such data.
Furthermore, efficient integration with current biological knowledge
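
    The central data-reduction step, replacing individual gene values with one summary measure s per functional module, can be sketched as follows. The module and gene names below are hypothetical, and the mean is used as the summary statistic purely for illustration:

```python
# Sketch: collapse gene-level expression into per-module summary values,
# so each functional module becomes a single feature for classification.
import numpy as np

expr = {                                        # toy expression values per gene
    "TP53": 2.1, "MDM2": 1.8, "CDKN1A": 2.4,    # annotated to "cell_cycle"
    "VEGFA": 0.3, "KDR": 0.5,                   # annotated to "angiogenesis"
}
modules = {                                     # hypothetical GO-style modules
    "cell_cycle":   ["TP53", "MDM2", "CDKN1A"],
    "angiogenesis": ["VEGFA", "KDR"],
}

# Summary measure s: mean raw expression of the genes annotated to a module.
profile = {m: float(np.mean([expr[g] for g in genes]))
           for m, genes in modules.items()}
print(profile)
```

The resulting module-level profile (here 2 features instead of 5 genes) is what the decision trees in the paper are trained on.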

  1. Classification of coronary artery tissues using optical coherence tomography imaging in Kawasaki disease

    Science.gov (United States)

    Abdolmanafi, Atefeh; Prasad, Arpan Suravi; Duong, Luc; Dahdah, Nagib

    2016-03-01

    Intravascular imaging modalities such as Optical Coherence Tomography (OCT) nowadays allow improving the diagnosis, treatment, follow-up, and even prevention of coronary artery disease in adults. OCT has recently been used in children following Kawasaki disease (KD), the most prevalent acquired coronary artery disease during childhood, with devastating complications. The assessment of coronary artery layers with OCT and the early detection of coronary sequelae secondary to KD are promising tools for preventing myocardial infarction in this population. More importantly, OCT is promising for tissue quantification of the inner vessel wall, including neointimal luminal myofibroblast proliferation, calcification, and fibrous scar deposits. The goal of this study is to classify the coronary artery layers in OCT imaging obtained from a series of KD patients. Our approach develops a robust Random Forest classifier, built on the idea of randomly selecting a subset of features at each node, based on second- and higher-order statistical texture analysis, which estimates the grey-level spatial distribution of images by specifying the local features of each pixel and extracting statistics from their distribution. The average classification accuracies for the intima and media are 76.36% and 73.72%, respectively. A Random Forest classifier with texture analysis shows promise for the classification of coronary artery tissue.
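
    The second-order texture statistics mentioned above come from grey-level co-occurrence matrices (GLCMs). A minimal numpy version for a single pixel offset, on a toy 4-level image rather than the OCT data:

```python
# Sketch: grey-level co-occurrence matrix (GLCM) for one offset, plus the
# "contrast" statistic derived from it; such per-pixel texture statistics
# are the kind of feature that feeds a Random Forest classifier.
import numpy as np

def glcm(img, levels=4, dr=0, dc=1):
    """Normalized co-occurrence counts for pixel pairs at offset (dr, dc)."""
    m = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[img[r, c], img[r + dr, c + dc]] += 1
    return m / m.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img)
contrast = sum(P[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
print(round(contrast, 3))   # → 0.333
```

Higher-order variants and multiple offsets extend this idea; each tissue layer then gets a vector of such statistics as its feature representation.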

  2. Implementation of several mathematical algorithms to breast tissue density classification

    International Nuclear Information System (INIS)

    Quintana, C.; Redondo, M.; Tirao, G.

    2014-01-01

    The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, as dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. This paper presents the implementation and performance of different mathematical algorithms designed to standardize the categorization of mammographic images according to the American College of Radiology classifications. These mathematical techniques are based on intrinsic property calculations and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross correlation and index Q) as categorization parameters. The algorithms were evaluated on 100 cases from the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina—Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The obtained breast classifications were compared with expert medical diagnoses, showing good performance. The implemented algorithms revealed a high potential to classify breasts into tissue density categories. - Highlights: • Breast density classification can be obtained by suitable mathematical algorithms. • Mathematical processing helps radiologists obtain the BI-RADS classification. • The entropy and joint entropy show high performance for density classification
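
    Two of the categorization parameters named above, joint entropy and mutual information against an ideal homogeneous reference, can be sketched with numpy histograms (toy images and the natural logarithm; not the authors' implementation):

```python
# Sketch: joint entropy H(A,B) and mutual information I(A;B) estimated
# from a 2D histogram of two images; a perfectly homogeneous reference
# carries no information, so its MI with any image is ~0.
import numpy as np

def joint_entropy_mi(a, b, bins=8):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    nz = p > 0
    h_joint = -np.sum(p[nz] * np.log(p[nz]))
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    h_a = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
    h_b = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
    return h_joint, h_a + h_b - h_joint

rng = np.random.default_rng(5)
patch = rng.normal(100, 20, (32, 32))        # toy mammogram patch
homogeneous = np.full((32, 32), 100.0)       # ideal homogeneous reference
h, mi = joint_entropy_mi(patch, homogeneous)
print(h, mi)   # MI against a constant image is ~0
```

For real mammograms, how far these measures depart from the homogeneous baseline is what ranks a breast into a density category.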

  3. Robust volume assessment of brain tissues for 3-dimensional fourier transformation MRI via a novel multispectral technique.

    Directory of Open Access Journals (Sweden)

    Jyh-Wen Chai

    Full Text Available A new TRIO algorithm method integrating three different algorithms is proposed to perform brain MRI segmentation in the native coordinate space, without the need for transformation to a standard coordinate space or for probability maps for segmentation. The method is a simple voxel-based algorithm, derived from multispectral remote sensing techniques, and requires only minimal operator input to depict GM, WM, and CSF tissue clusters to complete the classification of 3D high-resolution multislice-multispectral MRI data. Results showed very high accuracy and reproducibility in the classification of GM, WM, and CSF in multislice-multispectral synthetic MRI data. The similarity indexes, expressing the overlap between classification results and the ground truth, were 0.951, 0.962, and 0.956 for GM, WM, and CSF classifications in image data with a 3% noise level and 0% intensity non-uniformity. In particular, the method allows classification of CSF with accuracy, sensitivity and specificity of 0.994, 0.961 and 0.996 in image data with a 3% noise level and 0% intensity non-uniformity, a task that has seldom been performed well in previous studies. As for clinical MRI data, the quantitative brain tissue volumes aligned closely with the brain morphometrics in three different study groups of young adults, elderly volunteers, and dementia patients. The results also showed very low intra- and inter-operator variability in measurements of the absolute volumes and volume fractions of cerebral GM, WM, and CSF in the three study groups. The mean coefficients of variation of GM, WM, and CSF volume measurements were in the range of 0.03% to 0.30% for intra-operator measurements and 0.06% to 0.45% for inter-operator measurements. In conclusion, the TRIO algorithm exhibits a remarkable ability for robust classification of multislice-multispectral brain MR images, which would be potentially applicable for clinical brain volumetric analysis and explicitly promising

  4. Improving Cross-Day EEG-Based Emotion Classification Using Robust Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Yuan-Pin Lin

    2017-07-01

    Full Text Available Constructing a robust emotion-aware analytical framework using non-invasively recorded electroencephalogram (EEG) signals has attracted intensive attention. However, in moving a laboratory-oriented proof-of-concept study toward real-world applications, researchers face an ecological challenge: the EEG patterns recorded in real life change substantially across days (i.e., day-to-day variability), arguably making a pre-defined predictive model vulnerable to EEG signals from a separate day. The present work addressed how to mitigate the inter-day EEG variability of emotional responses in order to facilitate cross-day emotion classification, an issue that has received little attention in the literature. This study proposed a robust principal component analysis (RPCA)-based signal filtering strategy and validated its neurophysiological validity and machine-learning practicability on a binary emotion classification task (happiness vs. sadness), using a five-day EEG dataset of 12 subjects who participated in a music-listening task. The empirical results showed that the RPCA-decomposed sparse signals (RPCA-S) enabled filtering out the background EEG activity that contributed most to the inter-day variability, and predominantly captured the EEG oscillations of emotional responses that behaved relatively consistently across days. Applying a realistic add-day-in classification validation scheme, the RPCA-S progressively exploited more informative features (from 12.67 ± 5.99 to 20.83 ± 7.18) and improved the cross-day binary emotion-classification accuracy (from 58.31 ± 12.33% to 64.03 ± 8.40%) as the classifier was trained on EEG signals from one to four recording days and tested against one unseen subsequent day. The original EEG features (prior to RPCA processing) neither achieved cross-day classification (accuracy was around chance level) nor replicated the encouraging improvement, due to the inter-day EEG variability. This result
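    The RPCA decomposition underlying this filtering strategy splits a data matrix into a low-rank part (shared background) plus a sparse part (transient activity). A minimal numpy sketch of principal component pursuit via the inexact augmented-Lagrange-multiplier scheme follows; the synthetic "EEG" matrix, the parameter heuristics, and the choice of this particular RPCA algorithm are assumptions, not details taken from the paper.

```python
import numpy as np

def shrink(X, tau):
    """Soft thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(D, lam=None, tol=1e-7, max_iter=500):
    """Split D into low-rank L plus sparse S (principal component pursuit)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D)
    mu = 1.25 / np.linalg.norm(D, 2)        # common inexact-ALM initialization
    Y = np.zeros_like(D)                     # Lagrange multiplier estimate
    S = np.zeros_like(D)
    for _ in range(max_iter):
        L = svd_shrink(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y = Y + mu * residual
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(residual) < tol * norm_D:
            break
    return L, S

# Synthetic stand-in for a channels x time EEG matrix:
rng = np.random.default_rng(0)
background = np.outer(rng.standard_normal(40), rng.standard_normal(60))  # rank-1 drift
bursts = np.zeros((40, 60))
bursts[rng.integers(0, 40, 20), rng.integers(0, 60, 20)] = 5.0           # sparse events
L, S = rpca(background + bursts)
```

    In the paper's setting, the sparse component S (the RPCA-S signals) is what carries the day-consistent emotional oscillations, while the low-rank part absorbs the background activity.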

  5. Neutral face classification using personalized appearance models for fast and robust emotion detection.

    Science.gov (United States)

    Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha

    2015-09-01

    Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, and so on, in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in typical applications such as video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby excluding those frames from emotion classification, would save computational power. In this paper, we propose a light-weight neutral-versus-emotion classification engine, which acts as a pre-processor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at key emotion (KE) points using a statistical texture model, constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on a statistical texture model. Robustness to dynamic shift of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.

  6. Interactive classification and content-based retrieval of tissue images

    Science.gov (United States)

    Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof

    2002-11-01

    We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues at the pixel, region and image levels. Pixel-level features are generated using unsupervised clustering of color and texture values. Region-level features include shape information and statistics of pixel-level feature values. Image-level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using the spatial relationships of these regions, represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for the classification and retrieval of tissue images.

  7. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    Directory of Open Access Journals (Sweden)

    Rui Sun

    2016-08-01

    Full Text Available Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures, and a max-pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, with a Gaussian weight function as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.

  8. Artificial neural net system for interactive tissue classification with MR imaging and image segmentation

    International Nuclear Information System (INIS)

    Clarke, L.P.; Silbiger, M.; Naylor, C.; Brown, K.

    1990-01-01

    This paper reports on the development of interactive methods for MR tissue classification that permit mathematically rigorous methods for three-dimensional image segmentation and automatic organ/tumor contouring, as required for surgical and RTP planning. The authors investigate a number of image-intensity based tissue-classification methods that make no implicit assumptions on the MR parameters and hence are not limited by the image data set. Similarly, we have trained artificial neural net (ANN) systems for both supervised and unsupervised tissue classification

  9. Data-driven classification of ventilated lung tissues using electrical impedance tomography

    International Nuclear Information System (INIS)

    Gómez-Laberge, Camille; Hogan, Matthew J; Elke, Gunnar; Weiler, Norbert; Frerichs, Inéz; Adler, Andy

    2011-01-01

    Current methods for identifying ventilated lung regions utilizing electrical impedance tomography images rely on dividing the image into arbitrary regions of interest (ROI), manually delineating ROI, or forming ROI with pixels whose signal properties surpass an arbitrary threshold. In this paper, we propose a novel application of a data-driven classification method to identify ventilated lung ROI, based on forming k clusters from pixels with correlated signals. A standard first-order model for lung mechanics is then applied to determine which ROI correspond to ventilated lung tissue. We applied the method in an experimental study of 16 mechanically ventilated swine in the supine position, which underwent changes in positive end-expiratory pressure (PEEP) and fraction of inspired oxygen (FiO2). In each stage of the experimental protocol, the method performed best with k = 4 and consistently identified 3 lung tissue ROI and 1 boundary tissue ROI in 15 of the 16 subjects. When testing for changes from baseline in lung position, tidal volume, and respiratory system compliance, we found that PEEP displaced the ventilated lung region dorsally by 2 cm, decreased tidal volume by 1.3%, and increased the respiratory system compliance time constant by 0.3 s. FiO2 decreased tidal volume by 0.7%. All effects were tested at p < 0.05 with n = 16. These findings suggest that the proposed ROI detection method is robust and sensitive to ventilation dynamics in the experimental setting
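    The data-driven ROI step (forming k = 4 clusters from pixels with correlated signals) can be sketched as follows. Normalizing each pixel's waveform to zero mean and unit norm makes Euclidean k-means act on signal correlation, since the squared distance between unit vectors is 2 minus twice their correlation. The synthetic waveforms, frequencies, and noise level below are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
f_vent, f_card = 0.25, 1.2        # assumed ventilation and cardiac frequencies (Hz)
templates = [np.sin(2 * np.pi * f_vent * t),        # lung ROI, in phase
             np.sin(2 * np.pi * f_vent * t + 2.1),  # lung ROI, phase-shifted
             np.sin(2 * np.pi * f_vent * t + 4.2),  # lung ROI, phase-shifted
             np.sin(2 * np.pi * f_card * t)]        # boundary/cardiac-dominated ROI
signals = np.vstack([tpl + 0.1 * rng.standard_normal(t.size)
                     for tpl in templates for _ in range(50)])
# Zero-mean, unit-norm rows: Euclidean k-means then clusters by correlation.
Xn = signals - signals.mean(axis=1, keepdims=True)
Xn /= np.linalg.norm(Xn, axis=1, keepdims=True)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Xn)
```

    In the paper's pipeline a first-order lung-mechanics model would then be fitted to each cluster's mean waveform to decide which clusters are ventilated lung tissue.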

  10. Joint learning and weighting of visual vocabulary for bag-of-feature based tissue classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-12-01

    Automated classification of tissue types in Regions of Interest (ROI) of medical images has been an important application in Computer-Aided Diagnosis (CAD). Recently, bag-of-feature methods, which treat each ROI as a set of local features, have shown their power in this field. Two important issues of the bag-of-feature strategy for tissue classification are investigated in this paper: visual vocabulary learning and weighting, which are usually considered independently in traditional methods, neglecting the inner relationship between the visual words and their weights. To overcome this problem, we develop a novel algorithm, Joint-ViVo, which learns the vocabulary and visual word weights jointly. A unified objective function based on large margin is defined for learning both the visual vocabulary and the visual word weights, and optimized alternately in an iterative algorithm. We test our algorithm on three tissue classification tasks: classifying breast tissue density in mammograms, classifying lung tissue in High-Resolution Computed Tomography (HRCT) images, and identifying brain tissue type in Magnetic Resonance Imaging (MRI). The results show that Joint-ViVo outperforms state-of-the-art methods on tissue classification problems. © 2013 Elsevier Ltd.

  11. Breast tissue classification using x-ray scattering measurements and multivariate data analysis

    Science.gov (United States)

    Ryan, Elaine A.; Farquharson, Michael J.

    2007-11-01

    This study utilized two radiation scatter interactions in order to differentiate malignant from non-malignant breast tissue. These two interactions were Compton scatter, used to measure the electron density of the tissues, and coherent scatter, used to obtain a measure of structure. Measurements of these parameters were made using a laboratory experimental set-up comprising an x-ray tube and HPGe detector. The breast tissue samples investigated comprised five different tissue classifications: adipose, malignancy, fibroadenoma, normal fibrous tissue and tissue that had undergone fibrocystic change. The coherent scatter spectra were analysed using a peak-fitting routine, and a technique involving multivariate analysis was used to combine the peak-fitted scatter profile spectra and the electron density values into a tissue classification model. The number of variables used in the model was refined by finding the sensitivity and specificity of each model and concentrating on differentiating between two tissues at a time. The best model that was formulated had a sensitivity of 54% and a specificity of 100%.

  12. Breast tissue classification using x-ray scattering measurements and multivariate data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, Elaine A; Farquharson, Michael J [School of Allied Health Sciences, City University, Charterhouse Square, London EC1M 6PA (United Kingdom)

    2007-11-21

    This study utilized two radiation scatter interactions in order to differentiate malignant from non-malignant breast tissue. These two interactions were Compton scatter, used to measure the electron density of the tissues, and coherent scatter, used to obtain a measure of structure. Measurements of these parameters were made using a laboratory experimental set-up comprising an x-ray tube and HPGe detector. The breast tissue samples investigated comprised five different tissue classifications: adipose, malignancy, fibroadenoma, normal fibrous tissue and tissue that had undergone fibrocystic change. The coherent scatter spectra were analysed using a peak-fitting routine, and a technique involving multivariate analysis was used to combine the peak-fitted scatter profile spectra and the electron density values into a tissue classification model. The number of variables used in the model was refined by finding the sensitivity and specificity of each model and concentrating on differentiating between two tissues at a time. The best model that was formulated had a sensitivity of 54% and a specificity of 100%.

  13. Tissue classification and segmentation of pressure injuries using convolutional neural networks.

    Science.gov (United States)

    Zahia, Sofia; Sierra-Sosa, Daniel; Garcia-Zapirain, Begonya; Elmaghraby, Adel

    2018-06-01

    This paper presents a new approach for automatic tissue classification in pressure injuries. These wounds are localized skin damage requiring frequent diagnosis and treatment; therefore, reliable and accurate systems for segmentation and tissue type identification are needed in order to achieve better treatment results. Our proposed system is based on a Convolutional Neural Network (CNN) devoted to performing optimized segmentation of the different tissue types present in pressure injuries (granulation, slough, and necrotic tissue). A preprocessing step removes flash-light reflections and creates a set of 5x5 sub-images, which are used as input to the CNN. The network output classifies every sub-image of the validation set into one of the three classes studied. The metrics used to evaluate our approach show an overall average classification accuracy of 92.01%, an average total weighted Dice Similarity Coefficient of 91.38%, and an average precision per class of 97.31% for granulation tissue, 96.59% for necrotic tissue, and 77.90% for slough tissue. Our system has been proven to make recognition of complicated structures in biomedical images feasible. Copyright © 2018 Elsevier B.V. All rights reserved.
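    The tiling of the wound photograph into 5x5 sub-images for the CNN can be sketched as a small preprocessing function. Non-overlapping tiles, the image size, and the absence of the flash-removal step are assumptions of this sketch, not details from the paper.

```python
import numpy as np

def extract_patches(image, size=5, stride=5):
    """Tile an H x W x C image into size x size sub-images (non-overlapping here)."""
    h, w = image.shape[:2]
    patches = [image[r:r + size, c:c + size]
               for r in range(0, h - size + 1, stride)
               for c in range(0, w - size + 1, stride)]
    return np.stack(patches)

# Hypothetical RGB wound photograph, 100 x 100 pixels.
wound = np.random.default_rng(0).random((100, 100, 3))
patches = extract_patches(wound)
print(patches.shape)  # (400, 5, 5, 3)
```

    Each of the 400 patches would then be fed to the CNN, which assigns it to granulation, slough, or necrotic tissue.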

  14. MRI Brain Images Healthy and Pathological Tissues Classification with the Aid of Improved Particle Swarm Optimization and Neural Network

    Science.gov (United States)

    Sheejakumari, V.; Sankara Gomathi, B.

    2015-01-01

    The advantages of magnetic resonance imaging (MRI) over other diagnostic imaging modalities are its higher spatial resolution and its better discrimination of soft tissue. In a previous tissue classification method, healthy and pathological tissues were classified from MRI brain images using HGANN, but that method lacks sensitivity and accuracy, and its performance is inadequate in terms of these two parameters. To avoid these drawbacks, a new classification method is proposed in this paper, using an improved particle swarm optimization (IPSO) technique to classify healthy and pathological tissues from the given MRI images. Our proposed classification method includes the same four stages, namely, tissue segmentation, feature extraction, heuristic feature selection, and tissue classification. The method is implemented and the results are analyzed in terms of various statistical performance measures. The results show the effectiveness of the proposed classification method in classifying the tissues and the achieved improvement in sensitivity and accuracy measures. Furthermore, the performance of the proposed technique is evaluated by comparing it with other segmentation methods. PMID:25977706

  15. Implementation of several mathematical algorithms to breast tissue density classification

    Science.gov (United States)

    Quintana, C.; Redondo, M.; Tirao, G.

    2014-02-01

    The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, where dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. This paper presents the implementation and the performance of different mathematical algorithms designed to standardize the categorization of mammographic images according to the American College of Radiology classifications. These mathematical techniques are based on intrinsic property calculations and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross correlation and index Q) as categorization parameters. The evaluation of the algorithms was performed on 100 cases of the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina—Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The obtained breast classifications were compared with the expert medical diagnostics, showing good performance. The implemented algorithms revealed a high potential for classifying breasts into tissue density categories.

  16. Towards an efficient and robust foot classification from pedobarographic images.

    Science.gov (United States)

    Oliveira, Francisco P M; Sousa, Andreia; Santos, Rubim; Tavares, João Manuel R S

    2012-01-01

    This paper presents a new computational framework for automatic foot classification from digital plantar pressure images. It classifies the foot as left or right and simultaneously calculates two well-known footprint indices: the Cavanagh's arch index (AI) and the modified AI. The accuracy of the framework was evaluated using a set of plantar pressure images from two common pedobarographic devices. The results were outstanding, as all feet under analysis were correctly classified as left or right and no significant differences were observed between the footprint indices calculated using the computational solution and the traditional manual method. The robustness of the proposed framework to arbitrary foot orientations and to the acquisition device was also tested and confirmed.
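    Of the two footprint indices computed above, Cavanagh's arch index has a simple definition: the contact area of the middle third of the toeless footprint divided by the total contact area. A numpy sketch on a binary contact mask follows; the axis convention (toes at the top row) and the toy footprint are assumptions of this illustration.

```python
import numpy as np

def arch_index(footprint):
    """Cavanagh's arch index: midfoot contact area over total contact area.

    `footprint` is a binary mask of the toeless footprint, toes toward row 0
    (an axis convention assumed for this sketch)."""
    rows = np.flatnonzero(footprint.any(axis=1))
    length = rows[-1] - rows[0] + 1
    third = length / 3.0
    mid_start = rows[0] + int(round(third))
    mid_end = rows[0] + int(round(2 * third))
    midfoot = footprint[mid_start:mid_end].sum()
    return midfoot / footprint.sum()

# Toy footprint: wide forefoot, narrow midfoot, medium heel.
fp = np.zeros((30, 12), dtype=bool)
fp[0:10, 1:11] = True     # forefoot, 10 columns wide
fp[10:20, 4:8] = True     # midfoot, 4 columns wide
fp[20:30, 2:10] = True    # heel, 8 columns wide
print(round(arch_index(fp), 3))  # 0.182
```

    Against the commonly cited Cavanagh and Rodgers bands (roughly: below 0.21 high-arched, 0.21 to 0.26 normal, above 0.26 flat), this toy footprint would read as high-arched.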

  17. Combining extreme learning machines using support vector machines for breast tissue classification.

    Science.gov (United States)

    Daliri, Mohammad Reza

    2015-01-01

    In this paper, we present a new approach for breast tissue classification using features derived from electrical impedance spectroscopy. The method is composed of a feature extraction step, a feature selection phase and a classification step. The feature extraction phase derives the features from the electrical impedance spectra. The extracted features consist of the impedivity at zero frequency (I0), the phase angle at 500 kHz, the high-frequency slope of the phase angle, the impedance distance between spectral ends, the area under the spectrum, the normalised area, the maximum of the spectrum, the distance between the impedivity at I0 and the real part of the maximum frequency point, and the length of the spectral curve. The system uses an information-theoretic criterion as the strategy for feature selection and combines extreme learning machines (ELMs) in the classification phase. The outputs of several ELMs are combined using a support vector machine classifier, and the result of this classification is reported as a measure of the performance of the system. The results indicate that the proposed system achieves high accuracy in the classification of breast tissues using electrical impedance spectroscopy.
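    A single extreme learning machine, the building block being combined here, trains in closed form: fix random hidden-layer weights, then solve the output layer by least squares. A minimal numpy sketch with invented two-class data follows; the hidden-layer size, the tanh activation, and the features are assumptions, not the paper's configuration.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer, least-squares output."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        X = np.asarray(X, float)
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)       # random nonlinear features
        T = np.eye(int(y.max()) + 1)[y]        # one-hot targets
        self.beta = np.linalg.pinv(H) @ T      # output weights in closed form
        return self

    def predict(self, X):
        H = np.tanh(np.asarray(X, float) @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# Hypothetical 2-class "tissue" feature vectors (e.g. impedance-derived descriptors).
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((80, 4)) + 2.0,
               rng.standard_normal((80, 4)) - 2.0])
y = np.array([0] * 80 + [1] * 80)
elm = ELM().fit(X, y)
```

    In the paper's scheme, several such ELMs (for instance, with different random seeds) are trained and their outputs are fed to an SVM that makes the final decision.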

  18. Classification between normal and tumor tissues based on the pair-wise gene expression ratio

    International Nuclear Information System (INIS)

    Yap, YeeLeng; Zhang, XueWu; Ling, MT; Wang, XiangHong; Wong, YC; Danchin, Antoine

    2004-01-01

    Precise classification of cancer types is critically important for early cancer diagnosis and treatment. Numerous efforts have been made to use gene expression profiles to improve the precision of tumor classification. However, reliable cancer-related signals are generally lacking. Using recent datasets on colon and prostate cancer, a data transformation procedure from single gene expression to pair-wise gene expression ratios is proposed. Making use of the internal consistency of each expression profiling dataset, this transformation improves the signal-to-noise ratio of the dataset and uncovers new relevant cancer-related signals (features). The efficiency of using the transformed dataset to perform normal/tumor classification was investigated using feature partitioning, with informative features (gene annotation) as discriminating axes (single gene expression or pair-wise gene expression ratio). Classification results were compared to the original datasets for up to 10-feature model classifiers. 82 and 262 genes with high correlation to tissue phenotype were selected from the colon and prostate datasets, respectively. Remarkably, transformation of the highly noisy expression data successfully lowered the coefficient of variation (CV) for the within-class samples and improved the correlation with tissue phenotypes. The transformed dataset exhibited lower CV than single gene expression: in the colon cancer set, the minimum CV decreased from 45.3% to 16.5%; in prostate cancer, comparable CV was achieved with and without transformation. This improvement in CV, coupled with the improved correlation between the pair-wise gene expression ratio and tissue phenotypes, yielded higher classification efficiency, especially with the colon dataset, from 87.1% to 93.5%. Over 90% of the top ten discriminating axes in both datasets showed significant improvement after data transformation. The high classification efficiency achieved suggested
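    The core transformation, replacing single gene expression values with pair-wise expression ratios, and the within-class CV comparison can be sketched with invented data. The shared per-sample scaling factor below is a hypothetical stand-in for the kind of technical variation the ratio cancels; none of the numbers come from the paper.

```python
import numpy as np

def pairwise_ratios(expr, pairs):
    """Transform a genes x samples expression matrix into pair-wise ratios."""
    return np.array([expr[i] / expr[j] for i, j in pairs])

def within_class_cv(values):
    """Coefficient of variation (%) across the samples of one class."""
    return 100.0 * values.std() / values.mean()

rng = np.random.default_rng(0)
# Two co-regulated genes share a noisy per-sample scale factor (array effect);
# taking their ratio cancels that factor.
scale = rng.lognormal(0.0, 0.4, 30)                 # per-sample technical variation
gene_a = 5.0 * scale * rng.lognormal(0, 0.05, 30)
gene_b = 2.0 * scale * rng.lognormal(0, 0.05, 30)
ratio = pairwise_ratios(np.vstack([gene_a, gene_b]), [(0, 1)])[0]
print(within_class_cv(gene_a) > within_class_cv(ratio))  # True: ratio has lower CV
```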

  19. Classification of breast tissue using a laboratory system for small-angle x-ray scattering (SAXS)

    International Nuclear Information System (INIS)

    Sidhu, S; Siu, K K W; Falzon, G; Hart, S A; Fox, J G; Lewis, R A

    2011-01-01

    Structural changes in breast tissue at the nanometre scale have been shown to differentiate between tissue types using synchrotron SAXS techniques. Classification of breast tissues using information acquired from a laboratory SAXS camera source could possibly provide a means of adopting SAXS as a viable diagnostic procedure. Tissue samples were obtained from surgical waste from 66 patients and structural components of the tissues were examined between q = 0.25 and 2.3 nm⁻¹. Principal component analysis showed that the amplitude of the fifth-order axial Bragg peak, the magnitude of the integrated intensity and the full-width at half-maximum of the fat peak were significantly different between tissue types. A discriminant analysis showed that excellent classification can be achieved; however, only 30% of the tissue samples provided the 16 variables required for classification. This suggests that the presence of disease is represented by a combination of factors, rather than one specific trait. A closer examination of the amorphous scattering intensity showed not only a trend of increased scattering intensity with disease severity, but also a corresponding decrease in the size of the scatterers contributing to this intensity.

  20. The method of diagnosis and classification of the gingival line defects of the teeth hard tissues

    Directory of Open Access Journals (Sweden)

    Olena Bulbuk

    2017-06-01

    Full Text Available In solving the problem of diagnosis and treatment of hard tissue defects, an important role belongs to the choice of treatment tactics for defects located at the gingival line of a tooth. This work studies the problems of diagnosis and classification of gingival line defects of the teeth hard tissues, which should contribute to objective, differentiated diagnostic and therapeutic approaches to the dental treatment of the various clinical variants of these defects. The objective of the study is to develop an anatomical-functional classification for the differentiated assessment of hard tissue defects in the gingival part, as the basis for applying differentiated diagnostic and therapeutic approaches to the dental treatment of hard tissue defects located in the gingival part of any tooth. Materials and methods: 48 patients with hard tissue defects located in the gingival part of a tooth were examined. To assess the magnitude of gingival line destruction, a periodontal probe and X-ray examination were used. Results. As a result of the research, a classification of gingival line defects of the hard tissues was proposed using an exponent power, whose value is an integer number expressing, in millimeters, the distance from the epithelial attachment to the bottom of the defect cavity. Conclusions. The proposed classification fills an obvious gap in academic understanding of hard tissue defects located in the gingival part of a tooth, and offers the prospect of consensus on differentiated diagnostic and therapeutic approaches for different clinical variants of location. The classification builds a methodological "bridge of continuity" between therapeutic and prosthetic dentistry in the treatment of gingival line defects of dental hard tissues.

  1. A classification of the mechanisms producing pathological tissue changes.

    Science.gov (United States)

    Grippo, John O; Oh, Daniel S

    2013-05-01

    The objectives are to present a classification of mechanisms which can produce pathological changes in body tissues and fluids, as well as to clarify and define the term biocorrosion, which has had a singular use in engineering. Considering the emerging field of biomedical engineering, it is essential to use precise definitions in the lexicons of engineering, bioengineering and related sciences such as medicine, dentistry and veterinary medicine. The mechanisms of stress, friction and biocorrosion and their pathological effects on tissues are described. Biocorrosion refers to the chemical, biochemical and electrochemical changes by degradation or induced growth of living body tissues and fluids. Various agents which can affect living tissues causing biocorrosion are enumerated which support the necessity and justify the use of this encompassing and more precise definition of biocorrosion. A distinction is made between the mechanisms of corrosion and biocorrosion.

  2. A robust probabilistic collaborative representation based classification for multimodal biometrics

    Science.gov (United States)

    Zhang, Jing; Liu, Huanxi; Ding, Derui; Xiao, Jianli

    2018-04-01

    Most traditional biometric recognition systems perform recognition with a single biometric indicator. Such systems suffer from noisy data, inter-class variations, unacceptable error rates, forged identities, and so on. Due to these inherent problems, attempts to enhance the performance of unimodal biometric systems based on a single feature have had limited success. Thus, multimodal biometrics has been investigated to reduce some of these defects. This paper proposes a new multimodal biometric recognition approach that fuses faces and fingerprints. For more discriminative features, the proposed method extracts block local binary pattern features for all modalities and then combines them in a single framework. For better classification, it employs a robust probabilistic collaborative representation based classifier to recognize individuals. Experimental results indicate that the proposed method improves recognition accuracy compared to unimodal biometrics.

  3. Mechanically robust cryogels with injectability and bioprinting supportability for adipose tissue engineering.

    Science.gov (United States)

    Qi, Dianjun; Wu, Shaohua; Kuss, Mitchell A; Shi, Wen; Chung, Soonkyu; Deegan, Paul T; Kamenskiy, Alexey; He, Yini; Duan, Bin

    2018-05-26

    Bioengineered adipose tissues have gained increased interest as a promising alternative to autologous tissue flaps and synthetic adipose fillers for soft tissue augmentation and defect reconstruction in the clinic. Although many scaffolding materials and biofabrication methods have been investigated for adipose tissue engineering in recent decades, it remains challenging to recapitulate the appropriate adipose tissue microenvironment, maintain volume stability, and induce vascularization to achieve long-term function and integration. In the present research, we fabricated cryogels consisting of methacrylated gelatin, methacrylated hyaluronic acid, and 4-arm poly(ethylene glycol) acrylate (PEG-4A) by using cryopolymerization. The cryogels were repeatedly injectable and stretchable, and the addition of PEG-4A improved their robustness and mechanical properties. The cryogels supported human adipose progenitor cell (HWA) and adipose-derived mesenchymal stromal cell adhesion, proliferation, and adipogenic differentiation and maturation, regardless of the addition of PEG-4A. The HWA-laden cryogels facilitated the co-culture of human umbilical vein endothelial cells (HUVEC) and capillary-like network formation, which in turn also promoted adipogenesis. We further combined the cryogels with 3D bioprinting to generate handleable adipose constructs of clinically relevant size. 3D bioprinting enabled the deposition of multiple bioinks onto the cryogels. The bioprinted flap-like constructs had an integrated structure without delamination and supported vascularization. Adipose tissue engineering is promising for the reconstruction of soft tissue defects, but restoring and maintaining soft tissue volume and shape, and achieving vascularization and integration, remain challenging. In this study, we fabricated cryogels with mechanical robustness, injectability, and stretchability by using cryopolymerization. The cryogels promoted cell adhesion, proliferation, and adipogenic

  4. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the robust estimation of the tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One or two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with measures of uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of the tissue-to-plasma ratio from extremely sparsely sampled data.
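
    The AUC-ratio resampling described above can be sketched as follows. This is a minimal one-phase subject-resampling bootstrap for illustration, not the paper's 2-phase algorithm; the function `bootstrap_tp_ratio` and the concentration data are invented.

```python
import numpy as np

def auc(t, c):
    """Trapezoidal area under a concentration-time profile."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))

def bootstrap_tp_ratio(times, plasma, tissue, n_boot=2000, seed=0):
    """Tissue-to-plasma AUC ratio with a bootstrap percentile interval.

    One paired (plasma, tissue) observation per subject, each at its own
    time point; subjects are resampled with replacement.
    """
    rng = np.random.default_rng(seed)
    order = np.argsort(times)
    t, p, s = (np.asarray(a, float)[order] for a in (times, plasma, tissue))
    n = len(t)
    ratios = np.empty(n_boot)
    for k in range(n_boot):
        while True:  # redraw degenerate resamples (a single repeated subject)
            idx = np.sort(rng.integers(0, n, size=n))
            if idx[0] != idx[-1]:
                break
        ratios[k] = auc(t[idx], s[idx]) / auc(t[idx], p[idx])
    point = auc(t, s) / auc(t, p)
    return point, np.percentile(ratios, [5, 95])
```

    Unlike the naïve data averaging approach, the resampling distribution attaches a measure of uncertainty to the ratio.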

  5. Using robust principal component analysis to alleviate day-to-day variability in EEG based emotion classification.

    Science.gov (United States)

    Ping-Keng Jao; Yuan-Pin Lin; Yi-Hsuan Yang; Tzyy-Ping Jung

    2015-08-01

    An emerging challenge for emotion classification using electroencephalography (EEG) is how to effectively alleviate day-to-day variability in raw data. This study employed robust principal component analysis (RPCA) to address the problem, with the hypothesis that background or emotion-irrelevant EEG perturbations cause much of the variability across days and submerge emotion-related EEG dynamics. The empirical results of this study validated our hypothesis and demonstrated RPCA's feasibility through the analysis of a five-day dataset of 12 subjects. RPCA allowed separating the sparse emotion-relevant EEG dynamics from the accompanying background perturbations across days. Subsequently, leveraging the RPCA-purified EEG trials from more days steadily improved the emotion-classification performance, which was not found in the case using the raw EEG features. Therefore, incorporating RPCA with existing emotion-aware machine-learning frameworks on a longitudinal dataset of each individual may shed light on the development of a robust affective brain-computer interface (ABCI) that can alleviate ecological inter-day variability.

  6. 75 FR 68972 - Medical Devices; General and Plastic Surgery Devices; Classification of Tissue Adhesive With...

    Science.gov (United States)

    2010-11-10

    .... FDA-2010-N-0512] Medical Devices; General and Plastic Surgery Devices; Classification of Tissue... running to unintended areas, etc. B. Wound dehiscence C. Adverse tissue reaction and chemical burns D..., Clinical Studies, Labeling. Adverse tissue reaction and chemical Biocompatibility Animal burns. Testing...

  7. Visualization and tissue classification of human breast cancer images using ultrahigh-resolution OCT (Conference Presentation)

    Science.gov (United States)

    Yao, Xinwen; Gan, Yu; Chang, Ernest W.; Hibshoosh, Hanina; Feldman, Sheldon; Hendon, Christine P.

    2017-02-01

    We employed a home-built ultrahigh resolution (UHR) OCT system at 800nm to image human breast cancer samples ex vivo. The system has an axial resolution of 2.72µm and a lateral resolution of 5.52µm with an extended imaging range of 1.78mm. Over 900 UHR OCT volumes were generated on specimens from 23 breast cancer cases. With better spatial resolution, detailed structures in the breast tissue were better defined. Different types of breast cancer as well as healthy breast tissue can be well delineated from the UHR OCT images. To quantitatively evaluate the advantages of UHR OCT imaging of breast cancer, features derived from OCT intensity images were used as inputs to a machine learning model, the relevance vector machine. A trained machine learning model was employed to evaluate the performance of tissue classification based on UHR OCT images for differentiating tissue types in the breast samples, including adipose tissue, healthy stroma and cancerous region. For adipose tissue, grid-based local features were extracted from OCT intensity data, including standard deviation, entropy, and homogeneity. We showed that it was possible to enhance the classification performance on distinguishing fat tissue from non-fat tissue by using the UHR images when compared with the results based on OCT images from a commercial 1300 nm OCT system. For invasive ductal carcinoma (IDC) and normal stroma differentiation, the classification was based on frame-based features that portray signal penetration depth and tissue reflectivity. The confusion matrix indicated a sensitivity of 97.5% and a specificity of 77.8%.
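
    The grid-based local features mentioned above (standard deviation, entropy, homogeneity) can be computed roughly as below; the bin count, grid size, and the simple neighbour-difference homogeneity measure are assumptions for illustration, not the authors' exact definitions.

```python
import numpy as np

def grid_features(img, grid=8):
    """Per-cell standard deviation, histogram entropy, and a simple
    homogeneity measure over non-overlapping grid cells."""
    h, w = img.shape
    feats = []
    for i in range(0, h - grid + 1, grid):
        for j in range(0, w - grid + 1, grid):
            cell = img[i:i+grid, j:j+grid].astype(float)
            std = cell.std()
            # Shannon entropy of a 16-bin intensity histogram
            hist, _ = np.histogram(cell, bins=16, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            entropy = -np.sum(p * np.log2(p))
            # homogeneity: mean inverse absolute difference of horizontal neighbours
            d = np.abs(np.diff(cell, axis=1))
            homogeneity = np.mean(1.0 / (1.0 + d))
            feats.append((std, entropy, homogeneity))
    return np.array(feats)
```

    A perfectly uniform cell scores zero std, zero entropy, and homogeneity one; speckled or textured tissue scores higher entropy and lower homogeneity.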

  8. ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.

    Science.gov (United States)

    Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles

    2018-04-19

    Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminary rejected in the early image processing step are recognized by running a second segmentation step. We
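
    The first de-arraying step, locally adaptive thresholding, can be illustrated with a plain local-mean threshold (the paper thresholds an isotropic wavelet transform of the image, which is omitted here); the integral-image trick, window radius, and offset are generic choices, not ATMAD's.

```python
import numpy as np

def local_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image (edge padding)."""
    p = np.pad(np.asarray(img, float), r, mode="edge")
    ii = np.cumsum(np.cumsum(p, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))      # zero row/col for clean differences
    h, w = img.shape
    k = 2 * r + 1
    return (ii[k:k+h, k:k+w] - ii[:h, k:k+w]
            - ii[k:k+h, :w] + ii[:h, :w]) / k**2

def adaptive_threshold(img, r=7, offset=0.0):
    """Foreground mask: pixels brighter than their local mean plus an offset."""
    return np.asarray(img, float) > local_mean(img, r) + offset
```

    Because the threshold follows the local background, stain and illumination gradients across a large TMA slide do not bias the detection as a single global threshold would.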

  9. A strategy for tissue self-organization that is robust to cellular heterogeneity and plasticity.

    Science.gov (United States)

    Cerchiari, Alec E; Garbe, James C; Jee, Noel Y; Todhunter, Michael E; Broaders, Kyle E; Peehl, Donna M; Desai, Tejal A; LaBarge, Mark A; Thomson, Matthew; Gartner, Zev J

    2015-02-17

    Developing tissues contain motile populations of cells that can self-organize into spatially ordered tissues based on differences in their interfacial surface energies. However, it is unclear how self-organization by this mechanism remains robust when interfacial energies become heterogeneous in either time or space. The ducts and acini of the human mammary gland are prototypical heterogeneous and dynamic tissues comprising two concentrically arranged cell types. To investigate the consequences of cellular heterogeneity and plasticity on cell positioning in the mammary gland, we reconstituted its self-organization from aggregates of primary cells in vitro. We find that self-organization is dominated by the interfacial energy of the tissue-ECM boundary, rather than by differential homo- and heterotypic energies of cell-cell interaction. Surprisingly, interactions with the tissue-ECM boundary are binary, in that only one cell type interacts appreciably with the boundary. Using mathematical modeling and cell-type-specific knockdown of key regulators of cell-cell cohesion, we show that this strategy of self-organization is robust to severe perturbations affecting cell-cell contact formation. We also find that this mechanism of self-organization is conserved in the human prostate. Therefore, a binary interfacial interaction with the tissue boundary provides a flexible and generalizable strategy for forming and maintaining the structure of two-component tissues that exhibit abundant heterogeneity and plasticity. Our model also predicts that mutations affecting binary cell-ECM interactions are catastrophic and could contribute to loss of tissue architecture in diseases such as breast cancer.

  10. Classification of fibroglandular tissue distribution in the breast based on radiotherapy planning CT

    International Nuclear Information System (INIS)

    Juneja, Prabhjot; Evans, Philip; Windridge, David; Harris, Emma

    2016-01-01

    Accurate segmentation of breast tissues is required for a number of applications such as model based deformable registration in breast radiotherapy. The accuracy of breast tissue segmentation is affected by the spatial distribution (or pattern) of fibroglandular tissue (FT). The goal of this study was to develop and evaluate texture features, determined from planning computed tomography (CT) data, to classify the spatial distribution of FT in the breast. Planning CT data of 23 patients were evaluated in this study. Texture features were derived from the radial glandular fraction (RGF), which described the distribution of FT within three breast regions (posterior, middle, and anterior). Using visual assessment, experts grouped patients according to FT spatial distribution: sparse or non-sparse. Differences in the features between the two groups were investigated using the Wilcoxon rank test. Classification performance of the features was evaluated for a range of support vector machine (SVM) classifiers. Experts found eight patients and 15 patients had sparse and non-sparse spatial distribution of FT, respectively. A large proportion of features (>9 of 13) from the individual breast regions had significant differences (p <0.05) between the sparse and non-sparse groups. The features from the middle region had the most significant differences and gave the highest classification accuracy for all the SVM kernels investigated. Overall, the features from the middle breast region achieved the highest accuracy (91 %) with the linear SVM kernel. This study found that features based on radial glandular fraction provide a means for discriminating between fibroglandular tissue distributions and could achieve a classification accuracy of 91 %.

  11. Extreme Sparse Multinomial Logistic Regression: A Fast and Robust Framework for Hyperspectral Image Classification

    Science.gov (United States)

    Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen

    2017-12-01

    Although the sparse multinomial logistic regression (SMLR) has provided a useful tool for sparse classification, it suffers from inefficacy in dealing with high dimensional features and manually set initial regressor values. This has significantly constrained its applications for hyperspectral image (HSI) classification. In order to tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weight and bias. Second, an optimization model is established by the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR via minimizing the training error and the regressor value. Furthermore, the extended multi-attribute profiles (EMAPs) are utilized for extracting both the spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, the logistic regression via the variable splitting and the augmented Lagrangian (LORSAL) is adopted in the proposed framework for reducing the computational time. Experiments are conducted on two well-known HSI datasets, namely the Indian Pines dataset and the Pavia University dataset, which have shown the fast and robust performance of the proposed ESMLR framework.
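
    The first ESMLR step, projecting the data with randomly generated weights and bias before a multinomial logistic regression, can be sketched as follows. This uses plain scikit-learn logistic regression rather than the paper's SMLR/LORSAL solver, and the toy "spectral" data, hidden-layer size, and class layout are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def random_feature_map(X, n_hidden=64, seed=0):
    """Project inputs with randomly generated weights and bias, then apply
    a sigmoid nonlinearity (extreme-learning-machine style)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# toy stand-in for hyperspectral pixels: 3 classes, 4 "bands" each
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 0.3, size=(30, 4)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 30)

H = random_feature_map(X)                       # randomly projected features
clf = LogisticRegression(max_iter=2000).fit(H, y)
```

    The random projection plays the role of the new feature space in ESMLR; the subsequent optimization of the initial regressor and the LORSAL speed-up are not reproduced here.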

  12. Incorporation of support vector machines in the LIBS toolbox for sensitive and robust classification amidst unexpected sample and system variability.

    Science.gov (United States)

    Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P; Kumar Gundawar, Manoj

    2012-03-20

    Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g., quality assurance and process monitoring. Specifically, variability in sample, system, and experimental parameters in LIBS studies present a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a nonlinear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that the application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), because of its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA)-when measurements from samples not included in the training set are incorporated in the test data-highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples, as well as in related areas of forensic and biological sample analysis.

  13. Efficacy of hidden markov model over support vector machine on multiclass classification of healthy and cancerous cervical tissues

    Science.gov (United States)

    Mukhopadhyay, Sabyasachi; Kurmi, Indrajit; Pratiher, Sawon; Mukherjee, Sukanya; Barman, Ritwik; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2018-02-01

    In this paper, a comparative study between SVM and HMM has been carried out for multiclass classification of cervical healthy and cancerous tissues. In our study, the HMM methodology proved more promising, producing higher classification accuracy.
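
    The HMM side of such a comparison amounts to training one model per tissue class and assigning a sequence to the class whose model gives the higher likelihood. Below is a minimal discrete-HMM forward algorithm with hand-picked toy parameters; the two "tissue" models and the symbol sequence are invented, not the authors' features or fitted models.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the scaled forward algorithm. pi: initial state probabilities,
    A[i, j]: transition probabilities, B[state, symbol]: emission probs."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s                      # rescale to avoid underflow
    return ll

# two hypothetical tissue classes differing in their emission statistics
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_healthy = np.array([[0.8, 0.2], [0.6, 0.4]])
B_cancer = np.array([[0.2, 0.8], [0.4, 0.6]])

seq = [1, 1, 0, 1, 1, 1]   # mostly symbol 1, better explained by the second model
scores = {name: forward_loglik(seq, pi, A, B)
          for name, B in (("healthy", B_healthy), ("cancer", B_cancer))}
```

    Classification is then simply `max(scores, key=scores.get)`; an SVM, by contrast, would operate on fixed-length feature vectors rather than on the sequence likelihoods.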

  14. The differentiation of malignant and benign human breast tissue at surgical margins and biopsy using x-ray interaction data and Bayesian classification

    International Nuclear Information System (INIS)

    Mersov, A.; Mersov, G.; Al-Ebraheem, A.; Cornacchi, S.; Gohla, G.; Lovrics, P.; Farquharson, M.J.

    2014-01-01

    Worldwide, about 1.3 million women are diagnosed with breast cancer annually with an estimated 465,000 deaths. Accordingly, there is a need for high accuracy and speed in diagnosis of lesions suspected of being cancerous. This study assesses the interaction data collected from low energy x-rays within breast tissue samples. Trace element concentrations are assessed using x-ray fluorescence, as well as electron density, and molecular structure which are examined using incoherent and coherent scatter, respectively. Our work to date has shown that such data can provide a quantitative measure of certain tissue characterising parameters and hence, through appropriate modelling, could be used to classify samples for uses such as surgical margin detection and biopsy examination. The parameters used in this study for comparing the normal and tumour tissue sample populations are: levels of elements Ca, Cu, Fe, Br, Zn, Rb, K; the area, FWHM and amplitude from peaks fitted to the coherent scatter profile that are associated with fat, fibre and water content; the ratio of the Compton and coherent scatter peak area, FWHM and amplitude from the incoherent scatter profile. The novelty of the approach to this work lies in the fact that the classification process does not rely on one source of data but combines several measurements, the data from which in this application are modelled using a method based on Bayesian classification. The reliability of the classifications was assessed by its application to diagnostically known data that was not itself included in the thresholds determination. The results of the classification of over 70 breast tissue samples will be presented in this study. Bayesian modelling was carried out using selected significant parameters for classification resulting in 71% of normal tissue samples (n=35) and 66% of tumour tissue samples (n=35) being correctly classified when using all the samples. Bayesian classification using the same variables on all

  15. Neonatal Brain Tissue Classification with Morphological Adaptation and Unified Segmentation

    Directory of Open Access Journals (Sweden)

    Richard eBeare

    2016-03-01

    Measuring the distribution of brain tissue types (tissue classification) in neonates is necessary for studying typical and atypical brain development, such as that associated with preterm birth, and may provide biomarkers for neurodevelopmental outcomes. Compared with magnetic resonance images of adults, neonatal images present specific challenges that require the development of specialized, population-specific methods. This paper introduces MANTiS (Morphologically Adaptive Neonatal Tissue Segmentation), which extends the unified segmentation approach to tissue classification implemented in the Statistical Parametric Mapping (SPM) software to neonates. MANTiS utilizes a combination of unified segmentation, template adaptation via morphological segmentation tools, and topological filtering to segment the neonatal brain into eight tissue classes: cortical gray matter, white matter, deep nuclear gray matter, cerebellum, brainstem, cerebrospinal fluid (CSF), hippocampus, and amygdala. We evaluated the performance of MANTiS using two independent datasets. The first dataset, provided by the NeoBrainS12 challenge, consisted of coronal T2-weighted images of preterm infants (born ≤30 weeks’ gestation) acquired at 30 weeks’ corrected gestational age (n = 5), coronal T2-weighted images of preterm infants acquired at 40 weeks’ corrected gestational age (n = 5), and axial T2-weighted images of preterm infants acquired at 40 weeks’ corrected gestational age (n = 5). The second dataset, provided by the Washington University NeuroDevelopmental Research (WUNDeR) group, consisted of T2-weighted images of preterm infants (born <30 weeks’ gestation) acquired shortly after birth (n = 12), preterm infants acquired at term-equivalent age (n = 12), and healthy term-born infants (born ≥38 weeks’ gestation) acquired within the first nine days of life (n = 12). For the NeoBrainS12 dataset, mean Dice scores comparing MANTiS with manual segmentations were all above 0.7, except for
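
    The Dice score used for evaluation measures the overlap of two binary segmentation masks (1 = identical, 0 = disjoint):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

    A score above 0.7, as reported for MANTiS against manual segmentations, means the automated and manual masks agree on well over two-thirds of their combined voxels.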

  16. Robust cell tracking in epithelial tissues through identification of maximum common subgraphs.

    Science.gov (United States)

    Kursawe, Jochen; Bardenet, Rémi; Zartman, Jeremiah J; Baker, Ruth E; Fletcher, Alexander G

    2016-11-01

    Tracking of cells in live-imaging microscopy videos of epithelial sheets is a powerful tool for investigating fundamental processes in embryonic development. Characterizing cell growth, proliferation, intercalation and apoptosis in epithelia helps us to understand how morphogenetic processes such as tissue invagination and extension are locally regulated and controlled. Accurate cell tracking requires correctly resolving cells entering or leaving the field of view between frames, cell neighbour exchanges, cell removals and cell divisions. However, current tracking methods for epithelial sheets are not robust to large morphogenetic deformations and require significant manual interventions. Here, we present a novel algorithm for epithelial cell tracking, exploiting the graph-theoretic concept of a 'maximum common subgraph' to track cells between frames of a video. Our algorithm does not require the adjustment of tissue-specific parameters, and scales in sub-quadratic time with tissue size. It does not rely on precise positional information, permitting large cell movements between frames and enabling tracking in datasets acquired at low temporal resolution due to experimental constraints such as phototoxicity. To demonstrate the method, we perform tracking on the Drosophila embryonic epidermis and compare cell-cell rearrangements to previous studies in other tissues. Our implementation is open source and generally applicable to epithelial tissues. © 2016 The Authors.

  17. Joint learning and weighting of visual vocabulary for bag-of-feature based tissue classification

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2013-01-01

    their power in this field. Two important issues of bag-of-feature strategy for tissue classification are investigated in this paper: the visual vocabulary learning and weighting, which are always considered independently in traditional methods by neglecting

  18. Breast tissue classification in digital tomosynthesis images based on global gradient minimization and texture features

    Science.gov (United States)

    Qin, Xulei; Lu, Guolan; Sechopoulos, Ioannis; Fei, Baowei

    2014-03-01

    Digital breast tomosynthesis (DBT) is a pseudo-three-dimensional x-ray imaging modality proposed to decrease the effect of tissue superposition present in mammography, potentially resulting in an increase in clinical performance for the detection and diagnosis of breast cancer. Tissue classification in DBT images can be useful in risk assessment, computer-aided detection and radiation dosimetry, among other aspects. However, classifying breast tissue in DBT is a challenging problem because DBT images include complicated structures, image noise, and out-of-plane artifacts due to limited angular tomographic sampling. In this project, we propose an automatic method to classify fatty and glandular tissue in DBT images. First, the DBT images are pre-processed to enhance the tissue structures and to decrease image noise and artifacts. Second, a global smooth filter based on L0 gradient minimization is applied to eliminate detailed structures and enhance large-scale ones. Third, the similar structure regions are extracted and labeled by fuzzy C-means (FCM) classification. At the same time, the texture features are also calculated. Finally, each region is classified into different tissue types based on both intensity and texture features. The proposed method is validated on five patient DBT images, using manual segmentation as the gold standard. The Dice scores and the confusion matrix are utilized to evaluate the classified results. The evaluation results demonstrated the feasibility of the proposed method for classifying breast glandular and fat tissue on DBT images.
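
    The fuzzy C-means step can be sketched as below: soft memberships and cluster centres are updated alternately until they stabilise. The fuzzifier `m` and iteration count are typical defaults, not the paper's settings, and the data here are generic feature vectors rather than DBT regions.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-means: returns soft memberships U (n x c) and centres V (c x f)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(1, keepdims=True)            # rows are membership distributions
    for _ in range(n_iter):
        W = U ** m
        V = (W.T @ X) / W.sum(0)[:, None]   # membership-weighted centres
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, V
```

    Unlike hard k-means, each region keeps a graded membership in every cluster, which suits the ambiguous fat/gland boundaries that motivate FCM here.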

  19. Automated classification and visualization of healthy and pathological dental tissues based on near-infrared hyper-spectral imaging

    Science.gov (United States)

    Usenik, Peter; Bürmen, Miran; Vrtovec, Tomaž; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2011-03-01

    Despite major improvements in dental healthcare and technology, dental caries remains one of the most prevalent chronic diseases of modern society. The initial stages of dental caries are characterized by demineralization of enamel crystals, commonly known as white spots which are difficult to diagnose. If detected early enough, such demineralization can be arrested and reversed by non-surgical means through well established dental treatments (fluoride therapy, anti-bacterial therapy, low intensity laser irradiation). Near-infrared (NIR) hyper-spectral imaging is a new promising technique for early detection of demineralization based on distinct spectral features of healthy and pathological dental tissues. In this study, we apply NIR hyper-spectral imaging to classify and visualize healthy and pathological dental tissues including enamel, dentin, calculus, dentin caries, enamel caries and demineralized areas. For this purpose, a standardized teeth database was constructed consisting of 12 extracted human teeth with different degrees of natural dental lesions imaged by NIR hyper-spectral system, X-ray and digital color camera. The color and X-ray images of teeth were presented to a clinical expert for localization and classification of the dental tissues, thereby obtaining the gold standard. Principal component analysis was used for multivariate local modeling of healthy and pathological dental tissues. Finally, the dental tissues were classified by employing multiple discriminant analysis. High agreement was observed between the resulting classification and the gold standard with the classification sensitivity and specificity exceeding 85 % and 97 %, respectively. This study demonstrates that NIR hyper-spectral imaging has considerable diagnostic potential for imaging hard dental tissues.
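
    The classification pipeline, principal component analysis for local modelling followed by discriminant analysis, can be sketched with scikit-learn on synthetic "spectra"; the Gaussian-bump spectra, class count, and component numbers below are invented for illustration and are not the study's data or settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# hypothetical stand-ins for per-pixel NIR spectra of three tissue classes,
# each class a smooth spectral bump at a different wavelength plus noise
rng = np.random.default_rng(3)
n_bands = 60

def spectra(centre, n):
    base = np.exp(-0.5 * ((np.arange(n_bands) - centre) / 8.0) ** 2)
    return base + rng.normal(0, 0.05, size=(n, n_bands))

X = np.vstack([spectra(c, 40) for c in (15, 30, 45)])
y = np.repeat([0, 1, 2], 40)

# PCA compresses the spectra; LDA separates the tissue classes
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
model.fit(X, y)
```

    In the study the classes would instead be enamel, dentin, calculus, caries, and demineralized areas, with the expert-labelled images as ground truth.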

  20. Classification of mislabelled microarrays using robust sparse logistic regression.

    Science.gov (United States)

    Bootkrajang, Jakramate; Kabán, Ata

    2013-04-01

    Previous studies reported that labelling errors are not uncommon in microarray datasets. In such cases, the training set may become misleading, and the ability of classifiers to make reliable inferences from the data is compromised. Yet, few methods are currently available in the bioinformatics literature to deal with this problem. The few existing methods focus on data cleansing alone, without reference to classification, and their performance crucially depends on some tuning parameters. In this article, we develop a new method to detect mislabelled arrays simultaneously with learning a sparse logistic regression classifier. Our method may be seen as a label-noise robust extension of the well-known and successful Bayesian logistic regression classifier. To account for possible mislabelling, we formulate a label-flipping process as part of the classifier. The regularization parameter is automatically set using Bayesian regularization, which not only saves the computation time that cross-validation would take, but also eliminates any unwanted effects of label noise when setting the regularization parameter. Extensive experiments with both synthetic data and real microarray datasets demonstrate that our approach is able to counter the bad effects of labelling errors in terms of predictive performance, it is effective at identifying marker genes, and it simultaneously detects mislabelled arrays with high accuracy. The code is available from http://cs.bham.ac.uk/~jxb008. Supplementary data are available at Bioinformatics online.
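
    The label-flipping idea can be sketched as a logistic regression whose likelihood is written in terms of the observed, possibly flipped, labels: if s(x) is the clean-label probability, the noisy-label probability is q = g0 + (1 - g0 - g1)s, where g0 and g1 are the 0-to-1 and 1-to-0 flip rates. The sketch below fixes g0, g1, and the L2 penalty by hand and uses plain gradient ascent, whereas the paper learns them via Bayesian regularization.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit_robust_logreg(X, y, g0=0.1, g1=0.1, lr=0.1, n_iter=2000, l2=1e-3):
    """Logistic regression under a label-flipping noise model.
    g0/g1: assumed probabilities of a 0->1 / 1->0 label flip (fixed here)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        s = sigmoid(Xb @ w)                     # clean-label probability
        q = g0 + (1 - g0 - g1) * s              # observed (noisy) label probability
        # gradient of the noisy-label log-likelihood, minus an L2 penalty
        dq = (y / np.clip(q, 1e-9, None)
              - (1 - y) / np.clip(1 - q, 1e-9, None))
        grad = Xb.T @ (dq * (1 - g0 - g1) * s * (1 - s)) - l2 * w
        w += lr * grad / len(X)
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (sigmoid(Xb @ w) > 0.5).astype(int)
```

    Because the likelihood is written over q rather than s, isolated flipped labels contribute a bounded penalty and no longer drag the decision boundary as they would in a standard logistic regression.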

  1. Biased visualization of hypoperfused tissue by computed tomography due to short imaging duration: improved classification by image down-sampling and vascular models

    Energy Technology Data Exchange (ETDEWEB)

    Mikkelsen, Irene Klaerke; Ribe, Lars Riisgaard; Bekke, Susanne Lise; Tietze, Anna; Oestergaard, Leif; Mouridsen, Kim [Aarhus University Hospital, Center of Functionally Integrative Neuroscience, Aarhus C (Denmark); Jones, P.S.; Alawneh, Josef [University of Cambridge, Department of Clinical Neurosciences, Cambridge (United Kingdom); Puig, Josep; Pedraza, Salva [Dr. Josep Trueta Girona University Hospitals, Department of Radiology, Girona Biomedical Research Institute, Girona (Spain); Gillard, Jonathan H. [University of Cambridge, Department of Radiology, Cambridge (United Kingdom); Warburton, Elisabeth A. [Cambrigde University Hospitals, Addenbrooke, Stroke Unit, Cambridge (United Kingdom); Baron, Jean-Claude [University of Cambridge, Department of Clinical Neurosciences, Cambridge (United Kingdom); Centre Hospitalier Sainte Anne, INSERM U894, Paris (France)

    2015-07-15

    Lesion detection in acute stroke by computed-tomography perfusion (CTP) can be affected by incomplete bolus coverage in veins and hypoperfused tissue, so-called bolus truncation (BT), and low contrast-to-noise ratio (CNR). We examined the BT-frequency and hypothesized that image down-sampling and a vascular model (VM) for perfusion calculation would improve normo- and hypoperfused tissue classification. CTP datasets from 40 acute stroke patients were retrospectively analysed for BT. In 16 patients with hypoperfused tissue but no BT, repeated 2-by-2 image down-sampling and uniform filtering was performed, comparing CNR to perfusion-MRI levels and tissue classification to that of unprocessed data. By simulating reduced scan duration, the minimum scan duration at which estimated lesion volumes came within 10 % of their true volume was compared for VM and state-of-the-art algorithms. BT in veins and hypoperfused tissue was observed in 9/40 (22.5 %) and 17/40 patients (42.5 %), respectively. Down-sampling to 128 x 128 resolution yielded CNR comparable to MR data and improved tissue classification (p = 0.0069). VM reduced minimum scan duration, providing reliable maps of cerebral blood flow and mean transit time: 5 s (p = 0.03) and 7 s (p < 0.0001), respectively. BT is not uncommon in stroke CTP with 40-s scan duration. Applying image down-sampling and VM improves tissue classification. (orig.)
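
    The 2-by-2 down-sampling that improved CNR is block averaging: averaging four pixels roughly halves uncorrelated noise, so the contrast-to-noise ratio roughly doubles. A sketch with an invented lesion/background layout (the ROI definitions and noise level are illustrative, not the study's data):

```python
import numpy as np

def downsample2x2(img):
    """Average non-overlapping 2x2 pixel blocks, halving the resolution."""
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio between a lesion ROI and a background ROI."""
    return abs(lesion_roi.mean() - background_roi.mean()) / background_roi.std()
```

    Down-sampling trades spatial resolution for noise suppression, which is why the study's 128 x 128 maps reached CNR comparable to perfusion MRI.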

  2. Biased visualization of hypoperfused tissue by computed tomography due to short imaging duration: improved classification by image down-sampling and vascular models

    International Nuclear Information System (INIS)

    Mikkelsen, Irene Klaerke; Ribe, Lars Riisgaard; Bekke, Susanne Lise; Tietze, Anna; Oestergaard, Leif; Mouridsen, Kim; Jones, P.S.; Alawneh, Josef; Puig, Josep; Pedraza, Salva; Gillard, Jonathan H.; Warburton, Elisabeth A.; Baron, Jean-Claude

    2015-01-01

    Lesion detection in acute stroke by computed-tomography perfusion (CTP) can be affected by incomplete bolus coverage in veins and hypoperfused tissue, so-called bolus truncation (BT), and low contrast-to-noise ratio (CNR). We examined the BT-frequency and hypothesized that image down-sampling and a vascular model (VM) for perfusion calculation would improve normo- and hypoperfused tissue classification. CTP datasets from 40 acute stroke patients were retrospectively analysed for BT. In 16 patients with hypoperfused tissue but no BT, repeated 2-by-2 image down-sampling and uniform filtering was performed, comparing CNR to perfusion-MRI levels and tissue classification to that of unprocessed data. By simulating reduced scan duration, the minimum scan duration at which estimated lesion volumes came within 10 % of their true volume was compared for VM and state-of-the-art algorithms. BT in veins and hypoperfused tissue was observed in 9/40 (22.5 %) and 17/40 patients (42.5 %), respectively. Down-sampling to 128 x 128 resolution yielded CNR comparable to MR data and improved tissue classification (p = 0.0069). VM reduced minimum scan duration, providing reliable maps of cerebral blood flow and mean transit time: 5 s (p = 0.03) and 7 s (p < 0.0001), respectively. BT is not uncommon in stroke CTP with 40-s scan duration. Applying image down-sampling and VM improves tissue classification. (orig.)

  3. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin

    DEFF Research Database (Denmark)

    Hoadley, Katherine A; Yau, Christina; Wolf, Denise M

    2014-01-01

    Recent genomic analyses of pathologically defined tumor types identify "within-a-tissue" disease subtypes. However, the extent to which genomic signatures are shared across tissues is still unclear. We performed an integrative analysis using five genome-wide platforms and one proteomic platform...... on 3,527 specimens from 12 cancer types, revealing a unified classification into 11 major subtypes. Five subtypes were nearly identical to their tissue-of-origin counterparts, but several distinct cancer types were found to converge into common subtypes. Lung squamous, head and neck, and a subset...

  4. Automated Analysis and Classification of Histological Tissue Features by Multi-Dimensional Microscopic Molecular Profiling.

    Directory of Open Access Journals (Sweden)

    Daniel P Riordan

    Characterization of the molecular attributes and spatial arrangements of cells and features within complex human tissues provides a critical basis for understanding processes involved in development and disease. Moreover, the ability to automate steps in the analysis and interpretation of histological images that currently require manual inspection by pathologists could revolutionize medical diagnostics. Toward this end, we developed a new imaging approach called multidimensional microscopic molecular profiling (MMMP) that can measure several independent molecular properties in situ at subcellular resolution for the same tissue specimen. MMMP involves repeated cycles of antibody or histochemical staining, imaging, and signal removal, which ultimately can generate information analogous to a multidimensional flow cytometry analysis on intact tissue sections. We performed a MMMP analysis on a tissue microarray containing a diverse set of 102 human tissues using a panel of 15 informative antibody and 5 histochemical stains plus DAPI. Large-scale unsupervised analysis of MMMP data, and visualization of the resulting classifications, identified molecular profiles that were associated with functional tissue features. We then directly annotated H&E images from this MMMP series such that canonical histological features of interest (e.g. blood vessels, epithelium, red blood cells) were individually labeled. By integrating image annotation data, we identified molecular signatures that were associated with specific histological annotations and we developed statistical models for automatically classifying these features. The classification accuracy for automated histology labeling was objectively evaluated using a cross-validation strategy, and significant accuracy for de novo feature prediction (with a median per-pixel rate of 77% per feature from 15 annotated samples) was obtained. These results suggest that high-dimensional profiling may advance the

  5. Systematic bias in genomic classification due to contaminating non-neoplastic tissue in breast tumor samples.

    Science.gov (United States)

    Elloumi, Fathi; Hu, Zhiyuan; Li, Yan; Parker, Joel S; Gulley, Margaret L; Amos, Keith D; Troester, Melissa A

    2011-06-30

    Genomic tests are available to predict breast cancer recurrence and to guide clinical decision making. These predictors provide recurrence risk scores along with a measure of uncertainty, usually a confidence interval. The confidence interval conveys random error, not systematic bias. Standard tumor sampling methods make this problematic, as it is common for a substantial proportion (typically 30-50%) of a tumor sample to be comprised of histologically benign tissue. This "normal" tissue could represent a source of non-random error, or systematic bias, in genomic classification. To assess the sensitivity of genomic classification to systematic error from normal contamination, we collected 55 tumor samples and paired tumor-adjacent normal tissue. Using genomic signatures from the tumor and paired normal, we evaluated how increasing normal contamination altered recurrence risk scores for various genomic predictors. Simulations of normal tissue contamination caused misclassification of tumors in all predictors evaluated, but different breast cancer predictors showed different types of vulnerability to normal tissue bias. While two predictors had an unpredictable direction of bias (normal contamination yielded either higher or lower predicted risk of relapse), one signature showed a predictable direction of normal tissue effects. Due to this predictable direction of effect, this signature (the PAM50) was adjusted for normal tissue contamination, and these corrections improved sensitivity and negative predictive value. For all three assays, quality control standards and/or appropriate bias adjustment strategies can be used to improve assay reliability. Normal tissue sampled concurrently with tumor is an important source of bias in breast genomic predictors. All genomic predictors show some sensitivity to normal tissue contamination, and ideal strategies for mitigating this bias vary depending upon the particular genes and computational methods used in the predictor.
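
    The contamination simulation described above can be illustrated with a toy centroid-style risk score. Everything here is synthetic and hypothetical (random "expression profiles", a made-up high-risk centroid), not any published signature such as PAM50; it only shows the mechanism by which mixing in normal tissue shifts a correlation-based risk score.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_genes = 50
    tumor = rng.normal(2.0, 1.0, n_genes)    # hypothetical tumor expression profile
    normal = rng.normal(0.0, 1.0, n_genes)   # paired tumor-adjacent normal profile
    high_risk_centroid = tumor.copy()        # toy "high-risk" reference signature

    def risk_score(profile):
        """Correlation to the high-risk centroid, as in centroid-based predictors."""
        return float(np.corrcoef(profile, high_risk_centroid)[0, 1])

    fractions = (0.0, 0.3, 0.5, 0.7)
    scores = [risk_score((1 - f) * tumor + f * normal) for f in fractions]
    for f, s in zip(fractions, scores):
        print(f"normal fraction {f:.0%}: risk score {s:+.3f}")
    ```

    The score drifts systematically (not randomly) as the normal fraction grows, which is exactly the kind of predictable-direction bias that makes a correction strategy feasible for some signatures.
    
    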

  6. Classification of cardiovascular tissues using LBP based descriptors and a cascade SVM.

    Science.gov (United States)

    Mazo, Claudia; Alegre, Enrique; Trujillo, Maria

    2017-08-01

    Histological images have characteristics, such as texture, shape, colour and spatial structure, that permit the differentiation of each fundamental tissue and organ. Texture is one of the most discriminative features. The automatic classification of tissues and organs based on histology images is an open problem, due to the lack of automatic solutions when treating tissues without pathologies. In this paper, we demonstrate that it is possible to automatically classify cardiovascular tissues using texture information and Support Vector Machines (SVM). Additionally, we found that it is feasible to recognise several cardiovascular organs following the same process. The texture of histological images was described using Local Binary Patterns (LBP), LBP Rotation Invariant (LBPri), Haralick features, and different concatenations between them, representing in this way their content. Using an SVM with linear kernel, we selected the most appropriate descriptor which, for this problem, was a concatenation of LBP and LBPri. Due to the small number of images available, we could not follow an approach based on deep learning, but instead selected the classifier that yielded the highest performance by comparing SVM with Random Forest and Linear Discriminant Analysis. Once SVM was selected as the classifier with the highest area under the curve, representing both higher recall and precision, we tuned it by evaluating different kernels, finding that a linear SVM allowed us to accurately separate four classes of tissues: (i) cardiac muscle of the heart, (ii) smooth muscle of the muscular artery, (iii) loose connective tissue, and (iv) smooth muscle of the large vein and the elastic artery. The experimental validation was conducted using 3000 blocks of 100 × 100 pixels, with 600 blocks per class, and the classification was assessed using 10-fold cross-validation. Using LBP concatenated with LBPri as the descriptor and an SVM with linear kernel, the main four classes of tissues were
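
    The LBP-plus-linear-SVM pipeline described above can be sketched end to end. This is a minimal reconstruction on synthetic textures, not the paper's histology data: the two "tissue classes" are stand-in fine vs. coarse random textures, and the LBP here is the basic radius-1, 8-neighbour variant rather than the LBP/LBPri concatenation the authors selected.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import LinearSVC

    def lbp_histogram(img):
        """Basic 8-neighbour LBP (radius 1) as a normalized 256-bin histogram."""
        c = img[1:-1, 1:-1]
        neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                      img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
        codes = np.zeros(c.shape, dtype=np.int32)
        for bit, n in enumerate(neighbours):
            codes += (n >= c).astype(np.int32) << bit
        hist = np.bincount(codes.ravel(), minlength=256).astype(float)
        return hist / hist.sum()

    rng = np.random.default_rng(0)
    fine = [rng.normal(size=(100, 100)) for _ in range(40)]               # stand-in class 1
    coarse = [np.repeat(np.repeat(rng.normal(size=(25, 25)), 4, 0), 4, 1)
              for _ in range(40)]                                         # stand-in class 2

    X = np.array([lbp_histogram(p) for p in fine + coarse])
    y = np.array([0] * 40 + [1] * 40)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)
    svm = LinearSVC(C=1.0).fit(X_train, y_train)
    print("held-out accuracy:", svm.score(X_test, y_test))
    ```

    Each 100 × 100 block is reduced to a fixed-length histogram, so a linear SVM trained on a few dozen blocks per class suffices, mirroring the paper's choice of a classical classifier over deep learning for small datasets.
    
    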

  7. IARC use of oxidative stress as key mode of action characteristic for facilitating cancer classification: Glyphosate case example illustrating a lack of robustness in interpretative implementation.

    Science.gov (United States)

    Bus, James S

    2017-06-01

    The International Agency for Research on Cancer (IARC) has formulated 10 key characteristics of human carcinogens to incorporate mechanistic data into cancer hazard classifications. The analysis used glyphosate as a case example to examine the robustness of IARC's determination of oxidative stress as "strong" evidence supporting a plausible cancer mechanism in humans. The IARC analysis primarily relied on 14 human/mammalian studies; 19 non-mammalian studies were uninformative of human cancer given the broad spectrum of test species and extensive use of formulations and aquatic testing. The mammalian studies had substantial experimental limitations for informing cancer mechanism including use of: single doses and time points; cytotoxic/toxic test doses; tissues not identified as potential cancer targets; glyphosate formulations or mixtures; technically limited oxidative stress biomarkers. The doses were many orders of magnitude higher than human exposures determined in human biomonitoring studies. The glyphosate case example reveals that the IARC evaluation fell substantially short of "strong" supporting evidence of oxidative stress as a plausible human cancer mechanism, and suggests that other IARC monographs relying on the 10 key characteristics approach should be similarly examined for a lack of robust data integration fundamental to reasonable mode of action evaluations. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Projected estimators for robust semi-supervised classification

    DEFF Research Database (Denmark)

    Krijthe, Jesse H.; Loog, Marco

    2017-01-01

    For semi-supervised techniques to be applied safely in practice we at least want methods to outperform their supervised counterparts. We study this question for classification using the well-known quadratic surrogate loss function. Unlike other approaches to semi-supervised learning, the procedure...... specifically, we prove that, measured on the labeled and unlabeled training data, this semi-supervised procedure never gives a lower quadratic loss than the supervised alternative. To our knowledge this is the first approach that offers such strong, albeit conservative, guarantees for improvement over...... the supervised solution. The characteristics of our approach are explicated using benchmark datasets to further understand the similarities and differences between the quadratic loss criterion used in the theoretical results and the classification accuracy typically considered in practice....

  9. Classification of mass and normal breast tissue: A convolution neural network classifier with spatial domain and texture images

    International Nuclear Information System (INIS)

    Sahiner, B.; Chan, H.P.; Petrick, N.; Helvie, M.A.; Adler, D.D.; Goodsitt, M.M.; Wei, D.

    1996-01-01

    The authors investigated the classification of regions of interest (ROIs) on mammograms as either mass or normal tissue using a convolution neural network (CNN). A CNN is a back-propagation neural network with two-dimensional (2-D) weight kernels that operate on images. A generalized, fast and stable implementation of the CNN was developed. The input images to the CNN were obtained from the ROIs using two techniques. The first technique employed averaging and subsampling. The second technique employed texture feature extraction methods applied to small subregions inside the ROI. Features computed over different subregions were arranged as texture images, which were subsequently used as CNN inputs. The effects of CNN architecture and texture feature parameters on classification accuracy were studied. Receiver operating characteristic (ROC) methodology was used to evaluate the classification accuracy. A data set consisting of 168 ROIs containing biopsy-proven masses and 504 ROIs containing normal breast tissue was extracted from 168 mammograms by radiologists experienced in mammography. This data set was used for training and testing the CNN. With the best combination of CNN architecture and texture feature parameters, the area under the test ROC curve reached 0.87, which corresponded to a true-positive fraction of 90% at a false-positive fraction of 31%. The results demonstrate the feasibility of using a CNN for classification of masses and normal tissue on mammograms.
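
    The two ingredients named in the abstract, 2-D weight kernels operating on images and input preparation by averaging-and-subsampling, can be sketched as a single forward step. The sizes, the random kernel, and the rectifier nonlinearity are illustrative assumptions, not the authors' trained network.

    ```python
    import numpy as np

    def conv2d(img, kernel):
        """Valid-mode 2-D correlation of one weight kernel over an image."""
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.empty((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
        return out

    def avg_pool(img, k=2):
        """Averaging-and-subsampling, as used to shrink the ROI before the CNN."""
        h, w = (img.shape[0] // k) * k, (img.shape[1] // k) * k
        return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

    roi = np.random.default_rng(0).normal(size=(64, 64))   # stand-in mammogram ROI
    x = avg_pool(roi, 2)                                   # 64x64 ROI -> 32x32 input
    kernel = np.random.default_rng(1).normal(size=(5, 5))  # one 2-D weight kernel
    feature_map = np.maximum(conv2d(x, kernel), 0.0)       # one unit's activation map
    print(feature_map.shape)
    ```

    A 5 × 5 kernel over a 32 × 32 input yields a 28 × 28 feature map; stacking several such kernels and training their weights by back-propagation is what distinguishes the CNN from fixed texture filters.
    
    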

  10. A new classification method for MALDI imaging mass spectrometry data acquired on formalin-fixed paraffin-embedded tissue samples.

    Science.gov (United States)

    Boskamp, Tobias; Lachmund, Delf; Oetjen, Janina; Cordero Hernandez, Yovany; Trede, Dennis; Maass, Peter; Casadonte, Rita; Kriegsmann, Jörg; Warth, Arne; Dienemann, Hendrik; Weichert, Wilko; Kriegsmann, Mark

    2017-07-01

    Matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI IMS) shows a high potential for applications in histopathological diagnosis, and in particular for supporting tumor typing and subtyping. The development of such applications requires the extraction of spectral fingerprints that are relevant for the given tissue and the identification of biomarkers associated with these spectral patterns. We propose a novel data analysis method based on the extraction of characteristic spectral patterns (CSPs) that allow automated generation of classification models for spectral data. Formalin-fixed, paraffin-embedded (FFPE) tissue samples from N=445 patients assembled on 12 tissue microarrays were analyzed. The method was applied to discriminate primary lung and pancreatic cancer, as well as adenocarcinoma and squamous cell carcinoma of the lung. Classification accuracies of 100% and 82.8%, respectively, could be achieved on core level, assessed by cross-validation. The method outperformed the more conventional classification method based on the extraction of individual m/z values in the first application, while achieving a comparable accuracy in the second. LC-MS/MS peptide identification demonstrated that the spectral features present in selected CSPs correspond to peptides relevant for the respective classification. This article is part of a Special Issue entitled: MALDI Imaging, edited by Dr. Corinna Henkel and Prof. Peter Hoffmann. Copyright © 2016 Elsevier B.V. All rights reserved.
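
    The abstract does not spell out the CSP extraction algorithm, so as a hedged illustration of the general idea (recovering shared spectral patterns from many pixel spectra rather than picking individual m/z values), non-negative matrix factorization is a commonly used related technique. The peak positions, mixture weights, and noise here are all synthetic assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    mz = np.linspace(0.0, 1.0, 200)

    def peak(center, width=0.01):
        return np.exp(-((mz - center) ** 2) / (2.0 * width ** 2))

    # Two hypothetical characteristic spectral patterns (groups of mass peaks)
    pattern_a = peak(0.2) + 0.5 * peak(0.6)
    pattern_b = peak(0.4) + 0.8 * peak(0.8)
    patterns = np.vstack([pattern_a, pattern_b])

    # 100 pixel spectra: nonnegative mixtures of the two patterns plus noise
    weights = rng.uniform(0.0, 1.0, (100, 2))
    spectra = weights @ patterns + rng.uniform(0.0, 0.02, (100, 200))

    nmf = NMF(n_components=2, init="nndsvd", random_state=0, max_iter=1000)
    W = nmf.fit_transform(spectra)   # per-spectrum pattern abundances
    H = nmf.components_              # recovered spectral patterns
    ```

    The per-spectrum abundances in `W` can then feed a classifier, which is the sense in which pattern-level features are more robust than single m/z features.
    
    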

  11. Polarimetry based partial least square classification of ex vivo healthy and basal cell carcinoma human skin tissues.

    Science.gov (United States)

    Ahmad, Iftikhar; Ahmad, Manzoor; Khan, Karim; Ikram, Masroor

    2016-06-01

    Optical polarimetry was employed for assessment of ex vivo healthy and basal cell carcinoma (BCC) tissue samples from human skin. Polarimetric analyses revealed that depolarization and retardance for the healthy tissue group were significantly higher than for the BCC group. Polarimetry together with PLS statistics holds promise for automated pathology classification. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. SU-F-T-187: Quantifying Normal Tissue Sparing with 4D Robust Optimization of Intensity Modulated Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Newpower, M; Ge, S; Mohan, R [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: To report an approach to quantify the normal tissue sparing for 4D robustly-optimized versus PTV-optimized IMPT plans. Methods: We generated two sets of 90 DVHs from a patient’s 10-phase 4D CT set; one by conventional PTV-based optimization done in the Eclipse treatment planning system, and the other by an in-house robust optimization algorithm. The 90 DVHs were created for the following scenarios in each of the ten phases of the 4DCT: ± 5mm shift along x, y, z; ± 3.5% range uncertainty and a nominal scenario. A Matlab function written by Gay and Niemierko was modified to calculate EUD for each DVH for the following structures: esophagus, heart, ipsilateral lung and spinal cord. An F-test determined whether or not the variances of each structure’s DVHs were statistically different. Then a t-test determined if the average EUDs for each optimization algorithm were statistically significantly different. Results: T-test results showed each structure had a statistically significant difference in average EUD when comparing robust optimization versus PTV-based optimization. Under robust optimization all structures except the spinal cord received lower EUDs than PTV-based optimization. Using robust optimization the average EUDs decreased 1.45% for the esophagus, 1.54% for the heart and 5.45% for the ipsilateral lung. The average EUD to the spinal cord increased 24.86% but was still well below tolerance. Conclusion: This work has helped quantify a qualitative relationship noted earlier in our work: that robust optimization leads to plans with greater normal tissue sparing compared to PTV-based optimization. Except in the case of the spinal cord all structures received a lower EUD under robust optimization and these results are statistically significant. While the average EUD to the spinal cord increased to 25.06 Gy under robust optimization it is still well under the TD50 value of 66.5 Gy from Emami et al. Supported in part by the NCI U19 CA021239.
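
    The EUD figures above follow Niemierko's generalized EUD, which the cited Matlab function implements: EUD = (Σᵢ vᵢ Dᵢᵃ)^(1/a) over the bins of a differential DVH. A minimal sketch, with an illustrative toy DVH and illustrative values of the tissue parameter a (not the study's clinical data):

    ```python
    import numpy as np

    def eud(doses_gy, volumes, a):
        """Generalized EUD (Niemierko): (sum_i v_i * D_i**a) ** (1/a),
        where v_i are fractional volumes normalized to sum to 1."""
        v = np.asarray(volumes, dtype=float)
        v = v / v.sum()
        d = np.asarray(doses_gy, dtype=float)
        return float((v * d ** a).sum() ** (1.0 / a))

    # Toy differential DVH: half the organ at 10 Gy, half at 30 Gy.
    # a near 1 approximates the mean dose (parallel organs);
    # large positive a pulls EUD toward the maximum dose (serial organs,
    # e.g. spinal cord), which is why serial structures are scored harshly.
    print(eud([10, 30], [0.5, 0.5], a=1))    # mean dose
    print(eud([10, 30], [0.5, 0.5], a=12))   # weighted toward 30 Gy
    ```

    Running the same EUD calculation over the 90 scenario DVHs per structure, and then applying the F-test and t-test, reproduces the statistical workflow the abstract describes.
    
    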

  13. Robust Speech/Non-Speech Classification in Heterogeneous Multimedia Content

    NARCIS (Netherlands)

    Huijbregts, M.A.H.; de Jong, Franciska M.G.

    In this paper we present a speech/non-speech classification method that allows high quality classification without the need to know in advance what kinds of audible non-speech events are present in an audio recording and that does not require a single parameter to be tuned on in-domain data. Because

  14. Robust immunohistochemical staining of several classes of proteins in tissues subjected to autolysis.

    Science.gov (United States)

    Maleszewski, Joseph; Lu, Jie; Fox-Talbot, Karen; Halushka, Marc K

    2007-06-01

    Despite the common use of immunohistochemistry in autopsy tissues, the stability of most proteins over extended time periods is unknown. The robustness of signal for 16 proteins (MMP1, MMP2, MMP3, MMP9, TIMP1, TIMP2, TIMP3, AGER, MSR, SCARB1, OLR1, CD36, LTF, LGALS3, LYZ, and DDOST) and two measures of advanced glycation end products (AGE, CML) was evaluated. Two formalin-fixed, paraffin-embedded human tissue arrays containing 16 tissues each were created to evaluate 48 hr of autolysis in a warm or cold environment. For these classes of proteins, matrix metalloproteinases and their inhibitors, scavenger receptors, and advanced glycation end product receptors, we saw no systematic diminution of signal intensity during a period of 24 hr. Analysis was performed by two independent observers and confirmed for a subset of proteins by digital analysis and Western blotting. We conclude that these classes of proteins degrade slowly and faithfully maintain their immunohistochemistry characteristics over at least a 24-hr time interval in devitalized tissues. This study supports the use of autopsy tissues with short postmortem intervals for immunohistochemical studies for diseases such as diabetic vascular disease, cancer, Alzheimer's disease, atherosclerosis, and other pathological states. This manuscript contains online supplemental material at http://www.jhc.org. Please visit this article online to view these materials.

  15. Effects on MR images compression in tissue classification quality

    International Nuclear Information System (INIS)

    Santalla, H; Meschino, G; Ballarin, V

    2007-01-01

    It is known that image compression is required to optimize storage in memory. Moreover, transmission speed can be significantly improved. Lossless compression is used without controversy in medicine, though its benefits are limited. If we compress images lossily, the image cannot be totally recovered; we can only recover an approximation. At this point the definition of 'quality' is essential. What do we understand by 'quality'? How can we evaluate a compressed image? Quality in images is an attribute with several definitions and interpretations, which ultimately depend on the subsequent use we want to give them. This work proposes a quantitative analysis of quality for lossy compressed Magnetic Resonance (MR) images, and of its influence on automatic tissue classification accomplished with these images.
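
    A common starting point for the quantitative quality question raised above is mean-squared-error-based PSNR, although the abstract's very point is that such generic metrics may not capture the quality that matters for tissue classification. In this sketch, coarse intensity quantization is used as a crude stand-in for lossy compression; the image and quantization steps are illustrative.

    ```python
    import numpy as np

    def psnr(original, degraded, peak=255.0):
        """Peak signal-to-noise ratio in dB; a common (if imperfect) quality metric."""
        mse = np.mean((original.astype(float) - degraded.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (128, 128)).astype(np.uint8)

    for step in (1, 8, 32):      # coarser quantization ~ stronger lossy compression
        degraded = (img // step) * step
        print(f"quantization step {step:2d}: PSNR = {psnr(img, degraded):.1f} dB")
    ```

    A task-oriented evaluation, as the paper proposes, would instead compare tissue-classification results on the original and degraded images, since two images with equal PSNR can differ greatly in diagnostic usefulness.
    
    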

  16. Projected estimators for robust semi-supervised classification

    NARCIS (Netherlands)

    Krijthe, J.H.; Loog, M.

    2017-01-01

    For semi-supervised techniques to be applied safely in practice we at least want methods to outperform their supervised counterparts. We study this question for classification using the well-known quadratic surrogate loss function. Unlike other approaches to semi-supervised learning, the

  17. Modular design of artificial tissue homeostasis: robust control through synthetic cellular heterogeneity.

    Directory of Open Access Journals (Sweden)

    Miles Miller

    Synthetic biology efforts have largely focused on small engineered gene networks, yet understanding how to integrate multiple synthetic modules and interface them with endogenous pathways remains a challenge. Here we present the design, system integration, and analysis of several large scale synthetic gene circuits for artificial tissue homeostasis. Diabetes therapy represents a possible application for engineered homeostasis, where genetically programmed stem cells maintain a steady population of β-cells despite continuous turnover. We develop a new iterative process that incorporates modular design principles with hierarchical performance optimization targeted for environments with uncertainty and incomplete information. We employ theoretical analysis and computational simulations of multicellular reaction/diffusion models to design and understand system behavior, and find that certain features often associated with robustness (e.g., multicellular synchronization and noise attenuation) are actually detrimental for tissue homeostasis. We overcome these problems by engineering a new class of genetic modules for 'synthetic cellular heterogeneity' that function to generate beneficial population diversity. We design two such modules (an asynchronous genetic oscillator and a signaling throttle mechanism), demonstrate their capacity for enhancing robust control, and provide guidance for experimental implementation with various computational techniques. We found that designing modules for synthetic heterogeneity can be complex, and in general requires a framework for non-linear and multifactorial analysis. Consequently, we adapt a 'phenotypic sensitivity analysis' method to determine how functional module behaviors combine to achieve optimal system performance. We ultimately combine this analysis with Bayesian network inference to extract critical, causal relationships between a module's biochemical rate-constants, its high level functional behavior in

  18. Modular design of artificial tissue homeostasis: robust control through synthetic cellular heterogeneity.

    Science.gov (United States)

    Miller, Miles; Hafner, Marc; Sontag, Eduardo; Davidsohn, Noah; Subramanian, Sairam; Purnick, Priscilla E M; Lauffenburger, Douglas; Weiss, Ron

    2012-01-01

    Synthetic biology efforts have largely focused on small engineered gene networks, yet understanding how to integrate multiple synthetic modules and interface them with endogenous pathways remains a challenge. Here we present the design, system integration, and analysis of several large scale synthetic gene circuits for artificial tissue homeostasis. Diabetes therapy represents a possible application for engineered homeostasis, where genetically programmed stem cells maintain a steady population of β-cells despite continuous turnover. We develop a new iterative process that incorporates modular design principles with hierarchical performance optimization targeted for environments with uncertainty and incomplete information. We employ theoretical analysis and computational simulations of multicellular reaction/diffusion models to design and understand system behavior, and find that certain features often associated with robustness (e.g., multicellular synchronization and noise attenuation) are actually detrimental for tissue homeostasis. We overcome these problems by engineering a new class of genetic modules for 'synthetic cellular heterogeneity' that function to generate beneficial population diversity. We design two such modules (an asynchronous genetic oscillator and a signaling throttle mechanism), demonstrate their capacity for enhancing robust control, and provide guidance for experimental implementation with various computational techniques. We found that designing modules for synthetic heterogeneity can be complex, and in general requires a framework for non-linear and multifactorial analysis. Consequently, we adapt a 'phenotypic sensitivity analysis' method to determine how functional module behaviors combine to achieve optimal system performance. We ultimately combine this analysis with Bayesian network inference to extract critical, causal relationships between a module's biochemical rate-constants, its high level functional behavior in isolation, and

  19. The analysis of image feature robustness using cometcloud

    Directory of Open Access Journals (Sweden)

    Xin Qi

    2012-01-01

    The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, which are assessed while simulating different imaging challenges including out-of-focus imaging, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features that were evaluated include co-occurrence matrix, center-symmetric auto-correlation, texture feature coding method, local binary pattern, and texton. Due to the independence of each transformation and texture descriptor, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All the combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms all the other four texture descriptors but also requires the longest computational time: it is roughly 10 times slower than local binary pattern and texton. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval.

  20. Chronic radiation effects on dental hard tissue (''radiation caries''). Classification and therapeutic strategies

    International Nuclear Information System (INIS)

    Groetz, K.A.; Brahm, R.; Al-Nawas, B.; Wagner, W.; Riesenbeck, D.; Willich, N.; Seegenschmiedt, M.H.

    2001-01-01

    Objectives: Since the first description of rapid destruction of dental hard tissues following head and neck radiotherapy 80 years ago, 'radiation caries' has been an established clinical finding. The internationally accepted clinical evaluation score of the RTOG/EORTC, however, lacks a classification of this frequent radiogenic alteration. Material and Methods: Medical records, data and images of radiation effects on the teeth of more than 1,500 patients who underwent periradiotherapeutic care were analyzed. Macroscopic alterations regarding the grade of late lesions of tooth crowns were used for a classification into 4 grades according to the RTOG/EORTC guidelines. Results: No early radiation effects were found by macroscopic inspection. In the first 90 days following radiotherapy, one third of the patients complained of reversible hypersensitivity, which may be related to a temporary hyperemia of the pulp. It was possible to classify radiation caries as a late radiation effect on a graded scale, as known from the RTOG/EORTC for other organ systems. This is a prerequisite for the integration of radiation caries into the international nomenclature of the RTOG/EORTC classification. Conclusions: The documentation of early radiation effects on dental hard tissues appears to be negligible. On the other hand, the documentation of late radiation effects has a high clinical impact. The identification of an initial lesion at the high-risk areas of the neck and incisal part of the tooth can lead to successful therapy, a major prerequisite for orofacial rehabilitation. An internationally standardized documentation is a basis for evaluating the side effects of radio-oncological therapy as well as the effectiveness of protective and supportive procedures. (orig.)

  1. Chance constrained uncertain classification via robust optimization

    NARCIS (Netherlands)

    Ben-Tal, A.; Bhadra, S.; Bhattacharayya, C.; Saketha Nat, J.

    2011-01-01

    This paper studies the problem of constructing robust classifiers when the training is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately such a CCP turns out

  2. Is overall similarity classification less effortful than single-dimension classification?

    Science.gov (United States)

    Wills, Andy J; Milton, Fraser; Longmore, Christopher A; Hester, Sarah; Robinson, Jo

    2013-01-01

    It is sometimes argued that the implementation of an overall similarity classification is less effortful than the implementation of a single-dimension classification. In the current article, we argue that the evidence securely in support of this view is limited, and report additional evidence in support of the opposite proposition--overall similarity classification is more effortful than single-dimension classification. Using a match-to-standards procedure, Experiments 1A, 1B and 2 demonstrate that concurrent load reduces the prevalence of overall similarity classification, and that this effect is robust to changes in the concurrent load task employed, the level of time pressure experienced, and the short-term memory requirements of the classification task. Experiment 3 demonstrates that participants who produced overall similarity classifications from the outset have larger working memory capacities than those who produced single-dimension classifications initially, and Experiment 4 demonstrates that instructions to respond meticulously increase the prevalence of overall similarity classification.

  3. Active relearning for robust supervised classification of pulmonary emphysema

    Science.gov (United States)

    Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Radiologists are adept at recognizing the appearance of lung parenchymal abnormalities in CT scans. However, the inconsistent differential diagnosis, due to subjective aggregation, mandates supervised classification. Toward optimizing emphysema classification, we introduce a physician-in-the-loop feedback approach in order to minimize uncertainty in the selected training samples. Using multi-view inductive learning with the training samples, an ensemble of Support Vector Machine (SVM) models, each based on a specific pair-wise dissimilarity metric, was constructed in less than six seconds. In the active relearning phase, the ensemble-expert label conflicts were resolved by an expert. This just-in-time feedback with unoptimized SVMs yielded a 15% increase in classification accuracy and a 25% reduction in the number of support vectors. The generality of relearning was assessed in the optimized parameter space of six different classifiers across seven dissimilarity metrics. The resultant average improvement in accuracy was 21%. The co-operative feedback method proposed here could enhance both diagnostic and staging throughput efficiency in chest radiology practice.

  4. Improved classification and visualization of healthy and pathological hard dental tissues by modeling specular reflections in NIR hyperspectral images

    Science.gov (United States)

    Usenik, Peter; Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2012-03-01

    Despite major improvements in dental healthcare and technology, dental caries remains one of the most prevalent chronic diseases of modern society. The initial stages of dental caries are characterized by demineralization of enamel crystals, commonly known as white spots, which are difficult to diagnose. Near-infrared (NIR) hyperspectral imaging is a promising new technique for early detection of demineralization which can classify healthy and pathological dental tissues. However, due to non-ideal illumination of the tooth surface, the hyperspectral images can exhibit specular reflections, in particular around the edges and the ridges of the teeth. These reflections significantly affect the performance of automated classification and visualization methods. A cross-polarized imaging setup can effectively remove the specular reflections; however, due to its complexity and other imaging limitations, it is not always feasible. In this paper, we propose an alternative approach based on modeling the specular reflections of hard dental tissues, which significantly improves the classification accuracy in the presence of specular reflections. The method was evaluated on five extracted human teeth with a corresponding gold standard for six different healthy and pathological hard dental tissues: enamel, dentin, calculus, dentin caries, enamel caries and demineralized regions. Principal component analysis (PCA) was used for multivariate local modeling of healthy and pathological dental tissues. The classification was performed by employing multiple discriminant analysis. Based on the obtained results, we believe the proposed method can be considered an effective alternative to complex cross-polarized imaging setups.
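
The PCA-then-discriminant-analysis pipeline above can be sketched in a few lines. The 64-band "spectra" and the two stand-in tissue classes below are assumptions, not the paper's hyperspectral tooth data, and a plain LDA stands in for the multiple discriminant analysis step.

```python
# Minimal sketch: per-pixel spectra -> PCA scores -> discriminant classifier.
# Synthetic spectra stand in for the NIR hyperspectral data (assumption).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.RandomState(1)
n_bands = 64
# Two hypothetical tissue classes with slightly different mean spectra
enamel = rng.normal(loc=1.0, scale=0.1, size=(200, n_bands))
dentin = rng.normal(loc=0.8, scale=0.1, size=(200, n_bands))
X = np.vstack([enamel, dentin])
y = np.array([0] * 200 + [1] * 200)

pca = PCA(n_components=5).fit(X)      # multivariate local model of the spectra
scores = pca.transform(X)             # low-dimensional representation
clf = LinearDiscriminantAnalysis().fit(scores, y)
acc = clf.score(scores, y)
print(f"training accuracy: {acc:.2f}")
```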

  5. Training echo state networks for rotation-invariant bone marrow cell classification.

    Science.gov (United States)

    Kainz, Philipp; Burgsteiner, Harald; Asslaber, Martin; Ahammer, Helmut

    2017-01-01

    The main principle of diagnostic pathology is the reliable interpretation of individual cells in context of the tissue architecture. Especially a confident examination of bone marrow specimen is dependent on a valid classification of myeloid cells. In this work, we propose a novel rotation-invariant learning scheme for multi-class echo state networks (ESNs), which achieves very high performance in automated bone marrow cell classification. Based on representing static images as temporal sequence of rotations, we show how ESNs robustly recognize cells of arbitrary rotations by taking advantage of their short-term memory capacity. The performance of our approach is compared to a classification random forest that learns rotation-invariance in a conventional way by exhaustively training on multiple rotations of individual samples. The methods were evaluated on a human bone marrow image database consisting of granulopoietic and erythropoietic cells in different maturation stages. Our ESN approach to cell classification does not rely on segmentation of cells or manual feature extraction and can therefore directly be applied to image data.

  6. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    Science.gov (United States)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically-relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were: KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
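
The 10-fold cross-validated comparison above can be sketched with off-the-shelf scikit-learn estimators. Synthetic six-class data stands in for the MSI ground-truth database, and only four of the eight algorithms are shown (W-LDA and the ensembles are omitted); all of this is illustrative.

```python
# Hedged sketch of a 10-fold CV algorithm comparison on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)

# Six tissue classes, mirroring the six histopathological ground-truth types
X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
}
accuracies = {name: cross_val_score(m, X, y, cv=10).mean()
              for name, m in models.items()}
for name, acc in accuracies.items():
    print(f"{name}: {acc:.3f}")
```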

  7. The use of Compton scattering to differentiate between classifications of normal and diseased breast tissue

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, Elaine A; Farquharson, Michael J; Flinton, David M [School of Allied Health Sciences, City University, Charterhouse Square, London EC1M 6PA (United Kingdom)

    2005-07-21

    This study describes a technique for measuring the electron density of breast tissue utilizing Compton scattered photons. The Kα2 line from a tungsten target industrial x-ray tube (57.97 keV) was used and the scattered x-rays collected at an angle of 30°. At this angle the Compton and coherent photon peaks can be resolved using an energy dispersive detector and a peak fitting algorithm. The system was calibrated using solutions of known electron density. The results obtained from a pilot study of 22 tissues are presented. The tissue samples investigated comprise four different tissue classifications: adipose, malignancy, fibroadenoma and fibrocystic change (FCC). It is shown that there is a difference between adipose and malignant tissue, to a value of 9.0%, and between adipose and FCC, to a value of 12.7%. These figures are found to be significant by statistical analysis. The differences between adipose and fibroadenoma tissues (2.2%) and between malignancy and FCC (3.4%) are not significant. It is hypothesized that the alteration in glucose uptake within malignant cells may cause these tissues to have an elevated electron density. The fibrotic nature of tissue that has undergone FCC gives the highest measure of all tissue types.
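
The calibration step described above amounts to a straight-line fit between the measured Compton-peak counts and the known electron densities of the calibration solutions. The numbers below are illustrative placeholders, not values from the paper.

```python
# Linear calibration: Compton-peak intensity scales with electron density,
# so known solutions give a line that converts counts to density.
# All numeric values are hypothetical.
import numpy as np

# Calibration solutions: electron density (1e23 e-/cm^3) vs measured counts
density = np.array([3.00, 3.20, 3.35, 3.50, 3.70])
counts = np.array([1510.0, 1608.0, 1689.0, 1762.0, 1859.0])

slope, intercept = np.polyfit(counts, density, 1)   # least-squares line

def counts_to_density(c):
    """Convert a measured Compton-peak count rate to electron density."""
    return slope * c + intercept

sample = counts_to_density(1700.0)
print(f"estimated electron density: {sample:.2f}")
```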

  8. The use of Compton scattering to differentiate between classifications of normal and diseased breast tissue

    Science.gov (United States)

    Ryan, Elaine A.; Farquharson, Michael J.; Flinton, David M.

    2005-07-01

    This study describes a technique for measuring the electron density of breast tissue utilizing Compton scattered photons. The Kα2 line from a tungsten target industrial x-ray tube (57.97 keV) was used and the scattered x-rays collected at an angle of 30°. At this angle the Compton and coherent photon peaks can be resolved using an energy dispersive detector and a peak fitting algorithm. The system was calibrated using solutions of known electron density. The results obtained from a pilot study of 22 tissues are presented. The tissue samples investigated comprise four different tissue classifications: adipose, malignancy, fibroadenoma and fibrocystic change (FCC). It is shown that there is a difference between adipose and malignant tissue, to a value of 9.0%, and between adipose and FCC, to a value of 12.7%. These figures are found to be significant by statistical analysis. The differences between adipose and fibroadenoma tissues (2.2%) and between malignancy and FCC (3.4%) are not significant. It is hypothesized that the alteration in glucose uptake within malignant cells may cause these tissues to have an elevated electron density. The fibrotic nature of tissue that has undergone FCC gives the highest measure of all tissue types.

  9. The use of Compton scattering to differentiate between classifications of normal and diseased breast tissue

    International Nuclear Information System (INIS)

    Ryan, Elaine A; Farquharson, Michael J; Flinton, David M

    2005-01-01

    This study describes a technique for measuring the electron density of breast tissue utilizing Compton scattered photons. The Kα2 line from a tungsten target industrial x-ray tube (57.97 keV) was used and the scattered x-rays collected at an angle of 30°. At this angle the Compton and coherent photon peaks can be resolved using an energy dispersive detector and a peak fitting algorithm. The system was calibrated using solutions of known electron density. The results obtained from a pilot study of 22 tissues are presented. The tissue samples investigated comprise four different tissue classifications: adipose, malignancy, fibroadenoma and fibrocystic change (FCC). It is shown that there is a difference between adipose and malignant tissue, to a value of 9.0%, and between adipose and FCC, to a value of 12.7%. These figures are found to be significant by statistical analysis. The differences between adipose and fibroadenoma tissues (2.2%) and between malignancy and FCC (3.4%) are not significant. It is hypothesized that the alteration in glucose uptake within malignant cells may cause these tissues to have an elevated electron density. The fibrotic nature of tissue that has undergone FCC gives the highest measure of all tissue types.

  10. Spiral wave classification using normalized compression distance: Towards atrial tissue spatiotemporal electrophysiological behavior characterization.

    Science.gov (United States)

    Alagoz, Celal; Guez, Allon; Cohen, Andrew; Bullinga, John R

    2015-08-01

    Analysis of electrical activation patterns such as re-entries during atrial fibrillation (Afib) is crucial in understanding arrhythmic mechanisms and in the assessment of diagnostic measures. Spiral waves are a phenomenon that provides an intuitive basis for re-entries occurring in cardiac tissue. Distinct spiral wave behaviors such as stable spiral waves, meandering spiral waves, and spiral wave break-up may have distinct electrogram manifestations on a mapping catheter. Hence, it is desirable to have an automated classification of spiral wave behavior based on catheter recordings for a qualitative characterization of spatiotemporal electrophysiological activity on atrial tissue. In this study, we propose a method for classification of spatiotemporal characteristics of simulated atrial activation patterns in terms of distinct spiral wave behaviors during Afib using two different techniques: normalized compression distance (NCD) and normalized FFT distance (NFFTD). We use a phenomenological model for cardiac electrical propagation to produce various simulated spiral wave behaviors on a 2D grid, labeled as stable, meandering, or breakup. By mimicking commonly used catheter types (star shaped and circular, both of which take local readings from the atrial wall), monopolar and bipolar intracardiac electrograms are simulated. Virtual catheters are positioned at different locations on the grid. The classification performance for different catheter locations, types, and for monopolar or bipolar readings was also compared. We observed that the performance for each case differed slightly. However, we found that NCD performance is superior to NFFTD. Through the simulation study, we showed the theoretical validation of the proposed method. Our findings suggest that a qualitative wavefront activation pattern can be assessed during Afib without the need for highly invasive mapping techniques such as multisite simultaneous electrogram recordings.
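
The normalized compression distance used above can be sketched in a few lines with a general-purpose compressor. The use of zlib and the toy byte sequences below are assumptions, since the abstract does not specify the compressor or the electrogram encoding.

```python
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
# where C(.) is the compressed length under some compressor.
import random
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte sequences."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

random.seed(0)
periodic_a = b"\x01\x02\x03\x04" * 100   # toy "stable" activation sequence
periodic_b = b"\x01\x02\x03\x05" * 100   # a similar periodic sequence
noisy = bytes(random.randrange(256) for _ in range(400))  # "breakup"-like noise

print(ncd(periodic_a, periodic_b), ncd(periodic_a, noisy))
```

Similar sequences compress well together, giving a small NCD; a periodic signal and random noise share no structure, so their NCD is close to 1.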

  11. Tissue classification and diagnostics using a fiber probe for combined Raman and fluorescence spectroscopy

    Science.gov (United States)

    Cicchi, Riccardo; Anand, Suresh; Crisci, Alfonso; Giordano, Flavio; Rossari, Susanna; De Giorgi, Vincenzo; Maio, Vincenza; Massi, Daniela; Nesi, Gabriella; Buccoliero, Anna Maria; Guerrini, Renzo; Pimpinelli, Nicola; Pavone, Francesco S.

    2015-07-01

    Two different optical fiber probes for combined Raman and fluorescence spectroscopic measurements were designed, developed and used for tissue diagnostics. Two visible laser diodes were used for fluorescence spectroscopy, whereas a laser diode emitting in the NIR was used for Raman spectroscopy. The two probes were based on fiber bundles with a central multimode optical fiber, used for delivering light to the tissue, and 24 surrounding optical fibers for signal collection. Both fluorescence and Raman spectra were acquired using the same detection unit, based on a cooled CCD camera connected to a spectrograph. The two probes were successfully employed for diagnostic purposes on various tissues, in good agreement with common routine histology. This study included skin, brain and bladder tissues, and in particular the classification of: malignant melanoma against melanocytic lesions and healthy skin; urothelial carcinoma against healthy bladder mucosa; brain tumor against dysplastic brain tissue. The diagnostic capabilities were determined using a cross-validation method with a leave-one-out approach, finding very high sensitivity and specificity for all the examined tissues. The obtained results demonstrate that the multimodal approach is crucial for improving diagnostic capabilities. The system presented here can improve diagnostic capabilities on a broad range of tissues and has the potential to be used for endoscopic inspections in the near future.

  12. Spatial cluster analysis of nanoscopically mapped serotonin receptors for classification of fixed brain tissue

    Science.gov (United States)

    Sams, Michael; Silye, Rene; Göhring, Janett; Muresan, Leila; Schilcher, Kurt; Jacak, Jaroslaw

    2014-01-01

    We present a cluster spatial analysis method using nanoscopic dSTORM images to determine changes in protein cluster distributions within brain tissue. Such methods are suitable to investigate human brain tissue and will help to achieve a deeper understanding of brain disease along with aiding drug development. Human brain tissue samples are usually treated postmortem via standard fixation protocols, which are established in clinical laboratories. Therefore, our localization microscopy-based method was adapted to characterize protein density and protein cluster localization in samples fixed using different protocols followed by common fluorescent immunohistochemistry techniques. The localization microscopy allows nanoscopic mapping of serotonin 5-HT1A receptor groups within a two-dimensional image of a brain tissue slice. These nanoscopically mapped proteins can be confined to clusters by applying the proposed statistical spatial analysis. Selected features of such clusters were subsequently used to characterize and classify the tissue. Samples were obtained from different types of patients, fixed with different preparation methods, and finally stored in a human tissue bank. To verify the proposed method, samples of a cryopreserved healthy brain have been compared with epitope-retrieved and paraffin-fixed tissues. Furthermore, samples of healthy brain tissues were compared with data obtained from patients suffering from mental illnesses (e.g., major depressive disorder). Our work demonstrates the applicability of localization microscopy and image analysis methods for comparison and classification of human brain tissues at a nanoscopic level. Furthermore, the presented workflow marks a unique technological advance in the characterization of protein distributions in brain tissue sections.
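
The cluster-confinement step above can be sketched with a density-based clusterer. DBSCAN and the synthetic localization coordinates below are stand-ins for the paper's statistical spatial analysis, which the abstract does not specify in detail.

```python
# Confine nanoscopically mapped localizations (x, y points) to clusters and
# extract simple cluster features. Synthetic coordinates are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.RandomState(2)
# Three tight "receptor clusters" plus sparse background localizations
centers = np.array([[1.0, 1.0], [5.0, 5.0], [8.0, 2.0]])
clustered = np.vstack([c + 0.05 * rng.randn(50, 2) for c in centers])
background = rng.uniform(0, 10, size=(30, 2))
points = np.vstack([clustered, background])

# Label -1 marks unclustered background localizations
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(points)
n_clusters = len(set(labels) - {-1})
cluster_sizes = [int((labels == k).sum()) for k in range(n_clusters)]
print(n_clusters, cluster_sizes)
```

Features such as the number of clusters and their sizes are the kind of per-sample summary that could then feed a tissue classifier.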

  13. Bread crumb classification using fractal and multifractal features

    OpenAIRE

    Baravalle, Rodrigo Guillermo; Delrieux, Claudio Augusto; Gómez, Juan Carlos

    2017-01-01

    Adequate image descriptors are fundamental in image classification and object recognition. Main requirements for image features are robustness and low dimensionality which would lead to low classification errors in a variety of situations and with a reasonable computational cost. In this context, the identification of materials poses a significant challenge, since typical (geometric and/or differential) feature extraction methods are not robust enough. Texture features based on Fourier or wav...

  14. Automatic classification of prostate stromal tissue in histological images using Haralick descriptors and Local Binary Patterns

    International Nuclear Information System (INIS)

    Oliveira, D L L; Batista, V R; Duarte, Y A S; Nascimento, M Z; Neves, L A; Godoy, M F; Jacomini, R S; Arruda, P F F; Neto, D S

    2014-01-01

    In this paper we present a classification system that uses a combination of texture features from stromal regions: Haralick features and Local Binary Patterns (LBP) in the wavelet domain. The system has five steps for classification of the tissues. First, the stromal regions were detected and extracted using segmentation techniques based on thresholding and the RGB colour space. Second, Wavelet decomposition was applied to the extracted regions to obtain the Wavelet coefficients. Third, the Haralick and LBP features were extracted from the coefficients. Fourth, relevant features were selected using the ANOVA statistical method. The classification (fifth step) was performed with Radial Basis Function (RBF) networks. The system was tested on 105 prostate images, which were divided into three groups of 35 images: normal, hyperplastic and cancerous. The system performance was evaluated using the area under the ROC curve and resulted in 0.98 for normal versus cancer, 0.95 for hyperplasia versus cancer and 0.96 for normal versus hyperplasia. Our results suggest that texture features can be used as discriminators for stromal tissues in prostate images. Furthermore, the system was effective in classifying prostate images, especially the hyperplastic class, which is the most difficult type in diagnosis and prognosis.
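
The Haralick part of the feature set can be sketched with a plain NumPy co-occurrence matrix. The toy 4-level textures below are assumptions, and the LBP and wavelet-domain steps of the paper's pipeline are omitted.

```python
# Gray-level co-occurrence matrix for horizontally adjacent pixels, plus the
# Haralick contrast feature, on two toy textures.
import numpy as np

def glcm(image, levels):
    """Normalized co-occurrence matrix for the pixel offset (0, 1)."""
    m = np.zeros((levels, levels))
    for a, b in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def haralick_contrast(p):
    """Contrast: weights co-occurrences by the squared gray-level gap."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

rng = np.random.RandomState(3)
smooth = np.repeat(rng.randint(0, 4, size=(16, 1)), 16, axis=1)  # constant rows
rough = rng.randint(0, 4, size=(16, 16))                         # random texture

c_smooth = haralick_contrast(glcm(smooth, 4))
c_rough = haralick_contrast(glcm(rough, 4))
print(c_smooth, c_rough)
```

Horizontal neighbors in the row-constant texture always match, so its contrast is zero, while the random texture scores high; features like this are what separate tissue types.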

  15. Classification across gene expression microarray studies

    Directory of Open Access Journals (Sweden)

    Kuner Ruprecht

    2009-12-01

    Background: The increasing number of gene expression microarray studies represents an important resource in biomedical research. As a result, gene expression based diagnosis has entered clinical practice for patient stratification in breast cancer. However, the integration and combined analysis of microarray studies still remains a challenge. We assessed the potential benefit of data integration on the classification accuracy and systematically evaluated the generalization performance of selected methods on four breast cancer studies comprising almost 1000 independent samples. To this end, we introduced an evaluation framework which aims to establish good statistical practice and a graphical way to monitor differences. The classification goal was to correctly predict estrogen receptor status (negative/positive) and histological grade (low/high) of each tumor sample in an independent study which was not used for the training. For the classification we chose support vector machines (SVM), predictive analysis of microarrays (PAM), random forest (RF) and k-top scoring pairs (kTSP). Guided by considerations relevant for classification across studies, we developed a generalization of kTSP which we evaluated in addition. Our derived version (DV) aims to improve the robustness of the intrinsic invariance of kTSP with respect to technologies and preprocessing. Results: For each individual study the generalization error was benchmarked via complete cross-validation and was found to be similar for all classification methods. The misclassification rates were substantially higher in classification across studies, when each single study was used as an independent test set while all remaining studies were combined for the training of the classifier. However, with an increasing number of independent microarray studies used in the training, the overall classification performance improved. DV performed better than the average and showed slightly less variance. In
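
The top-scoring-pair idea behind kTSP, whose rank-based rule is what makes it invariant to per-study monotone preprocessing, can be sketched on synthetic expression data; all values and the single-pair simplification below are assumptions.

```python
# A single top-scoring pair: classify by whether gene i is expressed above
# gene j, the ordering chosen to differ most between the classes.
import numpy as np

rng = np.random.RandomState(4)
n = 100
# Class 0: gene 0 tends to exceed gene 1; class 1 reverses them; gene 2 is noise
class0 = rng.normal([2.0, 1.0, 0.0], 0.5, size=(n, 3))
class1 = rng.normal([1.0, 2.0, 0.0], 0.5, size=(n, 3))
X = np.vstack([class0, class1])
y = np.array([0] * n + [1] * n)

def pair_score(i, j):
    """How differently the ordering X_i > X_j behaves across the two classes."""
    p0 = (X[y == 0, i] > X[y == 0, j]).mean()
    p1 = (X[y == 1, i] > X[y == 1, j]).mean()
    return abs(p0 - p1)

pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]
i, j = max(pairs, key=lambda p: pair_score(*p))

# Orient the rule: if X_i > X_j is the majority pattern in class 0, vote 0
majority0 = (X[y == 0, i] > X[y == 0, j]).mean() > 0.5
pred = np.where((X[:, i] > X[:, j]) == majority0, 0, 1)
acc = (pred == y).mean()
print((i, j), acc)
```

Because only the within-sample ordering of two genes matters, any per-study rescaling that preserves ranks leaves the rule unchanged, which is the cross-study robustness the abstract refers to.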

  16. How to Reduce Dimensionality of Data: Robustness Point of View

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan; Rensová, D.

    2015-01-01

    Roč. 10, č. 1 (2015), s. 131-140 ISSN 1452-4864 R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : data analysis * dimensionality reduction * robust statistics * principal component analysis * robust classification analysis Subject RIV: BB - Applied Statistics, Operational Research

  17. Breast tissue classification in digital breast tomosynthesis images using texture features: a feasibility study

    Science.gov (United States)

    Kontos, Despina; Berger, Rachelle; Bakic, Predrag R.; Maidment, Andrew D. A.

    2009-02-01

    Mammographic breast density is a known breast cancer risk factor. Studies have shown the potential to automate breast density estimation by using computerized texture-based segmentation of the dense tissue in mammograms. Digital breast tomosynthesis (DBT) is a tomographic x-ray breast imaging modality that could allow volumetric breast density estimation. We evaluated the feasibility of distinguishing between dense and fatty breast regions in DBT using computer-extracted texture features. Our long-term hypothesis is that DBT texture analysis can be used to develop 3D dense tissue segmentation algorithms for estimating volumetric breast density. DBT images from 40 women were analyzed. The dense tissue area was delineated within each central source projection (CSP) image using a thresholding technique (Cumulus, Univ. Toronto). Two (2.5 cm)² ROIs were manually selected: one within the dense tissue region and another within the fatty region. Corresponding (2.5 cm)³ ROIs were placed within the reconstructed DBT images. Texture features, previously used for mammographic dense tissue segmentation, were computed. Receiver operating characteristic (ROC) curve analysis was performed to evaluate feature classification performance. Different texture features appeared to perform best in the 3D reconstructed DBT compared to the 2D CSP images. Fractal dimension was superior in DBT (AUC=0.90), while contrast was best in CSP images (AUC=0.92). We attribute these differences to the effects of tissue superimposition in CSP and the volumetric visualization of the breast tissue in DBT. Our results suggest that novel approaches, different than those conventionally used in projection mammography, need to be investigated in order to develop DBT dense tissue segmentation algorithms for estimating volumetric breast density.
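
The fractal-dimension feature mentioned above is commonly estimated by box counting; here is a minimal sketch on a synthetic binary texture (an assumption, not the DBT data).

```python
# Box-counting estimate of fractal dimension: cover the image with boxes of
# decreasing size and read the dimension off a log-log fit.
import numpy as np

def box_count(img, size):
    """Number of size x size boxes containing at least one foreground pixel."""
    h, w = img.shape
    count = 0
    for r in range(0, h, size):
        for c in range(0, w, size):
            if img[r:r + size, c:c + size].any():
                count += 1
    return count

rng = np.random.RandomState(5)
texture = rng.rand(64, 64) > 0.5       # synthetic binary "dense tissue" mask

sizes = np.array([1, 2, 4, 8, 16])
counts = np.array([box_count(texture, s) for s in sizes])
# The slope of log(count) against log(1/size) estimates the fractal dimension
dim = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]
print(f"estimated dimension: {dim:.2f}")
```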

  18. Tissue classifications in Monte Carlo simulations of patient dose for photon beam tumor treatments

    Science.gov (United States)

    Lin, Mu-Han; Chao, Tsi-Chian; Lee, Chung-Chi; Tung-Chieh Chang, Joseph; Tung, Chuan-Jong

    2010-07-01

    The purpose of this work was to study the calculated dose uncertainties induced by the material classification that determined the interaction cross-sections and the water-to-material stopping-power ratios. Calculations were made for a head- and neck-cancer patient treated with five intensity-modulated radiotherapy fields using 6 MV photon beams. The patient's CT images were reconstructed into two voxelized patient phantoms based on different CT-to-material classification schemes. Comparisons of the depth-dose curve of the anterior-to-posterior field and the dose-volume-histogram of the treatment plan were used to evaluate the dose uncertainties from such schemes. The results indicated that any misassignment of tissue materials could lead to a substantial dose difference, which would affect the treatment outcome. To assure an appropriate material assignment, it is desirable to have different conversion tables for various parts of the body. The assignment of stopping-power ratio should be based on the chemical composition and the density of the material.
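
A CT-number-to-material conversion table of the kind discussed above can be sketched as a simple threshold lookup; the HU bounds and densities below are illustrative, not the paper's scheme.

```python
# Hounsfield-unit lookup: each voxel's CT number selects a material class and
# mass density. All thresholds and densities are hypothetical placeholders.
import bisect

# (upper HU bound, material, density in g/cm^3) -- illustrative scheme
SCHEME = [
    (-950, "air", 0.001),
    (-100, "lung", 0.26),
    (20, "adipose", 0.95),
    (100, "soft tissue", 1.05),
    (3000, "bone", 1.60),
]
BOUNDS = [b for b, _, _ in SCHEME]

def classify(hu: float):
    """Map a Hounsfield value to its (material, density) assignment."""
    idx = min(bisect.bisect_left(BOUNDS, hu), len(SCHEME) - 1)
    _, material, density = SCHEME[idx]
    return material, density

print(classify(-980), classify(40), classify(700))
```

A misplaced threshold in such a table is exactly the kind of material misassignment the abstract warns can shift the computed dose.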

  19. Tissue classifications in Monte Carlo simulations of patient dose for photon beam tumor treatments

    International Nuclear Information System (INIS)

    Lin, Mu-Han; Chao, Tsi-Chian; Lee, Chung-Chi; Tung-Chieh Chang, Joseph; Tung, Chuan-Jong

    2010-01-01

    The purpose of this work was to study the calculated dose uncertainties induced by the material classification that determined the interaction cross-sections and the water-to-material stopping-power ratios. Calculations were made for a head- and neck-cancer patient treated with five intensity-modulated radiotherapy fields using 6 MV photon beams. The patient's CT images were reconstructed into two voxelized patient phantoms based on different CT-to-material classification schemes. Comparisons of the depth-dose curve of the anterior-to-posterior field and the dose-volume-histogram of the treatment plan were used to evaluate the dose uncertainties from such schemes. The results indicated that any misassignment of tissue materials could lead to a substantial dose difference, which would affect the treatment outcome. To assure an appropriate material assignment, it is desirable to have different conversion tables for various parts of the body. The assignment of stopping-power ratio should be based on the chemical composition and the density of the material.

  20. Robust classification using mixtures of dependency networks

    DEFF Research Database (Denmark)

    Gámez, José A.; Mateo, Juan L.; Nielsen, Thomas Dyhre

    2008-01-01

    Dependency networks have previously been proposed as alternatives to e.g. Bayesian networks by supporting fast algorithms for automatic learning. Recently dependency networks have also been proposed as classification models, but as with e.g. general probabilistic inference, the reported speed-ups are often obtained at the expense of accuracy. In this paper we try to address this issue through the use of mixtures of dependency networks. To reduce learning time and improve robustness when dealing with data sparse classes, we outline methods for reusing calculations across mixture components. Finally...

  1. Highly Robust Statistical Methods in Medical Image Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 32, č. 2 (2012), s. 3-16 ISSN 0208-5216 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robust statistics * classification * faces * robust image analysis * forensic science Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.208, year: 2012 http://www.ibib.waw.pl/bbe/bbefulltext/BBE_32_2_003_FT.pdf

  2. Robust classification of traffic signs using multi-view cues

    NARCIS (Netherlands)

    Hazelhoff, L.; Creusen, I.M.; With, de P.H.N.

    2012-01-01

    Traffic sign inventories are created for road safety and maintenance based on street-level panoramic images. Due to the large capturing interval, large viewpoint deviations between the different capturings occur. These viewpoint variations complicate the classification procedure, which aims at the

  3. Multiplex coherent anti-Stokes Raman scattering microspectroscopy of brain tissue with higher ranking data classification for biomedical imaging

    Science.gov (United States)

    Pohling, Christoph; Bocklitz, Thomas; Duarte, Alex S.; Emmanuello, Cinzia; Ishikawa, Mariana S.; Dietzeck, Benjamin; Buckup, Tiago; Uckermann, Ortrud; Schackert, Gabriele; Kirsch, Matthias; Schmitt, Michael; Popp, Jürgen; Motzkus, Marcus

    2017-06-01

    Multiplex coherent anti-Stokes Raman scattering (MCARS) microscopy was carried out to map a solid tumor in mouse brain tissue. The border between normal and tumor tissue was visualized using support vector machines (SVM) as a higher ranking type of data classification. Training data were collected separately in both tissue types, and the image contrast is based on class affiliation of the single spectra. Color coding in the image generated by SVM is then related to pathological information instead of single spectral intensities or spectral differences within the data set. The results show good agreement with the H&E stained reference and spontaneous Raman microscopy, proving the validity of the MCARS approach in combination with SVM.

  4. Facial Symmetry in Robust Anthropometrics

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 57, č. 3 (2012), s. 691-698 ISSN 0022-1198 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : forensic science * anthropology * robust image analysis * correlation analysis * multivariate data * classification Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.244, year: 2012

  5. Gynecomastia Classification for Surgical Management: A Systematic Review and Novel Classification System.

    Science.gov (United States)

    Waltho, Daniel; Hatchell, Alexandra; Thoma, Achilleas

    2017-03-01

    Gynecomastia is a common deformity of the male breast, where certain cases warrant surgical management. There are several surgical options, which vary depending on the breast characteristics. To guide surgical management, several classification systems for gynecomastia have been proposed. A systematic review was performed to (1) identify all classification systems for the surgical management of gynecomastia, and (2) determine the adequacy of these classification systems to appropriately categorize the condition for surgical decision-making. The search yielded 1012 articles, and 11 articles were included in the review. Eleven classification systems in total were ascertained, and a total of 10 unique features were identified: (1) breast size, (2) skin redundancy, (3) breast ptosis, (4) tissue predominance, (5) upper abdominal laxity, (6) breast tuberosity, (7) nipple malposition, (8) chest shape, (9) absence of sternal notch, and (10) breast skin elasticity. On average, classification systems included two or three of these features. Breast size and ptosis were the most commonly included features. Based on their review of the current classification systems, the authors believe the ideal classification system should be universal and cater to all causes of gynecomastia; be surgically useful and easy to use; and should include a comprehensive set of clinically appropriate patient-related features, such as breast size, breast ptosis, tissue predominance, and skin redundancy. None of the current classification systems appears to fulfill these criteria.

  6. Automated classification of immunostaining patterns in breast tissue from the human protein atlas.

    Science.gov (United States)

    Swamidoss, Issac Niwas; Kårsnäs, Andreas; Uhlmann, Virginie; Ponnusamy, Palanisamy; Kampf, Caroline; Simonsson, Martin; Wählby, Carolina; Strand, Robin

    2013-01-01

    The Human Protein Atlas (HPA) is an effort to map the location of all human proteins (http://www.proteinatlas.org/). It contains a large number of histological images of sections from human tissue. Tissue micro arrays (TMA) are imaged by a slide scanning microscope, and each image represents a thin slice of a tissue core with a dark brown antibody specific stain and a blue counter stain. When generating antibodies for protein profiling of the human proteome, an important step in the quality control is to compare staining patterns of different antibodies directed towards the same protein. This comparison is an ultimate control that the antibody recognizes the right protein. In this paper, we propose and evaluate different approaches for classifying sub-cellular antibody staining patterns in breast tissue samples. The proposed methods include the computation of various features including gray level co-occurrence matrix (GLCM) features, complex wavelet co-occurrence matrix (CWCM) features, and weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM)-inspired features. The extracted features are used as input to two different multivariate classifiers (support vector machine (SVM) and linear discriminant analysis (LDA) classifiers). Before extracting features, we use color deconvolution to separate different tissue components, such as the brownly stained positive regions and the blue cellular regions, in the immuno-stained TMA images of breast tissue. We present classification results based on combinations of feature measurements. The proposed complex wavelet features and the WND-CHARM features have accuracy similar to that of a human expert. Both human experts and the proposed automated methods have difficulties discriminating between nuclear and cytoplasmic staining patterns. This is to a large extent due to mixed staining of nucleus and cytoplasm. Methods for quantification of staining patterns in histopathology have many

  7. Automated classification of immunostaining patterns in breast tissue from the human protein Atlas

    Directory of Open Access Journals (Sweden)

    Issac Niwas Swamidoss

    2013-01-01

    Full Text Available Background: The Human Protein Atlas (HPA) is an effort to map the location of all human proteins (http://www.proteinatlas.org/). It contains a large number of histological images of sections from human tissue. Tissue microarrays (TMA) are imaged by a slide-scanning microscope, and each image represents a thin slice of a tissue core with a dark brown antibody-specific stain and a blue counterstain. When generating antibodies for protein profiling of the human proteome, an important step in the quality control is to compare the staining patterns of different antibodies directed towards the same protein. This comparison is an ultimate control that the antibody recognizes the right protein. In this paper, we propose and evaluate different approaches for classifying sub-cellular antibody staining patterns in breast tissue samples. Materials and Methods: The proposed methods include the computation of various features, including gray level co-occurrence matrix (GLCM) features, complex wavelet co-occurrence matrix (CWCM) features, and weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM)-inspired features. The extracted features are fed into two different multivariate classifiers (a support vector machine (SVM) and a linear discriminant analysis (LDA) classifier). Before extracting features, we use color deconvolution to separate different tissue components, such as the brown-stained positive regions and the blue cellular regions, in the immunostained TMA images of breast tissue. Results: We present classification results based on combinations of feature measurements. The proposed complex wavelet features and the WND-CHARM features have accuracy similar to that of a human expert. Conclusions: Both human experts and the proposed automated methods have difficulties discriminating between nuclear and cytoplasmic staining patterns. This is to a large extent due to mixed staining of nucleus and cytoplasm. Methods for
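
The GLCM texture features mentioned above can be sketched in a few lines. The following NumPy re-implementation (illustrative, not the authors' code) quantizes a grayscale image, accumulates a symmetric co-occurrence matrix for one pixel offset, and derives the standard Haralick contrast, homogeneity and energy measures:

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Symmetric, normalized gray-level co-occurrence matrix for one pixel offset.
    Assumes a non-negative image with img.max() > 0."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize to `levels` bins
    dr, dc = offset
    rows, cols = q.shape
    P = np.zeros((levels, levels))
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            P[q[r, c], q[r + dr, c + dc]] += 1
    P = P + P.T                      # make symmetric
    return P / P.sum()               # joint probabilities

def haralick_features(P):
    """Contrast, homogeneity and energy from a normalized GLCM."""
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    homogeneity = np.sum(P / (1.0 + (i - j) ** 2))
    energy = np.sum(P ** 2)
    return contrast, homogeneity, energy
```

In the pipeline described above, such features, computed per deconvolved stain channel, would then feed the SVM/LDA classifiers.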

  8. Subcutaneous adipose tissue classification

    Directory of Open Access Journals (Sweden)

    A. Sbarbati

    2010-11-01

    Full Text Available The developments in technologies based on the use of autologous adipose tissue have drawn attention to minor depots as possible sampling areas. Some of those depots have never been studied in detail. The present study was performed on subcutaneous adipose depots sampled in different areas with the aim of describing their morphology, particularly as regards stem niches. The results demonstrated that three different types of white adipose tissue (WAT) can be differentiated on the basis of structural and ultrastructural features: deposit WAT (dWAT), structural WAT (sWAT) and fibrous WAT (fWAT). dWAT is found essentially in large fatty depots in the abdominal area (periumbilical). In the dWAT, cells are tightly packed and linked by a weak net of isolated collagen fibers. Collagenous components are very poor, cells are large and few blood vessels are present. The deep portion appears more fibrous than the superficial one. The microcirculation is formed by thin-walled capillaries with rare stem niches. Reinforcement pericyte elements are rarely evident. The sWAT is more stromal; it is located in some areas of the limbs and in the hips. The stroma is fairly well represented, with good vascularity and adequate staminality. Cells are wrapped by a basket of collagen fibers. The fatty depots of the knees and of the trochanteric areas have quite loose meshes. The fWAT has a noteworthy fibrous component and is found in areas subject to severe mechanical stress. Adipocytes have an individual thick fibrous shell. In conclusion, the present study demonstrates evident differences among subcutaneous WAT depots, suggesting that in regenerative procedures based on autologous adipose tissue the sampling area should not be chosen randomly, but should be guided by evidence-based evaluations. The structural peculiarities of the sWAT, and particularly of its microcirculation, suggest that it could represent a privileged source for

  9. Fuzzy One-Class Classification Model Using Contamination Neighborhoods

    Directory of Open Access Journals (Sweden)

    Lev V. Utkin

    2012-01-01

    Full Text Available A fuzzy classification model is studied in the paper. It is based on the contaminated (robust) model, which produces fuzzy expected risk measures characterizing classification errors. Optimal classification parameters of the models are derived by minimizing the fuzzy expected risk. It is shown that an algorithm for computing the classification parameters is reduced to a set of standard support vector machine tasks with weighted data points. Experimental results with synthetic data illustrate the proposed fuzzy model.
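
The reduction to weighted SVM tasks can be illustrated with a toy linear SVM trained by subgradient descent on a per-sample weighted hinge loss. The sample weights stand in for the coefficients a contamination model would produce, and the hyperparameters (regularization, step size, epochs) are arbitrary choices for this sketch, not the paper's algorithm:

```python
import numpy as np

def weighted_linear_svm(X, y, sample_w, lam=0.01, lr=0.05, epochs=300):
    """Minimize lam*||w||^2/2 + mean_i c_i * max(0, 1 - y_i (w.x_i + b))
    by subgradient descent. Labels y in {-1, +1}; sample_w are per-point
    weights (e.g. derived from a contaminated/robust model)."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                       # points violating the margin
        gw = lam * w - (sample_w[active, None] * y[active, None] * X[active]).sum(0) / n
        gb = -(sample_w[active] * y[active]).sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b
```

Re-solving this weighted problem for each candidate weighting, and picking the one minimizing the (fuzzy) expected risk, mirrors the "set of standard SVM tasks" structure the abstract describes.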

  10. A Robust Method to Generate Mechanically Anisotropic Vascular Smooth Muscle Cell Sheets for Vascular Tissue Engineering.

    Science.gov (United States)

    Backman, Daniel E; LeSavage, Bauer L; Shah, Shivem B; Wong, Joyce Y

    2017-06-01

    In arterial tissue engineering, mimicking native structure and mechanical properties is essential because compliance mismatch can lead to graft failure and further disease. With bottom-up tissue engineering approaches, designing tissue components with proper microscale mechanical properties is crucial to achieve the necessary macroscale properties in the final implant. This study develops a thermoresponsive cell culture platform for growing aligned vascular smooth muscle cell (VSMC) sheets by photografting N-isopropylacrylamide (NIPAAm) onto micropatterned poly(dimethylsiloxane) (PDMS). The grafting process is experimentally and computationally optimized to produce PNIPAAm-PDMS substrates optimal for VSMC attachment. To allow long-term VSMC sheet culture and increase the rate of VSMC sheet formation, PNIPAAm-PDMS surfaces were further modified with 3-aminopropyltriethoxysilane, yielding a robust, thermoresponsive cell culture platform for culturing VSMC sheets. VSMC sheets cultured on patterned thermoresponsive substrates exhibit cellular and collagen alignment in the direction of the micropattern. Mechanical characterization of patterned, single-layer VSMC sheets reveals increased stiffness in the aligned direction compared to the perpendicular direction, whereas nonpatterned cell sheets exhibit no directional dependence. Structural and mechanical anisotropy of aligned, single-layer VSMC sheets makes this platform an attractive microstructural building block for engineering a vascular graft to match the in vivo mechanical properties of native arterial tissue. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. A Robust and Device-Free System for the Recognition and Classification of Elderly Activities.

    Science.gov (United States)

    Li, Fangmin; Al-Qaness, Mohammed Abdulaziz Aide; Zhang, Yong; Zhao, Bihai; Luan, Xidao

    2016-12-01

    Human activity recognition, tracking and classification is an essential trend in assisted living systems that can help support elderly people with their daily activities. Traditional activity recognition approaches depend on vision-based or sensor-based techniques. Nowadays, a novel promising technique has obtained more attention, namely device-free human activity recognition, which neither requires the target object to wear or carry a device nor cameras to be installed in the perceived area. The device-free technique for activity recognition uses only the signals of common wireless local area network (WLAN) devices available everywhere. In this paper, we present a novel elderly activity recognition system by leveraging the fluctuation of the wireless signals caused by human motion. We present an efficient method to select the correct data from the Channel State Information (CSI) streams that were neglected in previous approaches. We apply a Principal Component Analysis (PCA) method that exposes the useful information from raw CSI. Thereafter, Forest Decision (FD) is adopted to classify the proposed activities, achieving a high accuracy rate. Extensive experiments have been conducted in an indoor environment to test the feasibility of the proposed system with a total of five volunteer users. The evaluation shows that the proposed system is applicable and robust to electromagnetic noise.
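
The PCA step can be sketched as a rank-k projection of the CSI stream matrix via SVD. This is an illustrative implementation only; the data layout (rows = time samples, columns = subcarriers) is an assumption for the sketch:

```python
import numpy as np

def pca_denoise(X, k):
    """Project the rows of X onto the top-k principal components (via SVD)
    and reconstruct, keeping the dominant motion-induced variation.
    X: (time samples, subcarriers) CSI-amplitude-like matrix."""
    mu = X.mean(axis=0)
    Xc = X - mu                                     # center
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mu + (Xc @ Vt[:k].T) @ Vt[:k]            # rank-k reconstruction
```

Features extracted from the reconstructed (denoised) streams would then be passed to the classifier.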

  12. Automatic classification of tissue malignancy for breast carcinoma diagnosis.

    Science.gov (United States)

    Fondón, Irene; Sarmiento, Auxiliadora; García, Ana Isabel; Silvestre, María; Eloy, Catarina; Polónia, António; Aguiar, Paulo

    2018-05-01

    Breast cancer is the second leading cause of cancer death among women. Its early diagnosis is extremely important to prevent avoidable deaths. However, malignancy assessment of tissue biopsies is complex and dependent on observer subjectivity. Moreover, hematoxylin and eosin (H&E)-stained histological images exhibit a highly variable appearance, even within the same malignancy level. In this paper, we propose a computer-aided diagnosis (CAD) tool for automated malignancy assessment of breast tissue samples based on the processing of histological images. We provide four malignancy levels as the output of the system: normal, benign, in situ and invasive. The method is based on the calculation of three sets of features related to nuclei, colour regions and textures considering local characteristics and global image properties. By taking advantage of well-established image processing techniques, we build a feature vector for each image that serves as an input to an SVM (Support Vector Machine) classifier with a quadratic kernel. The method has been rigorously evaluated, first with a 5-fold cross-validation within an initial set of 120 images, second with an external set of 30 different images and third with images with artefacts included. Accuracy levels range from 75.8% when the 5-fold cross-validation was performed to 75% with the external set of new images and 61.11% when the extremely difficult images were added to the classification experiment. The experimental results indicate that the proposed method is capable of distinguishing between four malignancy levels with high accuracy. Our results are close to those obtained with recent deep learning-based methods. Moreover, it performs better than other state-of-the-art methods based on feature extraction, and it can help improve the CAD of breast cancer. Copyright © 2018 Elsevier Ltd. All rights reserved.
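
The quadratic kernel used by the SVM above is the standard degree-2 polynomial kernel. A minimal sketch follows; the offset c = 1 is an assumption for illustration, not a value stated in the abstract:

```python
import numpy as np

def quadratic_kernel(X, Z, c=1.0):
    """Degree-2 polynomial kernel K(x, z) = (x.z + c)^2 between the rows of X
    and the rows of Z, as used with a kernel SVM."""
    return (X @ Z.T + c) ** 2
```

Equivalently, this kernel corresponds to an explicit feature map containing all monomials of the inputs up to degree 2, which lets a linear decision boundary in feature space separate classes that are not linearly separable in the raw feature vector.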

  13. Lauren classification and individualized chemotherapy in gastric cancer

    OpenAIRE

    MA, JUNLI; SHEN, HONG; KAPESA, LINDA; ZENG, SHAN

    2016-01-01

    Gastric cancer is one of the most common malignancies worldwide. During the last 50 years, the histological classification of gastric carcinoma has been largely based on Lauren's criteria, in which gastric cancer is classified into two major histological subtypes, namely intestinal type and diffuse type adenocarcinoma. This classification was introduced in 1965, and remains currently widely accepted and employed, since it constitutes a simple and robust classification approach. The two histol...

  14. Robust path planning for flexible needle insertion using Markov decision processes.

    Science.gov (United States)

    Tan, Xiaoyu; Yu, Pengqian; Lim, Kah-Bin; Chui, Chee-Kong

    2018-05-11

    The flexible needle has the potential to accurately navigate to a treatment region in the least invasive manner. We propose a new planning method using Markov decision processes (MDPs) for flexible needle navigation that can perform robust path planning and steering under the circumstance of complex tissue-needle interactions. This method enhances the robustness of flexible needle steering from three different perspectives. First, the method considers the problem caused by soft tissue deformation. The method then resolves the common needle penetration failure caused by patterns of targets, while the last solution addresses the uncertainty issues in flexible needle motion due to complex and unpredictable tissue-needle interaction. Computer simulation and phantom experimental results show that the proposed method can perform robust planning and generate a secure control policy for flexible needle steering. Compared with a traditional method using MDPs, the proposed method achieves higher accuracy and probability of success in avoiding obstacles under complicated and uncertain tissue-needle interactions. Future work will involve experiments with biological tissue in vivo. The proposed robust path planning method can securely steer a flexible needle within soft phantom tissues and achieve high adaptability in computer simulation.
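
The MDP machinery behind such a planner can be illustrated with value iteration on a toy grid in which the needle advances one column per step and the intended steering action succeeds only with some probability, a crude stand-in for uncertain tissue-needle interaction. The grid, rewards and transition model below are all invented for the sketch and are not the paper's model:

```python
import numpy as np

# Toy world: the needle moves right one column per step and chooses to steer
# up, straight, or down. Obstacles carry a large penalty, the target a reward.
ROWS, COLS = 5, 6
OBSTACLES = {(2, 2), (2, 3)}
TARGET = (2, 5)

def value_iteration(gamma=0.95, iters=200, p_success=0.8):
    """Value iteration; the intended row change succeeds with p_success,
    otherwise the needle fails to steer and keeps its row."""
    V = np.zeros((ROWS, COLS))
    policy = np.zeros((ROWS, COLS), dtype=int)     # 0=up, 1=straight, 2=down
    actions = [-1, 0, 1]
    for _ in range(iters):
        newV = np.zeros_like(V)
        for r in range(ROWS):
            for c in range(COLS):
                if c == COLS - 1:                  # last column is terminal
                    continue
                best = -np.inf
                for ai, a in enumerate(actions):
                    q = 0.0
                    for dr, p in ((a, p_success), (0, 1 - p_success)):
                        nr = min(max(r + dr, 0), ROWS - 1)
                        nxt = (nr, c + 1)
                        rew = 10.0 if nxt == TARGET else (-100.0 if nxt in OBSTACLES else -1.0)
                        q += p * (rew + gamma * V[nxt])
                    if q > best:
                        best, policy[r, c] = q, ai
                newV[r, c] = best
        V = newV
    return V, policy
```

The converged policy steers around the obstacle cells even though each steering command only succeeds stochastically, which is the essence of planning robustly under motion uncertainty.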

  15. Wagner classification and culture analysis of diabetic foot infection

    Directory of Open Access Journals (Sweden)

    Fatma Bozkurt

    2011-03-01

    Full Text Available The aim of this study was to determine the concordance ratio between microorganisms isolated from deep tissue cultures and those from superficial cultures in patients with diabetic foot, according to Wagner's wound classification method. Materials and methods: A total of 63 patients with diabetic foot infection, who were admitted to Dicle University Hospital between October 2006 and November 2007, were included in the study. Wagner's classification method was used for wound classification. For microbiologic studies, superficial and deep tissue specimens were obtained from each patient and rapidly sent to the laboratory for aerobic and anaerobic cultures. Microbiologic data were analyzed and interpreted using the sensitivity and specificity formulas. Results: Thirty-eight (60%) of the patients were Wagner classification ≤2, while 25 (40%) patients were Wagner classification ≥3. According to our culture results, 66 (69%) Gram-positive and 30 (31%) Gram-negative microorganisms grew in Wagner classification ≤2 patients, while in Wagner classification ≥3 patients, 25 (35%) Gram-positive and 46 (65%) Gram-negative microorganisms grew. Microorganisms grew in 89% of superficial cultures and 64% of deep tissue cultures in patients with Wagner classification ≤2, while microorganisms grew in 64% of cultures in patients with Wagner classification ≥3. Conclusion: In ulcers of diabetic foot infections, initial treatment should be started according to the result of a sterile superficial culture, but a deep tissue culture should be taken if the patient is unresponsive to initial treatment.
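
The sensitivity and specificity referred to above are the standard 2×2 confusion-table formulas. A minimal helper follows; the counts used in the usage example are invented for illustration and are not the study's data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    Here a 'positive' would be growth of a given organism in the deep tissue
    culture, with the superficial culture treated as the test under evaluation."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```

For example, 45 true positives with 5 false negatives and 30 true negatives with 10 false positives give a sensitivity of 0.90 and a specificity of 0.75.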

  16. Robust pattern decoding in shape-coded structured light

    Science.gov (United States)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometrical shapes. Our decoding method makes advancements at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, i.e., the intersections of pairs of orthogonal grid lines. Second, pattern element identification is modelled as a supervised classification problem and the deep neural network technique is applied for the accurate classification of pattern elements; beforehand, a training dataset is established that contains a large set of pattern elements with various blurrings and distortions. Third, an error correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy, but also exhibits strong robustness to surface color and complex textures.

  17. Classification of Laser Induced Fluorescence Spectra from Normal and Malignant bladder tissues using Learning Vector Quantization Neural Network in Bladder Cancer Diagnosis

    DEFF Research Database (Denmark)

    Karemore, Gopal Raghunath; Mascarenhas, Kim Komal; Patil, Choudhary

    2008-01-01

    In the present work we discuss the potential of the recently developed classification algorithm, Learning Vector Quantization (LVQ), for the analysis of Laser Induced Fluorescence (LIF) spectra recorded from normal and malignant bladder tissue samples. The algorithm is prototype based and inherently

  18. Gender classification under extended operating conditions

    Science.gov (United States)

    Rude, Howard N.; Rizki, Mateen

    2014-06-01

    Gender classification is a critical component of a robust image security system. Many techniques exist to perform gender classification using facial features. In contrast, this paper explores gender classification using body features extracted from clothed subjects. Several of the most effective types of features for gender classification identified in literature were implemented and applied to the newly developed Seasonal Weather And Gender (SWAG) dataset. SWAG contains video clips of approximately 2000 samples of human subjects captured over a period of several months. The subjects are wearing casual business attire and outer garments appropriate for the specific weather conditions observed in the Midwest. The results from a series of experiments are presented that compare the classification accuracy of systems that incorporate various types and combinations of features applied to multiple looks at subjects at different image resolutions to determine a baseline performance for gender classification.

  19. Deep learning for image classification

    Science.gov (United States)

    McCoppin, Ryan; Rizki, Mateen

    2014-06-01

    This paper provides an overview of deep learning and introduces several of its subfields, including a specific tutorial on convolutional neural networks. Traditional methods for learning image features are compared to deep learning techniques. In addition, we present our preliminary classification results and our basic implementation of a convolutional restricted Boltzmann machine on the Mixed National Institute of Standards and Technology (MNIST) database, and we explain how deep learning networks can assist in the development of a robust gender classification system.

  20. Three-dimensional analysis and classification of arteries in the skin and subcutaneous adipofascial tissue by computer graphics imaging.

    Science.gov (United States)

    Nakajima, H; Minabe, T; Imanishi, N

    1998-09-01

    To develop new types of surgical flaps that utilize portions of the skin and subcutaneous tissue (e.g., a thin flap or an adipofascial flap), three-dimensional investigation of the vasculature in the skin and subcutaneous tissue has been anticipated. In the present study, total-body arterial injection and three-dimensional imaging of the arteries by computer graphics were performed. The full-thickness skin and subcutaneous adipofascial tissue samples, which were obtained from fresh human cadavers injected with radio-opaque medium, were divided into three distinct layers. Angiograms of each layer were introduced into a personal computer to construct three-dimensional images. On a computer monitor, each artery was shown color-coded according to the three portions: the deep adipofascial layer, superficial adipofascial layer, and dermis. Three-dimensional computerized images of each artery in the skin and subcutaneous tissue revealed the components of each vascular plexus and permitted their classification into six types. The distribution of types in the body correlated with the tissue mobility of each area. Clinically, appreciation of the three-dimensional structure of the arteries allowed the development of several new kinds of flaps.

  1. A statistically harmonized alignment-classification in image space enables accurate and robust alignment of noisy images in single particle analysis.

    Science.gov (United States)

    Kawata, Masaaki; Sato, Chikara

    2007-06-01

    In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images derived from a huge number of raw images is key for high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. This newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 channel and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.
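
The alignment step underlying MRA can be sketched as locating the peak of the FFT-based cross-correlation between a raw image and a reference; in MRMA, several such candidate peaks per image would then be compared statistically and outliers excluded. A minimal NumPy version for integer translational shifts (rotation search omitted) might look like:

```python
import numpy as np

def align_shift(raw, ref):
    """Estimate the integer 2-D shift d such that raw ~= np.roll(ref, d),
    from the peak of the circular cross-correlation computed via FFT."""
    cc = np.fft.ifft2(np.fft.fft2(raw) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # map wrapped peak indices to signed shifts
    return tuple(int(p) if p <= s // 2 else int(p - s) for p, s in zip(peak, cc.shape))
```

In a full MRA loop, this would run once per reference image, and the distribution of the resulting candidate shifts across references is what the abstract's density-based rejection operates on.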

  2. A simple and robust method for automated photometric classification of supernovae using neural networks

    Science.gov (United States)

    Karpenka, N. V.; Feroz, F.; Hobson, M. P.

    2013-02-01

    A method is presented for automated photometric classification of supernovae (SNe) as Type Ia or non-Ia. A two-step approach is adopted in which (i) the SN light curve flux measurements in each observing filter are fitted separately to an analytical parametrized function that is sufficiently flexible to accommodate virtually all types of SNe and (ii) the fitted function parameters and their associated uncertainties, along with the number of flux measurements, the maximum-likelihood value of the fit and Bayesian evidence for the model, are used as the input feature vector to a classification neural network that outputs the probability that the SN under consideration is of Type Ia. The method is trained and tested using data released following the Supernova Photometric Classification Challenge (SNPCC), consisting of light curves for 20 895 SNe in total. We consider several random divisions of the data into training and testing sets: for instance, for our sample D_1 (D_4), a total of 10 (40) per cent of the data are involved in training the algorithm and the remainder used for blind testing of the resulting classifier; we make no selection cuts. Assigning a canonical threshold probability of pth = 0.5 on the network output to class an SN as Type Ia, for the sample D_1 (D_4) we obtain a completeness of 0.78 (0.82), purity of 0.77 (0.82) and SNPCC figure of merit of 0.41 (0.50). Including the SN host-galaxy redshift and its uncertainty as additional inputs to the classification network results in a modest 5-10 per cent increase in these values. We find that the quality of the classification does not vary significantly with SN redshift. Moreover, our probabilistic classification method allows one to calculate the expected completeness, purity and figure of merit (or other measures of classification quality) as a function of the threshold probability pth, without knowing the true classes of the SNe in the testing sample, as is the case in the classification of real SNe
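
The completeness, purity and SNPCC figure of merit quoted above follow from simple classification counts; because the classifier is probabilistic, they can be computed as functions of the threshold p_th. The sketch below uses the challenge's usual false-positive penalty W = 3 (treat that constant as an assumption if your variant of the metric differs):

```python
def snpcc_metrics(n_true_ia, n_false_ia, n_total_ia, w_false=3.0):
    """Completeness, purity and SNPCC figure of merit from counts.
    n_true_ia:  true Type Ia classified as Ia at the chosen threshold
    n_false_ia: non-Ia classified as Ia (false positives)
    n_total_ia: all true Type Ia in the test set"""
    completeness = n_true_ia / n_total_ia
    purity = n_true_ia / (n_true_ia + n_false_ia)
    fom = n_true_ia ** 2 / (n_total_ia * (n_true_ia + w_false * n_false_ia))
    return completeness, purity, fom
```

Sweeping the threshold p_th and recomputing these three numbers at each value reproduces the threshold-dependence analysis the abstract describes.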

  3. Automatic multi-modal MR tissue classification for the assessment of response to bevacizumab in patients with glioblastoma

    International Nuclear Information System (INIS)

    Liberman, Gilad; Louzoun, Yoram; Aizenstein, Orna; Blumenthal, Deborah T.; Bokstein, Felix; Palmon, Mika; Corn, Benjamin W.; Ben Bashat, Dafna

    2013-01-01

    Background: Current methods for evaluation of treatment response in glioblastoma are inaccurate, limited and time-consuming. This study aimed to develop a multi-modal MRI automatic classification method to improve accuracy and efficiency of treatment response assessment in patients with recurrent glioblastoma (GB). Materials and methods: A modification of the k-Nearest-Neighbors (kNN) classification method was developed and applied to 59 longitudinal MR data sets of 13 patients with recurrent GB undergoing bevacizumab (anti-angiogenic) therapy. Changes in the enhancing tumor volume were assessed using the proposed method and compared with Macdonald's criteria and with manual volumetric measurements. The edema-like area was further subclassified into peri- and non-peri-tumoral edema, using both the kNN method and an unsupervised method, to monitor longitudinal changes. Results: Automatic classification using the modified kNN method was applicable in all scans, even when the tumors were infiltrative with unclear borders. The enhancing tumor volume obtained using the automatic method was highly correlated with manual measurements (N = 33, r = 0.96, p < 0.0001), while standard radiographic assessment based on Macdonald's criteria matched manual delineation and automatic results in only 68% of cases. A graded pattern of tumor infiltration within the edema-like area was revealed by both automatic methods, showing high agreement. All classification results were confirmed by a senior neuro-radiologist and validated using MR spectroscopy. Conclusion: This study emphasizes the important role of automatic tools based on a multi-modal view of the tissue in monitoring therapy response in patients with high grade gliomas specifically under anti-angiogenic therapy
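
The kNN classifier at the core of the method (before the paper's modifications) can be sketched directly: each voxel's intensities across the MR modalities are stacked into one feature vector, and the label is decided by majority vote among the k nearest labelled training vectors. This is generic kNN, not the authors' modified variant:

```python
import numpy as np

def knn_classify(train_X, train_y, query_X, k=3):
    """Plain k-nearest-neighbour labelling of multi-modal feature vectors."""
    out = np.empty(len(query_X), dtype=train_y.dtype)
    for i, q in enumerate(query_X):
        d = np.linalg.norm(train_X - q, axis=1)        # Euclidean distances
        votes = train_y[np.argsort(d)[:k]]             # labels of k nearest
        vals, counts = np.unique(votes, return_counts=True)
        out[i] = vals[np.argmax(counts)]               # majority vote
    return out
```

Applied per voxel with tissue classes such as enhancing tumor and edema-like area as labels, this yields the kind of volumetric tissue maps the study compares against manual delineation.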

  4. Machine Learning of Human Pluripotent Stem Cell-Derived Engineered Cardiac Tissue Contractility for Automated Drug Classification

    Directory of Open Access Journals (Sweden)

    Eugene K. Lee

    2017-11-01

    Full Text Available Accurately predicting cardioactive effects of new molecular entities for therapeutics remains a daunting challenge. Immense research effort has been focused toward creating new screening platforms that utilize human pluripotent stem cell (hPSC)-derived cardiomyocytes and three-dimensional engineered cardiac tissue constructs to better recapitulate human heart function and drug responses. As these new platforms become increasingly sophisticated and high throughput, the drug screens result in larger multidimensional datasets. Improved automated analysis methods must therefore be developed in parallel to fully comprehend the cellular response across a multidimensional parameter space. Here, we describe the use of machine learning to comprehensively analyze 17 functional parameters derived from force readouts of hPSC-derived ventricular cardiac tissue strips (hvCTS) electrically paced at a range of frequencies and exposed to a library of compounds. A generated metric is effective for then determining the cardioactivity of a given drug. Furthermore, we demonstrate a classification model that can automatically predict the mechanistic action of an unknown cardioactive drug.

  5. Visualization and classification in biomedical terahertz pulsed imaging

    International Nuclear Information System (INIS)

    Loeffler, Torsten; Siebert, Karsten; Czasch, Stephanie; Bauer, Tobias; Roskos, Hartmut G

    2002-01-01

    'Visualization' in imaging is the process of extracting useful information from raw data in such a way that meaningful physical contrasts are developed. 'Classification' is the subsequent process of defining parameter ranges which allow us to identify elements of images such as different tissues or different objects. In this paper, we explore techniques for visualization and classification in terahertz pulsed imaging (TPI) for biomedical applications. For archived (formalin-fixed, alcohol-dehydrated and paraffin-mounted) test samples, we investigate both time- and frequency-domain methods based on bright- and dark-field TPI. Successful tissue classification is demonstrated

  6. Classification of breast tissue for protocols development in mammography exam;Classificacao dos tecidos mamarios para elaboracao de protocolos em exames de mamografia

    Energy Technology Data Exchange (ETDEWEB)

    Teixeira, M. Ines; Caldas, Linda V.E. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Chohfi, Adriana C.S.; Figueiredo, Zenaide A. [Universidade Nove de Julho (UNINOVE), Sao Paulo, SP (Brazil). Dept. da Saude. Radiologia Medica

    2009-07-01

    Several factors must be considered when performing a mammogram, such as the equipment used and the classification of the patient's breast tissue. Breast density differs from patient to patient, which often hinders early diagnosis of breast cancer. The breast to be radiographed varies significantly, depending on the characteristics of the breast tissue and on the presence or absence of prostheses or implants. The main objective of this work was to survey the factors that influence the classification of breast tissues. With this survey and classification, protocols can be better adjusted to make the mammography exam more efficient, avoiding excess radiation dose and patient stress while saving raw material, improving quality control, and speeding up image acquisition. (author)

  7. A robust dataset-agnostic heart disease classifier from Phonocardiogram.

    Science.gov (United States)

    Banerjee, Rohan; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan; Mandana, K M

    2017-07-01

    Automatic classification of normal and abnormal heart sounds is a popular area of research. However, building a robust algorithm unaffected by signal quality and patient demography is a challenge. In this paper we have analysed a wide range of Phonocardiogram (PCG) features in the time and frequency domains, along with morphological and statistical features, to construct a robust and discriminative feature set for dataset-agnostic classification of normal and cardiac patients. The large and open-access database made available in the PhysioNet 2016 challenge was used for feature selection, internal validation and creation of training models. A second dataset of 41 PCG segments, collected using our in-house smartphone-based digital stethoscope in an Indian hospital, was used for performance evaluation. Our proposed methodology yielded sensitivity and specificity scores of 0.76 and 0.75 respectively on the test dataset in classifying cardiovascular diseases. The methodology also outperformed three popular prior art approaches when applied to the same dataset.
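
A few of the time- and frequency-domain PCG descriptors such a feature set might contain can be computed directly from the signal. The particular trio below (zero-crossing rate, RMS energy, spectral centroid) is illustrative only, not the paper's exact feature list:

```python
import numpy as np

def pcg_features(x, fs):
    """Three simple time/frequency-domain descriptors of a 1-D signal x
    sampled at fs Hz: zero-crossing rate, RMS energy, spectral centroid."""
    zcr = np.mean(np.abs(np.diff(np.signbit(x).astype(int))))   # sign changes per sample
    rms = np.sqrt(np.mean(x ** 2))
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    centroid = (freqs * spec).sum() / spec.sum()                # spectral "center of mass"
    return zcr, rms, centroid
```

Collecting many such descriptors per heart-sound segment, then selecting the discriminative subset on a large training corpus, is the feature-engineering pattern the abstract describes.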

  8. OmniGA: Optimized Omnivariate Decision Trees for Generalizable Classification Models

    KAUST Repository

    Magana-Mora, Arturo

    2017-06-14

    Classification problems from different domains vary in complexity, size, and imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is a constant interest in developing novel robust and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models for all the considered datasets, reducing the F score error in the range from 100% to 2.25%, compared to the best performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.

  9. OmniGA: Optimized Omnivariate Decision Trees for Generalizable Classification Models

    KAUST Repository

    Magana-Mora, Arturo; Bajic, Vladimir B.

    2017-01-01

    Classification problems from different domains vary in complexity, size, and imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is a constant interest in developing novel robust and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models for all the considered datasets, reducing the F score error in the range from 100% to 2.25%, compared to the best performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.

  10. Segmentation methodology for automated classification and differentiation of soft tissues in multiband images of high-resolution ultrasonic transmission tomography.

    Science.gov (United States)

    Jeong, Jeong-Won; Shin, Dae C; Do, Synho; Marmarelis, Vasilis Z

    2006-08-01

    This paper presents a novel segmentation methodology for automated classification and differentiation of soft tissues using multiband data obtained with the newly developed system of high-resolution ultrasonic transmission tomography (HUTT) for imaging biological organs. This methodology extends and combines two existing approaches: the L-level set active contour (AC) segmentation approach and the agglomerative hierarchical k-means approach for unsupervised clustering (UC). To prevent the trapping of the current iterative minimization AC algorithm in a local minimum, we introduce a multiresolution approach that applies the level set functions at successively increasing resolutions of the image data. The resulting AC clusters are subsequently rearranged by the UC algorithm that seeks the optimal set of clusters yielding the minimum within-cluster distances in the feature space. The presented results from Monte Carlo simulations and experimental animal-tissue data demonstrate that the proposed methodology outperforms other existing methods without depending on heuristic parameters and provides a reliable means for soft tissue differentiation in HUTT images.

  11. Robust rooftop extraction from visible band images using higher order CRF

    KAUST Repository

    Li, Er; Femiani, John; Xu, Shibiao; Zhang, Xiaopeng; Wonka, Peter

    2015-01-01

    In this paper, we propose a robust framework for building extraction in visible band images. We first get an initial classification of the pixels based on an unsupervised presegmentation. Then, we develop a novel conditional random field (CRF

  12. Weakly supervised classification in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Dery, Lucio Mwinmaarong [Physics Department, Stanford University,Stanford, CA, 94305 (United States); Nachman, Benjamin [Physics Division, Lawrence Berkeley National Laboratory,1 Cyclotron Rd, Berkeley, CA, 94720 (United States); Rubbo, Francesco; Schwartzman, Ariel [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA, 94025 (United States)

    2017-05-29

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics — quark versus gluon tagging — we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
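
A minimal sketch of the setting the abstract describes, learning from class proportions alone: a logistic model is fit so that its mean predicted probability on each batch matches that batch's known class proportion, with no per-sample labels. The synthetic data, squared-proportion loss, and plain gradient descent below are our own simplifications, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "batches" of unlabeled samples with known class proportions only:
# class 0 ~ N(-2, 1), class 1 ~ N(+2, 1) in a single feature.
def make_batch(n, frac_pos):
    n_pos = int(n * frac_pos)
    x = np.concatenate([rng.normal(-2, 1, n - n_pos), rng.normal(2, 1, n_pos)])
    return x.reshape(-1, 1)

batches = [make_batch(200, 0.2), make_batch(200, 0.8)]
proportions = [0.2, 0.8]

# Logistic model p(y=1|x) = sigmoid(w*x + b); train by matching the mean
# predicted probability per batch to the known class proportion.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(400):
    gw = gb = 0.0
    for x, p_true in zip(batches, proportions):
        p = 1 / (1 + np.exp(-(w * x[:, 0] + b)))
        err = p.mean() - p_true                      # proportion mismatch
        gw += err * np.mean(p * (1 - p) * x[:, 0])   # grad of 0.5*err^2 wrt w
        gb += err * np.mean(p * (1 - p))             # grad of 0.5*err^2 wrt b
    w -= lr * gw
    b -= lr * gb

# The learned boundary separates the underlying classes even though no
# individual sample was ever labeled.
test_x = np.array([-2.0, 2.0])
p_test = 1 / (1 + np.exp(-(w * test_x + b)))
print(p_test[0] < 0.5, p_test[1] > 0.5)
```

The key property carries over from the abstract: only aggregate proportions enter the loss, so per-sample mislabeling in simulation cannot bias the classifier.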

  13. Weakly supervised classification in high energy physics

    International Nuclear Information System (INIS)

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; Schwartzman, Ariel

    2017-01-01

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics — quark versus gluon tagging — we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.

  14. Segmentation and labeling of the ventricular system in normal pressure hydrocephalus using patch-based tissue classification and multi-atlas labeling

    Science.gov (United States)

    Ellingsen, Lotta M.; Roy, Snehashis; Carass, Aaron; Blitz, Ari M.; Pham, Dzung L.; Prince, Jerry L.

    2016-03-01

    Normal pressure hydrocephalus (NPH) affects older adults and is thought to be caused by obstruction of the normal flow of cerebrospinal fluid (CSF). NPH typically presents with cognitive impairment, gait dysfunction, and urinary incontinence, and may account for more than five percent of all cases of dementia. Unlike most other causes of dementia, NPH can potentially be treated and the neurological dysfunction reversed by shunt surgery or endoscopic third ventriculostomy (ETV), which drain excess CSF. However, a major diagnostic challenge remains to robustly identify shunt-responsive NPH patients from patients with enlarged ventricles due to other neurodegenerative diseases. Currently, radiologists grade the severity of NPH by detailed examination and measurement of the ventricles based on stacks of 2D magnetic resonance images (MRIs). Here we propose a new method to automatically segment and label different compartments of the ventricles in NPH patients from MRIs. While this task has been achieved in healthy subjects, the ventricles in NPH are both enlarged and deformed, causing current algorithms to fail. Here we combine a patch-based tissue classification method with a registration-based multi-atlas labeling method to generate a novel algorithm that labels the lateral, third, and fourth ventricles in subjects with ventriculomegaly. The method is also applicable to other neurodegenerative diseases such as Alzheimer's disease, a condition considered in the differential diagnosis of NPH. Comparisons with state-of-the-art segmentation techniques demonstrate substantial improvements in labeling the enlarged ventricles, indicating that this strategy may be a viable option for the diagnosis and characterization of NPH.

  15. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present...... a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We...... suggest to adapt the outlier probability and regularisation parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...
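
The modified likelihood with an outlier probability that the abstract describes can be illustrated as a mixture of the network's softmax output and a uniform outlier term. In this sketch, `eps` plays the role of the outlier probability; the concrete logits and numbers are ours, not the paper's.

```python
import numpy as np

def robust_nll(logits, label, eps):
    """Negative log-likelihood with an outlier mixture: with probability eps
    the observed label is assumed to come from a uniform outlier process."""
    z = np.exp(logits - logits.max())        # numerically stable softmax
    p = z / z.sum()
    p_robust = (1 - eps) * p + eps / len(p)  # mix in the uniform outlier term
    return -np.log(p_robust[label])

logits = np.array([4.0, 0.0, 0.0])
# A grossly mislabeled example is penalised far less under the robust loss,
# so a single outlier cannot dominate training:
print(robust_nll(logits, 1, eps=0.0) > robust_nll(logits, 1, eps=0.05))  # True
```

In the paper's framework eps itself is adapted on a validation set alongside the regularization parameters; here it is simply fixed for illustration.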

  16. A minimum spanning forest based classification method for dedicated breast CT images

    International Nuclear Information System (INIS)

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei

    2015-01-01

    Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors’ classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.
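
A minimal, pure-Python sketch of marker-driven minimum spanning forest segmentation in the spirit of the abstract: labeled seeds grow Prim-style over a 4-connected grid, always taking the frontier edge with the smallest intensity difference. The tiny image, the tissue labels, and the function name are illustrative only.

```python
import heapq

def min_spanning_forest_labels(image, markers):
    """Grow a minimum spanning forest from labelled marker pixels.

    image   : 2D list of intensities
    markers : dict {(row, col): label} of seed pixels
    Each unlabelled pixel receives the label of the tree that reaches it
    first, always expanding the cheapest intensity-difference edge.
    """
    rows, cols = len(image), len(image[0])
    labels = {}
    heap = [(0, r, c, lab) for (r, c), lab in markers.items()]
    heapq.heapify(heap)
    while heap:
        weight, r, c, lab = heapq.heappop(heap)
        if (r, c) in labels:
            continue                       # already claimed by a cheaper edge
        labels[(r, c)] = lab
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in labels:
                edge = abs(image[nr][nc] - image[r][c])
                heapq.heappush(heap, (edge, nr, nc, lab))
    return labels

# Tiny synthetic slice: a dark region (around 10) on the left and a bright
# region (around 200) on the right, with one seed in each region.
image = [
    [10, 11, 12, 198, 200],
    [11, 10, 13, 199, 201],
    [12, 12, 11, 200, 200],
]
markers = {(0, 0): "fat", (0, 4): "glandular"}
labels = min_spanning_forest_labels(image, markers)
print(labels[(2, 2)], labels[(2, 3)])  # fat glandular
```

In the paper the markers come from the SVM/skin-mask classification map rather than being hand-placed, but the growth step is the same kind of seeded forest.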

  17. Robust classification of neonatal apnoea-related desaturations

    International Nuclear Information System (INIS)

    Monasterio, Violeta; Burgess, Fred; Clifford, Gari D

    2012-01-01

    Respiratory signals monitored in neonatal intensive care units are usually ignored due to the high prevalence of noise and false alarms (FA). Apnoeic events are therefore generally indicated by a pulse oximeter alarm reacting to the subsequent desaturation. However, the high FA rate in the photoplethysmogram may desensitize staff, reducing the reaction speed. The main reason for the high FA rates of critical care monitors is their unimodal analysis behaviour. In this work, we propose a multimodal analysis framework to reduce the FA rate in neonatal apnoea monitoring. Information about oxygen saturation, heart rate, respiratory rate and signal quality was extracted from electrocardiogram, impedance pneumogram and photoplethysmographic signals for a total of 20 features in the 5 min interval before a desaturation event. 1616 desaturation events from 27 neonatal admissions were annotated by two independent reviewers as true (physiologically relevant) or false (noise-related). Patients were divided into two independent groups for training and validation, and a support vector machine was trained to classify the events as true or false. The best classification performance was achieved on a combination of 13 features with sensitivity, specificity and accuracy of 100% in the training set, and a sensitivity of 86%, a specificity of 91% and an accuracy of 90% in the validation set. (paper)

  18. Towards an efficient and robust foot classification from pedobarographic images

    OpenAIRE

    Oliveira, Francisco; Sousa, Andreia S. P.; Santos, Rubim; Tavares, João Manuel

    2012-01-01

    The attached document is the post-print version (the version corrected by the editor). This paper presents a new computational framework for automatic foot classification from digital plantar pressure images. It classifies the foot as left or right and simultaneously calculates two well-known footprint indices: the Cavanagh's arch index and the modified arch index. The accuracy of the framework was evaluated using a set of plantar pressure images from two common pedobarographic devices. The...

  19. Fault Tolerant Neural Network for ECG Signal Classification Systems

    Directory of Open Access Journals (Sweden)

    MERAH, M.

    2011-08-01

    Full Text Available The aim of this paper is to apply a new robust hardware Artificial Neural Network (ANN) to ECG classification systems. This ANN includes a penalization criterion which improves its robustness. Specifically, in this method, the ANN weights are normalized using the auto-prune method. Simulations performed on the MIT-BIH ECG signals have shown that significant robustness improvements are obtained with respect to potential hardware artificial neuron failures. Moreover, we show that the proposed design achieves better generalization performance compared to the standard back-propagation algorithm.

  20. The classification of benign and malignant human prostate tissue by multivariate analysis of {sup 1}H magnetic resonance spectra

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, P.; Smith, I.; Leboldus, L.; Littman, C.; Somorjai, L.; Bezabeh, T. [Institute for Biodiagnostic, National Research Council, Manitoba (Canada)

    1998-04-01

    {sup 1}H magnetic resonance spectroscopy studies (360 MHz) were performed on specimens of benign (n = 66) and malignant (n = 21) human prostate tissue from 50 patients and the spectral data were subjected to multivariate analysis, specifically linear-discriminant analysis. On the basis of histopathological assessments, an overall classification accuracy of 96.6 % was achieved, with a sensitivity of 100 % and a specificity of 95.5 % in distinguishing benign prostatic hyperplasia from prostatic cancer. Resonances due to citrate, glutamate, and taurine were among the six spectral subregions identified by our algorithm as having diagnostic potential. Significantly higher levels of citrate were observed in glandular than in stromal benign prostatic hyperplasia (P < 0.05). This method shows excellent promise for the possibility of in vivo assessment of prostate tissue by magnetic resonance. (author)

  1. Dimensionality-varied convolutional neural network for spectral-spatial classification of hyperspectral data

    Science.gov (United States)

    Liu, Wanjun; Liang, Xuejian; Qu, Haicheng

    2017-11-01

    Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Both traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) was proposed in this paper. DVCNN was a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN was a set of 3D patches selected from the HSI which contained spectral-spatial joint information. In the following feature extraction process, each patch was transformed into several different 1D vectors by 3D convolution kernels, which were able to extract features from spectral-spatial data. The rest of DVCNN was much the same as a general CNN and processed the 2D matrix constituted by all the 1D data. Thus DVCNN could not only extract more accurate and richer features than CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands was enhanced in the process of spectral-spatial fusion by 3D convolution, and the calculation was simplified by the dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results showed that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and 19.63% on Pavia University scene compared with a spectral-only CNN. DVCNN also achieved an accuracy improvement of up to 13.72% over other state-of-the-art HSI classification methods, and its robustness to noise in the water-absorption bands was demonstrated.
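
The core dimensionality-varied step, a 3D kernel spanning the full spatial window that slides along the spectral axis so a 3D patch collapses to a 1D vector, can be sketched with NumPy as follows. The patch size, band depth, and random data are arbitrary assumptions for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 3D HSI patch: 7x7 spatial window with 20 spectral bands.
patch = rng.normal(size=(20, 7, 7))   # (bands, height, width)

# One 3D kernel spanning the full spatial window but only a few bands;
# sliding it along the spectral axis turns the 3D patch into a 1D vector.
kernel = rng.normal(size=(5, 7, 7))   # (band depth, height, width)

def conv3d_to_1d(patch, kernel):
    """Valid-mode 3D correlation along the spectral axis only."""
    depth = kernel.shape[0]
    n_out = patch.shape[0] - depth + 1
    return np.array([np.sum(patch[i:i + depth] * kernel) for i in range(n_out)])

vec = conv3d_to_1d(patch, kernel)
print(vec.shape)  # (16,)
```

Stacking the 1D outputs of many such kernels gives the 2D matrix that the remaining CNN layers then process.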

  2. Robust Face Recognition Via Gabor Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    Hao Yu-Juan

    2016-01-01

    Full Text Available Sparse representation based on compressed sensing theory has been widely used in the field of face recognition and has achieved good recognition results. However, face feature extraction based on sparse representation is often too simple, and the resulting coefficients are not truly sparse. In this paper, we improve the classification algorithm by fusing sparse representation with Gabor features. The improved Gabor feature extraction overcomes the problem of high feature-vector dimensionality, reduces computation and storage costs, and enhances the robustness of the algorithm to changes in the environment. Since the classification efficiency of sparse representation is determined by the collaborative representation, we simplify the L1-norm sparsity constraint to a least-squares constraint, which keeps the sparse coefficients positive and reduces the complexity of the algorithm. Experimental results show that the proposed method is robust to illumination, facial expression and pose variations in face recognition, and that its recognition rate is improved.
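
For reference, a Gabor filter bank of the kind used before the sparse-coding step can be built as below: each kernel is a Gaussian-windowed cosine grating at a given orientation. The kernel size, wavelength, and orientation count are illustrative choices, not the paper's settings.

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=0.5):
    """Real part of a 2D Gabor filter: a Gaussian envelope times a cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam)

# A small bank over 4 orientations; convolving an image with each kernel and
# pooling the responses yields the Gabor feature vector.
bank = [gabor_kernel(15, t, lam=6.0, sigma=3.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(len(bank), bank[0].shape)  # 4 (15, 15)
```

Downsampling or pooling the filter responses is what keeps the final feature vector's dimensionality manageable, which is the cost issue the abstract raises.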

  3. High throughput assessment of cells and tissues: Bayesian classification of spectral metrics from infrared vibrational spectroscopic imaging data.

    Science.gov (United States)

    Bhargava, Rohit; Fernandez, Daniel C; Hewitt, Stephen M; Levin, Ira W

    2006-07-01

    Vibrational spectroscopy allows a visualization of tissue constituents based on intrinsic chemical composition and provides a potential route to obtaining diagnostic markers of diseases. Characterizations utilizing infrared vibrational spectroscopy, in particular, are conventionally low throughput in data acquisition, generally lacking in spatial resolution with the resulting data requiring intensive numerical computations to extract information. These factors impair the ability of infrared spectroscopic measurements to represent accurately the spatial heterogeneity in tissue, to incorporate robustly the diversity introduced by patient cohorts or preparative artifacts and to validate developed protocols in large population studies. In this manuscript, we demonstrate a combination of Fourier transform infrared (FTIR) spectroscopic imaging, tissue microarrays (TMAs) and fast numerical analysis as a paradigm for the rapid analysis, development and validation of high throughput spectroscopic characterization protocols. We provide an extended description of the data treatment algorithm and a discussion of various factors that may influence decision-making using this approach. Finally, a number of prostate tissue biopsies, arranged in an array modality, are employed to examine the efficacy of this approach in histologic recognition of epithelial cell polarization in patients displaying a variety of normal, malignant and hyperplastic conditions. An index of epithelial cell polarization, derived from a combined spectral and morphological analysis, is determined to be a potentially useful diagnostic marker.

  4. When machine vision meets histology: A comparative evaluation of model architecture for classification of histology sections.

    Science.gov (United States)

    Zhong, Cheng; Han, Ju; Borowsky, Alexander; Parvin, Bahram; Wang, Yunfu; Chang, Hang

    2017-01-01

    Classification of histology sections in large cohorts, in terms of distinct regions of microanatomy (e.g., stromal) and histopathology (e.g., tumor, necrosis), enables the quantification of tumor composition, and the construction of predictive models of genomics and clinical outcome. To tackle the large technical variations and biological heterogeneities, which are intrinsic in large cohorts, emerging systems utilize either prior knowledge from pathologists or unsupervised feature learning for invariant representation of the underlying properties in the data. However, to a large degree, the architecture for tissue histology classification remains unexplored and requires urgent systematical investigation. This paper is the first attempt to provide insights into three fundamental questions in tissue histology classification: I. Is unsupervised feature learning preferable to human engineered features? II. Does cellular saliency help? III. Does the sparse feature encoder contribute to recognition? We show that (a) in I, both Cellular Morphometric Feature and features from unsupervised feature learning lead to superior performance when compared to SIFT and [Color, Texture]; (b) in II, cellular saliency incorporation impairs the performance for systems built upon pixel-/patch-level features; and (c) in III, the effect of the sparse feature encoder is correlated with the robustness of features, and the performance can be consistently improved by the multi-stage extension of systems built upon both Cellular Morphometric Feature and features from unsupervised feature learning. These insights are validated with two cohorts of Glioblastoma Multiforme (GBM) and Kidney Clear Cell Carcinoma (KIRC). Copyright © 2016 Elsevier B.V. All rights reserved.

  5. A New Method for Solving Supervised Data Classification Problems

    Directory of Open Access Journals (Sweden)

    Parvaneh Shabanzadeh

    2014-01-01

    Full Text Available Supervised data classification is one of the techniques used to extract nontrivial information from data. Classification is a widely used technique in various fields, including data mining, industry, medicine, science, and law. This paper considers a new algorithm for supervised data classification problems associated with cluster analysis. The mathematical formulation of this algorithm is based on nonsmooth, nonconvex optimization, and a new derivative-free algorithm, which is both robust and efficient, is used to solve the resulting optimization problem. To improve classification performance and the efficiency of generating the classification model, a new feature selection algorithm based on convex programming techniques is suggested. The proposed methods were tested on real-world datasets, and the results of the numerical experiments demonstrate their effectiveness.

  6. High-dimensional data in economics and their (robust) analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf

  7. Proposal of a new classification scheme for periocular injuries

    Directory of Open Access Journals (Sweden)

    Devi Prasad Mohapatra

    2017-01-01

    Full Text Available Background: Eyelids are important structures that protect the globe from trauma and brightness, maintain the integrity of the tear film, move tears towards the lacrimal drainage system, and contribute to the aesthetic appearance of the face. Ophthalmic trauma is an important cause of morbidity among individuals and is also responsible for additional healthcare costs. Periocular trauma involving the eyelids and adjacent structures has increased recently, probably due to the increased pace of life and dependence on machinery. A comprehensive classification of periocular trauma would help in stratifying these injuries as well as studying outcomes. Material and Methods: This study was carried out at our institute from June 2015 to Dec 2015. We searched multiple English-language databases for existing classification systems for periocular trauma, and designed a system of classification of periocular soft tissue injuries based on clinico-anatomical presentations. This classification was applied prospectively to patients presenting with periocular soft tissue injuries to our department. Results: A comprehensive classification scheme was designed, consisting of five types of periocular injuries. A total of 38 eyelid injuries in 34 patients were evaluated in this study. According to the System for Peri-Ocular Trauma (SPOT) classification, Type V injuries were most common; SPOT Type II injuries were the most common isolated injuries among all zones. Discussion: Classification systems are necessary in order to provide a framework in which to scientifically study the etiology, pathogenesis, and treatment of diseases in an orderly fashion. The SPOT classification takes into account periocular soft tissue injuries, i.e., upper eyelid, lower eyelid, and medial and lateral canthus injuries, based on observed clinico-anatomical patterns of eyelid injuries. Conclusion: The SPOT classification seems to be a reliable

  8. MO-DE-207B-03: Improved Cancer Classification Using Patient-Specific Biological Pathway Information Via Gene Expression Data

    Energy Technology Data Exchange (ETDEWEB)

    Young, M; Craft, D [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)

    2016-06-15

    Purpose: To develop an efficient, pathway-based classification system using network biology statistics to assist in patient-specific response predictions to radiation and drug therapies across multiple cancer types. Methods: We developed PICS (Pathway Informed Classification System), a novel two-step cancer classification algorithm. In PICS, a matrix m of mRNA expression values for a patient cohort is collapsed into a matrix p of biological pathways. The entries of p, which we term pathway scores, are obtained from either principal component analysis (PCA), normal tissue centroid (NTC), or gene expression deviation (GED). The pathway score matrix is clustered using both k-means and hierarchical clustering, and a clustering is judged by how well it groups patients into distinct survival classes. The most effective pathway scoring/clustering combination, per clustering p-value, thus generates various ‘signatures’ for conventional and functional cancer classification. Results: PICS successfully regularized large dimension gene data, separated normal and cancerous tissues, and clustered a large patient cohort spanning six cancer types. Furthermore, PICS clustered patient cohorts into distinct, statistically-significant survival groups. For a suboptimally-debulked ovarian cancer set, the pathway-classified Kaplan-Meier survival curve (p = .00127) showed significant improvement over that of a prior gene expression-classified study (p = .0179). For a pancreatic cancer set, the pathway-classified Kaplan-Meier survival curve (p = .00141) showed significant improvement over that of a prior gene expression-classified study (p = .04). Pathway-based classification confirmed biomarkers for the pyrimidine, WNT-signaling, glycerophosphoglycerol, beta-alanine, and pantothenic acid pathways for ovarian cancer. Despite its robust nature, PICS requires significantly less run time than current pathway scoring methods. Conclusion: This work validates the PICS method to improve

  9. Naïve and Robust: Class-Conditional Independence in Human Classification Learning

    Science.gov (United States)

    Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D.

    2018-01-01

    Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…
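
The class-conditional independence assumption the abstract studies is exactly what Gaussian naive Bayes encodes: within each class, every feature is modeled by its own independent 1D Gaussian. A minimal pure-Python sketch with toy data (the helper names and numbers are ours):

```python
import math

def fit_naive_bayes(X, y):
    """Gaussian naive Bayes: per class, one independent Gaussian per feature."""
    params = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        n = len(rows)
        stats = []
        for j in range(len(X[0])):
            vals = [r[j] for r in rows]
            mu = sum(vals) / n
            var = sum((v - mu) ** 2 for v in vals) / n + 1e-6  # avoid zero variance
            stats.append((mu, var))
        params[label] = (n / len(y), stats)   # (class prior, per-feature Gaussians)
    return params

def predict(params, x):
    def log_post(label):
        prior, stats = params[label]
        lp = math.log(prior)
        for v, (mu, var) in zip(x, stats):    # independence: log-probs just add
            lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
        return lp
    return max(params, key=log_post)

X = [[1.0, 5.0], [1.2, 5.1], [0.9, 4.8], [4.0, 1.0], [4.2, 0.8], [3.9, 1.1]]
y = [0, 0, 0, 1, 1, 1]
model = fit_naive_bayes(X, y)
print(predict(model, [1.1, 5.0]), predict(model, [4.1, 1.0]))  # 0 1
```

The simplification the paper highlights is visible in the inner loop: under independence, each feature contributes one additive log-likelihood term, so the number of parameters grows linearly rather than exponentially in the number of features.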

  10. Classification of first branchial cleft anomalies: is it clinically relevant ...

    African Journals Online (AJOL)

    Background: There are three classification systems for first branchial cleft anomalies currently in use. The Arnot, Work and Olsen classifications describe these lesions on the basis of morphology, tissue of origin and clinical appearance. However, the clinical relevance of these classifications is debated, as they may not be ...

  11. High-dimensional Data in Economics and their (Robust) Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2017-01-01

    Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability

  12. Robust surface roughness indices and morphological interpretation

    Science.gov (United States)

    Trevisani, Sebastiano; Rocca, Michele

    2016-04-01

    Geostatistical image/surface texture indices based on the variogram (Atkinson and Lewis, 2000; Herzfeld and Higginson, 1996; Trevisani et al., 2012) and on its robust variant MAD (median absolute differences; Trevisani and Rocca, 2015) offer powerful tools for the analysis and interpretation of surface morphology (potentially not limited to the solid earth). In particular, the proposed robust index (Trevisani and Rocca, 2015), with its implementation based on local kernels, permits the derivation of a wide set of robust and customizable geomorphometric indices capable of outlining specific aspects of surface texture. The stability of MAD in the presence of signal noise and abrupt changes in spatial variability is well suited to the analysis of high-resolution digital terrain models. Moreover, the implementation of MAD by means of a pixel-centered perspective based on local kernels, with some analogies to the local binary pattern approach (Lucieer and Stein, 2005; Ojala et al., 2002), permits the creation of custom roughness indices capable of outlining different aspects of surface roughness (Grohmann et al., 2011; Smith, 2015). In the proposed poster, some potentialities of the new indices in the context of geomorphometry and landscape analysis will be presented. At the same time, challenges and future developments related to the proposed indices will be outlined. Atkinson, P.M., Lewis, P., 2000. Geostatistical classification for remote sensing: an introduction. Computers & Geosciences 26, 361-371. Grohmann, C.H., Smith, M.J., Riccomini, C., 2011. Multiscale Analysis of Topographic Surface Roughness in the Midland Valley, Scotland. IEEE Transactions on Geoscience and Remote Sensing 49, 1200-1213. Herzfeld, U.C., Higginson, C.A., 1996. Automated geostatistical seafloor classification - Principles, parameters, feature vectors, and discrimination criteria. Computers and Geosciences, 22 (1), pp. 35-52. Lucieer, A., Stein, A., 2005. Texture-based landform segmentation of LiDAR imagery
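
A simple pixel-centered MAD roughness index in the spirit of Trevisani and Rocca (2015) can be sketched as follows: for each cell, take the median of the absolute elevation differences to its kernel neighbours. The kernel shape, edge handling, and the omission of lag-specific directional differences are our simplifications of the published index.

```python
import numpy as np

def mad_roughness(dtm, radius=1):
    """Robust roughness: median absolute elevation difference between each
    cell and its neighbours in a (2*radius+1)^2 kernel (edge cells -> NaN)."""
    rows, cols = dtm.shape
    out = np.full((rows, cols), np.nan)
    for r in range(radius, rows - radius):
        for c in range(radius, cols - radius):
            win = dtm[r - radius:r + radius + 1, c - radius:c + radius + 1]
            diffs = np.abs(win - dtm[r, c]).ravel()
            diffs = np.delete(diffs, diffs.size // 2)   # drop the centre cell
            out[r, c] = np.median(diffs)
    return out

# On a smooth unit-slope ramp the interior MAD roughness is exactly the
# cell-to-cell elevation step; the median makes the index resistant to
# a single spiky outlier in the window.
ramp = np.outer(np.arange(6.0), np.ones(6))
rough = mad_roughness(ramp)
print(rough[2, 2])  # 1.0
```

Swapping the median for a mean in the same kernel recovers a classical, non-robust roughness measure, which is precisely what the abstract argues against for noisy high-resolution data.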

  13. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction

    International Nuclear Information System (INIS)

    Wels, Michael; Hornegger, Joachim; Zheng Yefeng; Comaniciu, Dorin; Huber, Martin

    2011-01-01

    We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average
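
    The unsupervised EM component of such a framework can be sketched in isolation. Below is a minimal 1-D two-Gaussian EM for intensity clustering; the discriminative PBT model, MRF prior, and INU correction described above are deliberately omitted, and the data are synthetic.

```python
import math
import random

def em_two_class(intensities, iters=50):
    """Minimal 1-D two-Gaussian EM: alternate soft assignment of each
    intensity to a class (E-step) with re-estimation of the class
    weights, means and variances (M-step)."""
    mu = [min(intensities), max(intensities)]   # spread-out initialisation
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each class for each intensity
        resp = []
        for x in intensities:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k]) *
                 math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = sum(p) or 1e-300
            resp.append([pk / s for pk in p])
        # M-step: weighted re-estimation of the mixture parameters
        for k in (0, 1):
            nk = sum(r[k] for r in resp) or 1e-300
            pi[k] = nk / len(intensities)
            mu[k] = sum(r[k] * x for r, x in zip(resp, intensities)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, intensities)) / nk, 1e-6)
    return mu

# Synthetic "tissue" intensities: two well-separated classes
random.seed(0)
data = [random.gauss(30, 2) for _ in range(200)] + \
       [random.gauss(80, 2) for _ in range(200)]
mu = em_two_class(data)
```

The estimated class means converge close to the generating means (30 and 80); a real pipeline would add the spatial prior to regularize ambiguous voxels.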

  14. Robust Seismic Normal Modes Computation in Radial Earth Models and A Novel Classification Based on Intersection Points of Waveguides

    Science.gov (United States)

    Ye, J.; Shi, J.; De Hoop, M. V.

    2017-12-01

    We develop a robust algorithm to compute seismic normal modes in a spherically symmetric, non-rotating Earth. A well-known problem is the cross-contamination of modes near "intersections" of dispersion curves for separate waveguides. Our novel computational approach completely avoids artificial degeneracies by guaranteeing orthonormality among the eigenfunctions. We extend Wiggins' and Buland's work, and reformulate the Sturm-Liouville problem as a generalized eigenvalue problem with the Rayleigh-Ritz Galerkin method. A special projection operator incorporating the gravity terms proposed by de Hoop and a displacement/pressure formulation are utilized in the fluid outer core to project out the essential spectrum. Moreover, the weak variational form enables us to achieve high accuracy across the solid-fluid boundary, especially for Stoneley modes, which have exponentially decaying behavior. We also employ the mixed finite element technique to avoid spurious pressure modes arising from discretization schemes and a numerical inf-sup test is performed following Bathe's work. In addition, the self-gravitation terms are reformulated to avoid computations outside the Earth, thanks to the domain decomposition technique. Our package enables us to study the physical properties of intersection points of waveguides. According to Okal's classification theory, the group velocities should be continuous within a branch of the same mode family. However, we have found that there will be a small "bump" near intersection points, which is consistent with Miropol'sky's observation. In fact, we can loosely regard Earth's surface and the CMB as independent waveguides. For those modes that are far from the intersection points, their eigenfunctions are localized in the corresponding waveguides. However, those that are close to intersection points will have physical features of both waveguides, which means they cannot be classified in either family. Our results improve on Okal

  15. β-Tricalcium phosphate/poly(glycerol sebacate) scaffolds with robust mechanical property for bone tissue engineering

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Kai [The State Key Laboratory of Bioreactor Engineering, East China University of Science and Technology, Shanghai 200237 (China); Engineering Research Centre for Biomedical Materials of Ministry of Education, East China University of Science and Technology, Shanghai 200237 (China); Zhang, Jing; Ma, Xiaoyu; Ma, Yifan; Kan, Chao [Key Laboratory for Ultrafine Materials of Ministry of Education, East China University of Science and Technology, Shanghai 200237 (China); Engineering Research Centre for Biomedical Materials of Ministry of Education, East China University of Science and Technology, Shanghai 200237 (China); Ma, Haiyan [Engineering Research Centre for Biomedical Materials of Ministry of Education, East China University of Science and Technology, Shanghai 200237 (China); Li, Yulin, E-mail: yulinli@ecust.edu.cn [Engineering Research Centre for Biomedical Materials of Ministry of Education, East China University of Science and Technology, Shanghai 200237 (China); Yuan, Yuan, E-mail: yyuan@ecust.edu.cn [The State Key Laboratory of Bioreactor Engineering, East China University of Science and Technology, Shanghai 200237 (China); Engineering Research Centre for Biomedical Materials of Ministry of Education, East China University of Science and Technology, Shanghai 200237 (China); Liu, Changsheng, E-mail: liucs@ecust.edu.cn [The State Key Laboratory of Bioreactor Engineering, East China University of Science and Technology, Shanghai 200237 (China); Key Laboratory for Ultrafine Materials of Ministry of Education, East China University of Science and Technology, Shanghai 200237 (China); Engineering Research Centre for Biomedical Materials of Ministry of Education, East China University of Science and Technology, Shanghai 200237 (China)

    2015-11-01

    Despite good biocompatibility and osteoconductivity, porous β-TCP scaffolds still lack structural stability and mechanical robustness, which greatly limits their application in bone regeneration. Hybridization of β-TCP with the conventional synthetic biodegradable polymers PLA and PCL produces only a limited toughening effect, due to the intrinsic plasticity of these polymers. In this study, a β-TCP/poly(glycerol sebacate) scaffold (β-TCP/PGS) with a well-interconnected porous structure and robust mechanical properties was prepared. A porous β-TCP scaffold was first prepared with a polyurethane sponge as template and then impregnated with a PGS pre-polymer solution of moderate viscosity, followed by in situ heat crosslinking and a freeze-drying process. The results indicated that freeze-drying under vacuum could further facilitate crosslinking of PGS and the formation of Ca{sup 2+}–COO{sup −} ionic complexes, and thus synergistically improved the mechanical strength of the β-TCP/PGS relative to in situ heat crosslinking alone. In particular, the β-TCP/PGS with 15% PGS content, after heat crosslinking at 130 °C and freeze-drying at − 50 °C under vacuum, exhibited an elongation at break of 375 ± 25% and a compressive strength of 1.73 MPa, a 3.7-fold and 200-fold enhancement over the β-TCP, respectively. After the abrupt drop of compressive load, the β-TCP/PGS scaffolds exhibited a full recovery of their original shape. More importantly, the PGS polymer in the β-TCP/PGS scaffolds could direct the biomineralization of Ca/P from a particulate shape into a nanofiber-interweaved structure. Furthermore, the β-TCP/PGS scaffolds allowed cell penetration and proliferation, indicating good cytobiocompatibility. It is believed that β-TCP/PGS scaffolds have great potential for application in rigid tissue regeneration. - Graphical abstract: Robust β-TCP/PGS porous scaffolds are developed by incorporation of poly(glycerol sebacate) (PGS, a flexible

  16. Shift-invariant discrete wavelet transform analysis for retinal image classification.

    Science.gov (United States)

    Khademi, April; Krishnan, Sridhar

    2007-12-01

    This work developed a novel analysis system for retinal image classification. From the compressed domain, the proposed scheme extracts textural features from wavelet coefficients, which describe the relative homogeneity of localized areas of the retinal images. Since the discrete wavelet transform (DWT) is shift-variant, a shift-invariant DWT was explored to ensure that a robust feature set was extracted. To combat the small database size, linear discriminant analysis classification was used with the leave-one-out method. 38 normal and 48 abnormal images (exudates, large drusen, fine drusen, choroidal neovascularization, central vein and artery occlusion, histoplasmosis, arteriosclerotic retinopathy, hemi-central retinal vein occlusion and more) were used, and a specificity of 79% and a sensitivity of 85.4% were achieved (the average classification rate is 82.2%). The success of the system can be attributed to the highly robust feature set, which included translation-, scale- and semi-rotation-invariant features. Additionally, this technique is database independent since the features were specifically tuned to the pathologies of the human eye.
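
    Why shift invariance matters can be shown with a toy Haar decomposition (a hedged illustration, not the paper's actual transform or features): the detail-band energy of a plain DWT changes when the signal is shifted by one sample, while averaging the energy over all circular shifts (cycle-spinning) removes that dependence.

```python
def haar_level1(x):
    """One level of the Haar DWT: pairwise averages (approximation)
    and pairwise half-differences (detail)."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return a, d

def detail_energy(x):
    _, d = haar_level1(x)
    return sum(v * v for v in d)

def shift_invariant_energy(x):
    """Cycle-spinning: average the detail energy over all circular
    shifts, removing the DWT's dependence on signal alignment."""
    n = len(x)
    return sum(detail_energy(x[s:] + x[:s]) for s in range(n)) / n

sig = [0, 0, 1, 1, 0, 0, 0, 0]
shifted = sig[1:] + sig[:1]          # circular shift by one sample
```

The plain detail energy of `sig` is 0 but becomes 0.5 after the shift; the cycle-spun energy is identical for both, so features built on it do not depend on where a lesion happens to fall relative to the dyadic grid.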

  17. SU-E-T-625: Robustness Evaluation and Robust Optimization of IMPT Plans Based on Per-Voxel Standard Deviation of Dose Distributions.

    Science.gov (United States)

    Liu, W; Mohan, R

    2012-06-01

    Proton dose distributions, IMPT in particular, are highly sensitive to setup and range uncertainties. We report a novel method, based on the per-voxel standard deviation (SD) of dose distributions, to evaluate the robustness of proton plans and to robustly optimize IMPT plans to render them less sensitive to uncertainties. For each optimization iteration, nine dose distributions are computed - the nominal one, and one each for ± setup uncertainties along the x, y and z axes and for ± range uncertainty. The SD of dose in each voxel is used to create an SD-volume histogram (SVH) for each structure. The SVH may be considered a quantitative representation of the robustness of the dose distribution. For optimization, the desired robustness may be specified in terms of an SD-volume (SV) constraint on the CTV and incorporated as a term in the objective function. Results of optimization with and without this constraint were compared in terms of plan optimality and robustness using the so-called 'worst case' dose distributions, which are obtained by assigning the lowest among the nine doses to each voxel in the clinical target volume (CTV) and the highest to normal tissue voxels outside the CTV. The SVH curve and the area under it for each structure were used as quantitative measures of robustness. The penalty parameter of the SV constraint may be varied to control the tradeoff between robustness and plan optimality. We applied these methods to one case each of H&N and lung. In both cases, we found that imposing the SV constraint improved plan robustness, but at the cost of normal tissue sparing. SVH-based optimization and evaluation is an effective tool for robustness evaluation and robust optimization of IMPT plans. Studies need to be conducted to test the methods for larger cohorts of patients and for other sites. This research is supported by National Cancer Institute (NCI) grant P01CA021239, the University Cancer Foundation via the Institutional Research Grant program at the University of Texas MD
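
    The SVH construction described above can be sketched directly: compute the SD of dose per voxel across the nine scenarios, then, for a set of SD thresholds, record the fraction of the structure's voxels whose SD meets or exceeds each threshold. The scenario doses below are made-up numbers for a three-voxel structure, purely for illustration.

```python
import statistics

def svh(dose_scenarios, voxel_ids, bins):
    """Per-voxel SD of dose across scenarios, then the fraction of the
    structure's voxels whose SD reaches each threshold (SD-volume
    histogram, analogous to a DVH with SD on the x-axis)."""
    sd = {v: statistics.pstdev([sc[v] for sc in dose_scenarios])
          for v in voxel_ids}
    n = len(voxel_ids)
    return [sum(1 for v in voxel_ids if sd[v] >= t) / n for t in bins]

# Nine scenarios (nominal plus +/- setup along x, y, z and +/- range),
# here faked as dicts voxel -> dose (Gy) for a 3-voxel structure.
scenarios = [{0: 60 + s * 0.1, 1: 60.0, 2: 60 + s} for s in range(-4, 5)]
curve = svh(scenarios, [0, 1, 2], bins=[0.0, 0.5, 1.0, 2.0, 3.0])
```

A perfectly robust structure gives a curve that drops to zero immediately; the area under the curve summarizes residual sensitivity, which is the quantity penalized by the SV constraint.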

  18. Robust optical sensors for safety critical automotive applications

    Science.gov (United States)

    De Locht, Cliff; De Knibber, Sven; Maddalena, Sam

    2008-02-01

    Optical sensors for the automotive industry need to be robust, high-performing and low-cost. This paper focuses on the impact of automotive requirements on optical sensor design and packaging. Main strategies to lower optical-sensor entry barriers in the automotive market include: performing sensor calibration and tuning at the sensor manufacturer, providing on-chip sensor test modes to guarantee functional integrity during operation, and recognizing that package technology is key. In conclusion, optical sensor applications are growing in the automotive sector. Optical sensor robustness has matured to the level of safety-critical applications, such as Electrical Power Assisted Steering (EPAS) and Drive-by-Wire with systems based on optical linear arrays, and Automated Cruise Control (ACC), Lane Change Assist and Driver Classification/Smart Airbag Deployment with systems based on camera imagers.

  19. Lauren classification and individualized chemotherapy in gastric cancer.

    Science.gov (United States)

    Ma, Junli; Shen, Hong; Kapesa, Linda; Zeng, Shan

    2016-05-01

    Gastric cancer is one of the most common malignancies worldwide. During the last 50 years, the histological classification of gastric carcinoma has been largely based on Lauren's criteria, in which gastric cancer is classified into two major histological subtypes, namely intestinal type and diffuse type adenocarcinoma. This classification was introduced in 1965, and remains currently widely accepted and employed, since it constitutes a simple and robust classification approach. The two histological subtypes of gastric cancer proposed by the Lauren classification exhibit a number of distinct clinical and molecular characteristics, including histogenesis, cell differentiation, epidemiology, etiology, carcinogenesis, biological behaviors and prognosis. Gastric cancer exhibits varied sensitivity to chemotherapy drugs and significant heterogeneity; therefore, the disease may be a target for individualized therapy. The Lauren classification may provide the basis for individualized treatment for advanced gastric cancer, which is increasingly gaining attention in the scientific field. However, few studies have investigated individualized treatment that is guided by pathological classification. The aim of the current review is to analyze the two major histological subtypes of gastric cancer, as proposed by the Lauren classification, and to discuss the implications of this for personalized chemotherapy.

  20. β-Tricalcium phosphate/poly(glycerol sebacate) scaffolds with robust mechanical property for bone tissue engineering.

    Science.gov (United States)

    Yang, Kai; Zhang, Jing; Ma, Xiaoyu; Ma, Yifan; Kan, Chao; Ma, Haiyan; Li, Yulin; Yuan, Yuan; Liu, Changsheng

    2015-11-01

    Despite good biocompatibility and osteoconductivity, porous β-TCP scaffolds still lack structural stability and mechanical robustness, which greatly limits their application in bone regeneration. Hybridization of β-TCP with the conventional synthetic biodegradable polymers PLA and PCL produces only a limited toughening effect, due to the intrinsic plasticity of these polymers. In this study, a β-TCP/poly(glycerol sebacate) scaffold (β-TCP/PGS) with a well-interconnected porous structure and robust mechanical properties was prepared. A porous β-TCP scaffold was first prepared with a polyurethane sponge as template and then impregnated with a PGS pre-polymer solution of moderate viscosity, followed by in situ heat crosslinking and a freeze-drying process. The results indicated that freeze-drying under vacuum could further facilitate crosslinking of PGS and the formation of Ca(2+)-COO(-) ionic complexes, and thus synergistically improved the mechanical strength of the β-TCP/PGS relative to in situ heat crosslinking alone. In particular, the β-TCP/PGS with 15% PGS content, after heat crosslinking at 130°C and freeze-drying at -50°C under vacuum, exhibited an elongation at break of 375±25% and a compressive strength of 1.73 MPa, a 3.7-fold and 200-fold enhancement over the β-TCP, respectively. After the abrupt drop of compressive load, the β-TCP/PGS scaffolds exhibited a full recovery of their original shape. More importantly, the PGS polymer in the β-TCP/PGS scaffolds could direct the biomineralization of Ca/P from a particulate shape into a nanofiber-interweaved structure. Furthermore, the β-TCP/PGS scaffolds allowed cell penetration and proliferation, indicating good cytobiocompatibility. It is believed that β-TCP/PGS scaffolds have great potential for application in rigid tissue regeneration. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Seismic Target Classification Using a Wavelet Packet Manifold in Unattended Ground Sensors Systems

    Directory of Open Access Journals (Sweden)

    Enliang Song

    2013-07-01

    One of the most challenging problems in target classification is the extraction of a robust feature that can effectively represent a specific type of target. The use of seismic signals in unattended ground sensor (UGS) systems makes this problem more complicated, because the seismic target signal is non-stationary, geology-dependent and has a high-dimensional feature space. This paper proposes a new feature extraction algorithm, called wavelet packet manifold (WPM), which applies the neighborhood preserving embedding (NPE) algorithm of manifold learning to the wavelet packet node energy (WPNE) of seismic signals. By combining non-stationary information and low-dimensional manifold information, WPM provides a more robust representation for seismic target classification. By using a K-nearest-neighbors classifier on the WPM signature, the algorithm of wavelet packet manifold classification (WPMC) is proposed. Experimental results show that the proposed WPMC can not only reduce feature dimensionality, but also improve the classification accuracy up to 95.03%. Moreover, compared with state-of-the-art methods, WPMC is more suitable for UGS in terms of recognition ratio and computational complexity.
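
    The final classification stage, K-nearest-neighbors voting on the low-dimensional signature, can be sketched as follows. The feature vectors and class labels here are toy values, not the paper's WPM signatures.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """k-nearest-neighbour majority vote using squared Euclidean
    distance in the (low-dimensional) feature space."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda fv_lbl: dist(fv_lbl[0], query))[:k]
    return Counter(lbl for _, lbl in nearest).most_common(1)[0][0]

# Toy 2-D signatures for two target classes (hypothetical values)
train = [([0.1, 0.2], "person"), ([0.2, 0.1], "person"),
         ([0.9, 0.8], "vehicle"), ([0.8, 0.9], "vehicle"),
         ([0.85, 0.85], "vehicle")]
```

Because the manifold embedding compresses the WPNE features to a few dimensions first, the distance computation stays cheap, which is one reason the full pipeline suits low-power UGS nodes.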

  2. Definition, classification, and epidemiology of pulmonary arterial hypertension.

    Science.gov (United States)

    Hoeper, Marius M

    2009-08-01

    Pulmonary arterial hypertension (PAH) is a distinct subgroup of pulmonary hypertension that comprises idiopathic PAH, familial/heritable forms, and PAH associated with connective tissue disease, congenital heart disease, portal hypertension, human immunodeficiency virus (HIV) infection, and some other conditions. The hemodynamic definition of PAH was recently revised: PAH is now defined by a mean pulmonary artery pressure at rest > or =25 mm Hg in the presence of a pulmonary capillary wedge pressure < or =15 mm Hg. The exercise criterion (mean pulmonary artery pressure > or =30 mm Hg during exercise) that was used in the old definition of PAH has been removed because there are no robust data that would allow defining an upper limit of normal for the pulmonary pressure during exercise. The revised classification of pulmonary hypertension still consists of five major groups: (1) PAH, (2) pulmonary hypertension due to left heart disease, (3) pulmonary hypertension due to chronic lung disease and/or hypoxia, (4) chronic thromboembolic pulmonary hypertension, and (5) miscellaneous forms. Modifications have been made in some of these groups, such as the addition of schistosomiasis-related pulmonary hypertension and pulmonary hypertension in patients with chronic hemolytic anemia to group 1.

  3. Robust photometric stereo using structural light sources

    Science.gov (United States)

    Han, Tian-Qi; Cheng, Yue; Shen, Hui-Liang; Du, Xin

    2014-05-01

    We propose a robust photometric stereo method by using structural arrangement of light sources. In the arrangement, light sources are positioned on a planar grid and form a set of collinear combinations. The shadow pixels are detected by adaptive thresholding. The specular highlight and diffuse pixels are distinguished according to their intensity deviations of the collinear combinations, thanks to the special arrangement of light sources. The highlight detection problem is cast as a pattern classification problem and is solved using support vector machine classifiers. Considering the possible misclassification of highlight pixels, the ℓ1 regularization is further employed in normal map estimation. Experimental results on both synthetic and real-world scenes verify that the proposed method can robustly recover the surface normal maps in the case of heavy specular reflection and outperforms the state-of-the-art techniques.

  4. Robust Automatic Modulation Classification Technique for Fading Channels via Deep Neural Network

    Directory of Open Access Journals (Sweden)

    Jung Hwan Lee

    2017-08-01

    In this paper, we propose a deep neural network (DNN)-based automatic modulation classification (AMC) technique for digital communications. While conventional AMC techniques perform well for additive white Gaussian noise (AWGN) channels, classification accuracy degrades for fading channels, where the amplitude and phase of the channel gain change in time. The key contributions of this paper are twofold. First, we analyze the effectiveness of a variety of statistical features for the AMC task in fading channels. We reveal that the features shown to be effective for fading channels are different from those known to be good for AWGN channels. Second, we introduce a new enhanced AMC technique based on the DNN method. We use the extensive and diverse set of statistical features found in our study for the DNN-based classifier. A fully connected feedforward network with four hidden layers is trained to classify the modulation class for several fading scenarios. Numerical evaluation shows that the proposed technique offers a significant performance gain over existing AMC methods in fading channels.

  5. Dimensionality-varied deep convolutional neural network for spectral-spatial classification of hyperspectral data

    Science.gov (United States)

    Qu, Haicheng; Liang, Xuejian; Liang, Shichao; Liu, Wanjun

    2018-01-01

    Many methods of hyperspectral image classification have been proposed recently, and the convolutional neural network (CNN) achieves outstanding performance. However, spectral-spatial classification with a CNN requires an excessively large model, tremendous computation, and a complex network, and a CNN is generally unable to use the noisy bands caused by water-vapor absorption. A dimensionality-varied CNN (DV-CNN) is proposed to address these issues. There are four stages in DV-CNN, and the dimensionalities of the spectral-spatial feature maps vary with the stages. DV-CNN can reduce the computation and simplify the structure of the network. All feature maps are processed by more kernels in higher stages to extract more precise features. DV-CNN also improves the classification accuracy and enhances the robustness to water-vapor absorption bands. The experiments are performed on the Indian Pines and Pavia University scene data sets. The classification performance of DV-CNN is compared with state-of-the-art methods, including variants of CNN, traditional methods, and other deep learning methods. A performance analysis of DV-CNN itself is also carried out. The experimental results demonstrate that DV-CNN outperforms state-of-the-art methods for spectral-spatial classification and is also robust to water-vapor absorption bands. Moreover, reasonable parameter selection is effective in improving classification accuracy.

  6. Competition improves robustness against loss of information

    Directory of Open Access Journals (Sweden)

    Arash eKermani Kolankeh

    2015-03-01

    A substantial number of works have aimed at modeling the receptive field properties of the primary visual cortex (V1). Their evaluation criterion is usually the similarity of the model response properties to the responses recorded from biological organisms. However, as several algorithms were able to demonstrate some degree of similarity to biological data based on the existing criteria, we focus on robustness against loss of information in the form of occlusions as an additional constraint for better understanding the algorithmic level of early vision in the brain. We investigate the influence of competition mechanisms on this robustness. Therefore, we compared four methods employing different competition mechanisms, namely, independent component analysis, non-negative matrix factorization with a sparseness constraint, predictive coding/biased competition, and a Hebbian neural network with lateral inhibitory connections. Each of these methods is known to be capable of developing receptive fields comparable to those of V1 simple cells. Since measuring the robustness of methods having simple-cell-like receptive fields against occlusion is difficult, we measure robustness using classification accuracy on the MNIST handwritten digit dataset. For this we trained all methods on the training set of the MNIST handwritten digits dataset and tested them on an MNIST test set with different levels of occlusion. We observe that methods which employ competitive mechanisms have higher robustness against loss of information. The kind of competition mechanism also plays an important role in robustness. Global feedback inhibition, as employed in predictive coding/biased competition, has an advantage compared to local lateral inhibition learned by an anti-Hebbian rule.

  7. Robust linear discriminant models to solve financial crisis in banking sectors

    Science.gov (United States)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni

    2014-12-01

    Linear discriminant analysis (LDA) is a widely used technique in pattern classification via an equation which minimizes the probability of misclassifying cases into their respective categories. However, the performance of the classical estimators in LDA depends heavily on the assumptions of normality and homoscedasticity. Several robust estimators in LDA, such as the Minimum Covariance Determinant (MCD), S-estimators and the Minimum Volume Ellipsoid (MVE), have been proposed by many authors to alleviate the non-robustness of the classical estimates. In this paper, we investigate the financial crisis of the Malaysian banking institutions using robust LDA and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia by using the LDA models. The hit ratio is used to validate the predictive accuracy of the LDA models. The performance of LDA is evaluated by estimating the misclassification rate via the apparent error rate. The results and comparisons show that the robust estimators provide better performance than the classical estimators for LDA.
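
    The sensitivity of classical LDA to outliers is easy to demonstrate in one dimension. The sketch below builds a two-group discriminant rule with a pluggable location estimator; swapping the sample mean for the median is only a toy stand-in for MCD/MVE-style robust estimators, and the "distress"/"healthy" ratios are invented numbers.

```python
import statistics

def lda_rule(g1, g2, loc=statistics.mean):
    """Two-group univariate linear discriminant rule: estimate each
    group's location, place the cut midway between them, and classify
    by which side of the cut an observation falls on (0 -> group 1)."""
    m1, m2 = loc(g1), loc(g2)
    cut = (m1 + m2) / 2
    return lambda x: 1 if (x - cut) * (m2 - m1) > 0 else 0

distress = [1.0, 1.2, 0.9, 1.1, 9.0]   # one gross outlier in training
healthy  = [3.0, 3.2, 2.9, 3.1, 3.0]
classical = lda_rule(distress, healthy, statistics.mean)
robust    = lda_rule(distress, healthy, statistics.median)
```

The single outlier drags the classical distress mean from about 1.05 to 2.64, pushing the cut to 2.84, so a mildly low healthy bank at 2.5 is misclassified as distressed; the median-based rule keeps the cut near 2.05 and classifies it correctly.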

  8. The synchronous neural interactions test as a functional neuromarker for post-traumatic stress disorder (PTSD): a robust classification method based on the bootstrap

    Science.gov (United States)

    Georgopoulos, A. P.; Tan, H.-R. M.; Lewis, S. M.; Leuthold, A. C.; Winskowski, A. M.; Lynch, J. K.; Engdahl, B.

    2010-02-01

    Traumatic experiences can produce post-traumatic stress disorder (PTSD) which is a debilitating condition and for which no biomarker currently exists (Institute of Medicine (US) 2006 Posttraumatic Stress Disorder: Diagnosis and Assessment (Washington, DC: National Academies)). Here we show that the synchronous neural interactions (SNI) test which assesses the functional interactions among neural populations derived from magnetoencephalographic (MEG) recordings (Georgopoulos A P et al 2007 J. Neural Eng. 4 349-55) can successfully differentiate PTSD patients from healthy control subjects. Externally cross-validated, bootstrap-based analyses yielded >90% overall accuracy of classification. In addition, all but one of 18 patients who were not receiving medications for their disease were correctly classified. Altogether, these findings document robust differences in brain function between the PTSD and control groups that can be used for differential diagnosis and which possess the potential for assessing and monitoring disease progression and effects of therapy.
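
    A bootstrap estimate of classification accuracy of the kind reported above can be sketched as follows: resample the per-subject correct/incorrect outcomes with replacement and report the mean accuracy with a 95% percentile interval. The subject counts below are illustrative, not the study's data.

```python
import random

def bootstrap_accuracy(correct_flags, n_boot=2000, seed=42):
    """Bootstrap the overall classification accuracy: resample the
    per-subject 0/1 correctness flags with replacement n_boot times
    and return the mean accuracy with a 95% percentile interval."""
    rng = random.Random(seed)
    n = len(correct_flags)
    accs = sorted(sum(rng.choice(correct_flags) for _ in range(n)) / n
                  for _ in range(n_boot))
    mean_acc = sum(accs) / n_boot
    return mean_acc, (accs[int(0.025 * n_boot)], accs[int(0.975 * n_boot)])

# e.g. 94 of 100 subjects correctly classified (illustrative numbers)
flags = [1] * 94 + [0] * 6
acc, (lo, hi) = bootstrap_accuracy(flags)
```

The percentile interval conveys how much the headline accuracy could move under resampling; external cross-validation, as used in the study, additionally guards against optimistic bias from fitting and testing on the same subjects.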

  9. Robust linear discriminant analysis with distance based estimators

    Science.gov (United States)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

    Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function to distinguish between populations and allocating future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, the LDA yields optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA highly relies on the sample mean and pooled sample covariance matrix which are known to be sensitive to outliers. To alleviate these conflicts, a new robust LDA using distance based estimators known as minimum variance vector (MVV) has been proposed in this study. The MVV estimators were used to substitute the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). Simulation and real data study were conducted to examine on the performance of the proposed RLDR measured in terms of misclassification error rates. The computational result showed that the proposed RLDR is better than the classical LDR and was comparable with the existing robust LDR.

  10. Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.

    Science.gov (United States)

    Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui

    2018-02-01

    In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method aiming to capture the correlations among multiple concepts by leveraging a hypergraph, which has proved beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better expose the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.

  11. A new classification system for congenital laryngeal cysts.

    Science.gov (United States)

    Forte, Vito; Fuoco, Gabriel; James, Adrian

    2004-06-01

    A new classification system for congenital laryngeal cysts based on the extent of the cyst and on the embryologic tissue of origin is proposed. Retrospective chart review. The charts of 20 patients with either congenital or acquired laryngeal cysts that were treated surgically between 1987 and 2002 at the Hospital for Sick Children, Toronto were retrospectively reviewed. Clinical presentation, radiologic findings, surgical management, histopathology, and outcome were recorded. A new classification system is proposed to better appreciate the origin of these cysts and to guide in their successful surgical management. Fourteen of the supraglottic and subglottic simple mucous retention cysts posed no diagnostic or therapeutic challenge and were treated successfully by a single endoscopic excision or marsupialization. The remaining six patients with congenital cysts in the study were deemed more complex, and all required open surgical procedures for cure. On the basis of the analysis of the data of these patients, a new classification of congenital laryngeal cysts is proposed. Type I cysts are confined to the larynx, the cyst wall composed of endodermal elements only, and can be managed endoscopically. Type II cysts extend beyond the confines of the larynx and require an external approach. The Type II cysts are further subclassified histologically on the basis of the embryologic tissue of origin: IIa, composed of endoderm only and IIb, containing endodermal and mesodermal elements (epithelium and cartilage) in the wall of the cyst. A new classification system for congenital laryngeal cysts is proposed on the basis of the extent of the cyst and the embryologic tissue of origin. This classification can help guide the surgeon with initial management and help us better understand the origin of these cysts.

  12. Classification of Gait Types Based on the Duty-factor

    DEFF Research Database (Denmark)

    Fihl, Preben; Moeslund, Thomas B.

    2007-01-01

    … on the speed of the human, the camera setup, etc., and hence a robust descriptor for gait classification. The duty-factor is basically a matter of measuring the ground support of the feet with respect to the stride. We estimate this by comparing the incoming silhouettes to a database of silhouettes with known … ground support. Silhouettes are extracted using the Codebook method and represented using Shape Contexts. The matching with database silhouettes is done using the Hungarian method. While manually estimated duty-factors show a clear classification, the presented system contains misclassifications due …

  13. Learning semantic histopathological representation for basal cell carcinoma classification

    Science.gov (United States)

    Gutiérrez, Ricardo; Rueda, Andrea; Romero, Eduardo

    2013-03-01

    Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue and their relation with other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their inner semantic meaning are still very challenging tasks. In this paper we introduce a new semantic representation that allows histopathological concepts suitable for classification to be described. The approach herein identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrence between atoms, while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. Those images fed a Support Vector Machine classifier per class, using 120 images as the train set and the remaining ones for testing, maintaining the same proportion of each concept in the train and test sets. The obtained classification results, averaged over 100 random partitions of training and test sets, show that our approach is on average almost 6% more sensitive than the bag-of-features representation.

  14. Classification of parotidectomy: a proposed modification to the European Salivary Gland Society classification system.

    Science.gov (United States)

    Wong, Wai Keat; Shetty, Subhaschandra

    2017-08-01

    Parotidectomy remains the mainstay of treatment for both benign and malignant lesions of the parotid gland. There exists a wide range of possible surgical options in parotidectomy in terms of the extent of parotid tissue removed. Growing interest in modifications of the conventional parotidectomy has created an increasing need for uniform terminology. A standardized classification system for describing the extent of parotidectomy is therefore of paramount importance. Recently, the European Salivary Gland Society (ESGS) proposed a novel classification system for parotidectomy. The aim of this study is to evaluate this system. The classification system proposed by the ESGS was critically re-evaluated and modified to increase its accuracy and its acceptability. Modifications mainly focused on subdividing Levels I and II into IA, IB, IIA, and IIB. From June 2006 to June 2016, 126 patients underwent 130 parotidectomies at our hospital. The classification system was tested in that cohort of patients. While the ESGS classification system is comprehensive, it does not cover all possibilities. The addition of Sublevels IA, IB, IIA, and IIB may help to address some of the clinical situations seen and is clinically relevant. We aim to test the modified classification system for partial parotidectomy to address some of the challenges mentioned.

  15. Adaptive phase k-means algorithm for waveform classification

    Science.gov (United States)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin

    2018-01-01

    Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often introduces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phases as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates certain waveform phase variation and is a good tool for seismic facies analysis.
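
The paper's adaptive phase distance is not given in full here; as an illustrative sketch (not the authors' implementation), a k-means whose distance tolerates small phase shifts by scanning over circular lags might look like:

```python
import numpy as np

def shift_distance(a, b, max_lag=3):
    # smallest Euclidean distance over small circular lags: a crude
    # stand-in for a phase-tolerant waveform distance (illustrative only)
    return min(np.linalg.norm(a - np.roll(b, lag))
               for lag in range(-max_lag, max_lag + 1))

def phase_kmeans(traces, k, iters=10):
    # deterministic toy init: first k traces serve as initial model traces
    centers = traces[:k].copy()
    for _ in range(iters):
        # assign each trace to the model trace with the best-aligned distance
        labels = np.array([min(range(k),
                               key=lambda j: shift_distance(t, centers[j]))
                           for t in traces])
        # update model traces as plain means of their members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = traces[labels == j].mean(axis=0)
    return labels
```

The published method also updates the model traces at the best interfering phase; here the update is a plain mean, which is enough to show the lag-tolerant assignment step.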

  16. KINEMATIC CLASSIFICATIONS OF LOCAL INTERACTING GALAXIES: IMPLICATIONS FOR THE MERGER/DISK CLASSIFICATIONS AT HIGH-z

    International Nuclear Information System (INIS)

    Hung, Chao-Ling; Larson, Kirsten L.; Sanders, D. B.; Rich, Jeffrey A.; Yuan, Tiantian; Kewley, Lisa J.; Casey, Caitlin M.; Smith, Howard A.; Hayward, Christopher C.

    2015-01-01

    The classification of galaxy mergers and isolated disks is key for understanding the relative importance of galaxy interactions and secular evolution during the assembly of galaxies. Galaxy kinematics as traced by emission lines have been used to suggest the existence of a significant population of high-z star-forming galaxies consistent with isolated rotating disks. However, recent studies have cautioned that post-coalescence mergers may also display disk-like kinematics. To further investigate the robustness of merger/disk classifications based on kinematic properties, we carry out a systematic classification of 24 local (U)LIRGs spanning a range of morphologies: from isolated spiral galaxies, ongoing interacting systems, to fully merged remnants. We artificially redshift the Wide Field Spectrograph observations of these local (U)LIRGs to z = 1.5 to make a realistic comparison with observations at high-z, and also to ensure that all galaxies have the same spatial sampling of ∼900 pc. Using both kinemetry-based and visual classifications, we find that the reliability of kinematic classification shows a strong trend with the interaction stage of galaxies. Mergers with two nuclei and tidal tails have the most distinct kinematics compared to isolated disks, whereas a significant population of the interacting disks and merger remnants are indistinguishable from isolated disks. The high fraction of mergers displaying disk-like kinematics reflects the complexity of the dynamics during galaxy interactions. Additional merger indicators such as morphological properties traced by stars or molecular gas are required to further constrain the merger/disk classifications at high-z.

  17. Bin Ratio-Based Histogram Distances and Their Application to Image Classification.

    Science.gov (United States)

    Hu, Weiming; Xie, Nianhua; Hu, Ruiguang; Ling, Haibin; Chen, Qiang; Yan, Shuicheng; Maybank, Stephen

    2014-12-01

    Large variations in image background may cause partial matching and normalization problems for histogram-based representations, i.e., the histograms of the same category may have bins which are significantly different, and normalization may produce large changes in the differences between corresponding bins. In this paper, we deal with this problem by using the ratios between bin values of histograms, rather than the bin-value differences used in traditional histogram distances. We propose a bin ratio-based histogram distance (BRD), which is an intra-cross-bin distance, in contrast with previous bin-to-bin distances and cross-bin distances. The BRD is robust to partial matching and histogram normalization, and captures correlations between bins with only a linear computational complexity. We combine the BRD with the ℓ1 histogram distance and the χ² histogram distance to generate the ℓ1 BRD and the χ² BRD, respectively. These combinations exploit and benefit from the robustness of the BRD under partial matching and the robustness of the ℓ1 and χ² distances to small noise. We propose a method for assessing the robustness of histogram distances to partial matching. The BRDs and logistic regression-based histogram fusion are applied to image classification. The experimental results on synthetic data sets show the robustness of the BRDs to partial matching, and the experiments on seven benchmark data sets demonstrate promising results of the BRDs for image classification.
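
The exact BRD formula is defined in the paper; the following toy comparison (an illustrative sketch, not the authors' distance) only shows why pairwise bin ratios are insensitive to the global rescaling introduced by histogram normalization, while bin-to-bin differences are not:

```python
import numpy as np

def ratio_matrix(h, eps=1e-12):
    # matrix of pairwise bin ratios h_i / h_j; eps guards against empty bins
    h = np.asarray(h, dtype=float) + eps
    return h[:, None] / h[None, :]

def ratio_distance(h, g):
    # mean absolute difference between the two ratio matrices
    return np.abs(ratio_matrix(h) - ratio_matrix(g)).mean()

h = np.array([2.0, 4.0, 6.0])
g = 10.0 * h                    # same histogram shape, rescaled
print(ratio_distance(h, g))     # ≈ 0: ratios ignore the rescaling
print(np.abs(h - g).sum())      # 108.0: plain L1 distance is large
```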

  18. Signal classification for acoustic neutrino detection

    International Nuclear Information System (INIS)

    Neff, M.; Anton, G.; Enzenhöfer, A.; Graf, K.; Hößl, J.; Katz, U.; Lahmann, R.; Richardt, C.

    2012-01-01

    This article focuses on signal classification for deep-sea acoustic neutrino detection. In the deep sea, the background of transient signals is very diverse. Approaches like matched filtering are not sufficient to distinguish between neutrino-like signals and other transient signals with similar signature, which form the acoustic background for neutrino detection in the deep-sea environment. A classification system based on machine learning algorithms is analysed with the goal to find a robust and effective way to perform this task. For a well-trained model, a testing error on the level of 1% is achieved for strong classifiers like Random Forest and Boosting Trees using the extracted features of the signal as input and utilising dense clusters of sensors instead of single sensors.

  19. Evaluation of thyroid tissue by Raman spectroscopy

    Science.gov (United States)

    Teixeira, C. S. B.; Bitar, R. A.; Santos, A. B. O.; Kulcsar, M. A. V.; Friguglietti, C. U. M.; Martinho, H. S.; da Costa, R. B.; Martin, A. A.

    2010-02-01

    The thyroid is a small gland in the neck consisting of two lobes connected by an isthmus. Its main function is to produce the hormones thyroxine (T4), triiodothyronine (T3) and calcitonin. Thyroid disorders can disturb the production of these hormones, which will affect numerous processes within the body such as regulating metabolism and the utilization of cholesterol, fats, proteins, and carbohydrates. The gland itself can also be injured, for example by neoplasias, which are considered the most important disorders: they damage the gland and are difficult to diagnose. There are several types of thyroid cancer: Papillary, Follicular, Medullary, and Anaplastic. The occurrence rate is generally between 4 and 7% and is on the increase (by about 30%), probably due to new technology that is able to find small thyroid cancers that may not have been found previously. The most common methods used for thyroid diagnosis are anamnesis, ultrasonography, and laboratory exams (Fine Needle Aspiration Biopsy, FNAB). However, the sensitivity of those tests is rather poor, with a high rate of false-negative results; therefore there is an urgent need to develop new diagnostic techniques. Raman spectroscopy has been presented as a valuable tool for cancer diagnosis in many different tissues. In this work, 27 fragments of the thyroid were collected from 18 patients, comprising the following histologic groups: goitre adjacent tissue, goitre nodular tissue, follicular adenoma, follicular carcinoma, and papillary carcinoma. Spectral collection was done with a commercial FT-Raman spectrometer (Bruker RFS100/S) using 1064 nm laser excitation and a Ge detector. Principal Component Analysis, Cluster Analysis, and Linear Discriminant Analysis with cross-validation were applied as spectral classification algorithms. Comparing the goitre adjacent tissue with the goitre nodular region, an index of 58.3% of correct classification was obtained. Between goitre (nodular region and …

  20. A vocabulary for the identification and delineation of teratoma tissue components in hematoxylin and eosin-stained samples

    Directory of Open Access Journals (Sweden)

    Ramamurthy Bhagavatula

    2014-01-01

    Full Text Available We propose a methodology for the design of features mimicking the visual cues used by pathologists when identifying tissues in hematoxylin and eosin (H&E)-stained samples. Background: H&E staining is the gold standard in clinical histology; it is cheap and universally used, producing a vast number of histopathological samples. While pathologists accurately and consistently identify tissues and their pathologies, it is a time-consuming and expensive task, establishing the need for automated algorithms for improved throughput and robustness. Methods: We use an iterative feedback process to design a histopathology vocabulary (HV), a concise set of features that mimic the visual cues used by pathologists, e.g. "cytoplasm color" or "nucleus density." These features are based in histology and understood by both pathologists and engineers. We compare our HV to several generic texture-feature sets in a pixel-level classification algorithm. Results: Results on delineating and identifying tissues in teratoma tumor samples validate our expert knowledge-based approach. Conclusions: The HV can be an effective tool for identifying and delineating teratoma components from images of H&E-stained tissue samples.

  1. Recursive SVM feature selection and sample classification for mass-spectrometry and microarray data

    Directory of Open Access Journals (Sweden)

    Harris Lyndsay N

    2006-04-01

    Full Text Available Abstract Background Like microarray-based investigations, high-throughput proteomics techniques require machine learning algorithms to identify biomarkers that are informative for biological classification problems. Feature selection and classification algorithms need to be robust to noise and outliers in the data. Results We developed a recursive support vector machine (R-SVM) algorithm to select important genes/biomarkers for the classification of noisy data. We compared its performance to a similar, state-of-the-art method (SVM recursive feature elimination, or SVM-RFE), paying special attention to the ability to recover the true informative genes/biomarkers and the robustness to outliers in the data. Simulation experiments show that a 5%-20% improvement over SVM-RFE can be achieved with regard to these properties. The SVM-based methods are also compared with a conventional univariate method and their respective strengths and weaknesses are discussed. R-SVM was applied to two sets of SELDI-TOF-MS proteomics data, one from a human breast cancer study and the other from a study on rat liver cirrhosis. Important biomarkers found by the algorithm were validated by follow-up biological experiments. Conclusion The proposed R-SVM method is suitable for analyzing noisy high-throughput proteomics and microarray data and it outperforms SVM-RFE in the robustness to noise and in the ability to recover informative features. The multivariate SVM-based method outperforms the univariate method in classification performance, but univariate methods can reveal more of the differentially expressed features, especially when there are correlations between the features.
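
The R-SVM code itself is not reproduced here, but its baseline, SVM-RFE, can be sketched with scikit-learn (assuming scikit-learn is available; the synthetic data merely stands in for mass-spectrometry features):

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# synthetic "proteomics" data: 100 samples x 50 features, where only the
# first three features carry class information and the rest are noise
X = rng.normal(size=(100, 50))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)

# recursively eliminate features, ranking them by linear-SVM weights
selector = RFE(SVC(kernel="linear"), n_features_to_select=3, step=5)
selector.fit(X, y)
picked = np.flatnonzero(selector.support_)
print(picked)  # ideally the informative features 0, 1, 2
```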

  2. Three-class classification in computer-aided diagnosis of breast cancer by support vector machine

    Science.gov (United States)

    Sun, Xuejun; Qian, Wei; Song, Dansheng

    2004-05-01

    Design of the classifier in a computer-aided diagnosis (CAD) scheme of breast cancer plays an important role in its overall performance in sensitivity and specificity. Classification of a detected object as malignant lesion, benign lesion, or normal tissue on a mammogram is a typical three-class pattern recognition problem. This paper presents a three-class classification approach using a two-stage classifier combined with a support vector machine (SVM) learning algorithm for classification of breast cancer on mammograms. The first classification stage is used to separate abnormal areas from normal breast tissue, and the second stage classifies the detected abnormal objects as malignant or benign. A series of spatial, morphology and texture features have been extracted on detected object areas. Using a genetic algorithm (GA), different feature groups for each classification stage have been investigated. Computerized free-response receiver operating characteristic (FROC) and receiver operating characteristic (ROC) analyses have been employed at the different classification stages. Results showed an obvious improvement in both sensitivity and specificity with the proposed classification approach compared with conventional two-class classification approaches, indicating its effectiveness in the classification of breast cancer on mammograms.
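
As a minimal sketch of the two-stage idea (purely synthetic 2-D features, not the paper's GA-selected feature groups), stage one separates lesions from normal tissue and stage two splits the lesions into benign and malignant:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# toy 2-D "feature" data: normal tissue near the origin, benign and
# malignant lesions in two displaced clusters (invented for illustration)
normal    = rng.normal(0.0, 0.3, size=(60, 2))
benign    = rng.normal(0.0, 0.3, size=(60, 2)) + [3.0, 0.0]
malignant = rng.normal(0.0, 0.3, size=(60, 2)) + [3.0, 3.0]

X = np.vstack([normal, benign, malignant])
y = np.array([0] * 60 + [1] * 60 + [2] * 60)   # 0=normal, 1=benign, 2=malignant

# stage 1: abnormal (lesion) vs normal tissue
stage1 = SVC(kernel="rbf").fit(X, (y > 0).astype(int))
# stage 2: benign vs malignant, trained only on the abnormal cases
abnormal = y > 0
stage2 = SVC(kernel="rbf").fit(X[abnormal], y[abnormal])

def classify(x):
    x = np.asarray(x, dtype=float).reshape(1, -1)
    if stage1.predict(x)[0] == 0:
        return 0                      # normal tissue
    return int(stage2.predict(x)[0])  # benign or malignant lesion
```

For example, `classify([0.0, 0.0])` falls in the normal cluster and never reaches stage two.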

  3. Mechanical design in embryos: mechanical signalling, robustness and developmental defects.

    Science.gov (United States)

    Davidson, Lance A

    2017-05-19

    Embryos are shaped by the precise application of force against the resistant structures of multicellular tissues. Forces may be generated, guided and resisted by cells, extracellular matrix, interstitial fluids, and how they are organized and bound within the tissue's architecture. In this review, we summarize our current thoughts on the multiple roles of mechanics in direct shaping, mechanical signalling and robustness of development. Genetic programmes of development interact with environmental cues to direct the composition of the early embryo and endow cells with active force production. Biophysical advances now provide experimental tools to measure mechanical resistance and collective forces during morphogenesis and are allowing integration of this field with studies of signalling and patterning during development. We focus this review on concepts that highlight this integration, and how the unique contributions of mechanical cues and gradients might be tested side by side with conventional signalling systems. We conclude with speculation on the integration of large-scale programmes of development, and how mechanical responses may ensure robust development and serve as constraints on programmes of tissue self-assembly. This article is part of the themed issue 'Systems morphodynamics: understanding the development of tissue hardware'. © 2017 The Author(s).

  4. Identification of immune cell infiltration in hematoxylin-eosin stained breast cancer samples: texture-based classification of tissue morphologies

    Science.gov (United States)

    Turkki, Riku; Linder, Nina; Kovanen, Panu E.; Pellinen, Teijo; Lundin, Johan

    2016-03-01

    The characteristics of immune cells in the tumor microenvironment of breast cancer capture clinically important information. Despite the heterogeneity of tumor-infiltrating immune cells, it has been shown that the degree of infiltration assessed by visual evaluation of hematoxylin-eosin (H and E) stained samples has prognostic and possibly predictive value. However, quantification of the infiltration in H and E-stained tissue samples is currently dependent on visual scoring by an expert. Computer vision enables automated characterization of the components of the tumor microenvironment, and texture-based methods have successfully been used to discriminate between different tissue morphologies and cell phenotypes. In this study, we evaluate whether local binary pattern texture features with superpixel segmentation and classification with support vector machine can be utilized to identify immune cell infiltration in H and E-stained breast cancer samples. Guided with the pan-leukocyte CD45 marker, we annotated training and test sets from 20 primary breast cancer samples. In the training set of arbitrary sized image regions (n=1,116) a 3-fold cross-validation resulted in 98% accuracy and an area under the receiver-operating characteristic curve (AUC) of 0.98 to discriminate between immune cell-rich and -poor areas. In the test set (n=204), we achieved an accuracy of 96% and AUC of 0.99 to label cropped tissue regions correctly into immune cell-rich and -poor categories. The obtained results demonstrate strong discrimination between immune cell-rich and -poor tissue morphologies. The proposed method can provide a quantitative measurement of the degree of immune cell infiltration and can be applied to digitally scanned H and E-stained breast cancer samples for diagnostic purposes.
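
As a much simplified sketch of the texture pipeline (a hand-rolled 8-neighbour LBP and a nearest-centroid rule standing in for superpixel segmentation and the SVM; illustrative only):

```python
import numpy as np

def lbp_histogram(img):
    # 8-neighbour local binary pattern code for each interior pixel,
    # summarized as a normalized 256-bin histogram
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def train_centroids(patches, labels):
    # mean LBP histogram per class: a crude stand-in for the SVM
    feats = {l: [] for l in set(labels)}
    for p, l in zip(patches, labels):
        feats[l].append(lbp_histogram(p))
    return {l: np.mean(f, axis=0) for l, f in feats.items()}

def predict(patch, centroids):
    f = lbp_histogram(patch)
    return min(centroids, key=lambda l: np.linalg.norm(f - centroids[l]))
```

On synthetic patches, smooth gradients and rough binary noise produce clearly different LBP histograms, so even this crude classifier separates them.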

  5. An interobserver reliability comparison between the Orthopaedic Trauma Association's open fracture classification and the Gustilo and Anderson classification.

    Science.gov (United States)

    Ghoshal, A; Enninghorst, N; Sisak, K; Balogh, Z J

    2018-02-01

    To evaluate interobserver reliability of the Orthopaedic Trauma Association's open fracture classification system (OTA-OFC). Patients of any age with a first presentation of an open long bone fracture were included. Standard radiographs, wound photographs, and a short clinical description were given to eight orthopaedic surgeons, who independently evaluated the injury using both the Gustilo and Anderson (GA) and OTA-OFC classifications. The responses were compared for variability using Cohen's kappa. The overall interobserver agreement was κ = 0.44 for the GA classification and κ = 0.49 for OTA-OFC, which reflects moderate agreement (0.41 to 0.60) for both classifications. The agreement in the five categories of OTA-OFC was: for skin, κ = 0.55 (moderate); for muscle, κ = 0.44 (moderate); for arterial injury, κ = 0.74 (substantial); for contamination, κ = 0.35 (fair); and for bone loss, κ = 0.41 (moderate). Although the OTA-OFC, with similar interobserver agreement to GA, offers a more detailed description of open fractures, further development may be needed to make it a reliable and robust tool. Cite this article: Bone Joint J 2018;100-B:242-6. ©2018 The British Editorial Society of Bone & Joint Surgery.
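
Cohen's kappa, the agreement statistic used above, corrects the observed agreement between two raters for the agreement expected by chance; a minimal sketch:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters grading the same cases."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n       # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(c1) | set(c2)
    pe = sum((c1[c] / n) * (c2[c] / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# two raters grading 4 fractures: they agree on 3 of 4 cases
print(cohens_kappa(list("AABB"), list("AABA")))  # → 0.5
```

Here the raw agreement is 0.75, but chance agreement is 0.5, so κ = (0.75 − 0.5)/(1 − 0.5) = 0.5.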

  6. The ability of current statistical classifications to separate services and manufacturing

    DEFF Research Database (Denmark)

    Christensen, Jesper Lindgaard

    2013-01-01

    This paper explores the performance of current statistical classification systems in classifying firms and, in particular, their ability to distinguish between firms that provide services and firms that provide manufacturing. We find that a large share of firms, almost 20%, are not classified … as expected based on a comparison of their statements of activities with the assigned industry codes. This result is robust to analyses on different levels of aggregation and is validated in an additional survey. It is well known from earlier literature that industry classification systems are not perfect. … This paper provides a quantification of the flaws in classifications of firms. Moreover, it is explained why the classifications of firms are imprecise. The increasing complexity of production, inertia in changes to statistical systems and the increasing integration of manufacturing products and services …

  7. Hunter-gatherer postcranial robusticity relative to patterns of mobility, climatic adaptation, and selection for tissue economy.

    Science.gov (United States)

    Stock, J T

    2006-10-01

    Human skeletal robusticity is influenced by a number of factors, including habitual behavior, climate, and physique. Conflicting evidence as to the relative importance of these factors complicates our ability to interpret variation in robusticity in the past. It remains unclear how the pattern of robusticity in the skeleton relates to adaptive constraints on skeletal morphology. This study investigates variation in robusticity in claviculae, humeri, ulnae, femora, and tibiae among human foragers, relative to climate and habitual behavior. Cross-sectional geometric properties of the diaphyses are compared among hunter-gatherers from southern Africa (n = 83), the Andaman Islands (n = 32), Tierra del Fuego (n = 34), and the Great Lakes region (n = 15). The robusticity of both proximal and distal limb segments correlates negatively with climate and positively with patterns of terrestrial and marine mobility among these groups. However, the relative correspondence between robusticity and these factors varies throughout the body. In the lower limb, partial correlations between polar second moment of area (J(0.73)) and climate decrease from proximal to distal section locations, while this relationship increases from proximal to distal in the upper limb. Patterns of correlation between robusticity and mobility, either terrestrial or marine, generally increase from proximal to distal in the lower and upper limbs, respectively. This suggests that there may be a stronger relationship between observed patterns of diaphyseal hypertrophy and behavioral differences between populations in distal elements. Despite this trend, strength circularity indices at the femoral midshaft show the strongest correspondence with terrestrial mobility, particularly among males.

  8. Added soft tissue contrast using signal attenuation and the fractal dimension for optical coherence tomography images of porcine arterial tissue

    International Nuclear Information System (INIS)

    Flueraru, C; Mao, Y; Chang, S; Popescu, D P; Sowa, M G

    2010-01-01

    Optical coherence tomography (OCT) images of left-descending coronary tissues harvested from three porcine specimens were acquired with a home-built swept-source OCT setup. Despite the fact that OCT is capable of acquiring high resolution circumferential images of vessels, many distinct histological features of a vessel have comparable optical properties, leading to poor contrast in OCT images. Two classification methods were tested in this report for the purpose of enhancing contrast between soft-tissue components of porcine coronary vessels. One method involved analyzing the attenuation of the OCT signal as a function of light penetration into the tissue. We demonstrated that by analyzing the signal attenuation in this manner we were able to differentiate two media sub-layers with different orientations of the smooth muscle cells. The other classification method used in our study was fractal analysis. Fractal analysis was implemented in a box-counting (fractal dimension) image-processing code and was used as a tool to differentiate and quantify variations in tissue texture at various locations in the OCT images. The calculated average fractal dimensions had different values in distinct regions of interest (ROIs) within the imaged coronary samples. When compared to the results obtained by using the attenuation of the OCT signal, the method of fractal analysis demonstrated better classification potential for distinguishing amongst the tissue ROIs.
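
Box counting, the fractal-dimension estimator named above, is easy to sketch in NumPy (illustrative; the paper's exact pre-processing of the OCT images is not reproduced): count occupied s × s boxes at several scales and fit the log-log slope.

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16)):
    # estimate the box-counting dimension of a 2-D binary image:
    # count non-empty s x s boxes for several s, fit the log-log slope
    counts = []
    for s in sizes:
        h, w = binary.shape
        # trim so the image tiles exactly, then reduce to s x s blocks
        trimmed = binary[:h - h % s, :w - w % s]
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# a filled square is two-dimensional, so its box-counting dimension is 2
img = np.ones((64, 64), dtype=bool)
print(box_counting_dimension(img))  # ≈ 2.0
```

A single-pixel-wide line gives a dimension near 1, and textured tissue regions fall in between, which is what makes the number usable as a texture descriptor.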

  9. A Robust Classification of Galaxy Spectra: Dealing with Noisy and Incomplete Data

    Science.gov (United States)

    Connolly, A. J.; Szalay, A. S.

    1999-05-01

    Over the next few years new spectroscopic surveys (from the optical surveys of the Sloan Digital Sky Survey and the 2dF Galaxy Survey through to space-based ultraviolet satellites such as GALEX) will provide the opportunity and challenge of understanding how galaxies of different spectral type evolve with redshift. Techniques have been developed to classify galaxies based on their continuum and line spectra. Some of the most promising of these have used the Karhunen & Loève transform (or principal component analysis) to separate galaxies into distinct classes. Their limitation has been that they assume that the spectral coverage and quality of the spectra are constant for all galaxies within a given sample. In this paper we develop a general formalism that accounts for the missing data within the observed spectra (such as the removal of sky lines or the effect of sampling different intrinsic rest-wavelength ranges due to the redshift of a galaxy). We demonstrate that by correcting for these gaps we can recover an almost redshift-independent classification scheme. From this classification we can derive an optimal interpolation that reconstructs the underlying galaxy spectral energy distributions in the regions of missing data. This provides a simple and effective mechanism for building galaxy spectral energy distributions directly from data that may be noisy, incomplete, or drawn from a number of different sources.
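
A toy version of the gap-correction idea (a sketch under the assumption of a low-rank spectral library, not the paper's full formalism): fit principal components on complete spectra, solve for the component amplitudes of a gappy spectrum using only its observed pixels, and read the reconstruction off inside the gap.

```python
import numpy as np

rng = np.random.default_rng(0)
wave = np.linspace(0.0, 1.0, 100)
# toy "spectra": random mixtures of two smooth basis shapes plus noise
basis = np.vstack([np.sin(2 * np.pi * wave), np.cos(4 * np.pi * wave)])
coeffs = rng.normal(size=(200, 2))
spectra = coeffs @ basis + 0.01 * rng.normal(size=(200, 100))

# principal components of the mean-subtracted training spectra
mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
pcs = Vt[:2]                      # keep the two signal components

# a new spectrum with a simulated gap (e.g. a masked sky line)
truth = 1.5 * basis[0] - 0.5 * basis[1]
mask = np.ones(100, dtype=bool)
mask[40:55] = False               # these pixels are "missing"

# fit PC amplitudes using only the observed pixels, reconstruct everywhere
A = pcs[:, mask].T
amps, *_ = np.linalg.lstsq(A, (truth - mean)[mask], rcond=None)
reconstructed = mean + amps @ pcs

print(np.abs(reconstructed[~mask] - truth[~mask]).max())  # small: gap recovered
```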

  10. Classifier fusion for VoIP attacks classification

    Science.gov (United States)

    Safarik, Jakub; Rezac, Filip

    2017-05-01

    SIP is one of the most successful protocols in the field of IP telephony communication. It establishes and manages VoIP calls. As the number of SIP implementations rises, we can expect a higher number of attacks on the communication system in the near future. This work aims at malicious SIP traffic classification. A number of various machine learning algorithms have been developed for attack classification. The paper presents a comparison of current research and the use of a classifier fusion method leading to a potential decrease in the classification error rate. Use of classifier combination makes for a more robust solution, avoiding difficulties that may affect single algorithms. Different voting schemes, combination rules, and classifiers are discussed to improve the overall performance. All classifiers have been trained on real malicious traffic. The concept of traffic monitoring depends on a network of honeypot nodes. These honeypots run in several networks spread across different locations. Separation of honeypots allows us to gain independent and trustworthy attack information.
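
The simplest fusion rule mentioned, majority voting across classifiers, can be sketched in a few lines (the labels below are invented; 1 = attack, 0 = benign):

```python
from collections import Counter

def majority_vote(predictions):
    # combine per-classifier labels for one sample by plurality vote,
    # one simple fusion rule among those compared in such studies
    return Counter(predictions).most_common(1)[0][0]

def fuse(classifier_outputs):
    # classifier_outputs[i][j] = label from classifier i for sample j
    return [majority_vote(sample) for sample in zip(*classifier_outputs)]

# three classifiers labelling four SIP traces as attack (1) or benign (0)
clf_a = [1, 0, 1, 0]
clf_b = [1, 1, 1, 0]
clf_c = [0, 0, 1, 1]
print(fuse([clf_a, clf_b, clf_c]))  # → [1, 0, 1, 0]
```

A single classifier's mistake (e.g. `clf_c` on the first trace) is outvoted, which is the robustness argument made above.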

  11. Classification of sports types from tracklets

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    Automatic analysis of video is important in order to process and exploit large amounts of data, e.g. for sports analysis. Classification of sports types is one of the first steps towards a fully automatic analysis of the activities performed at sports arenas. In this work we test the idea … that sports types can be classified from features extracted from short trajectories of the players. From tracklets created by a Kalman filter tracker we extract four robust features: total distance, lifespan, distance span and mean speed. For classification we use a quadratic discriminant analysis. In our … experiments we use 30 2-minute thermal video sequences from each of five different sports types. By applying a 10-fold cross validation we obtain a correct classification rate of 94.5%.
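
The classification step described, quadratic discriminant analysis on four tracklet features, can be sketched with scikit-learn (the feature values below are invented for illustration, not taken from the paper):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
# synthetic tracklet features: [total distance, lifespan, distance span,
# mean speed]; two made-up sports with different movement statistics
soccer   = rng.normal([40.0, 60.0, 25.0, 2.0], [5.0, 10.0, 4.0, 0.3], (50, 4))
handball = rng.normal([20.0, 30.0, 10.0, 1.0], [5.0, 10.0, 4.0, 0.3], (50, 4))

X = np.vstack([soccer, handball])
y = np.array([0] * 50 + [1] * 50)   # 0 = soccer-like, 1 = handball-like

# QDA fits one Gaussian (own covariance) per class and uses Bayes' rule
qda = QuadraticDiscriminantAnalysis().fit(X, y)
print(qda.predict([[38.0, 55.0, 24.0, 1.9],
                   [22.0, 28.0, 11.0, 1.1]]))  # → [0 1]
```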

  12. Advances in the classification and treatment of mastocytosis

    DEFF Research Database (Denmark)

    Valent, Peter; Akin, Cem; Hartmann, Karin

    2017-01-01

    Mastocytosis is a term used to denote a heterogeneous group of conditions defined by the expansion and accumulation of clonal (neoplastic) tissue mast cells in various organs. The classification of the World Health Organization (WHO) divides the disease into cutaneous mastocytosis, systemic … leukemia. The clinical impact and prognostic value of this classification have been confirmed in numerous studies, and its basic concept remains valid. However, refinements have recently been proposed by the consensus group, the WHO, and the European Competence Network on Mastocytosis. In addition, new … of mastocytosis, with emphasis on classification, prognostication, and emerging new treatment options in advanced systemic mastocytosis.

  13. Functional Basis of Microorganism Classification.

    Science.gov (United States)

    Zhu, Chengsheng; Delmont, Tom O; Vogel, Timothy M; Bromberg, Yana

    2015-08-01

    Correctly identifying nearest "neighbors" of a given microorganism is important in industrial and clinical applications where close relationships imply similar treatment. Microbial classification based on similarity of physiological and genetic organism traits (polyphasic similarity) is experimentally difficult and, arguably, subjective. Evolutionary relatedness, inferred from phylogenetic markers, facilitates classification but does not guarantee functional identity between members of the same taxon or lack of similarity between different taxa. Using over thirteen hundred sequenced bacterial genomes, we built a novel function-based microorganism classification scheme, functional-repertoire similarity-based organism network (FuSiON; flattened to fusion). Our scheme is phenetic, based on a network of quantitatively defined organism relationships across the known prokaryotic space. It correlates significantly with the current taxonomy, but the observed discrepancies reveal both (1) the inconsistency of functional diversity levels among different taxa and (2) an (unsurprising) bias towards prioritizing, for classification purposes, relatively minor traits of particular interest to humans. Our dynamic network-based organism classification is independent of the arbitrary pairwise organism similarity cut-offs traditionally applied to establish taxonomic identity. Instead, it reveals natural, functionally defined organism groupings and is thus robust in handling organism diversity. Additionally, fusion can use organism meta-data to highlight the specific environmental factors that drive microbial diversification. Our approach provides a complementary view to cladistic assignments and holds important clues for further exploration of microbial lifestyles. Fusion is a more practical fit for biomedical, industrial, and ecological applications, as many of these rely on understanding the functional capabilities of the microbes in their environment and are less concerned with
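
    The core of such a phenetic scheme is a pairwise similarity between functional repertoires. The Jaccard index below is a common stand-in used here for illustration; it is not the paper's exact quantitative measure:

```python
def repertoire_similarity(funcs_a, funcs_b):
    """Jaccard similarity of two organisms' functional repertoires
    (sets of function identifiers): |intersection| / |union|."""
    a, b = set(funcs_a), set(funcs_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical function identifiers for two organisms:
org1 = {"F001", "F002", "F003"}
org2 = {"F002", "F003", "F004"}
print(repertoire_similarity(org1, org2))  # 0.5
```

    Computing this for all organism pairs yields the weighted network from which functionally defined groupings emerge.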

  14. Quality assurance: The 10-Group Classification System (Robson classification), induction of labor, and cesarean delivery.

    LENUS (Irish Health Repository)

    Robson, Michael

    2015-10-01

    Quality assurance in labor and delivery is needed. The method must be simple and consistent, and be of universal value. It needs to be clinically relevant, robust, and prospective, and must incorporate epidemiological variables. The 10-Group Classification System (TGCS) is a simple method providing a common starting point for further detailed analysis within which all perinatal events and outcomes can be measured and compared. The system is demonstrated in the present paper using data for 2013 from the National Maternity Hospital in Dublin, Ireland. Interpretation of the classification can be easily taught. The standard table can provide much insight into the philosophy of care in the population of women studied and also provide information on data quality. With standardization of audit of events and outcomes, any differences in either sizes of groups, events or outcomes can be explained only by poor data collection, significant epidemiological variables, or differences in practice. In April 2015, WHO proposed that the TGCS (also known as the Robson classification) is used as a global standard for assessing, monitoring, and comparing cesarean delivery rates within and between healthcare facilities.
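
    The TGCS assigns every delivery to exactly one group from a handful of obstetric variables. Below is a sketch of the commonly summarized decision logic; consult the WHO Robson classification manual for the authoritative group definitions:

```python
def robson_group(parity, previous_cs, onset, gestation_weeks, presentation, fetuses):
    """Assign a delivery to one of the 10 Robson groups.

    parity: number of previous births; previous_cs: any previous cesarean;
    onset: "spontaneous", "induced" or "prelabour_cs";
    presentation: "cephalic", "breech", "transverse" or "oblique";
    fetuses: number of fetuses.
    """
    if fetuses > 1:                              # all multiple pregnancies
        return 8
    if presentation in ("transverse", "oblique"):
        return 9
    if presentation == "breech":
        return 6 if parity == 0 else 7
    # single cephalic from here on
    if gestation_weeks < 37:                     # all preterm cephalic
        return 10
    if previous_cs:
        return 5
    if parity == 0:
        return 1 if onset == "spontaneous" else 2
    return 3 if onset == "spontaneous" else 4

print(robson_group(parity=0, previous_cs=False, onset="spontaneous",
                   gestation_weeks=40, presentation="cephalic", fetuses=1))  # 1
```

    Because the groups are mutually exclusive and totally inclusive, every delivery record maps to exactly one group, which is what makes cross-facility comparison possible.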

  15. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Full Text Available Recently, the regularized coding-based classification methods (e.g., SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g., the correlations between representation residuals and representation coefficients) and the specific information (the weight matrix of image pixels) to enhance the classification performance. GRR uses the generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
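
    The coding-and-residual classification strategy that GRR builds on can be sketched with plain Tikhonov (ridge) regularization, in the spirit of CRC; GRR's prior-information and pixel-weighting terms are omitted here:

```python
import numpy as np

def regularized_code_classify(X, labels, y, lam=0.01):
    """Classify `y` by a regularized representation over the training
    dictionary X (columns are training samples), then assign the class
    whose coefficients reconstruct y with the smallest residual."""
    d, n = X.shape
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)  # ridge coding
    best, best_err = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        a_c = np.where(mask, alpha, 0.0)          # keep only class-c coefficients
        err = np.linalg.norm(y - X @ a_c)         # class-wise residual
        if err < best_err:
            best, best_err = c, err
    return best

# Toy data: class "a" samples lie near e1, class "b" samples near e2.
X = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
labels = ["a", "a", "b", "b"]
print(regularized_code_classify(X, labels, np.array([0.95, 0.05])))  # a
```

    GRR replaces the plain identity regularizer with a learned prior and iteratively reweights the test-sample features, but the classify-by-residual backbone is the same.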

  16. A Raman spectroscopy bio-sensor for tissue discrimination in surgical robotics.

    Science.gov (United States)

    Ashok, Praveen C; Giardini, Mario E; Dholakia, Kishan; Sibbett, Wilson

    2014-01-01

    We report the development of a fiber-based Raman sensor to be used in tumour margin identification during endoluminal robotic surgery. Although this is a generic platform, the sensor we describe was adapted for the ARAKNES (Array of Robots Augmenting the KiNematics of Endoluminal Surgery) robotic platform. On such a platform, the Raman sensor is intended to identify ambiguous tissue margins during robot-assisted surgeries. To maintain sterility of the probe during surgical intervention, a disposable sleeve was specially designed. A straightforward user-compatible interface was implemented where a supervised multivariate classification algorithm was used to classify different tissue types based on specific Raman fingerprints so that it could be used without prior knowledge of spectroscopic data analysis. The protocol avoids inter-patient variability in data and the sensor system is not restricted for use in the classification of a particular tissue type. Representative tissue classification assessments were performed using this system on excised tissue. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. ACCUWIND - Methods for classification of cup anemometers

    Energy Technology Data Exchange (ETDEWEB)

    Dahlberg, J.Aa.; Friis Pedersen, T.; Busche, P.

    2006-05-15

    Errors associated with the measurement of wind speed are the major sources of uncertainty in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show significant and unacceptable differences. The European CLASSCUP project posed the objectives to quantify the errors associated with the use of cup anemometers, and to develop a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements, in annexes I and J. The classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the result of assessment of systematic deviations. The present report focuses on methods that can be applied for assessment of such systematic deviations. A new alternative method for torque coefficient measurements at inclined flow has been developed, which has then been applied and compared to the existing methods developed in the CLASSCUP project and earlier. A number of approaches, including the use of two cup anemometer models, two methods of torque coefficient measurement, two angular response measurements, and inclusion and exclusion of the influence of friction, have been implemented in the classification process in order to assess the robustness of the methods. The results of the analysis are presented as classification indices, which are compared and discussed. (au)

  18. Robust Sounds of Activities of Daily Living Classification in Two-Channel Audio-Based Telemonitoring

    Directory of Open Access Journals (Sweden)

    David Maunder

    2013-01-01

    Full Text Available Despite recent advances in the area of home telemonitoring, the challenge of automatically detecting the sound signatures of activities of daily living of an elderly patient using nonintrusive and reliable methods remains. This paper investigates the classification of eight typical sounds of daily life from arbitrarily positioned two-microphone sensors under realistic noisy conditions. In particular, the role of several source separation and sound activity detection methods is considered. Evaluations on a new four-microphone database collected under four realistic noise conditions reveal that effective sound activity detection can produce significant gains in classification accuracy and that further gains can be made using source separation methods based on independent component analysis. Encouragingly, the results show that recognition accuracies in the range 70%–100% can be consistently obtained using different microphone-pair positions, under all but the most severe noise conditions.
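
    A minimal energy-based sound activity detector illustrates the kind of preprocessing evaluated above; the paper's detection methods, and the ICA-based separation step, are more elaborate:

```python
import numpy as np

def detect_sound_activity(signal, frame_len=256, threshold_db=-30.0):
    """Flag frames whose short-time energy exceeds a threshold relative
    to the loudest frame in the recording."""
    n_frames = len(signal) // frame_len
    frames = np.reshape(signal[: n_frames * frame_len], (n_frames, frame_len))
    energy = np.sum(frames ** 2, axis=1)                       # per-frame energy
    energy_db = 10.0 * np.log10(np.maximum(energy, 1e-12) / energy.max())
    return energy_db > threshold_db

# Synthetic recording: near-silence followed by a loud event.
rng = np.random.default_rng(0)
quiet = 0.001 * rng.standard_normal(1024)
loud = rng.standard_normal(1024)
active = detect_sound_activity(np.concatenate([quiet, loud]))
print(active)  # first 4 frames inactive, last 4 active
```

    Restricting classification to the active frames is what produced the reported accuracy gains; source separation then further cleans the retained segments.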

  19. Robust through-the-wall radar image classification using a target-model alignment procedure.

    Science.gov (United States)

    Smith, Graeme E; Mobasseri, Bijan G

    2012-02-01

    A through-the-wall radar image (TWRI) bears little resemblance to the equivalent optical image, making it difficult to interpret. To maximize the intelligence that may be obtained, it is desirable to automate the classification of targets in the image to support human operators. This paper presents a technique for classifying stationary targets based on the high-range resolution profile (HRRP) extracted from 3-D TWRIs. The dependence of the image on the target location is discussed using a system point spread function (PSF) approach. It is shown that the position dependence will cause a classifier to fail, unless the image to be classified is aligned to a classifier-training location. A target image alignment technique based on deconvolution of the image with the system PSF is proposed. Comparison of the aligned target images with measured images shows the alignment process introducing normalized mean squared error (NMSE) ≤ 9%. The HRRP extracted from aligned target images are classified using a naive Bayesian classifier supported by principal component analysis. The classifier is tested using a real TWRI of canonical targets behind a concrete wall and shown to obtain correct classification rates ≥ 97%. © 2011 IEEE
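
    The PCA-plus-naive-Bayes classification stage can be sketched as follows; the toy profiles below stand in for measured HRRPs:

```python
import numpy as np

def pca_fit(X, k):
    """Return the mean and top-k principal axes of the rows of X (via SVD)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def nb_fit(Z, y):
    """Per-class feature means and variances for a Gaussian naive Bayes model."""
    return {c: (Z[y == c].mean(0), Z[y == c].var(0) + 1e-6) for c in sorted(set(y))}

def nb_predict(stats, z):
    """Pick the class maximizing the log Gaussian likelihood of z."""
    def loglik(mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (z - mu) ** 2 / var)
    return max(stats, key=lambda c: loglik(*stats[c]))

# Toy range profiles: class 0 peaks early in range, class 1 peaks late.
X = np.array([[5, 4, 0, 0], [4, 5, 0, 0], [0, 0, 5, 4], [0, 0, 4, 5]], float)
y = np.array([0, 0, 1, 1])
mu, axes = pca_fit(X, k=2)
stats = nb_fit((X - mu) @ axes.T, y)
print(nb_predict(stats, (np.array([5, 5, 0, 0.0]) - mu) @ axes.T))  # 0
```

    In the paper, the profiles fed to this stage come from target images first aligned by deconvolving the system PSF.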

  20. Robust Image Hashing Using Radon Transform and Invariant Features

    Directory of Open Access Journals (Sweden)

    Y.L. Liu

    2016-09-01

    Full Text Available A robust image hashing method based on the radon transform and invariant features is proposed for image authentication, image retrieval, and image detection. Specifically, an input image is first converted into a counterpart with a normalized size. Then the invariant centroid algorithm is applied to obtain the invariant feature point and the surrounding circular area, and the radon transform is employed to acquire the mapping coefficient matrix of the area. Finally, the hashing sequence is generated by combining the feature vectors and the invariant moments calculated from the coefficient matrix. Experimental results show that this method not only can resist normal image processing operations, but also some geometric distortions. Comparisons of receiver operating characteristic (ROC) curves indicate that the proposed method outperforms some existing methods in the trade-off between perceptual robustness and discrimination.
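
    Authentication with such a hash ultimately reduces to comparing hash sequences. The normalized Hamming distance below is a common choice for binary hashes and is purely illustrative here, since the paper's hash combines real-valued feature vectors and moments:

```python
def normalized_hamming(h1, h2):
    """Fraction of differing positions between two equal-length binary
    hashes; small values indicate perceptually similar images."""
    assert len(h1) == len(h2)
    return sum(a != b for a, b in zip(h1, h2)) / len(h1)

original = [1, 0, 1, 1, 0, 0, 1, 0]
jpeg_version = [1, 0, 1, 0, 0, 0, 1, 0]  # one bit flipped by compression
print(normalized_hamming(original, jpeg_version))  # 0.125
```

    Sweeping the decision threshold on such a distance is exactly what generates the ROC curves used in the comparison.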

  1. ILAE Classification of the Epilepsies Position Paper of the ILAE Commission for Classification and Terminology

    Science.gov (United States)

    Scheffer, Ingrid E; Berkovic, Samuel; Capovilla, Giuseppe; Connolly, Mary B; French, Jacqueline; Guilhoto, Laura; Hirsch, Edouard; Jain, Satish; Mathern, Gary W.; Moshé, Solomon L; Nordli, Douglas R; Perucca, Emilio; Tomson, Torbjörn; Wiebe, Samuel; Zhang, Yue-Hua; Zuberi, Sameer M

    2017-01-01

    Summary The ILAE Classification of the Epilepsies has been updated to reflect our gain in understanding of the epilepsies and their underlying mechanisms following the major scientific advances which have taken place since the last ratified classification in 1989. As a critical tool for the practising clinician, epilepsy classification must be relevant and dynamic to changes in thinking, yet robust and translatable to all areas of the globe. Its primary purpose is for diagnosis of patients, but it is also critical for epilepsy research, development of antiepileptic therapies and communication around the world. The new classification originates from a draft document submitted for public comments in 2013 which was revised to incorporate extensive feedback from the international epilepsy community over several rounds of consultation. It presents three levels, starting with seizure type where it assumes that the patient is having epileptic seizures as defined by the new 2017 ILAE Seizure Classification. After diagnosis of the seizure type, the next step is diagnosis of epilepsy type, including focal epilepsy, generalized epilepsy, combined generalized and focal epilepsy, and also an unknown epilepsy group. The third level is that of epilepsy syndrome where a specific syndromic diagnosis can be made. The new classification incorporates etiology along each stage, emphasizing the need to consider etiology at each step of diagnosis as it often carries significant treatment implications. Etiology is broken into six subgroups, selected because of their potential therapeutic consequences. New terminology is introduced such as developmental and epileptic encephalopathy. The term benign is replaced by the terms self-limited and pharmacoresponsive, to be used where appropriate. It is hoped that this new framework will assist in improving epilepsy care and research in the 21st century. PMID:28276062

  2. Assessment And Testing of Industrial Devices Robustness Against Cyber Security Attacks

    CERN Document Server

    Tilaro, F

    2011-01-01

    CERN (the European Organization for Nuclear Research), like any organization, needs to achieve the conflicting objectives of connecting its operational network to the Internet while at the same time keeping its industrial control systems secure from external and internal cyber attacks. With this in mind, the ISA-99 international cyber security standard has been adopted at CERN as a reference model to define a set of guidelines and security robustness criteria applicable to any network device. Device robustness represents a key link in the defense-in-depth concept, as some attacks will inevitably penetrate security boundaries and thus require further protection measures. When assessing the cyber security robustness of devices we have singled out control system-relevant attack patterns derived from the well-known CAPEC classification. Once a vulnerability is identified, it needs to be documented, prioritized and reproduced at will in a dedicated test environment for debugging purposes. CERN - in collaboration ...

  3. Yarn-dyed fabric defect classification based on convolutional neural network

    Science.gov (United States)

    Jing, Junfeng; Dong, Amei; Li, Pengfei; Zhang, Kaibing

    2017-09-01

    Considering that manual inspection of the yarn-dyed fabric can be time consuming and inefficient, we propose a yarn-dyed fabric defect classification method by using a convolutional neural network (CNN) based on a modified AlexNet. CNN shows powerful ability in performing feature extraction and fusion by simulating the learning mechanism of human brain. The local response normalization layers in AlexNet are replaced by the batch normalization layers, which can enhance both the computational efficiency and classification accuracy. In the training process of the network, the characteristics of the defect are extracted step by step and the essential features of the image can be obtained from the fusion of the edge details with several convolution operations. Then the max-pooling layers, the dropout layers, and the fully connected layers are employed in the classification model to reduce the computation cost and extract more precise features of the defective fabric. Finally, the results of the defect classification are predicted by the softmax function. The experimental results show promising performance with an acceptable average classification rate and strong robustness on yarn-dyed fabric defect classification.
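
    The batch normalization layer that replaces AlexNet's local response normalization can be sketched as follows (forward pass only, per-feature statistics over the batch axis; the learnable scale and shift are scalars here for brevity):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of activations (axis 0 is the batch dimension)
    to zero mean and unit variance per feature, then rescale/shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)  # standardized activations
    return gamma * x_hat + beta            # learnable scale and shift

batch = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
out = batch_norm(batch)
print(out.mean(axis=0), out.std(axis=0))  # ~[0 0], ~[1 1]
```

    Stabilizing the per-layer activation statistics this way is what yields the reported gains in both training efficiency and classification accuracy.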

  4. Catchment Classification: Connecting Climate, Structure and Function

    Science.gov (United States)

    Sawicz, K. A.; Wagener, T.; Sivapalan, M.; Troch, P. A.; Carrillo, G. A.

    2010-12-01

    Hydrology does not yet possess a generally accepted catchment classification framework. Such a classification framework needs to: [1] give names to things, i.e. the main classification step, [2] permit transfer of information, i.e. regionalization of information, [3] permit development of generalizations, i.e. to develop new theory, and [4] provide a first order environmental change impact assessment, i.e., the hydrologic implications of climate, land use and land cover change. One strategy is to create a catchment classification framework based on the notion of catchment functions (partitioning, storage, and release). Results of an empirical study presented here connect climate and structure to catchment function (in the form of select hydrologic signatures), based on an analysis of over 300 US catchments. Initial results indicate a wide assortment of signature relationships with properties of climate, geology, and vegetation. The uncertainty in the different regionalized signatures varies widely, and therefore there is variability in the robustness of classifying ungauged basins. This research provides insight into the controls of hydrologic behavior of a catchment, and enables a classification framework applicable to gauged and ungauged basins across the study domain. This study sheds light on what we can expect to achieve in mapping climate, structure and function in a top-down manner. Results of this study complement work done using a bottom-up physically-based modeling framework to generalize this approach (Carrillo et al., this session).

  5. Hierarchical classification of dynamically varying radar pulse repetition interval modulation patterns.

    Science.gov (United States)

    Kauppi, Jukka-Pekka; Martikainen, Kalle; Ruotsalainen, Ulla

    2010-12-01

    The central purpose of passive signal intercept receivers is to perform automatic categorization of unknown radar signals. Currently, there is an urgent need to develop intelligent classification algorithms for these devices due to emerging complexity of radar waveforms. Especially multifunction radars (MFRs) capable of performing several simultaneous tasks by utilizing complex, dynamically varying scheduled waveforms are a major challenge for automatic pattern classification systems. To assist recognition of complex radar emissions in modern intercept receivers, we have developed a novel method to recognize dynamically varying pulse repetition interval (PRI) modulation patterns emitted by MFRs. We use robust feature extraction and classifier design techniques to assist recognition in unpredictable real-world signal environments. We classify received pulse trains hierarchically which allows unambiguous detection of the subpatterns using a sliding window. Accuracy, robustness and reliability of the technique are demonstrated with extensive simulations using both static and dynamically varying PRI modulation patterns. Copyright © 2010 Elsevier Ltd. All rights reserved.
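
    A sketch of sliding-window PRI feature extraction for the approach above; the window length and the two summary features (mean PRI and a jitter ratio) are illustrative assumptions, not the paper's feature set:

```python
import numpy as np

def pri_features(toas, window=4, step=4):
    """Slide a window over pulse times of arrival (TOAs), compute the
    PRIs (successive TOA differences) in each window, and summarize
    each window with its mean PRI and relative spread."""
    pris = np.diff(np.asarray(toas, float))
    feats = []
    for start in range(0, len(pris) - window + 1, step):
        w = pris[start : start + window]
        feats.append((w.mean(), w.std() / w.mean()))  # mean PRI, jitter ratio
    return feats

# A constant-PRI pulse train followed by a staggered (alternating) pattern:
toas = np.concatenate([np.arange(0, 9), 8 + np.cumsum([1, 2] * 4)])
for mean_pri, jitter in pri_features(toas):
    kind = "constant" if jitter < 0.05 else "varying"
    print(f"mean PRI {mean_pri:.2f}, {kind}")
```

    Classifying each window independently is what allows the subpatterns of a dynamically scheduled MFR waveform to be detected as the emission switches modes.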

  6. Traffic sign classification with dataset augmentation and convolutional neural network

    Science.gov (United States)

    Tang, Qing; Kurnianggoro, Laksono; Jo, Kang-Hyun

    2018-04-01

    This paper presents a method for traffic sign classification using a convolutional neural network (CNN). In this method, we first transform a color image into grayscale, and then normalize it into the range (-1,1) as the preprocessing step. To increase the robustness of the classification model, we apply a dataset augmentation algorithm and create new images to train the model. To avoid overfitting, we utilize a dropout module before the last fully connected layer. To assess the performance of the proposed method, the German traffic sign recognition benchmark (GTSRB) dataset is utilized. Experimental results show that the method is effective in classifying traffic signs.
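
    The preprocessing step can be sketched as follows; the grayscale weights are the common ITU-R BT.601 luminance values, an assumption not stated in the abstract:

```python
import numpy as np

def preprocess(rgb):
    """Convert an RGB image (uint8, H x W x 3) to grayscale and scale
    the result into (-1, 1)."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # H x W, range 0..255
    return gray / 127.5 - 1.0                     # map 0..255 -> -1..1

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [255, 255, 255]                       # one white pixel
out = preprocess(img)
print(out.min(), out.max())                       # ~-1.0 ~1.0
```

    Centering the inputs around zero in this way generally helps gradient-based training converge.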

  7. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System.

    Science.gov (United States)

    de Moura, Karina de O A; Balbinot, Alexandre

    2018-05-01

    Few prosthetic control systems in the scientific literature employ pattern recognition algorithms adapted to the changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, which has as its fundamental objective to estimate unavailable measures based on other available measures, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Results of movement classification were presented comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining techniques, the proposed system recovered between 4% and 38% of mean classification accuracy for electrode displacement, movement artifacts, and saturation noise. The best mean classification considering all signal contaminants and channel combinations evaluated was obtained using the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. This method

  8. Advanced Steel Microstructural Classification by Deep Learning Methods.

    Science.gov (United States)

    Azimi, Seyed Majid; Britz, Dominik; Engstler, Michael; Fritz, Mario; Mücklich, Frank

    2018-02-01

    The inner structure of a material is called microstructure. It stores the genesis of a material and determines all its physical and chemical properties. While microstructural characterization is widely spread and well known, microstructural classification is mostly done manually by human experts, which gives rise to uncertainties due to subjectivity. Since the microstructure could be a combination of different phases or constituents with complex substructures, its automatic classification is very challenging and only a few prior studies exist. Prior works focused on features designed and engineered by experts and classified microstructures separately from the feature extraction step. Recently, Deep Learning methods have shown strong performance in vision applications by learning the features from data together with the classification step. In this work, we propose a Deep Learning method for microstructural classification in the examples of certain microstructural constituents of low carbon steel. This novel method employs pixel-wise segmentation via a Fully Convolutional Neural Network (FCNN) accompanied by a max-voting scheme. Our system achieves 93.94% classification accuracy, drastically outperforming the state-of-the-art method's 48.89% accuracy. Beyond the strong performance of our method, this line of research offers a more robust and, above all, objective way to approach the difficult task of steel quality appreciation.
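
    The max-voting scheme that turns the FCNN's pixel-wise segmentation into an image-level decision can be sketched as:

```python
import numpy as np

def max_vote(pixel_labels):
    """Reduce a pixel-wise class map (H x W integer labels from the
    segmentation network) to a single image-level label by majority
    (max) voting over all pixels."""
    labels, counts = np.unique(pixel_labels, return_counts=True)
    return labels[counts.argmax()]

# Hypothetical 3x3 segmentation output (class indices per pixel):
seg = np.array([[0, 1, 1],
                [1, 1, 2],
                [1, 0, 1]])
print(max_vote(seg))  # 1
```

    Voting over all pixels makes the image-level decision tolerant of scattered per-pixel segmentation errors.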

  10. Analysing breast tissue composition with MRI using currently available short, simple sequences

    International Nuclear Information System (INIS)

    Chau, A.C.M.; Hua, J.; Taylor, D.B.

    2016-01-01

    Aim: To determine the most robust commonly available magnetic resonance imaging (MRI) sequence to quantify breast tissue composition at 1.5 T. Materials and methods: Two-dimensional (2D) T1-weighted, Dixon fat, Dixon water and SPAIR images were obtained from five participants and a breast phantom using a 1.5 T Siemens Aera MRI system. Manual segmentation of the breasts was performed, and an in-house computer program was used to generate signal intensity histograms. Relative trough depth and relative peak separation were used to determine the robustness of the images for quantifying the two breast tissues. Total breast volumes and percentage breast densities calculated using the four sequences were compared. Results: Dixon fat histograms had consistently low relative trough depth and relative peak separation compared to those obtained using other sequences. There was no significant difference in total breast volumes and percentage breast densities of the participants or breast phantom using Dixon fat and 2D T1-weighted histograms. Dixon water and SPAIR histograms were not suitable for quantifying breast tissue composition. Conclusion: Dixon fat images are the most robust for the quantification of breast tissue composition using a signal intensity histogram. - Highlights: • Signal intensity histogram analysis can determine robustness of images for quantification of breast tissue composition. • Dixon fat images are the most robust. • The characteristics of the signal intensity histograms from Dixon water and SPAIR images make quantification unsuitable.
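
    A sketch of the two histogram robustness measures; the formulas below are plausible definitions assumed for illustration, since the article defines the measures in the full text:

```python
import numpy as np

def histogram_separability(hist):
    """Relative trough depth and relative peak separation for a bimodal
    signal-intensity histogram. The two highest local maxima are taken
    as the fat and fibroglandular-tissue peaks."""
    hist = np.asarray(hist, float)
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    trough = hist[p1:p2 + 1].min()                       # valley between peaks
    rel_trough_depth = 1.0 - trough / min(hist[p1], hist[p2])
    rel_peak_separation = (p2 - p1) / len(hist)
    return rel_trough_depth, rel_peak_separation

# Synthetic well-separated bimodal histogram:
bimodal = [0, 2, 8, 3, 1, 1, 4, 10, 2, 0]
depth, sep = histogram_separability(bimodal)
print(depth, sep)  # deep trough and wide separation -> robust sequence
```

    By these measures, a sequence whose histogram shows a deep trough and widely separated peaks (like the Dixon fat images) supports reliable two-tissue quantification.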

  11. Three-dimensional CT imaging of soft-tissue anatomy

    International Nuclear Information System (INIS)

    Fishman, E.K.; Ney, D.R.; Magid, D.; Kuhlman, J.E.

    1988-01-01

    Three-dimensional display of computed tomographic data has been limited to skeletal structures. This was in part related to the reconstruction algorithm used, which relied on a binary classification scheme. A new algorithm, volumetric rendering with percentage classification, provides the ability to display three-dimensional images of muscle and soft tissue. A review was conducted of images in 35 cases in which muscle and/or soft tissue were part of the clinical problem. In all cases, individual muscle groups could be clearly identified and discriminated. Branching vessels in the range of 2.3 mm could be identified. Similarly, lymph nodes could be clearly defined. High-resolution three-dimensional images were found to be useful both in providing an increased understanding of complex muscle and soft tissue anatomy and in surgical planning
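
    Percentage classification can be illustrated as a piecewise-linear assignment of each voxel between neighbouring tissue classes, instead of the all-or-nothing binary scheme; the Hounsfield values below are illustrative, not the algorithm's actual parameters:

```python
def tissue_percentages(hu, classes):
    """Split a CT voxel between the two tissue classes whose nominal
    Hounsfield values bracket it. `classes` maps tissue name to a
    representative HU value."""
    ordered = sorted(classes.items(), key=lambda kv: kv[1])
    if hu <= ordered[0][1]:
        return {ordered[0][0]: 1.0}
    if hu >= ordered[-1][1]:
        return {ordered[-1][0]: 1.0}
    for (lo_name, lo), (hi_name, hi) in zip(ordered, ordered[1:]):
        if lo <= hu <= hi:
            frac = (hu - lo) / (hi - lo)
            return {lo_name: 1.0 - frac, hi_name: frac}

classes = {"fat": -100, "muscle": 50, "bone": 400}
print(tissue_percentages(-25, classes))  # {'fat': 0.5, 'muscle': 0.5}
```

    Rendering with such fractional memberships is what lets muscle and soft tissue appear in the volume instead of being thresholded away.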

  12. Robust Short-Lag Spatial Coherence Imaging.

    Science.gov (United States)

    Nair, Arun Asokan; Tran, Trac Duy; Bell, Muyinatu A Lediju

    2018-03-01

    Short-lag spatial coherence (SLSC) imaging displays the spatial coherence between backscattered ultrasound echoes instead of their signal amplitudes and is more robust to noise and clutter artifacts when compared with traditional delay-and-sum (DAS) B-mode imaging. However, SLSC imaging does not consider the content of images formed with different lags, and thus does not exploit the differences in tissue texture at each short-lag value. Our proposed method improves SLSC imaging by weighting the addition of lag values (i.e., M-weighting) and by applying robust principal component analysis (RPCA) to search for a low-dimensional subspace for projecting coherence images created with different lag values. The RPCA-based projections are considered to be denoised versions of the originals that are then weighted and added across lags to yield a final robust SLSC (R-SLSC) image. Our approach was tested on simulation, phantom, and in vivo liver data. Relative to DAS B-mode images, the mean contrast, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) improvements with R-SLSC images are 21.22 dB, 2.54, and 2.36, respectively, when averaged over simulated, phantom, and in vivo data and over all lags considered, which corresponds to mean improvements of 96.4%, 121.2%, and 120.5%, respectively. When compared with SLSC images, the corresponding mean improvements with R-SLSC images were 7.38 dB, 1.52, and 1.30, respectively (i.e., mean improvements of 14.5%, 50.5%, and 43.2%, respectively). Results show great promise for smoothing out the tissue texture of SLSC images and enhancing anechoic or hypoechoic target visibility at higher lag values, which could be useful in clinical tasks such as breast cyst visualization, liver vessel tracking, and obese patient imaging.
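
    The projection-and-weighting idea can be sketched with a truncated-SVD low-rank approximation, a simplified stand-in for RPCA (which additionally separates a sparse outlier component), followed by weighting across lags:

```python
import numpy as np

def low_rank_project(lag_images, rank=1):
    """Project a stack of vectorized lag images (one per row) onto a
    low-dimensional subspace via truncated SVD, discarding the
    trailing components as noise."""
    u, s, vt = np.linalg.svd(lag_images, full_matrices=False)
    s[rank:] = 0.0                        # keep only the leading components
    return u @ np.diag(s) @ vt

def weighted_sum(images, weights):
    """Weight and add the denoised lag images into one final image."""
    return np.tensordot(weights, images, axes=1)

# Two noisy copies of the same underlying 4-pixel image:
base = np.array([1.0, 2.0, 3.0, 4.0])
stack = np.vstack([base + 0.01, base - 0.01])
denoised = low_rank_project(stack, rank=1)
final = weighted_sum(denoised, np.array([0.5, 0.5]))
print(np.round(final, 2))  # ~[1. 2. 3. 4.]
```

    In R-SLSC the subspace is found robustly (so clutter outliers do not corrupt it) and the per-lag weights are the M-weighting; both refinements are omitted in this sketch.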

  13. EULAR points to consider in the development of classification and diagnostic criteria in systemic vasculitis

    DEFF Research Database (Denmark)

    Basu, Neil; Watts, Richard; Bajema, Ingeborg

    2010-01-01

    The systemic vasculitides are multiorgan diseases where early diagnosis and treatment can significantly improve outcomes. Robust nomenclature reduces diagnostic delay. However, key aspects of current nomenclature are widely perceived to be out of date; these include disease definitions, classification criteria and diagnostic criteria. Therefore, the aim of the present work was to identify deficiencies and provide contemporary points to consider for the development of future definitions and criteria in systemic vasculitis.

  14. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  15. Pancreatic neuroendocrine tumours: correlation between MSCT features and pathological classification

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Yanji; Dong, Zhi; Li, Zi-Ping; Feng, Shi-Ting [The First Affiliated Hospital, Sun Yat-Sen University, Department of Radiology, Guangzhou, Guangdong (China); Chen, Jie [The First Affiliated Hospital, Sun Yat-Sen University, Department of Gastroenterology, Guangzhou, Guangdong (China); Chan, Tao; Chen, Minhu [Union Hospital, Hong Kong, Medical Imaging Department, Shatin, N.T. (China); Lin, Yuan [The First Affiliated Hospital, Sun Yat-Sen University, Department of Pathology, Guangzhou, Guangdong (China)

    2014-11-15

    We aimed to evaluate the multi-slice computed tomography (MSCT) features of pancreatic neuroendocrine neoplasms (P-NENs) and analyse the correlation between the MSCT features and pathological classification of P-NENs. Forty-one patients, preoperatively investigated by MSCT and subsequently operated on with a histological diagnosis of P-NENs, were included. Various MSCT features of the primary tumour, lymph node, and distant metastasis were analysed. The relationship between MSCT features and pathological classification of P-NENs was analysed with univariate and multivariate models. Contrast-enhanced images showed significant differences among the three grades of tumours in the absolute enhancement (P = 0.013) and relative enhancement (P = 0.025) at the arterial phase. Univariate analysis revealed statistically significant differences among the tumours of different grades (based on World Health Organization [WHO] 2010 classification) in tumour size (P = 0.001), tumour contour (P < 0.001), cystic necrosis (P = 0.001), tumour boundary (P = 0.003), dilatation of the main pancreatic duct (P = 0.001), peripancreatic tissue or vascular invasion (P < 0.001), lymphadenopathy (P = 0.011), and distant metastasis (P = 0.012). Multivariate analysis suggested that only peripancreatic tissue or vascular invasion (HR 3.934, 95 % CI, 0.426-7.442, P = 0.028) was significantly associated with the WHO 2010 pathological classification. MSCT is helpful in evaluating the pathological classification of P-NENs. (orig.)

  16. Real-time classification of humans versus animals using profiling sensors and hidden Markov tree model

    Science.gov (United States)

    Hossen, Jakir; Jacobs, Eddie L.; Chari, Srikant

    2015-07-01

    Linear pyroelectric array sensors have enabled useful classifications of objects such as humans and animals to be performed with relatively low-cost hardware in border and perimeter security applications. Ongoing research has sought to improve the performance of these sensors through signal processing algorithms. In the research presented here, we introduce the use of hidden Markov tree (HMT) models for object recognition in images generated by linear pyroelectric sensors. HMTs are trained to statistically model the wavelet features of individual objects through an expectation-maximization learning process. Human versus animal classification for a test object is made by evaluating its wavelet features against the trained HMTs using the maximum-likelihood criterion. The classification performance of this approach is compared to two other techniques: a texture, shape, and spectral component features (TSSF) based classifier and a speeded-up robust features (SURF) based classifier. The evaluation indicates that among the three techniques, the wavelet-based HMT model works well, is robust, and has improved classification performance compared to the SURF-based algorithm in equivalent computation time. When compared to the TSSF-based classifier, the HMT model has slightly degraded performance but almost an order of magnitude improvement in computation time, enabling real-time implementation.

  17. Accurate and robust brain image alignment using boundary-based registration.

    Science.gov (United States)

    Greve, Douglas N; Fischl, Bruce

    2009-10-15

    The fine spatial scales of the structures in the human brain represent an enormous challenge to the successful integration of information from different images for both within- and between-subject analysis. While many algorithms to register image pairs from the same subject exist, visual inspection shows their accuracy and robustness to be suspect, particularly when there are strong intensity gradients and/or only part of the brain is imaged. This paper introduces a new algorithm called Boundary-Based Registration, or BBR. The novelty of BBR is that it treats the two images very differently. The reference image must be of sufficient resolution and quality to extract surfaces that separate tissue types. The input image is then aligned to the reference by maximizing the intensity gradient across tissue boundaries. Several lower-quality images can be aligned through their alignment with the reference. Visual inspection and fMRI results show that BBR is more accurate than correlation ratio or normalized mutual information and is considerably more robust to even strong intensity inhomogeneities. BBR also excels at aligning partial-brain images to whole-brain images, a domain in which existing registration algorithms frequently fail. Even in the limit of registering a single slice, we show the BBR results to be robust and accurate.
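
    The core BBR idea of maximizing intensity contrast across a tissue boundary can be illustrated with a toy one-dimensional search; the image, sampling step, and cost function below are simplified assumptions, not the paper's implementation:

```python
import numpy as np

# Hypothetical 2D "input image" with a bright region whose edge we align to
img = np.zeros((100, 100))
img[:, 50:] = 1.0

def bbr_cost(boundary_x, img, step=2):
    """Sum of signed intensity differences across a vertical boundary.

    Samples the image just inside and outside the candidate boundary
    column and rewards a large intensity contrast (the BBR idea:
    maximize the gradient across tissue boundaries).
    """
    rows = np.arange(img.shape[0])
    inside = img[rows, boundary_x + step]
    outside = img[rows, boundary_x - step]
    return float(np.sum(inside - outside))

# The cost peaks when the candidate boundary sits near the true edge at x=50
costs = {x: bbr_cost(x, img) for x in range(10, 90)}
best = max(costs, key=costs.get)
print(best)
```

    In the real algorithm the boundary is a surface extracted from the reference image, and the search is over a rigid (or restricted affine) transform of the input image rather than a single column index.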

  18. Behavioral state classification in epileptic brain using intracranial electrophysiology

    Science.gov (United States)

    Kremen, Vaclav; Duque, Juliano J.; Brinkmann, Benjamin H.; Berry, Brent M.; Kucewicz, Michal T.; Khadjevand, Fatemeh; Van Gompel, Jamie; Stead, Matt; St. Louis, Erik K.; Worrell, Gregory A.

    2017-04-01

    Objective. Automated behavioral state classification can benefit next-generation implantable epilepsy devices. In this study we explored the feasibility of automated awake (AW) and slow wave sleep (SWS) classification using wide bandwidth intracranial EEG (iEEG) in patients undergoing evaluation for epilepsy surgery. Approach. Data from seven patients (age 34 ± 12, 4 women) who underwent intracranial depth electrode implantation for iEEG monitoring were included. Spectral power features (0.1-600 Hz) spanning several frequency bands from a single electrode were used to train and test a support vector machine classifier. Main results. Classification accuracy of 97.8 ± 0.3% (normal tissue) and 89.4 ± 0.8% (epileptic tissue) across seven subjects using multiple spectral power features from a single electrode was achieved. Spectral power features from electrodes placed in normal temporal neocortex were found to be more useful (accuracy 90.8 ± 0.8%) for sleep-wake state classification than electrodes located in normal hippocampus (87.1 ± 1.6%). Spectral power in high-frequency band features (Ripple (80-250 Hz), Fast Ripple (250-600 Hz)) showed comparable performance for AW and SWS classification as the best performing Berger bands (Alpha, Beta, low Gamma) with accuracy ≥90% using a single electrode contact and a single spectral feature. Significance. Automated classification of wake and SWS should prove useful for future implantable epilepsy devices with limited computational power, memory, and number of electrodes. Applications include quantifying patient sleep patterns and behavioral state dependent detection, prediction, and electrical stimulation therapies.
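
    A minimal sketch of the band-power-plus-SVM pipeline described above, using synthetic surrogate epochs rather than iEEG; the sampling rate, band edges, and signal model are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 256  # hypothetical sampling rate

def band_power(sig, lo, hi):
    # Welch power spectral density summed within a frequency band
    f, pxx = welch(sig, fs=fs, nperseg=256)
    return pxx[(f >= lo) & (f < hi)].sum()

def make_epoch(state):
    t = np.arange(fs * 2) / fs
    # crude surrogate: SWS-like epochs carry more low-frequency power
    hz = 2 if state == "sws" else 20
    return np.sin(2 * np.pi * hz * t) + 0.5 * rng.standard_normal(t.size)

bands = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 55)]
X, y = [], []
for state in ["awake", "sws"] * 50:
    sig = make_epoch(state)
    X.append([band_power(sig, lo, hi) for lo, hi in bands])
    y.append(state)

scores = cross_val_score(SVC(), np.array(X), y, cv=5)
print(scores.mean())
```

    The paper's feature set extends well beyond these Berger bands, up to the fast-ripple range (250-600 Hz), which requires a much higher sampling rate than assumed here.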

  19. Classification of Strawberry Fruit Shape by Machine Learning

    Science.gov (United States)

    Ishikawa, T.; Hayashi, A.; Nagamatsu, S.; Kyutoku, Y.; Dan, I.; Wada, T.; Oku, K.; Saeki, Y.; Uto, T.; Tanabata, T.; Isobe, S.; Kochi, N.

    2018-05-01

    Shape is one of the most important traits of agricultural products due to its relationships with the quality, quantity, and value of the products. For strawberries, nine types of fruit shape have been defined, and fruits are classified by humans based on sample patterns of the nine types. In this study, we tested the classification of strawberry shapes by machine learning in order to increase the accuracy of the classification and to introduce computerization into this field. Four types of descriptors were extracted from digital images of strawberries: (1) Measured Values (MVs), including the length of the contour line, the area, the fruit length and width, and the fruit width/length ratio; (2) the Ellipse Similarity Index (ESI); (3) Elliptic Fourier Descriptors (EFDs); and (4) Chain Code Subtraction (CCS). We used these descriptors for the classification test along with the random forest approach, and eight of the nine shape types were classified with combinations of MVs + CCS + EFDs. CCS is a descriptor that adds human knowledge to the chain codes, and it showed higher robustness in classification than the other descriptors. Our results demonstrate machine learning's ability to classify fruit shapes accurately. We will attempt to increase the classification accuracy further and apply the machine learning methods to other plant species.
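
    A minimal random-forest sketch in the spirit of the descriptor-based classification above; the two shape classes and their descriptor distributions are synthetic stand-ins, not the paper's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the paper's descriptors: fruit length, width,
# width/length ratio, and a few elliptic-Fourier-like coefficients
def sample(shape_type, n):
    base = {"conical": [40, 25], "globose": [30, 30]}[shape_type]
    length = base[0] + rng.normal(0, 2, n)
    width = base[1] + rng.normal(0, 2, n)
    efd = rng.normal(0.1 * (shape_type == "globose"), 0.05, (n, 4))
    return np.column_stack([length, width, width / length, efd])

X = np.vstack([sample("conical", 100), sample("globose", 100)])
y = ["conical"] * 100 + ["globose"] * 100

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(acc)
```

    The study's nine-class problem additionally uses the ESI and CCS descriptors; the two-class setup here only illustrates the descriptor-vector-to-random-forest workflow.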

  20. Polsar Land Cover Classification Based on Hidden Polarimetric Features in Rotation Domain and Svm Classifier

    Science.gov (United States)

    Tao, C.-S.; Chen, S.-W.; Li, Y.-Z.; Xiao, S.-P.

    2017-09-01

    Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR) data utilization. Roll-invariant polarimetric features such as H / Ani / ᾱ / Span are commonly adopted in PolSAR land cover classification. However, the target orientation diversity effect makes PolSAR image understanding and interpretation difficult. Using only the roll-invariant polarimetric features may introduce ambiguity in the interpretation of targets' scattering mechanisms and limit the subsequent classification accuracy. To address this problem, this work firstly focuses on hidden polarimetric feature mining in the rotation domain along the radar line of sight using the recently reported uniform polarimetric matrix rotation theory and the visualization and characterization tool of the polarimetric coherence pattern. The former rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix. Sets of new polarimetric features are derived to describe the hidden scattering information of the target in the rotation domain. The latter extends the traditional polarimetric coherence at a given rotation angle to the rotation domain for complete interpretation. A visualization and characterization tool is established to derive new polarimetric features for hidden information exploration. Then, a classification scheme is developed combining both the selected new hidden polarimetric features in the rotation domain and the commonly used roll-invariant polarimetric features with a support vector machine (SVM) classifier. Comparison experiments based on AIRSAR and multi-temporal UAVSAR data demonstrate that compared with the conventional classification scheme which only uses the roll-invariant polarimetric features, the proposed classification scheme achieves both higher classification accuracy and better robustness. For AIRSAR data, the overall classification

  1. POLSAR LAND COVER CLASSIFICATION BASED ON HIDDEN POLARIMETRIC FEATURES IN ROTATION DOMAIN AND SVM CLASSIFIER

    Directory of Open Access Journals (Sweden)

    C.-S. Tao

    2017-09-01

    Full Text Available Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR) data utilization. Roll-invariant polarimetric features such as H / Ani / ᾱ / Span are commonly adopted in PolSAR land cover classification. However, the target orientation diversity effect makes PolSAR image understanding and interpretation difficult. Using only the roll-invariant polarimetric features may introduce ambiguity in the interpretation of targets’ scattering mechanisms and limit the subsequent classification accuracy. To address this problem, this work firstly focuses on hidden polarimetric feature mining in the rotation domain along the radar line of sight using the recently reported uniform polarimetric matrix rotation theory and the visualization and characterization tool of the polarimetric coherence pattern. The former rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix. Sets of new polarimetric features are derived to describe the hidden scattering information of the target in the rotation domain. The latter extends the traditional polarimetric coherence at a given rotation angle to the rotation domain for complete interpretation. A visualization and characterization tool is established to derive new polarimetric features for hidden information exploration. Then, a classification scheme is developed combining both the selected new hidden polarimetric features in the rotation domain and the commonly used roll-invariant polarimetric features with a support vector machine (SVM) classifier. Comparison experiments based on AIRSAR and multi-temporal UAVSAR data demonstrate that compared with the conventional classification scheme which only uses the roll-invariant polarimetric features, the proposed classification scheme achieves both higher classification accuracy and better robustness. For AIRSAR data, the overall classification accuracy

  2. Porosity, Mineralization, Tissue Type and Morphology Interactions at the Human Tibial Cortex

    Science.gov (United States)

    Hampson, Naomi A.

    Prior research has shown a relationship between tibia robustness (ratio of cross-sectional area to bone length) and stress fracture risk, with less robust bones having a higher risk, which may indicate a compensatory increase in elastic modulus to increase bending strength. Previous studies of human tibiae have shown higher ash content in slender bones. In this study, the relationships between variations in volumetric porosity, ash content, tissue mineral density, secondary bone tissue, and cross-sectional geometry were investigated in order to better understand the tissue-level adaptations that may occur in the establishment of cross-sectional properties. In this research, significant differences in porosity, ash content, and tissue type around the cortex were found between robust and slender bones, suggesting that a level of co-adaptation was occurring. Variation in porosity correlated with robustness and explained a large part of the variation in tissue mineral density. The nonlinear relationship between porosity and ash content may support the idea that slender bones compensate for poor geometry by increasing ash content through reduced remodeling, while robust individuals increase porosity to decrease mass, but only to a point. These results suggest that tissue-level organization plays a compensatory role in the establishment of adult bone mass and may contribute to differences in bone aging between different bone phenotypes. The results suggest that slender individuals have significantly less remodeled bone; however, the proportion of remodeled bone was not uniform around the tibia. In the comparison of the 38% and 66% sites, the distal (38%) site was subject to higher strains than the 66% site, indicating that both local and global regulators may be affecting overall remodeling rates and need to be teased apart in future studies. This research has broad clinical implications for the diagnosis and treatment of fragility fractures. The relationships that

  3. Automatic plankton image classification combining multiple view features via multiple kernel learning.

    Science.gov (United States)

    Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing

    2017-12-28

    Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial in order to understand environmental changes and protect marine ecosystems. This study was carried out to develop a widely applicable plankton classification system with high accuracy for the increasing number of various imaging devices. The literature shows that most plankton image classification systems have been limited to one specific imaging device and a relatively narrow taxonomic scope. A truly practical system for automatic plankton classification does not yet exist, and this study partly fills this gap. Inspired by the analysis of the literature and the development of technology, we focused on the requirements of practical application and propose an automatic system for plankton image classification combining multiple view features via multiple kernel learning (MKL). First, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, in particular by adding features such as the Inner-Distance Shape Context for morphological representation. Second, we divided all the features into different types from multiple views and fed them to multiple classifiers instead of only one, by combining the different kernel matrices computed from the different types of features optimally via multiple kernel learning. Moreover, we also applied a feature selection method to choose optimal feature subsets from redundant features to suit the different datasets from different imaging devices. We implemented our proposed classification system on three different datasets across more than 20 categories from phytoplankton to zooplankton. The experimental results validated that our system
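
    The multiple-kernel idea of combining per-view kernel matrices can be sketched with scikit-learn's precomputed-kernel SVM; here the two views, the RBF kernels, and the fixed 50/50 weights are illustrative (true MKL learns the weights):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Two hypothetical feature "views" (e.g., shape vs. texture descriptors)
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
view1, view2 = X[:, :10], X[:, 10:]

tr_idx, te_idx = train_test_split(np.arange(len(y)), random_state=0)

def combined_kernel(a_idx, b_idx, weights=(0.5, 0.5)):
    # Fixed-weight kernel combination; real MKL would learn these weights
    k1 = rbf_kernel(view1[a_idx], view1[b_idx])
    k2 = rbf_kernel(view2[a_idx], view2[b_idx])
    return weights[0] * k1 + weights[1] * k2

clf = SVC(kernel="precomputed")
clf.fit(combined_kernel(tr_idx, tr_idx), y[tr_idx])
pred = clf.predict(combined_kernel(te_idx, tr_idx))
acc = accuracy_score(y[te_idx], pred)
print(acc)
```

    With `kernel="precomputed"`, `fit` receives the train-by-train kernel matrix and `predict` the test-by-train matrix, which is exactly the structure an MKL solver optimizes over.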

  4. Robustness of digitally modulated signal features against variation in HF noise model

    Directory of Open Access Journals (Sweden)

    Shoaib Mobien

    2011-01-01

    Full Text Available Abstract The high frequency (HF) band has both military and civilian uses. It can be used either as a primary or a backup communication link. Automatic modulation classification (AMC) is of utmost importance in this band for the purpose of communications monitoring, e.g., signal intelligence and spectrum management. A widely used method for AMC is based on pattern recognition (PR). Such a method has two main steps: feature extraction and classification. The first step is generally performed in the presence of channel noise. Recent studies show that HF noise can be modeled by Gaussian or bi-kappa distributions, depending on the time of day. Therefore, it is anticipated that a change in noise model will have an impact on the feature extraction stage. In this article, we investigate the robustness of well-known digitally modulated signal features against variation in HF noise. Specifically, we consider temporal time domain (TTD) features, higher order cumulants (HOC), and wavelet-based features. In addition, we propose new features extracted from the constellation diagram and evaluate their robustness against the change in noise model. This study targets 2PSK, 4PSK, 8PSK, 16QAM, 32QAM, and 64QAM modulations, as they are commonly used in HF communications.

  5. Remote Sensing Image Classification Based on Stacked Denoising Autoencoder

    Directory of Open Access Journals (Sweden)

    Peng Liang

    2017-12-01

    Full Text Available Focused on the issue that conventional remote sensing image classification methods have hit an accuracy bottleneck, a new remote sensing image classification method inspired by deep learning is proposed, based on the Stacked Denoising Autoencoder. First, the deep network model is built by stacking layers of Denoising Autoencoders. Then, with noised input, the unsupervised greedy layer-wise training algorithm is used to train each layer in turn to obtain more robust feature expression; the features are then refined by supervised learning with a Back Propagation (BP) neural network, and the whole network is optimized by error back propagation. Finally, Gaofen-1 satellite (GF-1) remote sensing data are used for evaluation, and the total accuracy and kappa accuracy reach 95.7% and 0.955, respectively, which are higher than those of the Support Vector Machine and the Back Propagation neural network. The experimental results show that the proposed method can effectively improve the accuracy of remote sensing image classification.
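
    One denoising-autoencoder layer of the kind stacked above can be approximated with scikit-learn's MLPRegressor trained to reconstruct clean inputs from noise-corrupted copies; the dataset, noise level, and layer size are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# One denoising-autoencoder layer, approximated with an MLP trained to
# reconstruct the clean input from a noise-corrupted copy; a stacked
# version would repeat this per layer on the previous hidden codes
X = load_digits().data / 16.0
X_noisy = np.clip(X + rng.normal(0, 0.2, X.shape), 0, 1)

dae = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
dae.fit(X_noisy, X)  # target is the *clean* data

recon = dae.predict(X_noisy)
mse_denoised = np.mean((recon - X) ** 2)
mse_noisy = np.mean((X_noisy - X) ** 2)
print(mse_denoised < mse_noisy)
```

    In the paper's greedy layer-wise scheme, each trained layer's hidden activations become the (re-noised) input of the next layer before the final supervised BP fine-tuning.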

  6. Completed Local Ternary Pattern for Rotation Invariant Texture Classification

    Directory of Open Access Journals (Sweden)

    Taha H. Rassem

    2014-01-01

    Full Text Available Despite the fact that two texture descriptors, the completed modeling of the Local Binary Pattern (CLBP) and the Completed Local Binary Count (CLBC), have achieved remarkable accuracy for rotation-invariant texture classification, they inherit some drawbacks of the Local Binary Pattern (LBP). The LBP is sensitive to noise, and different patterns of LBP may be classified into the same class, which reduces its discriminating property. Although the Local Ternary Pattern (LTP) was proposed to be more robust to noise than LBP, this second weakness affects the LTP as well. In this paper, a novel completed modeling of the Local Ternary Pattern (LTP) operator is proposed to overcome both LBP drawbacks, and an associated Completed Local Ternary Pattern (CLTP) scheme is developed for rotation-invariant texture classification. The experimental results using four different texture databases show that the proposed CLTP achieves impressive classification accuracy compared to the CLBP and CLBC descriptors.
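
    The basic Local Ternary Pattern coding that CLTP builds on can be sketched for a single 3×3 patch; the neighbour ordering and threshold below are illustrative choices:

```python
import numpy as np

def local_ternary_pattern(patch, threshold=5):
    """Ternary codes for the 8 neighbours of a 3x3 patch's centre pixel:
    +1 if neighbour >= centre + t, -1 if neighbour <= centre - t, else 0.
    LTP is usually split into an 'upper' and a 'lower' binary pattern."""
    c = patch[1, 1]
    neighbours = patch.ravel()[[0, 1, 2, 5, 8, 7, 6, 3]]  # clockwise
    codes = np.where(neighbours >= c + threshold, 1,
                     np.where(neighbours <= c - threshold, -1, 0))
    upper = (codes == 1).astype(int)   # binary pattern of the +1s
    lower = (codes == -1).astype(int)  # binary pattern of the -1s
    return codes, upper, lower

patch = np.array([[90, 100, 118],
                  [80, 100, 104],
                  [95, 101,  60]])
codes, upper, lower = local_ternary_pattern(patch)
print(codes)  # [-1  0  1  0 -1  0 -1 -1]
```

    The "completed" variants (CLBP/CLTP) additionally encode the magnitude of the centre-neighbour differences and the centre intensity itself, not just these signs.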

  7. Contributions for classification of platelet rich plasma - proposal of a new classification: MARSPILL.

    Science.gov (United States)

    Lana, Jose Fabio Santos Duarte; Purita, Joseph; Paulus, Christian; Huber, Stephany Cares; Rodrigues, Bruno Lima; Rodrigues, Ana Amélia; Santana, Maria Helena; Madureira, João Lopo; Malheiros Luzo, Ângela Cristina; Belangero, William Dias; Annichino-Bizzacchi, Joyce Maria

    2017-07-01

    Platelet-rich plasma (PRP) has emerged as a significant therapy used in medical conditions with heterogeneous results. There are some important classifications that try to standardize the PRP procedure. The aim of this report is to describe PRP contents by studying cellular and molecular components, and also to propose a new classification for PRP. The main focus is on mononuclear cells, which comprise progenitor cells and monocytes. In addition, important variables related to PRP application are incorporated in this study: the harvest method, activation, red blood cells, number of spins, image guidance, leukocyte number, and light activation. Another focus is the discussion of the presence of progenitor cells in peripheral blood, which are of interest due to neovasculogenesis and proliferation. The function of monocytes (as tissue macrophages) is discussed here, as well as their plasticity, a potentially useful property for regenerative medicine treatments.

  8. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    Science.gov (United States)

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection, and therefore a number of feature selection procedures have been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds a global optimal solution rapidly and more precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that the Elastic SCAD SVM outperformed the LASSO (L1) and SCAD SVMs. Moreover, the Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than the Elastic Net SVM, and often predicted better than the Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above on four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM
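
    SCAD itself is not available in scikit-learn, but the elastic-net-penalized linear SVM that this paper compares against can be sketched with SGDClassifier (hinge loss plus elastic-net penalty); the dataset and regularization strength below are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Sparse high-dimensional problem: 200 samples, 500 features, few informative
X, y = make_classification(n_samples=200, n_features=500, n_informative=10,
                           random_state=0)

# Hinge loss + elastic-net penalty approximates an Elastic Net SVM;
# the SCAD and Elastic SCAD penalties require a dedicated solver
clf = SGDClassifier(loss="hinge", penalty="elasticnet", alpha=0.01,
                    l1_ratio=0.5, max_iter=2000, random_state=0)
clf.fit(X, y)

n_selected = int(np.sum(clf.coef_ != 0))
print(n_selected, "of", X.shape[1], "features kept")
```

    The L1 component of the penalty zeroes out uninformative coefficients (feature selection), while the ridge component keeps correlated informative features from being dropped arbitrarily, which is the same trade-off Elastic SCAD targets.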

  9. Robust Fringe Projection Profilometry via Sparse Representation.

    Science.gov (United States)

    Budianto; Lun, Daniel P K

    2016-04-01

    In this paper, a robust fringe projection profilometry (FPP) algorithm using sparse dictionary learning and sparse coding techniques is proposed. When reconstructing the 3D model of objects, traditional FPP systems often fail if the captured fringe images contain a complex scene, such as multiple or occluded objects. This introduces great difficulty to the phase unwrapping process of an FPP system and can result in serious distortion in the final reconstructed 3D model. The proposed algorithm encodes the period order information, which is essential to phase unwrapping, into texture patterns and embeds them in the projected fringe patterns. When the encoded fringe image is captured, a modified morphological component analysis and a sparse classification procedure are performed to decode and identify the embedded period order information. This information is then used to assist the phase unwrapping process in dealing with the different artifacts in the fringe images. Experimental results show that the proposed algorithm can significantly improve the robustness of an FPP system. It performs equally well whether the fringe images contain a simple or a complex scene, or are affected by the ambient lighting of the working environment.

  10. A new circulation type classification based upon Lagrangian air trajectories

    Directory of Open Access Journals (Sweden)

    Alexandre M. Ramos

    2014-10-01

    Full Text Available A new classification method of the large-scale circulation characteristic for a specific target area (the NW Iberian Peninsula) is presented, based on the analysis of 90-h backward trajectories arriving in this area calculated with the 3-D Lagrangian particle dispersion model FLEXPART. A cluster analysis is applied to separate the backward trajectories into up to five representative air streams for each day. Specific measures are then used to characterise the distinct air streams (e.g., curvature of the trajectories, cyclonic or anticyclonic flow, moisture evolution, origin and length of the trajectories). The robustness of the presented method is demonstrated in comparison with the Eulerian Lamb weather type classification. A case study of the 2003 heatwave is discussed in terms of the new Lagrangian circulation and the Lamb weather type classifications. It is shown that the new classification method adds valuable information about the pertinent meteorological conditions, which is missing in an Eulerian approach. The new method is climatologically evaluated for the five-year period from December 1999 to November 2004. The ability of the method to capture the inter-seasonal circulation variability in the target region is shown. Furthermore, the multi-dimensional character of the classification is briefly discussed, in particular with respect to inter-seasonal differences. Finally, the relationship between the new Lagrangian classification and the precipitation in the target area is studied.
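
    The trajectory clustering step can be sketched with k-means on flattened trajectory coordinates; the synthetic trajectories, the 6-h sampling, and the two air streams below are illustrative assumptions (the paper's cluster analysis allows up to five streams per day):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical 90-h backward trajectories arriving at one point, sampled
# every 6 h: each row is a flattened (lon, lat) sequence of 16 steps
def trajectory(bearing_deg, n=50):
    steps = np.linspace(0, 15, 16)
    angle = np.deg2rad(bearing_deg) + rng.normal(0, 0.05, (n, 1))
    lon = np.cos(angle) * steps  # displacement relative to the target
    lat = np.sin(angle) * steps
    return np.concatenate([lon, lat], axis=1)

# Two synthetic air streams: westerly and northerly origins
X = np.vstack([trajectory(180), trajectory(90)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))
```

    After clustering, each cluster's member trajectories would be summarized by the paper's measures (curvature, cyclonic vs. anticyclonic flow, moisture evolution, origin, length) to name the air stream.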

  11. Robust automated classification of first-motion polarities for focal mechanism determination with machine learning

    Science.gov (United States)

    Ross, Z. E.; Meier, M. A.; Hauksson, E.

    2017-12-01

    Accurate first-motion polarities are essential for determining earthquake focal mechanisms, but are difficult to measure automatically because of picking errors and signal-to-noise issues. Here we develop an algorithm for reliable automated classification of first-motion polarities using machine learning. A classifier is designed to identify whether the first-motion polarity is up, down, or undefined by examining the waveform data directly. We first improve the accuracy of automatic P-wave onset picks by maximizing a weighted signal-to-noise ratio over a suite of candidate picks around the automatic pick. We then use the waveform amplitudes before and after the optimized pick as features for the classification. We demonstrate the method's potential by training and testing the classifier on tens of thousands of manual first-motion picks by the Southern California Seismic Network. The classifier assigned the same polarity as chosen by an analyst in more than 94% of the records. We show that the method generalizes to a variety of learning algorithms, including neural networks and random forest classifiers. The method is suitable for automated processing of large seismic waveform datasets, and can potentially be used in real-time applications, e.g. for improving the source characterizations of earthquake early warning algorithms.
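
    The amplitude-window-plus-classifier idea can be sketched with a random forest on synthetic records; the window lengths, noise levels, and onset model are illustrative assumptions (the abstract notes that several learners, including random forests, work):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: amplitude windows around a P-wave pick; "up" records
# step positive after the pick, "down" records step negative
def records(polarity, n=200):
    pre = rng.normal(0, 0.1, (n, 20))              # noise before the pick
    post = polarity * np.abs(rng.normal(1, 0.3, (n, 20))) \
           + rng.normal(0, 0.1, (n, 20))           # signed onset after it
    return np.concatenate([pre, post], axis=1)

X = np.vstack([records(+1), records(-1)])
y = ["up"] * 200 + ["down"] * 200

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(acc)
```

    In the paper, the pick itself is first refined by maximizing a weighted signal-to-noise ratio over candidate onsets, so the feature windows are anchored at a more reliable sample than the raw automatic pick.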

  12. Automatic segmentation of MR brain images of preterm infants using supervised classification.

    Science.gov (United States)

    Moeskops, Pim; Benders, Manon J N L; Chiţ, Sabina M; Kersbergen, Karina J; Groenendaal, Floris; de Vries, Linda S; Viergever, Max A; Išgum, Ivana

    2015-09-01

    Preterm birth is often associated with impaired brain development. The state and expected progression of preterm brain development can be evaluated using quantitative assessment of MR images. Such measurements require accurate segmentation of different tissue types in those images. This paper presents an algorithm for the automatic segmentation of unmyelinated white matter (WM), cortical grey matter (GM), and cerebrospinal fluid in the extracerebral space (CSF). The algorithm uses supervised voxel classification in three subsequent stages. In the first stage, voxels that can easily be assigned to one of the three tissue types are labelled. In the second stage, dedicated analysis of the remaining voxels is performed. The first and the second stages both use two-class classification for each tissue type separately. Possible inconsistencies that could result from these tissue-specific segmentation stages are resolved in the third stage, which performs multi-class classification. A set of T1- and T2-weighted images was analysed, but the optimised system performs automatic segmentation using a T2-weighted image only. We have investigated the performance of the algorithm when using training data randomly selected from completely annotated images as well as when using training data from only partially annotated images. The method was evaluated on images of preterm infants acquired at 30 and 40 weeks postmenstrual age (PMA). When the method was trained using random selection from the completely annotated images, the average Dice coefficients were 0.95 for WM, 0.81 for GM, and 0.89 for CSF on an independent set of images acquired at 30 weeks PMA. When the method was trained using only the partially annotated images, the average Dice coefficients were 0.95 for WM, 0.78 for GM and 0.87 for CSF for the images acquired at 30 weeks PMA, and 0.92 for WM, 0.80 for GM and 0.85 for CSF for the images acquired at 40 weeks PMA. Even though the segmentations obtained using training data

  13. Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions

    Science.gov (United States)

    Huang, Zhi; Fan, Baozheng; Song, Xiaolin

    2018-03-01

    As one of the essential components of environment perception techniques for an intelligent vehicle, lane detection is confronted with challenges including robustness against complicated disturbances and illumination, as well as adaptability to stochastic lane shapes. To overcome these issues, we propose a robust lane detection method that applies a classification-generation-growth-based (CGG) operator to the detected lines, whereby the linear lane markings are identified by synergizing multiple visual cues with a priori knowledge and spatial-temporal information. According to the quality of the linear lane fitting, the linear and linear-parabolic models are dynamically switched to describe the actual lane. A Kalman filter with adaptive noise covariance and region-of-interest (ROI) tracking are applied to improve robustness and efficiency. Experiments were conducted with images covering various challenging scenarios. The experimental results demonstrate the effectiveness of the presented method under complicated disturbances, illumination changes, and stochastic lane shapes.
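
    The tracking step can be illustrated with a toy one-dimensional Kalman filter on noisy lane-offset measurements; the fixed noise covariances and constant-position model below are simplifications, not the paper's adaptive scheme.

```python
import numpy as np

def kalman_1d(zs, q=1e-3, r=0.1):
    """Constant-position model: x_k = x_{k-1} + w, z_k = x_k + v."""
    x, p, out = 0.0, 1.0, []
    for z in zs:
        p += q                  # predict: process noise grows the variance
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with measurement z
        p *= (1 - k)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
truth = 2.0
zs = truth + rng.normal(0, 0.3, 200)   # noisy lane-offset measurements
est = kalman_1d(zs)                    # smoothed estimate converges to truth
```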

  14. Deep Salient Feature Based Anti-Noise Transfer Network for Scene Classification of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Xi Gong

    2018-03-01

    Full Text Available Remote sensing (RS) scene classification is important for RS imagery semantic interpretation. Although tremendous strides have been made in RS scene classification, one of the remaining open challenges is recognizing RS scenes of low quality and high variance (e.g., various scales and noise). This paper proposes a deep salient feature based anti-noise transfer network (DSFATN) method that effectively enhances and explores the high-level features for RS scene classification under different scale and noise conditions. In DSFATN, a novel discriminative deep salient feature (DSF) is introduced by saliency-guided DSF extraction, which conducts a patch-based visual saliency (PBVS) algorithm using “visual attention” mechanisms to guide pre-trained CNNs in producing the discriminative high-level features. Then, an anti-noise network is proposed to learn and enhance the robust and anti-noise structure information of the RS scene by directly propagating the label information to the fully-connected layers. A joint loss is used to train the anti-noise network by integrating an anti-noise constraint and a softmax classification loss. The proposed network architecture can be easily trained with a limited amount of training data. Experiments conducted on three RS scene datasets of different scales show that the DSFATN method achieves excellent performance and great robustness under different scale and noise conditions. It obtains classification accuracies of 98.25%, 98.46%, and 98.80%, respectively, on the UC Merced Land Use Dataset (UCM), the Google image dataset of SIRI-WHU, and the SAT-6 dataset, advancing the state-of-the-art substantially.

  15. Cancer Classification Based on Support Vector Machine Optimized by Particle Swarm Optimization and Artificial Bee Colony.

    Science.gov (United States)

    Gao, Lingyun; Ye, Mingquan; Wu, Changrong

    2017-11-29

    Intelligent optimization algorithms have advantages in dealing with complex nonlinear problems accompanied by good flexibility and adaptability. In this paper, the FCBF (Fast Correlation-Based Feature selection) method is used to filter irrelevant and redundant features in order to improve the quality of cancer classification. Then, we perform classification based on SVM (Support Vector Machine) optimized by PSO (Particle Swarm Optimization) combined with ABC (Artificial Bee Colony) approaches, which is represented as PA-SVM. The proposed PA-SVM method is applied to nine cancer datasets, including five datasets of outcome prediction and a protein dataset of ovarian cancer. By comparison with other classification methods, the results demonstrate the effectiveness and the robustness of the proposed PA-SVM method in handling various types of data for cancer classification.
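
    The hyperparameter search can be sketched with a plain particle swarm optimizing an SVM's C and gamma by cross-validation; the ABC hybridization and FCBF filtering are omitted, and the data set is synthetic rather than a cancer microarray set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
rng = np.random.default_rng(0)

def fitness(pos):                       # pos = (log10 C, log10 gamma)
    clf = SVC(C=10**pos[0], gamma=10**pos[1])
    return cross_val_score(clf, X, y, cv=3).mean()

n, dim = 8, 2
pos = rng.uniform(-3, 3, (n, dim))      # particles in log-parameter space
vel = np.zeros((n, dim))
pbest, pfit = pos.copy(), np.array([fitness(p) for p in pos])
for _ in range(10):
    gbest = pbest[pfit.argmax()]        # swarm-best position
    vel = (0.7 * vel
           + 1.5 * rng.random((n, dim)) * (pbest - pos)
           + 1.5 * rng.random((n, dim)) * (gbest - pos))
    pos = np.clip(pos + vel, -3, 3)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pfit                 # update personal bests
    pbest[better], pfit[better] = pos[better], fit[better]
best = pbest[pfit.argmax()]             # best (log10 C, log10 gamma) found
```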

  16. Robust design optimization using the price of robustness, robust least squares and regularization methods

    Science.gov (United States)

    Bukhari, Hassan J.

    2017-12-01

    In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented, using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbations in parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters allowed to perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation on parameters to reduce sensitivity, in a manner similar to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and one non-linear. The methodology is compared with a prior method using multiple Monte Carlo simulation runs; the comparison shows that the approach presented in this paper yields better performance.
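
    The third approach can be illustrated in a few lines: Tikhonov-style regularization provably damps the solution's sensitivity to data perturbations (toy polynomial design matrix below, not the paper's design problems).

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 20), 6)        # mildly ill-conditioned design
b = A @ np.array([1.0, -2.0, 0.5, 3.0, -1.0, 2.0])

def solve(A, b, lam):
    # minimise ||Ax - b||^2 + lam ||x||^2  =>  (A^T A + lam I) x = A^T b
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

delta = rng.normal(0, 1e-3, b.shape)           # small perturbation of the data
sens = lambda lam: np.linalg.norm(solve(A, b + delta, lam) - solve(A, b, lam))
sens_ls, sens_reg = sens(0.0), sens(1e-2)      # regularisation damps sensitivity
```

    Each singular value σ of A contributes σ/(σ² + λ) to the data-to-solution map, which decreases monotonically in λ, so the regularized solution always moves less under the same perturbation.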

  17. The impact of laser ablation on optical soft tissue differentiation for tissue specific laser surgery-an experimental ex vivo study

    Directory of Open Access Journals (Sweden)

    Stelzle Florian

    2012-06-01

    Full Text Available Abstract Background Optical diffuse reflectance can remotely differentiate various biological tissues. To implement this technique in an optical feedback system to guide laser surgery in a tissue-specific way, the alteration of optical tissue properties by laser ablation has to be taken into account. The aim of this study was to evaluate the general feasibility of optical soft tissue differentiation by diffuse reflectance spectroscopy under the influence of laser ablation, comparing the tissue differentiation results before and after laser intervention. Methods A total of 70 ex vivo tissue samples (5 tissue types) were taken from 14 bisected pig heads. Diffuse reflectance spectra were recorded before and after Er:YAG laser ablation. The spectra were analyzed and differentiated using principal component analysis (PCA), followed by linear discriminant analysis (LDA). To assess the potential of tissue differentiation, the area under the curve (AUC), sensitivity, and specificity were computed for each pair of tissue types before and after laser ablation, and compared to each other. Results Optical tissue differentiation showed good results before laser exposure (total classification error 13.51%). However, the tissue pair nerve and fat yielded a lower AUC of only 0.75. After laser ablation, slightly reduced differentiation results were found, with a total classification error of 16.83%. The tissue pair nerve and fat showed enhanced differentiation (AUC: 0.85). Laser ablation reduced the sensitivity in 50% and the specificity in 80% of the tissue-pair comparisons. The sensitivity of nerve–fat differentiation was enhanced by 35%. Conclusions The observed results show the general feasibility of tissue differentiation by diffuse reflectance spectroscopy even under conditions of tissue alteration by laser ablation. The contrast enhancement for the differentiation between nerve and fat tissue after ablation is assumed to be due to laser removal of the
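
    The analysis chain (PCA for dimensionality reduction, then LDA, then pairwise AUC) can be sketched on synthetic "spectra" standing in for the reflectance data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# two toy tissue classes, 50 samples each, 100 spectral bands
X = np.vstack([rng.normal(0.0, 1, (50, 100)), rng.normal(0.5, 1, (50, 100))])
y = np.repeat([0, 1], 50)

# PCA compresses the spectra, LDA separates the tissue pair
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
model.fit(X, y)
scores = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, scores)          # pairwise differentiation quality
```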

  18. Robust Robot Grasp Detection in Multimodal Fusion

    Directory of Open Access Journals (Sweden)

    Zhang Qiang

    2017-01-01

    Full Text Available Accurate robot grasp detection for model-free objects plays an important role in robotics. With the development of RGB-D sensors, object perception technology has made great progress. Achieving rich feature expression from the colour and the depth data is a critical problem that needs to be addressed in order to accomplish the grasping task. To solve the problem of data fusion, this paper proposes a convolutional neural network (CNN) based approach combining regression and classification. In the CNN model, the colour and the depth modal data are deeply fused together to achieve accurate feature expression. Additionally, the Welsch function is introduced into the approach to enhance the robustness of the training process. Experimental results demonstrate the superiority of the proposed method.

  19. Computational Modeling in Tissue Engineering

    CERN Document Server

    2013-01-01

    One of the major challenges in tissue engineering is the translation of biological knowledge on complex cell and tissue behavior into a predictive and robust engineering process. Mastering this complexity is an essential step towards clinical applications of tissue engineering. This volume discusses computational modeling tools that allow studying the biological complexity in a more quantitative way. More specifically, computational tools can help in:  (i) quantifying and optimizing the tissue engineering product, e.g. by adapting scaffold design to optimize micro-environmental signals or by adapting selection criteria to improve homogeneity of the selected cell population; (ii) quantifying and optimizing the tissue engineering process, e.g. by adapting bioreactor design to improve quality and quantity of the final product; and (iii) assessing the influence of the in vivo environment on the behavior of the tissue engineering product, e.g. by investigating vascular ingrowth. The book presents examples of each...

  20. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Chen Xing

    2016-01-01

    Full Text Available Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features are able to reveal the nonlinear properties exhibited in the data. In this paper, a deep learning method is exploited for feature extraction of hyperspectral data, and the extracted features can provide good discriminability for the classification task. Training a deep network for feature extraction and classification includes unsupervised pretraining and supervised fine-tuning. We utilized the stacked denoise autoencoder (SDAE) method to pretrain the network, which is robust to noise. In the top layer of the network, a logistic regression (LR) approach is utilized to perform supervised fine-tuning and classification. Since sparsity of features might improve the separation capability, we utilized the rectified linear unit (ReLU) as the activation function in the SDAE to extract high-level and sparse features. Experimental results using Hyperion, AVIRIS, and ROSIS hyperspectral data demonstrated that SDAE pretraining in conjunction with LR fine-tuning and classification (SDAE_LR) can achieve higher accuracies than the popular support vector machine (SVM) classifier.
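
    The denoising pretraining idea reduces to one step: corrupt the input and train a network to reconstruct the clean signal. Below, sklearn's MLPRegressor with a linear bottleneck stands in for one layer of the stacked autoencoder, on synthetic low-rank data rather than hyperspectral imagery.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (300, 5)) @ rng.normal(0, 1, (5, 30))  # low-rank "clean" data
X_noisy = X + rng.normal(0, 0.3, X.shape)                   # corrupted input

# train the bottleneck network to map corrupted input back to the clean signal
dae = MLPRegressor(hidden_layer_sizes=(10,), activation='identity',
                   solver='lbfgs', max_iter=2000, random_state=0)
dae.fit(X_noisy, X)
mse_rec = np.mean((dae.predict(X_noisy) - X) ** 2)  # reconstruction error
mse_in = np.mean((X_noisy - X) ** 2)                # corruption level (~0.09)
```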

  1. Determining the Most Robust Dimensional Structure of Categories from the International Classification of Functioning, Disability and Health Across Subgroups of Persons With Spinal Cord Injury to Build the Basis for Future Clinical Measures

    DEFF Research Database (Denmark)

    Ballert, Carolina S; Stucki, Gerold; Biering-Sørensen, Fin

    2014-01-01

    OBJECTIVE: To determine the most robust dimensional structure of the International Classification of Functioning, Disability and Health (ICF) categories relevant to spinal cord injury (SCI) across subgroups of lesion level, health care context, sex, age, and resources of the country. DESIGN: A multidimensional between-item response Rasch model was used. The choice of the dimensions was conceptually driven using the ICF components from the functioning chapters and splits of the activity and participation component described in the ICF. SETTING: Secondary analysis of data from an international, cross… The model fit improvement from the unidimensional to the 2-dimensional and from the 2-dimensional to the 3-dimensional model was significant in all groups (P

  2. The application of the central limit theorem and the law of large numbers to facial soft tissue depths: T-Table robustness and trends since 2008.

    Science.gov (United States)

    Stephan, Carl N

    2014-03-01

    By pooling independent study means (x¯), the T-Tables use the central limit theorem and law of large numbers to average out study-specific sampling bias and instrument errors and, in turn, triangulate upon human population means (μ). Since their first publication in 2008, new data from >2660 adults have been collected (c.30% of the original sample) making a review of the T-Table's robustness timely. Updated grand means show that the new data have negligible impact on the previously published statistics: maximum change = 1.7 mm at gonion; and ≤1 mm at 93% of all landmarks measured. This confirms the utility of the 2008 T-Table as a proxy to soft tissue depth population means and, together with updated sample sizes (8851 individuals at pogonion), earmarks the 2013 T-Table as the premier mean facial soft tissue depth standard for craniofacial identification casework. The utility of the T-Table, in comparison with shorths and 75-shormaxes, is also discussed. © 2013 American Academy of Forensic Sciences.
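
    The pooling logic can be sketched as a sample-size-weighted grand mean of independent study means; the numbers below are made up, and the T-Tables' exact weighting scheme is not reproduced here.

```python
import numpy as np

# hypothetical per-study mean soft tissue depths (mm) and sample sizes
study_means = np.array([10.2, 11.0, 10.6, 10.4])
study_n = np.array([120, 80, 200, 60])

# by the law of large numbers, the weighted grand mean converges on the
# population mean as independent studies accumulate
grand_mean = np.average(study_means, weights=study_n)
```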

  3. Biogeographic classification of the Caspian Sea

    Science.gov (United States)

    Fendereski, F.; Vogt, M.; Payne, M. R.; Lachkar, Z.; Gruber, N.; Salmanmahiny, A.; Hosseini, S. A.

    2014-11-01

    Like other inland seas, the Caspian Sea (CS) has been influenced by climate change and anthropogenic disturbance during recent decades, yet the scientific understanding of this water body remains poor. In this study, an eco-geographical classification of the CS based on physical information derived from space and in situ data is developed and tested against a set of biological observations. We used a two-step classification procedure, consisting of (i) a data reduction with self-organizing maps (SOMs) and (ii) a synthesis of the most relevant features into a reduced number of marine ecoregions using the hierarchical agglomerative clustering (HAC) method. From an initial set of 12 potential physical variables, 6 independent variables were selected for the classification algorithm, i.e., sea surface temperature (SST), bathymetry, sea ice, seasonal variation of sea surface salinity (DSSS), total suspended matter (TSM) and its seasonal variation (DTSM). The classification results reveal a robust separation between the northern and the middle/southern basins as well as a separation of the shallow nearshore waters from those offshore. The observed patterns in ecoregions can be attributed to differences in climate and geochemical factors such as distance from rivers, water depth and currents. A comparison of the annual and monthly mean Chl a concentrations between the different ecoregions shows significant differences (one-way ANOVA). A qualitative evaluation of differences in community composition based on recorded presence-absence patterns of 25 different species of plankton, fish and benthic invertebrates also confirms the relevance of the ecoregions as proxies for habitats with common biological characteristics.
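
    The two-step procedure can be sketched with k-means standing in for the SOM data-reduction step (scikit-learn has no SOM), followed by hierarchical agglomerative clustering of the reduced prototypes; the data are synthetic.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

rng = np.random.default_rng(0)
# toy "pixels" with 6 physical variables, drawn from 3 latent regions
X = np.vstack([rng.normal(m, 0.3, (200, 6)) for m in (0.0, 1.5, 3.0)])

# step (i): reduce the data to a small set of prototypes
proto = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X)

# step (ii): merge the prototypes into a few "ecoregions"
regions = AgglomerativeClustering(n_clusters=3).fit_predict(
    proto.cluster_centers_)
pixel_region = regions[proto.labels_]   # map every pixel to its ecoregion
```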

  4. Exploratory Study of 4D versus 3D Robust Optimization in Intensity Modulated Proton Therapy for Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wei, E-mail: Liu.Wei@mayo.edu [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States); Schild, Steven E. [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States); Chang, Joe Y.; Liao, Zhongxing [Department of Radiation Oncology, the University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Chang, Yu-Hui [Division of Health Sciences Research, Mayo Clinic Arizona, Phoenix, Arizona (United States); Wen, Zhifei [Department of Radiation Physics, the University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Shen, Jiajian; Stoker, Joshua B.; Ding, Xiaoning; Hu, Yanle [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States); Sahoo, Narayan [Department of Radiation Physics, the University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Herman, Michael G. [Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, Minnesota (United States); Vargas, Carlos; Keole, Sameer; Wong, William; Bues, Martin [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States)

    2016-05-01

    Purpose: The purpose of this study was to compare the impact of uncertainties and interplay on 3-dimensional (3D) and 4D robustly optimized intensity modulated proton therapy (IMPT) plans for lung cancer in an exploratory methodology study. Methods and Materials: IMPT plans were created for 11 nonrandomly selected non-small cell lung cancer (NSCLC) cases: 3D robustly optimized plans on average CTs with internal gross tumor volume density overridden to irradiate internal target volume, and 4D robustly optimized plans on 4D computed tomography (CT) to irradiate clinical target volume (CTV). Regular fractionation (66 Gy [relative biological effectiveness; RBE] in 33 fractions) was considered. In 4D optimization, the CTV of individual phases received nonuniform doses to achieve a uniform cumulative dose. The root-mean-square dose-volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curve (AUCs) were used to evaluate plan robustness. Dose evaluation software modeled time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Dose-volume histogram (DVH) indices comparing CTV coverage, homogeneity, and normal tissue sparing were evaluated using the Wilcoxon signed rank test. Results: 4D robust optimization plans led to a smaller AUC for CTV (14.26 vs 18.61, respectively; P=.001), better CTV coverage (Gy [RBE]) (D95% CTV: 60.6 vs 55.2, respectively; P=.001), and better CTV homogeneity (D5%-D95% CTV: 10.3 vs 17.7, respectively; P=.002) in the face of uncertainties. With interplay effect considered, 4D robust optimization produced plans with better target coverage (D95% CTV: 64.5 vs 63.8, respectively; P=.0068), comparable target homogeneity, and comparable normal tissue protection. The benefits from 4D robust optimization were most obvious for the 2 typical stage III lung cancer patients.
Conclusions: Our exploratory methodology study showed

  5. A robust and efficient approach to detect 3D rectal tubes from CT colonography

    Energy Technology Data Exchange (ETDEWEB)

    Yang Xiaoyun; Slabaugh, Greg [Medicsight PLC, Kensington Centre, 66 Hammersmith Road, London (United Kingdom)

    2011-11-15

    Purpose: The rectal tube (RT) is a common source of false positives (FPs) in computer-aided detection (CAD) systems for CT colonography. A robust and efficient detection of the RT can improve CAD performance by eliminating such "obvious" FPs and increasing radiologists' confidence in CAD. Methods: In this paper, we present a novel and robust bottom-up approach to detect the RT. Probabilistic models, trained using kernel density estimation on simple low-level features, are employed to rank and select the most likely RT candidate on each axial slice. Then, a shape model, robustly estimated using random sample consensus (RANSAC), infers the global RT path from the selected local detections. Subimages around the RT path are projected into a subspace formed from training subimages of the RT. A quadratic discriminant analysis (QDA) provides a classification of a subimage as RT or non-RT based on the projection. Finally, a bottom-top clustering method is proposed to merge the classification predictions together to locate the tip position of the RT. Results: Our method is validated using a diverse database, including data from five hospitals. On a test set of 21 patients (42 volumes), 99.5% of annotated RT paths were successfully detected. Evaluated with CAD, 98.4% of FPs caused by the RT were detected and removed without any loss of sensitivity. Conclusions: The proposed method demonstrates a high detection rate of the RT path, and when tested in a CAD system, reduces FPs caused by the RT without loss of sensitivity.
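
    The RANSAC shape-estimation step, in miniature: fit a line to 2-D points contaminated with gross outliers, using sklearn's RANSACRegressor on toy data rather than CT detections.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)  # inliers along a line
y[::10] += rng.uniform(5, 10, 10)            # 10% gross outliers

# RANSAC repeatedly fits on random subsets and keeps the consensus model,
# so the recovered slope ignores the outliers
model = RANSACRegressor(random_state=0).fit(x[:, None], y)
slope = model.estimator_.coef_[0]
```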

  6. A robust and efficient approach to detect 3D rectal tubes from CT colonography

    International Nuclear Information System (INIS)

    Yang Xiaoyun; Slabaugh, Greg

    2011-01-01

    Purpose: The rectal tube (RT) is a common source of false positives (FPs) in computer-aided detection (CAD) systems for CT colonography. A robust and efficient detection of the RT can improve CAD performance by eliminating such "obvious" FPs and increasing radiologists' confidence in CAD. Methods: In this paper, we present a novel and robust bottom-up approach to detect the RT. Probabilistic models, trained using kernel density estimation on simple low-level features, are employed to rank and select the most likely RT candidate on each axial slice. Then, a shape model, robustly estimated using random sample consensus (RANSAC), infers the global RT path from the selected local detections. Subimages around the RT path are projected into a subspace formed from training subimages of the RT. A quadratic discriminant analysis (QDA) provides a classification of a subimage as RT or non-RT based on the projection. Finally, a bottom-top clustering method is proposed to merge the classification predictions together to locate the tip position of the RT. Results: Our method is validated using a diverse database, including data from five hospitals. On a test set of 21 patients (42 volumes), 99.5% of annotated RT paths were successfully detected. Evaluated with CAD, 98.4% of FPs caused by the RT were detected and removed without any loss of sensitivity. Conclusions: The proposed method demonstrates a high detection rate of the RT path, and when tested in a CAD system, reduces FPs caused by the RT without loss of sensitivity.

  7. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology

    Science.gov (United States)

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties that are observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so that the phenotype stability of a biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of the functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found that the phenotype robustness criterion for synthetic gene networks is the following: if intrinsic robustness + genetic robustness + environmental robustness ≤ network robustness, then the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through the corresponding phenotype robustness criteria from the systematic point of view. Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental

  8. New decision support tool for acute lymphoblastic leukemia classification

    Science.gov (United States)

    Madhukar, Monica; Agaian, Sos; Chronopoulos, Anthony T.

    2012-03-01

    In this paper, we build a new decision support tool to improve treatment intensity choice in childhood ALL. The developed system includes different methods to accurately measure cell properties in microscope blood film images. The blood images are subjected to a series of pre-processing steps, which include color correction and contrast enhancement. By performing K-means clustering on the resultant images, the nuclei of the cells under consideration are obtained. Shape features and texture features are then extracted for classification. The system is further tested on the classification of spectra measured from the cell nuclei in blood samples in order to distinguish normal cells from those affected by Acute Lymphoblastic Leukemia. The results show that the proposed system robustly segments and classifies acute lymphoblastic leukemia based on complete microscopic blood images.
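
    The K-means segmentation step, in miniature: cluster pixel intensities so the dark "nucleus" pixels separate from the bright background (synthetic image below, not a blood film).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
img = rng.normal(0.9, 0.03, (64, 64))                 # bright background
img[20:30, 20:30] = rng.normal(0.2, 0.03, (10, 10))   # dark "nucleus"

# cluster the per-pixel intensities into two groups
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    img.reshape(-1, 1)).reshape(img.shape)
nucleus = labels == labels[25, 25]    # the cluster containing the dark patch
```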

  9. Current concepts in non-gastrointestinal stromal tumor soft tissue sarcomas: A primer for radiologists

    Energy Technology Data Exchange (ETDEWEB)

    Baheti, Akahay D. [Dept. of Radiology, Tata Memorial Centre, Mumbai (India); Tirumani, Harika [Dept. of Radiology, University of Arkansas for Medical Sciences, Little Rock (United States); O' Neill, Alibhe; Jagannathan, Jyothi P. [Dept. of Imaging, Dana-Farber Cancer Institute, Boston (United States)

    2017-01-15

    Non-gastrointestinal stromal tumor (GIST) soft tissue sarcomas (STSs) are a heterogeneous group of neoplasms whose classification and management continue to evolve with better understanding of their biologic behavior. In 2013, the World Health Organization (WHO) revised its classification based on new immunohistochemical and cytogenetic data. In this article, we provide a brief overview of the revised WHO classification of soft tissue tumors, discuss in detail the radiology and management of the two most common adult non-GIST STSs, namely liposarcoma and leiomyosarcoma, and review some of the emerging histology-driven targeted therapies in non-GIST STS, focusing on the role of the radiologist.

  10. Current concepts in non-gastrointestinal stromal tumor soft tissue sarcomas: A primer for radiologists

    International Nuclear Information System (INIS)

    Baheti, Akahay D.; Tirumani, Harika; O'Neill, Alibhe; Jagannathan, Jyothi P.

    2017-01-01

    Non-gastrointestinal stromal tumor (GIST) soft tissue sarcomas (STSs) are a heterogeneous group of neoplasms whose classification and management continue to evolve with better understanding of their biologic behavior. In 2013, the World Health Organization (WHO) revised its classification based on new immunohistochemical and cytogenetic data. In this article, we provide a brief overview of the revised WHO classification of soft tissue tumors, discuss in detail the radiology and management of the two most common adult non-GIST STSs, namely liposarcoma and leiomyosarcoma, and review some of the emerging histology-driven targeted therapies in non-GIST STS, focusing on the role of the radiologist.

  11. Automated Feature Design for Time Series Classification by Genetic Programming

    OpenAIRE

    Harvey, Dustin Yewell

    2014-01-01

    Time series classification (TSC) methods discover and exploit patterns in time series and other one-dimensional signals. Although many accurate, robust classifiers exist for multivariate feature sets, general approaches are needed to extend machine learning techniques to make use of signal inputs. Numerous applications of TSC can be found in structural engineering, especially in the areas of structural health monitoring and non-destructive evaluation. Additionally, the fields of process contr...

  12. Omnidirectional regeneration (ODR) of proximity sensor signals for robust diagnosis of journal bearing systems

    Science.gov (United States)

    Jung, Joon Ha; Jeon, Byung Chul; Youn, Byeng D.; Kim, Myungyon; Kim, Donghwan; Kim, Yeonwhan

    2017-06-01

    Some anomaly states of journal bearing rotor systems are direction-oriented (e.g., rubbing, misalignment). In these situations, vibration signals vary according to the direction of the sensors and the health state, which makes diagnosis difficult with traditional methods. This paper proposes an omnidirectional regeneration (ODR) method to develop a robust diagnosis algorithm for rotor systems. The proposed method can generate vibration signals in arbitrary directions without using extra sensors. In this method, signals are generated around the entire circumference of the rotor to consider all possible directions. Then, the directionality of each state is proved mathematically and is evaluated using a proposed metric. When a directional state is determined, the classification is carried out on all of the generated signals. When a non-directional state is found, the classification is performed on only one of the generated signals to minimize computational load without sacrificing accuracy. The proposed ODR method was validated using experimental data. The classification results show that the proposed method generally outperforms the conventional classification method. The results support the proposed concept of using ODR signals in diagnosis procedures for journal bearing systems.
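
    The core regeneration idea, as we read it: with two orthogonal probes measuring shaft motion x(t) and y(t), the vibration seen from any angle theta is the projection x·cos(theta) + y·sin(theta), so signals can be generated around the entire circumference without extra sensors. The orbit below is synthetic; the paper's exact formulation may differ.

```python
import numpy as np

t = np.linspace(0, 1, 1000)
x = np.cos(2 * np.pi * 10 * t)          # horizontal proximity probe
y = 0.3 * np.sin(2 * np.pi * 10 * t)    # vertical probe (elliptical orbit)

def regenerate(theta):
    # signal "seen" by a virtual sensor at angle theta
    return x * np.cos(theta) + y * np.sin(theta)

# peak-to-peak amplitude varies with angle: the state is direction-oriented
amps = [float(np.ptp(regenerate(th))) for th in np.linspace(0, np.pi, 180)]
```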

  13. Immunophenotype Discovery, Hierarchical Organization, and Template-based Classification of Flow Cytometry Samples

    Directory of Open Access Journals (Sweden)

    Ariful Azad

    2016-08-01

    Full Text Available We describe algorithms for discovering immunophenotypes from large collections of flow cytometry (FC) samples, and for using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (groups of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust, since templates describe phenotypic signatures common to cell populations in several samples, while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several data sets, including one representing a healthy immune system and one of Acute Myeloid Leukemia (AML) samples. The latter task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML, and were able to distinguish Acute Promyelocytic Leukemia from other subtypes of AML.

  14. NIM: A Node Influence Based Method for Cancer Classification

    Directory of Open Access Journals (Sweden)

    Yiwen Wang

    2014-01-01

    Full Text Available The classification of different cancer types is of great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it has become possible to classify different kinds of cancers using DNA microarrays. Our main idea is to confront the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we propose, this paper presents a novel high-accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of the training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to every class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART.
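    The four steps can be sketched as follows. The abstract does not specify the similarity measure or the node influence model, so the RBF kernel and the normalized-total-similarity influence below are assumptions standing in for the paper's definitions.

```python
import numpy as np

def rbf_similarity(X, gamma=0.5):
    """Step 1: similarity matrix of all samples (RBF kernel is an assumption)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def node_influence(S_train):
    """Step 2: influence of each training sample; here simply its normalized
    total similarity to the training set, a stand-in for the paper's model."""
    infl = S_train.sum(axis=1)
    return infl / infl.sum()

def classify(X_train, y_train, x_test, gamma=0.5):
    """Steps 3-4: score each class by the influence-weighted sum of the test
    sample's similarities to that class's training samples, then take argmax."""
    infl = node_influence(rbf_similarity(X_train, gamma))
    s_test = np.exp(-gamma * ((X_train - x_test) ** 2).sum(-1))
    classes = np.unique(y_train)
    scores = [(infl[y_train == c] * s_test[y_train == c]).sum() for c in classes]
    return classes[int(np.argmax(scores))]
```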

  15. A hybrid ensemble learning approach to star-galaxy classification

    Science.gov (United States)

    Kim, Edward J.; Brunner, Robert J.; Carrasco Kind, Matias

    2015-10-01

    There exist a variety of star-galaxy classification techniques, each with its own strengths and weaknesses. In this paper, we present a novel meta-classification framework that combines and fully exploits different techniques to produce a more robust star-galaxy classification. To demonstrate this hybrid, ensemble approach, we combine a purely morphological classifier, a supervised machine learning method based on random forest, an unsupervised machine learning method based on self-organizing maps, and a hierarchical Bayesian template-fitting method. Using data from the CFHTLenS survey (Canada-France-Hawaii Telescope Lensing Survey), we consider different scenarios: when a high-quality training set is available with spectroscopic labels from DEEP2 (Deep Extragalactic Evolutionary Probe Phase 2), SDSS (Sloan Digital Sky Survey), VIPERS (VIMOS Public Extragalactic Redshift Survey), and VVDS (VIMOS VLT Deep Survey), and when the demographics of sources in a low-quality training set do not match the demographics of objects in the test data set. We demonstrate that our Bayesian combination technique improves the overall performance over any individual classification method in these scenarios. Thus, strategies that combine the predictions of different classifiers may prove to be optimal in currently ongoing and forthcoming photometric surveys, such as the Dark Energy Survey and the Large Synoptic Survey Telescope.
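    The combination principle can be sketched with a simple log-linear pooling of the individual classifiers' star/galaxy posteriors. This is a hedged stand-in: the paper's hierarchical Bayesian combination is richer than the weighted product rule shown here, and the weights are assumptions.

```python
import numpy as np

def combine(posteriors, weights=None):
    """Pool posterior probabilities from several classifiers with a weighted
    product rule (log-linear pooling); illustrative only, not the paper's
    full Bayesian combination model."""
    P = np.asarray(posteriors, dtype=float)       # (n_classifiers, n_classes)
    w = np.ones(len(P)) if weights is None else np.asarray(weights, dtype=float)
    logp = (w[:, None] * np.log(np.clip(P, 1e-12, 1.0))).sum(axis=0)
    p = np.exp(logp - logp.max())                 # subtract max for stability
    return p / p.sum()
```

    With equal weights, classifiers that agree reinforce each other while a dissenting classifier tempers the pooled posterior.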

  16. Land Cover Classification from Multispectral Data Using Computational Intelligence Tools: A Comparative Study

    Directory of Open Access Journals (Sweden)

    André Mora

    2017-11-01

    Full Text Available This article discusses how computational intelligence techniques are applied to fuse spectral images into a higher-level image of land cover distribution for remote sensing, specifically for satellite image classification. We compare a fuzzy-inference method with two other computational intelligence methods, decision trees and neural networks, using a case study of land cover classification from satellite images. Further, an unsupervised approach based on k-means clustering was also considered for comparison. The fuzzy-inference method includes training the classifier with a fuzzy-fusion technique and then performing land cover classification using reinforcement aggregation operators. To assess the robustness of the four methods, a comparative study including three years of land cover maps for the district of Mandimba, Niassa province, Mozambique, was undertaken. Our results show that the fuzzy-fusion method performs similarly to decision trees, achieving reliable classifications; neural networks suffer from overfitting; while k-means clustering constitutes a promising technique to identify land cover types from unknown areas.
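    The unsupervised baseline can be sketched with a minimal k-means over pixel spectra, where each row is one pixel's band values. The deterministic farthest-point initialization is an illustrative choice, not the study's implementation.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means for unsupervised land-cover clustering of pixel spectra."""
    centers = [X[0]]
    for _ in range(1, k):                          # farthest-point initialization
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):                         # Lloyd iterations
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers
```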

  17. Material Classification Using Raw Time-of-Flight Measurements

    KAUST Repository

    Su, Shuochen

    2016-12-13

    We propose a material classification method using raw time-of-flight (ToF) measurements. ToF cameras capture the correlation between a reference signal and the temporal response of a material to incident illumination. Such measurements encode unique signatures of the material, i.e., the degree of subsurface scattering inside a volume. They thus offer an orthogonal domain of feature representation compared to conventional spatial and angular reflectance-based approaches. We demonstrate the effectiveness, robustness, and efficiency of our method through experiments and comparisons of real-world materials.

  18. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  19. Robustness analysis of a green chemistry-based model for the classification of silver nanoparticles synthesis processes

    Science.gov (United States)

    This paper proposes a robustness analysis based on Multiple Criteria Decision Aiding (MCDA). The ensuing model was used to assess the implementation of green chemistry principles in the synthesis of silver nanoparticles. Its recommendations were also compared to an earlier develo...

  20. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The results of the research and the original contribution to knowledge is a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current practice review was performed. From the review two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis have offered a new perspective on robust design by merging robust design...

  1. Mass Spectrometry Imaging for the Classification of Tumor Tissue

    NARCIS (Netherlands)

    Mascini, N.E.

    2016-01-01

    Mass spectrometry imaging (MSI) can detect and identify many different molecules without the need for labeling. In addition, it can provide their spatial distributions as ‘molecular maps’. These features make MSI well suited for studying the molecular makeup of tumor tissue. Currently, there is an

  2. Magnetic resonance imaging texture analysis classification of primary breast cancer

    International Nuclear Information System (INIS)

    Waugh, S.A.; Lerski, R.A.; Purdie, C.A.; Jordan, L.B.; Vinnicombe, S.; Martin, P.; Thompson, A.M.

    2016-01-01

    Patient-tailored treatments for breast cancer are based on histological and immunohistochemical (IHC) subtypes. Magnetic Resonance Imaging (MRI) texture analysis (TA) may be useful in non-invasive lesion subtype classification. Women with newly diagnosed primary breast cancer underwent pre-treatment dynamic contrast-enhanced breast MRI. TA was performed using co-occurrence matrix (COM) features, by creating a model on retrospective training data, then prospectively applying it to a test set. Analyses were blinded to breast pathology. Subtype classifications were performed using a cross-validated k-nearest-neighbour (k = 3) technique, with accuracy relative to pathology assessed and the area under the receiver operating characteristic curve (AUROC) calculated. Mann-Whitney U and Kruskal-Wallis tests were used to assess raw entropy feature values. Histological subtype classifications were similar across training (n = 148 cancers) and test sets (n = 73 lesions) using all COM features (training: 75 %, AUROC = 0.816; test: 72.5 %, AUROC = 0.823). Entropy features were significantly different between lobular and ductal cancers (p < 0.001; Mann-Whitney U). IHC classifications using COM features were also similar for training and test data (training: 57.2 %, AUROC = 0.754; test: 57.0 %, AUROC = 0.750). Hormone receptor positive and negative cancers demonstrated significantly different entropy features. Entropy features alone were unable to create a robust classification model. Textural differences on contrast-enhanced MR images may reflect underlying lesion subtypes, which merits testing against treatment response. (orig.)
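    The texture pipeline (co-occurrence features plus a k = 3 nearest-neighbour vote) can be sketched as below. The single pixel offset, gray-level quantization, and single entropy feature are simplified assumptions; the study uses a fuller COM feature set.

```python
import numpy as np

def com(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    C = np.zeros((levels, levels))
    h, w = img.shape
    for yy in range(h - dy):
        for xx in range(w - dx):
            C[img[yy, xx], img[yy + dy, xx + dx]] += 1
    return C / C.sum()

def com_entropy(C):
    """Entropy of the co-occurrence matrix, the feature the abstract highlights."""
    p = C[C > 0]
    return float(-(p * np.log2(p)).sum())

def knn3(train_feats, train_labels, test_feat):
    """k = 3 nearest-neighbour vote over texture-feature vectors."""
    d = np.linalg.norm(train_feats - test_feat, axis=1)
    nn = np.argsort(d)[:3]
    vals, counts = np.unique(train_labels[nn], return_counts=True)
    return vals[np.argmax(counts)]
```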

  3. Magnetic resonance imaging texture analysis classification of primary breast cancer

    Energy Technology Data Exchange (ETDEWEB)

    Waugh, S.A.; Lerski, R.A. [Ninewells Hospital and Medical School, Department of Medical Physics, Dundee (United Kingdom); Purdie, C.A.; Jordan, L.B. [Ninewells Hospital and Medical School, Department of Pathology, Dundee (United Kingdom); Vinnicombe, S. [University of Dundee, Division of Imaging and Technology, Ninewells Hospital and Medical School, Dundee (United Kingdom); Martin, P. [Ninewells Hospital and Medical School, Department of Clinical Radiology, Dundee (United Kingdom); Thompson, A.M. [University of Texas MD Anderson Cancer Center, Department of Surgical Oncology, Houston, TX (United States)

    2016-02-15

    Patient-tailored treatments for breast cancer are based on histological and immunohistochemical (IHC) subtypes. Magnetic Resonance Imaging (MRI) texture analysis (TA) may be useful in non-invasive lesion subtype classification. Women with newly diagnosed primary breast cancer underwent pre-treatment dynamic contrast-enhanced breast MRI. TA was performed using co-occurrence matrix (COM) features, by creating a model on retrospective training data, then prospectively applying it to a test set. Analyses were blinded to breast pathology. Subtype classifications were performed using a cross-validated k-nearest-neighbour (k = 3) technique, with accuracy relative to pathology assessed and the area under the receiver operating characteristic curve (AUROC) calculated. Mann-Whitney U and Kruskal-Wallis tests were used to assess raw entropy feature values. Histological subtype classifications were similar across training (n = 148 cancers) and test sets (n = 73 lesions) using all COM features (training: 75 %, AUROC = 0.816; test: 72.5 %, AUROC = 0.823). Entropy features were significantly different between lobular and ductal cancers (p < 0.001; Mann-Whitney U). IHC classifications using COM features were also similar for training and test data (training: 57.2 %, AUROC = 0.754; test: 57.0 %, AUROC = 0.750). Hormone receptor positive and negative cancers demonstrated significantly different entropy features. Entropy features alone were unable to create a robust classification model. Textural differences on contrast-enhanced MR images may reflect underlying lesion subtypes, which merits testing against treatment response. (orig.)

  4. Research on improving image recognition robustness by combining multiple features with associative memory

    Science.gov (United States)

    Guo, Dongwei; Wang, Zhe

    2018-05-01

    Convolutional neural networks (CNNs) have achieved great success in computer vision: they learn hierarchical representations from raw pixels and show outstanding performance in various image recognition tasks [1]. However, CNNs are easily fooled, in that it is possible to produce images totally unrecognizable to human eyes that CNNs believe, with near certainty, are familiar objects [2]. In this paper, an associative memory model based on multiple features is proposed. Within this model, feature extraction and classification are carried out by a CNN, t-SNE, and an exponential bidirectional associative memory neural network (EBAM). The geometric features extracted by the CNN and the digital features extracted by t-SNE are associated through the EBAM, so that recognition robustness rests on a comprehensive assessment of the two features. With this model we obtain an error rate of only 8% on fraudulent data. In systems that require a high safety factor, and in other critical areas, strong robustness is extremely important: if image recognition can be made robust, network security and production efficiency will be greatly improved.
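    The association step can be illustrated with a plain bidirectional associative memory (BAM): store bipolar pattern pairs in a correlation matrix and recall by bouncing between the two feature spaces. This is a simplified stand-in for the paper's exponential BAM (EBAM), and the toy patterns below are illustrative.

```python
import numpy as np

def bam_store(pairs):
    """Correlation-matrix storage of bipolar (+1/-1) pattern pairs."""
    return sum(np.outer(x, y) for x, y in pairs)

def bam_recall(W, x, steps=5):
    """Iterate x -> y -> x through the weight matrix, recovering the stored
    association even from a corrupted input (for small, well-separated sets)."""
    x = np.asarray(x, dtype=float)
    for _ in range(steps):
        y = np.sign(x @ W); y[y == 0] = 1.0
        x = np.sign(W @ y); x[x == 0] = 1.0
    return x, y
```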

  5. The 2017 international classification of the Ehlers-Danlos syndromes

    DEFF Research Database (Denmark)

    Malfait, Fransiska; Francomano, Clair; Byers, Peter H

    2017-01-01

    The Ehlers-Danlos syndromes (EDS) are a clinically and genetically heterogeneous group of heritable connective tissue disorders (HCTDs) characterized by joint hypermobility, skin hyperextensibility, and tissue fragility. Over the past two decades, the Villefranche Nosology, which delineated six s...... that the revised International EDS Classification will serve as a new standard for the diagnosis of EDS and will provide a framework for future research purposes. © 2017 Wiley Periodicals, Inc......., and mutations have been identified in an array of novel genes. The International EDS Consortium proposes a revised EDS classification, which recognizes 13 subtypes. For each of the subtypes, we propose a set of clinical criteria that are suggestive for the diagnosis. However, in view of the vast genetic...... revised the clinical criteria for hypermobile EDS in order to allow for a better distinction from other joint hypermobility disorders. To satisfy research needs, we also propose a pathogenetic scheme, that regroups EDS subtypes for which the causative proteins function within the same pathway. We hope...

  6. Using genetically modified tomato crop plants with purple leaves for absolute weed/crop classification.

    Science.gov (United States)

    Lati, Ran N; Filin, Sagi; Aly, Radi; Lande, Tal; Levin, Ilan; Eizenberg, Hanan

    2014-07-01

    Weed/crop classification is considered the main problem in developing precise weed-management methodologies, because both crops and weeds share similar hues. Great effort has been invested in the development of classification models, most based on expensive sensors and complicated algorithms. However, satisfactory results are not consistently obtained due to imaging conditions in the field. We report on an innovative approach that combines advances in genetic engineering and robust image-processing methods to detect weeds and distinguish them from crop plants by manipulating the crop's leaf color. We demonstrate this on genetically modified tomato (germplasm AN-113), which expresses a purple leaf color. Autonomous weed/crop classification is performed using an invariant-hue transformation that is applied to images acquired by a standard consumer camera (visible wavelength) and that handles variations in illumination intensity. The integration of these methodologies is simple and effective, and classification results were accurate and stable under a wide range of imaging conditions. Using this approach, we simplify the most complicated stage in image-based weed/crop classification models. © 2013 Society of Chemical Industry.
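    The hue-based idea can be sketched as follows: convert each pixel to HSV and keep only pixels whose hue falls in a purple band, since hue is comparatively invariant to illumination intensity. The hue band and saturation threshold below are illustrative assumptions, not the paper's calibrated values.

```python
import colorsys
import numpy as np

def purple_mask(rgb):
    """Mark pixels as crop (purple) vs other using hue; rgb is an (H, W, 3)
    array with channel values in 0-255."""
    mask = np.zeros(rgb.shape[:2], dtype=bool)
    for i in range(rgb.shape[0]):
        for j in range(rgb.shape[1]):
            h, s, v = colorsys.rgb_to_hsv(*(rgb[i, j] / 255.0))
            mask[i, j] = 0.70 <= h <= 0.92 and s > 0.2   # assumed purple band
    return mask
```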

  7. Cancer classification in the genomic era: five contemporary problems.

    Science.gov (United States)

    Song, Qingxuan; Merajver, Sofia D; Li, Jun Z

    2015-10-19

    Classification is an everyday instinct as well as a full-fledged scientific discipline. Throughout the history of medicine, disease classification is central to how we develop knowledge, make diagnoses, and assign treatment. Here, we discuss the classification of cancer and the process of categorizing cancer subtypes based on their observed clinical and biological features. Traditionally, cancer nomenclature is primarily based on organ location, e.g., "lung cancer" designates a tumor originating in lung structures. Within each organ-specific major type, finer subgroups can be defined based on patient age, cell type, histological grades, and sometimes molecular markers, e.g., hormonal receptor status in breast cancer or microsatellite instability in colorectal cancer. In the past 15+ years, high-throughput technologies have generated rich new data regarding somatic variations in DNA, RNA, protein, or epigenomic features for many cancers. These data, collected for increasingly large tumor cohorts, have provided not only new insights into the biological diversity of human cancers but also exciting opportunities to discover previously unrecognized cancer subtypes. Meanwhile, the unprecedented volume and complexity of these data pose significant challenges for biostatisticians, cancer biologists, and clinicians alike. Here, we review five related issues that represent contemporary problems in cancer taxonomy and interpretation. (1) How many cancer subtypes are there? (2) How can we evaluate the robustness of a new classification system? (3) How are classification systems affected by intratumor heterogeneity and tumor evolution? (4) How should we interpret cancer subtypes? (5) Can multiple classification systems co-exist? While related issues have existed for a long time, we will focus on those aspects that have been magnified by the recent influx of complex multi-omics data. Exploration of these problems is essential for data-driven refinement of cancer classification.

  8. Exploiting unsupervised and supervised classification for segmentation of the pathological lung in CT

    International Nuclear Information System (INIS)

    Korfiatis, P; Costaridou, L; Kalogeropoulou, C; Petsas, T; Daoussis, D; Adonopoulos, A

    2009-01-01

    Delineation of lung fields in the presence of diffuse lung parenchyma diseases (DLPDs), such as interstitial pneumonias (IP), challenges segmentation algorithms. To deal with IP patterns affecting the lung border, an automated image texture classification scheme is proposed. The proposed segmentation scheme is based on supervised texture classification between lung tissue (normal and abnormal) and surrounding tissue (pleura and thoracic wall) in the lung border region. This region is coarsely defined around an initial estimate of the lung border, provided by means of Markov Random Field modeling and morphological operations. Subsequently, a support vector machine classifier was trained to distinguish between the above two classes of tissue, using textural features of the gray-scale and wavelet domains. Seventeen patients diagnosed with IP secondary to connective tissue diseases were examined. Segmentation performance in terms of overlap was 0.924±0.021, and for shape differentiation the mean, rms and maximum distances were 1.663±0.816, 2.334±1.574 and 8.0515±6.549 mm, respectively. An accurate, automated scheme is proposed for segmenting abnormal lung fields in HRCT affected by IP.
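    The tissue-versus-surroundings step can be sketched with a tiny linear SVM trained by Pegasos-style sub-gradient descent on texture-feature vectors. This is a minimal stand-in under assumed hyperparameters; the study trains a full SVM on gray-scale and wavelet texture features.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.1, epochs=2000):
    """Pegasos-style sub-gradient descent for a linear SVM; y must be in
    {-1, +1}. The bias is folded in as a constant feature."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xa.shape[1])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(Xa, y):
            t += 1
            eta = 1.0 / (lam * t)
            w *= (1.0 - eta * lam)            # regularization shrink
            if yi * (xi @ w) < 1.0:           # hinge-loss sub-gradient step
                w += eta * yi * xi
            norm = np.linalg.norm(w)          # projection onto the Pegasos ball
            radius = 1.0 / np.sqrt(lam)
            if norm > radius:
                w *= radius / norm
    return w

def predict(w, X):
    Xa = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xa @ w)
```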

  9. Exploiting unsupervised and supervised classification for segmentation of the pathological lung in CT

    Science.gov (United States)

    Korfiatis, P.; Kalogeropoulou, C.; Daoussis, D.; Petsas, T.; Adonopoulos, A.; Costaridou, L.

    2009-07-01

    Delineation of lung fields in the presence of diffuse lung parenchyma diseases (DLPDs), such as interstitial pneumonias (IP), challenges segmentation algorithms. To deal with IP patterns affecting the lung border, an automated image texture classification scheme is proposed. The proposed segmentation scheme is based on supervised texture classification between lung tissue (normal and abnormal) and surrounding tissue (pleura and thoracic wall) in the lung border region. This region is coarsely defined around an initial estimate of the lung border, provided by means of Markov Random Field modeling and morphological operations. Subsequently, a support vector machine classifier was trained to distinguish between the above two classes of tissue, using textural features of the gray-scale and wavelet domains. Seventeen patients diagnosed with IP secondary to connective tissue diseases were examined. Segmentation performance in terms of overlap was 0.924±0.021, and for shape differentiation the mean, rms and maximum distances were 1.663±0.816, 2.334±1.574 and 8.0515±6.549 mm, respectively. An accurate, automated scheme is proposed for segmenting abnormal lung fields in HRCT affected by IP.

  10. Principal coordinate analysis assisted chromatographic analysis of bacterial cell wall collection: A robust classification approach.

    Science.gov (United States)

    Kumar, Keshav; Cava, Felipe

    2018-04-10

    In the present work, principal coordinate analysis (PCoA) is introduced to develop a robust model to classify chromatographic data sets of peptidoglycan samples. PCoA captures the heterogeneity present in the data sets by using the dissimilarity matrix as input. Thus, in principle, it can capture even the subtle differences in bacterial peptidoglycan composition, and can provide a more robust and fast approach for classifying bacterial collections and identifying novel cell wall targets for further biological and clinical studies. The utility of the proposed approach is successfully demonstrated by analysing two different kinds of bacterial collections. The first set comprised peptidoglycan samples belonging to different subclasses of Alphaproteobacteria. The second set, which is relatively more intricate for chemometric analysis, consists of different wild-type Vibrio cholerae strains and mutants having subtle differences in their peptidoglycan composition. The present work proposes a useful approach that can classify chromatographic data sets of peptidoglycan samples having subtle differences, and it suggests that PCoA can be a method of choice in any data analysis workflow. Copyright © 2018 Elsevier Inc. All rights reserved.
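    PCoA itself is classical multidimensional scaling: double-center the squared dissimilarity matrix and embed the samples along the leading eigenvectors. A minimal sketch:

```python
import numpy as np

def pcoa(D, n_components=2):
    """Principal coordinate analysis (classical MDS): embed samples described
    only by a dissimilarity matrix D into Euclidean coordinates."""
    D = np.asarray(D, dtype=float)
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1]              # largest eigenvalues first
    evals, evecs = evals[order], evecs[:, order]
    return evecs[:, :n_components] * np.sqrt(np.maximum(evals[:n_components], 0))
```

    For a Euclidean dissimilarity matrix the embedding reproduces the original pairwise distances exactly.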

  11. Tissue-Engineered Vascular Rings from Human iPSC-Derived Smooth Muscle Cells

    Directory of Open Access Journals (Sweden)

    Biraja C. Dash

    2016-07-01

    Full Text Available There is an urgent need for an efficient approach to obtain a large-scale and renewable source of functional human vascular smooth muscle cells (VSMCs to establish robust, patient-specific tissue model systems for studying the pathogenesis of vascular disease, and for developing novel therapeutic interventions. Here, we have derived a large quantity of highly enriched functional VSMCs from human induced pluripotent stem cells (hiPSC-VSMCs. Furthermore, we have engineered 3D tissue rings from hiPSC-VSMCs using a facile one-step cellular self-assembly approach. The tissue rings are mechanically robust and can be used for vascular tissue engineering and disease modeling of supravalvular aortic stenosis syndrome. Our method may serve as a model system, extendable to study other vascular proliferative diseases for drug screening. Thus, this report describes an exciting platform technology with broad utility for manufacturing cell-based tissues and materials for various biomedical applications.

  12. Robust doubly charged nodal lines and nodal surfaces in centrosymmetric systems

    Science.gov (United States)

    Bzdušek, Tomáš; Sigrist, Manfred

    2017-10-01

    Weyl points in three spatial dimensions are characterized by a Z-valued charge—the Chern number—which makes them stable against a wide range of perturbations. A set of Weyl points can mutually annihilate only if their net charge vanishes, a property we refer to as robustness. While nodal loops are usually not robust in this sense, it has recently been shown using homotopy arguments that in the centrosymmetric extension of the AI symmetry class they nevertheless develop a Z2 charge analogous to the Chern number. Nodal loops carrying a nontrivial value of this Z2 charge are robust, i.e., they can be gapped out only by a pairwise annihilation and not on their own. As this is an additional charge independent of the Berry π-phase flowing along the band degeneracy, such nodal loops are, in fact, doubly charged. In this manuscript, we generalize the homotopy discussion to the centrosymmetric extensions of all Altland-Zirnbauer classes. We develop a tailored mathematical framework dubbed the AZ+I classification and show that in three spatial dimensions such robust and multiply charged nodes appear in four such centrosymmetric extensions: AZ+I classes CI and AI lead to doubly charged nodal lines, while D and BDI support doubly charged nodal surfaces. We remark that no further crystalline symmetries apart from the spatial inversion are necessary for their stability. We provide a description of the corresponding topological charges, and develop simple tight-binding models of various semimetallic and superconducting phases that exhibit these nodes. We also indicate how the concept of robust and multiply charged nodes generalizes to other spatial dimensions.

  13. Skin-inspired hydrogel-elastomer hybrids with robust interfaces and functional microstructures

    Science.gov (United States)

    Yuk, Hyunwoo; Zhang, Teng; Parada, German Alberto; Liu, Xinyue; Zhao, Xuanhe

    2016-06-01

    Inspired by mammalian skins, soft hybrids integrating the merits of elastomers and hydrogels have potential applications in diverse areas including stretchable and bio-integrated electronics, microfluidics, tissue engineering, soft robotics and biomedical devices. However, existing hydrogel-elastomer hybrids have limitations such as weak interfacial bonding, low robustness and difficulties in patterning microstructures. Here, we report a simple yet versatile method to assemble hydrogels and elastomers into hybrids with extremely robust interfaces (interfacial toughness over 1,000 Jm-2) and functional microstructures such as microfluidic channels and electrical circuits. The proposed method is generally applicable to various types of tough hydrogels and diverse commonly used elastomers including polydimethylsiloxane Sylgard 184, polyurethane, latex, VHB and Ecoflex. We further demonstrate applications enabled by the robust and microstructured hydrogel-elastomer hybrids including anti-dehydration hydrogel-elastomer hybrids, stretchable and reactive hydrogel-elastomer microfluidics, and stretchable hydrogel circuit boards patterned on elastomer.

  14. Bio-geographic classification of the Caspian Sea

    Science.gov (United States)

    Fendereski, F.; Vogt, M.; Payne, M. R.; Lachkar, Z.; Gruber, N.; Salmanmahiny, A.; Hosseini, S. A.

    2014-03-01

    Like other inland seas, the Caspian Sea (CS) has been influenced by climate change and anthropogenic disturbance during recent decades, yet the scientific understanding of this water body remains poor. In this study, an eco-geographical classification of the CS based on physical information derived from space and in-situ data is developed and tested against a set of biological observations. We used a two-step classification procedure, consisting of (i) a data reduction with self-organizing maps (SOMs) and (ii) a synthesis of the most relevant features into a reduced number of marine ecoregions using the Hierarchical Agglomerative Clustering (HAC) method. From an initial set of 12 potential physical variables, 6 independent variables were selected for the classification algorithm, i.e., sea surface temperature (SST), bathymetry, sea ice, seasonal variation of sea surface salinity (DSSS), total suspended matter (TSM) and its seasonal variation (DTSM). The classification results reveal a robust separation between the northern and the middle/southern basins as well as a separation of the shallow near-shore waters from those off-shore. The observed patterns in ecoregions can be attributed to differences in climate and geochemical factors such as distance from rivers, water depth and currents. A comparison of the annual and monthly mean Chl a concentrations between the different ecoregions shows significant differences (Kruskal-Wallis rank test). A qualitative evaluation of differences in community composition, based on recorded presence-absence patterns of 27 different species of plankton, fish and benthic invertebrates, also confirms the relevance of the ecoregions as proxies for habitats with common biological characteristics.
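    The two-step procedure (SOM data reduction, then hierarchical agglomerative clustering of the prototypes) can be sketched as follows. The 1-D map, the schedules, and the centroid linkage are illustrative choices, not the study's configuration.

```python
import numpy as np

def train_som(X, n_nodes=4, epochs=30, lr=0.5):
    """Step (i): a tiny 1-D self-organizing map that reduces the observations
    to a few prototype vectors (the study uses a larger 2-D map)."""
    idx = np.linspace(0, len(X) - 1, n_nodes).astype(int)
    W = X[idx].astype(float)                     # deterministic init from data
    for e in range(epochs):
        decay = 1.0 - e / epochs
        sigma = max(decay, 0.5)                  # shrinking neighbourhood width
        for x in X:
            bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))
            h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * decay * h[:, None] * (x - W)
    return W

def agglomerate(W, n_clusters=2):
    """Step (ii): centroid-linkage agglomerative clustering (HAC) of the SOM
    prototypes into the final ecoregions."""
    clusters = [[i] for i in range(len(W))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(W[clusters[a]].mean(0) - W[clusters[b]].mean(0))
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)
    return clusters
```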

  15. A dynamical classification of the cosmic web

    Science.gov (United States)

    Forero-Romero, J. E.; Hoffman, Y.; Gottlöber, S.; Klypin, A.; Yepes, G.

    2009-07-01

    In this paper, we propose a new dynamical classification of the cosmic web. Each point in space is classified as one of four possible web types: voids, sheets, filaments and knots. The classification is based on the evaluation of the deformation tensor (i.e. the Hessian of the gravitational potential) on a grid, counting the number of eigenvalues above a certain threshold, λth, at each grid point; zero, one, two or three such eigenvalues correspond to void, sheet, filament or knot grid points, respectively. The collection of neighbouring grid points (friends of friends) of the same web type constitutes voids, sheets, filaments and knots as extended web objects. A simple dynamical consideration of the emergence of the web suggests that the threshold should not be null, as in previous implementations of the algorithm. A detailed dynamical analysis would have found different threshold values for the collapse of sheets, filaments and knots. Short of such an analysis, a phenomenological approach has been adopted, looking for a single threshold to be determined by analysing numerical simulations. Our cosmic web classification has been applied and tested against a suite of large (dark matter only) cosmological N-body simulations. In particular, the dependence of the volume and mass filling fractions on λth and on the resolution has been calculated for the four web types. We also study the percolation properties of voids and filaments. Our main findings are as follows. (i) Already at λth = 0.1 the resulting web classification reproduces the visual impression of the cosmic web. (ii) For λth between 0.2 and 0.4, the filaments percolate into a net of interconnected filaments. This suggests a reasonable choice for λth as the parameter that defines the cosmic web. (iii) The dynamical nature of the suggested classification provides a robust framework for incorporating environmental information into galaxy formation models, and in particular into semi-analytical models.
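    The grid classification rule can be sketched directly: build the Hessian of the potential by finite differences and count the eigenvalues above λth at each grid point. Grid spacing and threshold below are illustrative, and boundary derivatives are one-sided.

```python
import numpy as np

def classify_web(potential, spacing=1.0, lam_th=0.1):
    """Return, per grid point, the count of Hessian eigenvalues above lam_th:
    0, 1, 2, 3 correspond to void, sheet, filament, knot."""
    grads = np.gradient(potential, spacing)          # first derivatives
    n = potential.ndim
    H = np.empty(potential.shape + (n, n))
    for i, gi in enumerate(grads):                   # second derivatives
        for j, gij in enumerate(np.gradient(gi, spacing)):
            H[..., i, j] = gij
    evals = np.linalg.eigvalsh(H)                    # batched symmetric eigenvalues
    return (evals > lam_th).sum(axis=-1)
```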

  16. Robust gene selection methods using weighting schemes for microarray data analysis.

    Science.gov (United States)

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performance of many gene selection techniques is highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques by applying a simple modification to significance analysis of microarrays (SAM). To demonstrate the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and the sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or the sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of the simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and for classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
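    The SAM statistic that such filters modify is a per-gene relative difference with a small positive constant in the denominator, which damps genes whose standard error is tiny. A baseline sketch (the authors' weighting schemes modify this further; the s0 value is an assumption):

```python
import numpy as np

def sam_d(x, y, s0=0.5):
    """SAM-like relative difference per gene: d = (mean_x - mean_y) / (s + s0),
    for two expression matrices of shape (genes, replicates)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = x.shape[1], y.shape[1]
    diff = x.mean(axis=1) - y.mean(axis=1)
    pooled = ((x.var(axis=1, ddof=1) * (nx - 1) + y.var(axis=1, ddof=1) * (ny - 1))
              / (nx + ny - 2))
    s = np.sqrt(pooled * (1 / nx + 1 / ny))      # per-gene standard error
    return diff / (s + s0)                       # s0 guards against tiny s
```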

  17. Morphological classification of plant cell deaths.

    Science.gov (United States)

    van Doorn, W G; Beers, E P; Dangl, J L; Franklin-Tong, V E; Gallois, P; Hara-Nishimura, I; Jones, A M; Kawai-Yamada, M; Lam, E; Mundy, J; Mur, L A J; Petersen, M; Smertenko, A; Taliansky, M; Van Breusegem, F; Wolpert, T; Woltering, E; Zhivotovsky, B; Bozhkov, P V

    2011-08-01

    Programmed cell death (PCD) is an integral part of plant development and of responses to abiotic stress or pathogens. Although the morphology of plant PCD is, in some cases, well characterised and molecular mechanisms controlling plant PCD are beginning to emerge, there is still confusion about the classification of PCD in plants. Here we suggest a classification based on morphological criteria. According to this classification, the use of the term 'apoptosis' is not justified in plants, but at least two classes of PCD can be distinguished: vacuolar cell death and necrosis. During vacuolar cell death, the cell contents are removed by a combination of an autophagy-like process and the release of hydrolases from collapsed lytic vacuoles. Necrosis is characterised by early rupture of the plasma membrane, shrinkage of the protoplast and an absence of vacuolar cell death features. Vacuolar cell death is common during tissue and organ formation and elimination, whereas necrosis is typically found under abiotic stress. Some examples of plant PCD cannot be ascribed to either major class and are therefore classified as separate modalities. These are the PCD associated with the hypersensitive response to biotrophic pathogens, which can express features of both necrosis and vacuolar cell death; PCD in starchy cereal endosperm; and PCD during self-incompatibility. The present classification is not static, but will be subject to further revision, especially when specific biochemical pathways are better defined.

  18. Precision automation of cell type classification and sub-cellular fluorescence quantification from laser scanning confocal images

    Directory of Open Access Journals (Sweden)

    Hardy Craig Hall

    2016-02-01

    While novel whole-plant phenotyping technologies have been successfully implemented into functional genomics and breeding programs, the potential of automated phenotyping with cellular resolution is largely unexploited. Laser scanning confocal microscopy has the potential to close this gap by providing spatially highly resolved images containing anatomic as well as chemical information on a subcellular basis. However, in the absence of automated methods, the assessment of the spatial patterns and abundance of fluorescent markers with subcellular resolution is still largely qualitative and time-consuming. Recent advances in image acquisition and analysis, coupled with improvements in microprocessor performance, have brought such automated methods within reach, so that information from thousands of cells per image for hundreds of images may be derived in an experimentally convenient time-frame. Here, we present a MATLAB-based analytical pipeline to (1) segment radial plant organs into individual cells, (2) classify cells into cell type categories based upon random forest classification, (3) divide each cell into sub-regions, and (4) quantify fluorescence intensity to a subcellular degree of precision for a separate fluorescence channel. In this research advance, we demonstrate the precision of this analytical process for the relatively complex tissues of Arabidopsis hypocotyls at various stages of development. High speed and robustness make our approach suitable for phenotyping of large collections of stem-like material and other tissue types.
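
    The cell-type classification step (2) can be sketched outside MATLAB as well; here is a minimal scikit-learn analogue using made-up morphological features (the feature set and values are illustrative, not those of the published pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-cell features: [area (um^2), circularity].
rng = np.random.default_rng(1)
n = 200
epidermis = np.column_stack([rng.normal(50, 5, n), rng.normal(0.9, 0.05, n)])
cortex = np.column_stack([rng.normal(120, 10, n), rng.normal(0.5, 0.05, n)])
X = np.vstack([epidermis, cortex])
y = np.array([0] * n + [1] * n)              # 0 = epidermis, 1 = cortex

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_cell = [[55.0, 0.88]]                    # small, round cell
print(clf.predict(new_cell)[0])              # → 0 (epidermis)
```

    In the published pipeline, such a classifier is trained per organ type and then applied to every segmented cell before the sub-cellular fluorescence quantification step.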

  19. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    Science.gov (United States)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
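
    A much-simplified version of the block-wise idea (per-block thresholds expanded into a per-pixel threshold image) can be sketched as follows; the paper's tensor voting, saliency weighting and histogram-based error minimization are replaced here by a crude min/max midpoint rule:

```python
import numpy as np

def local_threshold(img, block=8, min_contrast=0.5):
    """Block-wise thresholding: each block gets its own threshold, which
    is then expanded to a per-pixel threshold image (nearest-neighbour
    here; the paper interpolates smoothly between blocks)."""
    h, w = img.shape
    global_mid = 0.5 * (img.min() + img.max())
    th = np.empty((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            patch = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            if patch.max() - patch.min() < min_contrast:
                th[i, j] = global_mid        # flat block: fall back globally
            else:
                th[i, j] = 0.5 * (patch.min() + patch.max())
    th_img = np.kron(th, np.ones((block, block)))  # per-pixel thresholds
    return img > th_img

# Bright square on a background with a mild brightness gradient
img = np.fromfunction(lambda y, x: 0.2 * x / 32.0, (32, 32))
img[6:22, 6:22] += 1.0
mask = local_threshold(img)
print(int(mask.sum()))                       # → 256 (the 16x16 square)
```

    The local thresholds make the detection robust to the illumination gradient, which a single global threshold would handle poorly on real nucleus images.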

  20. Spiral waves characterization: Implications for an automated cardiodynamic tissue characterization.

    Science.gov (United States)

    Alagoz, Celal; Cohen, Andrew R; Frisch, Daniel R; Tunç, Birkan; Phatharodom, Saran; Guez, Allon

    2018-07-01

    Spiral waves are phenomena observed in cardiac tissue, especially during fibrillatory activity. They are revealed through in vivo and in vitro studies using high-density mapping that requires a special experimental setup, and in-silico spiral wave analysis and classification is performed using membrane potentials from the entire tissue. In this study, we report a characterization approach that identifies spiral wave behaviors using intracardiac electrogram (EGM) readings obtained with commonly used multipolar diagnostic catheters, which perform localized but high-resolution readings. Specifically, the algorithm is designed to distinguish between stationary, meandering, and break-up rotors. The clustering and classification algorithms are tested on simulated data produced using a phenomenological 2D model of cardiac propagation. For the EGM measurements, unipolar and bipolar EGM readings from various locations on the tissue using two catheter types are modeled. The distance between spiral behaviors is assessed using the normalized compression distance (NCD), an information-theoretical distance. NCD is a universal metric in the sense that it is based solely on the compressibility of the dataset and requires no feature extraction. We also introduce the normalized FFT distance (NFFTD), where compressibility is replaced with an FFT parameter. Overall, outstanding clustering performance was achieved across varying EGM reading configurations, and NCD was more effective than NFFTD in distinguishing behaviors. We also demonstrate that distinct spiral activity identification on a behaviorally heterogeneous tissue is possible. This report provides a theoretical validation of clustering and classification approaches that map EGM signals to assessments of spiral wave behavior, and hence offers a potential mapping and analysis framework for cardiac tissue wavefront propagation patterns. Copyright © 2018 Elsevier B.V. All rights reserved.
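
    NCD itself is compact enough to sketch with a stock compressor (a generic illustration using zlib, not the authors' exact tooling):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(s) is the compressed length of s. No feature extraction:
    the metric depends only on how well the signals compress together."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"stationary rotor " * 50
b_near = b"stationary rotor " * 49 + b"meandering rotor "
c_far = bytes(range(256)) * 4
print(ncd(a, b_near) < ncd(a, c_far))        # → True: similar signals are closer
```

    With such a distance in hand, standard clustering (e.g. hierarchical clustering on the pairwise NCD matrix) groups EGM traces by rotor behavior.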

  1. Low-cost real-time automatic wheel classification system

    Science.gov (United States)

    Shabestari, Behrouz N.; Miller, John W. V.; Wedding, Victoria

    1992-11-01

    This paper describes the design and implementation of a low-cost machine vision system for identifying various types of automotive wheels which are manufactured in several styles and sizes. In this application, a variety of wheels travel on a conveyor in random order through a number of processing steps. One of these processes requires identification of the wheel type, previously performed manually by an operator. A vision system was designed to provide the required identification. The system consisted of an annular illumination source, a CCD TV camera, a frame grabber, and a 386-compatible computer. Statistical pattern recognition techniques were used to provide robust classification as well as a simple means for adding new wheel designs to the system. Maintenance of the system can be performed by plant personnel with minimal training. The basic steps for identification include image acquisition, segmentation of the regions of interest, extraction of selected features, and classification. The vision system has been installed in a plant and has proven to be extremely effective. It correctly identifies wheels at rates of up to 30 wheels per minute, regardless of rotational orientation in the camera's field of view. Correct classification can even be achieved if a portion of the wheel is blocked off from the camera. Significant cost savings have been achieved by a reduction in scrap associated with incorrect manual classification as well as a reduction of labor in a tedious task.
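
    Although the original 386-era implementation is not shown, the rotation-invariant flavor of the approach can be illustrated with a radial-profile feature and a nearest-prototype match (the features and wheel styles below are hypothetical, not the system's actual ones):

```python
import numpy as np

def radial_profile(img, nbins=8):
    """Rotation-invariant feature: mean intensity in concentric rings
    about the image centre, so wheel orientation does not matter."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
    edges = np.linspace(0.0, r.max() + 1e-9, nbins + 1)
    return np.array([img[(r >= edges[i]) & (r < edges[i + 1])].mean()
                     for i in range(nbins)])

def classify(img, prototypes):
    """Nearest-prototype match; a new wheel style is added by simply
    storing one more prototype profile."""
    f = radial_profile(img)
    return min(prototypes, key=lambda k: np.linalg.norm(f - prototypes[k]))

yy, xx = np.mgrid[0:32, 0:32]
r = np.hypot(yy - 15.5, xx - 15.5)
style_a = (r < 8).astype(float)                 # solid hub design
style_b = ((r > 10) & (r < 14)).astype(float)   # open rim design
prototypes = {"A": radial_profile(style_a), "B": radial_profile(style_b)}
print(classify(np.rot90(style_a), prototypes))  # → A (rotation does not matter)
```

    Because the feature depends only on distance from the centre, a rotated wheel produces the same profile, matching the system's orientation-independent behavior.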

  2. Improved RMR Rock Mass Classification Using Artificial Intelligence Algorithms

    Science.gov (United States)

    Gholami, Raoof; Rasouli, Vamegh; Alimoradi, Andisheh

    2013-09-01

    Rock mass classification systems such as rock mass rating (RMR) are very reliable means to provide information about the quality of rocks surrounding a structure as well as to propose suitable support systems for unstable regions. Many correlations have been proposed to relate measured quantities such as wave velocity to rock mass classification systems to limit the associated time and cost of conducting the sampling and mechanical tests conventionally used to calculate RMR values. However, these empirical correlations have been found to be unreliable, as they usually overestimate or underestimate the RMR value. The aim of this paper is to compare the results of RMR classification obtained from the use of empirical correlations versus machine-learning methodologies based on artificial intelligence algorithms. The proposed methods were verified based on two case studies located in northern Iran. Relevance vector regression (RVR) and support vector regression (SVR), as two robust machine-learning methodologies, were used to predict the RMR for tunnel host rocks. RMR values already obtained by sampling and site investigation at one tunnel were taken into account as the output of the artificial networks during training and testing phases. The results reveal that use of empirical correlations overestimates the predicted RMR values. RVR and SVR, however, showed more reliable results, and are therefore suggested for use in RMR classification for design purposes of rock structures.
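
    The regression setup is conventional; with scikit-learn it reduces to a few lines (the velocity/RMR pairs below are invented for illustration, not the Iranian tunnel data, and RVR is replaced by the closely related SVR available in scikit-learn):

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical site data: P-wave velocity (km/s) vs. measured RMR.
vp = np.array([2.1, 2.8, 3.4, 3.9, 4.5, 5.0, 5.6, 6.1]).reshape(-1, 1)
rmr = np.array([28.0, 39.0, 48.0, 55.0, 63.0, 70.0, 78.0, 85.0])

# Train on sampled stations, then predict RMR at an unsampled location.
model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(vp, rmr)
pred = model.predict([[4.2]])[0]             # velocity between two known sites
print(round(pred))
```

    In practice the inputs would be the measured quantities available along the tunnel, and the model would be validated against RMR values obtained from sampling and site investigation, as in the paper.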

  3. Classifying Classifications

    DEFF Research Database (Denmark)

    Debus, Michael S.

    2017-01-01

    This paper critically analyzes seventeen game classifications. The classifications were chosen on the basis of diversity, ranging from pre-digital classifications (e.g. Murray 1952), through game studies classifications (e.g. Elverdam & Aarseth 2007), to classifications of drinking games (e.g. LaBrie et al. 2013). The analysis aims at three goals: the classifications' internal consistency, the abstraction of classification criteria, and the identification of differences in classification across fields and/or time. The abstraction of classification criteria, especially, can be used in future endeavors into the topic of game classifications.

  4. Robust multivariate analysis

    CERN Document Server

    J Olive, David

    2017-01-01

    This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given.  The text develops among the first practical robust regression and robust multivariate location and dispersion estimators backed by theory.   The robust techniques  are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis.  A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...

  5. Simultaneous feature selection and classification via Minimax Probability Machine

    Directory of Open Access Journals (Sweden)

    Liming Yang

    2010-12-01

    This paper presents a novel method for simultaneous feature selection and classification by incorporating a robust L1-norm into the objective function of the Minimax Probability Machine (MPM). A fractional programming framework is derived using a bound on the misclassification error involving the mean and covariance of the data. The resulting problems are solved by the quadratic interpolation method. Experiments show that our methods select fewer features while improving generalization compared to MPM, which illustrates the effectiveness of the proposed algorithms.

  6. Inguinal hernia recurrence: Classification and approach

    Directory of Open Access Journals (Sweden)

    Campanelli Giampiero

    2006-01-01

    The authors reviewed the records of 2,468 operations for groin hernia in 2,350 patients, including 277 recurrent hernias, updated to January 2005. The data obtained - evaluating technique, results and complications - were used to propose a simple anatomo-clinical classification into three types, which can be used to plan the surgical strategy:
    Type R1: first recurrence, 'high', oblique external, reducible hernia with a small (<2 cm) defect in a non-obese patient, after pure tissue or mesh repair.
    Type R2: first recurrence, 'low', direct, reducible hernia with a small (<2 cm) defect in a non-obese patient, after pure tissue or mesh repair.
    Type R3: all other recurrences, after pure tissue or mesh repair - including femoral recurrences; recurrent groin hernias with a big defect (inguinal eventration); multirecurrent hernias; nonreducible hernias; hernias linked with a contralateral primitive or recurrent hernia; and situations compromised by aggravating factors (for example obesity) or otherwise not easily included in R1 or R2.

  7. Combining low level features and visual attributes for VHR remote sensing image classification

    Science.gov (United States)

    Zhao, Fumin; Sun, Hao; Liu, Shuai; Zhou, Shilin

    2015-12-01

    Semantic classification of very high resolution (VHR) remote sensing images is of great importance for land use and land cover investigation. A large number of approaches exploiting different kinds of low-level features have been proposed in the literature, and a systematic assessment of these features for VHR remote sensing image classification is needed. In this work, we first perform an extensive evaluation of eight features - HOG, dense SIFT, SSIM, GIST, Geo color, LBP, Texton and Tiny images - for the classification of three publicly available datasets. Second, we propose to transfer ground-level scene attributes to remote sensing images. Third, we combine low-level features and mid-level visual attributes to further improve classification performance. Experimental results demonstrate that (i) dense SIFT and HOG features are more robust than the other features for VHR scene image description; (ii) visual attributes compete with a combination of low-level features; and (iii) combining multiple features achieves the best performance under different settings.

  8. Classification of multiple sclerosis lesions using adaptive dictionary learning.

    Science.gov (United States)

    Deshpande, Hrishikesh; Maurel, Pierre; Barillot, Christian

    2015-12-01

    This paper presents a sparse representation and adaptive dictionary learning based method for automated classification of multiple sclerosis (MS) lesions in magnetic resonance (MR) images. Manual delineation of MS lesions is a time-consuming task, requiring neuroradiology experts to analyze huge volumes of MR data. This, in addition to the high intra- and inter-observer variability, necessitates automated MS lesion classification methods. Among the many image representation models and classification methods that can be used for this purpose, we investigate sparse modeling. In recent years, sparse representation has evolved as a tool for modeling data using a few basis elements of an over-complete dictionary, and has found applications in many image processing tasks including classification. We propose a supervised classification approach that learns dictionaries specific to the lesions and to the individual healthy brain tissues: white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). The size of the dictionary learned for each class plays a major role in data representation, and is an even more crucial element in competitive classification. Our approach adapts the size of the dictionary for each class, depending on the complexity of the underlying data. The algorithm is validated using 52 multi-sequence MR images acquired from 13 MS patients. The results demonstrate the effectiveness of our approach in MS lesion classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
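
    The classify-by-reconstruction-error idea behind per-class dictionaries can be sketched compactly; below, the adaptive learned dictionaries are replaced by fixed per-class SVD bases (a simplification for illustration only):

```python
import numpy as np

def fit_basis(X, k):
    """Learn a k-atom linear basis for one class via SVD -- a simplified
    stand-in for the adaptive per-class dictionaries of the paper."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k]                              # (k, n_features)

def recon_error(x, basis, mean):
    """Error when reconstructing x from one class's basis."""
    coeffs = basis @ (x - mean)
    return np.linalg.norm((x - mean) - basis.T @ coeffs)

# Two synthetic "tissue" classes living near different 1-D subspaces
rng = np.random.default_rng(0)
dir_a, dir_b = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
A = rng.normal(size=(100, 1)) * dir_a + rng.normal(0, 0.05, (100, 3))
B = rng.normal(size=(100, 1)) * dir_b + rng.normal(0, 0.05, (100, 3))

bases = {0: (fit_basis(A, 1), A.mean(axis=0)),
         1: (fit_basis(B, 1), B.mean(axis=0))}

def classify(x):
    # Assign x to the class whose basis reconstructs it best
    return min(bases, key=lambda c: recon_error(x, *bases[c]))

print(classify(np.array([2.0, 0.1, 0.0])))     # → 0
```

    The paper goes further by learning over-complete dictionaries with sparse coding and adapting each dictionary's size to the class complexity, but the decision rule is of this reconstruction-error form.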

  9. Robustness of Structures

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard

    2011-01-01

    In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association for Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop: the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme, and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled 'COST TU0601: Robustness of Structures' was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines for enhancing the robustness of structures.

  10. UAS Detection Classification and Neutralization: Market Survey 2015

    Energy Technology Data Exchange (ETDEWEB)

    Birch, Gabriel Carisle [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Griffin, John Clark [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Erdman, Matthew Kelly [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    The purpose of this document is to briefly frame the challenges of detecting low, slow, and small (LSS) unmanned aerial systems (UAS). The conclusion drawn from internal discussions and external reports is the following: detection of LSS UAS is a challenging problem that cannot be achieved with a single detection modality for all potential targets. Classification of LSS UAS, especially classification in the presence of background clutter (e.g., urban environment) or other non-threatening targets (e.g., birds), is under-explored. Though information on available technologies is sparse, many of the existing options for UAS detection appear to be in their infancy (when compared to more established ground-based air defense systems for larger and/or faster threats). Companies currently providing or developing technologies to combat the UAS safety and security problem are certainly worth investigating; however, no company has provided the statistical evidence necessary to support robust detection, identification, and/or neutralization of LSS UAS targets. The results of a market survey are included that highlights potential commercial entities that could contribute some technology that assists in the detection, classification, and neutralization of a LSS UAS. This survey found no clear and obvious commercial solution, though recommendations are given for further investigation of several potential systems.

  11. Multi-q pattern classification of polarization curves

    Science.gov (United States)

    Fabbri, Ricardo; Bastos, Ivan N.; Neto, Francisco D. Moura; Lopes, Francisco J. P.; Gonçalves, Wesley N.; Bruno, Odemir M.

    2014-02-01

    Several experimental measurements are expressed in the form of one-dimensional profiles, for which there is a scarcity of methodologies able to classify the pertinence of a given result to a specific group. The polarization curves that evaluate the corrosion kinetics of electrodes in corrosive media are applications where the behavior is chiefly analyzed from profiles. Polarization curves are indeed a classic method to determine the global kinetics of metallic electrodes, but the strong nonlinearity from different metals and alloys can overlap, and the discrimination becomes a challenging problem. Moreover, even finding a typical curve from replicated tests requires subjective judgment. In this paper, we used the so-called multi-q approach, based on the Tsallis statistics, in a classification engine to separate the multiple polarization curve profiles of two stainless steels. We collected 48 experimental polarization curves in an aqueous chloride medium of two stainless steel types with different resistance against localized corrosion. Multi-q pattern analysis was then carried out on a wide potential range, from the cathodic up to the anodic region. Excellent classification was obtained, with success rates of 90%, 80%, and 83% for the low (cathodic), high (anodic), and combined potential ranges, respectively, using only 2% of the original profile data. These results show the potential of the proposed approach towards efficient, robust, systematic and automatic classification of highly nonlinear profile curves.
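
    The multi-q idea — describing a profile by Tsallis statistics evaluated at several q values — can be sketched as follows (an illustrative descriptor, not the authors' exact formulation):

```python
import numpy as np

def multiq_descriptor(curve, qs=(0.5, 1.5, 2.0, 3.0)):
    """Describe a 1-D profile by its Tsallis entropy
    S_q = (1 - sum_i p_i^q) / (q - 1) for several q values,
    treating the normalized profile as a probability distribution."""
    p = np.abs(curve) / np.abs(curve).sum()
    return np.array([(1.0 - np.power(p, q).sum()) / (q - 1.0) for q in qs])

# Flat vs. sharply peaked profiles yield clearly different descriptors:
uniform = multiq_descriptor(np.ones(10))
peaked = multiq_descriptor(np.array([1.0] + [1e-9] * 9))
print(uniform[2] > peaked[2])                # → True: flat profiles carry more entropy
```

    Descriptor vectors computed from replicated polarization curves can then be fed to any standard classifier to separate the two steels.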

  12. Deep learning decision fusion for the classification of urban remote sensing data

    Science.gov (United States)

    Abdi, Ghasem; Samadzadegan, Farhad; Reinartz, Peter

    2018-01-01

    Multisensor data fusion is one of the most common and popular remote sensing data classification topics, as it provides a robust and complete description of the objects of interest. Furthermore, deep feature extraction has recently attracted significant interest and has become a hot research topic in the geoscience and remote sensing research community. A deep learning decision fusion approach is presented to perform multisensor urban remote sensing data classification. After deep features are extracted by utilizing joint spectral-spatial information, a soft-decision classifier is applied to train high-level feature representations and to fine-tune the deep learning framework. Next, decision-level fusion classifies objects of interest by the joint use of sensors. Finally, a context-aware object-based postprocessing is used to enhance the classification results. A series of comparative experiments are conducted on the widely used dataset of the 2014 IEEE GRSS data fusion contest. The obtained results illustrate the considerable advantages of the proposed deep learning decision fusion over traditional classifiers.

  13. How Transferable are CNN-based Features for Age and Gender Classification?

    OpenAIRE

    Özbulak, Gökhan; Aytar, Yusuf; Ekenel, Hazım Kemal

    2016-01-01

    Age and gender are complementary soft biometric traits for face recognition. Successful estimation of age and gender from facial images taken under real-world conditions can contribute improving the identification results in the wild. In this study, in order to achieve robust age and gender classification in the wild, we have benefited from Deep Convolutional Neural Networks based representation. We have explored transferability of existing deep convolutional neural network (CNN) models for a...

  14. A Sieving ANN for Emotion-Based Movie Clip Classification

    Science.gov (United States)

    Watanapa, Saowaluk C.; Thipakorn, Bundit; Charoenkitkarn, Nipon

    Effective classification and analysis of semantic contents are very important for content-based indexing and retrieval of video databases. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded in artistic film theory. A unique sieving-structured neural network is proposed as the classifying model due to its robustness. The performance of the proposed model is tested with 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result of a 97.8% correct classification rate, measured against collected human judgments, indicates the great potential of using abstract-level semantic features as an engineered tool for the application of video-content retrieval/indexing.

  15. Cell-based product classification procedure: What can be done differently to improve decisions on borderline products?

    Science.gov (United States)

    Izeta, Ander; Herrera, Concha; Mata, Rosario; Astori, Giuseppe; Giordano, Rosaria; Hernández, Carmen; Leyva, Laura; Arias, Salvador; Oyonarte, Salvador; Carmona, Gloria; Cuende, Natividad

    2016-07-01

    In June 2015, European Medicines Agency/Committee for Advanced Therapies (CAT) released the new version of the reflection paper on classification of advanced therapy medicinal products (ATMPs) established to address questions of borderline cases in which classification of a product based on genes, cells or tissues is unclear. The paper shows CAT's understanding of substantial manipulation and essential function(s) criteria that define the legal scope of cell-based medicinal products. This article aims to define the authors' viewpoint on the reflection paper. ATMP classification has intrinsic weaknesses derived from the lack of clarity of the evolving concepts of substantial manipulation and essential function(s) as stated in the EU Regulation, leading to the risk of differing interpretations and misclassification. This might result in the broadening of ATMP scope at the expense of other products such as cell/tissue transplants and blood products, or even putting some present and future clinical practice at risk of being classified as ATMP. Because of the major organizational, economic and regulatory implications of product classification, we advocate for increased interaction between CAT and competent authorities (CAs) for medicines, blood and blood components and tissues and cells or for the creation of working groups including representatives of all parties as recently suggested by several CAs. Copyright © 2016 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  16. Automatic breast tissue density estimation scheme in digital mammography images

    Science.gov (United States)

    Menechelli, Renan C.; Pacheco, Ana Luisa V.; Schiabel, Homero

    2017-03-01

    Cases of breast cancer have increased substantially each year. However, mammographic interpretation is subjective and prone to error, which may affect the final diagnosis; high breast tissue density is an important factor related to such failures. Thus, among their many functions, some CADx (computer-aided diagnosis) schemes classify breasts according to the predominant density. To aid in such a procedure, this work describes automated software for classification and statistical reporting of the percentage change in breast tissue density, through analysis of sub-regions (ROIs) of the whole mammography image. Once the breast is segmented, the image is divided into regions from which texture features are extracted, and an MLP artificial neural network is used to categorize the ROIs. Experienced radiologists previously determined the density classification of the ROIs, which served as the reference for evaluating the software. In tests, its average accuracy was 88.7% for ROI classification, and 83.25% for classifying whole-breast density into the four BI-RADS density classes, over a set of 400 images. Furthermore, when considering only a simplified two-class division (high and low density), the classifier accuracy reached 93.5%, with AUC = 0.95.

  17. Bioprinting Using Mechanically Robust Core-Shell Cell-Laden Hydrogel Strands.

    Science.gov (United States)

    Mistry, Pritesh; Aied, Ahmed; Alexander, Morgan; Shakesheff, Kevin; Bennett, Andrew; Yang, Jing

    2017-06-01

    The strand material in extrusion-based bioprinting determines the microenvironments of the embedded cells and the initial mechanical properties of the constructs. One unmet challenge is the combination of optimal biological and mechanical properties in bioprinted constructs. Here, a novel bioprinting method that utilizes core-shell cell-laden strands with a mechanically robust shell and an extracellular matrix-like core has been developed. Cells encapsulated in the strands demonstrate high cell viability and tissue-like functions during cultivation. This process of bioprinting using core-shell strands with optimal biochemical and biomechanical properties represents a new strategy for fabricating functional human tissues and organs. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Sensitivity and Specificity of Cardiac Tissue Discrimination Using Fiber-Optics Confocal Microscopy.

    Science.gov (United States)

    Huang, Chao; Sachse, Frank B; Hitchcock, Robert W; Kaza, Aditya K

    2016-01-01

    Disturbances of the cardiac conduction system constitute a major risk after surgical repair of complex cases of congenital heart disease. Intraoperative identification of the conduction system may reduce the incidence of these disturbances. We previously developed an approach to identify cardiac tissue types using fiber-optics confocal microscopy and extracellular fluorophores. Here, we applied this approach to investigate the sensitivity and specificity of human and automated classification in discriminating images of atrial working myocardium and specialized tissue of the conduction system. Two-dimensional image sequences from atrial working myocardium and nodal tissue of isolated perfused rodent hearts were acquired using a fiber-optics confocal microscope (Leica FCM1000). We compared two methods for local application of extracellular fluorophores: topical via pipette and with a dye carrier. Eight blinded examiners evaluated 162 randomly selected images of atrial working myocardium (n = 81) and nodal tissue (n = 81). In addition, we evaluated the images using automated classification. Blinded examiners achieved a sensitivity and specificity of 99.2 ± 0.3% and 98.0 ± 0.7%, respectively, with the dye carrier method of dye application. Sensitivity and specificity were similar for dye application via a pipette (99.2 ± 0.3% and 94.0 ± 2.4%, respectively). Sensitivity and specificity for automated methods of tissue discrimination were similarly high. Human and automated classification achieved high sensitivity and specificity in discriminating atrial working myocardium and nodal tissue. We suggest that our findings facilitate clinical translation of fiber-optics confocal microscopy as an intraoperative imaging modality to reduce the incidence of conduction disturbances during surgical correction of congenital heart disease.

  19. Robust boosting via convex optimization

    Science.gov (United States)

    Rätsch, Gunnar

    2001-12-01

    In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues: (1) The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution. (2) How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms. (3) How to make boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness. (4) How to adapt boosting to regression problems.
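
    The boosting setting described above can be made concrete with the classic AdaBoost recipe on threshold stumps; this is a generic textbook sketch, not Rätsch's regularized soft-margin variant:

```python
import math

def train_adaboost(xs, ys, rounds=5):
    """AdaBoost with decision stumps on 1-D inputs; labels in {-1, +1}."""
    n = len(xs)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        best = None
        for thr in sorted(set(xs)):            # candidate stump thresholds
            for pol in (1, -1):
                preds = [pol if x >= thr else -pol for x in xs]
                err = sum(wi for wi, p, yi in zip(w, preds, ys) if p != yi)
                if best is None or err < best[0]:
                    best = (err, thr, pol, preds)
        err, thr, pol, preds = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)  # guard the log
        alpha = 0.5 * math.log((1.0 - err) / err)
        model.append((alpha, thr, pol))
        # Reweight: misclassified samples gain weight for the next round.
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, ys, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def predict(model, x):
    score = sum(a * (p if x >= t else -p) for a, t, p in model)
    return 1 if score >= 0 else -1

xs = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys)
print([predict(model, x) for x in xs])  # [-1, -1, -1, 1, 1, 1]
```

The noise sensitivity discussed in the abstract comes from the exponential reweighting: mislabeled points accumulate weight, which the soft-margin regularization is designed to damp.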

  20. Artificial neural networks for classification in metabolomic studies of whole cells using 1H nuclear magnetic resonance.

    LENUS (Irish Health Repository)

    Brougham, D F

    2011-01-01

    We report the successful classification, by artificial neural networks (ANNs), of (1)H NMR spectroscopic data recorded on whole-cell culture samples of four different lung carcinoma cell lines, which display different drug resistance patterns. The robustness of the approach was demonstrated by its ability to classify the cell line correctly in 100% of cases, despite the demonstrated presence of operator-induced sources of variation, and irrespective of which spectra are used for training and for validation. The study demonstrates the potential of ANN for lung carcinoma classification in realistic situations.

  1. A Novel Classification Technique of Landsat-8 OLI Image-Based Data Visualization: The Application of Andrews’ Plots and Fuzzy Evidential Reasoning

    Directory of Open Access Journals (Sweden)

    Sornkitja Boonprong

    2017-04-01

    Full Text Available Andrews first proposed an equation to visualize the structures within data in 1972. Since then, this equation has been used for data transformation and visualization in a wide variety of fields. However, it has yet to be applied to satellite image data. The effect of unwanted, or impure, pixels occurring in these data varies with their distribution in the image; the effect is greater if impurity pixels are included in a classifier’s training set. Andrews’ curves enable the interpreter to select outlier or impurity data that can be grouped into a new category for classification. This study overcomes the above-mentioned problem and illustrates the novelty of applying Andrews’ plots to satellite image data, and proposes a robust method for classifying the plots that combines Dempster-Shafer theory with fuzzy set theory. In addition, we present an example, obtained from real satellite images, to demonstrate the application of the proposed classification method. The accuracy and robustness of the proposed method are investigated for different training set sizes and crop types, and are compared with the results of two traditional classification methods. We find that outlier data are easily eliminated by examining Andrews’ curves and that the proposed method significantly outperforms traditional methods when considering the classification accuracy.
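
    Andrews' 1972 construction maps a feature vector x to the curve f_x(t) = x1/sqrt(2) + x2 sin t + x3 cos t + x4 sin 2t + ... over t in [-pi, pi]; similar observations trace nearby curves, while outlier pixels separate visibly. A small sketch (the pixel spectra are invented):

```python
import math

def andrews_curve(x, t):
    """Andrews' (1972) projection of a feature vector x onto the basis
    (1/sqrt(2), sin t, cos t, sin 2t, cos 2t, ...), evaluated at angle t."""
    total = x[0] / math.sqrt(2.0)
    for i, xi in enumerate(x[1:], start=1):
        k = (i + 1) // 2
        total += xi * (math.sin(k * t) if i % 2 == 1 else math.cos(k * t))
    return total

# Two pixels with similar spectra trace nearby curves; an outlier separates.
pixel_a = [0.4, 0.5, 0.3, 0.2]
pixel_b = [0.41, 0.52, 0.29, 0.21]
outlier = [0.9, 0.1, 0.8, 0.0]
ts = [i * math.pi / 50 - math.pi for i in range(101)]
gap_ab = max(abs(andrews_curve(pixel_a, t) - andrews_curve(pixel_b, t)) for t in ts)
gap_ao = max(abs(andrews_curve(pixel_a, t) - andrews_curve(outlier, t)) for t in ts)
print(gap_ab < gap_ao)  # True: the outlier's curve stands apart
```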

  2. Improved Classification by Non Iterative and Ensemble Classifiers in Motor Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    PANIGRAHY, P. S.

    2018-02-01

    Full Text Available Data-driven multi-class fault diagnosis of induction motors using MCSA at steady-state condition is a complex pattern classification problem. This investigation has exploited the built-in ensemble process of non-iterative classifiers to resolve the most challenging issues in this area, including bearing and stator fault detection. Non-iterative techniques exhibit an average 15% increase in fault classification accuracy over their iterative counterparts. In particular, RF has shown outstanding performance even with few training samples and a noisy feature space because of its distributive feature model. The robustness of the results, backed by experimental verification, shows that non-iterative classifiers such as RF are the optimum choice in the area of automatic fault diagnosis of induction motors.
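
    The "built-in ensemble process" of RF rests on bootstrap aggregation. A toy sketch with depth-one stumps on a single invented 1-D fault feature (a real random forest grows full trees over random feature subsets):

```python
import random

def train_stump(data):
    """Best single-threshold split on 1-D inputs; data = [(x, label)]."""
    best = None
    for thr, _ in data:
        for pol in (1, -1):
            err = sum(1 for x, y in data if (pol if x >= thr else -pol) != y)
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best[1], best[2]

def bagged_ensemble(data, n_trees=25, seed=0):
    """Train each stump on a bootstrap resample of the data."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        boot = [rng.choice(data) for _ in data]
        forest.append(train_stump(boot))
    return forest

def vote(forest, x):
    s = sum(pol if x >= thr else -pol for thr, pol in forest)
    return 1 if s >= 0 else -1

# Class +1 above 0.5; the point at 0.3 carries label noise.
data = [(0.05, -1), (0.2, -1), (0.3, 1), (0.4, -1),
        (0.6, 1), (0.7, 1), (0.85, 1), (0.95, 1)]
forest = bagged_ensemble(data)
print(vote(forest, 0.1), vote(forest, 0.9))  # -1 1 despite the noisy label
```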

  3. Automated classification of cell morphology by coherence-controlled holographic microscopy

    Science.gov (United States)

    Strbkova, Lenka; Zicha, Daniel; Vesely, Pavel; Chmelik, Radim

    2017-08-01

    In the last few years, classification of cells by machine learning has become frequently used in biology. However, most of the approaches are based on morphometric (MO) features, which are not quantitative in terms of cell mass. This may result in poor classification accuracy. Here, we study the potential contribution of coherence-controlled holographic microscopy, which enables quantitative phase imaging, to the classification of cell morphologies. We compare our approach with the commonly used method based on MO features. We tested both classification approaches in an experiment with nutritionally deprived cancer tissue cells, while employing several supervised machine learning algorithms. Most of the classifiers provided higher performance when quantitative phase features were employed. Based on the results, it can be concluded that the quantitative phase features played an important role in improving the performance of the classification. The methodology could be a valuable aid in refining the monitoring of live cells in an automated fashion. We believe that coherence-controlled holographic microscopy, as a tool for quantitative phase imaging, offers all the preconditions for accurate automated analysis of live cell behavior, while enabling noninvasive label-free imaging with sufficient contrast and high spatiotemporal phase sensitivity.

  4. Multiscale Region-Level VHR Image Change Detection via Sparse Change Descriptor and Robust Discriminative Dictionary Learning

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2015-01-01

    Full Text Available Very high resolution (VHR) image change detection is challenging due to the low discriminative ability of change features and the difficulty of change decisions in utilizing multilevel contextual information. Most change feature extraction techniques put emphasis on the change degree description (i.e., to what degree the changes have happened), while they ignore the change pattern description (i.e., how the changes occurred), which is of equal importance in characterizing the change signatures. Moreover, the simultaneous consideration of classification robust to registration noise and multiscale region-consistent fusion is often neglected in change decision. To overcome such drawbacks, in this paper, a novel VHR image change detection method is proposed based on a sparse change descriptor and robust discriminative dictionary learning. The sparse change descriptor combines the change degree component and the change pattern component, which are encoded by the sparse representation error and the morphological profile feature, respectively. Robust change decision is conducted by multiscale region-consistent fusion, which is implemented by superpixel-level cosparse representation with a robust discriminative dictionary and a conditional random field model. Experimental results confirm the effectiveness of the proposed change detection technique.

  5. Comparisons and Selections of Features and Classifiers for Short Text Classification

    Science.gov (United States)

    Wang, Ye; Zhou, Zhi; Jin, Shan; Liu, Debin; Lu, Mi

    2017-10-01

    Short text is considerably different from traditional long text documents due to its shortness and conciseness, which somewhat hinders the application of conventional machine learning and data mining algorithms to short text classification. Following traditional artificial intelligence methods, we divide short text classification into three steps, namely preprocessing, feature selection and classifier comparison. In this paper, we illustrate step-by-step how we approached our goals. Specifically, for feature selection, we compared the performance and robustness of four methods: one-hot encoding, tf-idf weighting, word2vec and paragraph2vec; for the classification part, we deliberately chose and compared Naive Bayes, Logistic Regression, Support Vector Machine, K-nearest Neighbor and Decision Tree as our classifiers. We then compared and analysed the classifiers both against each other and across the different feature selections. Regarding the datasets, we crawled more than 400,000 short text files from the Shanghai and Shenzhen Stock Exchanges and manually labeled them into two classes, big and small. There are eight labels in the big class, and 59 labels in the small class.
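
    Of the compared feature schemes, tf-idf weighting is the simplest to state: term frequency scaled by log inverse document frequency, so corpus-wide terms get zero weight. A minimal sketch (toy documents, not the stock-exchange corpus):

```python
import math

def tfidf(docs):
    """Term-frequency / inverse-document-frequency weighting for a list of
    tokenized documents; returns one {term: weight} dict per document."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    out = []
    for doc in docs:
        weights = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            idf = math.log(n / df[term])
            weights[term] = tf * idf
        out.append(weights)
    return out

docs = [["the", "share", "issue"],
        ["the", "dividend", "announcement"],
        ["the", "share", "buyback"]]
w = tfidf(docs)
print(w[0]["the"])               # 0.0: appears in every document
print(round(w[0]["share"], 3))   # 0.135: discriminative term keeps weight
```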

  6. Robust representation and recognition of facial emotions using extreme sparse learning.

    Science.gov (United States)

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

    Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.

  7. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    International Nuclear Information System (INIS)

    McGowan, S E; Albertini, F; Lomax, A J; Thomas, S J

    2015-01-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining some metrics used to define protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a more individualized treatment. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to determine plans that, although delivering a dosimetrically adequate dose distribution, have resulted in sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties. (paper)

  8. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    Science.gov (United States)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining some metrics used to define protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a more individualized treatment. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to determine plans that, although delivering a dosimetrically adequate dose distribution, have resulted in sub-optimal robustness to these uncertainties. For these cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.

  9. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    Science.gov (United States)

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062
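
    Coplanarity classification via principal components reduces to the eigenvalues of the local covariance: points are coplanar when the smallest eigenvalue is near zero relative to the total variance. A sketch with classical (non-robust) PCA on synthetic points; the paper's robust procedure replaces the covariance estimate to resist outliers:

```python
import numpy as np

def coplanarity_score(points):
    """Fraction of variance off the best-fit plane: the smallest eigenvalue
    of the covariance matrix over the total variance. Near 0 => coplanar."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals = np.linalg.eigvalsh(cov)   # ascending order
    return eigvals[0] / eigvals.sum()

rng = np.random.default_rng(1)
# A noisy planar patch (z ~ 0) versus an isotropic blob.
plane = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.01, 200)]
blob = rng.normal(0, 1, (200, 3))
print(coplanarity_score(plane) < 0.001)  # True
print(coplanarity_score(blob) > 0.1)     # True
```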

  10. Classification of breast cancer histology images using Convolutional Neural Networks.

    Directory of Open Access Journals (Sweden)

    Teresa Araújo

    Full Text Available Breast cancer is one of the main causes of cancer death worldwide. The diagnosis of biopsy tissue with hematoxylin and eosin stained images is non-trivial, and specialists often disagree on the final diagnosis. Computer-aided diagnosis systems contribute to reducing the cost and increasing the efficiency of this process. Conventional classification approaches rely on feature extraction methods designed for a specific problem based on field knowledge. To overcome the many difficulties of feature-based approaches, deep learning methods are becoming important alternatives. A method for the classification of hematoxylin and eosin stained breast biopsy images using Convolutional Neural Networks (CNNs) is proposed. Images are classified into four classes (normal tissue, benign lesion, in situ carcinoma and invasive carcinoma) and into two classes (carcinoma and non-carcinoma). The architecture of the network is designed to retrieve information at different scales, including both nuclei and overall tissue organization. This design allows the extension of the proposed system to whole-slide histology images. The features extracted by the CNN are also used for training a Support Vector Machine classifier. Accuracies of 77.8% for the four-class problem and 83.3% for carcinoma/non-carcinoma are achieved. The sensitivity of our method for cancer cases is 95.6%.

  11. A simplified classification system for partially edentulous spaces

    Directory of Open Access Journals (Sweden)

    Bhandari Aruna J, Bhandari Akshay J

    2014-04-01

    Full Text Available Background: There is no single universally employed classification system that will specify the exact edentulous situation. Several classification systems exist to group the situations and avoid confusion. There are classifications based on edentulous areas, finished restored prostheses, type of direct retainers, or fulcrum lines. Some are based on the placement of the implants. The widely accepted Kennedy-Applegate classification does not give any idea about the length, span or number of teeth missing. Rule 6 governing the application of the Kennedy method states that additional edentulous areas are referred to as modification number 1, 2, etc. Rule 7 states that the extent of the modification is not considered; only the number of edentulous areas is considered. Hence there is a need to modify the Kennedy-Applegate system. Aims: This new classification system is an attempt to modify the Kennedy-Applegate system so as to give an exact idea about missing teeth, space, span, side and areas of partially edentulous arches. Methods and Material: This system will provide information regarding maxillary or mandibular partially edentulous arches, left or right side, length of the edentulous space, number of teeth missing, and whether the prosthesis will be tooth borne or tooth-tissue borne. Conclusions: This classification is easy to apply and communicate, and will also help in designing the removable cast partial denture in a more logical and systematic way. Also, this system will give an idea of the edentulous status and the number of missing teeth in fixed, hybrid or implant prostheses.

  12. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    Science.gov (United States)

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to the illumination and noise that easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture feature are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
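
    Low-rank recovery by augmented Lagrange multipliers alternates singular-value shrinkage with entrywise soft thresholding. A sketch of the standard inexact-ALM robust PCA iteration on synthetic data (step sizes follow common published defaults, not necessarily the paper's):

```python
import numpy as np

def rpca(D, lam=None, iters=100, tol=1e-7):
    """Inexact augmented-Lagrange-multiplier robust PCA: split D into a
    low-rank part L and a sparse part S. Default lam = 1/sqrt(max(m, n))."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D)
    mu = 1.25 / np.linalg.norm(D, 2)      # common initialisation
    Y = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        # Low-rank update: shrink singular values by 1/mu.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: entrywise soft thresholding by lam/mu.
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (D - L - S)
        if np.linalg.norm(D - L - S) / norm_D < tol:
            break
        mu = min(mu * 1.5, 1e7)
    return L, S

rng = np.random.default_rng(0)
true_L = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 40))  # rank 2
S0 = np.zeros((40, 40))
mask = rng.random((40, 40)) < 0.05                            # 5% gross errors
S0[mask] = rng.normal(0.0, 10.0, mask.sum())
L, S = rpca(true_L + S0)
err = np.linalg.norm(L - true_L) / np.linalg.norm(true_L)
print(err < 0.1)  # the low-rank part is recovered despite the corruption
```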

  13. An Experimentation Platform for On-Chip Integration of Analog Neural Networks: A Pathway to Trusted and Robust Analog/RF ICs.

    Science.gov (United States)

    Maliuk, Dzmitry; Makris, Yiorgos

    2015-08-01

    We discuss the design of an experimentation platform intended for prototyping low-cost analog neural networks for on-chip integration with analog/RF circuits. The objective of such integration is to support various tasks, such as self-test, self-tuning, and trust/aging monitoring, which require classification of analog measurements obtained from on-chip sensors. Particular emphasis is given to cost-efficient implementation reflected in: 1) low energy and area budgets of circuits dedicated to neural networks; 2) robust learning in presence of analog inaccuracies; and 3) long-term retention of learned functionality. Our chip consists of a reconfigurable array of synapses and neurons operating below threshold and featuring sub-μW power consumption. The synapse circuits employ dual-mode weight storage: 1) a dynamic mode, for fast bidirectional weight updates during training and 2) a nonvolatile mode, for permanent storage of learned functionality. We discuss a robust learning strategy, and we evaluate the system performance on several benchmark problems, such as the XOR2-6 and two-spirals classification tasks.

  14. [Classification of cell-based medicinal products and legal implications: An overview and an update].

    Science.gov (United States)

    Scherer, Jürgen; Flory, Egbert

    2015-11-01

    In general, cell-based medicinal products do not represent a uniform class of medicinal products, but instead comprise medicinal products with diverse regulatory classification as advanced-therapy medicinal products (ATMP), medicinal products (MP), tissue preparations, or blood products. Due to the legal and scientific consequences of the development and approval of MPs, classification should be clarified as early as possible. This paper describes the legal situation in Germany and highlights specific criteria and concepts for classification, with a focus on, but not limited to, ATMPs and non-ATMPs. Depending on the stage of product development and the specific application submitted to a competent authority, legally binding classification is done by the German Länder Authorities, Paul-Ehrlich-Institut, or European Medicines Agency. On request by the applicants, the Committee for Advanced Therapies may issue scientific recommendations for classification.

  15. Enhanced echolocation via robust statistics and super-resolution of sonar images

    Science.gov (United States)

    Kim, Kio

    Echolocation is a process in which an animal uses acoustic signals to exchange information with environments. In a recent study, Neretti et al. have shown that the use of robust statistics can significantly improve the resiliency of echolocation against noise and enhance its accuracy by suppressing the development of sidelobes in the processing of an echo signal. In this research, the use of robust statistics is extended to problems in underwater exploration. The dissertation consists of two parts. Part I describes how robust statistics can enhance the identification of target objects, which in this case are cylindrical containers filled with four different liquids. In particular, this work employs a variation of an existing robust estimator called an L-estimator, which was first suggested by Koenker and Bassett. As pointed out by Au et al., a 'highlight interval' is an important feature, and it is closely related to many other features that are known to be crucial for dolphin echolocation. A varied L-estimator described in this text is used to enhance the detection of highlight intervals, which eventually leads to a successful classification of echo signals. Part II extends the problem into 2 dimensions. Thanks to advances in material and computer technology, various sonar imaging modalities are available on the market. By registering acoustic images from such video sequences, one can extract more information on the region of interest. Computer vision and image processing allowed the application of robust statistics to the acoustic images produced by forward looking sonar systems, such as Dual-frequency Identification Sonar and ProViewer. The first use of robust statistics for sonar image enhancement in this text is in image registration. Random Sampling Consensus (RANSAC) is widely used for image registration. The registration algorithm using RANSAC is optimized for sonar image registration, and the performance is studied. The second use of robust
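
    An L-estimator (Koenker and Bassett) is a weighted combination of order statistics; with symmetric trimming weights it becomes the trimmed mean, which ignores spikes that drag the plain mean. A toy sketch (echo amplitudes invented, not the dissertation's variant):

```python
def l_estimate(samples, weights):
    """Generic L-estimator: a weighted combination of order statistics.
    `weights` has one entry per sample and should sum to 1."""
    ordered = sorted(samples)
    return sum(w * x for w, x in zip(weights, ordered))

def trimmed_mean_weights(n, trim):
    """Weights that discard the `trim` smallest and largest order statistics."""
    kept = n - 2 * trim
    return [0.0] * trim + [1.0 / kept] * kept + [0.0] * trim

echo = [1.0, 1.1, 0.9, 1.05, 0.95, 9.0]       # one noise spike
w = trimmed_mean_weights(len(echo), trim=1)
print(round(l_estimate(echo, w), 3))          # 1.025: spike suppressed
print(round(sum(echo) / len(echo), 3))        # 2.333: plain mean is dragged up
```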

  16. Design optimization of a robust sleeve antenna for hepatic microwave ablation

    International Nuclear Information System (INIS)

    Prakash, Punit; Webster, John G; Deng Geng; Converse, Mark C; Mahvi, David M; Ferris, Michael C

    2008-01-01

    We describe the application of a Bayesian variable-number sample-path (VNSP) optimization algorithm to yield a robust design for a floating sleeve antenna for hepatic microwave ablation. Finite element models are used to generate the electromagnetic (EM) field and thermal distribution in liver for a given design. Dielectric properties of the tissue are assumed to vary within ± 10% of average properties to simulate the variation among individuals. The Bayesian VNSP algorithm yields an optimal design that is a 14.3% improvement over the original design and is more robust in terms of lesion size, shape and efficiency. Moreover, the Bayesian VNSP algorithm finds an optimal solution while saving 68.2% of the simulation evaluations compared to the standard sample-path optimization method.

  17. Supervised linear dimensionality reduction with robust margins for object recognition

    Science.gov (United States)

    Dornaika, F.; Assoum, A.

    2013-01-01

    Linear Dimensionality Reduction (LDR) techniques have been increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins in order to get a good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial in order to get robust performance in the presence of outliers.
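
    The Median hit / Median miss idea replaces the averages in neighborhood-margin criteria with medians of within-class and between-class distances, so a single mislabeled neighbor cannot collapse the margin. A simplified per-sample sketch (global medians rather than the paper's embedding objective; data invented):

```python
import numpy as np

def robust_margin(X, y, i):
    """Median-based margin for sample i: median distance to other-class
    samples ("Median miss") minus median distance to same-class samples
    ("Median hit"). Larger is better; medians damp label outliers."""
    d = np.linalg.norm(X - X[i], axis=1)
    hits = d[(y == y[i]) & (np.arange(len(y)) != i)]
    misses = d[y != y[i]]
    return np.median(misses) - np.median(hits)

X = np.array([[0.0, 0], [0.1, 0], [0.2, 0],     # class 0 cluster
              [5.0, 0], [5.1, 0], [5.2, 0]])    # class 1 cluster
y = np.array([0, 0, 0, 1, 1, 1])
print(robust_margin(X, y, 0) > 0)        # True: well inside its class

y_noisy = y.copy()
y_noisy[3] = 0                           # a label outlier joins class 0
print(robust_margin(X, y_noisy, 0) > 0)  # True: the median keeps it positive
```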

  18. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization

    Directory of Open Access Journals (Sweden)

    Philipp Kainz

    2017-10-01

    Full Text Available Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNNs) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the other approaches developed simultaneously for the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.

  19. Novel classification system of rib fractures observed in infants.

    Science.gov (United States)

    Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Pinto, Deborrah C; Greeley, Christopher; Donaruma-Kwoh, Marcella; Bista, Bibek

    2013-03-01

    Rib fractures are considered highly suspicious for nonaccidental injury in the pediatric clinical literature; however, a rib fracture classification system has not been developed. As an aid and impetus for rib fracture research, we developed a concise schema for classifying rib fracture types and fracture location that is applicable to infants. The system defined four fracture types (sternal end, buckle, transverse, and oblique) and four regions of the rib (posterior, posterolateral, anterolateral, and anterior). It was applied to all rib fractures observed during 85 consecutive infant autopsies. Rib fractures were found in 24 (28%) of the cases. A total of 158 rib fractures were identified. The proposed schema was adequate to classify 153 (97%) of the observed fractures. The results indicate that the classification system is sufficiently robust to classify rib fractures typically observed in infants and should be used by researchers investigating infant rib fractures. © 2013 American Academy of Forensic Sciences.

  20. Multilevel Weighted Support Vector Machine for Classification on Healthcare Data with Missing Values.

    Directory of Open Access Journals (Sweden)

    Talayeh Razzaghi

    Full Text Available This work is motivated by the needs of predictive analytics on healthcare data as represented by Electronic Medical Records. Such data is invariably problematic: noisy, with missing entries, and with imbalance in the classes of interest, leading to serious bias in predictive modeling. Since standard data mining methods often produce poor performance measures, we argue for the development of specialized techniques for data preprocessing and classification. In this paper, we propose a new method to simultaneously classify large datasets and reduce the effects of missing values. It is based on a multilevel framework of the cost-sensitive SVM and the expectation-maximization (EM) imputation method for missing values, which relies on iterated regression analyses. We compare classification results of multilevel SVM-based algorithms on public benchmark datasets with imbalanced classes and missing values as well as real data in health applications, and show that our multilevel SVM-based method produces faster, more accurate, and more robust classification results.
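The two ingredients the abstract names, iterated-regression imputation of missing values and a cost-sensitive SVM, can be sketched with scikit-learn. The toy data and the `IterativeImputer`/`SVC` pipeline below are illustrative stand-ins, not the authors' multilevel framework:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy imbalanced dataset with missing entries (all of this is synthetic).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)  # minority positive class
X[rng.random(X.shape) < 0.1] = np.nan            # ~10% of entries missing

# Regression-based imputation (an iterated-regression scheme in the
# spirit of EM imputation) followed by a class-weighted, i.e.
# cost-sensitive, SVM.
model = make_pipeline(
    IterativeImputer(max_iter=10, random_state=0),
    SVC(kernel="rbf", class_weight="balanced"),
)
model.fit(X, y)
acc = model.score(X, y)
print(round(acc, 2))
```

The `class_weight="balanced"` option reweights the loss inversely to class frequency, which is the standard way to make an SVM cost-sensitive to imbalance.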

  1. Supervised classification of continental shelf sediment off western Donegal, Ireland

    Science.gov (United States)

    Monteys, X.; Craven, K.; McCarron, S. G.

    2017-12-01

    Managing human impacts on marine ecosystems requires natural regions to be identified and mapped over a range of hierarchically nested scales. In recent years (2000-present) the Irish National Seabed Survey (INSS) and the Integrated Mapping for the Sustainable Development of Ireland's Marine Resources programme (INFOMAR) (Geological Survey Ireland and Marine Institute collaborations) have provided unprecedented quantities of high-quality data on Ireland's offshore territories. The increasing availability of large, detailed digital representations of these environments requires the application of objective and quantitative analyses. This study presents results of a new approach for sea floor sediment mapping based on an integrated analysis of INFOMAR multibeam bathymetric data (including the derivatives of slope and relative position), backscatter data (including derivatives of angular response analysis) and sediment groundtruthing over the continental shelf, west of Donegal. It applies a Geographic-Object-Based Image Analysis software package to provide a supervised classification of the surface sediment. This approach can provide a statistically robust, high-resolution classification of the seafloor. Initial results display a differentiation of sediment classes and a reduction in artefacts compared with previously applied methodologies. These results indicate a methodology that could be used during physical habitat mapping and classification of marine environments.

  2. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for Robustness estimation and Robustness optimisation. An example, in the context of designing a food product, is finding the best composition

  3. Tissue specific heterogeneity in effector immune cell response

    Directory of Open Access Journals (Sweden)

    Saba Tufail

    2013-08-01

    Full Text Available Post pathogen invasion, migration of effector T-cell subsets to specific tissue locations is of prime importance for the generation of a robust immune response. Effector T cells are imprinted with distinct ‘homing codes’ (adhesion molecules and chemokine receptors) during activation, which regulate their targeted trafficking to specific tissues. Internal cues in the lymph node microenvironment, along with external stimuli from food (vitamin A) and sunlight (vitamin D3), prime dendritic cells, imprinting them to play centre stage in the induction of tissue tropism in effector T cells. B cells as well, in a manner similar to effector T cells, exhibit tissue-tropic migration. In this review, we have focused on the factors regulating the generation and migration of effector T cells to various tissues, along with giving an overview of tissue tropism in B cells.

  4. A Color-Texture-Structure Descriptor for High-Resolution Satellite Image Classification

    Directory of Open Access Journals (Sweden)

    Huai Yu

    2016-03-01

    Full Text Available Scene classification plays an important role in understanding high-resolution satellite (HRS) remotely sensed imagery. For remotely sensed scenes, both color information and texture information provide discriminative ability in classification tasks. In recent years, substantial performance gains in HRS image classification have been reported in the literature. One branch of research combines multiple complementary features based on various aspects such as texture, color and structure. Two methods are commonly used to combine these features: early fusion and late fusion. In this paper, we propose combining the two methods under a tree of regions and present a new descriptor to encode color, texture and structure features using a hierarchical structure, the Color Binary Partition Tree (CBPT), which we call the CTS descriptor. Specifically, we first build the hierarchical representation of HRS imagery using the CBPT. Then we quantize the texture and color features of dense regions. Next, we analyze and extract the co-occurrence patterns of regions based on the hierarchical structure. Finally, we encode local descriptors to obtain the final CTS descriptor and test its discriminative capability using object categorization and scene classification with HRS images. The proposed descriptor contains the spectral, textural and structural information of the HRS imagery and is also robust to changes in illuminant color, scale, orientation and contrast. The experimental results demonstrate that the proposed CTS descriptor achieves competitive classification results compared with state-of-the-art algorithms.

  5. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    Science.gov (United States)

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

    Splitting on an attribute is a major process in Decision Tree C4.5 classification. However, this process does not by itself remove irrelevant features when building the tree, which leads to a major problem in decision tree classification: over-fitting caused by noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in building a classification model; it is intended to remove irrelevant data in order to improve accuracy. The feature reduction framework is used to simplify high-dimensional data to low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We use principal component analysis (PCA) for feature reduction to perform non-correlated feature selection, and the Decision Tree C4.5 algorithm for classification. From experiments conducted on the UCI cervical cancer data set with 858 instances and 36 attributes, we evaluated the performance of our framework in terms of accuracy, specificity and precision. Experimental results show that the proposed framework enhances classification accuracy, achieving an accuracy rate of 90.70%.
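A minimal sketch of the PCA-then-tree pipeline the abstract describes. Scikit-learn's CART implementation (with the entropy criterion) stands in for C4.5, and the bundled breast-cancer set stands in for the UCI cervical cancer data:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Stand-in data: the paper uses the UCI cervical cancer set; the
# breast-cancer set is used here only because it ships with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA decorrelates the attributes before the entropy-based splits;
# CART with criterion="entropy" approximates C4.5's information-gain split.
clf = make_pipeline(
    PCA(n_components=10),
    DecisionTreeClassifier(criterion="entropy", random_state=0),
)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(round(acc, 2))
```

Reducing to a handful of uncorrelated components before splitting is exactly the over-fitting countermeasure the abstract argues for; the number of components (10 here) is an illustrative choice, not the paper's.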

  6. Tissue-Based MRI Intensity Standardization: Application to Multicentric Datasets

    Directory of Open Access Journals (Sweden)

    Nicolas Robitaille

    2012-01-01

    Full Text Available Intensity standardization in MRI aims at correcting scanner-dependent intensity variations. Existing simple and robust techniques aim at matching the input image histogram onto a standard, while we think that standardization should aim at matching spatially corresponding tissue intensities. In this study, we present a novel automatic technique, called STI for STandardization of Intensities, which not only shares the simplicity and robustness of histogram-matching techniques, but also incorporates tissue spatial intensity information. STI uses joint intensity histograms to determine intensity correspondence in each tissue between the input and standard images. We compared STI to an existing histogram-matching technique on two multicentric datasets, Pilot E-ADNI and ADNI, by measuring the intensity error with respect to the standard image after performing nonlinear registration. The Pilot E-ADNI dataset consisted of 3 subjects, each scanned at 7 different sites. The ADNI dataset consisted of 795 subjects scanned at more than 50 different sites. STI was superior to the histogram-matching technique, showing significantly better intensity matching for the brain white matter with respect to the standard image.
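The global histogram-matching baseline that STI improves on can be sketched with plain NumPy quantile matching. The synthetic images below are illustrative, and STI's per-tissue joint-histogram step is not reproduced:

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities onto the reference distribution via
    quantile matching: the simple global baseline that STI refines
    with per-tissue joint histograms."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True
    )
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_quantiles = np.cumsum(s_counts) / source.size
    r_quantiles = np.cumsum(r_counts) / reference.size
    # For each source quantile, look up the reference intensity at the
    # same quantile.
    mapped = np.interp(s_quantiles, r_quantiles, r_vals)
    return mapped[s_idx].reshape(source.shape)

rng = np.random.default_rng(1)
src = rng.normal(100, 20, size=(64, 64))   # synthetic "input scan"
ref = rng.normal(150, 10, size=(64, 64))   # synthetic "standard scan"
out = match_histogram(src, ref)
print(round(out.mean()), round(out.std()))  # close to the reference stats
```

After matching, the output takes on the reference image's intensity statistics regardless of the source scanner's scaling, which is the property both the baseline and STI exploit.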

  7. Modern classification of neoplasms: reconciling differences between morphologic and molecular approaches

    International Nuclear Information System (INIS)

    Berman, Jules

    2005-01-01

    For over 150 years, pathologists have relied on histomorphology to classify and diagnose neoplasms. Their success has been stunning, permitting the accurate diagnosis of thousands of different types of neoplasms using only a microscope and a trained eye. In the past two decades, cancer genomics has challenged the supremacy of histomorphology by identifying genetic alterations shared by morphologically diverse tumors and by finding genetic features that distinguish subgroups of morphologically homogeneous tumors. The Developmental Lineage Classification and Taxonomy of Neoplasms groups neoplasms by their embryologic origin. The putative value of this classification is based on the expectation that tumors of a common developmental lineage will share common metabolic pathways and common responses to drugs that target these pathways. The purpose of this manuscript is to show that grouping tumors according to their developmental lineage can reconcile certain fundamental discrepancies resulting from morphologic and molecular approaches to neoplasm classification. In this study, six issues in tumor classification are described that exemplify the growing rift between morphologic and molecular approaches to tumor classification: 1) the morphologic separation between epithelial and non-epithelial tumors; 2) the grouping of tumors based on shared cellular functions; 3) the distinction between germ cell tumors and pluripotent tumors of non-germ cell origin; 4) the distinction between tumors that have lost their differentiation and tumors that arise from uncommitted stem cells; 5) the molecular properties shared by morphologically disparate tumors that have a common developmental lineage; and 6) the problem of re-classifying morphologically identical but clinically distinct subsets of tumors. The discussion of these issues in the context of describing different methods of tumor classification is intended to underscore the clinical value of a robust tumor classification.

  8. Robustness of Structural Systems

    DEFF Research Database (Denmark)

    Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.

    2007-01-01

    The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have attempted to quantify aspects of robustness such as redundancy and identify design principles that can improve robustness. This paper outlines the progress of recent work by the Joint Committee on Structural Safety (JCSS) to develop comprehensive guidance on assessing and providing robustness in structural systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design are discussed.

  9. Cocktail of chemical compounds robustly promoting cell reprogramming protects liver against acute injury

    Directory of Open Access Journals (Sweden)

    Yuewen Tang

    2017-02-01

    Full Text Available Tissue damage induces cells into a reprogramming-like cellular state, which contributes to tissue regeneration. However, whether factors promoting cell reprogramming favor tissue regeneration remains elusive. Here we identified combinations of small chemical compounds, including drug cocktails, that robustly promote in vitro cell reprogramming. We then administered the drug cocktails to mice with acute liver injuries induced by partial hepatectomy or toxic treatment. Our results demonstrated that the drug cocktails which promoted cell reprogramming in vitro improved liver regeneration and hepatic function in vivo after acute injuries. The underlying mechanism could be that expression of pluripotent genes activated after injury is further upregulated by the drug cocktails. Thus our study offers proof-of-concept evidence that cocktails of clinical compounds improving cell reprogramming favor tissue recovery after acute damage, which is an attractive strategy for regenerative purposes.

  10. Robustness in laying hens

    NARCIS (Netherlands)

    Star, L.

    2008-01-01

    The aim of the project ‘The genetics of robustness in laying hens’ was to investigate nature and regulation of robustness in laying hens under sub-optimal conditions and the possibility to increase robustness by using animal breeding without loss of production. At the start of the project, a robust

  11. Activity classification based on inertial and barometric pressure sensors at different anatomical locations.

    Science.gov (United States)

    Moncada-Torres, A; Leuenberger, K; Gonzenbach, R; Luft, A; Gassert, R

    2014-07-01

    Miniature, wearable sensor modules are a promising technology to monitor activities of daily living (ADL) over extended periods of time. To assure both user compliance and meaningful results, the selection and placement site of sensors requires careful consideration. We investigated these aspects for the classification of 16 ADL in 6 healthy subjects under laboratory conditions using ReSense, our custom-made inertial measurement unit enhanced with a barometric pressure sensor used to capture activity-related altitude changes. Subjects wore a module on each wrist and ankle, and one on the trunk. Activities comprised whole body movements as well as gross and dextrous upper-limb activities. Wrist-module data outperformed the other locations for the three activity groups. Specifically, overall classification accuracy rates of almost 93% and more than 95% were achieved for the repeated holdout and user-specific validation methods, respectively, for all 16 activities. Including the altitude profile resulted in a considerable improvement of up to 20% in the classification accuracy for stair ascent and descent. The gyroscopes provided no useful information for activity classification under this scheme. The proposed sensor setting could allow for robust long-term activity monitoring with high compliance in different patient populations.
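The altitude change the barometric sensor contributes can be illustrated with the standard-atmosphere barometric formula. The constants are the usual standard-atmosphere values; this is not the ReSense processing chain from the paper:

```python
P0 = 101325.0  # standard sea-level pressure, Pa

def pressure_to_altitude(p_pa, p0=P0):
    """Standard-atmosphere (hypsometric) conversion of absolute
    pressure to altitude above the reference level, in metres."""
    return 44330.0 * (1.0 - (p_pa / p0) ** (1.0 / 5.255))

# Near sea level, pressure falls by roughly 12 Pa per metre, so one
# flight of stairs (~3 m) shows up as a ~40 Pa drop.
delta = pressure_to_altitude(P0 - 40.0) - pressure_to_altitude(P0)
print(round(delta, 1))
```

Differencing two such altitude estimates over a short window yields the relative altitude profile that, per the abstract, lifted stair ascent/descent classification accuracy by up to 20%.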

  12. Activity classification based on inertial and barometric pressure sensors at different anatomical locations

    International Nuclear Information System (INIS)

    Moncada-Torres, A; Leuenberger, K; Gassert, R; Gonzenbach, R; Luft, A

    2014-01-01

    Miniature, wearable sensor modules are a promising technology to monitor activities of daily living (ADL) over extended periods of time. To assure both user compliance and meaningful results, the selection and placement site of sensors requires careful consideration. We investigated these aspects for the classification of 16 ADL in 6 healthy subjects under laboratory conditions using ReSense, our custom-made inertial measurement unit enhanced with a barometric pressure sensor used to capture activity-related altitude changes. Subjects wore a module on each wrist and ankle, and one on the trunk. Activities comprised whole body movements as well as gross and dextrous upper-limb activities. Wrist-module data outperformed the other locations for the three activity groups. Specifically, overall classification accuracy rates of almost 93% and more than 95% were achieved for the repeated holdout and user-specific validation methods, respectively, for all 16 activities. Including the altitude profile resulted in a considerable improvement of up to 20% in the classification accuracy for stair ascent and descent. The gyroscopes provided no useful information for activity classification under this scheme. The proposed sensor setting could allow for robust long-term activity monitoring with high compliance in different patient populations. (paper)

  13. Use of genetic toxicity data in GHS mutagenicity classification and labeling of substances.

    Science.gov (United States)

    Ball, Nicholas S; Hollnagel, Heli M

    2017-06-01

    One of the key outcomes of testing the potential genotoxicity or mutagenicity of a substance is the conclusion on whether the substance should be classified as a germ cell mutagen and the significance of this for other endpoints such as carcinogenicity. The basis for this conclusion is the set of criteria presented in classification and labelling systems such as the Globally Harmonized System for classification and labeling (GHS). This article reviews the classification criteria for germ cell mutagenicity and carcinogenicity and how they are applied to substances with evidence of mutagenicity. The implications and suitability of such a classification for hazard communication, risk assessment, and risk management are discussed. It is proposed that genotoxicity assessments should not focus on specifically identifying germ cell mutagens, particularly given the challenges associated with communicating this information in a meaningful way. Rather, the focus should be on deriving data to characterize the mode of action and for use in the risk assessment of mutagens, which could then feed into a more robust, risk-based management of mutagenic substances versus the current more hazard-based approaches. Environ. Mol. Mutagen. 58:354-360, 2017. © 2017 Wiley Periodicals, Inc.

  14. Fourier Transform Infrared (FT-IR) and Laser Ablation Inductively Coupled Plasma-Mass Spectrometry (LA-ICP-MS) Imaging of Cerebral Ischemia: Combined Analysis of Rat Brain Thin Cuts Toward Improved Tissue Classification.

    Science.gov (United States)

    Balbekova, Anna; Lohninger, Hans; van Tilborg, Geralda A F; Dijkhuizen, Rick M; Bonta, Maximilian; Limbeck, Andreas; Lendl, Bernhard; Al-Saad, Khalid A; Ali, Mohamed; Celikic, Minja; Ofner, Johannes

    2018-02-01

    Microspectroscopic techniques are widely used to complement histological studies. Due to recent developments in the field of chemical imaging, combined chemical analysis has become attractive. This technique facilitates a deepened analysis compared to single techniques or side-by-side analysis. In this study, rat brains harvested one week after induction of photothrombotic stroke were investigated. Adjacent thin cuts from rats' brains were imaged using Fourier transform infrared (FT-IR) microspectroscopy and laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). The LA-ICP-MS data were normalized using an internal standard (a thin gold layer). The acquired hyperspectral data cubes were fused and subjected to multivariate analysis. Brain regions affected by stroke as well as unaffected gray and white matter were identified and classified using a model based on either partial least squares discriminant analysis (PLS-DA) or random decision forest (RDF) algorithms. The RDF algorithm demonstrated the best results for classification. Improved classification was observed in the case of fused data in comparison to individual data sets (either FT-IR or LA-ICP-MS). Variable importance analysis demonstrated that both molecular and elemental content contribute to the improved RDF classification. Univariate spectral analysis identified biochemical properties of the assigned tissue types. Classification of multisensor hyperspectral data sets using an RDF algorithm allows access to a novel and in-depth understanding of biochemical processes and solid chemical allocation of different brain regions.

  15. Capabilities and Limitations of Tissue Size Control through Passive Mechanical Forces.

    Directory of Open Access Journals (Sweden)

    Jochen Kursawe

    2015-12-01

    Full Text Available Embryogenesis is an extraordinarily robust process, exhibiting the ability to control tissue size and repair patterning defects in the face of environmental and genetic perturbations. The size and shape of a developing tissue is a function of the number and size of its constituent cells as well as their geometric packing. How these cellular properties are coordinated at the tissue level to ensure developmental robustness remains a mystery; understanding this process requires studying multiple concurrent processes that make up morphogenesis, including the spatial patterning of cell fates and apoptosis, as well as cell intercalations. In this work, we develop a computational model that aims to understand aspects of the robust pattern repair mechanisms of the Drosophila embryonic epidermal tissues. Size control in this system has previously been shown to rely on the regulation of apoptosis rather than proliferation; however, to date little work has been done to understand the role of cellular mechanics in this process. We employ a vertex model of an embryonic segment to test hypotheses about the emergence of this size control. Comparing the model to previously published data across wild type and genetic perturbations, we show that passive mechanical forces suffice to explain the observed size control in the posterior (P) compartment of a segment. However, observed asymmetries in cell death frequencies across the segment are demonstrated to require patterning of cellular properties in the model. Finally, we show that distinct forms of mechanical regulation in the model may be distinguished by differences in cell shapes in the P compartment, as quantified through experimentally accessible summary statistics, as well as by the tissue recoil after laser ablation experiments.

  16. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    Science.gov (United States)

    Cho, Nam-Hoon; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system. PMID:25371701

  17. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    Directory of Open Access Journals (Sweden)

    Tae-Yun Kim

    2014-01-01

    Full Text Available One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.

  18. Atmospheric circulation classification comparison based on wildfires in Portugal

    Science.gov (United States)

    Pereira, M. G.; Trigo, R. M.

    2009-04-01

    Atmospheric circulation classifications are not a simple description of atmospheric states but a tool to understand and interpret atmospheric processes and to model the relation between atmospheric circulation and surface climate and other related variables (Radan Huth et al., 2008). Classifications were initially developed for weather forecasting purposes; however, with the progress in computer processing capability, new and more robust objective methods were developed and applied to large datasets, making atmospheric circulation classification one of the most important fields in synoptic and statistical climatology. Classification studies have been extensively used in climate change studies (e.g. reconstructed past climates, recent observed changes and future climates), in bioclimatological research (e.g. relating human mortality to climatic factors) and in a wide variety of synoptic climatological applications (e.g. comparison between datasets, air pollution, snow avalanches, wine quality, fish captures and forest fires). Likewise, atmospheric circulation classifications are important for the study of the role of weather in wildfire occurrence in Portugal, because daily synoptic variability is the most important driver of local weather conditions (Pereira et al., 2005). In particular, the objective classification scheme developed by Trigo and DaCamara (2000) to classify the atmospheric circulation affecting Portugal has proved quite useful in discriminating the occurrence and development of wildfires, as well as the distribution over Portugal of surface climatic variables with an impact on wildfire activity, such as maximum and minimum temperature and precipitation. This work aims to present: (i) an overview of the existing circulation classifications for the Iberian Peninsula, and (ii) the results of a comparison study between these atmospheric circulation classifications based on their relation with wildfires and relevant meteorological

  19. Free-floating epithelial micro-tissue arrays: a low cost and versatile technique.

    Science.gov (United States)

    Flood, P; Alvarez, L; Reynaud, E G

    2016-10-11

    Three-dimensional (3D) tissue models are invaluable tools that can closely reflect the in vivo physiological environment. However, they are usually difficult to develop, have a low throughput and are often costly, limiting their utility to most laboratories. The recent availability of inexpensive additive manufacturing printers and open source 3D design software offers us the possibility to easily create affordable 3D cell culture platforms. To demonstrate this, we established a simple, inexpensive and robust method for producing arrays of free-floating epithelial micro-tissues. Using a combination of 3D computer-aided design and 3D printing, hydrogel micro-moulding and collagen cell encapsulation, we engineered microenvironments that consistently direct the growth of micro-tissue arrays. We demonstrated the adaptability of this technique by testing several immortalised epithelial cell lines (MDCK, A549, Caco-2) and by generating branching morphology and micron- to millimetre-scale micro-tissues. We established by fluorescence and electron microscopy that micro-tissues are polarised, have cell type specific differentiated phenotypes and regain native in vivo tissue qualities. Finally, using Salmonella typhimurium we show micro-tissues display a more physiologically relevant infection response compared to epithelial monolayers grown on permeable filter supports. In summary, we have developed a robust and adaptable technique for producing arrays of epithelial micro-tissues. This in vitro model has the potential to be a valuable tool for studying epithelial cell and tissue function/architecture in a physiologically relevant context.

  20. Automated classification of cell morphology by coherence-controlled holographic microscopy.

    Science.gov (United States)

    Strbkova, Lenka; Zicha, Daniel; Vesely, Pavel; Chmelik, Radim

    2017-08-01

    In the last few years, classification of cells by machine learning has become frequently used in biology. However, most of the approaches are based on morphometric (MO) features, which are not quantitative in terms of cell mass. This may result in poor classification accuracy. Here, we study the potential contribution of coherence-controlled holographic microscopy, enabling quantitative phase imaging, to the classification of cell morphologies. We compare our approach with the commonly used method based on MO features. We tested both classification approaches in an experiment with nutritionally deprived cancer tissue cells, while employing several supervised machine learning algorithms. Most of the classifiers provided higher performance when quantitative phase features were employed. Based on the results, it can be concluded that the quantitative phase features played an important role in improving the performance of the classification. The methodology could be a valuable aid in refining the monitoring of live cells in an automated fashion. We believe that coherence-controlled holographic microscopy, as a tool for quantitative phase imaging, offers all preconditions for the accurate automated analysis of live cell behavior while enabling noninvasive label-free imaging with sufficient contrast and high spatiotemporal phase sensitivity. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  1. MicroRNAs in the Tumor Biology of Soft Tissue Sarcomas

    NARCIS (Netherlands)

    C.M.M. Gits (Caroline)

    2013-01-01

    Abstract: Soft tissue sarcomas represent a rare, heterogeneous group of mesenchymal tumors. In sarcomas, histological classification, prediction of clinical behaviour and prognosis, and targeted treatment are often a challenge. A better understanding of the biology of soft

  2. Classification, diagnosis, and approach to treatment for angioedema

    DEFF Research Database (Denmark)

    Cicardi, M; Aberer, W; Banerji, A

    2014-01-01

    Angioedema is defined as localized and self-limiting edema of the subcutaneous and submucosal tissue, due to a temporary increase in vascular permeability caused by the release of vasoactive mediator(s). When angioedema recurs without significant wheals, the patient should be diagnosed as having angioedema as a distinct disease. In the absence of an accepted classification, different types of angioedema are not uniquely identified. For this reason, the European Academy of Allergy and Clinical Immunology gave its patronage to a consensus conference aimed at classifying angioedema. Four types of acquired and three types of hereditary angioedema were identified as separate forms from the analysis of the literature and were presented in detail at the meeting. Here, we summarize the analysis of the data and the resulting classification of angioedema.

  3. Cancer Feature Selection and Classification Using a Binary Quantum-Behaved Particle Swarm Optimization and Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Maolong Xi

    2016-01-01

    This paper focuses on feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) algorithm for cancer feature gene selection, coupled with a support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described; it is a discretized version of the original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross-validation (LOOCV). Finally, the BQPSO coupled with SVM (BQPSO/SVM), binary PSO coupled with SVM (BPSO/SVM), and a genetic algorithm coupled with SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms.
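The LOOCV protocol described above (hold out one sample, train on the remaining samples, repeat for every sample) can be illustrated with a short sketch. This is not the paper's BQPSO/SVM pipeline: it uses synthetic "expression" data and a simple nearest-centroid stand-in classifier, purely to show the cross-validation loop.

```python
import numpy as np

def loocv_accuracy(X, y, classify):
    """Leave-one-out cross-validation: hold out each sample once,
    train on the rest, and score the held-out prediction."""
    correct = 0
    n = len(y)
    for i in range(n):
        mask = np.arange(n) != i
        pred = classify(X[mask], y[mask], X[i])
        correct += int(pred == y[i])
    return correct / n

def nearest_centroid(X_train, y_train, x):
    """Toy stand-in classifier: assign x to the class whose
    feature centroid is closest in Euclidean distance."""
    classes = np.unique(y_train)
    dists = [np.linalg.norm(x - X_train[y_train == c].mean(axis=0))
             for c in classes]
    return classes[int(np.argmin(dists))]

# Two well-separated synthetic "expression" classes, 50 genes each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(3, 1, (20, 50))])
y = np.array([0] * 20 + [1] * 20)
print(loocv_accuracy(X, y, nearest_centroid))  # well-separated classes -> 1.0
```

Any classifier with the same `(X_train, y_train, x) -> label` signature, such as a trained SVM, could be dropped into `loocv_accuracy` in place of the centroid rule.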

  4. Cancer Feature Selection and Classification Using a Binary Quantum-Behaved Particle Swarm Optimization and Support Vector Machine

    Science.gov (United States)

    Sun, Jun; Liu, Li; Fan, Fangyun; Wu, Xiaojun

    2016-01-01

    This paper focuses on feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) algorithm for cancer feature gene selection, coupled with a support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described; it is a discretized version of the original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross-validation (LOOCV). Finally, the BQPSO coupled with SVM (BQPSO/SVM), binary PSO coupled with SVM (BPSO/SVM), and a genetic algorithm coupled with SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms. PMID:27642363

  5. Benign fatty tumors: classification, clinical course, imaging appearance, and treatment

    International Nuclear Information System (INIS)

    Bancroft, Laura W.; Kransdorf, Mark J.; Peterson, Jeffrey J.; O'Connor, Mary I.

    2006-01-01

    Lipoma is the most common soft-tissue tumor, with a wide spectrum of clinical presentations and imaging appearances. Several subtypes are described, ranging from lesions entirely composed of mature adipose tissue to tumors intimately associated with nonadipose tissue, to those composed of brown fat. The imaging appearance of these fatty masses is frequently sufficiently characteristic to allow a specific diagnosis. However, in other cases, although a specific diagnosis is not achievable, a meaningful limited differential diagnosis can be established. The purpose of this manuscript is to review the spectrum of benign fatty tumors highlighting the current classification system, clinical presentation and behavior, spectrum of imaging appearances, and treatment. The imaging review emphasizes computed tomography (CT) scanning and magnetic resonance (MR) imaging, differentiating radiologic features. (orig.)

  6. A New Classification Approach Based on Multiple Classification Rules

    OpenAIRE

    Zhongmei Zhou

    2014-01-01

    A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high-accuracy classifier. Hence, classification techniques are very useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

  7. A texton-based approach for the classification of lung parenchyma in CT images

    DEFF Research Database (Denmark)

    Gangeh, Mehrdad J.; Sørensen, Lauge; Shaker, Saher B.

    2010-01-01

    In this paper, a texton-based classification system based on raw pixel representation along with a support vector machine with radial basis function kernel is proposed for the classification of emphysema in computed tomography images of the lung. The proposed approach is tested on 168 annotated...... regions of interest consisting of normal tissue, centrilobular emphysema, and paraseptal emphysema. The results show the superiority of the proposed approach to common techniques in the literature including moments of the histogram of filter responses based on Gaussian derivatives. The performance...

  8. On Predicting lung cancer subtypes using ‘omic’ data from tumor and tumor-adjacent histologically-normal tissue

    International Nuclear Information System (INIS)

    Pineda, Arturo López; Ogoe, Henry Ato; Balasubramanian, Jeya Balaji; Rangel Escareño, Claudia; Visweswaran, Shyam; Herman, James Gordon; Gopalakrishnan, Vanathi

    2016-01-01

    Adenocarcinoma (ADC) and squamous cell carcinoma (SCC) are the most prevalent histological types among lung cancers. Distinguishing between these subtypes is critically important because they have different implications for prognosis and treatment. Normally, histopathological analyses are used to distinguish between the two, where the tissue samples are collected based on small endoscopic samples or needle aspirations. However, the lack of cell architecture in these small tissue samples hampers the process of distinguishing between the two subtypes. Molecular profiling can also be used to discriminate between the two lung cancer subtypes, on condition that the biopsy is composed of at least 50 % of tumor cells. However, for some cases, the tissue composition of a biopsy might be a mix of tumor and tumor-adjacent histologically normal tissue (TAHN). When this happens, a new biopsy is required, with associated cost, risks and discomfort to the patient. To avoid this problem, we hypothesize that a computational method can distinguish between lung cancer subtypes given tumor and TAHN tissue. Using publicly available datasets for gene expression and DNA methylation, we applied four classification tasks, depending on the possible combinations of tumor and TAHN tissue. First, we used a feature selector (ReliefF/Limma) to select relevant variables, which were then used to build a simple naïve Bayes classification model. Then, we evaluated the classification performance of our models by measuring the area under the receiver operating characteristic curve (AUC). Finally, we analyzed the relevance of the selected genes using hierarchical clustering and IPA® software for gene functional analysis. All Bayesian models achieved high classification performance (AUC > 0.94), which were confirmed by hierarchical cluster analysis. From the genes selected, 25 (93 %) were found to be related to cancer (19 were associated with ADC or SCC), confirming the biological relevance of our
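The core modeling step above (fit a simple naïve Bayes classifier on selected features, then score it with the area under the ROC curve) can be sketched as follows. This is an illustration on synthetic data with a Gaussian naïve Bayes model and a rank-sum AUC, not the authors' ReliefF/Limma pipeline or their gene-expression data.

```python
import numpy as np

def fit_gnb(X, y):
    """Gaussian naive Bayes: per-class feature means, variances, priors."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-6, len(Xc) / len(y))
    return params

def predict_proba_pos(params, X):
    """Posterior probability of class 1 for each row of X."""
    log_posts = []
    for c, (mu, var, prior) in sorted(params.items()):
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)
        log_posts.append(ll + np.log(prior))
    log_posts = np.array(log_posts)   # shape (n_classes, n_samples)
    log_posts -= log_posts.max(0)     # numerical stability
    p = np.exp(log_posts)
    return p[1] / p.sum(0)

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic two-class "expression" data with a clear mean shift.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(1.5, 1, (30, 10))])
y = np.array([0] * 30 + [1] * 30)
params = fit_gnb(X, y)
print(round(auc(predict_proba_pos(params, X), y), 3))
```

On data this well separated the AUC comes out near 1.0, echoing the AUC > 0.94 regime the abstract reports for its own (real) classification tasks.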

  9. Quality Evaluation of Land-Cover Classification Using Convolutional Neural Network

    Science.gov (United States)

    Dang, Y.; Zhang, J.; Zhao, Y.; Luo, F.; Ma, W.; Yu, F.

    2018-04-01

    Land-cover classification is one of the most important products of earth observation, which focuses mainly on profiling the physical characteristics of the land surface with temporal and distribution attributes and contains information on both natural and man-made coverage elements, such as vegetation, soil, glaciers, rivers, lakes, marsh wetlands and various man-made structures. In recent years, the amount of high-resolution remote sensing data has increased sharply. Accordingly, the volume of land-cover classification products increases as well, and the need to evaluate such frequently updated products is a big challenge. Conventionally, the automatic quality evaluation of land-cover classification is made through pixel-based classifying algorithms, which makes the task much trickier and consequently hard to keep pace with the required updating frequency. In this paper, we propose a novel quality evaluation approach for land-cover classification using a scene classification method based on a Convolutional Neural Network (CNN) model. By learning from remote sensing data, the randomly initialized kernels that serve as filter matrices evolve into operators with functions similar to hand-crafted operators, like the Sobel or Canny operator, while other kernels learned by the CNN model are much more complex and cannot be understood as existing filters. The method using the CNN approach as its core algorithm serves quality-evaluation tasks well, since it calculates a set of outputs that directly represent the image's membership grade in certain classes. An automatic quality evaluation approach for the land-cover DLG-DOM coupling data (DLG for Digital Line Graphic, DOM for Digital Orthophoto Map) is introduced in this paper. The CNN model proved a robust method for image evaluation, which brought out the idea of an automatic quality evaluation approach for land-cover classification.
    Based on this experiment, new ideas of quality evaluation
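The abstract's observation that some learned CNN kernels come to resemble hand-crafted filters such as the Sobel operator can be made concrete with a small sketch: applying a 3×3 horizontal-gradient Sobel kernel to a toy image with one vertical edge. The kernel and image below are illustrative, not taken from the paper.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Direct 'valid'-mode 2-D correlation of a small kernel over an image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# Horizontal-gradient Sobel kernel: responds strongly at vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Synthetic image: dark left half, bright right half (one vertical edge).
img = np.zeros((8, 8))
img[:, 4:] = 1.0

response = conv2d_valid(img, sobel_x)
print(response.max())  # strong response (4.0) only at the edge columns
```

A trained CNN applies many such kernels in parallel, with their weights learned from data rather than fixed in advance, which is exactly the resemblance the abstract points out.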

  10. 78 FR 68983 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-11-18

    ...-AD33 Cotton Futures Classification: Optional Classification Procedure AGENCY: Agricultural Marketing... regulations to allow for the addition of an optional cotton futures classification procedure--identified and... response to requests from the U.S. cotton industry and ICE, AMS will offer a futures classification option...

  11. Studies of stability and robustness for artificial neural networks and boosted decision trees

    International Nuclear Information System (INIS)

    Yang, H.-J.; Roe, Byron P.; Zhu Ji

    2007-01-01

    In this paper, we compare the performance, stability and robustness of Artificial Neural Networks (ANN) and Boosted Decision Trees (BDT) using MiniBooNE Monte Carlo samples. These methods attempt to classify events given a number of identification variables. The BDT algorithm has been discussed by us in previous publications. Testing is done in this paper by smearing and shifting the input variables of the testing samples. Based on these studies, BDT has better particle identification performance than ANN. The degradation in classification performance caused by shifting or smearing the testing variables is smaller for BDT than for ANN.

  12. Adaptive partial volume classification of MRI data

    International Nuclear Information System (INIS)

    Chiverton, John P; Wells, Kevin

    2008-01-01

    Tomographic biomedical images are commonly affected by an imaging artefact known as the partial volume (PV) effect. The PV effect produces voxels composed of a mixture of tissues in anatomical magnetic resonance imaging (MRI) data resulting in a continuity of these tissue classes. Anatomical MRI data typically consist of a number of contiguous regions of tissues or even contiguous regions of PV voxels. Furthermore discontinuities exist between the boundaries of these contiguous image regions. The work presented here probabilistically models the PV effect using spatial regularization in the form of continuous Markov random fields (MRFs) to classify anatomical MRI brain data, simulated and real. A unique approach is used to adaptively control the amount of spatial regularization imposed by the MRF. Spatially derived image gradient magnitude is used to identify the discontinuities between image regions of contiguous tissue voxels and PV voxels, imposing variable amounts of regularization determined by simulation. Markov chain Monte Carlo (MCMC) is used to simulate the posterior distribution of the probabilistic image model. Promising quantitative results are presented for PV classification of simulated and real MRI data of the human brain.

  13. Proteomic patterns analysis with multivariate calculations as a promising tool for prompt differentiation of early stage lung tissue with cancer and unchanged tissue material

    Directory of Open Access Journals (Sweden)

    Grodzki Tomasz

    2011-03-01

    Abstract Background Lung cancer diagnosis in tissue material with commonly used histological techniques is sometimes inconvenient and in a number of cases leads to ambiguous conclusions. Frequently, advanced immunostaining techniques have to be employed, yet they are both time consuming and limited. In this study a proteomic approach is presented which may help provide unambiguous pathologic diagnosis of tissue material. Methods Lung tissue material found to be pathologically changed was prepared to isolate the proteome with a fast and non-selective procedure. Isolated peptides and proteins ranging from 3.5 to 20 kDa were analysed directly using a high-resolution mass spectrometer (MALDI-TOF/TOF) with sinapic acid as a matrix. The recorded complex spectra of a single run were then analyzed with multivariate statistical analysis algorithms (principal component analysis, classification methods). In the applied protocol we focused on obtaining the spectra richest in protein signals, constituting a pattern of change within the sample that contains detailed information about its protein composition. Advanced statistical methods were applied to indicate differences between the examined groups. Results The obtained results indicate changes in the proteome profiles of changed tissues in comparison to physiologically unchanged material (control group), which were reflected in the results of principal component analysis (PCA). Points representing spectra of the control group were located in different areas of the multidimensional space and were less diffused in comparison to cancer tissues. Three different classification algorithms showed a recognition capability of 100% regarding classification of the examined material into the appropriate group. Conclusion The application of the presented protocol and method enabled finding pathological changes in tissue material regardless of the localization and size of abnormalities in the sample volume. Proteomic profile as a complex, rich in signals spectrum of proteins
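The PCA behaviour described in the Results (control spectra clustered, cancer spectra shifted and more diffuse in component space) can be mimicked on synthetic "spectra". This sketch uses an SVD-based PCA on invented data, not the study's MALDI profiles.

```python
import numpy as np

# Synthetic "spectra": control group tightly clustered, cancer group
# shifted and more diffuse, mirroring the abstract's PCA observation.
rng = np.random.default_rng(2)
control = rng.normal(0.0, 0.2, (15, 200))
cancer = rng.normal(1.0, 0.8, (15, 200))
X = np.vstack([control, cancer])

# PCA: center the data, then project onto the top right-singular vectors.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T            # coordinates in the first two PCs

ctrl_pc1, canc_pc1 = scores[:15, 0], scores[15:, 0]
print(abs(ctrl_pc1.mean() - canc_pc1.mean()) > ctrl_pc1.std())  # groups separate on PC1
print(canc_pc1.std() > ctrl_pc1.std())                          # cancer spectra more diffuse
```

Both checks print True here: the dominant principal component captures the group shift, and the cancer points scatter more widely along it, which is the qualitative picture the abstract reports for its real spectra.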

  14. Dense Trajectories and DHOG for Classification of Viewpoints from Echocardiogram Videos

    Directory of Open Access Journals (Sweden)

    Liqin Huang

    2016-01-01

    In echo-cardiac clinical computer-aided diagnosis, an important step is to automatically classify echocardiography videos from different angles and different regions. We propose an echocardiography video classification algorithm based on dense trajectories and difference histograms of oriented gradients (DHOG). First, we use the dense grid method to describe feature characteristics in each frame of an echocardiography sequence and then track these feature points by applying dense optical flow. In order to overcome the influence of the rapid and irregular movement in echocardiography videos and obtain more robust tracking results, we also design a trajectory description algorithm which uses the derivative of the optical flow to obtain motion trajectory information and associates the different characteristics (e.g., the trajectory shape, DHOG, HOF, and MBH) with embedded structural information of the spatiotemporal pyramid. To avoid the “dimension disaster,” we apply the Fisher vector to reduce the dimension of the feature description, followed by a linear SVM classifier to improve the final classification result. The average accuracy of echocardiography video classification is 77.12% for all eight viewpoints and 100% for the three primary viewpoints.

  15. Classification of schizophrenia patients based on resting-state functional network connectivity

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Arbabshirani

    2013-07-01

    There is a growing interest in automatic classification of mental disorders based on neuroimaging data. Small training data sets (subjects) and very large amounts of high-dimensional data make it a challenging task to design robust and accurate classifiers for heterogeneous disorders such as schizophrenia. Most previous studies considered structural MRI, diffusion tensor imaging and task-based fMRI for this purpose. However, resting-state data has rarely been used in discrimination of schizophrenia patients from healthy controls. Resting data are of great interest, since they are relatively easy to collect, and not confounded by behavioral performance on a task. Several linear and non-linear classification methods were trained using a training dataset and evaluated with a separate testing dataset. Results show that classification with high accuracy is achievable using simple non-linear discriminative methods such as k-nearest neighbors, which is very promising. We compare and report detailed results of each classifier as well as statistical analysis and evaluation of each single feature. To our knowledge, our results represent the first use of resting-state functional network connectivity features to classify schizophrenia.
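The k-nearest-neighbors method singled out above is simple enough to sketch in full. The "connectivity features" below are synthetic stand-ins, not the study's resting-state fMRI data.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """k-nearest-neighbour classification by majority vote
    over Euclidean distances in feature space."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]      # labels of k closest samples
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Synthetic "connectivity features": controls (0) vs. patients (1).
rng = np.random.default_rng(3)
X_train = np.vstack([rng.normal(0, 1, (25, 40)), rng.normal(2, 1, (25, 40))])
y_train = np.array([0] * 25 + [1] * 25)
X_test = np.vstack([rng.normal(0, 1, (5, 40)), rng.normal(2, 1, (5, 40))])
y_test = np.array([0] * 5 + [1] * 5)

acc = (knn_predict(X_train, y_train, X_test) == y_test).mean()
print(acc)  # well-separated synthetic groups -> 1.0
```

The train/test split mirrors the abstract's protocol of training on one dataset and evaluating on a separate one; real connectivity features are of course far less cleanly separated than this toy example.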

  16. Wavelet-based feature extraction applied to small-angle x-ray scattering patterns from breast tissue: a tool for differentiating between tissue types

    International Nuclear Information System (INIS)

    Falzon, G; Pearson, S; Murison, R; Hall, C; Siu, K; Evans, A; Rogers, K; Lewis, R

    2006-01-01

    This paper reports on the application of wavelet decomposition to small-angle x-ray scattering (SAXS) patterns from human breast tissue produced by a synchrotron source. The pixel intensities of SAXS patterns of normal, benign and malignant tissue types were transformed into wavelet coefficients. Statistical analysis found significant differences between the wavelet coefficients describing the patterns produced by different tissue types. These differences were then correlated with position in the image and have been linked to the supra-molecular structural changes that occur in breast tissue in the presence of disease. Specifically, results indicate that there are significant differences between healthy and diseased tissues in the wavelet coefficients that describe the peaks produced by the axial d-spacing of collagen. These differences suggest that a useful classification tool could be based upon the spectral information within the axial peaks
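One level of the wavelet decomposition idea above can be sketched with the Haar transform, the simplest wavelet: pairwise averages give a smoothed approximation and pairwise differences give detail coefficients that localize sharp features such as diffraction peaks. The 1-D "profile" below is invented, not an actual SAXS pattern.

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail), scaled by
    1/sqrt(2) so that total energy is preserved."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

# A flat radial intensity profile with one sharp "scattering peak".
profile = np.zeros(16)
profile[6:8] = [1.0, 3.0]

approx, detail = haar_step(profile)

# The detail coefficients localize the peak: large magnitude only
# where the profile changes abruptly within a sample pair.
print(int(np.argmax(np.abs(detail))))  # index 3 -> samples 6 and 7
```

Statistics computed on such coefficients (per position, per scale) are the kind of features the paper compares across normal, benign and malignant tissue; a full analysis would recurse `haar_step` on `approx` to obtain multiple scales.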

  17. Object Classification in Semi Structured Enviroment Using Forward-Looking Sonar

    Directory of Open Access Journals (Sweden)

    Matheus dos Santos

    2017-09-01

    Submarine exploration using robots has been increasing in recent years. The automation of tasks such as monitoring, inspection, and underwater maintenance requires an understanding of the robot’s environment. Object recognition in the scene is becoming a critical issue for these systems. In this work, an underwater object classification pipeline applied to acoustic images acquired by Forward-Looking Sonar (FLS) is studied. The object segmentation combines thresholding, connected-pixel search, and intensity-peak analysis techniques. The object descriptor extracts intensity and geometric features of the detected objects. A comparison between the Support Vector Machine, K-Nearest Neighbors, and Random Trees classifiers is presented. An open-source tool was developed to annotate and classify the objects and evaluate their classification performance. The proposed method efficiently segments and classifies the structures in the scene using a real dataset acquired by an underwater vehicle in a harbor area. Experimental results demonstrate the robustness and accuracy of the method described in this paper.

  18. Robust Growth Determinants

    OpenAIRE

    Doppelhofer, Gernot; Weeks, Melvyn

    2011-01-01

    This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture modeling that allows for fat-tailed errors compared to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight out of eighteen variables found to be significantly related to economic growth ...

  19. An approach for leukemia classification based on cooperative game theory.

    Science.gov (United States)

    Torkaman, Atefeh; Charkari, Nasrollah Moghaddam; Aghaeipour, Mahnaz

    2011-01-01

    Hematological malignancies are the types of cancer that affect blood, bone marrow and lymph nodes. As these tissues are naturally connected through the immune system, a disease affecting one of them will often affect the others as well. The hematological malignancies include leukemia, lymphoma, and multiple myeloma. Among them, leukemia is a serious malignancy that starts in blood tissues, especially the bone marrow, where the blood is made. Research shows that leukemia is one of the most common cancers in the world. Emphasis on diagnostic techniques and the best treatments can therefore provide better prognosis and survival for patients. In this paper, an automatic diagnosis recommender system for classifying leukemia based on cooperative game theory is presented. Throughout this research, we analyze flow cytometry data toward the classification of leukemia into eight classes. We work on a real data set of different types of leukemia collected at the Iran Blood Transfusion Organization (IBTO). Generally, the data set contains 400 samples taken from human leukemic bone marrow. This study uses a cooperative game for classification according to different weights assigned to the markers. The proposed method is versatile, as there are no constraints on what the input or output represent. This means that it can be used to classify a population according to their contributions. In other words, it applies equally to other groups of data. The experimental results show an accuracy rate of 93.12% for classification, compared to a decision tree (C4.5) with 90.16% accuracy. The result demonstrates that the cooperative game approach is very promising for direct classification of leukemia as part of an active medical decision support system for the interpretation of flow cytometry readouts. This system could assist clinical hematologists to properly recognize different kinds of leukemia by preparing suggestions, and this could improve the treatment of leukemic

  20. Mechanostimulation Protocols for Cardiac Tissue Engineering

    Directory of Open Access Journals (Sweden)

    Marco Govoni

    2013-01-01

    Owing to the inability of a damaged myocardium to replace itself, alternative strategies to heart transplantation have been explored within the last decades, and cardiac tissue engineering/regenerative medicine is among the present challenges in biomedical research. Encouragingly, several studies attest to the constant extension of the toolbox available to engineer a fully functional, contractile, and robust cardiac tissue using different combinations of cells, template bioscaffolds, and biophysical stimuli obtained by the use of specific bioreactors. Mechanical forces influence the growth and shape of every tissue in our body, generating changes in intracellular biochemistry and gene expression. That is why bioreactors play a central role in the task of regenerating a complex tissue such as the myocardium. In the last fifteen years a large number of dynamic culture devices have been developed and many results have been collected. The aim of this brief review is to summarize in a single streamlined paper the state of the art in this field.

  1. Synthetic aperture tissue and flow ultrasound imaging

    DEFF Research Database (Denmark)

    Nikolov, Svetoslav

    imaging applied to medical ultrasound. It is divided into two major parts: tissue and blood flow imaging. Tissue imaging using synthetic aperture algorithms has been investigated for about two decades, but has not been implemented in medical scanners yet. Among the other reasons, the conventional scanning...... and beamformation methods are adequate for the imaging modalities in clinical use - the B-mode imaging of tissue structures, and the color mapping of blood flow. The acquisition time, however, is too long, and these methods fail to perform real-time three-dimensional scans. The synthetic transmit aperture......, on the other hand, can create a Bmode image with as little as 2 emissions, thus significantly speeding-up the scan procedure. The first part of the dissertation describes the synthetic aperture tissue imaging. It starts with an overview of the efforts previously made by other research groups. A classification...

  2. Quantitative segmentation of fluorescence microscopy images of heterogeneous tissue: Approach for tuning algorithm parameters

    Science.gov (United States)

    Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi

    2013-02-01

    The combination of fluorescent contrast agents with microscopy is a powerful technique to obtain real time images of tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a robust solution that can advance vital fluorescence microscopy as a clinically significant technology.

  3. Development and validation of a microRNA based diagnostic assay for primary tumor site classification of liver core biopsies

    DEFF Research Database (Denmark)

    Perell, Katharina; Vincent, Martin; Vainer, Ben

    2015-01-01

    for normal liver tissue contamination. Performance was estimated by cross-validation, followed by independent validation on 55 liver core biopsies with a tumor content as low as 10%. A microRNA classifier developed, using the statistical contamination model, showed an overall classification accuracy of 74...... on classification. MicroRNA profiling was performed using quantitative Real-Time PCR on formalin-fixed paraffin-embedded samples. 278 primary tumors and liver metastases, representing nine primary tumor classes, as well as normal liver samples were used as a training set. A statistical model was applied to adjust.......5% upon independent validation. Two-thirds of the samples were classified with high-confidence, with an accuracy of 92% on high-confidence predictions. A classifier trained without adjusting for liver tissue contamination, showed a classification accuracy of 38.2%. Our results indicate that surrounding...

  4. Classification of cancerous cells based on the one-class problem approach

    Science.gov (United States)

    Murshed, Nabeel A.; Bortolozzi, Flavio; Sabourin, Robert

    1996-03-01

    One of the most important factors in reducing the effect of cancerous diseases is early diagnosis, which requires a good and robust method. With the advancement of computer technologies and digital image processing, the development of a computer-based system has become feasible. In this paper, we introduce a new approach for the detection of cancerous cells. This approach is based on the one-class problem approach, through which the classification system need only be trained with patterns of cancerous cells. This reduces the burden of the training task by about 50%. Based on this approach, a computer-based classification system is developed using Fuzzy ARTMAP neural networks. Experiments were performed using a set of 542 patterns taken from a sample of breast cancer. Results of the experiment show 98% correct identification of cancerous cells and 95% correct identification of non-cancerous cells.
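The one-class idea above (train only on the target class, then flag anything dissimilar) can be sketched with a simple centroid-distance novelty detector. This is a deliberately minimal stand-in for the paper's Fuzzy ARTMAP system, run on synthetic feature vectors.

```python
import numpy as np

class OneClassCentroid:
    """Minimal one-class classifier: fit on target-class samples only,
    then flag new samples whose distance to the class centroid exceeds
    a chosen percentile of the training distances."""

    def fit(self, X, percentile=95):
        self.centroid = X.mean(axis=0)
        d = np.linalg.norm(X - self.centroid, axis=1)
        self.threshold = np.percentile(d, percentile)
        return self

    def predict(self, X):
        d = np.linalg.norm(X - self.centroid, axis=1)
        return d <= self.threshold   # True = accepted as target class

rng = np.random.default_rng(4)
target = rng.normal(0, 1, (200, 8))    # "cancerous cell" feature patterns
outliers = rng.normal(5, 1, (50, 8))   # patterns never seen in training

model = OneClassCentroid().fit(target)
print(model.predict(target).mean())    # ~0.95 of the training class accepted
print(model.predict(outliers).mean())  # none of the distant outliers accepted
```

The point mirrored from the abstract is that only target-class patterns are needed at training time; everything sufficiently unlike them is rejected by the learned decision boundary, here a simple distance threshold.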

  5. Robust Identification of Developmentally Active Endothelial Enhancers in Zebrafish Using FANS-Assisted ATAC-Seq.

    Science.gov (United States)

    Quillien, Aurelie; Abdalla, Mary; Yu, Jun; Ou, Jianhong; Zhu, Lihua Julie; Lawson, Nathan D

    2017-07-18

    Identification of tissue-specific and developmentally active enhancers provides insights into mechanisms that control gene expression during embryogenesis. However, robust detection of these regulatory elements remains challenging, especially in vertebrate genomes. Here, we apply fluorescent-activated nuclei sorting (FANS) followed by Assay for Transposase-Accessible Chromatin with high-throughput sequencing (ATAC-seq) to identify developmentally active endothelial enhancers in the zebrafish genome. ATAC-seq of nuclei from Tg(fli1a:egfp) y1 transgenic embryos revealed expected patterns of nucleosomal positioning at transcriptional start sites throughout the genome and association with active histone modifications. Comparison of ATAC-seq from GFP-positive and -negative nuclei identified more than 5,000 open elements specific to endothelial cells. These elements flanked genes functionally important for vascular development and that displayed endothelial-specific gene expression. Importantly, a majority of tested elements drove endothelial gene expression in zebrafish embryos. Thus, FANS-assisted ATAC-seq using transgenic zebrafish embryos provides a robust approach for genome-wide identification of active tissue-specific enhancer elements. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  6. Laser Fluorescence Illuminates the Soft Tissue and Life Habits of the Early Cretaceous Bird Confuciusornis.

    Directory of Open Access Journals (Sweden)

    Amanda R Falk

    In this paper we report the discovery of non-plumage soft tissues in Confuciusornis, a basal beaked bird from the Early Cretaceous Jehol Biota in northeastern China. Various soft tissues are visualized and interpreted through the use of laser-stimulated fluorescence, providing much new anatomical information about this early bird, specifically reticulate scales covering the feet and the well-developed and robust pro- and postpatagium. We also include a direct comparison between the forelimb soft tissues of Confuciusornis and modern avian patagia. Furthermore, apparently large, fleshy phalangeal pads are preserved on the feet. The reticulate scales, robust phalangeal pads, and highly recurved pedal claws strongly support Confuciusornis as an arboreal bird. Reticulate scales are more rounded than scutate scales and do not overlap, thus allowing for more flexibility in the toe. The extent of the pro- and postpatagium and the robust primary feather rachises are evidence that Confuciusornis was capable of powered flight, contrary to previous reports suggesting otherwise. A unique avian wing shape is also reconstructed based on the preserved plumage. These soft tissues combined indicate an arboreal bird with the capacity for short-term (non-migratory) flight, and suggest that, although primitive, Confuciusornis already possessed many relatively advanced avian anatomical characteristics.

  7. Hyperspectral image classification using Support Vector Machine

    International Nuclear Information System (INIS)

    Moughal, T A

    2013-01-01

    Classification of land cover hyperspectral images is a very challenging task due to the unfavourable ratio between the number of spectral bands and the number of training samples. The focus in many applications is to find a classifier that is effective in terms of accuracy. Conventional multiclass classifiers can map the class of interest, but considerable effort and large training sets are required to fully describe the classes spectrally. The Support Vector Machine (SVM) is suggested in this paper to deal with the multiclass problem of hyperspectral imagery. The attraction of this method is that it locates the optimal hyperplane between the class of interest and the rest of the classes in a new high-dimensional feature space by taking into account only the training samples that lie on the edge of the class distributions, known as support vectors; the use of kernel functions makes the classifier more flexible and robust against outliers. A comparative study was undertaken to find an effective classifier by comparing the SVM to two other well-known classifiers, i.e. Maximum Likelihood (ML) and the Spectral Angle Mapper (SAM). First, the Minimum Noise Fraction (MNF) transform was applied to extract the best possible features from the hyperspectral imagery, and the resulting subset of features was passed to the classifiers. Experimental results illustrate that the integration of the MNF and SVM techniques significantly reduced the classification complexity and improved the classification accuracy.
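    The MNF-plus-SVM chain summarized above can be sketched with scikit-learn. MNF itself is not in scikit-learn, so PCA stands in for the dimensionality-reduction step here; the band count, pixel data, and class labels are all invented for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_pixels, n_bands = 300, 50                  # hypothetical flattened hyperspectral cube
    y = rng.integers(0, 4, size=n_pixels)        # four invented land-cover classes
    # Class-dependent mean shift gives each class a separable spectral signature.
    X = rng.normal(size=(n_pixels, n_bands)) + y[:, None] * 0.8

    clf = make_pipeline(
        StandardScaler(),
        PCA(n_components=10),                    # stand-in for the MNF transform
        SVC(kernel="rbf", C=10.0),               # kernel SVM: flexible, robust to outliers
    )
    clf.fit(X, y)
    print(f"training accuracy: {clf.score(X, y):.2f}")
    ```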

  8. Definitions, Criteria and Global Classification of Mast Cell Disorders with Special Reference to Mast Cell Activation Syndromes: A Consensus Proposal

    Science.gov (United States)

    Valent, Peter; Akin, Cem; Arock, Michel; Brockow, Knut; Butterfield, Joseph H.; Carter, Melody C.; Castells, Mariana; Escribano, Luis; Hartmann, Karin; Lieberman, Philip; Nedoszytko, Boguslaw; Orfao, Alberto; Schwartz, Lawrence B.; Sotlar, Karl; Sperr, Wolfgang R.; Triggiani, Massimo; Valenta, Rudolf; Horny, Hans-Peter; Metcalfe, Dean D.

    2012-01-01

    Activation of tissue mast cells (MCs) and their abnormal growth and accumulation in various organs are typically found in primary MC disorders also referred to as mastocytosis. However, increasing numbers of patients are now being informed that their clinical findings are due to MC activation (MCA) that is neither associated with mastocytosis nor with a defined allergic or inflammatory reaction. In other patients with MCA, MCs appear to be clonal cells, but criteria for diagnosing mastocytosis are not met. A working conference was organized in 2010 with the aim to define criteria for diagnosing MCA and related disorders, and to propose a global unifying classification of all MC disorders and pathologic MC reactions. This classification includes three types of ‘MCA syndromes’ (MCASs), namely primary MCAS, secondary MCAS and idiopathic MCAS. MCA is now defined by robust and generally applicable criteria, including (1) typical clinical symptoms, (2) a substantial transient increase in serum total tryptase level or an increase in other MC-derived mediators, such as histamine or prostaglandin D2, or their urinary metabolites, and (3) a response of clinical symptoms to agents that attenuate the production or activities of MC mediators. These criteria should assist in the identification and diagnosis of patients with MCAS, and in avoiding misdiagnoses or overinterpretation of clinical symptoms in daily practice. Moreover, the MCAS concept should stimulate research in order to identify and exploit new molecular mechanisms and therapeutic targets. PMID:22041891

  9. A simple working classification proposed for the iatrogenic lesions of teeth and associated structures in the oral cavity.

    Science.gov (United States)

    Shamim, Thorakkal

    2013-09-01

    Iatrogenic lesions can affect both hard and soft tissues in the oral cavity, induced by the dentist's activity, manner or therapy. There is no approved simple working classification for the iatrogenic lesions of teeth and associated structures in the oral cavity in the literature. A simple working classification is proposed here for iatrogenic lesions of teeth and associated structures in the oral cavity, based on their relation to the dental specialities. The dental specialities considered in this classification are conservative dentistry and endodontics, orthodontics, oral and maxillofacial surgery, and prosthodontics. This classification will be useful for the dental clinician dealing with diseases of the oral cavity.

  10. Classification of bone scintigrams in hemodialysis patients

    International Nuclear Information System (INIS)

    Ishibashi, Kazunari; Miyamae, Tatsuya

    1985-01-01

    Bone scintigrams from a total of 75 hemodialysis patients using 99mTc-methylene diphosphonate (MDP) were classified into two groups: Group I (56 patients), in which the uptake of the radioactivity appeared to be relatively high in the soft tissue and low in the bone, and Group II (19 patients), in which high uptake in the bone and low uptake in the soft tissue were observed. Patients in Group I were further classified into two subgroups: Group I-A (articular type, 21 patients), characterized by relatively high uptake into the joint, and Group I-B (reduction type, 35 patients), where uptake was faint in the whole region of the bone. Group II was likewise subdivided: Group II-A (spinal type, 14 patients), where high spinal uptake was observed, and Group II-B (cranio-facial type, 5 patients), where high uptake into the cranio-facial region was observed. The results were compared with 146 subjects with normal bone scintigrams in terms of the ratio of bone to soft tissue uptake (B/S ratio) for the cranial bone, jaw bone, lumbar vertebra and femoral bone, and the ratio of epiphysis to diaphysis uptake (E/D ratio) for the femoral bone. The B/S ratio was low in Group I and high in Group II for the bones studied, and the E/D ratio was markedly high in Group I-A. Histobiochemical examination indicated that patients in Group I-A and Group II may have osteomalacia and secondary hyperparathyroidism, respectively. It was considered that the visual classification and semiquantitative study as described here were useful for evaluating the pathological condition of renal osteodystrophy. (author)
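    Read as a decision rule, this grouping could be sketched as follows; the cut-off values are invented placeholders, not thresholds from the study.

    ```python
    def classify_scintigram(bs_ratio: float, ed_ratio: float,
                            bs_cutoff: float = 1.0, ed_cutoff: float = 1.5) -> str:
        """Toy rule: low bone-to-soft-tissue (B/S) uptake suggests Group I,
        split by the epiphysis-to-diaphysis (E/D) ratio; high B/S suggests Group II."""
        if bs_ratio < bs_cutoff:
            return "I-A (articular)" if ed_ratio >= ed_cutoff else "I-B (reduction)"
        return "II (high bone uptake)"

    print(classify_scintigram(0.7, 1.8))   # low B/S, high E/D
    print(classify_scintigram(0.7, 1.1))   # low B/S, low E/D
    print(classify_scintigram(1.6, 1.2))   # high B/S
    ```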

  11. Classification of bone scintigrams in hemodialysis patients

    Energy Technology Data Exchange (ETDEWEB)

    Ishibashi, Kazunari; Miyamae, Tatsuya

    1985-03-01

    Bone scintigrams from a total of 75 hemodialysis patients using 99mTc-methylene diphosphonate (MDP) were classified into two groups: Group I (56 patients), in which the uptake of the radioactivity appeared to be relatively high in the soft tissue and low in the bone, and Group II (19 patients), in which high uptake in the bone and low uptake in the soft tissue were observed. Patients in Group I were further classified into two subgroups: Group I-A (articular type, 21 patients), characterized by relatively high uptake into the joint, and Group I-B (reduction type, 35 patients), where uptake was faint in the whole region of the bone. Group II was likewise subdivided: Group II-A (spinal type, 14 patients), where high spinal uptake was observed, and Group II-B (cranio-facial type, 5 patients), where high uptake into the cranio-facial region was observed. The results were compared with 146 subjects with normal bone scintigrams in terms of the ratio of bone to soft tissue uptake (B/S ratio) for the cranial bone, jaw bone, lumbar vertebra and femoral bone, and the ratio of epiphysis to diaphysis uptake (E/D ratio) for the femoral bone. The B/S ratio was low in Group I and high in Group II for the bones studied, and the E/D ratio was markedly high in Group I-A. Histobiochemical examination indicated that patients in Group I-A and Group II may have osteomalacia and secondary hyperparathyroidism, respectively. It was considered that the visual classification and semiquantitative study as described here were useful for evaluating the pathological condition of renal osteodystrophy.

  12. Optimization of Classification Strategies of Acetowhite Temporal Patterns towards Improving Diagnostic Performance of Colposcopy

    Directory of Open Access Journals (Sweden)

    Karina Gutiérrez-Fragoso

    2017-01-01

    Full Text Available Efforts have been made to improve the diagnostic performance of colposcopy, to help better diagnose cervical cancer, particularly in developing countries. However, improvements are still necessary in a number of areas, such as the time it takes to process the full digital image of the cervix, the performance of the computing systems used to identify different kinds of tissues, and biopsy sampling. In this paper, we explore three well-known automatic classification methods (k-Nearest Neighbors, Naïve Bayes, and C4.5), in addition to different data models that take full advantage of this information and improve the diagnostic performance of colposcopy based on acetowhite temporal patterns. Based on the ROC and PRC area scores, the k-Nearest Neighbors classifier with a discrete PLA representation performed better than the other methods. The values of sensitivity, specificity, and accuracy reached using this method were 60% (95% CI 50–70), 79% (95% CI 71–86), and 70% (95% CI 60–80), respectively. The acetowhitening phenomenon is not exclusive to high-grade lesions, and we have found acetowhite temporal patterns of epithelial changes that are not precancerous lesions but are similar to positive ones. These findings need to be considered when developing more robust computing systems in the future.
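    A minimal sketch of the reported best combination, a discrete piecewise-linear approximation (PLA) of each acetowhite intensity curve classified with k-Nearest Neighbors, might look like this; the curves, segment count, and labels are synthetic stand-ins, not the study's data.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def pla_features(series, n_segments=5):
        """Slope of a least-squares line fitted to each equal-length segment."""
        segments = np.array_split(series, n_segments)
        return [np.polyfit(np.arange(len(s)), s, 1)[0] for s in segments]

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 60)
    # Lesion-like curves rise then plateau; normal-like curves stay flat (assumed shapes).
    lesions = [np.minimum(t * g, 0.8) + rng.normal(0, 0.02, t.size) for g in (2, 3, 4)]
    normals = [np.full(t.size, 0.1) + rng.normal(0, 0.02, t.size) for _ in range(3)]

    X = [pla_features(s) for s in lesions + normals]
    y = [1, 1, 1, 0, 0, 0]                         # 1 = lesion-like, 0 = normal-like
    knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
    print(knn.predict([pla_features(np.minimum(t * 2.5, 0.8))]))  # query: rising curve
    ```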

  13. [Definition, etiology, classification and presentation forms].

    Science.gov (United States)

    Mas Garriga, Xavier

    2014-01-01

    Osteoarthritis is defined as a degenerative process affecting the joints as a result of mechanical and biological disorders that destabilize the balance between the synthesis and degradation of joint cartilage, stimulating the growth of subchondral bone; chronic synovitis is also present. Currently, the joint is considered as a functional unit that includes distinct tissues, mainly cartilage, the synovial membrane, and subchondral bone, all of which are involved in the pathogenesis of the disease. Distinct risk factors for the development of osteoarthritis have been described: general, unmodifiable risk factors (age, sex, and genetic makeup), general, modifiable risk factors (obesity and hormonal factors) and local risk factors (prior joint anomalies and joint overload). Notable among the main factors related to disease progression are joint alignment defects and generalized osteoarthritis. Several classifications of osteoarthritis have been proposed but none is particularly important for the primary care management of the disease. These classifications include etiological (primary or idiopathic forms and secondary forms) and topographical (typical and atypical localizations) classifications, the Kellgren and Lawrence classification (radiological repercussions) and that of the American College of Rheumatology for osteoarthritis of the hand, hip and knee. The prevalence of knee osteoarthritis is 10.2% in Spain and shows a marked discrepancy between clinical and radiological findings. Hand osteoarthritis, with a prevalence of symptomatic involvement of around 6.2%, has several forms of presentation (nodal osteoarthritis, generalized osteoarthritis, rhizarthrosis, and erosive osteoarthritis). Symptomatic osteoarthritis of the hip affects between 3.5% and 5.6% of persons older than 50 years and has different radiological patterns depending on femoral head migration. Copyright © 2014 Elsevier España, S.L. All rights reserved.

  14. Tissues Use Resident Dendritic Cells and Macrophages to Maintain Homeostasis and to Regain Homeostasis upon Tissue Injury: The Immunoregulatory Role of Changing Tissue Environments

    Science.gov (United States)

    Lech, Maciej; Gröbmayr, Regina; Weidenbusch, Marc; Anders, Hans-Joachim

    2012-01-01

    Most tissues harbor resident mononuclear phagocytes, that is, dendritic cells and macrophages. A classification that sufficiently covers their phenotypic heterogeneity and plasticity during homeostasis and disease does not yet exist because cell culture-based phenotypes often do not match those found in vivo. The plasticity of mononuclear phagocytes becomes obvious during dynamic or complex disease processes. Different data interpretation also originates from different conceptual perspectives. An immune-centric view assumes that a particular priming of phagocytes then causes a particular type of pathology in target tissues, conceptually similar to antigen-specific T-cell priming. A tissue-centric view assumes that changing tissue microenvironments shape the phenotypes of their resident and infiltrating mononuclear phagocytes to fulfill the tissue's need to maintain or regain homeostasis. Here we discuss the latter concept, for example, why different organs host different types of mononuclear phagocytes during homeostasis. We further discuss how injuries alter tissue environments and how this primes mononuclear phagocytes to enforce this particular environment, for example, to support host defense and pathogen clearance, to support the resolution of inflammation, to support epithelial and mesenchymal healing, and to support the resolution of fibrosis to the smallest possible scar. Thus, organ- and disease phase-specific microenvironments determine macrophage and dendritic cell heterogeneity in a temporal and spatial manner, which assures their support to maintain and regain homeostasis in whatever condition. Mononuclear phagocytes contributions to tissue pathologies relate to their central roles in orchestrating all stages of host defense and wound healing, which often become maladaptive processes, especially in sterile and/or diffuse tissue injuries. PMID:23251037

  15. Robustness of structures

    DEFF Research Database (Denmark)

    Vrouwenvelder, T.; Sørensen, John Dalsgaard

    2009-01-01

    After the collapse of the World Trade Centre towers in 2001 and a number of collapses of structural systems in the beginning of the century, robustness of structural systems has gained renewed interest. Despite many significant theoretical, methodical and technological advances, structural...... of robustness for structural design such requirements are not substantiated in more detail, nor has the engineering profession been able to agree on an interpretation of robustness which facilitates its quantification. A European COST action TU 601 on ‘Robustness of structures' has started in 2007...... by a group of members of the CSS. This paper describes the ongoing work in this action, with emphasis on the development of a theoretical and risk-based quantification and optimization procedure on the one side and a practical pre-normative guideline on the other....

  16. Discriminant analysis of normal and malignant breast tissue based upon INAA investigation of elemental concentration

    International Nuclear Information System (INIS)

    Kwanhoong Ng; Senghuat Ong; Bradley, D.A.; Laimeng Looi

    1997-01-01

    Discriminant analysis of six trace element concentrations measured by instrumental neutron activation analysis (INAA) in 26 paired samples of malignant and histologically normal human breast tissue shows the technique to be a potentially valuable clinical tool for malignant-normal classification. Nonparametric discriminant analysis was performed on the data obtained, and linear and quadratic discriminant analyses were carried out for comparison. For this data set, a formal analysis shows that the elements which may be useful in distinguishing between malignant and normal tissues are Ca, Rb and Br, providing correct classification for 24 out of 26 normal samples and 22 out of 26 malignant samples. (Author)
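    The discriminant analyses named above have direct scikit-learn counterparts. The sketch below runs linear and quadratic discriminant analysis on simulated Ca, Rb, and Br concentrations; the values and group separations are invented, not the INAA measurements.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import (
        LinearDiscriminantAnalysis,
        QuadraticDiscriminantAnalysis,
    )

    rng = np.random.default_rng(7)
    n = 26                                   # paired samples, as in the study design
    normal = rng.normal(loc=[1.0, 0.5, 0.3], scale=0.2, size=(n, 3))
    malignant = rng.normal(loc=[1.8, 0.9, 0.6], scale=0.25, size=(n, 3))  # elevated (assumed)
    X = np.vstack([normal, malignant])
    y = np.array([0] * n + [1] * n)          # 0 = normal, 1 = malignant

    scores = {}
    for model in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
        model.fit(X, y)
        scores[type(model).__name__] = model.score(X, y)
    print(scores)
    ```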

  17. Tissue Equivalents Based on Cell-Seeded Biodegradable Microfluidic Constructs

    Directory of Open Access Journals (Sweden)

    Sarah L. Tao

    2010-03-01

    Full Text Available One of the principal challenges in the field of tissue engineering and regenerative medicine is the formation of functional microvascular networks capable of sustaining tissue constructs. Complex tissues and vital organs require a means to support oxygen and nutrient transport during the development of constructs both prior to and after host integration, and current approaches have not demonstrated robust solutions to this challenge. Here, we present a technology platform encompassing the design, construction, cell seeding and functional evaluation of tissue equivalents for wound healing and other clinical applications. These tissue equivalents are comprised of biodegradable microfluidic scaffolds lined with microvascular cells and designed to replicate microenvironmental cues necessary to generate and sustain cell populations to replace dermal and/or epidermal tissues lost due to trauma or disease. Initial results demonstrate that these biodegradable microfluidic devices promote cell adherence and support basic cell functions. These systems represent a promising pathway towards highly integrated three-dimensional engineered tissue constructs for a wide range of clinical applications.

  18. Hydrological Classification, a Practical Tool for Mangrove Restoration.

    Science.gov (United States)

    Van Loon, Anne F; Te Brake, Bram; Van Huijgevoort, Marjolein H J; Dijksma, Roel

    2016-01-01

    Mangrove restoration projects, aimed at restoring important values of mangrove forests after degradation, often fail because hydrological conditions are disregarded. We present a simple, but robust methodology to determine hydrological suitability for mangrove species, which can guide restoration practice. In 15 natural and 8 disturbed sites (i.e. disused shrimp ponds) in three case study regions in south-east Asia, water levels were measured and vegetation species composition was determined. Using an existing hydrological classification for mangroves, sites were classified into hydrological classes, based on duration of inundation, and vegetation classes, based on occurrence of mangrove species. For the natural sites hydrological and vegetation classes were similar, showing clear distribution of mangrove species from wet to dry sites. Application of the classification to disturbed sites showed that in some locations hydrological conditions had been restored enough for mangrove vegetation to establish, in some locations hydrological conditions were suitable for various mangrove species but vegetation had not established naturally, and in some locations hydrological conditions were too wet for any mangrove species (natural or planted) to grow. We quantified the effect that removal of obstructions such as dams would have on the hydrology and found that failure of planting at one site could have been prevented. The hydrological classification needs relatively little data, i.e. water levels for a period of only one lunar tidal cycle without additional measurements, and uncertainties in the measurements and analysis are relatively small. For the study locations, the application of the hydrological classification gave important information about how to restore the hydrology to suitable conditions to improve natural regeneration or to plant mangrove species, which could not have been obtained by estimating elevation only. 
Based on this research a number of recommendations
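    The duration-of-inundation idea lends itself to a compact sketch. The class boundaries and the synthetic tide below are invented for illustration; the paper applies an existing mangrove hydrological classification, not these thresholds.

    ```python
    import math

    def hydrological_class(fraction_inundated: float) -> int:
        """Map the fraction of time a site is inundated to a class from
        1 (wettest) to 4 (driest). Thresholds are illustrative only."""
        if fraction_inundated > 0.6:
            return 1
        if fraction_inundated > 0.4:
            return 2
        if fraction_inundated > 0.2:
            return 3
        return 4

    # Hourly water levels (m, relative to the sediment surface) over roughly one
    # lunar tidal cycle (~29.5 days); a synthetic semidiurnal tide stands in for
    # the field measurements.
    levels = [0.5 * math.sin(2 * math.pi * h / 12.42) - 0.1 for h in range(708)]
    fraction = sum(1 for w in levels if w > 0) / len(levels)
    print(hydrological_class(fraction))
    ```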

  19. Hydrological Classification, a Practical Tool for Mangrove Restoration.

    Directory of Open Access Journals (Sweden)

    Anne F Van Loon

    Full Text Available Mangrove restoration projects, aimed at restoring important values of mangrove forests after degradation, often fail because hydrological conditions are disregarded. We present a simple, but robust methodology to determine hydrological suitability for mangrove species, which can guide restoration practice. In 15 natural and 8 disturbed sites (i.e. disused shrimp ponds) in three case study regions in south-east Asia, water levels were measured and vegetation species composition was determined. Using an existing hydrological classification for mangroves, sites were classified into hydrological classes, based on duration of inundation, and vegetation classes, based on occurrence of mangrove species. For the natural sites hydrological and vegetation classes were similar, showing clear distribution of mangrove species from wet to dry sites. Application of the classification to disturbed sites showed that in some locations hydrological conditions had been restored enough for mangrove vegetation to establish, in some locations hydrological conditions were suitable for various mangrove species but vegetation had not established naturally, and in some locations hydrological conditions were too wet for any mangrove species (natural or planted) to grow. We quantified the effect that removal of obstructions such as dams would have on the hydrology and found that failure of planting at one site could have been prevented. The hydrological classification needs relatively little data, i.e. water levels for a period of only one lunar tidal cycle without additional measurements, and uncertainties in the measurements and analysis are relatively small. For the study locations, the application of the hydrological classification gave important information about how to restore the hydrology to suitable conditions to improve natural regeneration or to plant mangrove species, which could not have been obtained by estimating elevation only. 
Based on this research a number

  20. Histomorphometry and cortical robusticity of the adult human femur.

    Science.gov (United States)

    Miszkiewicz, Justyna Jolanta; Mahoney, Patrick

    2018-01-13

    Recent quantitative analyses of human bone microanatomy, as well as theoretical models that propose bone microstructure and gross anatomical associations, have started to reveal insights into biological links that may facilitate remodeling processes. However, relationships between bone size and the underlying cortical bone histology remain largely unexplored. The goal of this study is to determine the extent to which static indicators of bone remodeling and vascularity, measured using histomorphometric techniques, relate to femoral midshaft cortical width and robusticity. Using previously published and new quantitative data from 450 adult human male (n = 233) and female (n = 217) femora, we determine if these aspects of femoral size relate to bone microanatomy. Scaling relationships are explored and interpreted within the context of tissue form and function. Analyses revealed that the area and diameter of Haversian canals and secondary osteons, and densities of secondary osteons and osteocyte lacunae from the sub-periosteal region of the posterior midshaft femur cortex were significantly, but not consistently, associated with femoral size. Cortical width and bone robusticity were correlated with osteocyte lacunae density and scaled with positive allometry. Diameter and area of osteons and Haversian canals decreased as the width of cortex and bone robusticity increased, revealing a negative allometric relationship. These results indicate that microscopic products of cortical bone remodeling and vascularity are linked to femur size. Allometric relationships between more robust human femora with thicker cortical bone and histological products of bone remodeling correspond with principles of bone functional adaptation. Future studies may benefit from exploring scaling relationships between bone histomorphometric data and measurements of bone macrostructure.
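    The positive and negative allometry reported above is conventionally estimated as the slope of a log-log regression (y = a·x^b, so log y = log a + b·log x). The sketch below recovers such an exponent from synthetic femora; the ranges and the exponent are invented, not the study's estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    cortical_width = rng.uniform(4.0, 9.0, 200)        # mm, invented range
    # Density rising faster than width, i.e. positive allometry (b = 1.3, assumed):
    density = 50 * cortical_width**1.3 * rng.lognormal(0.0, 0.05, 200)

    # The slope of the log-log regression estimates the scaling exponent b.
    b, log_a = np.polyfit(np.log(cortical_width), np.log(density), 1)
    print(f"estimated scaling exponent b = {b:.2f}")
    ```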

  1. Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to the Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More...... frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements to efficiency in design and execution followed by increased risk of human errors, has made the need for robustness requirements for new structures essential....... According to Danish design rules robustness shall be documented for all structures in high consequence class. The design procedure to document sufficient robustness consists of: 1) Review of loads and possible failure modes / scenarios and determination of acceptable collapse extent; 2) Review...

  2. Dynamics robustness of cascading systems.

    Directory of Open Access Journals (Sweden)

    Jonathan T Young

    2017-03-01

    Full Text Available A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus are robust to perturbations of certain parameters. Then, analyzing the linearized model, we elucidated the criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: 1) constraint on the rate-limiting process: the phosphatase activity in the perturbed module is not the slowest; 2) constraints on the initial conditions: the kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it

  3. Classification system for oral submucous fibrosis

    Directory of Open Access Journals (Sweden)

    Chandramani Bhagvan More

    2012-01-01

    Full Text Available Oral submucous fibrosis (OSMF) is a potentially malignant disorder (PMD) and crippling condition of the oral mucosa. It is a chronic insidious scarring disease of the oral cavity, pharynx and upper digestive tract, characterized by progressive inability to open the mouth due to loss of elasticity and the development of vertical fibrous bands in labial and buccal tissues. OSMF is a debilitating but preventable oral disease. It predominantly affects people of Southeast Asia and the Indian subcontinent, where chewing of arecanut and its commercial preparations is common. The presence of fibrous bands is the main characteristic feature of OSMF. The present literature review compiles the various classification systems based on clinical and/or histopathological features of OSMF from several databases. The advantages and drawbacks of these classifications overlap and supersede one another, leading to perplexity. An attempt is made to update the knowledge about this potentially malignant disorder among health care providers in order to help in early detection and treatment, thus reducing the mortality of oral cancer.

  4. High Classification Rates for Continuous Cow Activity Recognition using Low-cost GPS Positioning Sensors and Standard Machine Learning Techniques

    DEFF Research Database (Denmark)

    Godsk, Torben; Kjærgaard, Mikkel Baun

    2011-01-01

    activities. By preprocessing the raw cow position data, we obtain high classification rates using standard machine learning techniques to recognize cow activities. Our objectives were to (i) determine to what degree it is possible to robustly recognize cow activities from GPS positioning data, using low...... and their activities manually logged to serve as ground truth. For our dataset we managed to obtain an average classification success rate of 86.2% across the four activities: eating/seeking (90.0%), walking (100%), lying (76.5%), and standing (75.8%) by optimizing both the preprocessing of the raw GPS data...
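    One plausible form of the preprocessing alluded to above is turning successive GPS fixes into ground speed before feature extraction; the fixes, window, and speed threshold below are invented, and a real system would feed such features to a machine learning classifier rather than a single rule.

    ```python
    import math

    def speed_mps(p1, p2, dt_s=1.0):
        """Approximate ground speed (m/s) between two (lat, lon) fixes using an
        equirectangular approximation, adequate over a few metres."""
        lat1, lon1 = map(math.radians, p1)
        lat2, lon2 = map(math.radians, p2)
        x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
        y = lat2 - lat1
        return 6_371_000 * math.hypot(x, y) / dt_s

    # Three one-second fixes from a hypothetical cow collar.
    fixes = [(55.0000, 9.0000), (55.0000, 9.00002), (55.00001, 9.00004)]
    speeds = [speed_mps(a, b) for a, b in zip(fixes, fixes[1:])]
    activity = "walking" if sum(speeds) / len(speeds) > 0.5 else "lying/standing"
    print(activity)
    ```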

  5. A hierarchical classification method for finger knuckle print recognition

    Science.gov (United States)

    Kong, Tao; Yang, Gongping; Yang, Lu

    2014-12-01

    The finger knuckle print has recently been seen as an effective biometric. In this paper, we propose a hierarchical classification method for finger knuckle print recognition, rooted in traditional score-level fusion methods. In the proposed method, we first take the Gabor feature as the basic feature for finger knuckle print recognition, and a new decision rule is then defined based on a predefined threshold. Finally, the minor feature, the speeded-up robust feature (SURF), is applied for those users who cannot be recognized by the basic feature. Extensive experiments are performed to evaluate the proposed method, and the experimental results show that it achieves promising performance.
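    The two-stage decision rule can be sketched abstractly: accept on the basic (Gabor) match score if it clears the predefined threshold, otherwise fall back to the minor (SURF) feature. The scores and the threshold are hypothetical; real Gabor and SURF matching is not implemented here.

    ```python
    def hierarchical_identify(gabor_score: float, surf_score: float,
                              threshold: float = 0.8) -> str:
        """Two-stage rule: the cheap basic feature decides most cases, the
        minor feature is consulted only when the basic score is inconclusive."""
        if gabor_score >= threshold:          # stage 1: basic feature is decisive
            return "accepted by Gabor feature"
        if surf_score >= threshold:           # stage 2: minor feature rescues the case
            return "accepted by SURF feature"
        return "rejected"

    print(hierarchical_identify(0.91, 0.40))
    print(hierarchical_identify(0.55, 0.86))
    print(hierarchical_identify(0.55, 0.40))
    ```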

  6. Tissue discrimination in magnetic resonance imaging of the rotator cuff

    International Nuclear Information System (INIS)

    Meschino, G J; Comas, D S; González, M A; Ballarin, V L; Capiel, C

    2016-01-01

    Evaluation and diagnosis of diseases of the muscles within the rotator cuff can be done using different modalities, with Magnetic Resonance being the most widely used. Criteria exist to evaluate the degree of fat infiltration and muscle atrophy, but these have low accuracy and show great inter- and intra-observer variability. In this paper, an analysis of the texture features of the rotator cuff muscles is performed to classify them and other tissues. A general supervised classification approach was used, combining forward-search as the feature selection method with kNN as the classification rule. Sections of Magnetic Resonance Images of the tissues of interest were selected by specialist doctors and considered as the Gold Standard. Accuracies obtained were 93% for T1-weighted images and 92% for T2-weighted images. As immediate future work, the combination of both sequences of images will be considered, expecting to improve the results, as well as the use of other sequences of Magnetic Resonance Images. This work represents an initial point for the classification and quantification of fat infiltration and the degree of muscle atrophy. From this initial point, it is expected to build an accurate and objective system which will result in benefits for future research and for patients’ health. (paper)
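    The forward-search-plus-kNN combination maps directly onto scikit-learn's sequential feature selection. The texture features below are simulated (only two of them carry signal); the study computed real texture features from MR sections.

    ```python
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(5)
    n = 200
    X = rng.normal(size=(n, 8))                 # 8 hypothetical texture features
    y = (X[:, 2] + X[:, 5] > 0).astype(int)     # only features 2 and 5 are informative

    knn = KNeighborsClassifier(n_neighbors=5)
    selector = SequentialFeatureSelector(knn, n_features_to_select=2,
                                         direction="forward", cv=5)
    selector.fit(X, y)
    selected = sorted(int(i) for i in np.flatnonzero(selector.get_support()))
    print(selected)
    ```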

  7. Tissue discrimination in magnetic resonance imaging of the rotator cuff

    Science.gov (United States)

    Meschino, G. J.; Comas, D. S.; González, M. A.; Capiel, C.; Ballarin, V. L.

    2016-04-01

    Evaluation and diagnosis of diseases of the rotator cuff muscles can be performed with different imaging modalities, magnetic resonance being the most widely used. Criteria exist to evaluate the degree of fat infiltration and muscle atrophy, but they have low accuracy and show great inter- and intra-observer variability. In this paper, texture features of the rotator cuff muscles are analysed in order to classify them and other tissues. A general supervised classification approach was used, combining forward search as the feature-selection method with kNN as the classification rule. Sections of magnetic resonance images of the tissues of interest were selected by specialist doctors and considered as the gold standard. Accuracies of 93% for T1-weighted images and 92% for T2-weighted images were obtained. As immediate future work, the combination of both image sequences will be considered, which is expected to improve the results, as will the use of other magnetic resonance sequences. This work is a starting point for the classification and quantification of the degree of fat infiltration and muscle atrophy; from it, an accurate and objective system is expected to be developed, with benefits for future research and for patients' health.

  8. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

    This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors that is independent of the assumptions that a sufficiently large number of samples is available and that the distance measure is the KL-divergence. Here, the distance can be chosen from a much more general class, which includes the KL-divergence as a special case. This is then extended by various means. A minimax robust test that is robust against both outliers and modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed-sample-size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions that are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...

  9. Diffuse reflectance spectroscopy for optical soft tissue differentiation as remote feedback control for tissue-specific laser surgery.

    Science.gov (United States)

    Stelzle, Florian; Tangermann-Gerk, Katja; Adler, Werner; Zam, Azhar; Schmidt, Michael; Douplik, Alexandre; Nkenke, Emeka

    2010-04-01

    Laser surgery does not provide haptic feedback for operating layer-by-layer and thereby preserving vulnerable anatomical structures like nerve tissue or blood vessels. Diffuse reflectance spectra can facilitate remote optical tissue differentiation. It is the aim of the study to use this technique on soft tissue samples, to set a technological basis for a remote optical feedback system for tissue-specific laser surgery. Diffuse reflectance spectra (wavelength range: 350-650 nm) of ex vivo types of soft tissue (a total of 10,800 spectra) of the midfacial region of domestic pigs were remotely measured under reduced environmental light conditions and analyzed in order to differentiate between skin, mucosa, muscle, subcutaneous fat, and nerve tissue. We performed a principal components (PC) analysis (PCA) to reduce the number of variables. Linear discriminant analysis (LDA) was utilized for classification. For the tissue differentiation, we calculated the specificity and sensitivity by receiver operating characteristic (ROC) analysis and the area under curve (AUC). Six PCs were found to be adequate for tissue differentiation with diffuse reflectance spectra using LDA. All of the types of soft tissue could be differentiated with high specificity and sensitivity. Only the tissue pairs nervous tissue/fatty tissue and nervous tissue/mucosa showed a decline of differentiation due to bio-structural similarity. However, both of these tissue pairs could still be differentiated with a specificity and sensitivity of more than 90%. Analyzing diffuse reflectance spectroscopy with PCA and LDA allows for remote differentiation of biological tissue. Considering the limitations of the ex vivo conditions, the obtained results are promising and set a basis for the further development of a feedback system for tissue-specific laser surgery. (c) 2010 Wiley-Liss, Inc.
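    The classification chain above (spectral dimensionality reduction to a few principal components, then a linear classifier) can be sketched as follows. The PCs are computed with an SVD; for brevity, a nearest-centroid rule stands in for the paper's linear discriminant analysis, and the data shapes are illustrative:

```python
import numpy as np

def pca_project(spectra, n_components=6):
    """Project diffuse-reflectance spectra onto their leading principal
    components (via SVD of the mean-centred data matrix).

    Returns the projected training data plus the mean and basis needed
    to project new spectra the same way.
    """
    mean = spectra.mean(axis=0)
    centred = spectra - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T, mean, vt[:n_components]

def nearest_centroid_classify(train_pcs, train_labels, test_pcs):
    """Toy stand-in for LDA: assign each test spectrum to the tissue class
    whose centroid in PC space is closest."""
    classes = sorted(set(train_labels))
    centroids = {c: train_pcs[np.array(train_labels) == c].mean(axis=0)
                 for c in classes}
    return [min(classes, key=lambda c: np.linalg.norm(p - centroids[c]))
            for p in test_pcs]
```

    New spectra must be centred with the training mean and projected onto the same basis before classification.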

  10. Robustness Property of Robust-BD Wald-Type Test for Varying-Dimensional General Linear Models

    Directory of Open Access Journals (Sweden)

    Xiao Guo

    2018-03-01

    An important issue for robust inference is to examine the stability of the asymptotic level and power of a test statistic in the presence of contaminated data. Most existing results are derived in finite-dimensional settings with particular choices of loss functions. This paper re-examines the issue by allowing for a diverging number of parameters combined with a broader array of robust error measures, called "robust-BD", for the class of "general linear models". Under regularity conditions, we derive the influence function of the robust-BD parameter estimator and demonstrate that the robust-BD Wald-type test enjoys robustness of validity and efficiency asymptotically. Specifically, the asymptotic level of the test is stable under a small amount of contamination of the null hypothesis, whereas the asymptotic power is large enough under contaminated distributions in a neighborhood of the contiguous alternatives, thus lending support to the utility of the proposed robust-BD Wald-type test.

  11. SAW Classification Algorithm for Chinese Text Classification

    OpenAIRE

    Xiaoli Guo; Huiyu Sun; Tiehua Zhou; Ling Wang; Zhaoyang Qu; Jiannan Zang

    2015-01-01

    Considering the explosive growth of data, the increasing amount of text data places higher requirements on the performance of text categorization, requirements that existing classification methods cannot satisfy. Based on a study of existing text classification technology and semantics, this paper puts forward a Chinese-text-classification-oriented SAW (Structural Auxiliary Word) algorithm. The algorithm uses the special space effect of Chinese text, where words...

  12. Soft Tissue Tumor Immunohistochemistry Update: Illustrative Examples of Diagnostic Pearls to Avoid Pitfalls.

    Science.gov (United States)

    Wei, Shi; Henderson-Jackson, Evita; Qian, Xiaohua; Bui, Marilyn M

    2017-08-01

    The current 2013 World Health Organization classification of tumors of soft tissue arranges these tumors into 12 groups according to their histogenesis. Tumor behavior is classified as benign, intermediate (locally aggressive), intermediate (rarely metastasizing), and malignant. In our practice, a general approach to reaching a definitive diagnosis of soft tissue tumors is to first evaluate the clinicoradiologic, histomorphologic, and cytomorphologic features of the tumor to generate pertinent differential diagnoses. These include the potential line of histogenesis and whether the tumor is benign or malignant, and low or high grade. Although molecular/genetic testing is increasingly finding applications in characterizing soft tissue tumors, immunohistochemistry currently not only plays an indispensable role in defining tumor histogenesis but also serves as a surrogate for underlying molecular/genetic alterations. Objective: To provide an overview focusing on the current concepts in the classification and diagnosis of soft tissue tumors, incorporating immunohistochemistry. This article uses examples to discuss how to use traditional and new immunohistochemical markers for the diagnosis of soft tissue tumors. Practical diagnostic pearls, summary tables, and figures are used to show how to avoid diagnostic pitfalls. Data were obtained from pertinent peer-reviewed English-language literature and the authors' first-hand experience as bone and soft tissue pathologists. The ultimate goal for a pathologist is to render a specific diagnosis that provides diagnostic, prognostic, and therapeutic information to guide patient care. Immunohistochemistry is integral to the diagnosis and management of soft tissue tumors.

  13. Classification in context

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper surveys the classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. The paper argues that a shift has taken place and suggests that contemporary classification research focuses on contextual information as the guide for the design and construction of classification schemes.

  14. Feasibility of a novel deformable image registration technique to facilitate classification, targeting, and monitoring of tumor and normal tissue

    International Nuclear Information System (INIS)

    Brock, Kristy K.; Dawson, Laura A.; Sharpe, Michael B.; Moseley, Douglas J.; Jaffray, David A.

    2006-01-01

    Purpose: To investigate the feasibility of a biomechanical-based deformable image registration technique for the integration of multimodality imaging, image-guided treatment, and response monitoring. Methods and Materials: A multiorgan deformable image registration technique based on finite element modeling (FEM) and surface-projection alignment of selected regions of interest, with biomechanical material and interface models, has been developed. FEM also provides an inherent method for directly tracking specified regions through treatment and follow-up. Results: The technique was demonstrated on 5 liver cancer patients. Differences of up to 1 cm of motion were seen between the diaphragm and the tumor center of mass after deformable image registration of exhale and inhale CT scans. Spatial differences of 5 mm or more were observed for up to 86% of the surface of the defined tumor after deformable image registration of the computed tomography (CT) and magnetic resonance images. Up to 6.8 mm of motion was observed for the tumor after deformable image registration of the CT and cone-beam CT scans following rigid registration of the liver. Deformable registration of the CT to the follow-up CT allowed a more accurate assessment of tumor response. Conclusions: This biomechanical-based deformable image registration technique incorporates classification, targeting, and monitoring of tumor and normal tissue using one methodology.

  15. Estimation of continuous thumb angle and force using electromyogram classification

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Siddiqi

    2016-09-01

    Human hand functions range from precise minute handling to heavy and robust movements. Remarkably, 50% of all hand functions are made possible by the thumb. Therefore, developing an artificial thumb that can precisely mimic the actions of a real thumb would be a major achievement. Despite many efforts dedicated to this area of research, controlling artificial thumb movements so that they resemble natural movement still poses a challenge. Most of the development in this area is based on discontinuous thumb-position control, which makes it possible to recreate several of the most important functions of the thumb but does not result in total imitation. This work looks into the classification of electromyogram signals from thumb muscles for the prediction of thumb angle and force during flexion motion. For this purpose, an experimental setup was developed to measure the thumb angle and force throughout the range of flexion while simultaneously gathering the electromyogram signals. Various features were extracted from these signals for classification, the most suitable feature set was determined, and it was applied to different classifiers. A "piecewise discretization" approach is used for continuous angle prediction. Breaking away from previous research studies, the frequency-domain features performed better than the time-domain features, with the best feature combination turning out to be median frequency-mean frequency-mean power. As for the classifiers, the support vector machine proved to be the most accurate, giving about 70% accuracy for both angle and force classification and close to 50% for joint angle-force classification.
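    The best-performing features reported above (median frequency, mean frequency, and mean power) can be computed from a one-sided power spectrum roughly as follows; the Hann window and the simple periodogram estimator are illustrative choices, not the study's exact processing:

```python
import numpy as np

def emg_spectral_features(signal, fs):
    """Frequency-domain EMG features: median frequency, mean frequency,
    and mean power, computed from the one-sided power spectrum of the
    windowed signal."""
    spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
    power = np.abs(spectrum) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    total = power.sum()
    # Mean frequency: power-weighted average frequency.
    mean_freq = (freqs * power).sum() / total
    # Median frequency: first frequency where cumulative power reaches 50%.
    median_freq = freqs[np.searchsorted(np.cumsum(power), total / 2)]
    return median_freq, mean_freq, power.mean()
```

    For a pure 50 Hz tone both the median and the mean frequency land at (approximately) 50 Hz, which is a quick sanity check for the implementation.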

  16. Modeling time-to-event (survival) data using classification tree analysis.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.

  17. Classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    This article presents and discusses definitions of the term "classification" and the related concepts "concept/conceptualization," "categorization," "ordering," "taxonomy" and "typology." It further presents and discusses theories of classification including the influences of Aristotle and Wittgenstein. It presents different views on forming classes, including logical division, numerical taxonomy, historical classification, hermeneutical and pragmatic/critical views. Finally, issues related to artificial versus natural classification and taxonomic monism versus taxonomic pluralism are briefly...

  18. Robust rooftop extraction from visible band images using higher order CRF

    KAUST Repository

    Li, Er

    2015-08-01

    In this paper, we propose a robust framework for building extraction in visible band images. We first obtain an initial classification of the pixels based on an unsupervised presegmentation. We then develop a novel conditional random field (CRF) formulation to achieve accurate rooftop extraction, which incorporates pixel-level and segment-level information for the identification of rooftops. Compared with the commonly used CRF model, our model adds a higher-order potential defined on segments, exploiting region consistency and shape features at the segment level. Our experiments show that the proposed higher-order CRF model outperforms state-of-the-art methods at both the pixel and object levels on rooftops with complex structures and sizes in challenging environments. © 1980-2012 IEEE.

  19. Spectral multi-energy CT texture analysis with machine learning for tissue classification: an investigation using classification of benign parotid tumours as a testing paradigm.

    Science.gov (United States)

    Al Ajmi, Eiman; Forghani, Behzad; Reinhold, Caroline; Bayat, Maryam; Forghani, Reza

    2018-06-01

    There is a rich amount of quantitative information in spectral datasets generated from dual-energy CT (DECT). In this study, we compare the performance of texture analysis performed on multi-energy datasets to that of virtual monochromatic images (VMIs) at 65 keV only, using classification of the two most common benign parotid neoplasms as a testing paradigm. Forty-two patients with pathologically proven Warthin tumour (n = 25) or pleomorphic adenoma (n = 17) were evaluated. Texture analysis was performed on VMIs ranging from 40 to 140 keV in 5-keV increments (multi-energy analysis) or 65-keV VMIs only, which is typically considered equivalent to single-energy CT. Random forest (RF) models were constructed for outcome prediction using separate randomly selected training and testing sets or the entire patient set. Using multi-energy texture analysis, tumour classification in the independent testing set had accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 92%, 86%, 100%, 100%, and 83%, compared to 75%, 57%, 100%, 100%, and 63%, respectively, for single-energy analysis. Multi-energy texture analysis demonstrates superior performance compared to single-energy texture analysis of VMIs at 65 keV for classification of benign parotid tumours. • We present and validate a paradigm for texture analysis of DECT scans. • Multi-energy dataset texture analysis is superior to single-energy dataset texture analysis. • DECT texture analysis has high accuracy for diagnosis of benign parotid tumours. • DECT texture analysis with machine learning can enhance non-invasive diagnostic tumour evaluation.
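    Texture analysis of this kind typically starts from descriptors such as grey-level co-occurrence matrix (GLCM) statistics computed per VMI. The sketch below computes three common GLCM features in plain NumPy; the quantisation scheme, the single pixel offset, and the feature set are illustrative and not the paper's exact pipeline:

```python
import numpy as np

def glcm_features(image, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix (GLCM) contrast, energy, and
    homogeneity for one pixel offset (dx, dy)."""
    # Quantise intensities into `levels` grey levels.
    q = np.minimum(
        (image.astype(float) / (image.max() + 1e-9) * levels).astype(int),
        levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    # Count co-occurrences of grey levels at the given offset.
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity
```

    A flat region yields zero contrast and maximal energy, while a checkerboard yields high contrast, which matches the intuition behind these descriptors.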

  20. Robust Indoor Human Activity Recognition Using Wireless Signals.

    Science.gov (United States)

    Wang, Yi; Jiang, Xinli; Cao, Rongyu; Wang, Xiyang

    2015-07-15

    Wireless signals-based activity detection and recognition technology may be complementary to the existing vision-based methods, especially under the circumstance of occlusions, viewpoint change, complex background, lighting condition change, and so on. This paper explores the properties of the channel state information (CSI) of Wi-Fi signals, and presents a robust indoor daily human activity recognition framework with only one pair of transmission points (TP) and access points (AP). First of all, some indoor human actions are selected as primitive actions forming a training set. Then, an online filtering method is designed to make actions' CSI curves smooth and allow them to contain enough pattern information. Each primitive action pattern can be segmented from the outliers of its multi-input multi-output (MIMO) signals by a proposed segmentation method. Lastly, in online activities recognition, by selecting proper features and Support Vector Machine (SVM) based multi-classification, activities constituted by primitive actions can be recognized insensitive to the locations, orientations, and speeds.
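    The first two stages described above (online smoothing of the CSI stream, then segmenting action patterns from its outliers) can be illustrated in a simplified one-dimensional form; the window size, the k-sigma outlier rule, and the synthetic amplitude stream are assumptions made for the sketch, not the paper's method:

```python
def moving_average(xs, w=5):
    """Simple online-style smoothing of a CSI amplitude stream."""
    out, acc = [], []
    for x in xs:
        acc.append(x)
        if len(acc) > w:
            acc.pop(0)
        out.append(sum(acc) / len(acc))
    return out

def segment_activity(stream, k=3.0, w=5):
    """Flag samples whose smoothed amplitude deviates from the stream
    mean by more than k standard deviations: a toy version of segmenting
    action patterns from the outliers of the signal."""
    smooth = moving_average(stream, w)
    mu = sum(smooth) / len(smooth)
    var = sum((s - mu) ** 2 for s in smooth) / len(smooth)
    sd = var ** 0.5 or 1.0  # guard against a constant stream
    return [abs(s - mu) > k * sd for s in smooth]
```

    In practice the segmented windows would then be featurised and passed to the SVM multi-classifier; here only the smoothing and outlier segmentation are shown.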

  1. Robust Indoor Human Activity Recognition Using Wireless Signals

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2015-07-01

    Wireless signals-based activity detection and recognition technology may be complementary to the existing vision-based methods, especially under the circumstance of occlusions, viewpoint change, complex background, lighting condition change, and so on. This paper explores the properties of the channel state information (CSI) of Wi-Fi signals, and presents a robust indoor daily human activity recognition framework with only one pair of transmission points (TP) and access points (AP). First of all, some indoor human actions are selected as primitive actions forming a training set. Then, an online filtering method is designed to make actions' CSI curves smooth and allow them to contain enough pattern information. Each primitive action pattern can be segmented from the outliers of its multi-input multi-output (MIMO) signals by a proposed segmentation method. Lastly, in online activities recognition, by selecting proper features and Support Vector Machine (SVM) based multi-classification, activities constituted by primitive actions can be recognized insensitive to the locations, orientations, and speeds.

  2. Real-Time Subject-Independent Pattern Classification of Overt and Covert Movements from fNIRS Signals.

    Directory of Open Access Journals (Sweden)

    Neethu Robinson

    Recently, studies have reported the use of Near Infrared Spectroscopy (NIRS) for developing Brain-Computer Interfaces (BCI) by applying online pattern classification of brain states from subject-specific fNIRS signals. The purpose of the present study was to develop and test a real-time method for subject-specific and subject-independent classification of multi-channel fNIRS signals using support-vector machines (SVM), so as to determine its feasibility as an online neurofeedback system. Towards this goal, we used left versus right hand movement execution and movement imagery as study paradigms in a series of experiments. In the first two experiments, activations in the motor cortex during movement execution and movement imagery were used to develop subject-dependent models that obtained high classification accuracies, thereby indicating the robustness of our classification method. In the third experiment, a generalized classifier model was developed from the first two experiments' data and then applied for subject-independent neurofeedback training. Application of this method in new participants showed a mean classification accuracy of 63% for movement imagery tasks and 80% for movement execution tasks. These results, and the corresponding offline analysis reported in this study, demonstrate that SVM-based real-time subject-independent classification of fNIRS signals is feasible. This method has important applications in the field of hemodynamic BCIs and neuro-rehabilitation, where patients can be trained to learn spatio-temporal patterns of healthy brain activity.

  3. Determination of quantitative tissue composition by iterative reconstruction on 3D DECT volumes

    Energy Technology Data Exchange (ETDEWEB)

    Magnusson, Maria [Linkoeping Univ. (Sweden). Dept. of Electrical Engineering; Linkoeping Univ. (Sweden). Dept. of Medical and Health Sciences, Radiation Physics; Linkoeping Univ. (Sweden). Center for Medical Image Science and Visualization (CMIV); Malusek, Alexandr [Linkoeping Univ. (Sweden). Dept. of Medical and Health Sciences, Radiation Physics; Linkoeping Univ. (Sweden). Center for Medical Image Science and Visualization (CMIV); Nuclear Physics Institute AS CR, Prague (Czech Republic). Dept. of Radiation Dosimetry; Muhammad, Arif [Linkoeping Univ. (Sweden). Dept. of Medical and Health Sciences, Radiation Physics; Carlsson, Gudrun Alm [Linkoeping Univ. (Sweden). Dept. of Medical and Health Sciences, Radiation Physics; Linkoeping Univ. (Sweden). Center for Medical Image Science and Visualization (CMIV)

    2011-07-01

    Quantitative tissue classification using dual-energy CT has the potential to improve accuracy in radiation therapy dose planning as it provides more information about material composition of scanned objects than the currently used methods based on single-energy CT. One problem that hinders successful application of both single- and dual-energy CT is the presence of beam hardening and scatter artifacts in reconstructed data. Current pre- and post-correction methods used for image reconstruction often bias CT attenuation values and thus limit their applicability for quantitative tissue classification. Here we demonstrate simulation studies with a novel iterative algorithm that decomposes every soft tissue voxel into three base materials: water, protein, and adipose. The results demonstrate that beam hardening artifacts can effectively be removed and accurate estimation of mass fractions of each base material can be achieved. Our iterative algorithm starts with calculating parallel projections on two previously reconstructed DECT volumes reconstructed from fan-beam or helical projections with small conebeam angle. The parallel projections are then used in an iterative loop. Future developments include segmentation of soft and bone tissue and subsequent determination of bone composition. (orig.)
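    The voxel-wise decomposition step can be illustrated as a small linear system: two attenuation measurements plus the constraint that the three mass fractions sum to one give three equations in three unknowns. The basis attenuation values below are made-up placeholders; real values would come from tabulated attenuation data for the two DECT energies:

```python
import numpy as np

def three_material_decompose(mu_low, mu_high, basis):
    """Solve for (water, protein, adipose) fractions of one voxel from its
    two DECT attenuation values, assuming attenuation mixes linearly and
    the fractions sum to one. `basis` maps each base material to its
    (mu_low, mu_high) pair."""
    mats = ["water", "protein", "adipose"]
    A = np.array([[basis[m][0] for m in mats],   # low-energy attenuation
                  [basis[m][1] for m in mats],   # high-energy attenuation
                  [1.0, 1.0, 1.0]])              # fractions sum to 1
    b = np.array([mu_low, mu_high, 1.0])
    return dict(zip(mats, np.linalg.solve(A, b)))
```

    A voxel synthesised as a known 0.5/0.3/0.2 mixture is recovered exactly, which is the basic consistency check for such a decomposition.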

  4. Phylogenetics, ancestral state reconstruction, and a new infrafamilial classification of the pantropical Ochnaceae (Medusagynaceae, Ochnaceae s.str., Quiinaceae) based on five DNA regions

    NARCIS (Netherlands)

    Schneider, J.V.; Bissiengou, P.; Amaral, M.D.; Tahir, A.; Fay, M.F.; Thines, M.; Sosef, M.S.M.; Zizka, G.; Chatrou, L.W.

    2014-01-01

    Ochnaceae s.str. (Malpighiales) are a pantropical family of about 500 species and 27 genera of almost exclusively woody plants. Infrafamilial classification and relationships have been controversial partially due to the lack of a robust phylogenetic framework. Including all genera except Indosinia

  5. Tissue Multiplatform-Based Metabolomics/Metabonomics for Enhanced Metabolome Coverage.

    Science.gov (United States)

    Vorkas, Panagiotis A; Abellona U, M R; Li, Jia V

    2018-01-01

    The use of tissue as a matrix to elucidate disease pathology or explore intervention comes with several advantages. It allows investigation of the target alteration directly at the focal location and facilitates the detection of molecules that could become elusive after secretion into biofluids. However, tissue metabolomics/metabonomics comes with challenges not encountered in biofluid analyses. Furthermore, tissue heterogeneity does not allow for tissue aliquoting. Here we describe a multiplatform, multi-method workflow which enables metabolic profiling analysis of tissue samples, while it can deliver enhanced metabolome coverage. After applying a dual consecutive extraction (organic followed by aqueous), tissue extracts are analyzed by reversed-phase (RP-) and hydrophilic interaction liquid chromatography (HILIC-) ultra-performance liquid chromatography coupled to mass spectrometry (UPLC-MS) and nuclear magnetic resonance (NMR) spectroscopy. This pipeline incorporates the required quality control features, enhances versatility, allows provisional aliquoting of tissue extracts for future guided analyses, expands the range of metabolites robustly detected, and supports data integration. It has been successfully employed for the analysis of a wide range of tissue types.

  6. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Adaptive beamforming methods are known to degrade in the presence of steering-vector and covariance-matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response (MVDR) beamforming is presented that is robust against uncertainties in both the steering vector and the covariance matrix. The method minimizes an optimization problem with a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex but is converted into a convex optimization problem in this paper; it is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
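    For reference, the classical (non-robust) MVDR solution that such robust methods start from has a closed form; the sketch below assumes a known covariance matrix R and steering vector a, which is exactly the assumption the paper's robust formulation relaxes:

```python
import numpy as np

def mvdr_weights(R, a):
    """Classical MVDR weight vector
        w = R^{-1} a / (a^H R^{-1} a),
    which minimizes output power subject to a distortionless response
    (w^H a = 1) in the look direction."""
    Ri_a = np.linalg.solve(R, a)          # R^{-1} a without forming R^{-1}
    return Ri_a / (a.conj() @ Ri_a)
```

    The distortionless constraint w^H a = 1 holds by construction, and the resulting output power equals 1 / (a^H R^{-1} a).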

  7. Robust medical image segmentation for hyperthermia treatment planning

    International Nuclear Information System (INIS)

    Neufeld, E.; Chavannes, N.; Kuster, N.; Samaras, T.

    2005-01-01

    Full text: This work is part of an ongoing effort to develop a comprehensive hyperthermia treatment planning (HTP) tool. The goal is to unify all the steps necessary to perform treatment planning, from image segmentation to optimization of the energy deposition pattern, in a single tool. The basis of the HTP software is the routines and know-how developed in our TRINTY project, which resulted in the commercial EM platform SEMCAD-X. It incorporates the non-uniform finite-difference time-domain (FDTD) method, permitting the simulation of highly detailed models. Subsequently, in order to create highly resolved patient models, a powerful and robust segmentation tool is needed. A toolbox has been created that allows the flexible combination of various segmentation methods as well as several pre- and postprocessing functions. It works primarily with CT and MRI images, which it can read in various formats. A wide variety of segmentation methods has been implemented. This includes thresholding techniques (k-means classification, expectation maximization, and modal histogram analysis for automatic threshold detection, multi-dimensional if required), region-growing methods (with hysteretic behavior and simultaneous competitive growing), an interactive marker-based watershed transformation, level-set methods (homogeneity- and edge-based, fast-marching), a flexible live-wire implementation, as well as fuzzy connectedness. Due to the large number of tissues that need to be segmented for HTP, no methods that rely on prior knowledge have been implemented. Various edge-extraction routines, distance transforms, smoothing techniques (convolutions, anisotropic diffusion, sigma filter...), connected component analysis, topologically flexible interpolation, image algebra, and morphological operations are available. Moreover, contours or surfaces can be extracted, simplified, and exported.
Using these different techniques on several samples, the following conclusions have been drawn: Due to the
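    One of the automatic threshold-detection schemes listed above, k-means classification, reduces in the two-class one-dimensional case to the sketch below (initialisation from the data extremes and convergence handling are simplified assumptions):

```python
def kmeans_threshold(values, iters=50):
    """Two-class 1-D k-means for automatic threshold detection: returns
    the midpoint between the two final cluster centres."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        # Assign each value to the nearer centre, then update the centres.
        lo = [v for v in values if abs(v - c0) <= abs(v - c1)]
        hi = [v for v in values if abs(v - c0) > abs(v - c1)]
        if not lo or not hi:
            break
        c0, c1 = sum(lo) / len(lo), sum(hi) / len(hi)
    return (c0 + c1) / 2
```

    On a bimodal intensity sample, the returned threshold falls between the two modes and can be used directly for binarising an image.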

  8. The cranial cartilages of teleosts and their classification.

    OpenAIRE

    Benjamin, M

    1990-01-01

    The structure and distribution of cartilages have been studied in 45 species from 24 families. The resulting data have been used as a basis for establishing a new classification. A cartilage is regarded as 'cell-rich' if its cells or their lacunae occupy more than half of the tissue volume. Five classes of cell-rich cartilage are recognised: (a) hyaline-cell cartilage (common in the lips of bottom-dwelling cyprinids) and its subtypes fibro/hyaline-cell cartilage, elastic/hyaline-cell cartilage ...

  9. 78 FR 54970 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-09-09

    ... Service 7 CFR Part 27 [AMS-CN-13-0043] RIN 0581-AD33 Cotton Futures Classification: Optional Classification Procedure AGENCY: Agricultural Marketing Service, USDA. ACTION: Proposed rule. SUMMARY: The... optional cotton futures classification procedure--identified and known as ``registration'' by the U.S...

  10. A Decision-Tree-Based Algorithm for Speech/Music Classification and Segmentation

    Directory of Open Access Journals (Sweden)

    Lavner Yizhar

    2009-01-01

    We present an efficient algorithm for the segmentation of audio signals into speech or music. The central motivation for our study is consumer audio applications, where various real-time enhancements are often applied. The algorithm consists of a learning phase and a classification phase. In the learning phase, predefined training data is used for computing various time-domain and frequency-domain features for speech and music signals separately, and for estimating the optimal speech/music thresholds based on the probability density functions of the features. An automatic procedure is employed to select the best features for separation. In the classification phase, an initial classification is performed for each segment of the audio signal, using a three-stage sieve-like approach applying both Bayesian and rule-based methods. To avoid erroneous rapid alternations in the classification, a smoothing technique is applied, averaging the decision on each segment with past segment decisions. Extensive evaluation of the algorithm, on a database of more than 12 hours of speech and more than 22 hours of music, showed correct identification rates of 99.4% and 97.8%, respectively, and quick adjustment to alternating speech/music sections. In addition to its accuracy and robustness, the algorithm can easily be adapted to different audio types, and is suitable for real-time operation.
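    The smoothing step described above (averaging each segment's decision with past segment decisions) can be sketched with an exponential average of a per-segment speech-probability score; the score convention, the smoothing factor, and the threshold are illustrative assumptions rather than the paper's exact scheme:

```python
def smooth_decisions(segment_scores, alpha=0.7, threshold=0.5):
    """Per-segment speech/music decision with exponential smoothing of a
    speech-probability score, suppressing erroneous rapid alternations
    between consecutive segments."""
    decisions, state = [], None
    for s in segment_scores:
        # Blend the current score with the smoothed history.
        state = s if state is None else alpha * state + (1 - alpha) * s
        decisions.append("speech" if state >= threshold else "music")
    return decisions
```

    A single spurious high score inside a music passage is absorbed by the smoothed state instead of flipping the label for one segment.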

  11. 75 FR 70112 - Medical Devices; General and Plastic Surgery Devices; Classification of Non-Powered Suction...

    Science.gov (United States)

    2010-11-17

    .... FDA-2010-N-0513] Medical Devices; General and Plastic Surgery Devices; Classification of Non-Powered... risks. Adverse tissue reaction Material degradation Improper function of suction apparatus (e.g., reflux.... Material degradation Section 8. Stability and Shelf Life. [[Page 70113

  12. Computer-aided classification of lesions by means of their kinetic signatures in dynamic contrast-enhanced MR images

    Science.gov (United States)

    Twellmann, Thorsten; ter Haar Romeny, Bart

    2008-03-01

The kinetic characteristics of tissue in dynamic contrast-enhanced magnetic resonance imaging data are an important source of information for the differentiation of benign and malignant lesions. Kinetic curves measured for each lesion voxel make it possible to infer information about the state of the local tissue. As a whole, they reflect the heterogeneity of the vascular structure within a lesion, an important criterion for the preoperative classification of lesions. Current clinical practice in the analysis of tissue kinetics, however, is mainly based on the evaluation of the "most-suspect curve", which relates only to a small, manually or semi-automatically selected region-of-interest within a lesion and does not reflect any information about tissue heterogeneity. We propose a new method which exploits the full range of kinetic information for the automatic classification of lesions. Instead of breaking down the large amount of kinetic information to a single curve, each lesion is considered as a probability distribution in a space of kinetic features, efficiently represented by its kinetic signature obtained by adaptive vector quantization of the corresponding kinetic curves. Dissimilarity of two signatures can be objectively measured using the Mallows distance, which is a metric defined on probability distributions. The embedding of this metric in a suitable kernel function enables us to employ modern kernel-based machine learning techniques for the classification of signatures. In a study considering 81 breast lesions, the proposed method yielded an Az value of 0.89 ± 0.01 for the discrimination of benign and malignant lesions in a nested leave-one-lesion-out evaluation setting.
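For two equal-size one-dimensional samples with uniform weights, the Mallows (Wasserstein-p) distance reduces to matching sorted values against sorted values. The sketch below illustrates that special case together with a Gaussian-type kernel embedding; the paper's signatures are multi-dimensional quantized distributions, so the function names and the kernel form here are simplifying assumptions.

```python
import math

def mallows_distance(a, b, p=2):
    """Mallows (Wasserstein-p) distance between two equal-size 1-D samples
    viewed as empirical distributions: the optimal coupling pairs the
    sorted values of one sample with the sorted values of the other."""
    assert len(a) == len(b), "equal-weight, equal-size special case only"
    return (sum(abs(x - y) ** p for x, y in zip(sorted(a), sorted(b)))
            / len(a)) ** (1.0 / p)

def mallows_kernel(a, b, sigma=1.0):
    """Gaussian-type kernel built on the metric, so kernel machines
    (e.g. an SVM) can classify signatures directly."""
    d = mallows_distance(a, b)
    return math.exp(-d * d / (2.0 * sigma * sigma))

# Two samples that differ by a constant shift of 1 are at distance 1:
print(mallows_distance([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```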

  13. Generation of branching ureteric bud tissues from human pluripotent stem cells.

    Science.gov (United States)

    Mae, Shin-Ichi; Ryosaka, Makoto; Toyoda, Taro; Matsuse, Kyoko; Oshima, Yoichi; Tsujimoto, Hiraku; Okumura, Shiori; Shibasaki, Aya; Osafune, Kenji

    2018-01-01

    Recent progress in kidney regeneration research is noteworthy. However, the selective and robust differentiation of the ureteric bud (UB), an embryonic renal progenitor, from human pluripotent stem cells (hPSCs) remains to be established. The present study aimed to establish a robust induction method for branching UB tissue from hPSCs towards the creation of renal disease models. Here, we found that anterior intermediate mesoderm (IM) differentiates from anterior primitive streak, which allowed us to successfully develop an efficient two-dimensional differentiation method of hPSCs into Wolffian duct (WD) cells. We also established a simplified procedure to generate three-dimensional WD epithelial structures that can form branching UB tissues. This system may contribute to hPSC-based regenerative therapies and disease models for intractable disorders arising in the kidney and lower urinary tract. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. MLVA Based Classification of Mycobacterium tuberculosis Complex Lineages for a Robust Phylogeographic Snapshot of Its Worldwide Molecular Diversity

    Science.gov (United States)

    Hill, Véronique; Zozio, Thierry; Sadikalay, Syndia; Viegas, Sofia; Streit, Elisabeth; Kallenius, Gunilla; Rastogi, Nalin

    2012-01-01

Multiple-locus variable-number tandem repeat analysis (MLVA) is useful to establish transmission routes and sources of infection for various microorganisms, including the Mycobacterium tuberculosis complex (MTC). The recently released SITVITWEB database contains 12-loci Mycobacterial Interspersed Repetitive Units – Variable Number of Tandem DNA Repeats (MIRU-VNTR) profiles and spoligotype patterns for thousands of MTC strains; it uses MIRU International Types (MIT) and Spoligotype International Types (SIT) to designate clustered patterns worldwide. Considering existing doubts about the ability of spoligotyping alone to reveal exact phylogenetic relationships between MTC strains, we developed an MLVA-based classification for MTC genotypic lineages. We studied 6 different subsets of MTC isolates encompassing 7793 strains worldwide. Minimum spanning trees (MST) were constructed to identify major lineages, and the most common representative located as a central node was taken as the prototype defining different phylogenetic groups. A total of 7 major lineages with their respective prototypes were identified: Indo-Oceanic/MIT57, East Asian and African Indian/MIT17, Euro American/MIT116, West African-I/MIT934, West African-II/MIT664, M. bovis/MIT49, M. canettii/MIT60. Further MST subdivision identified an additional 34 sublineage MIT prototypes. The phylogenetic relationships among the 37 newly defined MIRU-VNTR lineages were inferred using a classification algorithm based on a Bayesian approach. This information was used to construct an updated phylogenetic and phylogeographic snapshot of worldwide MTC diversity, studied at the regional, sub-regional, and country levels according to the United Nations specifications. We also looked for IS6110 insertional events that are known to modify the results of the spoligotyping in specific circumstances, and showed that a fair portion of convergence leading to the currently observed bias in phylogenetic classification of strains may

  15. Measurement of the hyperelastic properties of 44 pathological ex vivo breast tissue samples

    International Nuclear Information System (INIS)

    O'Hagan, Joseph J; Samani, Abbas

    2009-01-01

The elastic and hyperelastic properties of biological soft tissues have been of interest to the medical community. There are several biomedical applications where parameters characterizing such properties are critical for a reliable clinical outcome. These applications include surgery planning, needle biopsy and brachytherapy, where tissue biomechanical modeling is involved. Another important application is interpreting nonlinear elastography images. While there has been considerable research on the measurement of the linear elastic modulus of small tissue samples, little research has been conducted on measuring parameters that characterize the nonlinear elasticity of tissues included in tissue slice specimens. This work presents hyperelastic measurement results of 44 pathological ex vivo breast tissue samples. For each sample, five hyperelastic models have been used, including the Yeoh, N = 2 polynomial, N = 1 Ogden, Arruda-Boyce, and Veronda-Westmann models. Results show that the Yeoh, polynomial and Ogden models are the most accurate in terms of fitting experimental data. The results indicate that almost all of the parameters corresponding to the pathological tissues are between two times to over two orders of magnitude larger than those of normal tissues, with C11 showing the most significant difference. Furthermore, statistical analysis indicates that C02 of the Yeoh model, and C11 and C20 of the polynomial model, have very good potential for cancer classification as they show statistically significant differences for various cancer types, especially for invasive lobular carcinoma. In addition to the potential for use in cancer classification, the presented data are very important for applications such as surgery planning and virtual reality based clinician training systems where accurate nonlinear tissue response modeling is required.
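As a concrete sketch of fitting one of these hyperelastic models: the uniaxial nominal stress of an incompressible Yeoh solid is linear in its coefficients, so a least-squares fit reduces to solving 3×3 normal equations. The three-coefficient notation (c1, c2, c3) and the toy parameter values below are illustrative assumptions, not the paper's measured breast-tissue parameters.

```python
def yeoh_uniaxial_stress(lam, c1, c2, c3):
    """Nominal uniaxial stress of an incompressible Yeoh solid at stretch lam."""
    i1 = lam ** 2 + 2.0 / lam                      # first strain invariant
    return 2.0 * (lam - lam ** -2) * (
        c1 + 2.0 * c2 * (i1 - 3.0) + 3.0 * c3 * (i1 - 3.0) ** 2)

def fit_yeoh(lams, stresses):
    """Least-squares fit of (c1, c2, c3). The model is linear in the
    coefficients, so we build the design matrix and solve A^T A x = A^T y
    by Gaussian elimination with partial pivoting."""
    rows = []
    for lam in lams:
        i1 = lam ** 2 + 2.0 / lam
        f = 2.0 * (lam - lam ** -2)
        rows.append([f, 2.0 * f * (i1 - 3.0), 3.0 * f * (i1 - 3.0) ** 2])
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * s for r, s in zip(rows, stresses)) for i in range(3)]
    m = [ata[i] + [aty[i]] for i in range(3)]       # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            k = m[r][col] / m[col][col]
            m[r] = [a - k * b for a, b in zip(m[r], m[col])]
    x = [0.0] * 3
    for i in (2, 1, 0):                             # back substitution
        x[i] = (m[i][3] - sum(m[i][j] * x[j] for j in range(i + 1, 3))) / m[i][i]
    return x

lams = [1.0 + 0.02 * k for k in range(1, 26)]       # stretches 1.02 .. 1.50
true = (5.0, 1.5, 0.3)                              # hypothetical coefficients
data = [yeoh_uniaxial_stress(l, *true) for l in lams]
print([round(c, 6) for c in fit_yeoh(lams, data)])  # recovers ≈ (5.0, 1.5, 0.3)
```

With noisy measured stress-stretch data the same solve applies unchanged; only the recovered coefficients pick up uncertainty.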

  16. Measurement of the hyperelastic properties of 44 pathological ex vivo breast tissue samples

    Energy Technology Data Exchange (ETDEWEB)

O'Hagan, Joseph J; Samani, Abbas [Department of Electrical and Computer Engineering, University of Western Ontario, London, ON (Canada)], E-mail: asamani@uwo.ca

    2009-04-21

The elastic and hyperelastic properties of biological soft tissues have been of interest to the medical community. There are several biomedical applications where parameters characterizing such properties are critical for a reliable clinical outcome. These applications include surgery planning, needle biopsy and brachytherapy, where tissue biomechanical modeling is involved. Another important application is interpreting nonlinear elastography images. While there has been considerable research on the measurement of the linear elastic modulus of small tissue samples, little research has been conducted on measuring parameters that characterize the nonlinear elasticity of tissues included in tissue slice specimens. This work presents hyperelastic measurement results of 44 pathological ex vivo breast tissue samples. For each sample, five hyperelastic models have been used, including the Yeoh, N = 2 polynomial, N = 1 Ogden, Arruda-Boyce, and Veronda-Westmann models. Results show that the Yeoh, polynomial and Ogden models are the most accurate in terms of fitting experimental data. The results indicate that almost all of the parameters corresponding to the pathological tissues are between two times to over two orders of magnitude larger than those of normal tissues, with C11 showing the most significant difference. Furthermore, statistical analysis indicates that C02 of the Yeoh model, and C11 and C20 of the polynomial model, have very good potential for cancer classification as they show statistically significant differences for various cancer types, especially for invasive lobular carcinoma. In addition to the potential for use in cancer classification, the presented data are very important for applications such as surgery planning and virtual reality based clinician training systems where accurate nonlinear tissue response modeling is required.

  17. Measurement of the hyperelastic properties of 44 pathological ex vivo breast tissue samples

    Science.gov (United States)

    O'Hagan, Joseph J.; Samani, Abbas

    2009-04-01

The elastic and hyperelastic properties of biological soft tissues have been of interest to the medical community. There are several biomedical applications where parameters characterizing such properties are critical for a reliable clinical outcome. These applications include surgery planning, needle biopsy and brachytherapy, where tissue biomechanical modeling is involved. Another important application is interpreting nonlinear elastography images. While there has been considerable research on the measurement of the linear elastic modulus of small tissue samples, little research has been conducted on measuring parameters that characterize the nonlinear elasticity of tissues included in tissue slice specimens. This work presents hyperelastic measurement results of 44 pathological ex vivo breast tissue samples. For each sample, five hyperelastic models have been used, including the Yeoh, N = 2 polynomial, N = 1 Ogden, Arruda-Boyce, and Veronda-Westmann models. Results show that the Yeoh, polynomial and Ogden models are the most accurate in terms of fitting experimental data. The results indicate that almost all of the parameters corresponding to the pathological tissues are between two times to over two orders of magnitude larger than those of normal tissues, with C11 showing the most significant difference. Furthermore, statistical analysis indicates that C02 of the Yeoh model, and C11 and C20 of the polynomial model, have very good potential for cancer classification as they show statistically significant differences for various cancer types, especially for invasive lobular carcinoma. In addition to the potential for use in cancer classification, the presented data are very important for applications such as surgery planning and virtual reality based clinician training systems where accurate nonlinear tissue response modeling is required.

  18. Using cell nuclei features to detect colon cancer tissue in hematoxylin and eosin stained slides.

    Science.gov (United States)

    Jørgensen, Alex Skovsbo; Rasmussen, Anders Munk; Andersen, Niels Kristian Mäkinen; Andersen, Simon Kragh; Emborg, Jonas; Røge, Rasmus; Østergaard, Lasse Riis

    2017-08-01

Currently, diagnosis of colon cancer is based on manual examination of histopathological images by a pathologist. This can be time consuming, and interpretation of the images is subject to inter- and intra-observer variability. This may be improved by introducing a computer-aided diagnosis (CAD) system for automatic detection of cancer tissue within whole slide hematoxylin and eosin (H&E) stains. Cancer disrupts the normal control mechanisms of cell proliferation and differentiation, affecting the structure and appearance of the cells. Therefore, extracting features from segmented cell nuclei structures may provide useful information to detect cancer tissue. A framework for automatic classification of regions of interest (ROIs) containing either benign or cancerous colon tissue extracted from whole slide H&E stained images using cell nuclei features was proposed. A total of 1,596 ROIs were extracted from 87 whole slide H&E stains (44 benign and 43 cancer). A cell nuclei segmentation algorithm consisting of color deconvolution, k-means clustering, local adaptive thresholding, and cell separation was performed within the ROIs to extract cell nuclei features. From the segmented cell nuclei structures a total of 750 texture and intensity-based features were extracted for classification of the ROIs. The nine most discriminative cell nuclei features were used in a random forest classifier to determine if the ROIs contained benign or cancer tissue. The ROI classification obtained an area under the curve (AUC) of 0.96, sensitivity of 0.88, specificity of 0.92, and accuracy of 0.91 using an optimized threshold. The developed framework showed promising results in using cell nuclei features to classify ROIs as containing benign or cancer tissue in H&E stained tissue samples. © 2017 International Society for Advancement of Cytometry.
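The AUC figure reported above can be computed directly from per-ROI classifier scores via the Mann-Whitney statistic, without building an explicit ROC curve. The scores below are made-up illustrative numbers, not the study's data.

```python
def auc(scores_neg, scores_pos):
    """Area under the ROC curve as the Mann-Whitney probability that a
    randomly chosen positive (cancer) ROI outscores a randomly chosen
    negative (benign) ROI, with ties counted as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

benign_scores = [0.10, 0.20, 0.30]   # hypothetical classifier outputs
cancer_scores = [0.25, 0.60, 0.90]
print(round(auc(benign_scores, cancer_scores), 3))  # 0.889: 8 of 9 pairs ordered correctly
```

This O(n·m) form is fine for a sketch; a production implementation would rank-sort the pooled scores for O((n+m) log(n+m)).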

  19. Reservoir Identification: Parameter Characterization or Feature Classification

    Science.gov (United States)

    Cao, J.

    2017-12-01

The ultimate goal of oil and gas exploration is to find oil or gas reservoirs with industrial mining value. Therefore, the core task of modern oil and gas exploration is to identify oil or gas reservoirs on seismic profiles. Traditionally, a reservoir is identified by seismic inversion of a series of physical parameters such as porosity, saturation, permeability, formation pressure, and so on. Due to the heterogeneity of the geological medium, the approximation of the inversion model, and the incompleteness and noisiness of the data, the inversion results are highly uncertain and must be calibrated or corrected with well data. In areas with few or no wells, reservoir identification based on seismic inversion is high-risk. Reservoir identification is essentially a classification issue. In the identification process, the underground rocks are divided into reservoirs with industrial mining value and host rocks with non-industrial mining value. In addition to the traditional classification by physical parameters, the classification may be achieved using one or a few comprehensive features. By introducing the concept of the seismic-print, we have developed a new reservoir identification method based on seismic-print analysis. Furthermore, we explore the possibility of using deep learning to discover the seismic-print characteristics of oil and gas reservoirs. Preliminary experiments have shown that deep learning of seismic data can distinguish gas reservoirs from host rocks. The combination of seismic-print analysis and seismic deep learning is expected to yield a more robust reservoir identification method. The work was supported by NSFC under grant No. 41430323 and No. U1562219, and the National Key Research and Development Program under Grant No. 2016YFC0601

  20. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    International Nuclear Information System (INIS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Shen, Aiguo; Hu, Jiming; Jia, Jun

    2013-01-01

Existing methods for the early and differential diagnosis of oral cancer are limited by its inconspicuous early symptoms and by imperfect imaging examination methods. In this paper, classification models of oral adenocarcinoma, carcinoma tissues and a control group, using just four features, are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of noise reduction and posterior probability mechanisms. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups: adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. These results show that utilizing HGP in laser Raman spectroscopy (LRS) detection analysis for the diagnosis of oral cancer gives accurate results, and the prospects for application are also satisfactory. (paper)
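The Matthews correlation coefficient quoted above is a single summary of the binary confusion matrix; a minimal sketch follows, with made-up counts rather than the paper's data.

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient: +1 for perfect prediction,
    0 for chance-level, -1 for total disagreement. Unlike accuracy,
    it stays informative under class imbalance."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical confusion matrix (not the paper's counts):
print(round(mcc(tp=50, fp=10, tn=80, fn=20), 3))  # 0.618
```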

  1. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    Science.gov (United States)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-03-01

Existing methods for the early and differential diagnosis of oral cancer are limited by its inconspicuous early symptoms and by imperfect imaging examination methods. In this paper, classification models of oral adenocarcinoma, carcinoma tissues and a control group, using just four features, are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of noise reduction and posterior probability mechanisms. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups: adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. These results show that utilizing HGP in laser Raman spectroscopy (LRS) detection analysis for the diagnosis of oral cancer gives accurate results, and the prospects for application are also satisfactory.

  2. Texture analysis of speckle in optical coherence tomography images of tissue phantoms

    International Nuclear Information System (INIS)

    Gossage, Kirk W; Smith, Cynthia M; Kanter, Elizabeth M; Hariri, Lida P; Stone, Alice L; Rodriguez, Jeffrey J; Williams, Stuart K; Barton, Jennifer K

    2006-01-01

Optical coherence tomography (OCT) is an imaging modality capable of acquiring cross-sectional images of tissue using back-reflected light. Conventional OCT images have a resolution of 10-15 μm, and are thus best suited for visualizing tissue layers and structures. OCT images of collagen (with and without endothelial cells) have no resolvable features and may appear to simply show an exponential decrease in intensity with depth. However, examination of these images reveals that they display a characteristic repetitive structure due to speckle. The purpose of this study is to evaluate the application of statistical and spectral texture analysis techniques for differentiating living and non-living tissue phantoms containing various sizes and distributions of scatterers based on speckle content in OCT images. Statistically significant differences between texture parameters and excellent classification rates were obtained when comparing various endothelial cell concentrations ranging from 0 cells/ml to 25 million cells/ml. Statistically significant results and excellent classification rates were also obtained using various sizes of microspheres with concentrations ranging from 0 microspheres/ml to 500 million microspheres/ml. This study has shown that texture analysis of OCT images may be capable of differentiating tissue phantoms containing various sizes and distributions of scatterers.

  3. Classification of refrigerants; Classification des fluides frigorigenes

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

This document is based on the US standard ANSI/ASHRAE 34, published in 2001 and entitled 'Designation and safety classification of refrigerants'. This classification provides a clear, internationally consistent organization of the refrigerants used worldwide, thanks to a codification of refrigerants corresponding to their chemical composition. This note explains the codification: prefix, suffixes (hydrocarbons and derived fluids, azeotropic and non-azeotropic mixtures, various organic compounds, non-organic compounds), and safety classification (toxicity, flammability, case of mixtures). (J.S.)

  4. Spatial distribution of soluble insulin in pig subcutaneous tissue

    DEFF Research Database (Denmark)

    Thomsen, Maria; Rasmussen, Christian Hove; Refsgaard, Hanne H F

    2015-01-01

    in the tomographic reconstructions and the amount of drug in each tissue class was quantified. With a scan time of about 45min per sample, and a robust segmentation it was possible to analyze differences in the spatial drug distribution between several similar injections. It was studied how the drug distribution...

  5. Robustness Analyses of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Hald, Frederik

    2013-01-01

The robustness of structural systems has obtained a renewed interest arising from a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for the robustness of structures and provide strategies and methods to obtain robustness. Therefore, a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarises issues with respect to the robustness of timber structures and discusses the consequences of such robustness issues for the future development of timber structures.

  6. Experimental Investigation for Fault Diagnosis Based on a Hybrid Approach Using Wavelet Packet and Support Vector Classification

    Directory of Open Access Journals (Sweden)

    Pengfei Li

    2014-01-01

Full Text Available To deal with the difficulty of obtaining a large number of fault samples under practical conditions for mechanical fault diagnosis, a hybrid method combining wavelet packet decomposition and support vector classification (SVC) is proposed. The wavelet packet is employed to decompose the vibration signal to obtain the energy ratio in each frequency band. Taking the energy ratios as feature vectors, the pattern recognition results are obtained by the SVC. The rolling bearing and gear fault diagnostic results on a typical experimental platform show that the present approach is robust to noise, has higher classification accuracy, and thus provides a better way to diagnose mechanical faults under the condition of small fault samples.
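The feature-extraction step described above (wavelet packet decomposition followed by per-band energy ratios) can be sketched with the simplest wavelet, the Haar basis; the depth, the filter choice, and the function names are illustrative assumptions, not the paper's configuration.

```python
def haar_step(x):
    """One Haar analysis step: orthonormal (approximation, detail) pair
    at half the input length. len(x) must be even."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    return a, d

def wp_energy_ratios(signal, depth=3):
    """Full Haar wavelet-packet tree to `depth`; returns the fraction of
    total signal energy falling in each of the 2**depth terminal
    sub-bands. len(signal) must be divisible by 2**depth."""
    bands = [list(signal)]
    for _ in range(depth):
        nxt = []
        for b in bands:
            a, d = haar_step(b)
            nxt.extend([a, d])
        bands = nxt
    energies = [sum(v * v for v in b) for b in bands]
    total = sum(energies) or 1.0
    return [e / total for e in energies]

# A constant signal puts all of its energy in the lowest
# (all-approximation) band:
print(wp_energy_ratios([1.0] * 8, depth=3)[0])  # 1.0
```

The resulting 2**depth ratios form the feature vector that would be handed to the SVC.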

  7. Activation of Pax7-positive cells in a non-contractile tissue contributes to regeneration of myogenic tissues in the electric fish S. macrurus.

    Directory of Open Access Journals (Sweden)

    Christopher M Weber

Full Text Available The ability to regenerate tissues is shared across many metazoan taxa, yet the type and extent to which multiple cellular mechanisms come into play can differ across species. For example, urodele amphibians can completely regenerate all lost tissues, including skeletal muscles, after limb amputation. This remarkable ability of urodeles to restore entire limbs has been largely linked to a dedifferentiation-dependent mechanism of regeneration. However, whether cell dedifferentiation is the fundamental factor that triggers a robust regeneration capacity, and whether the loss or inhibition of this process explains the limited regeneration potential in other vertebrates, is not known. Here, we studied the cellular mechanisms underlying the repetitive regeneration of myogenic tissues in the electric fish S. macrurus. Our in vivo microinjection studies of high molecular weight cell lineage tracers into single identified adult myogenic cells (muscle or noncontractile muscle-derived electrocytes) revealed no fragmentation or cellularization proximal to the amputation plane. In contrast, ultrastructural and immunolabeling studies verified the presence of myogenic stem cells that express the satellite cell marker Pax7 in mature muscle fibers and electrocytes of S. macrurus. These data provide the first example of Pax7-positive muscle stem cells localized within a non-contractile electrogenic tissue. Moreover, upon amputation, Pax7-positive cells underwent robust replication and were detected exclusively in regions that give rise to myogenic cells and dorsal spinal cord components, revealing a regeneration process in S. macrurus that is dependent on the activation of myogenic stem cells for the renewal of both skeletal muscle and the muscle-derived electric organ. These data are consistent with the emergent concept in vertebrate regeneration that different tissues provide a distinct progenitor cell population to the regeneration blastema, and these

  8. Bearing Fault Classification Based on Conditional Random Field

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2013-01-01

Full Text Available Condition monitoring of rolling element bearings is paramount for predicting the lifetime and performing effective maintenance of mechanical equipment. To overcome the drawbacks of the hidden Markov model (HMM) and improve diagnosis accuracy, a conditional random field (CRF) model based classifier is proposed. In this model, the feature vector sequences and the fault categories are linked by an undirected graphical model in which their relationship is represented by a global conditional probability distribution. In comparison with the HMM, the main advantage of the CRF model is that it can depict the temporal dynamic information between the observation sequences and state sequences without assuming the independence of the input feature vectors. Therefore, the interrelationship between adjacent observation vectors can also be depicted and integrated into the model, which makes the classifier more robust and accurate than the HMM. To evaluate the effectiveness of the proposed method, four kinds of bearing vibration signals, corresponding to normal, inner race pit, outer race pit and roller pit conditions, are collected from the test rig. The CRF and HMM models are then built to perform fault classification by taking the sub-band energy features of wavelet packet decomposition (WPD) as the observation sequences. Moreover, the K-fold cross validation method is adopted to improve the evaluation accuracy of the classifier. The analysis and comparison under different fold numbers show that the classification accuracy of the CRF model is higher than that of the HMM. This method sheds new light on the accurate classification of bearing faults.

  9. Robust intravascular optical coherence elastography by line correlations

    International Nuclear Information System (INIS)

    Soest, Gijs van; Mastik, Frits; Jong, Nico de; Steen, Anton F W van der

    2007-01-01

    We present a new method for intravascular optical coherence elastography, which is robust against motion artefacts. It employs the correlation between adjacent lines, instead of subsequent frames. Pressure to deform the tissue is applied synchronously with the line scan rate of the optical coherence tomography (OCT) instrument. The viability of the method is demonstrated with a simulation study. We find that the root mean square (rms) error of the displacement estimate is 0.55 μm, and the rms error of the strain is 0.6%. It is shown that high-strain spots in the vessel wall, such as observed at the sites of vulnerable atherosclerotic lesions, can be detected with the technique
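The line-to-line correlation at the heart of this method can be sketched as a lag search that maximizes the normalized cross-correlation between adjacent A-lines. This integer-lag toy version (the function names and the synthetic signal are assumptions) omits the sub-sample interpolation a real elastography pipeline would need to reach micrometre displacement precision.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def line_shift(line_a, line_b, max_shift=10):
    """Integer axial displacement between two adjacent A-lines, chosen as
    the lag maximizing normalized cross-correlation over the overlap."""
    def overlap_ncc(s):
        a_seg = line_a[max(0, -s): len(line_a) - max(0, s)]
        b_seg = line_b[max(0, s): len(line_b) - max(0, -s)]
        return ncc(a_seg, b_seg)
    return max(range(-max_shift, max_shift + 1), key=overlap_ncc)

a = [math.sin(0.3 * i) for i in range(50)]
b = [0.0] * 3 + a[:-3]          # b is a, delayed by 3 samples
print(line_shift(a, b))         # 3
```

Differentiating the per-line displacements along depth would then give the strain map used to flag high-strain spots.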

  10. Contaminant classification using cosine distances based on multiple conventional sensors.

    Science.gov (United States)

    Liu, Shuming; Che, Han; Smith, Kate; Chang, Tian

    2015-02-01

Emergent contamination events have a significant impact on water systems. After contamination detection, it is important to classify the type of contaminant quickly to provide support for remediation attempts. Conventional methods generally either rely on laboratory-based analysis, which requires a long analysis time, or on multivariable-based geometry analysis and sequence analysis, which are prone to being affected by the contaminant concentration. This paper proposes a new contaminant classification method, which discriminates contaminants in real time, independently of the contaminant concentration. The proposed method quantifies the similarities or dissimilarities between sensors' responses to different types of contaminants. The performance of the proposed method was evaluated using data from contaminant injection experiments in a laboratory and compared with a Euclidean distance-based method. The robustness of the proposed method was evaluated using an uncertainty analysis. The results show that the proposed method performed better in identifying the type of contaminant than the Euclidean distance based method and that it could classify the type of contaminant in minutes without significantly compromising the correct classification rate (CCR).
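The concentration independence claimed above follows from the geometry of the cosine distance: scaling a sensor-response vector leaves its direction, and hence its cosine distance to every reference signature, unchanged. A minimal sketch, with hypothetical sensor signatures rather than the paper's library:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity. Insensitive to overall magnitude, so a
    diluted sample keeps the same distance to each reference signature."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return 1.0 - dot / (nu * nv) if nu and nv else 1.0

def classify(response, library):
    """Assign the label whose reference multi-sensor response direction
    is closest in cosine distance."""
    return min(library, key=lambda k: cosine_distance(response, library[k]))

library = {                         # hypothetical multi-sensor signatures
    "contaminant_A": [1.0, 0.2, 0.5, 0.1],
    "contaminant_B": [0.1, 1.0, 0.3, 0.8],
}
# A diluted (uniformly scaled) sample of A still maps to A:
print(classify([0.5, 0.1, 0.25, 0.05], library))  # contaminant_A
```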

  11. Classification of hydration status using electrocardiogram and machine learning

    Science.gov (United States)

    Kaveh, Anthony; Chung, Wayne

    2013-10-01

The electrocardiogram (ECG) has been used extensively in clinical practice for decades to non-invasively characterize the health of heart tissue; however, these techniques are limited to time domain features. We propose a machine classification system using support vector machines (SVM) that uses temporal and spectral information to classify health state beyond cardiac arrhythmias. Our method uses single-lead ECG to classify volume depletion (or dehydration) without the lengthy and costly blood analysis tests traditionally used for detecting dehydration status. Our method builds on established clinical ECG criteria for identifying electrolyte imbalances and lends itself to automated, computationally efficient implementation. The method was tested on the MIT-BIH PhysioNet database to validate this purely computational method for expedient disease-state classification. The results show high sensitivity, supporting use as a cost- and time-effective screening tool.

  12. Application of a neural network for reflectance spectrum classification

    Science.gov (United States)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum, anywhere from the ultraviolet to the thermal infrared regions, and analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that convolutional neural networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. Using bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional reflectance data into a 2-dimensional image with channels, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples to improve the robustness of neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although training typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, most popular neural networks, such as VGG, GoogLeNet and AlexNet, are trained on RGB spatial image data. Our approach aims to build a directional-reflectance-based neural network to help us understand reflectance classification from another perspective. At the end of this paper, we compare several classifiers and analyze the trade-offs among neural network parameters.
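    The BRDF reformulation can be sketched as a simple tensor reshape; the angular grid sizes below are arbitrary illustrative choices, not the authors' sampling resolution.

    ```python
    import numpy as np

    # Hypothetical angular sampling grid (not the authors' resolution):
    # incident zenith/azimuth and reflected zenith/azimuth
    n_ti, n_pi, n_to, n_po = 6, 12, 6, 12
    brdf = np.random.rand(n_ti, n_pi, n_to, n_po).astype(np.float32)

    # Collapse incident directions to rows and reflected directions to columns,
    # producing a 2-D "directional reflectance image" a CNN can ingest;
    # spectral bands would occupy a third (channel) axis
    image = brdf.reshape(n_ti * n_pi, n_to * n_po)
    ```

    Each row of `image` is then the full reflected-direction response for one incident direction, which is what lets standard 2-D convolutions operate on angular structure.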

  13. Non-Hodgkin lymphoma response evaluation with MRI texture classification

    Directory of Open Access Journals (Sweden)

    Heinonen Tomi T

    2009-06-01

    Full Text Available Abstract Background To show magnetic resonance imaging (MRI) texture appearance change in non-Hodgkin lymphoma (NHL) during treatment, with response controlled by quantitative volume analysis. Methods A total of 19 patients having NHL with an evaluable lymphoma lesion were scanned at three imaging time points with a 1.5T device during clinical treatment evaluation. Texture characteristics of the images were analyzed and classified with the MaZda application and statistical tests. Results NHL tissue MRI texture imaged before treatment and under chemotherapy was classified within several subgroups, showing the best discrimination, with 96% correct classification, in non-linear discriminant analysis of T2-weighted images. Texture parameters of the MRI data were successfully tested with statistical tests to assess the separability of the parameters in evaluating chemotherapy response in lymphoma tissue. Conclusion Texture characteristics of MRI data were classified successfully; this showed texture analysis to be a potential quantitative means of representing lymphoma tissue changes during chemotherapy response monitoring.

  14. Predicting disease risk using bootstrap ranking and classification algorithms.

    Directory of Open Access Journals (Sweden)

    Ohad Manor

    Full Text Available Genome-wide association studies (GWAS) are widely used to search for genetic loci that underlie human disease. Another goal is to predict disease risk for different individuals given their genetic sequence. Such predictions could either be used as a "black box" in order to promote changes in lifestyle and screening for early diagnosis, or as a model that can be studied to better understand the mechanism of the disease. Current methods for risk prediction typically rank single nucleotide polymorphisms (SNPs) by the p-value of their association with the disease and use the top-associated SNPs as input to a classification algorithm. However, the predictive power of such methods is relatively poor. To improve the predictive power, we devised BootRank, which uses bootstrapping in order to obtain a robust prioritization of SNPs for use in predictive models. We show that BootRank improves the ability to predict disease risk of unseen individuals in the Wellcome Trust Case Control Consortium (WTCCC) data and results in a more robust set of SNPs and a larger number of enriched pathways being associated with the different diseases. Finally, we show that combining BootRank with seven different classification algorithms improves performance compared to previous studies that used the WTCCC data. Notably, diseases for which BootRank results in the largest improvements were recently shown to have more heritability than previously thought, likely due to contributions from variants with low minor allele frequency (MAF), suggesting that BootRank can be beneficial in cases where SNPs affecting the disease are poorly tagged or have low MAF. Overall, our results show that improving disease risk prediction from genotypic information may be a tangible goal, with potential implications for personalized disease screening and treatment.
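    The bootstrap-ranking idea can be sketched as follows. This is a toy, not the authors' procedure: a simple case/control mean-difference score stands in for the per-SNP association statistic, and ranks are averaged over bootstrap resamples so that SNPs whose association survives resampling float to the top.

    ```python
    import numpy as np

    def bootrank(X, y, n_boot=50, seed=0):
        """Toy bootstrap ranking: score each SNP on every bootstrap resample,
        then average the per-resample ranks."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        rank_sum = np.zeros(p)
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                  # bootstrap resample
            Xb, yb = X[idx], y[idx]
            # crude association score: |case mean - control mean| per SNP
            score = np.abs(Xb[yb == 1].mean(0) - Xb[yb == 0].mean(0))
            # argsort of argsort yields each SNP's rank (0 = weakest)
            rank_sum += np.argsort(np.argsort(score))
        return rank_sum / n_boot   # higher mean rank = more robustly associated

    rng = np.random.default_rng(1)
    y = rng.integers(0, 2, 200)                          # case/control labels
    X = rng.integers(0, 3, (200, 30)).astype(float)      # genotypes 0/1/2
    X[:, 0] += y                                         # SNP 0 is truly associated
    mean_rank = bootrank(X, y)
    ```

    Averaging ranks rather than taking a single ranking is what buys robustness: a SNP that scores well only on one lucky split is pushed down.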

  15. Robust Visual Tracking via Online Discriminative and Low-Rank Dictionary Learning.

    Science.gov (United States)

    Zhou, Tao; Liu, Fanghui; Bhaskar, Harish; Yang, Jie

    2017-09-12

    In this paper, we propose a novel and robust tracking framework based on online discriminative and low-rank dictionary learning. The primary aim is to obtain compact and low-rank dictionaries that can provide good discriminative representations of both target and background. We accomplish this by exploiting the recovery ability of low-rank matrices: if we assume that the data from the same class are linearly correlated, then the corresponding basis vectors learned from the training set of each class render the dictionary approximately low-rank. The proposed dictionary learning technique incorporates a reconstruction error that improves the reliability of classification. Also, a multi-constraint objective function is designed to enable active learning of a discriminative and robust dictionary. Further, an optimal solution is obtained by iteratively computing the dictionary and coefficients while simultaneously learning the classifier parameters. Finally, a simple yet effective likelihood function is implemented to estimate the optimal state of the target during tracking. Moreover, to make the dictionary adaptive to the variations of the target and background during tracking, an online update criterion is employed while learning the new dictionary. Experimental results on a publicly available benchmark dataset demonstrate that the proposed tracking algorithm performs better than other state-of-the-art trackers.

  16. An Approach for Leukemia Classification Based on Cooperative Game Theory

    Directory of Open Access Journals (Sweden)

    Atefeh Torkaman

    2011-01-01

    Full Text Available Hematological malignancies are the types of cancer that affect blood, bone marrow and lymph nodes. As these tissues are naturally connected through the immune system, a disease affecting one of them will often affect the others as well. The hematological malignancies include leukemia, lymphoma and multiple myeloma. Among them, leukemia is a serious malignancy that starts in blood-forming tissues, especially the bone marrow, where the blood is made. Research shows that leukemia is one of the most common cancers in the world, so an emphasis on diagnostic techniques and the best treatments can provide better prognosis and survival for patients. In this paper, an automatic diagnosis recommender system for classifying leukemia based on cooperative game theory is presented. Throughout this research, we analyze flow cytometry data toward the classification of leukemia into eight classes. We work on a real data set from different types of leukemia that has been collected at the Iran Blood Transfusion Organization (IBTO). The data set contains 400 samples taken from human leukemic bone marrow. This study uses a cooperative game for classification according to the different weights assigned to the markers. The proposed method is versatile, as there are no constraints on what the input or output represent; this means that it can be used to classify a population according to their contributions, and it applies equally to other groups of data. The experimental results show a classification accuracy of 93.12%, compared to 90.16% for a decision tree (C4.5). The result demonstrates that cooperative game theory is very promising for direct classification of leukemia as part of an active medical decision support system for interpreting flow cytometry readouts. This system could assist clinical hematologists to properly recognize different kinds of leukemia by offering suggestions, and this could improve the treatment
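    The cooperative-game machinery typically used for such marker weighting is the Shapley value: each marker's weight is its average marginal contribution over all orderings. A minimal exact implementation, with hypothetical marker names and coalition worths (the abstract does not give the actual values), might look like:

    ```python
    from itertools import permutations
    from math import factorial

    def shapley_values(players, value):
        """Exact Shapley values by enumerating all player orderings.
        `value` maps a frozenset of players to the coalition's worth
        (e.g. classification accuracy using that marker subset)."""
        n = len(players)
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition = frozenset()
            for p in order:
                # marginal contribution of p to the growing coalition
                phi[p] += value(coalition | {p}) - value(coalition)
                coalition = coalition | {p}
        return {p: v / factorial(n) for p, v in phi.items()}

    # Hypothetical marker worths: CD34 alone is highly informative
    worth = {frozenset(): 0.0,
             frozenset({"CD34"}): 0.6,
             frozenset({"CD19"}): 0.2,
             frozenset({"CD34", "CD19"}): 0.7}
    phi = shapley_values(["CD34", "CD19"], lambda s: worth[s])
    ```

    The values sum to the grand-coalition worth (efficiency), so they behave as a principled weight allocation across markers. Exact enumeration is factorial in the number of players, so real marker panels use sampling approximations.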

  17. Unclassified sarcomas : a study to improve classification in a cohort of Golden Retriever dogs

    NARCIS (Netherlands)

    Boerkamp, Kim M; Hellmén, Eva; Willén, Helena; Grinwis, Guy C M; Teske, Erik; Rutteman, Gerard R

    2016-01-01

    Morphologically, canine soft-tissue sarcomas (STSs) resemble human STSs. In humans, proper classification of STSs is considered essential to improve insight in the biology of these tumors, and to optimize diagnosis and therapy. To date, there is a paucity of data published on the significance of

  18. Robust spinal cord resting-state fMRI using independent component analysis-based nuisance regression noise reduction.

    Science.gov (United States)

    Hu, Yong; Jin, Richu; Li, Guangsheng; Luk, Keith Dk; Wu, Ed X

    2018-04-16

    Physiological noise reduction plays a critical role in spinal cord (SC) resting-state fMRI (rsfMRI). This study aimed to reduce physiological noise and increase the robustness of SC rsfMRI by using an independent component analysis (ICA)-based nuisance regression (ICANR) method, in a retrospective study of ten healthy subjects (female/male = 4/6, age = 27 ± 3 years, range 24-34 years) scanned with 3T gradient-echo echo planar imaging (EPI). We compared the performance of ICANR with three alternative methods: no regression (Nil), a conventional region-of-interest (ROI)-based noise reduction method without ICA (ROI-based), and correction of structured noise using spatial independent component analysis (CORSICA). We examined the reduction of the influence of physiological noise on the SC and the reproducibility of rsfMRI analysis after noise reduction. The correlation coefficient (CC) was calculated to assess the influence of physiological noise, and reproducibility was assessed by the intraclass correlation (ICC); results from the different methods were compared by one-way analysis of variance (ANOVA) with post-hoc analysis. No significant difference in cerebrospinal fluid (CSF) pulsation influence or tissue motion influence was found (P = 0.223 for CSF, P = 0.2461 for tissue motion) between the ROI-based (CSF: 0.122 ± 0.020; tissue motion: 0.112 ± 0.015) and Nil (CSF: 0.134 ± 0.026; tissue motion: 0.124 ± 0.019) methods. CORSICA showed a significantly stronger influence of CSF pulsation and tissue motion (CSF: 0.166 ± 0.045, P = 0.048; tissue motion: 0.160 ± 0.032, P = 0.048) than Nil. ICANR showed a significantly weaker influence of CSF pulsation and tissue motion (CSF: 0.076 ± 0.007, P = 0.0003; tissue motion: 0.081 ± 0.014, P = 0.0182) than Nil. The ICC values for Nil, ROI-based, CORSICA, and ICANR were 0.669, 0.645, 0.561, and 0.766, respectively. ICANR reduced physiological noise from both tissue motion and CSF pulsation more effectively than the three alternative methods and increases the robustness of SC rsfMRI.
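    The ICA-based nuisance regression pattern can be sketched on synthetic data. This is a hedged toy, not the ICANR pipeline: here the nuisance component is flagged by correlation with a known reference waveform, whereas the actual method identifies nuisance components from CSF-mask spatial maps and physiological information.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_t, n_vox = 300, 20
    t = np.arange(n_t) / 2.0   # assumed sampling of 2 Hz

    neural = np.sin(2 * np.pi * 0.05 * t)          # slow signal of interest
    csf_pulsation = np.sin(2 * np.pi * 0.8 * t)    # fast cardiac-locked nuisance
    voxels = (np.outer(neural, rng.random(n_vox)) +
              np.outer(csf_pulsation, rng.random(n_vox)) +
              0.1 * rng.standard_normal((n_t, n_vox)))

    # 1) Decompose the voxel time series into independent components
    components = FastICA(n_components=5, random_state=0).fit_transform(voxels)

    # 2) Flag the nuisance component (here by correlation with a known
    #    reference; ICANR itself uses CSF masks and physiological traces)
    corr = [abs(np.corrcoef(components[:, k], csf_pulsation)[0, 1]) for k in range(5)]
    nuisance = components[:, [int(np.argmax(corr))]]

    # 3) Regress the flagged component out of every voxel time series
    beta, *_ = np.linalg.lstsq(nuisance, voxels, rcond=None)
    cleaned = voxels - nuisance @ beta
    ```

    The key design point is that the regressor is estimated from the data itself (an independent component) rather than assumed, which is what lets the approach adapt to subject-specific pulsation patterns.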

  19. Differentiation of osteophyte types in osteoarthritis - proposal of a histological classification.

    Science.gov (United States)

    Junker, Susann; Krumbholz, Grit; Frommer, Klaus W; Rehart, Stefan; Steinmeyer, Jürgen; Rickert, Markus; Schett, Georg; Müller-Ladner, Ulf; Neumann, Elena

    2016-01-01

    Osteoarthritis is not only characterized by cartilage degradation but also involves subchondral bone remodeling and osteophyte formation. Osteophytes are fibrocartilage-capped bony outgrowths originating from the periosteum. The pathophysiology of osteophyte formation is not completely understood, yet different research approaches are under way. Therefore, a histological osteophyte classification to achieve comparable results in osteophyte research was established for application to basic science research questions. The osteophytes were collected from knee joints of osteoarthritis patients (n=10, 94 osteophytes in total) after joint replacement surgery. Their size and origin in the respective joint were photo-documented. To develop an osteophyte classification, serial tissue sections were evaluated using histological (hematoxylin and eosin, Masson's trichrome, toluidine blue) and immunohistochemical staining (collagen type II). Based on the histological and immunohistochemical evaluation, osteophytes were categorized into four different types depending on the degree of ossification and the percentage of mesenchymal connective tissue. Size and localization of osteophytes were independent of the histological stages. This histological classification system of osteoarthritis osteophytes provides a helpful tool for analyzing and monitoring osteophyte development and for characterizing osteophyte types within a single human joint, and may therefore contribute to achieving comparable results when analyzing histological findings in osteophytes. Copyright © 2015 Société française de rhumatologie. Published by Elsevier SAS. All rights reserved.

  20. Ecosystem services provided by a complex coastal region: challenges of classification and mapping

    Science.gov (United States)

    Sousa, Lisa P.; Sousa, Ana I.; Alves, Fátima L.; Lillebø, Ana I.

    2016-03-01

    A variety of ecosystem services classification systems and mapping approaches are available in the scientific and technical literature, which need to be selected and adapted when applied to complex territories (e.g. at the interface between water and land, or estuary and sea). This paper provides a framework for addressing ecosystem services in complex coastal regions. The roadmap comprises the definition of the exact geographic boundaries of the study area; the use of CICES (Common International Classification of Ecosystem Services) for ecosystem services identification and classification; and the definition of qualitative indicators that serve as the basis to map the ecosystem services. Due to its complexity, the Ria de Aveiro coastal region was selected as the case study, presenting an opportunity to explore the application of such approaches at a regional scale. The main challenges of implementing the proposed roadmap, together with its advantages, are discussed in this research. The results highlight the importance of considering both the connectivity of natural systems and the complexity of the governance framework; the flexibility and robustness, but also the challenges, of applying CICES at a regional scale; and the challenges regarding ecosystem services mapping.

  1. Pulmonary emphysema classification based on an improved texton learning model by sparse representation

    Science.gov (United States)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2013-03-01

    In this paper, we present a texture classification method based on textons learned via sparse representation (SR), with new feature histogram maps, for the classification of emphysema. First, an overcomplete dictionary of textons is learned via KSVD learning on image patches of every class in the training dataset. In this stage, a high-pass filter is introduced to exclude patches in smooth areas and speed up the dictionary learning process. Second, 3D joint-SR coefficients and intensity histograms of the test images are used for characterizing regions of interest (ROIs), instead of the conventional feature histograms constructed from SR coefficients of the test images over the dictionary. Classification is then performed using a classifier with a histogram dissimilarity measure as the distance. Four hundred and seventy annotated ROIs extracted from 14 test subjects, including 6 paraseptal emphysema (PSE) subjects, 5 centrilobular emphysema (CLE) subjects and 3 panlobular emphysema (PLE) subjects, are used to evaluate the effectiveness and robustness of the proposed method. The proposed method is tested on 167 PSE, 240 CLE and 63 PLE ROIs consisting of mild, moderate and severe pulmonary emphysema. The accuracy of the proposed system is around 74%, 88% and 89% for PSE, CLE and PLE, respectively.
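    A rough sketch of the texton pipeline, using scikit-learn's MiniBatchDictionaryLearning with OMP sparse coding in place of the paper's KSVD, and random pixels standing in for a CT ROI:

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d

    rng = np.random.default_rng(0)
    roi = rng.random((64, 64))   # random stand-in for a lung CT region of interest

    # 1) Learn an overcomplete texton dictionary from mean-centered image patches
    patches = extract_patches_2d(roi, (5, 5), max_patches=500, random_state=0)
    patches = patches.reshape(len(patches), -1)
    patches -= patches.mean(axis=1, keepdims=True)

    dico = MiniBatchDictionaryLearning(n_components=40, transform_algorithm="omp",
                                       transform_n_nonzero_coefs=3, random_state=0)
    codes = dico.fit(patches).transform(patches)   # sparse coefficients per patch

    # 2) Characterize the ROI by a normalized histogram of texton activations
    activation = np.abs(codes).sum(axis=0)
    histogram = activation / activation.sum()
    ```

    With 40 atoms for 25-dimensional patches the dictionary is overcomplete, and OMP limits each patch to at most 3 active textons, so the histogram summarizes which textons a tissue region preferentially uses.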

  2. Mid-level image representations for real-time heart view plane classification of echocardiograms.

    Science.gov (United States)

    Penatti, Otávio A B; Werneck, Rafael de O; de Almeida, Waldir R; Stein, Bernardo V; Pazinato, Daniel V; Mendes Júnior, Pedro R; Torres, Ricardo da S; Rocha, Anderson

    2015-11-01

    In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is image sampling with large regions, which drastically reduces the execution time of the image characterization procedure. Through an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes. The results show that our approach is effective and efficient for the target problem, making it suitable for use in real-time setups. The proposed representations are also robust to different image transformations (e.g., downsampling and noise filtering) and to different machine learning classifiers, keeping classification accuracy above 90%. Feature extraction can be performed at 30 fps, or 60 fps in some cases. This paper also includes an in-depth review of the literature in the area of automatic echocardiogram view classification, giving the reader a thorough comprehension of this field of study. Copyright © 2015 Elsevier Ltd. All rights reserved.
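    The bag-of-visual-words representation underlying these mid-level features can be sketched as follows (random vectors stand in for real patch descriptors; the vocabulary size and descriptor dimension are arbitrary illustrative choices):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Stand-ins for local descriptors densely sampled from training images
    train_descriptors = rng.random((1000, 16))

    # 1) Build the visual vocabulary by clustering the local descriptors
    vocabulary = KMeans(n_clusters=32, n_init=4, random_state=0).fit(train_descriptors)

    def bag_of_words(descriptors, vocab):
        """Represent one image as a normalized histogram of visual-word counts."""
        words = vocab.predict(descriptors)
        hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
        return hist / hist.sum()

    # 2) Encode a new image's descriptors against the vocabulary
    image_vector = bag_of_words(rng.random((120, 16)), vocabulary)
    ```

    Sampling fewer, larger regions (as the paper does) shrinks the descriptor count per image, which is exactly where the real-time speedup comes from: the histogram length depends only on the vocabulary, not on how densely the image was sampled.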

  4. International Conference on Robust Statistics

    CERN Document Server

    Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter

    2003-01-01

    Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, papers on applications and programming tools complete the volume.

  5. [Biocybernetic approach to the thermometric methods of blood supply measurements of periodontal tissues].

    Science.gov (United States)

    Pastusiak, J; Zakrzewski, J

    1988-11-01

    A biocybernetic approach to the problem of determining the blood supply of periodontal tissues by means of thermometric methods is presented in the paper. Compartment models of the measuring procedure are given, and a dilutodynamic methodology and classification are applied. Such an approach enables the selection of appropriate biophysical parameters describing the state of the blood supply of periodontal tissues, as well as the optimal design of transducers and measuring methods.

  6. An improved parameter estimation and comparison for soft tissue constitutive models containing an exponential function.

    Science.gov (United States)

    Aggarwal, Ankush

    2017-08-01

    Motivated by the well-known result that the stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that, as a consequence of the exponential function, there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based upon this new insight, we also propose a transformed parameter space that allows for rational parameter comparison and avoids misleading conclusions regarding soft tissue mechanics.
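    The lines of high covariance can be demonstrated numerically. For an assumed 1-D exponential law of the generic form stress = a·(exp(b·strain) − 1) (an illustrative form, not necessarily the paper's), the sensitivities with respect to a and b are nearly collinear over a physiological strain range, which is exactly what produces elongated, highly covariant confidence regions:

    ```python
    import numpy as np

    # Assumed 1-D exponential soft-tissue law: stress = a * (exp(b * strain) - 1)
    strain = np.linspace(0.0, 0.3, 50)
    a, b = 1.0, 10.0

    # Sensitivities (Jacobian columns) of the model with respect to a and b
    J_a = np.exp(b * strain) - 1.0
    J_b = a * strain * np.exp(b * strain)

    # Near-collinear sensitivity vectors imply that a least-squares fit can
    # trade a against b with little change in the fitted stress-strain curve
    corr = float(np.corrcoef(J_a, J_b)[0, 1])
    ```

    A correlation close to 1 between the Jacobian columns means the normal equations are nearly singular, so trading a larger a for a smaller b (or vice versa) barely changes the fit, which is why reparameterization helps.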

  7. An application to pulmonary emphysema classification based on model of texton learning by sparse representation

    Science.gov (United States)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryojiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-03-01

    We aim to use a new texton-based texture classification method for the classification of pulmonary emphysema in computed tomography (CT) images of the lungs. Different from conventional computer-aided diagnosis (CAD) pulmonary emphysema classification methods, in this paper, firstly, the dictionary of textons is learned by applying sparse representation (SR) to image patches in the training dataset. Then the SR coefficients of the test images over the dictionary are used to construct histograms for texture representation. Finally, classification is performed using a nearest neighbor classifier with a histogram dissimilarity measure as the distance. The proposed approach is tested on 3840 annotated regions of interest consisting of normal tissue and mild, moderate and severe pulmonary emphysema of three subtypes. The performance of the proposed system, with an accuracy of about 88%, is higher than that of a state-of-the-art method based on rotation-invariant local binary pattern histograms and of a texture classification method based on texton learning by k-means, which performs almost the best among other approaches in the literature.

  8. State-of-the-Art Methods for Brain Tissue Segmentation: A Review.

    Science.gov (United States)

    Dora, Lingraj; Agrawal, Sanjay; Panda, Rutuparna; Abraham, Ajith

    2017-01-01

    Brain tissue segmentation is one of the most sought-after research areas in medical image processing. It provides detailed quantitative brain analysis for accurate disease diagnosis, detection, and classification of abnormalities, and plays an essential role in discriminating healthy tissues from lesion tissues. Therefore, accurate disease diagnosis and treatment planning depend largely on the performance of the segmentation method used. In this review, we have studied the recent advances in brain tissue segmentation methods and their state of the art in neuroscience research. The review also highlights the major challenges faced during tissue segmentation of the brain. An effective comparison is made among state-of-the-art brain tissue segmentation methods. Moreover, some of the validation measures used to evaluate different segmentation methods are also discussed. The brain tissue segmentation methodologies and experiments presented in this review are encouraging enough to attract researchers working in this field.

  9. Robust Scientists

    DEFF Research Database (Denmark)

    Gorm Hansen, Birgitte

    ...knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and pushes scientists away from their core interests, 2) developing a self-supply of industry interests by becoming entrepreneurs and thus creating their own compliant industry partner, and 3) balancing resources within a larger collective of researchers, thus countering changes in the influx of funding caused by shifts in political...

  10. Classification of interstitial lung disease patterns with topological texture features

    Science.gov (United States)

    Huber, Markus B.; Nagarajan, Mahesh; Leinsinger, Gerda; Ray, Lawrence A.; Wismüller, Axel

    2010-03-01

    Topological texture features were compared in their ability to classify morphological patterns known as 'honeycombing', which are considered indicative of the presence of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing, a stack of 70 axial, lung kernel reconstructed images was acquired from HRCT chest exams. A set of 241 regions of interest of both healthy and pathological (89) lung tissue was identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and three Minkowski Functionals (MFs, e.g. MF.euler). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test was used to compare two accuracy distributions, and the significance thresholds were adjusted for multiple comparisons by the Bonferroni correction. The best classification results were obtained by the MF features, which performed significantly better than all the standard GLCM and MD features (p < 0.005) for both classifiers. The highest accuracy was found for MF.euler (97.5% and 96.6% for the k-NN and RBFN classifiers, respectively). The best standard texture features were the GLCM features 'homogeneity' (91.8%, 87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced topological texture features can provide superior classification performance in computer-assisted diagnosis of interstitial lung diseases when compared to standard texture analysis methods.
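    A self-contained sketch of the GLCM baseline features used here, implemented NumPy-only for a single (0, 1) offset (skimage's `graycomatrix`/`graycoprops` provide the full multi-offset version); the "smooth" and "rough" arrays are synthetic stand-ins, not HRCT data:

    ```python
    import numpy as np

    def glcm_features(img, levels=8):
        """Gray-level co-occurrence matrix for the (0, 1) offset plus two
        Haralick-style properties, implemented directly in NumPy."""
        q = np.floor(img / img.max() * (levels - 1)).astype(int)   # quantize
        glcm = np.zeros((levels, levels))
        for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):      # count pairs
            glcm[i, j] += 1
        glcm /= glcm.sum()                                         # joint probs
        ii, jj = np.indices(glcm.shape)
        homogeneity = (glcm / (1.0 + np.abs(ii - jj))).sum()
        contrast = (glcm * (ii - jj) ** 2).sum()
        return np.array([homogeneity, contrast])

    rng = np.random.default_rng(0)
    smooth = rng.random((32, 32)).cumsum(axis=1) / 32   # slowly varying rows
    rough = rng.random((32, 32))                        # speckled texture
    f_smooth, f_rough = glcm_features(smooth), glcm_features(rough)
    ```

    Smooth tissue concentrates probability mass on the GLCM diagonal (high homogeneity, low contrast), while honeycombing-like speckle spreads it off-diagonal, which is the signal these classifiers exploit.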

  11. Robust Nonnegative Matrix Factorization via Joint Graph Laplacian and Discriminative Information for Identifying Differentially Expressed Genes

    Directory of Open Access Journals (Sweden)

    Ling-Yun Dai

    2017-01-01

    Full Text Available Differential expression plays an important role in cancer diagnosis and classification. In recent years, many methods have been used to identify differentially expressed genes. However, the recognition rate and reliability of gene selection still need to be improved. In this paper, a novel constrained method named robust nonnegative matrix factorization via joint graph Laplacian and discriminative information (GLD-RNMF) is proposed for identifying differentially expressed genes, in which manifold learning and discriminative label information are incorporated into the traditional nonnegative matrix factorization model to train the objective matrix. Specifically, L2,1-norm minimization is enforced on both the error function and the regularization term, which makes the model robust to outliers and noise in gene data. Furthermore, the multiplicative update rules and the details of the convergence proof are shown for the new model. The experimental results on two publicly available cancer datasets demonstrate that GLD-RNMF is an effective method for identifying differentially expressed genes.
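    The multiplicative-update backbone of such NMF models can be sketched as follows. This is the plain Frobenius-norm variant only; the paper's L2,1-norm loss, graph-Laplacian term, and label information would add extra factors to these updates and are omitted here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((30, 20))   # nonnegative gene-expression-like matrix

    k, eps = 5, 1e-10
    W = rng.random((30, k))    # basis ("metagenes")
    H = rng.random((k, 20))    # coefficients per sample

    for _ in range(200):
        # Standard multiplicative updates for min ||X - WH||_F^2; nonnegativity
        # is preserved because every factor in the update is nonnegative
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)

    err = float(np.linalg.norm(X - W @ H) / np.linalg.norm(X))
    ```

    Multiplicative updates are popular here precisely because they keep W and H nonnegative without projection steps, and regularizers such as a graph Laplacian slot in as additional terms in the numerator and denominator.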

  12. A Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data

    Science.gov (United States)

    Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.

    2018-04-01

    Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing, and joint extraction of this information is one of the most important approaches to hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed that extracts spectral-spatial information of hyperspectral images effectively. The proposed model not only learns sufficient knowledge from a limited number of samples but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Though CNNs have shown robustness to distortion, they cannot extract features of different scales through the traditional pooling layer, which has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
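    Spatial pyramid pooling can be sketched in a few lines: whatever the spatial size of the feature map, max-pooling into a fixed set of bins per pyramid level yields a fixed-length vector. This is a NumPy stand-in for the network layer, with illustrative level sizes:

    ```python
    import numpy as np

    def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
        """Max-pool an (H, W, C) feature map into n x n bins for each pyramid
        level and concatenate, giving a fixed-length vector regardless of H, W."""
        h, w, c = feature_map.shape
        pooled = []
        for n in levels:
            hs = np.linspace(0, h, n + 1).astype(int)   # bin edges along height
            ws = np.linspace(0, w, n + 1).astype(int)   # bin edges along width
            for i in range(n):
                for j in range(n):
                    cell = feature_map[hs[i]:hs[i + 1], ws[j]:ws[j + 1], :]
                    pooled.append(cell.max(axis=(0, 1)))
        return np.concatenate(pooled)   # length = (1 + 4 + 16) * C

    v1 = spatial_pyramid_pool(np.random.rand(13, 13, 8))
    v2 = spatial_pyramid_pool(np.random.rand(20, 17, 8))
    ```

    Both calls return a 168-dimensional vector despite different input sizes, which is what lets a fixed fully-connected classifier sit on top of variable-scale convolutional features.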

  13. Classification of hydrocephalus: critical analysis of classification categories and advantages of "Multi-categorical Hydrocephalus Classification" (Mc HC).

    Science.gov (United States)

    Oi, Shizuo

    2011-10-01

    Hydrocephalus is a complex pathophysiology with disturbed cerebrospinal fluid (CSF) circulation. Numerous classification attempts have been published focusing on various criteria, such as associated anomalies/underlying lesions, CSF circulation/intracranial pressure patterns, clinical features, and other categories. However, no definitive classification exists that comprehensively covers the variety of these aspects. The new classification of hydrocephalus, "Multi-categorical Hydrocephalus Classification" (Mc HC), was invented and developed to cover the entire spectrum of hydrocephalus with all considerable classification items and categories. The ten "Mc HC" categories are I: onset (age, phase), II: cause, III: underlying lesion, IV: symptomatology, V: pathophysiology 1-CSF circulation, VI: pathophysiology 2-ICP dynamics, VII: chronology, VIII: post-shunt, IX: post-endoscopic third ventriculostomy, and X: others. From a 100-year search of publications related to the classification of hydrocephalus, 14 representative publications were reviewed and divided into the 10 categories. The Baumkuchen classification graph made from the round-the-clock classification demonstrated the historical tendency of deviation toward the categories in pathophysiology, either CSF or ICP dynamics. In the preliminary clinical application, it was concluded that "Mc HC" is extremely effective in expressing the individual state with various categories in the past and present condition, or among compatible cases of hydrocephalus, along with possible chronological change in the future.

  14. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  15. Automatic tissue characterization from ultrasound imagery

    Science.gov (United States)

    Kadah, Yasser M.; Farag, Aly A.; Youssef, Abou-Bakr M.; Badawi, Ahmed M.

    1993-08-01

    In this work, feature extraction algorithms are proposed to extract the tissue characterization parameters from liver images. Then the resulting parameter set is further processed to obtain the minimum number of parameters representing the most discriminating pattern space for classification. This preprocessing step was applied to over 120 pathology-investigated cases to obtain the learning data for designing the classifier. The extracted features are divided into independent training and test sets and are used to construct both statistical and neural classifiers. The optimal criteria for these classifiers are set to have minimum error, ease of implementation and learning, and the flexibility for future modifications. Various algorithms for implementing various classification techniques are presented and tested on the data. The best performance was obtained using a single layer tensor model functional link network. Also, the voting k-nearest neighbor classifier provided comparably good diagnostic rates.
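
The voting k-nearest-neighbor classifier mentioned above can be sketched in a few lines (a generic illustration, not the authors' implementation; the toy "tissue parameter" clusters below are invented):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Voting k-NN: each test sample takes the majority label
    among its k nearest training samples (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# two well-separated toy clusters standing in for real tissue features
rng = np.random.default_rng(1)
X0 = rng.normal(loc=0.0, scale=0.5, size=(20, 4))
X1 = rng.normal(loc=3.0, scale=0.5, size=(20, 4))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 20 + [1] * 20)
X_test = np.array([[0.1, 0.0, 0.2, -0.1], [2.9, 3.1, 3.0, 2.8]])
pred = knn_predict(X_train, y_train, X_test, k=5)
```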

  16. A Robust and Fast System for CTC Computer-Aided Detection of Colorectal Lesions

    Directory of Open Access Journals (Sweden)

    Gareth Beddoe

    2010-01-01

    We present a complete, end-to-end computer-aided detection (CAD) system for identifying lesions in the colon imaged with computed tomography (CT). This system includes facilities for colon segmentation, candidate generation, feature analysis, and classification. The algorithms have been designed to remain robust under variation in image data and patient preparation. By utilizing efficient 2D and 3D processing, software optimizations, multi-threading, feature selection, and an optimized cascade classifier, the CAD system quickly determines a set of detection marks. The colon CAD system has been validated on the largest dataset to date and demonstrates excellent performance in terms of high sensitivity, a low false-positive rate, and computational efficiency.

  17. SECONDARY PULMONARY ARTERIAL HYPERTENSION IN SYSTEMIC DISEASES OF CONNECTIVE TISSUE

    Directory of Open Access Journals (Sweden)

    N. A. Shostak

    2016-01-01

    The modern definition of pulmonary arterial hypertension (PAH) is presented, together with data on the prevalence and incidence of secondary PAH in systemic diseases of connective tissue, including data from the US, French, and Scottish registries. The main links in its pathogenesis, classification approaches, clinical features, and diagnostics are described.

  19. Clinically-inspired automatic classification of ovarian carcinoma subtypes

    Directory of Open Access Journals (Sweden)

    Aicha BenTaieb

    2016-01-01

    Context: It has been shown that ovarian carcinoma subtypes are distinct pathologic entities with differing prognostic and therapeutic implications. Histotyping by pathologists has good reproducibility, but occasional cases are challenging and require immunohistochemistry and subspecialty consultation. Motivated by the need for more accurate and reproducible diagnoses, and to facilitate pathologists' workflow, we propose an automatic framework for ovarian carcinoma classification. Materials and Methods: Our method is inspired by pathologists' workflow. We analyse imaged tissues at two magnification levels and extract clinically inspired color, texture, and segmentation-based shape descriptors using image-processing methods. We propose a carefully designed machine learning pipeline composed of four modules: a dissimilarity matrix, dimensionality reduction, feature selection, and a support vector machine classifier to separate the five ovarian carcinoma subtypes using the extracted features. Results: This paper presents the details of our implementation and its validation on a clinically derived dataset of eighty high-resolution histopathology images. The proposed system achieved a multiclass classification accuracy of 95.0% when classifying unseen tissues. Assessment of the classifier's confusion matrix between the five ovarian carcinoma subtypes agrees with clinicians' confusion and reflects the difficulty of diagnosing endometrioid and serous carcinomas. Conclusions: Our results from this first study highlight the difficulty of ovarian carcinoma diagnosis, which originates from the intrinsic class imbalance observed among subtypes, and suggest that automatic analysis of ovarian carcinoma subtypes could add value to clinicians' diagnostic procedure by providing a second opinion.

  20. Robust Trust in Expert Testimony

    Directory of Open Access Journals (Sweden)

    Christian Dahlman

    2015-05-01

    The standard of proof in criminal trials should require that the evidence presented by the prosecution is robust. This requirement of robustness says that it must be unlikely that additional information would change the probability that the defendant is guilty. Robustness is difficult for a judge to estimate, as it requires the judge to assess the possible effect of information that he or she does not have. This article is concerned with expert witnesses and proposes a method for reviewing the robustness of expert testimony. According to the proposed method, the robustness of expert testimony is estimated with regard to competence, motivation, external strength, internal strength, and relevance. The danger of trusting non-robust expert testimony is illustrated with an analysis of the Thomas Quick case, a Swedish legal scandal in which a patient at a mental institution was wrongfully convicted of eight murders.

  1. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    Science.gov (United States)

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.

  2. Robustness Analysis of Timber Truss Structure

    DEFF Research Database (Denmark)

    Rajčić, Vlatka; Čizmar, Dean; Kirkegaard, Poul Henning

    2010-01-01

    The present paper discusses robustness of structures in general and the robustness requirements given in the codes. Robustness of timber structures is also an issue, as it is closely related to Working Group 3 (Robustness of Systems) of the COST E55 project. Finally, an example of a robustness evaluation of a wide-span timber truss structure is presented. This structure was built a few years ago near Zagreb and has a span of 45 m. A reliability analysis of the main members and of the system is conducted, and based on this a robustness analysis is performed.

  3. Feature Selection Methods for Robust Decoding of Finger Movements in a Non-human Primate

    Science.gov (United States)

    Padmanaban, Subash; Baker, Justin; Greger, Bradley

    2018-01-01

    Objective: The performance of machine learning algorithms used for neural decoding of dexterous tasks may be impeded by problems that arise when dealing with high-dimensional data. The objective of feature selection algorithms is to choose a near-optimal subset of features from the original feature space to improve the performance of the decoding algorithm. The aim of our study was to compare the effects of four feature selection techniques, the Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis (PCA), and Mutual Information Maximization, on SVM classification performance for a dexterous decoding task. Approach: A non-human primate (NHP) was trained to perform small coordinated movements, similar to typing. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials (AP) during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon AP firing rates. We used the SVM classification to examine the functional parameters of (i) robustness to simulated failure and (ii) longevity of classification. We also compared the effect of using isolated-neuron and multi-unit firing rates as the feature vector supplied to the SVM. Main results: The average decoding accuracy for multi-unit features and single-unit features using Mutual Information Maximization (MIM) across 47 sessions was 96.74 ± 3.5% and 97.65 ± 3.36%, respectively. The reduction in decoding accuracy between using 100% of the features and 10% of the features based on MIM was 45.56% (from 93.7 to 51.09%) and 4.75% (from 95.32 to 90.79%) for multi-unit and single-unit features, respectively. MIM had the best performance of the feature selection methods compared. Significance: These results suggest improved decoding performance can be achieved by using optimally selected features. The results based on clinically relevant performance metrics also suggest that the decoding
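
Mutual-information-based feature ranking of the kind used above can be sketched as follows (a simplified, histogram-based MI estimate in NumPy; the synthetic data and bin count are assumptions, not details from the study):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """MI (in nats) between a continuous feature x, discretized into
    equal-width bins, and a discrete label vector y."""
    x_binned = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])
    mi = 0.0
    for xv in np.unique(x_binned):
        for yv in np.unique(y):
            p_xy = np.mean((x_binned == xv) & (y == yv))
            p_x = np.mean(x_binned == xv)
            p_y = np.mean(y == yv)
            if p_xy > 0:
                mi += p_xy * np.log(p_xy / (p_x * p_y))
    return mi

def select_top_features(X, y, k):
    """Rank features by MI with the labels and keep the top k indices."""
    scores = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

# one feature tracks the label; the rest are pure noise
rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=200)
informative = y + rng.normal(scale=0.3, size=200)
noise = rng.normal(size=(200, 3))
X = np.column_stack([noise[:, 0], informative, noise[:, 1], noise[:, 2]])
top = select_top_features(X, y, k=1)
```

Only the ranking step is shown; in the study the selected features would then be passed to the SVM.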

  4. ELUCIDATING BRAIN CONNECTIVITY NETWORKS IN MAJOR DEPRESSIVE DISORDER USING CLASSIFICATION-BASED SCORING.

    Science.gov (United States)

    Sacchet, Matthew D; Prasad, Gautam; Foland-Ross, Lara C; Thompson, Paul M; Gotlib, Ian H

    2014-04-01

    Graph theory is increasingly used in the field of neuroscience to understand the large-scale network structure of the human brain. There is also considerable interest in applying machine learning techniques in clinical settings, for example, to make diagnoses or predict treatment outcomes. Here we used support-vector machines (SVMs), in conjunction with whole-brain tractography, to identify graph metrics that best differentiate individuals with Major Depressive Disorder (MDD) from nondepressed controls. To do this, we applied a novel feature-scoring procedure that incorporates iterative classifier performance to assess feature robustness. We found that small-worldness, a measure of the balance between global integration and local specialization, most reliably differentiated MDD from nondepressed individuals. Post-hoc regional analyses suggested that heightened connectivity of the subcallosal cingulate gyrus (SCG) in individuals with MDD contributes to these differences. The current study provides a novel way to assess the robustness of classification features and reveals anomalies in large-scale neural networks in MDD.

  5. A comparative study of PCA, SIMCA and Cole model for classification of bioimpedance spectroscopy measurements.

    Science.gov (United States)

    Nejadgholi, Isar; Bolic, Miodrag

    2015-08-01

    Due to the safety and low cost of bioimpedance spectroscopy (BIS), classification of BIS measurements can potentially be a preferred way of detecting changes in living tissues. However, for longitudinal datasets, linear classifiers fail to classify the conventional Cole parameters extracted from BIS measurements because of their high variability. In some applications, linear classification based on Principal Component Analysis (PCA) has shown more accurate results. Yet these methods have not been established for BIS classification, since PCA features have neither been investigated in combination with other classifiers nor been compared to conventional Cole features in benchmark classification tasks. In this work, PCA and Cole features are compared in three synthesized benchmark classification tasks that BIS is expected to detect: classification of before and after a geometry change, a relative composition change, and blood perfusion in a cylindrical organ. Our results show that in all tasks the features extracted by PCA are more discriminant than the Cole parameters. Moreover, a pilot study was conducted on a longitudinal arm BIS dataset including eight subjects and three arm positions. The goal of the study was to compare different methods in arm position classification, which involves all three synthesized changes mentioned above. Our comparative study of various classification methods shows that the best classification accuracy is obtained when PCA features are classified by a K-Nearest Neighbors (KNN) classifier. The results of this work suggest that PCA+KNN is a promising method for classification of BIS datasets that deal with subject and time variability. Copyright © 2015 Elsevier Ltd. All rights reserved.
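
A minimal PCA feature extraction via the singular value decomposition, of the kind the comparison above relies on, might look like this (illustrative only; the toy "spectra" are synthetic, and in practice the projected features would then be fed to a kNN classifier):

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Project centered data onto its top principal components via SVD.
    Returns the projections, the component vectors, and the fraction
    of variance explained by each component."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    explained = (S ** 2) / (len(X) - 1)
    return Xc @ components.T, components, explained / explained.sum()

# toy data: two latent directions plus small isotropic noise
rng = np.random.default_rng(3)
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(100, 10))
Z, comps, ratio = pca_fit_transform(X, n_components=2)
```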

  6. Discrimination of soft tissues using laser-induced breakdown spectroscopy in combination with k nearest neighbors (kNN) and support vector machine (SVM) classifiers

    Science.gov (United States)

    Li, Xiaohui; Yang, Sibo; Fan, Rongwei; Yu, Xin; Chen, Deying

    2018-06-01

    In this paper, discrimination of soft tissues using laser-induced breakdown spectroscopy (LIBS) in combination with multivariate statistical methods is presented. Fresh pork fat, skin, ham, loin and tenderloin muscle tissues are manually cut into slices and ablated using a 1064 nm pulsed Nd:YAG laser. Discrimination analyses between fat, skin and muscle tissues, and further between highly similar ham, loin and tenderloin muscle tissues, are performed based on the LIBS spectra in combination with multivariate statistical methods, including principal component analysis (PCA), k nearest neighbors (kNN) classification, and support vector machine (SVM) classification. Performances of the discrimination models, including accuracy, sensitivity and specificity, are evaluated using 10-fold cross validation. The classification models are optimized to achieve best discrimination performances. The fat, skin and muscle tissues can be definitely discriminated using both kNN and SVM classifiers, with accuracy of over 99.83%, sensitivity of over 0.995 and specificity of over 0.998. The highly similar ham, loin and tenderloin muscle tissues can also be discriminated with acceptable performances. The best performances are achieved with SVM classifier using Gaussian kernel function, with accuracy of 76.84%, sensitivity of over 0.742 and specificity of over 0.869. The results show that the LIBS technique assisted with multivariate statistical methods could be a powerful tool for online discrimination of soft tissues, even for tissues of high similarity, such as muscles from different parts of the animal body. This technique could be used for discrimination of tissues suffering minor clinical changes, thus may advance the diagnosis of early lesions and abnormalities.
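
The 10-fold cross-validated accuracy, sensitivity, and specificity reported above can be computed generically as sketched below (a trivial threshold rule stands in for the kNN/SVM classifiers; the one-dimensional "spectral intensity" data are synthetic):

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Shuffle 0..n-1 and split into k folds of test indices."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def evaluate(y_true, y_pred):
    """Accuracy, sensitivity (recall on class 1), specificity (recall on class 0)."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return (tp + tn) / len(y_true), tp / (tp + fn), tn / (tn + fp)

# toy 1-D feature: class 1 sits well above class 0
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])
y = np.array([0] * 100 + [1] * 100)

accs = []
for test_idx in kfold_indices(len(x), k=10):
    train = np.setdiff1d(np.arange(len(x)), test_idx)
    thresh = x[train].mean()              # stand-in for a trained classifier
    y_pred = (x[test_idx] > thresh).astype(int)
    acc, sens, spec = evaluate(y[test_idx], y_pred)
    accs.append(acc)
mean_acc = np.mean(accs)
```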

  7. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    Science.gov (United States)

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
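
One of the handcrafted baselines mentioned above, statistical measures from the gray level co-occurrence matrix (GLCM), is easy to sketch (a minimal single-offset GLCM with the contrast statistic; the toy patches are invented):

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset:
    counts how often gray level i occurs next to gray level j."""
    H, W = img.shape
    M = np.zeros((levels, levels))
    for i in range(H - dy):
        for j in range(W - dx):
            M[img[i, j], img[i + dy, j + dx]] += 1
    return M / M.sum()

def contrast(M):
    """GLCM contrast: sum_ij (i - j)^2 * p(i, j)."""
    i, j = np.indices(M.shape)
    return np.sum((i - j) ** 2 * M)

flat = np.zeros((8, 8), dtype=int)        # uniform patch -> zero contrast
stripes = np.tile([0, 3], (8, 4))         # alternating columns -> high contrast
```

Features such as contrast, computed over several offsets, would then feed a classifier like the random forest used in the comparison.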

  8. A neurally inspired musical instrument classification system based upon the sound onset.

    Science.gov (United States)

    Newton, Michael J; Smith, Leslie S

    2012-06-01

    Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
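
The leaky integrate-and-fire onset detectors referred to above can be illustrated with a minimal discrete-time neuron (a generic textbook LIF, not the authors' model; the time constant, threshold, and input are assumptions):

```python
import numpy as np

def lif_spikes(signal, dt=1e-3, tau=0.02, threshold=1.0):
    """Leaky integrate-and-fire neuron: integrate a rectified input with
    leak time constant tau; emit a spike and reset when the membrane
    potential crosses threshold. Returns spike time indices."""
    v, spikes = 0.0, []
    for t, x in enumerate(signal):
        v += dt * (-v / tau + max(x, 0.0))
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

# a silent stretch followed by a sharp "onset" (a step in drive):
# the neuron stays quiet before the onset and fires only after it
signal = np.concatenate([np.zeros(100), np.full(200, 100.0)])
spikes = lif_spikes(signal)
```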

  9. An extension of the receiver operating characteristic curve and AUC-optimal classification.

    Science.gov (United States)

    Takenouchi, Takashi; Komori, Osamu; Eguchi, Shinto

    2012-10-01

    While most proposed methods for solving classification problems focus on minimization of the classification error rate, we are interested in the receiver operating characteristic (ROC) curve, which provides more information about classification performance than the error rate does. The area under the ROC curve (AUC) is a natural measure for overall assessment of a classifier based on the ROC curve. We discuss a class of concave functions for AUC maximization in which a boosting-type algorithm including RankBoost is considered, and the Bayesian risk consistency and the lower bound of the optimum function are discussed. A procedure derived by maximizing a specific optimum function has high robustness, based on gross error sensitivity. Additionally, we focus on the partial AUC, which is the partial area under the ROC curve. For example, in medical screening, a high true-positive rate to the fixed lower false-positive rate is preferable and thus the partial AUC corresponding to lower false-positive rates is much more important than the remaining AUC. We extend the class of concave optimum functions for partial AUC optimality with the boosting algorithm. We investigated the validity of the proposed method through several experiments with data sets in the UCI repository.
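
The AUC and partial AUC discussed above can be computed directly from classifier scores (a plain NumPy sketch; the score vectors are invented, and the partial AUC is normalized by the FPR range so a perfect classifier scores 1):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive outscores a negative
    (ties count half), computed by direct pairwise comparison."""
    sp = np.asarray(scores_pos)[:, None]
    sn = np.asarray(scores_neg)[None, :]
    return np.mean((sp > sn) + 0.5 * (sp == sn))

def partial_auc(scores_pos, scores_neg, max_fpr=0.1):
    """Area under the ROC curve restricted to FPR <= max_fpr,
    normalized to [0, 1] by dividing by max_fpr."""
    thresholds = np.sort(np.concatenate([scores_pos, scores_neg]))[::-1]
    area, prev_fpr, prev_tpr = 0.0, 0.0, 0.0
    for t in thresholds:
        fpr = np.mean(scores_neg >= t)
        tpr = np.mean(scores_pos >= t)
        if fpr > max_fpr:
            # linearly interpolate the last segment up to max_fpr
            if fpr > prev_fpr:
                tpr = prev_tpr + (tpr - prev_tpr) * (max_fpr - prev_fpr) / (fpr - prev_fpr)
            fpr = max_fpr
        area += 0.5 * (tpr + prev_tpr) * (fpr - prev_fpr)
        prev_fpr, prev_tpr = fpr, tpr
        if fpr >= max_fpr:
            break
    return area / max_fpr

pos = np.array([0.9, 0.8, 0.7, 0.6])   # perfectly separated toy scores
neg = np.array([0.5, 0.4, 0.3, 0.2])
full = auc(pos, neg)
pauc = partial_auc(pos, neg, max_fpr=0.1)
```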

  10. Concentration profiling of minerals in iliac crest bone tissue of opium addicted humans using inductively coupled plasma and discriminant analysis techniques.

    Science.gov (United States)

    Mani-Varnosfaderani, Ahmad; Jamshidi, Mahbobeh; Yeganeh, Ali; Mahmoudi, Mani

    2016-02-20

    Opium addiction is one of the main health problems in developing countries and causes serious harm to the human body. In this work, the concentrations of 32 minerals, including alkaline, heavy, and toxic metals, were determined in the iliac crest bone tissue of 22 opium-addicted individuals using inductively coupled plasma-optical emission spectroscopy (ICP-OES). The bone tissues of 30 humans with no physiological or metabolic diseases were used as the control group. For subsequent analyses, linear and quadratic discriminant analysis techniques were used to classify the data into "addicted" and "non-addicted" groups. Moreover, a counter-propagation artificial neural network (CPANN) was used for clustering of the data. The results revealed that the CPANN is a robust model and classifies the data thoroughly. The area under the receiver operating characteristic curve for this model was more than 0.91. Investigation of the results revealed that opium consumption causes a deficiency in the levels of calcium, phosphate, potassium, and sodium in iliac crest bone tissue. Moreover, this type of addiction induces an increase in the levels of toxic and heavy metals such as Co, Cr, Mo, and Ni in iliac crest tissue. Correlation analysis revealed no significant dependence between subjects' ages and the mineral content of their iliac crests in this study. The results of this work suggest that opium-addicted individuals need thorough and strict dietary and medical care programs after the recovery phase in order to have healthy bones. Copyright © 2015 Elsevier B.V. All rights reserved.
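
Two-class linear discriminant analysis of the kind applied above can be sketched as follows (a standard Fisher discriminant in NumPy; the "mineral concentration" vectors are synthetic, not data from the study):

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class linear discriminant: w = Sw^-1 (mu1 - mu0), with the
    decision threshold at the projected midpoint of the class means."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, mu1 - mu0)
    c = w @ (mu0 + mu1) / 2
    return w, c

def predict_lda(X, w, c):
    """Label 1 if the projection exceeds the threshold, else 0."""
    return (X @ w > c).astype(int)

# toy 3-mineral concentration vectors for two groups
rng = np.random.default_rng(5)
X0 = rng.normal(loc=[10, 5, 2], scale=1.0, size=(40, 3))   # "control"
X1 = rng.normal(loc=[7, 5, 4], scale=1.0, size=(40, 3))    # "exposed"
w, c = fit_lda(X0, X1)
acc = np.mean(np.concatenate([predict_lda(X0, w, c) == 0,
                              predict_lda(X1, w, c) == 1]))
```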

  11. Hyperspectral Image Enhancement and Mixture Deep-Learning Classification of Corneal Epithelium Injuries.

    Science.gov (United States)

    Noor, Siti Salwa Md; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang

    2017-11-16

    In our preliminary study, the reflectance signatures obtained from hyperspectral imaging (HSI) of normal and abnormal porcine corneal epithelium tissues showed similar morphology with subtle differences. Here we present image enhancement algorithms that can be used to improve the interpretability of the data, yielding clinically relevant information to facilitate diagnostics. A total of 25 corneal epithelium images, acquired without eye staining, were used. Three image feature extraction approaches were applied for image classification: (i) histogram-based image features classified with a support vector machine with a Gaussian radial basis function (SVM-GRBF); (ii) physical image features classified with deep-learning convolutional neural networks (CNNs) only; and (iii) a combined CNN and linear-SVM classification. The performance results indicate that our chosen image features from the histogram and the length-scale parameter were able to classify with up to 100% accuracy, in particular with the CNN and CNN-SVM approaches, when 80% of the data sample was used for training and 20% for testing. Thus, in the assessment of corneal epithelium injuries, HSI has high potential as a method that could surpass current technologies regarding speed, objectivity, and reliability.

  12. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Qualitative robustness, influence function, and breakdown point are three main concepts for judging an estimator from the viewpoint of robust estimation. It is important, as well as interesting, to study the relations among them. This article presents the concept of qualitative robustness as put forward by its first proponents, together with its later development. It illustrates the intricacies of qualitative robustness and its relation to consistency, and also tries to dispel common misunderstandings about the relation between the influence function and qualitative robustness, citing examples from the literature and providing a new counter-example. Finally, it proposes a useful finite-sample and a simulated version of a qualitative robustness index (QRI). To assess the performance of the proposed measures, we compare fifteen estimators of the correlation coefficient using simulated as well as real data sets.

  13. Single and Multi-Date Landsat Classifications of Basalt to Support Soil Survey Efforts

    Directory of Open Access Journals (Sweden)

    Jessica J. Mitchell

    2013-10-01

    Basalt outcrops are significant features in the Western United States and consistently present challenges to Natural Resources Conservation Service (NRCS) soil mapping efforts. Current soil survey methods for estimating basalt outcrops involve field transects and are impractical for mapping regionally extensive areas. The purpose of this research was to investigate remote sensing methods to effectively determine the presence of basalt rock outcrops. Five Landsat 5 TM scenes (path 39, row 29) over the 2007 growing season were processed and analyzed to detect and quantify basalt outcrops across the Clark Area Soil Survey, ID, USA (4,570 km2). The Robust Classification Method (RCM), using the Spectral Angle Mapper (SAM) method, and Random Forest (RF) classifications were applied to individual scenes and to a multitemporal stack of the five images. The highest-performing RCM basalt classification was obtained using the 18 July scene, which yielded an overall accuracy of 60.45%. The RF classifications applied to the same datasets yielded slightly better overall classification rates when using the multitemporal stack (72.35%) than when using the 18 July scene (71.13%), and the same rate of successfully predicting basalt (61.76% using out-of-bag sampling). For the optimal RCM and RF classifications, uncertainty tended to be lowest in irrigated areas; however, the RCM uncertainty map included more extensive areas of low uncertainty that also encompassed forested hillslopes and riparian areas. RCM uncertainty was sensitive to the influence of bright soil reflectance, while RF uncertainty was sensitive to the influence of shadows. Quantification of basalt requires continued investigation to reduce the influence of vegetation, lichen, and loess on basalt detection. With further development, remote sensing tools have the potential to support soil survey mapping of lava fields covering expansive areas in the Western United States and other regions of the world with similar
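
The Spectral Angle Mapper underlying the RCM classification above compares each pixel spectrum to reference spectra by the angle between them, which makes it insensitive to overall brightness scaling. A minimal sketch (the "basalt" and "vegetation" reference spectra below are invented for illustration):

```python
import numpy as np

def spectral_angle(s, r):
    """Angle (radians) between a pixel spectrum s and a reference r."""
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixels, references, max_angle=0.1):
    """Assign each pixel the reference with the smallest spectral angle;
    return -1 where no reference lies within max_angle."""
    angles = np.array([[spectral_angle(p, r) for r in references]
                       for p in pixels])
    best = angles.argmin(axis=1)
    best[angles.min(axis=1) > max_angle] = -1
    return best

basalt = np.array([0.05, 0.06, 0.08, 0.07])   # hypothetical dark, flat spectrum
veg = np.array([0.04, 0.08, 0.05, 0.40])      # hypothetical red-edge rise
# a brighter basalt pixel, a dimmer vegetation pixel, and an unknown pixel
pixels = np.array([basalt * 1.3, veg * 0.9, [0.5, 0.1, 0.5, 0.1]])
labels = sam_classify(pixels, [basalt, veg], max_angle=0.2)
```

Scaling a spectrum leaves its angle to the reference unchanged, which is why SAM tolerates the illumination differences that plague per-band thresholds.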

  14. RNA Imaging with Multiplexed Error Robust Fluorescence in situ Hybridization

    Science.gov (United States)

    Moffitt, Jeffrey R.; Zhuang, Xiaowei

    2016-01-01

    Quantitative measurements of both the copy number and spatial distribution of large fractions of the transcriptome in single-cells could revolutionize our understanding of a variety of cellular and tissue behaviors in both healthy and diseased states. Single-molecule Fluorescence In Situ Hybridization (smFISH)—an approach where individual RNAs are labeled with fluorescent probes and imaged in their native cellular and tissue context—provides both the copy number and spatial context of RNAs but has been limited in the number of RNA species that can be measured simultaneously. Here we describe Multiplexed Error Robust Fluorescence In Situ Hybridization (MERFISH), a massively parallelized form of smFISH that can image and identify hundreds to thousands of different RNA species simultaneously with high accuracy in individual cells in their native spatial context. We provide detailed protocols on all aspects of MERFISH, including probe design, data collection, and data analysis to allow interested laboratories to perform MERFISH measurements themselves. PMID:27241748

  15. Robustness in econometrics

    CERN Document Server

    Sriboonchitta, Songsak; Huynh, Van-Nam

    2017-01-01

    This book presents recent research on robustness in econometrics. Robust data processing techniques – i.e., techniques that yield results minimally affected by outliers – and their applications to real-life economic and financial situations are the main focus of this book. The book also discusses applications of more traditional statistical techniques to econometric problems. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. In day-to-day data, we often encounter outliers that do not reflect long-term economic trends, e.g., unexpected and abrupt fluctuations. As such, it is important to develop robust data processing techniques that can accommodate these fluctuations.

  16. Robust Programming by Example

    OpenAIRE

    Bishop , Matt; Elliott , Chip

    2011-01-01

    Part 2: WISE 7; International audience; Robust programming lies at the heart of the type of coding called “secure programming”. Yet it is rarely taught in academia. More commonly, the focus is on how to avoid creating well-known vulnerabilities. While important, that misses the point: a well-structured, robust program should anticipate where problems might arise and compensate for them. This paper discusses one view of robust programming and gives an example of how it may be taught.

  17. Robust microwave-assisted extraction protocol for determination of total mercury and methylmercury in fish tissues

    Energy Technology Data Exchange (ETDEWEB)

    Reyes, L. Hinojosa; Rahman, G.M. Mizanur [Department of Chemistry and Biochemistry, Duquesne University, Pittsburgh, PA 15282 (United States); Kingston, H.M. Skip [Department of Chemistry and Biochemistry, Duquesne University, Pittsburgh, PA 15282 (United States)], E-mail: kingston@duq.edu

    2009-01-12

    A rapid and efficient closed-vessel microwave-assisted extraction (MAE) method based on acidic leaching was developed and optimized for the extraction of total mercury (Hg), inorganic mercury (Hg²⁺) and methylmercury (CH₃Hg⁺) from fish tissues. Quantitative extraction of total Hg and mercury species from biological samples was achieved using 5 mol L⁻¹ HCl and 0.25 mol L⁻¹ NaCl for 10 min at 60 °C. Total Hg content was determined using inductively coupled plasma mass spectrometry (ICP-MS). Mercury species were measured by liquid chromatography hyphenated with inductively coupled plasma mass spectrometry (LC-ICP-MS). The method was validated using the biological certified reference materials ERM-CE464, DOLT-3, and NIST SRM-1946. The analytical results were in good agreement with the certified reference values of total Hg and CH₃Hg⁺ at the 95% confidence level. Further accuracy validation using speciated isotope-dilution mass spectrometry (SIDMS, as described in EPA Method 6800) was carried out. SIDMS was also applied to study and correct for unwanted species transformation reactions during and/or after the sample preparation steps. For the studied reference materials, no statistically significant transformation between mercury species was observed during the extraction and determination procedures. The proposed method was successfully applied to fish tissues with good agreement between SIDMS results and external calibration (EC) results. Interspecies transformations in fish tissues were slightly higher than in the certified reference materials due to differences in matrix composition. Depending on the type of fish tissue, up to 10.24% of Hg²⁺ was methylated and up to 1.75% of CH₃Hg⁺ was demethylated to Hg²⁺.

  18. Robust Reliability or reliable robustness? - Integrated consideration of robustness and reliability aspects

    DEFF Research Database (Denmark)

    Kemmler, S.; Eifler, Tobias; Bertsche, B.

    2015-01-01

    products are and vice versa. For a comprehensive understanding and to use existing synergies between both domains, this paper discusses the basic principles of Reliability- and Robust Design theory. The development of a comprehensive model will enable an integrated consideration of both domains...

  19. False-positive reduction in CAD mass detection using a competitive classification strategy

    International Nuclear Information System (INIS)

    Li Lihua; Zheng Yang; Zhang Lei; Clark, Robert A.

    2001-01-01

A high false-positive (FP) rate remains one of the major problems to be solved in CAD, because too many falsely cued signals can degrade the performance of detecting true-positive regions and increase the call-back rate in a CAD environment. In this paper, we propose a novel classification method for FP reduction, in which a conventional 'hard' decision classifier is cascaded with a 'soft' decision classifier whose objective is to reduce false-positives in cases with multiple FPs retained after the 'hard' decision stage. The 'soft' classification takes a competitive strategy in which only the 'best' candidates are selected from the pre-classified suspicious regions as the true mass in each case. A neural network structure is designed to implement the proposed competitive classification. Comparative studies of FP reduction on a database of 79 images, using a 'hard' decision classification and a combined 'hard'-'soft' classification, demonstrated the efficiency of the proposed strategy. For example, for the high-FP sub-database, which contains only 31.7% of the images but accounts for 63.5% of all FPs generated by the single 'hard' classification, FPs were reduced by 56% (from 8.36 to 3.72 per image) using the proposed method at the cost of a 1% TP loss (from 69% to 68%) over the whole database, whereas they could only be reduced by 27% (from 8.36 to 6.08 per image) by simply raising the threshold of the 'hard' classifier, at a TP cost as high as 14% (from 69% to 55%). On average over the whole database, the FP reduction by the hybrid 'hard'-'soft' classification was 1.58 per image, compared with 1.11 by 'hard' classification alone, at the TP costs described above. Because cases with dense tissue carry a higher risk of cancer incidence and of false-negative detection in mammogram screening, and usually generate more FPs in CAD detection, the method proposed in this paper will be very helpful in improving
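The cascade described above can be sketched in a few lines; the scores, threshold, and keep_best parameter below are hypothetical stand-ins for the classifier outputs and tuning used in the paper:

```python
def hard_then_soft(cases, threshold, keep_best=2):
    """Cascade: a 'hard' per-region score threshold, followed by a
    'soft' competitive stage that keeps only the highest-scoring
    survivors within each case (image)."""
    results = {}
    for case_id, regions in cases.items():
        # Hard stage: independent thresholding of each suspicious region
        survivors = [r for r in regions if r[1] >= threshold]
        # Soft stage: regions in the same case compete; only the best win
        survivors.sort(key=lambda r: r[1], reverse=True)
        results[case_id] = survivors[:keep_best]
    return results

cases = {"img1": [("a", 0.9), ("b", 0.6), ("c", 0.55), ("d", 0.3)],
         "img2": [("e", 0.7)]}
out = hard_then_soft(cases, threshold=0.5, keep_best=2)
```

The point of the competitive stage is visible in "img1": region "c" passes the hard threshold but is dropped because it loses the within-case competition.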

  20. Robust species taxonomy assignment algorithm for 16S rRNA NGS reads: application to oral carcinoma samples

    Directory of Open Access Journals (Sweden)

    Nezar Noor Al-Hebshi

    2015-09-01

Background: The usefulness of next-generation sequencing (NGS) in assessing bacteria associated with oral squamous cell carcinoma (OSCC) has been undermined by the inability to classify reads to the species level. Objective: The purpose of this study was to develop a robust algorithm for species-level classification of NGS reads from oral samples and to pilot test it for profiling bacteria within OSCC tissues. Methods: Bacterial 16S V1-V3 libraries were prepared from three OSCC DNA samples and sequenced using 454's FLX chemistry. High-quality, well-aligned, and non-chimeric reads ≥350 bp were classified using a novel, multi-stage algorithm that involves matching reads to reference sequences in revised versions of the Human Oral Microbiome Database (HOMD), HOMD extended (HOMDEXT), and Greengene Gold (GGG) at alignment coverage and percentage identity ≥98%, followed by assignment to species level based on top-hit reference sequences. Priority was given to hits in HOMD, then HOMDEXT, and finally GGG. Unmatched reads were subjected to operational taxonomic unit analysis. Results: Nearly 92.8% of the reads were matched to updated-HOMD 13.2, 1.83% to trusted-HOMDEXT, and 1.36% to modified-GGG. Of all matched reads, 99.6% were classified to species level. A total of 228 species-level taxa were identified, representing 11 phyla; the most abundant were Proteobacteria, Bacteroidetes, Firmicutes, Fusobacteria, and Actinobacteria. Thirty-five species-level taxa were detected in all samples. On average, Prevotella oris, Neisseria flava, Neisseria flavescens/subflava, Fusobacterium nucleatum ss polymorphum, Aggregatibacter segnis, Streptococcus mitis, and Fusobacterium periodontium were the most abundant. Bacteroides fragilis, a species rarely isolated from the oral cavity, was detected in two samples. Conclusion: This multi-stage algorithm maximizes the fraction of reads classified to the species level while ensuring reliable classification by giving priority to the
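The priority rule of the multi-stage algorithm (HOMD first, then HOMDEXT, then GGG, at coverage and identity ≥98%) can be sketched as follows; the data layout and function name are illustrative, not the authors' implementation:

```python
def assign_species(read_matches, min_identity=98.0, min_coverage=98.0):
    """Assign a read to species level using prioritized reference sets.

    read_matches maps database name -> list of (species, identity, coverage)
    hits. Databases are consulted in priority order; the first database
    with a qualifying hit wins, and its top hit gives the species call.
    """
    for db in ("HOMD", "HOMDEXT", "GGG"):
        hits = [(sp, ident) for sp, ident, cov in read_matches.get(db, [])
                if ident >= min_identity and cov >= min_coverage]
        if hits:
            return db, max(hits, key=lambda h: h[1])[0]
    return None, None  # unmatched reads go on to OTU analysis

matches = {"HOMD": [("Streptococcus mitis", 99.1, 100.0)],
           "GGG": [("Neisseria flava", 99.8, 100.0)]}
db, species = assign_species(matches)
```

Note that the HOMD hit wins even though the GGG hit has higher identity: database priority, not score, decides between reference sets.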

  1. Hand eczema classification

    DEFF Research Database (Denmark)

    Diepgen, T L; Andersen, Klaus Ejner; Brandao, F M

    2008-01-01

    of the disease is rarely evidence based, and a classification system for different subdiagnoses of hand eczema is not agreed upon. Randomized controlled trials investigating the treatment of hand eczema are called for. For this, as well as for clinical purposes, a generally accepted classification system...... A classification system for hand eczema is proposed. Conclusions It is suggested that this classification be used in clinical work and in clinical trials....

  2. Classification of Several Optically Complex Waters in China Using in Situ Remote Sensing Reflectance

    Directory of Open Access Journals (Sweden)

    Qian Shen

    2015-11-01

Determining the dominant optically active substances in water bodies via classification can improve the accuracy of bio-optical and water-quality parameters estimated by remote sensing. This study provides four robust centroid sets from in situ remote sensing reflectance (Rrs(λ)) data representing typical optical types, obtained by plugging different similarity measures into fuzzy c-means (FCM) clustering. Four typical types of waters were studied: (1) highly mixed eutrophic waters, with the proportions of absorption of colored dissolved organic matter (CDOM), phytoplankton, and non-living particulate matter at approximately 20%, 20%, and 60%, respectively; (2) CDOM-dominated relatively clear waters, with approximately 45% by proportion of CDOM absorption; (3) nonliving-solids-dominated waters, with approximately 88% by proportion of absorption of nonliving particulate matter; and (4) cyanobacteria-composed scum. We also simulated spectra from seven ocean color satellite sensors to assess their classification ability. POLarization and Directionality of the Earth's Reflectances (POLDER), Sentinel-2A, and the MEdium Resolution Imaging Spectrometer (MERIS) were found to perform better than the rest. Further, a classification tree for MERIS, in which the characteristics of Rrs(709)/Rrs(681), Rrs(560)/Rrs(709), Rrs(560)/Rrs(620), and Rrs(709)/Rrs(761) are integrated, is also proposed in this paper. The overall accuracy and Kappa coefficient of the proposed classification tree are 76.2% and 0.632, respectively.
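A band-ratio decision tree of the kind proposed for MERIS might look like the sketch below; the split thresholds and branch order are placeholders chosen for illustration, not the published values:

```python
def classify_water(rrs):
    """Toy decision tree over MERIS band ratios (rrs maps wavelength in
    nm to reflectance). Thresholds are illustrative placeholders."""
    if rrs[709] / rrs[761] > 10.0:      # strong NIR signal -> surface scum
        return "cyanobacteria scum"
    if rrs[560] / rrs[709] > 3.0:       # steep green-to-red-edge slope
        return "CDOM-dominated clear water"
    if rrs[560] / rrs[620] > 1.5:       # bright, sediment-like spectrum
        return "nonliving-solids-dominated water"
    if rrs[709] / rrs[681] > 1.0:       # chlorophyll fluorescence ridge
        return "highly mixed eutrophic water"
    return "unclassified"

scum = classify_water({560: 0.02, 620: 0.01, 681: 0.008, 709: 0.009, 761: 0.0008})
cdom = classify_water({560: 0.03, 620: 0.015, 681: 0.01, 709: 0.009, 761: 0.003})
```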

  3. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    Science.gov (United States)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged-goods quality estimation for industrial quality monitoring. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray-level co-occurrence texture (GLCM), and histogram of oriented gradients (HOG), has been made with respect to their capability to generate efficient and differentiable feature vectors for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was done, demonstrating classification accuracies of 80, 86.67, 73.33, and 93.33 % for the DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show the good capability of the HOG feature extraction technique for non-destructive quality inspection, with an appreciably low false-alarm rate compared to the other techniques. Thereby, a robust and optimal image-feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for a financially and commercially competent industrial growth.
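The HOG descriptor that performed best here accumulates gradient magnitudes into orientation bins. A single-cell, unnormalised sketch of that idea (real HOG adds cells, block normalisation, and overlapping windows; the function name is ours):

```python
import math

def orientation_histogram(img, bins=9):
    """Minimal HOG-flavoured descriptor: one histogram of gradient
    orientations (unsigned, 0-180 degrees) weighted by magnitude,
    computed with central differences over the interior pixels."""
    h = [0.0] * bins
    rows, cols = len(img), len(img[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            h[min(int(ang / (180.0 / bins)), bins - 1)] += mag
    return h

# A vertical edge (a "crack") puts all its energy in the 0-degree bin
hist = orientation_histogram([[0, 0, 10, 10]] * 4)
```

A crack's dominant orientation shows up as a spike in one bin, which is what makes the descriptor discriminative for vertical vs. horizontal vs. diagonal fault configurations.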

  4. Classification of the web

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...... that call for inquiries into the theoretical foundation of bibliographic classification theory....

  5. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1993-04-01

This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  6. Classification of clinical autofluorescence spectra of oral leukoplakia using an artificial neural network : a pilot study

    NARCIS (Netherlands)

    van Staveren, HJ; van Veen, RLP; Speelman, OC; Witjes, MJH; Roodenburg, JLN

    The performance of an artificial neural network was evaluated as an alternative classification technique of autofluorescence spectra of oral leukoplakia, which may reflect the grade of tissue dysplasia. Twenty-two visible lesions of 21 patients suffering from oral leukoplakia and six locations on

  7. Malignant fatty tumors: classification, clinical course, imaging appearance and treatment

    International Nuclear Information System (INIS)

    Peterson, J.J.; Kransdorf, M.J.; Bancroft, L.W.; O'Connor, M.I.

    2003-01-01

    Liposarcoma is a relatively common soft tissue malignancy with a wide spectrum of clinical presentations and imaging appearances. Several subtypes are described, ranging from lesions nearly entirely composed of mature adipose tissue, to tumors with very sparse adipose elements. The imaging appearance of these fatty masses is frequently sufficiently characteristic to allow a specific diagnosis, while in other cases, although a specific diagnosis is not achievable, a meaningful limited differential diagnosis can be established. The purpose of this paper is to review the spectrum of malignant fatty tumors, highlighting the current classification system, clinical presentation and behavior, treatment and spectrum of imaging appearances. The imaging review will emphasize CT scanning and MR imaging, and will stress differentiating radiologic features. (orig.)

  8. Three-dimensional micro-scale strain mapping in living biological soft tissues.

    Science.gov (United States)

    Moo, Eng Kuan; Sibole, Scott C; Han, Sang Kuy; Herzog, Walter

    2018-04-01

    Non-invasive characterization of the mechanical micro-environment surrounding cells in biological tissues at multiple length scales is important for the understanding of the role of mechanics in regulating the biosynthesis and phenotype of cells. However, there is a lack of imaging methods that allow for characterization of the cell micro-environment in three-dimensional (3D) space. The aims of this study were (i) to develop a multi-photon laser microscopy protocol capable of imprinting 3D grid lines onto living tissue at a high spatial resolution, and (ii) to develop image processing software capable of analyzing the resulting microscopic images and performing high resolution 3D strain analyses. Using articular cartilage as the biological tissue of interest, we present a novel two-photon excitation imaging technique for measuring the internal 3D kinematics in intact cartilage at sub-micrometer resolution, spanning length scales from the tissue to the cell level. Using custom image processing software, we provide accurate and robust 3D micro-strain analysis that allows for detailed qualitative and quantitative assessment of the 3D tissue kinematics. This novel technique preserves tissue structural integrity post-scanning, therefore allowing for multiple strain measurements at different time points in the same specimen. The proposed technique is versatile and opens doors for experimental and theoretical investigations on the relationship between tissue deformation and cell biosynthesis. Studies of this nature may enhance our understanding of the mechanisms underlying cell mechano-transduction, and thus, adaptation and degeneration of soft connective tissues. We presented a novel two-photon excitation imaging technique for measuring the internal 3D kinematics in intact cartilage at sub-micrometer resolution, spanning from tissue length scale to cellular length scale. Using a custom image processing software (lsmgridtrack), we provide accurate and robust micro

  9. Hazard classification methodology

    International Nuclear Information System (INIS)

    Brereton, S.J.

    1996-01-01

    This document outlines the hazard classification methodology used to determine the hazard classification of the NIF LTAB, OAB, and the support facilities on the basis of radionuclides and chemicals. The hazard classification determines the safety analysis requirements for a facility

  10. The value of virtual touch tissue image (VTI) and virtual touch tissue quantification (VTQ) in the differential diagnosis of thyroid nodules

    International Nuclear Information System (INIS)

    Zhang, Feng-Juan; Han, Ruo-Ling; Zhao, Xin-Ming

    2014-01-01

Highlights: • All nodules in the research were confirmed by histopathology. • The classification method of VTI was easy to learn. • VTQ could provide quantitative elasticity measurements for thyroid nodules. • VTI classification could provide semi-quantitative elasticity analysis. • The area ratio could show the invasive extent of a malignant tumor. - Abstract: Objectives: To explore the value of virtual touch tissue image (VTI) and virtual touch tissue quantification (VTQ) in the differential diagnosis of thyroid nodules. Methods: One hundred and seven patients with 113 thyroid nodules underwent conventional ultrasound and acoustic radiation force impulse (ARFI) elastography. The stiffness of the nodules on virtual touch tissue image (VTI) was graded, and the area ratios (AR) of nodules on VTI images versus B-mode images were calculated. Shear wave velocity (SWV) within the thyroid nodules was measured using the virtual touch tissue quantification (VTQ) technique. With pathological diagnosis as the gold standard, receiver-operating characteristic (ROC) curves were drawn to find the cut-off points of VTI grade, AR, and SWV for predicting thyroid cancer. Results: The difference in VTI grades between malignant and benign nodules was statistically significant (P < 0.05), as were the differences in AR and SWV. There was no significant difference in the AR or the SWV of nodules within the benign group or within the malignant group. The sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) of VTI grades, AR, and SWV in the differential diagnosis of thyroid nodules were calculated. There was no significant difference in diagnostic accuracy among the three methods. Conclusion: VTI grades, the AR of nodules on VTI images versus B-mode images, and the SWV within the nodules can help the differential diagnosis of thyroid nodules
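The reported metrics all derive from a 2x2 contingency table, and a common way to pick an ROC cut-off (the abstract does not name its criterion, so Youden's index here is an assumption) is to maximise sensitivity + specificity - 1:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy, PPV and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def best_cutoff(values, labels):
    """Cut-off maximising Youden's index (sens + spec - 1), treating
    values above the cut-off (e.g. SWV in m/s) as test-positive."""
    best = None
    for c in sorted(set(values)):
        tp = sum(1 for v, l in zip(values, labels) if v > c and l)
        fn = sum(1 for v, l in zip(values, labels) if v <= c and l)
        tn = sum(1 for v, l in zip(values, labels) if v <= c and not l)
        fp = sum(1 for v, l in zip(values, labels) if v > c and not l)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if best is None or j > best[0]:
            best = (j, c)
    return best[1]

metrics = diagnostic_metrics(8, 2, 9, 1)                     # toy counts
cutoff = best_cutoff([2.0, 2.5, 3.0, 3.5],                   # toy SWV values
                     [False, False, True, True])             # malignant?
```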

  11. Robust

    DEFF Research Database (Denmark)

    2017-01-01

'Robust – Reflections on Resilient Architecture' is a scientific publication following the conference of the same name in November 2017. Researchers and PhD fellows associated with the Masters programme Cultural Heritage, Transformation and Restoration (Transformation) at The Royal Danish

  12. Zone-specific logistic regression models improve classification of prostate cancer on multi-parametric MRI

    Energy Technology Data Exchange (ETDEWEB)

    Dikaios, Nikolaos; Halligan, Steve; Taylor, Stuart; Atkinson, David; Punwani, Shonit [University College London, Centre for Medical Imaging, London (United Kingdom); University College London Hospital, Departments of Radiology, London (United Kingdom); Alkalbani, Jokha; Sidhu, Harbir Singh [University College London, Centre for Medical Imaging, London (United Kingdom); Abd-Alazeez, Mohamed; Ahmed, Hashim U.; Emberton, Mark [University College London, Research Department of Urology, Division of Surgery and Interventional Science, London (United Kingdom); Kirkham, Alex [University College London Hospital, Departments of Radiology, London (United Kingdom); Freeman, Alex [University College London Hospital, Department of Histopathology, London (United Kingdom)

    2015-09-15

    To assess the interchangeability of zone-specific (peripheral-zone (PZ) and transition-zone (TZ)) multiparametric-MRI (mp-MRI) logistic-regression (LR) models for classification of prostate cancer. Two hundred and thirty-one patients (70 TZ training-cohort; 76 PZ training-cohort; 85 TZ temporal validation-cohort) underwent mp-MRI and transperineal-template-prostate-mapping biopsy. PZ and TZ uni/multi-variate mp-MRI LR-models for classification of significant cancer (any cancer-core-length (CCL) with Gleason > 3 + 3 or any grade with CCL ≥ 4 mm) were derived from the respective cohorts and validated within the same zone by leave-one-out analysis. Inter-zonal performance was tested by applying TZ models to the PZ training-cohort and vice-versa. Classification performance of TZ models for TZ cancer was further assessed in the TZ validation-cohort. ROC area-under-curve (ROC-AUC) analysis was used to compare models. The univariate parameters with the best classification performance were the normalised T2 signal (T2nSI) within the TZ (ROC-AUC = 0.77) and normalized early contrast-enhanced T1 signal (DCE-nSI) within the PZ (ROC-AUC = 0.79). Performance was not significantly improved by bi-variate/tri-variate modelling. PZ models that contained DCE-nSI performed poorly in classification of TZ cancer. The TZ model based solely on maximum-enhancement poorly classified PZ cancer. LR-models dependent on DCE-MRI parameters alone are not interchangeable between prostatic zones; however, models based exclusively on T2 and/or ADC are more robust for inter-zonal application. (orig.)
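ROC-AUC, the yardstick used to compare the zone-specific models, equals the probability that a randomly chosen cancer case scores higher than a randomly chosen benign case (the Mann-Whitney formulation), so it can be computed directly from scores without fitting a curve. A small sketch with invented scores:

```python
def roc_auc(scores, labels):
    """ROC area via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs in which the positive scores higher,
    with ties counting one half."""
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation vs. chance-level separation on toy model outputs
auc_perfect = roc_auc([0.9, 0.8, 0.4, 0.3], [True, True, False, False])
auc_chance = roc_auc([0.9, 0.3, 0.8, 0.4], [True, True, False, False])
```

Scoring a PZ-trained model on TZ cases and recomputing this statistic is exactly the kind of inter-zonal test the study performs.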

  13. Tissue-engineering strategies for the tendon/ligament-to-bone insertion.

    Science.gov (United States)

    Smith, Lester; Xia, Younan; Galatz, Leesa M; Genin, Guy M; Thomopoulos, Stavros

    2012-01-01

Injuries to connective tissues are painful and disabling and result in costly medical expenses. These injuries often require reattachment of an unmineralized connective tissue to bone. The uninjured tendon/ligament-to-bone insertion (enthesis) is a functionally graded material that exhibits a gradual transition from soft tissue (i.e., tendon or ligament) to hard tissue (i.e., mineralized bone) through a fibrocartilaginous transition region. This transition is believed to facilitate force transmission between the two dissimilar tissues by ameliorating potentially damaging interfacial stress concentrations. The transition region is impaired or lost upon tendon/ligament injury and is not regenerated following surgical repair or natural healing, exposing the tissue to risk of reinjury. The need to regenerate a robust tendon-to-bone insertion has led to a number of tissue engineering repair strategies. This review treats the tendon-to-bone insertion site as a tissue structure whose primary role is mechanical and discusses current and emerging strategies for engineering the tendon/ligament-to-bone insertion in this context. The focus lies on strategies for producing mechanical structures that can guide and subsequently sustain a graded tissue structure and the associated cell populations.

  14. Rough set soft computing cancer classification and network: one stone, two birds.

    Science.gov (United States)

    Zhang, Yue

    2010-07-15

    Gene expression profiling provides tremendous information to help unravel the complexity of cancer. The selection of the most informative genes from huge noise for cancer classification has taken centre stage, along with predicting the function of such identified genes and the construction of direct gene regulatory networks at different system levels with a tuneable parameter. A new study by Wang and Gotoh described a novel Variable Precision Rough Sets-rooted robust soft computing method to successfully address these problems and has yielded some new insights. The significance of this progress and its perspectives will be discussed in this article.
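Variable Precision Rough Sets relax the classical lower approximation with an inclusion threshold β. A sketch of the β-positive region (formulations differ in detail; this uses the simple "fraction of the equivalence block inside the target" rule, and the sample labels are invented):

```python
def vprs_positive_region(blocks, target, beta=0.8):
    """Variable Precision Rough Sets: an equivalence block (samples
    indistinguishable on the selected genes) joins the beta-lower
    approximation of `target` when at least a fraction beta of its
    members belong to the target set (e.g. a cancer class)."""
    region = set()
    for block in blocks:
        if len(block & target) / len(block) >= beta:
            region |= block
    return region

# Toy partition of 10 samples into 3 indiscernibility blocks
blocks = [frozenset({1, 2, 3, 4, 5}), frozenset({6, 7}), frozenset({8, 9, 10})]
cancer = {1, 2, 3, 4, 8, 9, 10}
region = vprs_positive_region(blocks, cancer, beta=0.8)
```

Lowering β toward 0.5 tolerates noisier expression data at the cost of admitting more misclassified samples into the positive region, which is the tuneable precision the article refers to.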

  15. Lichenoid tissue reaction/interface dermatitis: Recognition, classification, etiology, and clinicopathological overtones

    Directory of Open Access Journals (Sweden)

    Virendra N Sehgal

    2011-01-01

Lichenoid tissue reaction or interface dermatitis embraces several clinical conditions, the prototype of which is lichen planus and its variants; others include drug-induced lichenoid dermatitis, special forms of lichenoid dermatitis, lichenoid dermatitis in lupus erythematosus, and miscellaneous disorders showing lichenoid dermatitis. The salient clinical and histological features of each are described to facilitate diagnosis. The background of the lichenoid reaction pattern is briefly outlined for those interested in this entity.

  16. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification.

    Science.gov (United States)

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried; De Vos, Winnok H

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.
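The optimal-path-finding step can be illustrated with a generic dynamic program over a cost grid, where each step advances one row to an adjacent column — the recurrence typically used for contour refinement along an unwrapped boundary (BleND's exact cost terms are not given in the abstract, so this is a schematic):

```python
def min_cost_path(cost):
    """Dynamic-programming minimum-cost path through a grid, moving one
    row down per step to the same or an adjacent column. Returns the
    column index per row and the total path cost."""
    rows, cols = len(cost), len(cost[0])
    acc = [cost[0][:]]                       # accumulated cost table
    for y in range(1, rows):
        prev = acc[-1]
        acc.append([cost[y][x] + min(prev[max(x - 1, 0):min(x + 2, cols)])
                    for x in range(cols)])
    # Backtrack from the cheapest endpoint
    x = min(range(cols), key=lambda i: acc[-1][i])
    path = [x]
    for y in range(rows - 1, 0, -1):
        prev = acc[y - 1]
        x = min(range(max(x - 1, 0), min(x + 2, cols)), key=lambda i: prev[i])
        path.append(x)
    return path[::-1], min(acc[-1])

path, total = min_cost_path([[1, 9, 9], [9, 1, 9], [9, 9, 1]])
```

In a contour-refinement setting, rows are angular samples around the nucleus and the cost encodes distance from the two-pass threshold contour plus an edge-strength term, so the DP returns a globally optimal smooth boundary rather than a greedy one.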

  18. Comparison of models of automatic classification of textural patterns of minerals present in Colombian coals

    International Nuclear Information System (INIS)

    Lopez Carvajal, Jaime; Branch Bedoya, John Willian

    2005-01-01

The automatic classification of objects is a very interesting approach in several problem domains. This paper outlines some results obtained with different classification models used to categorize textural patterns of minerals in real digital images. The data set used was characterized by its small size and the presence of noise. The implemented models were a Bayesian classifier, a neural network (2-5-1), a support vector machine, a decision tree, and 3-nearest neighbors. The results after applying cross-validation show that the Bayesian model (84%) had better predictive capacity than the others, mainly due to its robustness to noise. The neural network (68%) and the SVM (67%) gave promising results, as they could be improved by increasing the amount of data used, while the decision tree (55%) and K-NN (54%) did not seem adequate for this problem because of their sensitivity to noise.
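The cross-validation used for the comparison can be sketched generically; the toy 1-D features, mineral labels, and 1-NN classifier below are illustrative only (the paper used 3-NN, among other models, on real texture features):

```python
def loo_accuracy(points, labels, classify):
    """Leave-one-out cross-validation accuracy for any classifier
    given as a function (train_pts, train_labels, query) -> label."""
    hits = 0
    for i in range(len(points)):
        train_p = points[:i] + points[i + 1:]
        train_l = labels[:i] + labels[i + 1:]
        hits += classify(train_p, train_l, points[i]) == labels[i]
    return hits / len(points)

def nn1(train_p, train_l, q):
    """1-nearest-neighbour on a 1-D feature (e.g. a texture statistic)."""
    i = min(range(len(train_p)), key=lambda j: abs(train_p[j] - q))
    return train_l[i]

pts = [1.0, 1.2, 1.1, 5.0, 5.2, 5.1]
lbl = ["quartz", "quartz", "quartz", "pyrite", "pyrite", "pyrite"]
acc = loo_accuracy(pts, lbl, nn1)
```

Swapping `nn1` for a Bayesian, SVM, or decision-tree classifier under the same `classify` signature reproduces the head-to-head comparison design.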

  19. Robustness in Railway Operations (RobustRailS)

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo; Nielsen, Otto Anker

This study considers the problem of enhancing railway timetable robustness without adding slack time and hence increasing travel times. The approach integrates a transit assignment model to assess how passengers adapt their behaviour whenever operations are changed. First, the approach considers...

  20. Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    Directory of Open Access Journals (Sweden)

    Junbao Zheng

    2012-03-01

Biologically-inspired models and algorithms are considered promising sensor-array signal processing methods for electronic noses. Feature selection is one of the most important issues in developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimension of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, of three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case, the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case, the results showed that sufficient parallel channels should be reserved in the model to avoid pattern-space crowding. We concluded that 6~8 channels of the model, with a principal-component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.
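Choosing how many principal components to keep for "at least 90% cumulative variance" is a short computation over the eigenvalue spectrum of the feature covariance; the eigenvalues below are invented for illustration:

```python
def components_for_variance(eigenvalues, target=0.90):
    """Number of leading principal components needed for the cumulative
    explained-variance ratio to reach `target` (eigenvalues of the
    covariance matrix, in any order)."""
    total = sum(eigenvalues)
    acc = 0.0
    for k, ev in enumerate(sorted(eigenvalues, reverse=True), start=1):
        acc += ev
        if acc / total >= target:
            return k
    return len(eigenvalues)

# Hypothetical spectrum from an e-nose sensor-array covariance matrix
k = components_for_variance([5.0, 2.5, 1.5, 0.6, 0.4], target=0.90)
```

The resulting k sets the dimension of the feature vector fed to the olfactory model's parallel channels.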