WorldWideScience

Sample records for margin based classifiers

  1. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification: exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).
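
    A minimal Python sketch of the rule-ordering idea above, assuming a hypothetical Rule structure and a simple matching-rule vote; the paper's dynamic-programming construction of optimal rules is not reproduced here. Sequential optimization corresponds to a lexicographic sort: primary criterion first, ties broken by the secondary one.

```python
# Hedged sketch: rank candidate decision rules lexicographically by
# (length, coverage) or (coverage, length), then classify by a vote of
# matching rules.  Rule and the voting scheme are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Rule:
    conditions: dict   # attribute -> required value
    label: str

    @property
    def length(self):  # number of conditions on the rule's left-hand side
        return len(self.conditions)

    def matches(self, x):
        return all(x.get(a) == v for a, v in self.conditions.items())

def coverage(rule, X):
    return sum(rule.matches(x) for x in X)

def select_rules(candidates, X, order="length+coverage"):
    if order == "length+coverage":
        key = lambda r: (r.length, -coverage(r, X))   # short first, then wide
    else:  # "coverage+length"
        key = lambda r: (-coverage(r, X), r.length)   # wide first, then short
    return sorted(candidates, key=key)

def classify(x, rules):
    votes = {}
    for r in rules:
        if r.matches(x):
            votes[r.label] = votes.get(r.label, 0) + 1
    return max(votes, key=votes.get) if votes else None

# toy usage
X = [{"color": "red", "size": "big"}, {"color": "red", "size": "small"}]
rules = [Rule({"color": "red"}, "A"), Rule({"color": "red", "size": "big"}, "B")]
print(classify({"color": "red", "size": "big"}, select_rules(rules, X)))
```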

  2. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification: exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  3. Maximum margin classifier working in a set of strings.

    Science.gov (United States)

    Koyano, Hitoshi; Hayashida, Morihiro; Akutsu, Tatsuya

    2016-03-01

    Numbers and numerical vectors account for a large portion of data. However, recently, the amount of string data generated has increased dramatically. Consequently, classifying string data is a common problem in many fields. The most widely used approach to this problem is to convert strings into numerical vectors using string kernels and subsequently apply a support vector machine that works in a numerical vector space. However, this non-one-to-one conversion involves a loss of information and makes it impossible to evaluate, using probability theory, the generalization error of a learning machine, considering that the given data to train and test the machine are strings generated according to probability laws. In this study, we approach this classification problem by constructing a classifier that works in a set of strings. To evaluate the generalization error of such a classifier theoretically, probability theory for strings is required. Therefore, we first extend a limit theorem for a consensus sequence of strings demonstrated by one of the authors and co-workers in a previous study. Using the obtained result, we then demonstrate that our learning machine classifies strings in an asymptotically optimal manner. Furthermore, we demonstrate the usefulness of our machine in practical data analysis by applying it to predicting protein-protein interactions using amino acid sequences and classifying RNAs by the secondary structure using nucleotide sequences.
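
    As a loose illustration of classifying directly in a set of strings (and only that: the paper's probability theory for strings and its optimality analysis are far beyond this sketch), one can represent each class by a consensus string, here a Levenshtein medoid, and assign a query to the class with the nearest consensus. The helper names and toy data are hypothetical.

```python
# Hedged sketch: nearest-consensus classification in string space.
def lev(s, t):
    # standard dynamic-programming Levenshtein edit distance
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

def medoid(strings):
    # consensus stand-in: the member string minimizing total edit distance
    return min(strings, key=lambda s: sum(lev(s, t) for t in strings))

def classify(query, classes):          # classes: {label: [strings]}
    consensus = {c: medoid(ss) for c, ss in classes.items()}
    return min(consensus, key=lambda c: lev(query, consensus[c]))

data = {"helix": ["ACGT", "ACGA", "ACTT"], "sheet": ["TTGG", "TTGC", "TAGG"]}
print(classify("ACGG", data))   # -> 'helix'
```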

  4. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    Science.gov (United States)

    Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong

    2016-01-01

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs).
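
    The LDM idea described above — maximize the margin mean while minimizing the margin variance under a norm penalty — can be sketched for a linear model with plain gradient descent. The unconstrained objective, the weights lam1 and lam2, and the optimizer are simplifying assumptions of this sketch, and the DAG multi-class wrapper is omitted.

```python
# Minimal LDM-style sketch: for f(x) = w.x the margin of sample i is
# gamma_i = y_i * (w.x_i).  Objective (assumed form):
#   0.5*||w||^2 - lam1*mean(gamma) + lam2*var(gamma)
import numpy as np

def ldm_fit(X, y, lam1=1.0, lam2=1.0, lr=0.01, epochs=500):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)                         # gamma_i
        mean_g = margins.mean()
        grad_mean = (y[:, None] * X).mean(axis=0)     # d mean(gamma)/dw
        grad_var = (2.0 / n) * ((margins - mean_g)[:, None]
                                * (y[:, None] * X - grad_mean)).sum(axis=0)
        w -= lr * (w - lam1 * grad_mean + lam2 * grad_var)
    return w

# toy usage on a linearly separable problem
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]
w = ldm_fit(X, y)
print((np.sign(X @ w) == y).mean())   # training accuracy
```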

  5. Aggregation Operator Based Fuzzy Pattern Classifier Design

    DEFF Research Database (Denmark)

    Mönks, Uwe; Larsen, Henrik Legind; Lohweg, Volker

    2009-01-01

    This paper presents a novel modular fuzzy pattern classifier design framework for intelligent automation systems, developed on the basis of the established Modified Fuzzy Pattern Classifier (MFPC), which allows the design of novel classifier models that are hardware-efficiently implementable… The performances of novel classifiers using substitutes for the MFPC's geometric mean aggregator are benchmarked in the scope of an image processing application against the MFPC to reveal classification improvement potentials for obtaining higher classification rates…

  6. CRBRP structural and thermal margin beyond the design base

    International Nuclear Information System (INIS)

    Strawbridge, L.E.

    1979-01-01

    Prudent margins beyond the design base have been included in the design of Clinch River Breeder Reactor Plant to further reduce the risk to the public from highly improbable occurrences. These margins include Structural Margin Beyond the Design Base to address the energetics aspects and Thermal Margin Beyond the Design Base to address the longer term thermal and radiological consequences. The assessments that led to the specification of these margins are described, along with the experimental support for those assessments. 8 refs

  7. Reinforcement Learning Based Artificial Immune Classifier

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

    Full Text Available One of the widely used methods for classification, which is a decision-making process, is artificial immune systems. Artificial immune systems, which are based on the natural immune system, can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. This approach uses reinforcement learning to find better antibodies with immune operators. The proposed approach has many advantages over other methods in the literature, such as effectiveness, fewer memory cells, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and on an FPGA. Some benchmark data and remote image data are used for the experimental results. Comparative results with supervised/unsupervised artificial immune systems, a negative selection classifier, and a resource limited artificial immune classifier are given to demonstrate the effectiveness of the proposed new method.

  8. Double-sided Moral Hazard and Margin-based Royalty

    OpenAIRE

    NARIU, Tatsuhiko; UEDA, Kaoru; LEE, DongJoon

    2009-01-01

    This paper analyzes royalty modes in the franchise arrangements of convenience stores under double-sided moral hazard. In Japan, the majority of franchisors charge margin-based royalties based on net margins rather than sales-based royalties based on sales. We show that the franchisor can attain the first-best outcome by adopting margin-based royalties under double-sided moral hazard. We consider a case where a franchisee sells two kinds of goods; one is shipped from its franchisor and the ot...

  9. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    Directory of Open Access Journals (Sweden)

    Cuihong Wen

    Full Text Available Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs).

  10. Hybrid Neuro-Fuzzy Classifier Based On Nefclass Model

    Directory of Open Access Journals (Sweden)

    Bogdan Gliwa

    2011-01-01

    Full Text Available The paper presents a hybrid neuro-fuzzy classifier based on a modified NEFCLASS model. The presented classifier was compared to popular classifiers – neural networks and k-nearest neighbours. The efficiency of the modifications in the classifier was compared with the methods used in the original NEFCLASS model (learning methods). The accuracy of the classifier was tested using 3 datasets from the UCI Machine Learning Repository: iris, wine and Breast Cancer Wisconsin. Moreover, the influence of ensemble classification methods on classification accuracy is presented.

  11. Case base classification on digital mammograms: improving the performance of case base classifier

    Science.gov (United States)

    Raman, Valliappan; Then, H. H.; Sumari, Putra; Venkatesa Mohan, N.

    2011-10-01

    Breast cancer continues to be a significant public health problem in the world. Early detection is the key to improving breast cancer prognosis. The aim of the research presented here is twofold. The first stage involves machine learning techniques that segment and extract features from masses in digital mammograms. The second stage is a problem-solving approach that classifies masses with a performance-based case-base classifier. In this paper we build a case-based classifier in order to diagnose mammographic images. We explain the different methods and behaviours that have been added to the classifier to improve its performance. An initial performance-based classifier with bagging is proposed, implemented, and shown to improve specificity and sensitivity.

  12. SpectraClassifier 1.0: a user friendly, automated MRS-based classifier-development system

    Directory of Open Access Journals (Sweden)

    Julià-Sapé Margarida

    2010-02-01

    Full Text Available Abstract Background SpectraClassifier (SC) is a Java solution for designing and implementing Magnetic Resonance Spectroscopy (MRS)-based classifiers. The main goal of SC is to allow users with minimum background knowledge of multivariate statistics to perform a fully automated pattern recognition analysis. SC incorporates feature selection (greedy stepwise approach, either forward or backward) and feature extraction (PCA). Fisher Linear Discriminant Analysis is the method of choice for classification. Classifier evaluation is performed through various methods: display of the confusion matrix of the training and testing datasets; K-fold cross-validation, leave-one-out and bootstrapping, as well as Receiver Operating Characteristic (ROC) curves. Results SC is composed of the following modules: Classifier design, Data exploration, Data visualisation, Classifier evaluation, Reports, and Classifier history. It is able to read low resolution in-vivo MRS (single-voxel and multi-voxel) and high resolution tissue MRS (HRMAS), processed with existing tools (jMRUI, INTERPRET, 3DiCSI or TopSpin). In addition, to facilitate exchanging data between applications, a standard format capable of storing all the information needed for a dataset was developed. Each functionality of SC has been specifically validated with real data with the purpose of bug-testing and methods validation. Data from the INTERPRET project was used. Conclusions SC is a user-friendly software designed to fulfil the needs of potential users in the MRS community. It accepts all kinds of pre-processed MRS data types and classifies them semi-automatically, allowing spectroscopists to concentrate on interpretation of results with the use of its visualisation tools.
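
    For readers who want the gist of SC's pattern-recognition pipeline without the Java tool, a rough scikit-learn analogue (greedy forward feature selection, PCA extraction, Fisher LDA, K-fold evaluation) might look as follows; the breast-cancer dataset merely stands in for MRS spectra.

```python
# Illustrative Python analogue of the SpectraClassifier workflow.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)   # stand-in for MRS spectra

lda = LinearDiscriminantAnalysis()
pipe = make_pipeline(
    SequentialFeatureSelector(lda, n_features_to_select=10, direction="forward"),
    PCA(n_components=5),
    lda,                                     # Fisher LDA as the classifier
)
scores = cross_val_score(pipe, X, y, cv=10)  # K-fold classifier evaluation
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```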

  13. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    Science.gov (United States)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from female and male mackerel (fresh and defrosted) to find differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances, which fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.
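
    A minimal sketch of the ensemble idea, assuming k-NN base classifiers over a set of complementary dissimilarities combined by soft voting; the specific metrics and the Iris stand-in data are illustrative choices, not the authors' exact design.

```python
# Hedged sketch: combine k-NN classifiers built on complementary
# dissimilarities and let them vote.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)   # stand-in for the gonad colour features

ensemble = VotingClassifier(
    estimators=[
        ("euclidean", KNeighborsClassifier(metric="euclidean")),
        ("manhattan", KNeighborsClassifier(metric="manhattan")),
        ("chebyshev", KNeighborsClassifier(metric="chebyshev")),
        ("cosine", KNeighborsClassifier(metric="cosine")),
    ],
    voting="soft",   # average the neighbourhood-based class probabilities
)
for name, clf in ensemble.estimators + [("ensemble", ensemble)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```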

  14. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    International Nuclear Information System (INIS)

    Blanco, A; Rodriguez, R; Martinez-Maranon, I

    2014-01-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from female and male mackerel (fresh and defrosted) to find differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances, which fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.

  15. Spectrum estimation method based on marginal spectrum

    International Nuclear Information System (INIS)

    Cai Jianhua; Hu Weiwen; Wang Xianchun

    2011-01-01

    The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The procedure for obtaining the marginal spectrum in the HHT method was given, and the linear property of the marginal spectrum was demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum were further analyzed. Then the Hilbert spectrum estimation algorithm was discussed in detail, and simulation results were given. Theory and simulation show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)
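
    A compact illustration of a marginal spectrum, computed here from the analytic signal of a single component; a full HHT would first split the signal into intrinsic mode functions by empirical mode decomposition, which this sketch deliberately skips.

```python
# Sketch: marginal spectrum = Hilbert amplitude integrated over time,
# binned by instantaneous frequency (single-component assumption).
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * (50 * t + 30 * t**2))      # short, non-stationary chirp

analytic = hilbert(x)
amp = np.abs(analytic)                             # instantaneous amplitude
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

bins = np.arange(0, fs / 2, 5.0)
marginal, _ = np.histogram(inst_freq, bins=bins, weights=amp[:-1] / fs)
print(bins[np.argmax(marginal)], "Hz bin carries the most energy")
```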

  16. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin

    2014-01-01

    In this paper, we propose a novel image set representation and classification method that maximizes the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class that provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
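
    In symbols, the margin definition quoted above can be written as follows; the set-to-set distance d and the indexing are our notation, not the paper's.

```latex
% Margin of image set X_i with label y_i; d(.,.) is a set-to-set distance
% computed from the sample/affine-hull models described in the abstract.
\operatorname{margin}(X_i) =
  \min_{j:\, y_j \neq y_i} d(X_i, X_j) \;-\; \min_{j:\, y_j = y_i,\ j \neq i} d(X_i, X_j)
```

    A test set is then assigned to the class under which this quantity is largest.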

  17. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    In this paper, we propose a novel image set representation and classification method that maximizes the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from a different class and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class that provides the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.

  18. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while overcoming the complexity involved in solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by

  19. A NEW FRAMEWORK FOR OBJECT-BASED IMAGE ANALYSIS BASED ON SEGMENTATION SCALE SPACE AND RANDOM FOREST CLASSIFIER

    Directory of Open Access Journals (Sweden)

    A. Hadavand

    2015-12-01

    Full Text Available In this paper a new object-based framework is developed for automated scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value of this parameter for each class becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super object is used to get the best scale in local regions of the image scene. The optimized SSS segmentations are finally classified to produce the final land cover map. A very high resolution aerial image and digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method for estimating the scale of segmentation, and marginally improved the overall accuracy of classification from 79% to 80%.

  20. SAR Target Recognition Based on Multi-feature Multiple Representation Classifier Fusion

    Directory of Open Access Journals (Sweden)

    Zhang Xinzheng

    2017-10-01

    Full Text Available In this paper, we present a Synthetic Aperture Radar (SAR) image target recognition algorithm based on multi-feature multiple representation learning classifier fusion. First, it extracts three features from the SAR images, namely principal component analysis, wavelet transform, and Two-Dimensional Slice Zernike Moments (2DSZM) features. Second, we harness the sparse representation classifier and the cooperative representation classifier with the above-mentioned features to get six predictive labels. Finally, we adopt classifier fusion to obtain the final recognition decision. We investigated three different classifier fusion algorithms in our experiments, and the results demonstrate that using Bayesian decision fusion gives the best recognition performance. The method based on multi-feature multiple representation learning classifier fusion integrates the discrimination of multi-features and combines the sparse and cooperative representation classification performance to gain complementary advantages and to improve recognition accuracy. The experiments are based on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, and they demonstrate the effectiveness of the proposed approach.
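
    The label-level fusion step can be illustrated with a simple weighted vote in which each of the six predicted labels contributes according to its classifier's validation accuracy; the log-odds weighting below is a common stand-in, not necessarily the Bayesian decision fusion rule the authors found best.

```python
# Hedged sketch: fuse several classifiers' label decisions by accuracy-
# weighted voting (accuracies and labels below are made up for illustration).
import numpy as np

def weighted_label_fusion(labels, accuracies, n_classes):
    votes = np.zeros(n_classes)
    for lab, acc in zip(labels, accuracies):
        votes[lab] += np.log(acc / (1 - acc))   # log-odds weight per classifier
    return int(np.argmax(votes))

labels = [2, 2, 1, 2, 0, 2]                      # six feature/classifier outputs
accs = [0.90, 0.85, 0.70, 0.88, 0.65, 0.92]      # per-classifier accuracies
print(weighted_label_fusion(labels, accs, n_classes=3))   # -> 2
```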

  1. Data Stream Classification Based on the Gamma Classifier

    Directory of Open Access Journals (Sweden)

    Abril Valeria Uriarte-Arcia

    2015-01-01

    Full Text Available The ever-increasing data generation confronts us with the problem of handling online massive amounts of information. One of the biggest challenges is how to extract valuable information from these massive continuous data streams during single scanning. In a data stream context, data arrive continuously at high speed; therefore the algorithms developed to address this context must be efficient regarding memory and time management and capable of detecting changes over time in the underlying distribution that generated the data. This work describes a novel method for the task of pattern classification over a continuous data stream based on an associative model. The proposed method is based on the Gamma classifier, which is inspired by the Alpha-Beta associative memories, both of which are supervised pattern recognition models. The proposed method is capable of handling the space and time constraints inherent to data stream scenarios. The Data Streaming Gamma classifier (DS-Gamma classifier) implements a sliding window approach to provide concept drift detection and a forgetting mechanism. In order to test the classifier, several experiments were performed using different data stream scenarios with real and synthetic data streams. The experimental results show that the method exhibits competitive performance when compared to other state-of-the-art algorithms.
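
    The sliding-window-with-forgetting pattern is easy to sketch; GaussianNB below is only a stand-in for the Gamma associative classifier (which is not publicly packaged), and the window and retraining parameters are assumptions.

```python
# Hedged sketch of a stream classifier with a sliding window: old samples
# drop out of the bounded buffer (forgetting), and periodic retraining lets
# the model track concept drift.
from collections import deque
import numpy as np
from sklearn.naive_bayes import GaussianNB

class SlidingWindowClassifier:
    def __init__(self, window=200, retrain_every=50):
        self.buf = deque(maxlen=window)   # forgetting: oldest samples drop out
        self.retrain_every = retrain_every
        self.model, self.seen = GaussianNB(), 0

    def update(self, x, y):
        self.buf.append((x, y))
        self.seen += 1
        if self.seen % self.retrain_every == 0:
            X, Y = zip(*self.buf)
            self.model.fit(np.array(X), np.array(Y))

    def predict(self, x):
        return self.model.predict(np.array([x]))[0]

# toy stream with abrupt concept drift at t = 1000
rng = np.random.default_rng(1)
clf, correct = SlidingWindowClassifier(), 0
for t in range(2000):
    x = rng.normal(size=2)
    y = int(x[0] > 0) if t < 1000 else int(x[0] < 0)   # drifted concept
    if t >= 300:
        correct += clf.predict(x) == y
    clf.update(x, y)
print("stream accuracy after warm-up:", correct / 1700)
```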

  2. Robust Template Decomposition without Weight Restriction for Cellular Neural Networks Implementing Arbitrary Boolean Functions Using Support Vector Classifiers

    Directory of Open Access Journals (Sweden)

    Yih-Lon Lin

    2013-01-01

    Full Text Available If the given Boolean function is linearly separable, a robust uncoupled cellular neural network can be designed as a maximal margin classifier. On the other hand, if the given Boolean function is linearly separable but has a small geometric margin, or if it is not linearly separable, a popular approach is to find a sequence of robust uncoupled cellular neural networks implementing the given Boolean function. In past research using this approach, the control template parameters and thresholds were restricted to a given finite set of integers, which is unnecessary for the template design. In this study, we try to remove this restriction. Minterm- and maxterm-based decomposition algorithms utilizing soft margin and maximal margin support vector classifiers are proposed to design a sequence of robust templates implementing an arbitrary Boolean function. Several illustrative examples are simulated to demonstrate the efficiency of the proposed method by comparing our results with those produced by other decomposition methods with restricted weights.
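
    The first sentence is directly checkable: for a linearly separable Boolean function, a linear SVM yields maximal-margin weights and a threshold that realize the function. A sketch for the 3-input majority function (our example, not the paper's):

```python
# Hedged illustration: derive maximal-margin "template" weights for a
# linearly separable Boolean function with a hard-margin linear SVM.
import numpy as np
from itertools import product
from sklearn.svm import SVC

X = np.array(list(product([-1, 1], repeat=3)))   # all +/-1 input patterns
y = np.sign(X.sum(axis=1))                       # 3-input majority function

svm = SVC(kernel="linear", C=1e6).fit(X, y)      # large C ~ hard margin
w, b = svm.coef_[0], svm.intercept_[0]
print("template weights:", w, "threshold:", b)
assert all(np.sign(X @ w + b) == y)              # the template realizes f
```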

  3. A systems biology-based classifier for hepatocellular carcinoma diagnosis.

    Directory of Open Access Journals (Sweden)

    Yanqiong Zhang

    Full Text Available AIM: The diagnosis of hepatocellular carcinoma (HCC) in the early stage is crucial to the application of curative treatments, which are the only hope for increasing the life expectancy of patients. Recently, several large-scale studies have shed light on this problem through analysis of gene expression profiles to identify markers correlated with HCC progression. However, those marker sets shared few genes in common and were poorly validated using independent data. Therefore, we developed a systems biology based classifier by combining the differential gene expression with topological features of human protein interaction networks to enhance the ability of HCC diagnosis. METHODS AND RESULTS: In the Oncomine platform, genes differentially expressed in HCC tissues relative to their corresponding normal tissues were filtered by a corrected Q value cut-off and Concept filters. The identified genes that are common to different microarray datasets were chosen as the candidate markers. Then, their networks were analyzed by GeneGO Meta-Core software and the hub genes were chosen. After that, an HCC diagnostic classifier was constructed by Partial Least Squares modeling based on the microarray gene expression data of the hub genes. Validations of diagnostic performance showed that this classifier had high predictive accuracy (85.88∼92.71%) and area under the ROC curve (approximating 1.0), and that the network topological features integrated into this classifier contribute greatly to improving the predictive performance. Furthermore, it has been demonstrated that this modeling strategy is not only applicable to HCC, but also to other cancers. CONCLUSION: Our analysis suggests that the systems biology-based classifier that combines the differential gene expression and topological features of the human protein interaction network may enhance the diagnostic performance of the HCC classifier.

  4. Molecular markers in the surgical margin of oral carcinomas

    DEFF Research Database (Denmark)

    Bilde, A.; Buchwald, C. von; Dabelsteen, E.

    2009-01-01

    epithelium in the surgical resection margin may explain the local recurrence rate. The purpose of this study is to investigate the presence of senescence markers, which may represent early malignant changes in the margin that in routine pathological evaluations are classified as histologically normal...

  5. Ensemble of classifiers based network intrusion detection system performance bound

    CSIR Research Space (South Africa)

    Mkuzangwe, Nenekazi NP

    2017-11-01

    Full Text Available This paper provides a performance bound of a network intrusion detection system (NIDS) that uses an ensemble of classifiers. Currently researchers rely on implementing the ensemble of classifiers based NIDS before they can determine the performance...

  6. Local curvature analysis for classifying breast tumors: Preliminary analysis in dedicated breast CT

    International Nuclear Information System (INIS)

    Lee, Juhun; Nishikawa, Robert M.; Reiser, Ingrid; Boone, John M.; Lindfors, Karen K.

    2015-01-01

    Purpose: The purpose of this study is to measure the effectiveness of local curvature measures as novel image features for classifying breast tumors. Methods: A total of 119 breast lesions from 104 noncontrast dedicated breast computed tomography images of women were used in this study. Volumetric segmentation was done using a seed-based segmentation algorithm and then a triangulated surface was extracted from the resulting segmentation. Total, mean, and Gaussian curvatures were then computed. Normalized curvatures were used as classification features. In addition, traditional image features were also extracted and a forward feature selection scheme was used to select the optimal feature set. Logistic regression was used as a classifier and leave-one-out cross-validation was utilized to evaluate the classification performances of the features. The area under the receiver operating characteristic (ROC) curve (AUC) was used as a figure of merit. Results: Among curvature measures, the normalized total curvature (C_T) showed the best classification performance (AUC of 0.74), while the others showed no classification power individually. Five traditional image features (two shape, two margin, and one texture descriptors) were selected via the feature selection scheme, and the resulting classifier achieved an AUC of 0.83. Among those five features, the radial gradient index (RGI), which is a margin descriptor, showed the best classification performance (AUC of 0.73). A classifier combining RGI and C_T yielded an AUC of 0.81, showing similar performance (i.e., no statistically significant difference) to the classifier with the above five traditional image features. Additional comparisons in AUC values between classifiers using different combinations of traditional image features and C_T were conducted. The results showed that C_T was able to replace the other four image features for the classification task. Conclusions: The normalized curvature measure
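
    The evaluation protocol described here (logistic regression, leave-one-out cross-validation, AUC figure of merit) is reproducible in a few lines; the synthetic features below merely stand in for the curvature and traditional image features.

```python
# Sketch of LOO-validated logistic regression scored by AUC.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

X, y = make_classification(n_samples=119, n_features=5, random_state=0)

probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOO AUC:", round(roc_auc_score(y, probs), 3))
```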

  7. Interface Prostheses With Classifier-Feedback-Based User Training.

    Science.gov (United States)

    Fang, Yinfeng; Zhou, Dalin; Li, Kairu; Liu, Honghai

    2017-11-01

    It is evident that user training significantly affects the performance of pattern-recognition-based myoelectric prosthetic device control. Despite plausible classification accuracy on offline datasets, online accuracy usually suffers from changes in physiological conditions and electrode displacement. The user's ability to generate consistent electromyographic (EMG) patterns can be enhanced via proper user training strategies in order to improve online performance. This study proposes a clustering-feedback strategy that provides real-time feedback to users by means of a visualized online EMG signal input as well as the centroids of the training samples, whose dimensionality is reduced to a minimal number by dimension reduction. Clustering feedback provides a criterion that guides users to adjust motion gestures and muscle contraction forces intentionally. The experimental results have demonstrated that hand motion recognition accuracy increases steadily along the progress of the clustering-feedback-based user training, while conventional classifier-feedback methods, i.e., label feedback, hardly achieve any improvement. The results conclude that the use of proper classifier feedback can accelerate the process of user training, and imply a prosperous future for amputees with limited or no experience in pattern-recognition-based prosthetic device manipulation.

  8. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Khan, L.; Israël, Menno; Petrushin, V.A.; van den Broek, Egon; van der Putten, Peter

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  9. Feature and score fusion based multiple classifier selection for iris recognition.

    Science.gov (United States)

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature- and score-fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique is applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al.

  10. Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2014-01-01

    Full Text Available The aim of this work is to propose a new feature- and score-fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique is applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al.

  11. Analysis and minimization of overtraining effect in rule-based classifiers for computer-aided diagnosis

    International Nuclear Information System (INIS)

    Li Qiang; Doi Kunio

    2006-01-01

    Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists detect various lesions in medical images. In CAD schemes, classifiers play a key role in achieving a high lesion detection rate and a low false-positive rate. Although many popular classifiers such as linear discriminant analysis and artificial neural networks have been employed in CAD schemes for reduction of false positives, a rule-based classifier has probably been the simplest and most frequently used one since the early days of development of various CAD schemes. However, with existing rule-based classifiers, there are major disadvantages that significantly reduce their practicality and credibility. The disadvantages include manual design, poor reproducibility, poor evaluation methods such as resubstitution, and a large overtraining effect. An automated rule-based classifier with a minimized overtraining effect can overcome or significantly reduce the extent of the above-mentioned disadvantages. In this study, we developed an 'optimal' method for the selection of cutoff thresholds and a fully automated rule-based classifier. Experimental results performed with Monte Carlo simulation and a real lung nodule CT data set demonstrated that the automated threshold selection method can completely eliminate overtraining effect in the procedure of cutoff threshold selection, and thus can minimize overall overtraining effect in the constructed rule-based classifier. We believe that this threshold selection method is very useful in the construction of automated rule-based classifiers with minimized overtraining effect

  12. SVM Classifiers: The Objects Identification on the Base of Their Hyperspectral Features

    Directory of Open Access Journals (Sweden)

    Demidova Liliya

    2017-01-01

    Full Text Available The problem of object identification based on hyperspectral features is considered. We propose using SVM classifiers based on a modified PSO algorithm, adapted to the specifics of the object identification problem. Results of object identification based on hyperspectral features using the SVM classifiers are presented.

  13. The scenario-based generalization of radiation therapy margins

    International Nuclear Information System (INIS)

    Fredriksson, Albin; Bokrantz, Rasmus

    2016-01-01

    We give a scenario-based treatment plan optimization formulation that is equivalent to planning with geometric margins if the scenario doses are calculated using the static dose cloud approximation. If the scenario doses are instead calculated more accurately, then our formulation provides a novel robust planning method that overcomes many of the difficulties associated with previous scenario-based robust planning methods. In particular, our method protects only against uncertainties that can occur in practice, it gives a sharp dose fall-off outside high dose regions, and it avoids underdosage of the target in ‘easy’ scenarios. The method shares the benefits of the previous scenario-based robust planning methods over geometric margins for applications where the static dose cloud approximation is inaccurate, such as irradiation with few fields and irradiation with ion beams. These properties are demonstrated on a suite of phantom cases planned for treatment with scanned proton beams subject to systematic setup uncertainty. (paper)

  14. Effective diagnosis of Alzheimer’s disease by means of large margin-based methodology

    Directory of Open Access Journals (Sweden)

    Chaves Rosa

    2012-07-01

    Full Text Available Abstract Background Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in the Alzheimer’s Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. Methods We propose a novel combination of feature extraction techniques to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the two latter also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. Results Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: (i) a linear transformation of the PLS or PCA reduced data, (ii) a feature reduction technique, and (iii) a classifier (with Euclidean, Mahalanobis or Energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when an NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. Conclusions All the proposed methods turned out to be valid solutions for the presented problem. One of the advances is the robustness of the LMNN algorithm, which not only provides a higher separation rate between

  15. Learning to Detect Traffic Incidents from Data Based on Tree Augmented Naive Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Dawei Li

    2017-01-01

    Full Text Available This study develops a tree augmented naive Bayesian (TAN) classifier based incident detection algorithm. Compared with the Bayesian network based detection algorithms developed in previous studies, this algorithm has less dependency on experts’ knowledge. The structure of the TAN classifier for incident detection is learned from data. The discretization of continuous attributes is processed automatically using an entropy-based method. A simulation dataset on a section of the Ayer Rajah Expressway (AYE) in Singapore is used to demonstrate the development of the proposed algorithm, including wavelet denoising, normalization, entropy-based discretization, and structure learning. The performance of the TAN based algorithm is evaluated against the previously developed Bayesian network (BN) based and multilayer feed forward (MLF) neural network based algorithms with the same AYE data. The experimental results show that the TAN based algorithm performs better than the BN classifiers and has a similar performance to the MLF based algorithm. However, the TAN based algorithm would have a wider range of applications because the theory of TAN classifiers is much less complicated than that of MLF. The experiments also show that the TAN classifier based algorithm has a significant advantage in the speed of model training and calibration compared with MLF.
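
    Entropy-based discretization of a continuous attribute can be sketched as an information-gain search over cut points; this shows a single binary split and omits the recursive MDL stopping rule often used in practice. The toy traffic data are hypothetical.

```python
# Sketch: choose the cut point maximizing information gain of the labels.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_cut(values, labels):
    order = np.argsort(values)
    v, lab = values[order], labels[order]
    base, n = entropy(lab), len(lab)
    best_gain, best_t = -1.0, None
    for i in range(1, n):
        if v[i] == v[i - 1]:
            continue                       # only cut between distinct values
        gain = base - (i / n) * entropy(lab[:i]) - ((n - i) / n) * entropy(lab[i:])
        if gain > best_gain:
            best_gain, best_t = gain, (v[i] + v[i - 1]) / 2
    return best_t, best_gain

rng = np.random.default_rng(0)
speed = np.r_[rng.normal(60, 5, 100), rng.normal(30, 5, 100)]   # traffic speeds
incident = np.r_[np.zeros(100), np.ones(100)]                   # toy labels
print(best_cut(speed, incident))   # a cut near 45 separates the two regimes
```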

  16. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    OpenAIRE

    M. Favorskaya; A. Nosov; A. Popov

    2015-01-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at all time instants and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain skin dete...

  17. A support vector machine (SVM) based voltage stability classifier

    Energy Technology Data Exchange (ETDEWEB)

    Dosano, R.D.; Song, H. [Kunsan National Univ., Kunsan, Jeonbuk (Korea, Republic of); Lee, B. [Korea Univ., Seoul (Korea, Republic of)

    2007-07-01

    Power system stability has become even more complex and critical with the advent of deregulated energy markets and the growing desire to fully utilize existing transmission infrastructure. The economic pressure on electricity markets forces the operation of power systems and components to their limits of capacity and performance. System conditions can be more exposed to instability due to greater uncertainty in day-to-day system operations and an increase in the number of potential sources of system disturbances, potentially resulting in voltage instability. This paper proposed a support vector machine (SVM) based power system voltage stability classifier using local measurements of voltage and active power of load. It described the procedure for fast classification of long-term voltage stability using the SVM algorithm. The application of the SVM based voltage stability classifier was presented with reference to the choice of input parameters; input data preconditioning; moving window for feature vector; determination of learning samples; and other considerations in SVM applications. The paper presented a case study with numerical examples of an 11-bus test system. The test results for the feasibility study demonstrated that the classifier could offer excellent classification performance with time-series measurements in terms of long-term voltage stability. 9 refs., 14 figs.

  18. Correlation Dimension-Based Classifier

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    2014-01-01

    Vol. 44, No. 12 (2014), pp. 2253-2263 ISSN 2168-2267 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords: classifier * multidimensional data * correlation dimension * scaling exponent * polynomial expansion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014

  19. Classifier models and architectures for EEG-based neonatal seizure detection

    International Nuclear Information System (INIS)

    Greene, B R; Marnane, W P; Lightbody, G; Reilly, R B; Boylan, G B

    2008-01-01

    Neonatal seizures are the most common neurological emergency in the neonatal period and are associated with a poor long-term outcome. Early detection and treatment may improve prognosis. This paper aims to develop an optimal set of parameters and a comprehensive scheme for patient-independent multi-channel EEG-based neonatal seizure detection. We employed a dataset containing 411 neonatal seizures. The dataset consists of multi-channel EEG recordings with a mean duration of 14.8 h from 17 neonatal patients. Early-integration and late-integration classifier architectures were considered for the combination of information across EEG channels. Three classifier models based on linear discriminants, quadratic discriminants and regularized discriminants were employed. Furthermore, the effect of electrode montage was considered. The best performing seizure detection system was found to be an early integration configuration employing a regularized discriminant classifier model. A referential EEG montage was found to outperform the more standard bipolar electrode montage for automated neonatal seizure detection. A cross-fold validation estimate of the classifier performance for the best performing system yielded 81.03% of seizures correctly detected with a false detection rate of 3.82%. With post-processing, the false detection rate was reduced to 1.30% with 59.49% of seizures correctly detected. These results represent a comprehensive illustration that robust reliable patient-independent neonatal seizure detection is possible using multi-channel EEG

  20. Model-Based Systems Engineering Approach to Managing Mass Margin

    Science.gov (United States)

    Chung, Seung H.; Bayer, Todd J.; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Christopher; Lam, Doris

    2012-01-01

    When designing a flight system from concept through implementation, one of the fundamental systems engineering tasks is managing the mass margin and a mass equipment list (MEL) of the flight system. While generating a MEL and computing a mass margin is conceptually a trivial task, maintaining consistent and correct MELs and mass margins can be challenging due to the current practices of maintaining duplicate information in various forms, such as diagrams and tables, and in various media, such as files and emails. We have overcome this challenge through a model-based systems engineering (MBSE) approach within which we allow only a single-source-of-truth. In this paper we describe the modeling patterns used to capture the single-source-of-truth and the views that have been developed for the Europa Habitability Mission (EHM) project, a mission concept study, at the Jet Propulsion Laboratory (JPL).

  1. Effective Heart Disease Detection Based on Quantitative Computerized Traditional Chinese Medicine Using Representation Based Classifiers

    Directory of Open Access Journals (Sweden)

    Ting Shu

    2017-01-01

    Full Text Available At present, heart disease is the number one cause of death worldwide. Traditionally, heart disease is commonly detected using blood tests, electrocardiograms, cardiac computerized tomography scans, cardiac magnetic resonance imaging, and so on. However, these traditional diagnostic methods are time consuming and/or invasive. In this paper, we propose an effective noninvasive computerized method based on facial images to quantitatively detect heart disease. Specifically, facial key block color features are extracted from facial images and analyzed using the Probabilistic Collaborative Representation Based Classifier. The idea of facial key block color analysis is founded in Traditional Chinese Medicine. A new dataset consisting of 581 heart disease and 581 healthy samples was used to evaluate the proposed method. In order to optimize the Probabilistic Collaborative Representation Based Classifier, an analysis of its parameters was performed. According to the experimental results, the proposed method obtains the highest accuracy compared with other classifiers and is proven to be effective at heart disease detection.

  2. Univariate decision tree induction using maximum margin classification

    OpenAIRE

    Yıldız, Olcay Taner

    2012-01-01

    In many pattern recognition applications, decision trees are tried first due to their simplicity and easily interpretable nature. In this paper, we propose a new decision tree learning algorithm called univariate margin tree in which, for each continuous attribute, the best split is found using convex optimization. Our simulation results on 47 data sets show that the novel margin tree classifier performs at least as well as C4.5 and linear discriminant tree (LDT) with a similar time complexity. F...

  3. Finger vein identification using fuzzy-based k-nearest centroid neighbor classifier

    Science.gov (United States)

    Rosdi, Bakhtiar Affendi; Jaafar, Haryati; Ramli, Dzati Athiar

    2015-02-01

    In this paper, a new approach for personal identification using finger vein images is presented. The finger vein is an emerging biometric that has attracted the attention of researchers in the biometrics area. Compared to other biometric traits such as the face, fingerprint and iris, the finger vein is more secure and hard to counterfeit since the features are inside the human body. So far, most researchers have focused on how to extract robust features from the captured vein images; not much research has been conducted on the classification of the extracted features. In this paper, a new classifier called fuzzy-based k-nearest centroid neighbor (FkNCN) is applied to classify the finger vein image. The proposed FkNCN employs a surrounding rule to obtain the k-nearest centroid neighbors based on the spatial distributions of the training images and their distance to the test image. Then, the fuzzy membership function is utilized to assign the test image to the class which is most frequently represented by the k-nearest centroid neighbors. Experimental evaluation using our own database, which was collected from 492 fingers, shows that the proposed FkNCN performs better than the k-nearest neighbor, k-nearest centroid neighbor and fuzzy-based k-nearest neighbor classifiers. This shows that the proposed classifier is able to identify finger vein images effectively.
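
    A hedged sketch of the FkNCN decision rule: neighbours are selected so that the running centroid stays close to the test image (the "surrounding" idea), and fuzzy memberships then weight each neighbour by its distance. The fuzzifier m and the exact membership form follow common fuzzy k-NN practice and are assumptions, not necessarily the paper's formulation.

```python
# Hedged sketch of a fuzzy k-nearest centroid neighbour classifier.
import numpy as np

def kncn(query, X, k):
    # greedily pick neighbours whose running centroid is nearest the query
    chosen, rest = [], list(range(len(X)))
    for _ in range(k):
        best = min(rest, key=lambda i: np.linalg.norm(
            np.mean(X[chosen + [i]], axis=0) - query))
        chosen.append(best)
        rest.remove(best)
    return chosen

def fkncn_predict(query, X, y, k=5, m=2.0, eps=1e-9):
    idx = kncn(query, X, k)
    # fuzzy k-NN style inverse-distance weights with fuzzifier m
    w = 1.0 / (np.linalg.norm(X[idx] - query, axis=1) ** (2 / (m - 1)) + eps)
    classes = np.unique(y)
    scores = [w[y[idx] == c].sum() for c in classes]   # fuzzy memberships
    return classes[int(np.argmax(scores))]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.r_[np.zeros(50, int), np.ones(50, int)]
print(fkncn_predict(np.array([3.5, 3.5]), X, y))   # expected: class 1
```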

  4. Lung Nodule Image Classification Based on Local Difference Pattern and Combined Classifier.

    Science.gov (United States)

    Mao, Keming; Deng, Zhuofu

    2016-01-01

    This paper proposes a novel lung nodule classification method for low-dose CT images. The method includes two stages. First, a Local Difference Pattern (LDP) is proposed to encode the feature representation, which is extracted by comparing intensity differences along circular regions centered at the lung nodule. Then, a single-center classifier is trained based on the LDP. Due to the diversity of the feature distributions of different classes, the training images are further clustered into multiple cores and a multicenter classifier is constructed. The two classifiers are combined to make the final decision. Experimental results on a public dataset show the superior performance of the LDP and the combined classifier.
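
    The LDP feature can be pictured as intensity statistics on concentric circles around the nodule centre, with differences between successive rings encoded as a pattern; the radii, sampling density, and sign encoding below are illustrative assumptions based only on the abstract.

```python
# Hedged sketch of a Local Difference Pattern over concentric rings.
import numpy as np

def local_difference_pattern(img, center, radii, samples=64):
    cy, cx = center
    theta = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    means = []
    for r in radii:
        ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
        xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
        means.append(img[ys, xs].mean())       # mean intensity on this ring
    diffs = np.diff(means)                      # differences between rings
    return (diffs > 0).astype(int)              # binary pattern code

img = np.fromfunction(lambda y, x: np.hypot(y - 32, x - 32), (64, 64))
print(local_difference_pattern(img, (32, 32), radii=[2, 4, 8, 16]))
# -> [1 1 1]: intensity grows with radius in this synthetic example
```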

  5. Lung Nodule Image Classification Based on Local Difference Pattern and Combined Classifier

    Directory of Open Access Journals (Sweden)

    Keming Mao

    2016-01-01

    Full Text Available This paper proposes a novel lung nodule classification method for low-dose CT images. The method includes two stages. First, a Local Difference Pattern (LDP) is proposed to encode the feature representation, which is extracted by comparing intensity differences along circular regions centered at the lung nodule. Then, a single-center classifier is trained based on the LDP. Due to the diversity of the feature distributions of different classes, the training images are further clustered into multiple cores and a multicenter classifier is constructed. The two classifiers are combined to make the final decision. Experimental results on a public dataset show the superior performance of the LDP and the combined classifier.

  6. Evaluation of a rapid LMP-based approach for calculating marginal unit emissions

    International Nuclear Information System (INIS)

    Rogers, Michelle M.; Wang, Yang; Wang, Caisheng; McElmurry, Shawn P.; Miller, Carol J.

    2013-01-01

    Highlights: • Pollutant emissions estimated based on locational marginal price and eGRID data. • Stochastic model using the IEEE RTS-96 system used to evaluate the LMP approach. • Incorporating a membership function enhanced the reliability of pollutant estimates. • Error in pollutant estimates typically ~2% for NOX and SO2. - Abstract: To evaluate the sustainability of systems that draw power from electrical grids, there is a need to rapidly and accurately quantify pollutant emissions associated with power generation. Air emissions resulting from electricity generation vary widely among power plants based on the types of fuel consumed, the efficiency of the plant, and the type of pollution control systems in service. To address this need, methods for estimating real-time air emissions from power generation based on locational marginal prices (LMPs) have been developed. Based on LMPs, the type of the marginal generating unit can be identified and pollutant emissions estimated. While conceptually demonstrated, this LMP approach has not been rigorously tested. The purpose of this paper is to (1) improve the LMP method for predicting pollutant emissions and (2) evaluate the reliability of this technique through power system simulations. Previous LMP methods were expanded to include marginal emissions estimates using an LMP Emissions Estimation Method (LEEM). The accuracy of emission estimates was further improved by incorporating a probability distribution function that characterizes generator fuel costs and a membership function (MF) capable of accounting for multiple marginal generation units. Emission estimates were compared to those predicted from power flow simulations. The improved LEEM was found to predict the marginal generation type approximately 70% of the time based on typical system conditions (e.g. loads and fuel costs) without the use of an MF. With the addition of an MF, the LEEM was found to provide emission estimates with
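
    The core LEEM lookup — map an LMP to a marginal fuel type via price breakpoints, then apply an emission factor — can be sketched as follows; the breakpoints and factors are placeholders, not eGRID values, and the membership-function refinement for multiple marginal units is omitted.

```python
# Hedged sketch of an LMP-based marginal emissions estimate.
FUEL_BY_PRICE = [          # ($/MWh upper bound, marginal fuel) - assumed
    (25.0, "coal"),
    (60.0, "natural_gas"),
    (float("inf"), "oil"),
]
EMISSION_FACTORS = {       # kg CO2 per MWh - placeholder numbers
    "coal": 1000.0, "natural_gas": 450.0, "oil": 800.0,
}

def marginal_emissions(lmp_dollars_per_mwh, mwh):
    # infer the marginal generating unit's fuel from the price, then scale
    fuel = next(f for bound, f in FUEL_BY_PRICE if lmp_dollars_per_mwh <= bound)
    return fuel, EMISSION_FACTORS[fuel] * mwh

print(marginal_emissions(42.0, 10))   # ('natural_gas', 4500.0)
```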

  7. Double Ramp Loss Based Reject Option Classifier

    Science.gov (United States)

    2015-05-22

    The proposed double ramp loss LDR is a difference of convex (DC) functions; to minimize it, a DC programming approach [1] is used. LDR places no restriction on ρ for it to be an upper bound of the 0-d-1 loss L0-d-1. Experiments illustrate a classifier learnt using the LDR-based approach (C = 100, μ = 1, d = 0.2), with filled circles and triangles representing the support vectors.

  8. WEB-BASED ADAPTIVE TESTING SYSTEM (WATS FOR CLASSIFYING STUDENTS ACADEMIC ABILITY

    Directory of Open Access Journals (Sweden)

    Jaemu LEE,

    2012-08-01

    Full Text Available Computer Adaptive Testing (CAT) has been highlighted as a promising assessment method to fulfill two testing purposes: estimating student academic ability and classifying student academic level. In this paper, we introduce the Web-based Adaptive Testing System (WATS), developed to support cost-effective assessment for classifying students' ability into different academic levels. Instead of using a traditional paper-and-pencil test, the WATS is expected to serve as an alternative method to promptly diagnose and identify underachieving students through Web-based testing. The WATS can also help provide students with appropriate learning content and necessary academic support in time. In this paper, the theoretical background and structure of WATS, the item construction process based upon item response theory, and the user interfaces of WATS are discussed.
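
    The record says only that item construction is "based upon item response theory"; the sketch below shows the standard adaptive-testing loop this implies under a 2-parameter logistic (2PL) model: re-estimate ability after each response and administer the unused item with maximum Fisher information. The item bank, grid estimator, and stopping rule are illustrative assumptions, not WATS internals.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, size=50)   # item discriminations (placeholder bank)
b = rng.normal(0.0, 1.0, size=50)    # item difficulties

def p_correct(theta, a, b):          # 2PL probability of a correct response
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):        # 2PL item information
    p = p_correct(theta, a, b)
    return a**2 * p * (1 - p)

def estimate_theta(responses, items, grid=np.linspace(-4, 4, 161)):
    """Crude maximum-likelihood ability estimate on a grid."""
    ll = np.zeros_like(grid)
    for u, j in zip(responses, items):
        p = p_correct(grid, a[j], b[j])
        ll += u * np.log(p) + (1 - u) * np.log(1 - p)
    return grid[np.argmax(ll)]

true_theta, used, resp = 0.7, [], []
theta_hat = 0.0
for _ in range(10):                  # administer 10 adaptive items
    info = fisher_info(theta_hat, a, b)
    info[used] = -np.inf             # never reuse an item
    j = int(np.argmax(info))
    used.append(j)
    resp.append(int(rng.random() < p_correct(true_theta, a[j], b[j])))
    theta_hat = estimate_theta(resp, used)
print("estimated ability:", theta_hat)
```

    Classifying students into academic levels then amounts to thresholding the final ability estimate.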

  9. Fuzzy prototype classifier based on items and its application in recommender system

    Directory of Open Access Journals (Sweden)

    Mei Cai

    2017-01-01

    Full Text Available Currently, recommender systems (RS) are incorporating implicit information from the social circles of the Internet. The implicit social information in the human mind is not easy to reflect in appropriate decision making techniques. This paper makes two contributions. First, we develop an item-based prototype classifier (IPC) in which a prototype represents a social circle's preferences, as a pattern classification technique. We assume a social circle is distinguished from others by the items its members like. The prototype structure of the classifier is defined by two 2-dimensional matrices. We use information gain and an OWA aggregator to construct a feature space. The item-based classifier assigns a new item to prototypes with different prototypicalities. We reform a typical data set, the Iris data set from the UCI Machine Learning Repository, to verify our fuzzy prototype classifier. The second contribution of this paper is the application of IPC in a recommender system to solve new-item cold-start problems. We modify the MovieLens dataset to perform experimental demonstrations of the proposed ideas.

  10. Hot roller embossing system equipped with a temperature margin-based controller

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seyoung, E-mail: seyoungkim@kimm.re.kr; Son, Youngsu; Lee, Sunghee; Ham, Sangyong; Kim, Byungin [Department of Robotics and Mechatronics, Korea Institute of Machinery and Materials (KIMM), Daejeon (Korea, Republic of)

    2014-08-15

    A temperature control system was proposed for hot roller embossing. The roll surface was heated using induction coils and cooled with a circulating chilled water system. The temperature of the roll surface was precisely controlled by a temperature margin-based control algorithm that we developed. Implementation of the control system reduced deviations in the roll surface temperature to less than ±2 °C. The tight temperature control and the ability to rapidly increase and decrease the roll temperature will allow optimum operating parameters to be developed quickly. The temperature margin-based controller could also be used to optimize the time course of electrical power and shorten the cooling time by choosing an appropriate temperature margin, possibly for limited power consumption. The chiller-equipped heating roll with the proposed control algorithm is expected to decrease the time needed to determine the optimal embossing process.

  11. Hot roller embossing system equipped with a temperature margin-based controller

    International Nuclear Information System (INIS)

    Kim, Seyoung; Son, Youngsu; Lee, Sunghee; Ham, Sangyong; Kim, Byungin

    2014-01-01

    A temperature control system was proposed for hot roller embossing. The roll surface was heated using induction coils and cooled with a circulating chilled water system. The temperature of the roll surface was precisely controlled by a temperature margin-based control algorithm that we developed. Implementation of the control system reduced deviations in the roll surface temperature to less than ±2 °C. The tight temperature control and the ability to rapidly increase and decrease the roll temperature will allow optimum operating parameters to be developed quickly. The temperature margin-based controller could also be used to optimize the time course of electrical power and shorten the cooling time by choosing an appropriate temperature margin, possibly for limited power consumption. The chiller-equipped heating roll with the proposed control algorithm is expected to decrease the time needed to determine the optimal embossing process.

  12. FERAL : Network-based classifier with application to breast cancer outcome prediction

    NARCIS (Netherlands)

    Allahyar, A.; De Ridder, J.

    2015-01-01

    Motivation: Breast cancer outcome prediction based on gene expression profiles is an important strategy for personalized patient care. To improve the performance and consistency of the markers discovered by initial molecular classifiers, network-based outcome prediction methods (NOPs) have been proposed.

  13. Re-appraisal of the Magma-rich versus Magma-poor Paradigm at Rifted Margins: consequences for breakup processes

    Science.gov (United States)

    Tugend, J.; Gillard, M.; Manatschal, G.; Nirrengarten, M.; Harkin, C. J.; Epin, M. E.; Sauter, D.; Autin, J.; Kusznir, N. J.; McDermott, K.

    2017-12-01

    Rifted margins are often classified based on their magmatic budget only. Magma-rich margins are commonly considered to have excess decompression melting at lithospheric breakup compared with steady-state seafloor spreading, while magma-poor margins have suppressed melting. New observations derived from high-quality geophysical data sets and drill-hole data have revealed the diversity of rifted margin architecture and the variable distribution of magmatism. Recent studies suggest, however, that rifted margins have more complex and polyphase tectono-magmatic evolutions than previously assumed and cannot be characterized based on the observed volume of magma alone. We compare the magmatic budget related to lithospheric breakup along two high-resolution long-offset deep reflection seismic profiles across the SE-Indian (magma-poor) and Uruguayan (magma-rich) rifted margins. Resolving the volume of magmatic additions is difficult. Interpretations are non-unique, and several of them appear plausible for each case, involving variable magmatic volumes and mechanisms to achieve lithospheric breakup. A supposedly 'magma-poor' rifted margin (SE-India) may show a 'magma-rich' lithospheric breakup, whereas a 'magma-rich' rifted margin (Uruguay) does not necessarily show excess magmatism at lithospheric breakup compared with steady-state seafloor spreading. This questions the paradigm that rifted margins can be subdivided into either magma-poor or magma-rich margins. The Uruguayan and other magma-rich rifted margins appear characterized by an early onset of decompression melting relative to crustal breakup. For the converse, where the onset of decompression melting is late compared with the timing of crustal breakup, mantle exhumation can occur (e.g. SE-India). Our work highlights the difficulty in determining a magmatic budget at rifted margins based on seismic reflection data alone, showing the limitations of margin classification based solely on magmatic volumes. The timing of

  14. Effective Sequential Classifier Training for SVM-Based Multitemporal Remote Sensing Image Classification

    Science.gov (United States)

    Guo, Yiqing; Jia, Xiuping; Paull, David

    2018-06-01

    The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the required number of training samples for the classifier training of an incoming image. For each incoming image, a rough classifier is first predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies compared with two state-of-the-art model transfer algorithms. When training data are insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with that obtained without assistance from previous images. These results demonstrate that leveraging a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
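
    The abstract describes predicting an incoming image's classifier from the temporal trend of previous classifiers and then fine-tuning it with current samples. A minimal linear-SVM sketch of that idea follows (linear extrapolation of the weight vector plus a few subgradient steps on the regularized hinge loss); it is a schematic reading, not the authors' implementation.

```python
import numpy as np

def hinge_sgd(w, X, y, lr=0.01, lam=1e-3, epochs=20, rng=None):
    """Fine-tune a linear SVM (labels in {-1, +1}) by subgradient descent."""
    rng = rng or np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            margin = y[i] * (X[i] @ w)
            grad = lam * w - (y[i] * X[i] if margin < 1 else 0.0)
            w = w - lr * grad
    return w

def predict_next_classifier(w_prev):
    """Linear extrapolation of the classifier trajectory:
    w_t ~ 2 * w_{t-1} - w_{t-2}."""
    if len(w_prev) < 2:
        return w_prev[-1]
    return 2 * w_prev[-1] - w_prev[-2]

rng = np.random.default_rng(2)
d = 5
w_hist = [rng.normal(size=d), rng.normal(size=d)]   # classifiers of past images
X_new = rng.normal(size=(30, d))                    # few samples of new image
y_new = np.sign(X_new @ w_hist[-1] + 0.1 * rng.normal(size=30))
w0 = predict_next_classifier(w_hist)                # rough predicted classifier
w_fine = hinge_sgd(w0, X_new, y_new)                # fine-tuned classifier
print(np.mean(np.sign(X_new @ w_fine) == y_new))
```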

  15. Fusion of classifiers for REIS-based detection of suspicious breast lesions

    Science.gov (United States)

    Lederman, Dror; Wang, Xingwei; Zheng, Bin; Sumkin, Jules H.; Tublin, Mitchell; Gur, David

    2011-03-01

    After developing a multi-probe resonance-frequency electrical impedance spectroscopy (REIS) system aimed at detecting women with breast abnormalities that may indicate a developing breast cancer, we have been conducting a prospective clinical study to explore the feasibility of applying this REIS system to classify younger women at higher than average risk of having or developing breast cancer. The system comprises one central probe placed in contact with the nipple, and six additional probes uniformly distributed along an outside circle to be placed in contact with six points on the outer breast skin surface. In this preliminary study, we selected an initial set of 174 examinations on participants that had completed REIS examinations and had clinical status verification. Among these, 66 examinations were recommended for biopsy due to findings of a highly suspicious breast lesion ("positives"), and 108 were determined as negative during imaging-based procedures ("negatives"). A set of REIS-based features, extracted using a mirror-matched approach, was computed and fed into five machine learning classifiers. A genetic algorithm was used to select an optimal subset of features for each of the five classifiers. Three fusion rules, namely the sum rule, weighted sum rule, and weighted median rule, were used to combine the results of the classifiers. Performance evaluation was performed using a leave-one-case-out cross-validation method. The results indicated that REIS may provide a new technology to identify younger women with higher than average risk of having or developing breast cancer. Furthermore, it was shown that fusion rules, such as the weighted median rule and the weighted sum rule, may improve performance as compared with the highest performing single classifier.
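
    The three fusion rules named above are simple to state. The sketch below shows sum, weighted sum, and weighted median fusion of per-classifier suspicion scores for one case; the scores and weights are made-up stand-ins (weights might come from validation performance, for example).

```python
import numpy as np

def sum_rule(scores):
    """scores: (n_classifiers,) array of suspicion scores for one case."""
    return np.mean(scores)

def weighted_sum_rule(scores, weights):
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights * scores) / np.sum(weights)

def weighted_median_rule(scores, weights):
    """Smallest score whose cumulative weight reaches half the total."""
    order = np.argsort(scores)
    s, w = np.asarray(scores)[order], np.asarray(weights, float)[order]
    cum = np.cumsum(w)
    return s[np.searchsorted(cum, 0.5 * cum[-1])]

scores = np.array([0.62, 0.55, 0.81, 0.40, 0.70])  # five classifier outputs
weights = np.array([1.0, 0.8, 1.2, 0.5, 1.0])      # e.g. validation AUCs
print(sum_rule(scores),
      weighted_sum_rule(scores, weights),
      weighted_median_rule(scores, weights))
```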

  16. Discovering mammography-based machine learning classifiers for breast cancer diagnosis.

    Science.gov (United States)

    Ramos-Pollán, Raúl; Guevara-López, Miguel Angel; Suárez-Ortega, Cesar; Díaz-Herrero, Guillermo; Franco-Valiente, Jose Miguel; Rubio-Del-Solar, Manuel; González-de-Posada, Naimy; Vaz, Mario Augusto Pires; Loureiro, Joana; Ramos, Isabel

    2012-08-01

    This work explores the design of mammography-based machine learning classifiers (MLC) and proposes a new method to build MLC for breast cancer diagnosis. We massively evaluated MLC configurations to classify feature vectors extracted from segmented regions (pathological lesion or normal tissue) on craniocaudal (CC) and/or mediolateral oblique (MLO) mammography image views, providing BI-RADS diagnoses. Previously, appropriate combinations of image processing and normalization techniques were applied to reduce image artifacts and enhance mammogram details. The method can be used under different data acquisition circumstances and exploits computer clusters to select well performing MLC configurations. We evaluated 286 cases extracted from the repository owned by HSJ-FMUP, where specialized radiologists segmented regions on CC and/or MLO images (biopsies provided the gold standard). Around 20,000 MLC configurations were evaluated, obtaining classifiers achieving an area under the ROC curve of 0.996 when combining feature vectors extracted from CC and MLO views of the same case.

  17. Marginalization of the Youth

    DEFF Research Database (Denmark)

    Jensen, Niels Rosendal

    2009-01-01

    The article is based on a keynote speech in Bielefeld on the subject "welfare state and marginalized youth", focusing upon the high ambition of expanding schooling in Denmark from 9 to 12 years. The unintended effect may be a new kind of marginalization.

  18. Conference Report: The New Discovery of Margins: Theory-Based Excursions in Marginal Social Fields

    Directory of Open Access Journals (Sweden)

    Babette Kirchner

    2014-05-01

    Full Text Available At this year's spring conference of the Sociology of Knowledge Section of the German Sociological Association, a diverse range of theoretical concepts and multiple empirical insights into different marginal social fields were presented. As in everyday life, drawing a line between center and margin can be seen as an important challenge that must equally be faced in sociology. The socially constructed borderline appears to be highly variable; therefore it has to be delineated or fixed somehow. The construction of margins is necessary for society in general and smaller social groupings alike, to confirm one's own "normal" identity or one's own membership on the fringes. The different contributions exemplify what was established at the beginning of the conference: namely, that society and its margins are defined differently according to the empirical as well as the conceptual focus. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1402148

  19. Contributions to knowledge of the continental margin of Uruguay. Uruguayan continental margin: morphology, geology and identification of the base of the slope

    International Nuclear Information System (INIS)

    Preciozzi, F.

    2014-01-01

    This work addresses the morphology and geology of the Uruguayan continental margin and the identification of the base of its slope. The margin is of the divergent, volcanic, and segmented type. Morphologically, it is constituted by a clearly defined continental shelf, as well as a continental slope whose configuration changes from north to south before passing directly to the abyssal plain.

  20. Just-in-time adaptive classifiers-part II: designing the classifier.

    Science.gov (United States)

    Alippi, Cesare; Roveri, Manuel

    2008-12-01

    Aging effects, environmental changes, thermal drifts, and soft and hard faults affect physical systems by changing their nature and behavior over time. To cope with process evolution, adaptive solutions must be envisaged to track its dynamics; in this direction, adaptive classifiers are generally designed by assuming the stationary hypothesis for the process generating the data, with very few results addressing nonstationary environments. This paper proposes a methodology based on k-nearest neighbor (NN) classifiers for designing adaptive classification systems able to react to changing conditions just-in-time (JIT), i.e., exactly when it is needed. k-NN classifiers have been selected for their training-free phase, the possibility of easily estimating the model complexity k, and of keeping the computational complexity of the classifier under control through suitable data reduction mechanisms. A JIT classifier requires a temporal detection of a (possible) process deviation (an aspect tackled in a companion paper), followed by an adaptive management of the knowledge base (KB) of the classifier to cope with the process change. The novelty of the proposed approach resides in the general framework supporting the real-time update of the KB of the classification system in response to novel information coming from the process, both in stationary conditions (accuracy improvement) and in nonstationary ones (process tracking), and in providing a suitable estimate of k. It is shown that the classification system grants consistency once the change moves the process generating the data into a new stationary state, as is the case in many real applications.
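
    As a rough illustration of the JIT idea (not the authors' algorithm), the sketch below keeps a k-NN knowledge base that grows with incoming labelled samples in stationary conditions and is cut back to recent samples when an external change-detection test fires.

```python
import numpy as np
from collections import deque

class JITKNN:
    """Toy just-in-time k-NN: the knowledge base (KB) grows while the
    process is stationary; when a change detector fires, only recent
    samples are kept so the classifier tracks the new state."""

    def __init__(self, k=5, recent=50):
        self.k, self.recent = k, recent
        self.X, self.y = deque(), deque()

    def add(self, x, label):              # stationary: accuracy improvement
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(label)

    def on_change_detected(self):         # nonstationary: process tracking
        while len(self.X) > self.recent:
            self.X.popleft()              # drop obsolete knowledge
            self.y.popleft()

    def predict(self, x):
        X, y = np.stack(self.X), np.asarray(self.y)
        k = min(self.k, len(y))
        nearest = np.argsort(np.linalg.norm(X - np.asarray(x, float), axis=1))[:k]
        labels, counts = np.unique(y[nearest], return_counts=True)
        return labels[np.argmax(counts)]

clf = JITKNN(k=3, recent=20)
rng = np.random.default_rng(0)
for i in range(200):                      # stream of labelled samples
    label = i % 2
    clf.add(rng.normal(loc=label, size=2), label)
clf.on_change_detected()                  # e.g. signalled by a change test
print(clf.predict([0.9, 1.1]))
```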

  1. Speaker gender identification based on majority vote classifiers

    Science.gov (United States)

    Mezghani, Eya; Charfeddine, Maha; Nicolas, Henri; Ben Amar, Chokri

    2017-03-01

    Speaker gender identification is considered among the most important tools in several multimedia applications, namely in automatic speech recognition, interactive voice response systems, and audio browsing systems. The performance of gender identification systems is closely linked to the selected feature set and the employed classification model. Typical techniques are based on selecting the best performing classification method or searching for the optimum tuning of one classifier's parameters through experimentation. In this paper, we consider a relevant and rich set of features involving pitch, MFCCs, as well as other temporal and frequency-domain descriptors. Five classification models, including decision tree, discriminant analysis, naïve Bayes, support vector machine, and k-nearest neighbor, were evaluated. The three best performing classifiers among the five contribute by majority voting on their scores. Experiments were performed on three different datasets spoken in three languages, English, German, and Arabic, in order to validate the language independence of the proposed scheme. Results confirm that the presented system has reached a satisfying accuracy rate and promising classification performance, thanks to the discriminating abilities and diversity of the used features combined with mid-level statistics.
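
    The voting step described above can be sketched with scikit-learn's VotingClassifier; the three estimators below (SVM, decision tree, k-NN) are an illustrative pick, not necessarily the three best performers found in the paper, and the synthetic features stand in for the pitch/MFCC descriptors.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for pitch/MFCC feature vectors with male/female labels.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

vote = VotingClassifier(
    estimators=[("svm", SVC()),
                ("tree", DecisionTreeClassifier(max_depth=5)),
                ("knn", KNeighborsClassifier(n_neighbors=7))],
    voting="hard",  # majority vote among the three classifiers
)
vote.fit(X, y)
print(vote.score(X, y))
```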

  2. A bench-top hyperspectral imaging system to classify beef from Nellore cattle based on tenderness

    Science.gov (United States)

    Nubiato, Keni Eduardo Zanoni; Mazon, Madeline Rezende; Antonelo, Daniel Silva; Calkins, Chris R.; Naganathan, Govindarajan Konda; Subbiah, Jeyamkondan; da Luz e Silva, Saulo

    2018-03-01

    The aim of this study was to evaluate the accuracy of classification of Nellore beef aged for 0, 7, 14, or 21 days, and of classification based on tenderness and aging period, using a bench-top hyperspectral imaging system. A hyperspectral imaging system (λ = 928-2524 nm) was used to collect hyperspectral images of the Longissimus thoracis et lumborum (aging n = 376 and tenderness n = 345) of Nellore cattle. The image processing steps included selection of a region of interest, extraction of spectra, and identification and evaluation of selected wavelengths for classification. Six linear discriminant models were developed to classify samples based on tenderness and aging period. The model using the first derivative of partial absorbance spectra was able to classify steaks based on tenderness with an overall accuracy of 89.8%. The model using the first derivative of full absorbance spectra was able to classify steaks based on aging period with an overall accuracy of 84.8%. The results demonstrate that the hyperspectral imaging system may be a viable technology for classifying beef based on tenderness and aging period.
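
    The classification pipeline described, the first derivative of absorbance spectra fed to a linear discriminant model, can be sketched generically as follows; the synthetic spectra merely stand in for the 928-2524 nm data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, bands = 120, 200                      # samples x wavelengths (synthetic)
labels = rng.integers(0, 2, size=n)      # e.g. tender vs tough
# Class-dependent slope gives the two groups separable spectral shapes.
spectra = rng.normal(size=(n, bands)) + labels[:, None] * np.linspace(0, 0.5, bands)

# First derivative of the absorbance spectra along the wavelength axis.
d1 = np.diff(spectra, axis=1)

lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, d1, labels, cv=5).mean())
```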

  3. Automatic construction of a recurrent neural network based classifier for vehicle passage detection

    Science.gov (United States)

    Burnaev, Evgeny; Koptelov, Ivan; Novikov, German; Khanipov, Timur

    2017-03-01

    Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for detection of a vehicle passage through a checkpoint. As input to the classifier we use multidimensional signals of various sensors that are installed on the checkpoint. The obtained results demonstrate that the previous approach to handcrafting a classifier, consisting of a set of deterministic rules, can be successfully replaced by automatic RNN training on appropriately labelled data.
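
    A minimal PyTorch sketch of the kind of LSTM-based binary classifier described (multichannel checkpoint sensor signals in, a passage probability out); the architecture, sizes, and random inputs are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PassageClassifier(nn.Module):
    """LSTM over multichannel sensor signals -> binary passage score."""
    def __init__(self, n_sensors=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, n_sensors)
        _, (h, _) = self.lstm(x)       # final hidden state summarizes the window
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)

model = PassageClassifier()
x = torch.randn(4, 100, 8)             # 4 windows, 100 time steps, 8 sensors
loss = nn.functional.binary_cross_entropy(model(x), torch.ones(4))
loss.backward()                        # one illustrative training step
```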

  4. Human factors quantification via boundary identification of flight performance margin

    Directory of Open Access Journals (Sweden)

    Yang Changpeng

    2014-08-01

    Full Text Available A systematic methodology including a computational pilot model and a pattern recognition method is presented to identify the boundary of the flight performance margin for quantifying the human factors. The pilot model is proposed to correlate a set of quantitative human factors which represent the attributes and characteristics of a group of pilots. Three information processing components which are influenced by human factors are modeled: information perception, decision making, and action execution. By treating the human factors as stochastic variables that follow appropriate probability density functions, the effects of human factors on flight performance can be investigated through Monte Carlo (MC) simulation. A kernel density estimation algorithm is selected to find and rank the influential human factors. Subsequently, human factors are quantified through identifying the boundary of the flight performance margin with the k-nearest neighbor (k-NN) classifier. Simulation-based analysis shows that flight performance can be dramatically improved with the quantitative human factors.

  5. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    Science.gov (United States)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because it may work very well on one set of images with, say, illumination changes but may not work properly on another set of image variations like expression variations. This study is motivated by the fact that no single classifier can claim to show generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also addressing the question of the suitability of any classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, which perform better on one task or the other. These classifiers are then combined into an ensemble classifier by two different strategies of weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging, and facial expressions.

  6. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    International Nuclear Information System (INIS)

    Jiang Li; Shi Tielin; Xuan Jianping

    2012-01-01

    Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a big challenge to extract optimal features for improving classification while simultaneously decreasing feature dimension. Kernel Marginal Fisher Analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. To directly extract nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, combining a traditional manifold learning algorithm with the Fisher criterion. Optimal low-dimensional features are thereby obtained for better classification and finally fed into the simple K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms the other conventional approaches.

  7. Identifying and Classifying Mobile Business Models Based on Meta-Synthesis Approach

    Directory of Open Access Journals (Sweden)

    Porrandokht Niroomand

    2012-03-01

    Full Text Available The appearance of mobile technology has provided unique opportunities and fields for the development and creation of businesses and has been able to create new job opportunities. The current research tries to familiarize entrepreneurs who are running businesses, especially in the area of mobile services, with business models. These business models can prepare them for implementing new ideas and designs as they enter the business market. A search of the literature shows that there are no previous papers that identify, categorize, and analyze mobile business models; consequently, this paper involves innovation. The first part of this paper presents a review of the concepts and theories about the different mobile generations, mobile commerce, and business models. Afterwards, 92 models from 33 papers and books are compared, interpreted, translated, and combined based on two different criteria: an expert criterion and a kind-of-product criterion. In the classification of models according to the experts' criterion, the models are classified based on criteria such as business fields, business partners, the rate of dynamism, the kind of activity, the focus areas, the mobile generations, transparency, the type of operator activities, marketing, and advertisements. The models classified based on the kind of product have been analyzed and classified into four different areas of mobile commerce: content production, technology (software and hardware), network, and synthetic.

  8. Training Classifiers with Shadow Features for Sensor-Based Human Activity Recognition.

    Science.gov (United States)

    Fong, Simon; Song, Wei; Cho, Kyungeun; Wong, Raymond; Wong, Kelvin K L

    2017-02-27

    In this paper, a novel training/testing process for building/using a classification model based on human activity recognition (HAR) is proposed. Traditionally, HAR has been accomplished by a classifier that learns the activities of a person by training with skeletal data obtained from a motion sensor, such as Microsoft Kinect. These skeletal data are the spatial coordinates (x, y, z) of different parts of the human body. The numeric information forms time series, temporal records of movement sequences that can be used for training a classifier. In addition to the spatial features that describe current positions in the skeletal data, new features called 'shadow features' are used to improve the supervised learning efficacy of the classifier. Shadow features are inferred from the dynamics of body movements, thereby modelling the underlying momentum of the performed activities. They provide extra dimensions of information for characterising activities in the classification process, and thereby significantly improve the classification accuracy. Two cases of HAR are tested using a classification model trained with shadow features: one using a wearable sensor and the other a Kinect-based remote sensor. Our experiments demonstrate the advantages of the new method, which will have an impact on human activity detection research.
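
    The exact shadow-feature formula is not given in the record; a plausible minimal realization of "features inferred from the dynamics of body movements" is the frame-to-frame velocity of each skeletal coordinate, concatenated to the positional features:

```python
import numpy as np

def add_shadow_features(skeleton_seq):
    """skeleton_seq: (frames, n_joints * 3) array of (x, y, z) coordinates.
    Returns positions concatenated with their frame-to-frame differences,
    a simple stand-in for 'shadow' (momentum-like) features."""
    velocity = np.diff(skeleton_seq, axis=0, prepend=skeleton_seq[:1])
    return np.hstack([skeleton_seq, velocity])

# Random-walk stand-in for 50 frames of a 20-joint skeleton (60 coordinates).
seq = np.cumsum(np.random.default_rng(4).normal(size=(50, 60)), axis=0)
features = add_shadow_features(seq)     # (50, 120): position + velocity
print(features.shape)
```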

  9. Training Classifiers with Shadow Features for Sensor-Based Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2017-02-01

    Full Text Available In this paper, a novel training/testing process for building/using a classification model based on human activity recognition (HAR) is proposed. Traditionally, HAR has been accomplished by a classifier that learns the activities of a person by training with skeletal data obtained from a motion sensor, such as Microsoft Kinect. These skeletal data are the spatial coordinates (x, y, z) of different parts of the human body. The numeric information forms time series, temporal records of movement sequences that can be used for training a classifier. In addition to the spatial features that describe current positions in the skeletal data, new features called ‘shadow features’ are used to improve the supervised learning efficacy of the classifier. Shadow features are inferred from the dynamics of body movements, thereby modelling the underlying momentum of the performed activities. They provide extra dimensions of information for characterising activities in the classification process, and thereby significantly improve the classification accuracy. Two cases of HAR are tested using a classification model trained with shadow features: one using a wearable sensor and the other a Kinect-based remote sensor. Our experiments demonstrate the advantages of the new method, which will have an impact on human activity detection research.

  10. Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors

    International Nuclear Information System (INIS)

    Herschtal, A; Te Marvelde, L; Mengersen, K; Foroudi, F; Ball, D; Devereux, T; Pham, D; Greer, P B; Pichler, P; Eade, T; Kneebone, A; Bell, L; Caine, H; Hindson, B; Kron, T; Hosseinifard, Z

    2015-01-01

    Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution, in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real-world patient cohorts from four institutions and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG-based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of the prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts: −19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements.
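
    To make the contrast concrete, the simulation sketch below sizes a margin so that 90% of patients, whose per-patient random-error SDs are drawn from an inverse gamma distribution, reach a chosen one-dimensional displacement coverage, and compares it with a margin computed from a single population-average SD. The IG parameters and the 1.96-sigma coverage rule are placeholders, not the paper's fitted values or full margin recipe.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, beta = 4.0, 6.0                       # placeholder IG shape/scale (mm)
# If G ~ Gamma(alpha, scale=1/beta), then 1/G ~ InverseGamma(alpha, scale=beta).
sigma = 1.0 / rng.gamma(alpha, 1.0 / beta, size=100_000)  # per-patient SDs

z = 1.96                                     # per-patient 95% coverage (1-D normal)
margin_constant = z * sigma.mean()           # recipe assuming one sigma for all
margin_ig = z * np.quantile(sigma, 0.90)     # sized so 90% of patients are covered

covered = np.mean(z * sigma <= margin_constant)
print(f"constant-sigma margin: {margin_constant:.2f} mm, "
      f"covers {100 * covered:.0f}% of patients")
print(f"IG-based margin:       {margin_ig:.2f} mm, covers 90% by construction")
```

    Because the IG distribution is right-skewed, a margin built from the mean SD leaves the high-variability tail of patients under-covered, which is the effect the abstract quantifies.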

  11. Hybrid classifiers methods of data, knowledge, and classifier combination

    CERN Document Server

    Wozniak, Michal

    2014-01-01

    This book delivers definite and compact knowledge on how hybridization can help improve the quality of computer classification systems. In order to make readers clearly realize the knowledge of hybridization, this book primarily focuses on introducing the different levels of hybridization and illuminating the problems that arise when dealing with such projects. In the first instance the data and knowledge incorporated in hybridization were the action points, and then a still-growing area of classifier systems known as combined classifiers was considered. This book comprises the aforementioned state-of-the-art topics and the latest research results of the author and his team from the Department of Systems and Computer Networks, Wroclaw University of Technology, including classifiers based on feature space splitting, one-class classification, imbalanced data, and data stream classification.

  12. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations.

    Science.gov (United States)

    Zhang, Yi; Ren, Jinchang; Jiang, Jianmin

    2015-01-01

    Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization-based nonparametric method in this context. Recently, it has been found that SVM in some cases is equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitate soft decision making. In total, four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized in terms of Gaussian/non-Gaussian distributions and balanced/unbalanced samples, which are then further used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions.

  13. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations

    Directory of Open Access Journals (Sweden)

    Yi Zhang

    2015-01-01

    Full Text Available Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization-based nonparametric method in this context. Recently, it has been found that SVM in some cases is equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitate soft decision making. In total, four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized in terms of Gaussian/non-Gaussian distributions and balanced/unbalanced samples, which are then further used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions.

  14. A Topic Model Approach to Representing and Classifying Football Plays

    KAUST Repository

    Varadarajan, Jagannadan

    2013-09-09

    We address the problem of modeling and classifying American Football offense teams' plays in video, a challenging example of group activity analysis. Automatic play classification will allow coaches to infer patterns and tendencies of opponents more efficiently, resulting in better strategy planning in a game. We define a football play as a unique combination of player trajectories. To this end, we develop a framework that uses player trajectories as inputs to MedLDA, a supervised topic model. The joint maximization of both likelihood and inter-class margins of MedLDA in learning the topics allows us to learn semantically meaningful play type templates, as well as classify different play types with 70% average accuracy. Furthermore, this method is extended to analyze individual player roles in classifying each play type. We validate our method on a large dataset comprising 271 play clips from real-world football games, which will be made publicly available for future comparisons.

  15. Matthew and marginality

    Directory of Open Access Journals (Sweden)

    Denis C. Duling

    1995-12-01

    Full Text Available This article explores marginality theory as it was first proposed in the social sciences, where it related to persons caught between two competing cultures (Park; Stonequist), and then as it was developed in sociology in relation to the poor (Germani) and in anthropology in relation to involuntary and voluntary marginality (Victor Turner). It then examines a 'normative scheme' in antiquity that creates involuntary marginality at the macrosocial level, namely Lenski's social stratification model of an agrarian society, and indicates how Matthean language might fit a sample inventory of socioreligious roles. Next, it examines some 'normative schemes' in antiquity for voluntary marginality at the microsocial level, namely groups, and examines how the Matthean gospel would fit based on indications of factions and leaders. The article shows that the author of the Gospel of Matthew has an ideology of 'voluntary marginality', but his gospel includes some hope for 'involuntary marginals' in the real world, though it is somewhat tempered. It also suggests that the writer of the Gospel is a 'marginal man', especially in the sense defined by the early theorists (Park; Stonequist).

  16. Nonlinear Knowledge in Kernel-Based Multiple Criteria Programming Classifier

    Science.gov (United States)

    Zhang, Dongling; Tian, Yingjie; Shi, Yong

    The Kernel-based Multiple Criteria Linear Programming (KMCLP) model is used as a classification method that can learn from training examples, whereas in the traditional machine learning area data sets can also be classified by prior knowledge alone. Some works combine the above two classification principles to overcome the shortcomings of each approach. In this paper, we propose a model to incorporate nonlinear knowledge into KMCLP in order to solve the problem when the input consists not only of training examples, but also of nonlinear prior knowledge. In dealing with the real-world case of breast cancer diagnosis, the model shows better performance than the model based solely on training data.

  17. Switchgrass-Based Bioethanol Productivity and Potential Environmental Impact from Marginal Lands in China

    Directory of Open Access Journals (Sweden)

    Xun Zhang

    2017-02-01

    Full Text Available Switchgrass displays excellent potential to serve as a non-food bioenergy feedstock for bioethanol production in China due to its high potential yield on marginal lands. However, few studies have been conducted on the spatial distribution of switchgrass-based bioethanol production potential in China. This study created a land surface process model (the GIS (Geographic Information System)-based Environmental Policy Integrated Climate (GEPIC) model) coupled with a life cycle analysis (LCA) to explore the spatial distribution of potential bioethanol production and present a comprehensive analysis of energy efficiency and environmental impacts throughout its whole life cycle. It provides a new approach to studying bioethanol productivity and potential environmental impacts from marginal lands based on high-spatial-resolution GIS data, and this applies not only to China, but also to other regions and other types of energy plant. The results indicate that approximately 59 million ha of marginal land in China are suitable for planting switchgrass, and 22 million tons of ethanol can be produced from this land. Additionally, a potential net energy gain (NEG) of 1.75 × 10⁶ million MJ will be achieved if all of the marginal land in China can be used, with Yunnan Province offering the most significant contribution, accounting for 35% of the total. Finally, this study found that the total environmental effect index of switchgrass-based bioethanol is the equivalent of a population of approximately 20,300, and that a reduction in global warming potential (GWP) is the most significant environmental impact.

  18. Joint Feature Extraction and Classifier Design for ECG-Based Biometric Recognition.

    Science.gov (United States)

    Gutta, Sandeep; Cheng, Qi

    2016-03-01

    Traditional biometric recognition systems often utilize physiological traits such as fingerprint, face, iris, etc. Recent years have seen a growing interest in electrocardiogram (ECG)-based biometric recognition techniques, especially in the field of clinical medicine. In existing ECG-based biometric recognition methods, feature extraction and classifier design are usually performed separately. In this paper, a multitask learning approach is proposed, in which feature extraction and classifier design are carried out simultaneously. Weights are assigned to the features within the kernel of each task. We decompose the matrix consisting of all the feature weights into sparse and low-rank components. The sparse component determines the features that are relevant to identify each individual, and the low-rank component determines the common feature subspace that is relevant to identify all the subjects. A fast optimization algorithm is developed, which requires only the first-order information. The performance of the proposed approach is demonstrated through experiments using the MIT-BIH Normal Sinus Rhythm database.

  19. Evaluation of motion management strategies based on required margins

    International Nuclear Information System (INIS)

    Sawkey, D; Svatos, M; Zankowski, C

    2012-01-01

    Strategies for delivering radiation to a moving lesion each require a margin to compensate for uncertainties in treatment. These motion margins have been determined here by separating the total uncertainty into components. Probability density functions for the individual sources of uncertainty were calculated for ten motion traces obtained from the literature. Motion margins required to compensate for the center of mass motion of the clinical treatment volume were found by convolving the individual sources of uncertainty. For measurements of position at a frequency of 33 Hz, system latency was the dominant source of positional uncertainty. Averaged over the ten motion traces, the motion margin for tracking with a latency of 200 ms was 4.6 mm. Gating with a duty cycle of 33% required a mean motion margin of 3.2–3.4 mm, and tracking with a latency of 100 ms required a motion margin of 3.1 mm. Feasible reductions in the effects of the sources of uncertainty, for example by using a simple prediction algorithm to anticipate the lesion position at the end of the latency period, resulted in a mean motion margin of 1.7 mm for tracking with a latency of 100 ms, 2.4 mm for tracking with a latency of 200 ms, and 2.1–2.2 mm for the gating strategies with duty cycles of 33%. A crossover tracking latency of 150 ms was found, below which tracking strategies could take advantage of narrower motion margins than gating strategies. The methods described here provide a means to guide selection of a motion management strategy for a given patient.
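
    The core computation described, convolving the probability density functions of the individual uncertainty sources and reading off the interval that contains the total positional error with the desired probability, can be sketched numerically as follows; the Gaussian latency error and uniform residual-motion term are placeholders for the measured traces.

```python
import numpy as np

dx = 0.01                                   # mm grid spacing
x = np.arange(-15, 15, dx)

def gauss(x, s):
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Placeholder component PDFs: latency-induced error and residual motion.
pdf_latency = gauss(x, 1.5)
pdf_residual = np.where(np.abs(x) <= 2.0, 1 / 4.0, 0.0)   # uniform on [-2, 2]

# Distribution of the total error is the convolution of the components.
total = np.convolve(pdf_latency, pdf_residual, mode="same") * dx
total /= total.sum() * dx                   # renormalize after truncation

# Motion margin: symmetric interval containing 95% of the total error.
cdf = np.cumsum(total) * dx
lo, hi = x[np.searchsorted(cdf, 0.025)], x[np.searchsorted(cdf, 0.975)]
print(f"95% motion margin: +/-{max(abs(lo), hi):.2f} mm")
```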

  20. Conference Report: The New Discovery of Margins: Theory-Based Excursions in Marginal Social Fields

    OpenAIRE

    Kirchner, Babette; Lorenzen, Jule-Marie; Striffler, Christine

    2014-01-01

    At this year's spring conference of the Sociology of Knowledge Section of the German Sociological Association, a diverse range of theoretical concepts and multiple empirical insights into different marginal social fields were presented. As in everyday life, drawing a line between center and margin can be seen as an important challenge that must equally be faced in sociology. The socially constructed borderline appears to be highly variable. Therefore it has to be delineated or fixed somehow. ...

  1. Sequence Based Prediction of Antioxidant Proteins Using a Classifier Selection Strategy.

    Directory of Open Access Journals (Sweden)

    Lina Zhang

    Full Text Available Antioxidant proteins perform significant functions in maintaining oxidation/antioxidation balance and have potential therapeutic uses for some diseases. Accurate identification of antioxidant proteins could contribute to revealing the physiological processes of oxidation/antioxidation balance and developing novel antioxidation-based drugs. In this study, an ensemble method is presented to predict antioxidant proteins with hybrid features, incorporating SSI (Secondary Structure Information), PSSM (Position Specific Scoring Matrix), RSA (Relative Solvent Accessibility), and CTD (Composition, Transition, Distribution). The prediction results of the ensemble predictor are determined by an average of the prediction results of multiple base classifiers. Based on a classifier selection strategy, we obtain an optimal ensemble classifier composed of RF (Random Forest), SMO (Sequential Minimal Optimization), NNA (Nearest Neighbor Algorithm), and J48, with an accuracy of 0.925. A Relief method combined with IFS (Incremental Feature Selection) is adopted to obtain optimal features from the hybrid features. With the optimal features, the ensemble method achieves improved performance with a sensitivity of 0.95, a specificity of 0.93, an accuracy of 0.94, and an MCC (Matthews Correlation Coefficient) of 0.880, far better than the existing method. To evaluate the prediction performance objectively, the proposed method is compared with existing methods on the same independent testing dataset. Encouragingly, our method performs better than previous studies. In addition, our method achieves more balanced performance with a sensitivity of 0.878 and a specificity of 0.860. These results suggest that the proposed ensemble method can be a potential candidate for antioxidant protein prediction. For public access, we have developed a user-friendly web server for antioxidant protein identification that is freely accessible at http://antioxidant.weka.cc.

  2. Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers

    OpenAIRE

    Rosenberg, Ishai; Shabtai, Asaf; Rokach, Lior; Elovici, Yuval

    2017-01-01

    In this paper, we present a black-box attack against API call based machine learning malware classifiers, focusing on generating adversarial sequences combining API calls and static features (e.g., printable strings) that will be misclassified by the classifier without affecting the malware functionality. We show that this attack is effective against many classifiers due to the transferability principle between RNN variants, feed forward DNNs, and traditional machine learning classifiers such...

  3. A method of distributed avionics data processing based on SVM classifier

    Science.gov (United States)

    Guo, Hangyu; Wang, Jinyan; Kang, Minyang; Xu, Guojing

    2018-03-01

    In a systems-combat environment, to solve the problem of managing and analyzing the massive heterogeneous data of multi-platform avionics systems, this paper proposes a management solution called the avionics "resource cloud", based on big data technology, and designs an aided decision classifier based on the SVM algorithm. We designed an experiment with an STK simulation; the results show that this method has high accuracy and broad application prospects.

  4. EVALUATING A COMPUTER BASED SKILLS ACQUISITION TRAINER TO CLASSIFY BADMINTON PLAYERS

    Directory of Open Access Journals (Sweden)

    Minh Vu Huynh

    2011-09-01

    Full Text Available The aim of the present study was to compare the statistical ability of both neural networks and discriminant function analysis on the newly developed SATB program. Using these statistical tools, we identified the accuracy of the SATB in classifying badminton players into different skill-level groups. Forty-one participants, classified as advanced, intermediate, or beginner skill level, participated in this study. Results indicated that neural networks are more effective in predicting group membership and displayed higher predictive validity when compared to discriminant analysis. Using these outcomes, in conjunction with the physiological and biomechanical variables of the participants, we assessed the authenticity and accuracy of the SATB and commented on the overall effectiveness of the visual-based training approach to training badminton athletes.

  5. Classification of Multiple Chinese Liquors by Means of a QCM-based E-Nose and MDS-SVM Classifier.

    Science.gov (United States)

    Li, Qiang; Gu, Yu; Jia, Jing

    2017-01-30

    Chinese liquors are internationally well-known fermentative alcoholic beverages. They have unique flavors attributable to the use of various bacteria and fungi, raw materials, and production processes. Developing a novel, rapid, and reliable method to identify multiple Chinese liquors is of positive significance. This paper presents a pattern recognition system for classifying ten brands of Chinese liquors based on multidimensional scaling (MDS) and support vector machine (SVM) algorithms in a quartz crystal microbalance (QCM)-based electronic nose (e-nose) we designed. We evaluated the comprehensive performance of the MDS-SVM classifier that predicted all ten brands of Chinese liquors individually. The prediction accuracy (98.3%) showed superior performance of the MDS-SVM classifier over the back-propagation artificial neural network (BP-ANN) classifier (93.3%) and moving average-linear discriminant analysis (MA-LDA) classifier (87.6%). The MDS-SVM classifier has reasonable reliability, good fitting and prediction (generalization) performance in classification of the Chinese liquors. Taking both application of the e-nose and validation of the MDS-SVM classifier into account, we have thus created a useful method for the classification of multiple Chinese liquors.
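
    A generic scikit-learn sketch of the MDS-SVM pipeline follows; the synthetic sensor responses stand in for the QCM e-nose data, and the embedding dimension and SVM parameters are illustrative. Note that scikit-learn's MDS has no out-of-sample transform, so this sketch embeds the full set once before splitting.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Stand-in for QCM e-nose responses: 10 liquor brands x 15 samples x 8 sensors.
X = np.vstack([rng.normal(loc=i, scale=0.8, size=(15, 8)) for i in range(10)])
y = np.repeat(np.arange(10), 15)

# MDS embeds the sensor responses into a low-dimensional space.
Z = MDS(n_components=3, random_state=0).fit_transform(X)

Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.3, random_state=0)
svm = SVC(kernel="rbf", C=10.0).fit(Z_tr, y_tr)   # classify in MDS space
print("test accuracy:", svm.score(Z_te, y_te))
```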

  6. Classification of Multiple Chinese Liquors by Means of a QCM-based E-Nose and MDS-SVM Classifier

    Directory of Open Access Journals (Sweden)

    Qiang Li

    2017-01-01

    Full Text Available Chinese liquors are internationally well-known fermentative alcoholic beverages. They have unique flavors attributable to the use of various bacteria and fungi, raw materials, and production processes. Developing a novel, rapid, and reliable method to identify multiple Chinese liquors is of positive significance. This paper presents a pattern recognition system for classifying ten brands of Chinese liquors based on multidimensional scaling (MDS) and support vector machine (SVM) algorithms in a quartz crystal microbalance (QCM)-based electronic nose (e-nose) we designed. We evaluated the comprehensive performance of the MDS-SVM classifier that predicted all ten brands of Chinese liquors individually. The prediction accuracy (98.3%) showed superior performance of the MDS-SVM classifier over the back-propagation artificial neural network (BP-ANN) classifier (93.3%) and moving average-linear discriminant analysis (MA-LDA) classifier (87.6%). The MDS-SVM classifier has reasonable reliability, good fitting and prediction (generalization) performance in classification of the Chinese liquors. Taking both application of the e-nose and validation of the MDS-SVM classifier into account, we have thus created a useful method for the classification of multiple Chinese liquors.

  7. Heart Sound Biometric System Based on Marginal Spectrum Analysis

    Science.gov (United States)

    Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin

    2013-01-01

    This work presents a heart sound biometric system based on marginal spectrum analysis, a new feature extraction technique for identification purposes. The heart sound identification system comprises signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of the optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients result in a significant increase in the recognition rate, to 94.40%, compared with that of the traditional Fourier spectrum (84.32%), based on a database of 280 heart sounds from 40 participants. PMID:23429515
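
    Marginal spectrum analysis comes from the Hilbert-Huang framework: integrate the time-frequency Hilbert spectrum over time to obtain amplitude as a function of frequency. The sketch below skips the empirical mode decomposition step and simply bins the instantaneous amplitude of one analytic signal by instantaneous frequency, which conveys the idea on a single-component test signal (a chirp standing in for a heart sound).

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * (20 + 10 * t) * t)        # toy chirp, 20 -> 60 Hz

analytic = hilbert(x)                            # analytic signal
amp = np.abs(analytic)                           # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs    # instantaneous frequency

# Marginal spectrum: accumulate amplitude over time per frequency bin.
bins = np.arange(0, 100, 2.0)
marginal, _ = np.histogram(inst_freq, bins=bins, weights=amp[:-1])
peak = bins[np.argmax(marginal)]
print(f"dominant band starts at ~{peak:.0f} Hz")
```

    In the full method, each intrinsic mode function from the decomposition would contribute its own amplitude-frequency track, and the marginal spectrum coefficients would then feed the training and identification stages.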

  8. Localization and Recognition of Dynamic Hand Gestures Based on Hierarchy of Manifold Classifiers

    Science.gov (United States)

    Favorskaya, M.; Nosov, A.; Popov, A.

    2015-05-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at any time instant and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and motion history represented by motion vectors normalized along predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples which do not fit the current trajectory. The posture classifiers involve the normalized skeleton representation of palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique was applied for posture recognition of sub-gestures. For experiments, the dataset "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results", including 393 dynamic hand gestures, was chosen. The proposed method yielded 84-91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  9. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    Directory of Open Access Journals (Sweden)

    M. Favorskaya

    2015-05-01

    Full Text Available Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at any time instant and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and motion history represented by motion vectors normalized along predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples which do not fit the current trajectory. The posture classifiers involve the normalized skeleton representation of palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique was applied for posture recognition of sub-gestures. For experiments, the dataset “Multi-modal Gesture Recognition Challenge 2013: Dataset and Results”, including 393 dynamic hand gestures, was chosen. The proposed method yielded 84–91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  10. Distance and Density Similarity Based Enhanced k-NN Classifier for Improving Fault Diagnosis Performance of Bearings

    Directory of Open Access Journals (Sweden)

    Sharif Uddin

    2016-01-01

    Full Text Available An enhanced k-nearest neighbor (k-NN) classification algorithm is presented, which uses a density-based similarity measure in addition to a distance-based similarity measure to improve diagnostic performance in bearing fault diagnosis. Due to its use of a distance-based similarity measure alone, the classification accuracy of traditional k-NN deteriorates in the case of overlapping samples and outliers and is highly susceptible to the neighborhood size, k. This study addresses these limitations by proposing the use of both distance- and density-based measures of similarity between training and test samples. The proposed k-NN classifier is used to enhance the diagnostic performance of a bearing fault diagnosis scheme, which classifies different fault conditions based upon hybrid feature vectors extracted from acoustic emission (AE) signals. Experimental results demonstrate that the proposed scheme, which uses the enhanced k-NN classifier, yields better diagnostic performance and is more robust to variations in the neighborhood size, k.
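
    The record does not define the density measure exactly; one plausible reading, sketched below under that assumption, scores each training sample by the inverse of its mean distance to its own k nearest training neighbours and blends this with a distance similarity (the blending weight alpha is also an assumption).

      import numpy as np

      def enhanced_knn_predict(X_train, y_train, X_test, k=5, alpha=0.5):
          # Density of each training sample: inverse mean distance to its own
          # k nearest training neighbours (one plausible density measure).
          d_tt = np.linalg.norm(X_train[:, None, :] - X_train[None, :, :], axis=2)
          np.fill_diagonal(d_tt, np.inf)
          density = 1.0 / (np.sort(d_tt, axis=1)[:, :k].mean(axis=1) + 1e-12)
          density /= density.max()

          preds = []
          for x in X_test:
              d = np.linalg.norm(X_train - x, axis=1)
              sim = alpha * (1.0 / (1.0 + d)) + (1.0 - alpha) * density
              nn = np.argsort(-sim)[:k]        # most similar neighbours
              preds.append(np.bincount(y_train[nn]).argmax())
          return np.array(preds)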

  11. Pathologic margin involvement and the risk of recurrence in patients treated with breast-conserving therapy

    International Nuclear Information System (INIS)

    Gage, Irene; Nixon, Asa J.; Schnitt, Stuart J.; Recht, Abram; Gelman, Rebecca; Silver, Barbara; Connolly, James L.; Harris, Jay R.

    1995-01-01

    PURPOSE: To assess the relationship between microscopic margin status and recurrence after breast-conserving therapy for tumors with or without an extensive intraductal component (EIC). MATERIALS AND METHODS: During the years 1968 to 1986, 1865 women with unilateral clinical stage I or II breast cancer were treated with radiation therapy for breast conservation. Of these, 340 received ≥60 Gy to the tumor bed and had margins that were evaluable on review of their pathologic slides; these constitute the study population. The median follow-up was 109 months. All available slides were reviewed by one of the study pathologists (SS, JC). Final radial margins of excision were classified as negative >1 mm (no invasive or ductal carcinoma in situ within 1 mm of the inked margin), negative ≤1 mm (any carcinoma within 1 mm of the inked margin but not at ink), or positive (any carcinoma at the inked margin). A focally positive margin was defined as any invasive or in situ carcinoma at the margin in ≤3 low-power fields. The extent of positivity was not evaluable in 2 patients, and the distance of the tumor from the margin was not evaluable in 48 patients with a negative margin. Thirty-nine percent of EIC-negative and 46% of EIC-positive patients underwent a re-excision and, for these, the final margin analyzed was from the re-excised specimen. The median dose to the tumor bed was 63 Gy for patients with positive margins and 62 Gy for patients with negative margins. Recurrent disease was classified as ipsilateral breast recurrence (IBR) or distant metastasis/regional nodal failure (DM/RNF). RESULTS: Five-year crude rates for the first site of recurrence were calculated for 340 patients evaluable at 5 years. Results were tabulated separately for all patients, EIC-negative and EIC-positive. All p-values tested for differences in the distribution of sites of first failure. CONCLUSIONS: The risk of ipsilateral breast recurrence is equally low for patients with close (≤1 mm) or negative (>1 mm) margins.

  12. Marginal microleakage of class V resin-based composite restorations bonded with six one-step self-etch systems

    Directory of Open Access Journals (Sweden)

    Alfonso Sánchez-Ayala

    2013-06-01

    Full Text Available This study compared the microleakage of class V restorations bonded with various one-step self-etching adhesives. Seventy class V resin-based composite restorations were prepared on the buccal and lingual surfaces of 35 premolars, using: Clearfil S3 Bond, G-Bond, iBond, One Coat 7.0, OptiBond All-In-One, or Xeno IV. The Adper Single Bond etch-and-rinse two-step adhesive was employed as a control. Specimens were thermocycled for 500 cycles in separate water baths at 5°C and 55°C and loaded under 40 to 70 N for 50,000 cycles. Marginal microleakage was measured based on the penetration of a tracer agent. Although the control showed no microleakage at the enamel margins, there were no differences between groups (p = 0.06). None of the adhesives avoided microleakage at the dentin margins, and they displayed similar performances (p = 0.76). When both margins were compared, iBond presented higher microleakage (p < 0.05) at the enamel margins (median, 1.00; Q3–Q1, 1.25–0.00) compared to the dentin margins (median, 0.00; Q3–Q1, 0.25–0.00). The study adhesives showed similar abilities to seal the margins of class V restorations, except for iBond, which presented lower performance at the enamel margin.

  13. Class-specific Error Bounds for Ensemble Classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Prenger, R; Lemmond, T; Varshney, K; Chen, B; Hanley, W

    2009-10-06

    The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.

  14. Multiple classifier systems in texton-based approach for the classification of CT images of Lung

    DEFF Research Database (Denmark)

    Gangeh, Mehrdad J.; Sørensen, Lauge; Shaker, Saher B.

    2010-01-01

    In this paper, we propose using texton signatures based on raw pixel representation along with a parallel multiple classifier system for the classification of emphysema in computed tomography images of the lung. The multiple classifier system is composed of support vector machines on the texton signatures ... i.e., texton size and k value in k-means. Our results show that while aggregation of single decisions by SVMs over various k values using multiple classifier systems helps to improve the results compared to single SVMs, combining over different texton sizes is not beneficial. The performance of the proposed ...
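
    A rough sketch of a texton-signature pipeline on raw pixel patches follows; the patch size, number of textons and SVM kernel are placeholders, not the paper's settings, and random arrays stand in for CT regions of interest.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import SVC

      def extract_patches(img, size=5):
          h, w = img.shape
          return np.array([img[i:i + size, j:j + size].ravel()
                           for i in range(0, h - size, size)
                           for j in range(0, w - size, size)])

      def texton_signature(img, kmeans):
          labels = kmeans.predict(extract_patches(img))
          hist = np.bincount(labels, minlength=kmeans.n_clusters)
          return hist / hist.sum()           # normalised texton histogram

      rng = np.random.default_rng(0)
      train_imgs = [rng.random((64, 64)) for _ in range(20)]  # stand-in ROIs
      y = rng.integers(0, 2, 20)

      kmeans = KMeans(n_clusters=30, n_init=10).fit(
          np.vstack([extract_patches(im) for im in train_imgs]))
      X = np.array([texton_signature(im, kmeans) for im in train_imgs])
      clf = SVC(kernel="rbf").fit(X, y)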

  15. Intelligent Recognition of Lung Nodule Combining Rule-based and C-SVM Classifiers

    Directory of Open Access Journals (Sweden)

    Bin Li

    2012-02-01

    Full Text Available Computer-aided detection (CAD) systems for lung nodules play an important role in the diagnosis of lung cancer. In this paper, an improved intelligent recognition method for lung nodules in HRCT, combining rule-based and cost-sensitive support vector machine (C-SVM) classifiers, is proposed for detecting both solid nodules and ground-glass opacity (GGO) nodules (part solid and nonsolid). This method consists of several steps. Firstly, segmentation of regions of interest (ROIs), including pulmonary parenchyma and lung nodule candidates, is a difficult task. On one side, the presence of noise lowers the visibility of low-contrast objects. On the other side, different types of nodules, including small nodules, nodules connecting to vasculature or other structures, and part-solid or nonsolid nodules, are complex, noisy, weak-edged or difficult to delineate. In order to overcome the difficulties of obvious boundary leak and slow evolution speed in segmentation of weak edges, an overall segmentation method is proposed: the lung parenchyma is extracted based on threshold and morphologic segmentation methods; image denoising and enhancement are realized by a nonlinear anisotropic diffusion filtering (NADF) method; candidate pulmonary nodules are segmented by an improved C-V level set method, in which the segmentation result of an EM-based fuzzy threshold method is used as the initial contour of the active contour model and a constrained energy term is added to the PDE of the level set function. Then, lung nodules are classified by using the intelligent classifiers combining rules and C-SVM. Rule-based classification is first used to remove easily dismissible non-nodule objects; then C-SVM classification is used to further classify nodule candidates and reduce the number of false positive (FP) objects. In order to increase the efficiency of the SVM, an improved training method is used to train the SVM, which uses the grid search method to search for the optimal ...
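
    The grid-search training step mentioned above can be sketched as follows; the parameter grid, class weights and toy feature data are illustrative assumptions, not the authors' settings.

      import numpy as np
      from sklearn.model_selection import GridSearchCV
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.random((200, 10))        # stand-in nodule-candidate features
      y = rng.integers(0, 2, 200)      # 1 = nodule, 0 = false positive

      param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}
      # Heavier penalty on missing true nodules than on keeping false positives.
      svm = SVC(kernel="rbf", class_weight={0: 1, 1: 5})
      search = GridSearchCV(svm, param_grid, cv=5).fit(X, y)
      print(search.best_params_)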

  16. Intelligent Recognition of Lung Nodule Combining Rule-based and C-SVM Classifiers

    Directory of Open Access Journals (Sweden)

    Bin Li

    2011-10-01

    Full Text Available Computer-aided detection (CAD) systems for lung nodules play an important role in the diagnosis of lung cancer. In this paper, an improved intelligent recognition method for lung nodules in HRCT, combining rule-based and cost-sensitive support vector machine (C-SVM) classifiers, is proposed for detecting both solid nodules and ground-glass opacity (GGO) nodules (part solid and nonsolid). This method consists of several steps. Firstly, segmentation of regions of interest (ROIs), including pulmonary parenchyma and lung nodule candidates, is a difficult task. On one side, the presence of noise lowers the visibility of low-contrast objects. On the other side, different types of nodules, including small nodules, nodules connecting to vasculature or other structures, and part-solid or nonsolid nodules, are complex, noisy, weak-edged or difficult to delineate. In order to overcome the difficulties of obvious boundary leak and slow evolution speed in segmentation of weak edges, an overall segmentation method is proposed: the lung parenchyma is extracted based on threshold and morphologic segmentation methods; image denoising and enhancement are realized by a nonlinear anisotropic diffusion filtering (NADF) method; candidate pulmonary nodules are segmented by an improved C-V level set method, in which the segmentation result of an EM-based fuzzy threshold method is used as the initial contour of the active contour model and a constrained energy term is added to the PDE of the level set function. Then, lung nodules are classified by using the intelligent classifiers combining rules and C-SVM. Rule-based classification is first used to remove easily dismissible non-nodule objects; then C-SVM classification is used to further classify nodule candidates and reduce the number of false positive (FP) objects. In order to increase the efficiency of the SVM, an improved training method is used to train the SVM, which uses the grid search method to search for the optimal parameters ...

  17. nRC: non-coding RNA Classifier based on structural features.

    Science.gov (United States)

    Fiannaca, Antonino; La Rosa, Massimo; La Paglia, Laura; Rizzo, Riccardo; Urso, Alfonso

    2017-01-01

    Non-coding RNAs (ncRNAs) are small non-coding sequences involved in gene expression regulation of many biological processes and diseases. The recent discovery of a large set of different ncRNAs with biologically relevant roles has opened the way to developing methods able to discriminate between the different ncRNA classes. Moreover, the lack of knowledge about the complete mechanisms of regulative processes, together with the development of high-throughput technologies, has made bioinformatics tools necessary to provide biologists and clinicians with a deeper comprehension of the functional roles of ncRNAs. In this work, we introduce a new ncRNA classification tool, nRC (non-coding RNA Classifier). Our approach is based on feature extraction from the ncRNA secondary structure together with a supervised classification algorithm implementing a deep learning architecture based on convolutional neural networks. We tested our approach on the classification of 13 different ncRNA classes. We obtained classification scores using the most common statistical measures; in particular, we reach an accuracy and sensitivity score of about 74%. The proposed method outperforms other similar classification methods based on secondary structure features and machine learning algorithms, including the RNAcon tool that, to date, is the reference classifier. The nRC tool is freely available as a docker image at https://hub.docker.com/r/tblab/nrc/. The source code of the nRC tool is also available at https://github.com/IcarPA-TBlab/nrc.
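
    A minimal 1D-CNN sketch for 13-way classification is given below; the input encoding (one-hot dot-bracket structure strings of fixed length) and the layer sizes are assumptions for illustration, not nRC's actual architecture or input format.

      import torch
      import torch.nn as nn

      class NcRnaCNN(nn.Module):
          def __init__(self, n_classes=13, n_symbols=3):
              super().__init__()
              self.conv = nn.Sequential(
                  nn.Conv1d(n_symbols, 32, kernel_size=7, padding=3), nn.ReLU(),
                  nn.MaxPool1d(2),
                  nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                  nn.AdaptiveMaxPool1d(1))    # pool to a fixed-size descriptor
              self.fc = nn.Linear(64, n_classes)

          def forward(self, x):               # x: (batch, n_symbols, seq_len)
              return self.fc(self.conv(x).squeeze(-1))

      model = NcRnaCNN()
      dummy = torch.randn(4, 3, 200)          # 4 one-hot encoded structures
      logits = model(dummy)                   # (4, 13) class scores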

  18. Detecting Dutch political tweets : A classifier based on voting system using supervised learning

    NARCIS (Netherlands)

    de Mello Araújo, Eric Fernandes; Ebbelaar, Dave

    The task of classifying political tweets has been shown to be very difficult, with controversial results in many works and with non-replicable methods. Most of the works with this goal use rule-based methods to identify political tweets. We propose here two methods, one being a rule-based approach ...

  19. Emergency Load Shedding Strategy Based on Sensitivity Analysis of Relay Operation Margin against Cascading Events

    DEFF Research Database (Denmark)

    Liu, Zhou; Chen, Zhe; Sun, Haishun

    2012-01-01

    In order to prevent long-term voltage instability and induced cascading events, a load shedding strategy based on the sensitivity of relay operation margin to load powers is discussed and proposed in this paper. The operation margin of the critical impedance backup relay is defined to identify the runtime emergent states of related system components. Based on sensitivity analysis between the relay operation margin and power system state variables, an optimal load shedding strategy is applied to adjust the emergent states in time before the unwanted relay operation. Load dynamics is also taken into account to compensate the load shedding amount calculation. The multi-agent technology is applied for the whole strategy implementation. A test system is built in a real-time digital simulator (RTDS) and has demonstrated the effectiveness of the proposed strategy.

  20. Automatic discrimination between safe and unsafe swallowing using a reputation-based classifier

    Directory of Open Access Journals (Sweden)

    Nikjoo Mohammad S

    2011-11-01

    Full Text Available Abstract Background Swallowing accelerometry has been suggested as a potential non-invasive tool for bedside dysphagia screening. Various vibratory signal features and complementary measurement modalities have been put forth in the literature for the potential discrimination between safe and unsafe swallowing. To date, automatic classification of swallowing accelerometry has exclusively involved a single axis of vibration, although a second axis is known to contain additional information about the nature of the swallow. Furthermore, the only published attempt at automatic classification in adult patients has been based on a small sample of swallowing vibrations. Methods In this paper, a large corpus of dual-axis accelerometric signals was collected from 30 older adults (aged 65.47 ± 13.4 years, 15 male) referred to videofluoroscopic examination on the suspicion of dysphagia. We invoked a reputation-based classifier combination to automatically categorize the dual-axis accelerometric signals into safe and unsafe swallows, as labeled via videofluoroscopic review. From these participants, a total of 224 swallowing samples were obtained, 164 of which were labeled as unsafe swallows (swallows where the bolus entered the airway) and 60 as safe swallows. Three separate support vector machine (SVM) classifiers and eight different features were selected for classification. Results With selected time, frequency and information-theoretic features, the reputation-based algorithm distinguished between safe and unsafe swallowing with promising accuracy (80.48 ± 5.0%), high sensitivity (97.1 ± 2%) and modest specificity (64 ± 8.8%). Interpretation of the most discriminatory features revealed that, in general, unsafe swallows had lower mean vibration amplitude and faster autocorrelation decay, suggestive of decreased hyoid excursion and compromised coordination, respectively. Further, owing to its performance-based weighting of component classifiers, the static ...
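
    A simplified reading of the reputation-style combiner is sketched below, assuming each component SVM is weighted by its held-out accuracy ("reputation") and the weighted posteriors are aggregated; the kernels, split and toy data are placeholders, not the paper's exact update rule.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(2)
      X = rng.random((224, 8))            # 8 features per swallow, as in the study
      y = rng.integers(0, 2, 224)         # toy labels: 1 = unsafe, 0 = safe

      X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
      svms = [SVC(kernel=k, probability=True).fit(X_tr, y_tr)
              for k in ("linear", "rbf", "poly")]
      reputations = np.array([clf.score(X_val, y_val) for clf in svms])
      reputations /= reputations.sum()    # normalised reputation weights

      def combined_predict(x):
          probs = np.array([clf.predict_proba(x.reshape(1, -1))[0]
                            for clf in svms])
          return int((reputations @ probs).argmax())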

  1. Simulation-based marginal likelihood for cluster strong lensing cosmology

    Science.gov (United States)

    Killedar, M.; Borgani, S.; Fabjan, D.; Dolag, K.; Granato, G.; Meneghetti, M.; Planelles, S.; Ragone-Figueroa, C.

    2018-01-01

    Comparisons between observed and predicted strong lensing properties of galaxy clusters have been routinely used to claim either tension or consistency with Λ cold dark matter cosmology. However, standard approaches to such cosmological tests are unable to quantify the preference for one cosmology over another. We advocate approximating the relevant Bayes factor using a marginal likelihood that is based on the following summary statistic: the posterior probability distribution function for the parameters of the scaling relation between Einstein radii and cluster mass, α and β. We demonstrate, for the first time, a method of estimating the marginal likelihood using the X-ray selected z > 0.5 Massive Cluster Survey clusters as a case in point and employing both N-body and hydrodynamic simulations of clusters. We investigate the uncertainty in this estimate and consequential ability to compare competing cosmologies, which arises from incomplete descriptions of baryonic processes, discrepancies in cluster selection criteria, redshift distribution and dynamical state. The relation between triaxial cluster masses at various overdensities provides a promising alternative to the strong lensing test.

  2. Exemplar-based optical neural net classifier for color pattern recognition

    Science.gov (United States)

    Yu, Francis T. S.; Uang, Chii-Maw; Yang, Xiangyang

    1992-10-01

    We present a color exemplar-based neural network that can be used as an optimum image classifier or an associative memory. A color decomposition and composition technique is used for constructing the polychromatic interconnection weight matrix (IWM). The Hamming net algorithm is modified to relax the dynamic range requirement of the spatial light modulator and to reduce the number of iteration cycles in the winner-take-all layer. Computer simulation results demonstrated the feasibility of this approach.

  3. Classification of EEG signals using a genetic-based machine learning classifier.

    Science.gov (United States)

    Skinner, B T; Nguyen, H T; Liu, D K

    2007-01-01

    This paper investigates the efficacy of the genetic-based learning classifier system XCS for the classification of noisy, artefact-inclusive human electroencephalogram (EEG) signals represented using large condition strings (108 bits). EEG signals from three participants were recorded while they performed four mental tasks designed to elicit hemispheric responses. Autoregressive (AR) models and Fast Fourier Transform (FFT) methods were used to form feature vectors with which mental tasks can be discriminated. XCS achieved a maximum classification accuracy of 99.3% and a best average of 88.9%. The relative classification performance of XCS was then compared against four non-evolutionary classifier systems originating from different learning techniques. The experimental results will be used as part of our larger research effort investigating the feasibility of using EEG signals as an interface to allow paralysed persons to control a powered wheelchair or other devices.

  4. Composite Classifiers for Automatic Target Recognition

    National Research Council Canada - National Science Library

    Wang, Lin-Cheng

    1998-01-01

    ...) using forward-looking infrared (FLIR) imagery. Two existing classifiers, one based on learning vector quantization and the other on modular neural networks, are used as the building blocks for our composite classifiers...

  5. Textual and shape-based feature extraction and neuro-fuzzy classifier for nuclear track recognition

    Science.gov (United States)

    Khayat, Omid; Afarideh, Hossein

    2013-04-01

    Track counting algorithms, as one of the fundamental principles of nuclear science, have been emphasized in recent years. Accurate measurement of nuclear tracks on solid-state nuclear track detectors is the aim of track counting systems. Commonly, track counting systems comprise a hardware system for the task of imaging and software for analysing the track images. In this paper, a track recognition algorithm based on 12 defined textual and shape-based features and a neuro-fuzzy classifier is proposed. Features are defined so as to discern the tracks from the background and small objects. Then, according to the defined features, tracks are detected using a trained neuro-fuzzy system. The features and the classifier are finally validated via 100 alpha track images and 40 training samples. It is shown that the principal textual and shape-based features concomitantly yield a high rate of track detection compared with single-feature based methods.

  6. Customized Computed Tomography-Based Boost Volumes in Breast-Conserving Therapy: Use of Three-Dimensional Histologic Information for Clinical Target Volume Margins

    International Nuclear Information System (INIS)

    Hanbeukers, Bianca; Borger, Jacques; Ende, Piet van den; Ent, Fred van der; Houben, Ruud; Jager, Jos; Keymeulen, Kristien; Murrer, Lars; Sastrowijoto, Suprapto; Vijver, Koen van de; Boersma, Liesbeth

    2009-01-01

    Purpose: To determine the difference in size between computed tomography (CT)-based irradiated boost volumes and simulator-based irradiated volumes in patients treated with breast-conserving therapy, and to analyze whether the use of anisotropic three-dimensional clinical target volume (CTV) margins using the histologically determined free resection margins allows for a significant reduction of the CT-based boost volumes. Patients and Methods: The CT data from 49 patients were used to delineate a planning target volume (PTV) with isotropic CTV margins and to delineate a PTVsim that mimicked the PTV as delineated in the era of conventional simulation. For 17 patients, a PTV with anisotropic CTV margins was defined by applying customized three-dimensional CTV margins, according to the free excision margins in six directions. Boost treatment plans consisted of conformal portals for the CT-based PTVs and rectangular fields for the PTVsim. Results: The irradiated volume (volume receiving ≥95% of the prescribed dose, V95) for the PTV with isotropic CTV margins was 1.6 times greater than that for the PTVsim: 228 cm3 vs. 147 cm3 (p < .001). For the PTV with anisotropic CTV margins, the V95 was similar to the V95 for the PTVsim (190 cm3 vs. 162 cm3; p = NS). The main determinant of the irradiated volume was the size of the excision cavity (p < .001), which was mainly related to the interval between surgery and the planning CT scan (p = .029). Conclusion: CT-based PTVs with isotropic margins for the CTV yield much greater irradiated volumes than fluoroscopically based PTVs. Applying individualized anisotropic CTV margins allowed for a significant reduction of the irradiated boost volume.

  7. Combining Biometric Fractal Pattern and Particle Swarm Optimization-Based Classifier for Fingerprint Recognition

    Directory of Open Access Journals (Sweden)

    Chia-Hung Lin

    2010-01-01

    Full Text Available This paper proposes combining the biometric fractal pattern and a particle swarm optimization (PSO)-based classifier for fingerprint recognition. Fingerprints have arch, loop, whorl, and accidental morphologies, and embed singular points, resulting in the establishment of fingerprint individuality. An automatic fingerprint identification system consists of two stages: digital image processing (DIP) and pattern recognition. DIP is used to convert to binary images, filter out noise, and locate the reference point. For binary images, Katz's algorithm is employed to estimate the fractal dimension (FD) from a two-dimensional (2D) image. Biometric features are extracted as fractal patterns using different FDs. A probabilistic neural network (PNN) classifier is used to compare the fractal patterns among a small-scale database. A PSO algorithm is used to tune the optimal parameters and heighten the accuracy. For 30 subjects in the laboratory, the proposed classifier demonstrates greater efficiency and higher accuracy in fingerprint recognition.
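
    Katz's algorithm, named above, has a standard closed form, FD = log10(n) / (log10(n) + log10(d/L)), where L is the total curve length, d the maximum distance from the first point, and n the number of steps; the sketch below applies it to a digitised 2D curve, with a spiral standing in for an extracted fingerprint ridge.

      import numpy as np

      def katz_fd(points):
          """points: (N, 2) array of successive curve coordinates."""
          steps = np.linalg.norm(np.diff(points, axis=0), axis=1)
          L = steps.sum()                                       # total length
          d = np.linalg.norm(points - points[0], axis=1).max()  # planar extent
          n = len(steps)                                        # number of steps
          return np.log10(n) / (np.log10(n) + np.log10(d / L))

      theta = np.linspace(0, 4 * np.pi, 400)
      spiral = np.c_[theta * np.cos(theta), theta * np.sin(theta)]
      print(katz_fd(spiral))   # roughly 1.4 for this curve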

  8. Research on classified real-time flood forecasting framework based on K-means cluster and rough set.

    Science.gov (United States)

    Xu, Wei; Peng, Yong

    2015-01-01

    This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by a K-means cluster according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classified results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of different categories within the conceptual hydrological model are calibrated using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares the framework with the traditional flood forecasting method. It finds that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be considered in a catchment with fewer historical floods.
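
    A toy sketch of the classified-forecasting loop is given below: historical floods are clustered, and at forecast time the parameter set calibrated for the matching category is looked up. The feature vectors and parameter vectors are invented placeholders, not the study's hydrological variables.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(3)
      flood_features = rng.random((60, 4))    # e.g., rainfall distribution/intensity
      kmeans = KMeans(n_clusters=3, n_init=10).fit(flood_features)

      # One calibrated parameter vector per category (stand-ins for the
      # genetic-algorithm calibration results).
      params_by_category = {c: rng.random(5) for c in range(3)}

      def params_for_event(event_features):
          category = int(kmeans.predict(event_features.reshape(1, -1))[0])
          return params_by_category[category]

      print(params_for_event(rng.random(4)))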

  9. Thai Finger-Spelling Recognition Using a Cascaded Classifier Based on Histogram of Orientation Gradient Features

    Directory of Open Access Journals (Sweden)

    Kittasil Silanon

    2017-01-01

    Full Text Available Hand posture recognition is an essential module in applications such as human-computer interaction (HCI), games, and sign language systems, in which performance and robustness are the primary requirements. In this paper, we propose automatic classification to recognize 21 hand postures that represent letters in Thai finger-spelling based on the Histogram of Orientation Gradient (HOG) feature (which focuses on the information within a certain region of the image rather than each single pixel) and the Adaptive Boost (i.e., AdaBoost) learning technique to select the best weak classifiers and to construct a strong classifier consisting of several weak classifiers cascaded in a detection architecture. We collected 21 static hand posture images from 10 subjects for testing and training in Thai letter finger-spelling. The parameters for the training process were adjusted in three experiments, over false positive rates (FPR), true positive rates (TPR), and number of training stages (N), to achieve the most suitable training model for each hand posture. All cascaded classifiers are loaded into the system simultaneously to classify different hand postures. A correlation coefficient is computed to distinguish hand postures that are similar. The system achieves approximately 78% accuracy on average over all classifier experiments.
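
    The HOG-plus-boosting combination can be sketched as below, where a single AdaBoost stage of decision stumps stands in for the paper's multi-stage cascade; the image sizes, HOG parameters and toy data are illustrative assumptions.

      import numpy as np
      from skimage.feature import hog
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(4)
      images = rng.random((40, 64, 64))      # toy hand-posture crops
      labels = rng.integers(0, 2, 40)        # posture vs. background

      X = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                        cells_per_block=(2, 2)) for im in images])
      # `estimator` requires scikit-learn >= 1.2 (older versions: base_estimator).
      clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                               n_estimators=100).fit(X, labels)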

  10. The Effect of Water or Wax-based Binders on the Chemical and Morphological Characteristics of the Margin Ceramic-Framework Interface.

    Science.gov (United States)

    Güler, Umut; de Queiroz, José Renato Cavalcanti; de Oliveira, Luiz Fernando Cappa; Canay, Senay; Ozcan, Mutlu

    2015-09-01

    This study evaluated the effect of binder choice in mixing ceramic powder on the chemical and morphological features of the margin ceramic-framework interface. Titanium and zirconia frameworks (15 x 5 x 0.5 mm3) were veneered with margin ceramics prepared with two different binders, namely a) water/conventional or b) wax-based. For each zirconia framework material, four different margin ceramics were used: a- Creation Zi (Creation Willi Geller International); b- GC Initial Zr (GC America); c- Triceram (Dentaurum); and d- IPS e.max (Ivoclar Vivadent). For the titanium framework, three different margin ceramics were used: a- Creation Ti (Creation Willi Geller International); b- Triceram (Dentaurum); and c- VITA Titaniumkeramik (Vita Zahnfabrik). The chemical composition of the framework-margin ceramic interface was analyzed using energy dispersive X-ray spectroscopy (EDS), and the porosity level was quantified within the margin ceramic using an image program (ImageJ) from four random areas (100 x 100 pixels) on each SEM image. EDS analysis showed the presence of carbon at the margin ceramic-framework interface in the groups where the wax-based binder technique was used, with the concentration being the highest for the IPS e.max ZirCAD group. While the IPS system (IPS ZirCAD and IPS e.max) presented higher porosity using the wax binder, in the other groups the wax-based binder reduced the porosity of the margin ceramic, except for the titanium-Triceram combination.

  11. Entropy based classifier for cross-domain opinion mining

    Directory of Open Access Journals (Sweden)

    Jyoti S. Deshmukh

    2018-01-01

    Full Text Available In recent years, the growth of social networks has increased people's interest in analyzing reviews and opinions about products before they buy them. Consequently, this has given rise to domain adaptation as a prominent area of research in sentiment analysis. A classifier trained on one domain often gives poor results on data from another domain, since the expression of sentiment differs in every domain, and labeling each domain separately is costly and time consuming. Therefore, this study proposes an approach that extracts and classifies opinion words from one domain, called the source domain, and predicts opinion words of another domain, called the target domain, using a semi-supervised approach that combines modified maximum entropy and bipartite graph clustering. A comparison of opinion classification on reviews from four different product domains is presented. The results demonstrate that the proposed method performs relatively well in comparison to the other methods. A comparison with SentiWordNet on domain-specific and domain-independent words reveals that, on average, 72.6% and 88.4% of words, respectively, are correctly classified.

  12. Improved Collaborative Representation Classifier Based on l2-Regularized for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    Shirui Huo

    2017-01-01

    Full Text Available Human action recognition is an important and challenging task. Projecting depth images onto three depth motion maps (DMMs) and extracting deep convolutional neural network (DCNN) features are discriminant descriptors for characterizing the spatiotemporal information of a specific action in a sequence of depth images. In this paper, a unified improved collaborative representation framework is proposed in which the probability that a test sample belongs to the collaborative subspace of all classes can be well defined and calculated. The improved collaborative representation classifier (ICRC) based on l2-regularization for human action recognition is presented to maximize the likelihood that a test sample belongs to each class; theoretical investigation into ICRC shows that it obtains a final classification by computing the likelihood for each class. Coupled with the DMM and DCNN features, experiments on depth image-based action recognition, including the MSRAction3D and MSRGesture3D datasets, demonstrate that the proposed approach, using a distance-based representation classifier, achieves superior performance over state-of-the-art methods, including SRC, CRC, and SVM.
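
    A minimal l2-regularized collaborative representation classifier is sketched below: a test sample is coded over all training samples with a ridge penalty, and the class whose training columns best reconstruct it wins. The lambda value and the unnormalised residual rule are assumptions (some CRC variants normalise residuals by the norm of the class coefficients).

      import numpy as np

      def crc_predict(X_train, y_train, x, lam=0.01):
          D = X_train.T                      # dictionary: features x samples
          # Ridge-style coding: alpha = (D^T D + lam I)^-1 D^T x
          alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
          best_class, best_res = None, np.inf
          for c in np.unique(y_train):
              mask = (y_train == c)
              res = np.linalg.norm(x - D[:, mask] @ alpha[mask])
              if res < best_res:
                  best_class, best_res = c, res
          return best_class

      rng = np.random.default_rng(6)
      X, y = rng.random((60, 20)), np.repeat(np.arange(3), 20)
      print(crc_predict(X, y, rng.random(20)))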

  13. Histopathological Validation of the Surface-Intermediate-Base Margin Score for Standardized Reporting of Resection Technique during Nephron Sparing Surgery.

    Science.gov (United States)

    Minervini, Andrea; Campi, Riccardo; Kutikov, Alexander; Montagnani, Ilaria; Sessa, Francesco; Serni, Sergio; Raspollini, Maria Rosaria; Carini, Marco

    2015-10-01

    The surface-intermediate-base margin score is a novel standardized reporting system of resection techniques during nephron sparing surgery. We validated the surgeon-assessed surface-intermediate-base score with microscopic histopathological assessment of partial nephrectomy specimens. Between June and August 2014, data were prospectively collected from 40 consecutive patients undergoing nephron sparing surgery. The surface-intermediate-base score was assigned to all cases. The score-specific areas were color coded with tissue margin ink and sectioned for histological evaluation of healthy renal margin thickness. Maximum, minimum and mean thickness of healthy renal margin for each score-specific area grade (surface [S] = 0, S = 1; intermediate [I] or base [B] = 0, I or B = 1, I or B = 2) were reported. The Mann-Whitney U and Kruskal-Wallis tests were used to compare the thickness of healthy renal margin in S = 0 vs 1 and I or B = 0 vs 1 vs 2 grades, respectively. Maximum, minimum and mean thickness of healthy renal margin were significantly different among score-specific area grades S = 0 vs 1, and I or B = 0 vs 1, 0 vs 2 and 1 vs 2 (p < 0.001). The main limitations of the study are the low number of I or B = 1 and I or B = 2 samples and the assumption that each microscopic slide reflects the entire score-specific area for histological analysis. The surface-intermediate-base scoring method can be readily harnessed in real-world clinical practice and accurately mirrors histopathological analysis for quantification and reporting of healthy renal margin thickness removed during tumor excision. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  14. Patterns of failure for glioblastoma multiforme following limited-margin radiation and concurrent temozolomide

    International Nuclear Information System (INIS)

    Gebhardt, Brian J; Dobelbower, Michael C; Ennis, William H; Bag, Asim K; Markert, James M; Fiveash, John B

    2014-01-01

    To analyze patterns of failure in patients with glioblastoma multiforme (GBM) treated with limited-margin radiation therapy and concurrent temozolomide. We hypothesize that patients treated with margins in accordance with Adult Brain Tumor Consortium (ABTC) guidelines will demonstrate patterns of failure consistent with previous series of patients treated with 2–3 cm margins. A retrospective review was performed of patients treated at the University of Alabama at Birmingham for GBM between 2000 and 2011. Ninety-five patients with biopsy-proven disease and documented disease progression after treatment were analyzed. The initial planning target volume includes the T1-enhancing tumor and surrounding edema plus a 1 cm margin. The boost planning target volume includes the T1-enhancing tumor plus a 1 cm margin. The tumors were classified as in-field, marginal, or distant if greater than 80%, 20-80%, or less than 20% of the recurrent volume fell within the 95% isodose line, respectively. The median progression-free survival from the time of diagnosis to documented failure was 8 months (range 3–46). Of the 95 documented recurrences, 77 patients (81%) had an in-field component of treatment failure, 6 (6%) had a marginal component, and 27 (28%) had a distant component. Sixty-three patients (66%) demonstrated in-field only recurrence. The low rate of marginal recurrence suggests that wider margins would have little impact on the pattern of failure, validating the use of limited margins in accordance with ABTC guidelines.

  15. Quantifying the margin sharpness of lesions on radiological images for content-based image retrieval

    International Nuclear Information System (INIS)

    Xu Jiajing; Napel, Sandy; Greenspan, Hayit; Beaulieu, Christopher F.; Agrawal, Neeraj; Rubin, Daniel

    2012-01-01

    ... Equivalence across deformations was assessed using Schuirmann's paired two one-sided tests. Results: In simulated images, the concordance correlation between measured gradient and actual gradient was 0.994. The mean (standard deviation) NDCG scores for the retrieval of K images, K = 5, 10, and 15, were 84% (8%), 85% (7%), and 85% (7%) for CT images containing liver lesions, and 82% (7%), 84% (6%), and 85% (4%) for CT images containing lung nodules, respectively. The authors' proposed method outperformed the two existing margin characterization methods in average NDCG scores over all K, by 1.5% and 3% in datasets containing liver lesions, and 4.5% and 5% in datasets containing lung nodules. Equivalence testing showed that the authors' feature is more robust across all margin deformations (p < 0.05) than the two existing methods for margin sharpness characterization in both simulated and clinical datasets. Conclusions: The authors have described a new image feature to quantify the margin sharpness of lesions. It has strong correlation with known margin sharpness in simulated images and in clinical CT images containing liver lesions and lung nodules. This image feature has excellent performance for retrieving images with similar margin characteristics, suggesting potential utility, in conjunction with other lesion features, for content-based image retrieval applications.

  16. Comparison of prostate set-up accuracy and margins with off-line bony anatomy corrections and online implanted fiducial-based corrections.

    Science.gov (United States)

    Greer, P B; Dahl, K; Ebert, M A; Wratten, C; White, M; Denham, J W

    2008-10-01

    The aim of the study was to determine prostate set-up accuracy and set-up margins with off-line bony anatomy-based imaging protocols, compared with online implanted fiducial marker-based imaging with daily corrections. Eleven patients were treated with implanted prostate fiducial markers and online set-up corrections. Pretreatment orthogonal electronic portal images were acquired to determine couch shifts and verification images were acquired during treatment to measure residual set-up error. The prostate set-up errors that would result from skin marker set-up, off-line bony anatomy-based protocols and online fiducial marker-based corrections were determined. Set-up margins were calculated for each set-up technique using the percentage of encompassed isocentres and a margin recipe. The prostate systematic set-up errors in the medial-lateral, superior-inferior and anterior-posterior directions for skin marker set-up were 2.2, 3.6 and 4.5 mm (1 standard deviation). For our bony anatomy-based off-line protocol the prostate systematic set-up errors were 1.6, 2.5 and 4.4 mm. For the online fiducial based set-up the results were 0.5, 1.4 and 1.4 mm. A prostate systematic error of 10.2 mm was uncorrected by the off-line bone protocol in one patient. Set-up margins calculated to encompass 98% of prostate set-up shifts were 11-14 mm with bone off-line set-up and 4-7 mm with online fiducial markers. Margins from the van Herk margin recipe were generally 1-2 mm smaller. Bony anatomy-based set-up protocols improve the group prostate set-up error compared with skin marks; however, large prostate systematic errors can remain undetected or systematic errors increased for individual patients. The margin required for set-up errors was found to be 10-15 mm unless implanted fiducial markers are available for treatment guidance.
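
    The van Herk margin recipe cited above has the standard form M = 2.5 * Sigma + 0.7 * sigma, with Sigma the systematic and sigma the random set-up error. The snippet below applies it to the online fiducial-based systematic errors quoted in the abstract; the random-error value is an illustrative assumption, not taken from the paper.

      # Van Herk recipe: margin M = 2.5 * Sigma + 0.7 * sigma (both in mm).
      def van_herk_margin(systematic_sd, random_sd):
          return 2.5 * systematic_sd + 0.7 * random_sd

      # Online fiducial-based systematic errors quoted above (mm); random
      # error of 2.0 mm is an assumed value for illustration.
      for axis, Sigma in {"ML": 0.5, "SI": 1.4, "AP": 1.4}.items():
          print(axis, round(van_herk_margin(Sigma, random_sd=2.0), 1), "mm")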

  17. The role of deep-water sedimentary processes in shaping a continental margin: The Northwest Atlantic

    Science.gov (United States)

    Mosher, David C.; Campbell, D.C.; Gardner, J.V.; Piper, D.J.W.; Chaytor, Jason; Rebesco, M.

    2017-01-01

    The tectonic history of a margin dictates its general shape; however, its geomorphology is generally transformed by deep-sea sedimentary processes. The objective of this study is to show the influences of turbidity currents, contour currents and sediment mass failures on the geomorphology of the deep-water northwestern Atlantic margin (NWAM) between Blake Ridge and Hudson Trough, spanning about 32° of latitude and the shelf edge to the abyssal plain. This assessment is based on new multibeam echosounder data, global bathymetric models and sub-surface geophysical information. The deep-water NWAM is divided into four broad geomorphologic classifications based on their bathymetric shape: graded, above-grade, stepped and out-of-grade. These shapes were created as a function of the balance between sediment accumulation and removal that in turn were related to sedimentary processes and slope-accommodation. This descriptive method of classifying continental margins, while being non-interpretative, is more informative than the conventional continental shelf, slope and rise classification, and better facilitates interpretation concerning dominant sedimentary processes. Areas of the margin dominated by turbidity currents and slope by-pass developed graded slopes. If sediments did not by-pass the slope due to accommodation, then an above-grade or stepped slope resulted. Geostrophic currents created sedimentary bodies of a variety of forms and positions along the NWAM. Detached drifts form linear, above-grade slopes along their crests from the shelf edge to the deep basin. Plastered drifts formed stepped slope profiles. Sediment mass failure has had a variety of consequences on the margin morphology; large mass failures created out-of-grade profiles, whereas smaller mass failures tended to remain on the slope and formed above-grade profiles at trough-mouth fans, or nearly graded profiles, such as offshore Cape Fear.

  18. Maximum margin semi-supervised learning with irrelevant data.

    Science.gov (United States)

    Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R

    2015-10-01

    Semi-supervised learning (SSL) is a typical learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume that unlabeled data are relevant to the labeled data, i.e., that they follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable, scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named the tri-class support vector machine (3C-SVM), to utilize the available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption on data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier: it relies heavily on labeled data and is confident on the relevant data lying far away from the decision hyperplane, while maximally ignoring the irrelevant data, which are hardly distinguished. Second, theoretical analysis is provided to prove under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, semi-supervised SVMs (S(3)VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer programming to a semi-definite programming relaxation, and finally to a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as that of S(3)VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons.

  19. Marginal Contribution-Based Distributed Subchannel Allocation in Small Cell Networks.

    Science.gov (United States)

    Shah, Shashi; Kittipiyakul, Somsak; Lim, Yuto; Tan, Yasuo

    2018-05-10

    The paper presents a game theoretic solution for distributed subchannel allocation problem in small cell networks (SCNs) analyzed under the physical interference model. The objective is to find a distributed solution that maximizes the welfare of the SCNs, defined as the total system capacity. Although the problem can be addressed through best-response (BR) dynamics, the existence of a steady-state solution, i.e., a pure strategy Nash equilibrium (NE), cannot be guaranteed. Potential games (PGs) ensure convergence to a pure strategy NE when players rationally play according to some specified learning rules. However, such a performance guarantee comes at the expense of complete knowledge of the SCNs. To overcome such requirements, properties of PGs are exploited for scalable implementations, where we utilize the concept of marginal contribution (MC) as a tool to design learning rules of players’ utility and propose the marginal contribution-based best-response (MCBR) algorithm of low computational complexity for the distributed subchannel allocation problem. Finally, we validate and evaluate the proposed scheme through simulations for various performance metrics.
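
    A toy marginal-contribution best-response loop is sketched below: each small cell repeatedly moves to the subchannel that maximizes its marginal contribution to total capacity (for a fixed profile of the others, the welfare-maximizing option coincides with the MC-maximizing one). The interference and capacity model is a deliberately crude stand-in for the paper's physical interference model.

      import numpy as np

      rng = np.random.default_rng(5)
      n_cells, n_channels = 6, 3
      gain = rng.random((n_cells, n_cells))   # cross-gains between cell pairs
      np.fill_diagonal(gain, 1.0)             # own-link gain

      def welfare(choice):
          total = 0.0
          for i in range(n_cells):
              interf = sum(gain[j, i] for j in range(n_cells)
                           if j != i and choice[j] == choice[i])
              total += np.log2(1 + gain[i, i] / (0.1 + interf))  # 0.1 = noise
          return total

      choice = rng.integers(0, n_channels, n_cells)
      for _ in range(20):                     # best-response sweeps
          for i in range(n_cells):
              options = [welfare(np.r_[choice[:i], k, choice[i + 1:]])
                         for k in range(n_channels)]
              choice[i] = int(np.argmax(options))
      print(choice, welfare(choice))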

  20. Marginal Contribution-Based Distributed Subchannel Allocation in Small Cell Networks

    Directory of Open Access Journals (Sweden)

    Shashi Shah

    2018-05-01

    Full Text Available The paper presents a game theoretic solution for the distributed subchannel allocation problem in small cell networks (SCNs) analyzed under the physical interference model. The objective is to find a distributed solution that maximizes the welfare of the SCNs, defined as the total system capacity. Although the problem can be addressed through best-response (BR) dynamics, the existence of a steady-state solution, i.e., a pure strategy Nash equilibrium (NE), cannot be guaranteed. Potential games (PGs) ensure convergence to a pure strategy NE when players rationally play according to some specified learning rules. However, such a performance guarantee comes at the expense of complete knowledge of the SCNs. To overcome such requirements, properties of PGs are exploited for scalable implementations, where we utilize the concept of marginal contribution (MC) as a tool to design learning rules of players' utility and propose the marginal contribution-based best-response (MCBR) algorithm of low computational complexity for the distributed subchannel allocation problem. Finally, we validate and evaluate the proposed scheme through simulations for various performance metrics.

  1. Classifying dysmorphic syndromes by using artificial neural network based hierarchical decision tree.

    Science.gov (United States)

    Özdemir, Merve Erkınay; Telatar, Ziya; Eroğul, Osman; Tunca, Yusuf

    2018-05-01

    Dysmorphic syndromes present different facial malformations. These malformations are significant for an early diagnosis of dysmorphic syndromes and contain distinctive information for face recognition. In this study we define certain features of each syndrome by considering facial malformations, and classify Fragile X, Hurler, Prader-Willi, Down and Wolf-Hirschhorn syndromes and healthy groups automatically. Reference points are marked on the face images and ratios between the points' distances are taken into consideration as features. We suggest a neural network based hierarchical decision tree structure in order to classify the syndrome types. We also implement k-nearest neighbor (k-NN) and artificial neural network (ANN) classifiers to compare classification accuracy with our hierarchical decision tree. The classification accuracy is 50%, 73% and 86.7% with the k-NN, ANN and hierarchical decision tree methods, respectively. The same images were then shown to a clinical expert, who achieved a recognition rate of 46.7%. We develop an efficient system to recognize different syndrome types automatically from simple, non-invasive imaging data, independent of the patient's age, sex and race, at high accuracy. The promising results indicate that our method can be used for pre-diagnosis of dysmorphic syndromes by clinical experts.

  2. Graphic Symbol Recognition using Graph Based Signature and Bayesian Network Classifier

    OpenAIRE

    Luqman, Muhammad Muzzamil; Brouard, Thierry; Ramel, Jean-Yves

    2010-01-01

    We present a new approach for the recognition of complex graphic symbols in technical documents. Graphic symbol recognition is a well-known challenge in the field of document image analysis and is at the heart of most graphic recognition systems. Our method uses a structural approach for symbol representation and a statistical classifier for symbol recognition. In our system we represent symbols by their graph-based signatures: a graphic symbol is vectorized and is converted to an attributed relational g...

  3. Hyperspectral image classifier based on beach spectral feature

    International Nuclear Information System (INIS)

    Liang, Zhang; Lianru, Gao; Bing, Zhang

    2014-01-01

    The seashore, especially coral banks, is sensitive to human activities and environmental changes. A multispectral image, with coarse spectral resolution, is inadequate for identifying subtle spectral distinctions between various beaches. To the contrary, a hyperspectral image with narrow and consecutive channels increases our capability to retrieve minor spectral features, which is suitable for the identification and classification of surface materials on the shore. Herein, this paper used airborne hyperspectral data, in addition to ground spectral data, to study the beaches in Qingdao. The image data first went through pretreatment to deal with the disturbances of noise, radiometric inconsistency and distortion. In succession, the reflection spectrum, the derivative spectrum and the spectral absorption features of the beach surface were inspected in search of diagnostic features. Hence, spectral indices specific to the unique environment of the seashore were developed. According to expert decisions based on image spectra, the beaches are ultimately classified into sand beach, rock beach, vegetation beach, mud beach, bare land and water. In situ reflection spectra surveyed with a GER1500 field spectrometer validated the classification product. In conclusion, the classification approach under expert decision based on feature spectra is proved to be feasible for beaches.

  4. Planning Target Margin Calculations for Prostate Radiotherapy Based on Intrafraction and Interfraction Motion Using Four Localization Methods

    International Nuclear Information System (INIS)

    Beltran, Chris; Herman, Michael G.; Davis, Brian J.

    2008-01-01

    Purpose: To determine planning target volume (PTV) margins for prostate radiotherapy based on the internal margin (IM) (intrafractional motion) and the setup margin (SM) (interfractional motion) for four daily localization methods: skin marks (tattoo), pelvic bony anatomy (bone), intraprostatic gold seeds using a 5-mm action threshold, and using no threshold. Methods and Materials: Forty prostate cancer patients were treated with external radiotherapy according to an online localization protocol using four intraprostatic gold seeds and electronic portal images (EPIs). Daily localization and treatment EPIs were obtained. These data allowed inter- and intrafractional analysis of prostate motion. The SM for the four daily localization methods and the IM were determined. Results: A total of 1532 fractions were analyzed. Tattoo localization requires a SM of 6.8 mm left-right (LR), 7.2 mm inferior-superior (IS), and 9.8 mm anterior-posterior (AP). Bone localization requires 3.1, 8.9, and 10.7 mm, respectively. The 5-mm threshold localization requires 4.0, 3.9, and 3.7 mm. No threshold localization requires 3.4, 3.2, and 3.2 mm. The intrafractional prostate motion requires an IM of 2.4 mm LR, 3.4 mm IS and AP. The PTV margin using the 5-mm threshold, including interobserver uncertainty, IM, and SM, is 4.8 mm LR, 5.4 mm IS, and 5.2 mm AP. Conclusions: Localization based on EPI with implanted gold seeds allows a large PTV margin reduction when compared with tattoo localization. Except for the LR direction, bony anatomy localization does not decrease the margins compared with tattoo localization. Intrafractional prostate motion is a limiting factor on margin reduction

  5. Parotid gland sparing effect by computed tomography-based modified lower field margin in whole brain radiotherapy

    International Nuclear Information System (INIS)

    Cho, Oyeon; Chun, Mi Son; Oh, Young Taek; Kim, Mi Hwa; Park, Hae Jin; Nam, Sang Soo; Heo, Jae Sung; Noh, O Kyu; Park, Sung Ho

    2013-01-01

    The parotid gland can be considered an organ at risk in whole brain radiotherapy (WBRT). The purpose of this study is to evaluate the parotid gland sparing effect of computed tomography (CT)-based WBRT compared to a 2-dimensional plan with a conventional field margin. From January 2008 to April 2011, 53 patients underwent WBRT using CT-based simulation. A bilateral two-field arrangement was used and the prescribed dose was 30 Gy in 10 fractions. We compared the parotid dose between 2 radiotherapy plans using different lower field margins: a conventional field to the lower level of the atlas (CF) and a modified field fitted to the brain tissue (MF). Averages of the mean parotid dose of the 2 protocols with CF and MF were 17.4 Gy and 8.7 Gy, respectively (p < 0.001), while brain volumes receiving >98% of the prescribed dose were 99.7% for CF and 99.5% for MF. Compared to WBRT with CF, CT-based lower field margin modification is a simple and effective technique for sparing the parotid gland while providing similar dose coverage of the whole brain.

  6. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
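
    A hedged sketch of the Bayes-style fusion step follows, treating the text and image scores as conditionally independent evidence so the fused posterior is proportional to their product; the independence assumption, the uniform-prior assumption behind the inputs, and the prior parameter are illustrative, not the paper's exact rule.

      # Each input is P(pornographic | modality), assumed to have been formed
      # under a uniform prior so that posterior odds equal likelihood ratios.
      def bayes_fuse(p_text, p_image, prior=0.5):
          lr_text = p_text / (1.0 - p_text)
          lr_image = p_image / (1.0 - p_image)
          odds = (prior / (1.0 - prior)) * lr_text * lr_image
          return odds / (1.0 + odds)

      print(bayes_fuse(0.7, 0.9))   # strong combined evidence, about 0.95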

  7. Decoding the Margins: What Can the Fractal Geometry of Basaltic Flow Margins Tell Us?

    Science.gov (United States)

    Schaefer, E. I.; Hamilton, C.; Neish, C.; Beard, S. P.; Bramson, A. M.; Sori, M.; Rader, E. L.

    2016-12-01

    Studying lava flows on other planetary bodies is essential to characterizing eruption styles and constraining the bodies' thermal evolution. Although planetary basaltic flows are common, many key features are not resolvable in orbital imagery. We are thus developing a technique to characterize basaltic flow type, sub-meter roughness, and sediment mantling from these data. We will present the results from upcoming fieldwork at Craters of the Moon National Monument and Preserve with FINESSE (August) and at Hawai'i Volcanoes National Park (September). We build on earlier work that showed that basaltic flow margins are approximately fractal [Bruno et al., 1992; Gaonac'h et al., 1992] and that their fractal dimensions (D) have distinct `a`ā and pāhoehoe ranges under simple conditions [Bruno et al., 1994]. Using a differential GPS rover, we have recently shown that the margin of Iceland's 2014 Holuhraun flow exhibits near-perfect (R2=0.9998) fractality for ≥24 km across dm to km scales [Schaefer et al., 2016]. This finding suggests that a fractal-based technique has significant potential to characterize flows at sub-resolution scales. We are simultaneously seeking to understand how margin fractality can be modified. A preliminary result for an `a'ā flow in Hawaii's Ka'ū Desert suggests that although aeolian mantling obscures the original flow margin, the apparent margin (i.e., sediment-lava interface) remains fractal [Schaefer et al., 2015]. Further, the apparent margin's D is likely significantly modified from that of the original margin. Other factors that we are exploring include erosion, transitional flow types, and topographic confinement. We will also rigorously test the intriguing possibility that margin D correlates with the sub-meter Hurst exponent H of the flow surface, a common metric of roughness scaling [e.g., Shepard et al., 2001]. This hypothesis is based on geometric arguments [Turcotte, 1997] and is qualitatively consistent with all results so far.
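
    The fractal dimension D of a flow margin is commonly estimated by box counting: cover the margin trace with grids of decreasing cell size s, count the occupied cells N(s), and fit the slope of log N(s) against log(1/s). A minimal sketch on a synthetic trace follows; the cited studies use differential-GPS margin data, and this is not their code:

```python
# Box-counting estimate of the fractal dimension D of a margin trace
# given as (x, y) coordinates. Synthetic wiggly curve for illustration.
import numpy as np

def box_counting_dimension(x, y, sizes):
    counts = []
    for s in sizes:
        # Snap each point to a grid of cell size s and count occupied cells.
        cells = {(int(xi // s), int(yi // s)) for xi, yi in zip(x, y)}
        counts.append(len(cells))
    # D is the slope of log N(s) versus log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

t = np.linspace(0, 4 * np.pi, 20000)
x = t + 0.3 * np.sin(7 * t)          # synthetic "margin" trace
y = np.cos(t) + 0.2 * np.sin(13 * t)
print(box_counting_dimension(x, y, sizes=[0.02, 0.05, 0.1, 0.2, 0.5]))
```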

  8. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Vol. 10, No. 3 (2000), p. 417-428. ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant - others: MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords: neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  9. Adaptive Marginal Costs-Based Distributed Economic Control of Microgrid Clusters Considering Line Loss

    Directory of Open Access Journals (Sweden)

    Xiaoqian Zhou

    2017-12-01

    Full Text Available When several microgrids (MG) are interconnected into microgrid clusters (MGC), they have great potential to improve their reliability. Traditional droop control tends to make total operating costs higher because power is distributed according to the capacity ratios of the distributed energy resources (DERs). This paper proposes an adaptive distributed economic control for islanded microgrids that considers line loss: a marginal-costs-based economic droop control is proposed, and a consensus-based adaptive controller is applied to handle power limits and capacity constraints for storage. The whole expense can be effectively lowered by achieving identical marginal costs for the DERs in the MGC. Capacity constraints for storage are also included for further optimization. Moreover, consensus-based distributed secondary controllers are used to rapidly restore system frequency and voltage magnitudes. These controllers only need to interact with neighboring DERs over a sparse communication network, eliminating the need for a central controller and enhancing stability. An MGC incorporating three microgrids is used to verify the effectiveness of the proposed methods.

  10. Multiobjective optimization of classifiers by means of 3D convex-hull-based evolutionary algorithms

    NARCIS (Netherlands)

    Zhao, J.; Basto, Fernandes V.; Jiao, L.; Yevseyeva, I.; Asep, Maulana A.; Li, R.; Bäck, T.H.W.; Tang, T.; Michael, Emmerich T. M.

    2016-01-01

    The receiver operating characteristic (ROC) and detection error tradeoff (DET) curves are frequently used in the machine learning community to analyze the performance of binary classifiers. Recently, the convex-hull-based multiobjective genetic programming algorithm was proposed and successfully

  11. Comparison of prostate set-up accuracy and margins with off-line bony anatomy corrections and online implanted fiducial-based corrections

    International Nuclear Information System (INIS)

    Greer, P. B.; Dahl, K.; Ebert, M. A.; Wratten, C.; White, M.; Denham, K. W.

    2008-01-01

    Full text: The aim of the study was to determine prostate set-up accuracy and set-up margins with off-line bony anatomy-based imaging protocols, compared with online implanted fiducial marker-based imaging with daily corrections. Eleven patients were treated with implanted prostate fiducial markers and online set-up corrections. Pretreatment orthogonal electronic portal images were acquired to determine couch shifts, and verification images were acquired during treatment to measure residual set-up error. The prostate set-up errors that would result from skin marker set-up, off-line bony anatomy-based protocols and online fiducial marker-based corrections were determined. Set-up margins were calculated for each set-up technique using the percentage of encompassed isocentres and a margin recipe. The prostate systematic set-up errors in the medial-lateral, superior-inferior and anterior-posterior directions for skin marker set-up were 2.2, 3.6 and 4.5 mm (1 standard deviation). For our bony anatomy-based off-line protocol the prostate systematic set-up errors were 1.6, 2.5 and 4.4 mm. For the online fiducial-based set-up the results were 0.5, 1.4 and 1.4 mm. A prostate systematic error of 10.2 mm was uncorrected by the off-line bone protocol in one patient. Set-up margins calculated to encompass 98% of prostate set-up shifts were 11-14 mm with bone off-line set-up and 4-7 mm with online fiducial markers. Margins from the van Herk margin recipe were generally 1-2 mm smaller. Bony anatomy-based set-up protocols improve the group prostate set-up error compared with skin marks; however, large prostate systematic errors can remain undetected, or systematic errors can increase, for individual patients. The margin required for set-up errors was found to be 10-15 mm unless implanted fiducial markers are available for treatment guidance.
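
    The van Herk margin recipe referred to above combines the group systematic error Σ and random error σ into a CTV-to-PTV margin, commonly written M = 2.5Σ + 0.7σ. A sketch using the skin-mark systematic errors quoted in the abstract; the random-error values are placeholders, since the abstract does not quote them:

```python
# The van Herk margin recipe: M = 2.5*Sigma + 0.7*sigma, where Sigma is the
# systematic and sigma the random set-up error (1 SD). Sigma values below
# are the skin-mark systematic errors quoted in the abstract; the sigma
# values are placeholders for illustration.
def van_herk_margin(Sigma_mm, sigma_mm):
    return 2.5 * Sigma_mm + 0.7 * sigma_mm

for axis, Sigma, sigma in [("ML", 2.2, 2.0), ("SI", 3.6, 2.0), ("AP", 4.5, 2.0)]:
    print(f"{axis}: margin = {van_herk_margin(Sigma, sigma):.1f} mm")
```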

  12. Design of a high-sensitivity classifier based on a genetic algorithm: application to computer-aided diagnosis

    International Nuclear Information System (INIS)

    Sahiner, Berkman; Chan, Heang-Ping; Petrick, Nicholas; Helvie, Mark A.; Goodsitt, Mitchell M.

    1998-01-01

    A genetic algorithm (GA) based feature selection method was developed for the design of high-sensitivity classifiers, which were tailored to yield high sensitivity with high specificity. The fitness function of the GA was based on the receiver operating characteristic (ROC) partial area index, which is defined as the average specificity above a given sensitivity threshold. The designed GA evolved towards the selection of feature combinations which yielded high specificity in the high-sensitivity region of the ROC curve, regardless of the performance at low sensitivity. This is a desirable quality of a classifier used for breast lesion characterization, since the focus in breast lesion characterization is to diagnose correctly as many benign lesions as possible without missing malignancies. The high-sensitivity classifier, formulated as Fisher's linear discriminant using GA-selected feature variables, was employed to classify 255 biopsy-proven mammographic masses as malignant or benign. The mammograms were digitized at a pixel size of 0.1 mm x 0.1 mm, and regions of interest (ROIs) containing the biopsied masses were extracted by an experienced radiologist. A recently developed image transformation technique, referred to as the rubber-band straightening transform, was applied to the ROIs. Texture features extracted from the spatial grey-level dependence and run-length statistics matrices of the transformed ROIs were used to distinguish malignant and benign masses. The classification accuracy of the high-sensitivity classifier was compared with that of linear discriminant analysis with stepwise feature selection (LDAsfs). With proper GA training, the ROC partial area of the high-sensitivity classifier above a true-positive fraction of 0.95 was significantly larger than that of LDAsfs, although the latter provided a higher total area (Az) under the ROC curve. By setting an appropriate decision threshold, the high-sensitivity classifier and LDAsfs correctly
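
    The ROC partial area index used as the GA fitness is the average specificity above a sensitivity threshold. A sketch of that computation with synthetic scores follows; it is not the authors' implementation:

```python
# Sketch of the ROC "partial area index": the average specificity over the
# high-sensitivity region TPF >= t0 (here t0 = 0.95). Scores and labels
# are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

def partial_area_index(y_true, scores, t0=0.95):
    fpr, tpr, _ = roc_curve(y_true, scores)
    # Sample specificity = 1 - FPR on a fine TPR grid restricted to [t0, 1].
    grid = np.linspace(t0, 1.0, 200)
    spec = 1.0 - np.interp(grid, tpr, fpr)  # tpr is nondecreasing
    return spec.mean()

rng = np.random.default_rng(0)
y = np.r_[np.zeros(200), np.ones(200)]
s = np.r_[rng.normal(0, 1, 200), rng.normal(1.5, 1, 200)]
print(partial_area_index(y, s))
```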

  13. IAEA safeguards and classified materials

    International Nuclear Information System (INIS)

    Pilat, J.F.; Eccleston, G.W.; Fearey, B.L.; Nicholas, N.J.; Tape, J.W.; Kratzer, M.

    1997-01-01

    The international community in the post-Cold War period has suggested that the International Atomic Energy Agency (IAEA) utilize its expertise in support of the arms control and disarmament process in unprecedented ways. The pledges of the US and Russian presidents to place excess defense materials, some of which are classified, under some type of international inspections raise the prospect of using IAEA safeguards approaches for monitoring classified materials. A traditional safeguards approach, based on nuclear material accountancy, would seem unavoidably to reveal classified information. However, further analysis of the IAEA's safeguards approaches is warranted in order to understand fully the scope and nature of any problems. The issues are complex and difficult, and it is expected that common technical understandings will be essential for their resolution. Accordingly, this paper examines and compares traditional safeguards item accounting of fuel at a nuclear power station (especially spent fuel) with the challenges presented by inspections of classified materials. This analysis is intended to delineate more clearly the problems as well as reveal possible approaches, techniques, and technologies that could allow the adaptation of safeguards to the unprecedented task of inspecting classified materials. It is also hoped that a discussion of these issues can advance ongoing political-technical debates on international inspections of excess classified materials.

  14. Accountable Accounting: Carbon-Based Management on Marginal Lands

    Directory of Open Access Journals (Sweden)

    Tara L. DiRocco

    2014-04-01

    Full Text Available Substantial discussion exists concerning the best land use options for mitigating greenhouse gas (GHG) emissions on marginal land. Emissions-mitigating land use options include displacement of fossil fuels via biofuel production and afforestation. Comparing C recovery dynamics under these different options is crucial to assessing the efficacy of offset programs. In this paper, we focus on forest recovery on marginal land, and show that there is substantial inaccuracy and discrepancy in the literature concerning carbon accumulation. We find that uncertainty in carbon accumulation occurs in estimations of carbon stocks and models of carbon dynamics over time. We suggest that analyses to date have been largely unsuccessful at determining reliable trends in site recovery due to broad land use categories, a failure to consider the effect of current and post-restoration management, and problems with meta-analysis. Understanding of C recovery could be greatly improved with increased data collection on pre-restoration site quality, prior land use history, and management practices, as well as increased methodological standardization. Finally, given the current and likely future uncertainty in C dynamics, we recommend that carbon mitigation potential should not be the only environmental service driving land use decisions on marginal lands.

  15. Deep Convolutional Neural Networks for Classifying Body Constitution Based on Face Image.

    Science.gov (United States)

    Huan, Er-Yang; Wen, Gui-Hua; Zhang, Shi-Jun; Li, Dan-Yang; Hu, Yang; Chang, Tian-Yuan; Wang, Qing; Huang, Bing-Lin

    2017-01-01

    Body constitution classification is the basis and core content of traditional Chinese medicine constitution research: it aims to extract regularities from complex constitution phenomena and ultimately build a constitution classification system. Traditional identification methods, such as questionnaires, are inefficient and of low accuracy. This paper proposes a body constitution recognition algorithm based on a deep convolutional neural network, which can classify individual constitution types from face images. The proposed model first uses the convolutional neural network to extract features from the face image and then combines the extracted features with color features. Finally, the fused features are input to a Softmax classifier to obtain the classification result. Comparison experiments show that the proposed algorithm achieves an accuracy of 65.29% for constitution classification, and its performance was accepted by Chinese medicine practitioners.

  16. A review and experimental study on the application of classifiers and evolutionary algorithms in EEG-based brain-machine interface systems

    Science.gov (United States)

    Tahernezhad-Javazm, Farajollah; Azimirad, Vahid; Shoaran, Maryam

    2018-04-01

    Objective. Considering the importance and the near-future development of noninvasive brain-machine interface (BMI) systems, this paper presents a comprehensive theoretical-experimental survey on the classification and evolutionary methods for BMI-based systems in which EEG signals are used. Approach. The paper is divided into two main parts. In the first part, a wide range of different types of base and combinatorial classifiers, including boosting and bagging classifiers, and evolutionary algorithms are reviewed and investigated. In the second part, these classifiers and evolutionary algorithms are assessed and compared based on two types of relatively widely used BMI systems, sensory motor rhythm-BMI and event-related potentials-BMI. Moreover, in the second part, some of the improved evolutionary algorithms as well as bi-objective algorithms are experimentally assessed and compared. Main results. In this study two databases are used, and cross-validation accuracy (CVA) and stability to data volume (SDV) are considered as the evaluation criteria for the classifiers. According to the experimental results on both databases, regarding the base classifiers, linear discriminant analysis and support vector machines with respect to the CVA evaluation metric, and naive Bayes with respect to SDV, demonstrated the best performances. Among the combinatorial classifiers, Bagg-DT (bagging decision tree), LogitBoost, and GentleBoost with respect to CVA, and Bagging-LR (bagging logistic regression) and AdaBoost (adaptive boosting) with respect to SDV, had the best performances. Finally, regarding the evolutionary algorithms, single-objective invasive weed optimization (IWO) and bi-objective nondominated sorting IWO algorithms demonstrated the best performances. Significance. We present a general survey on the base and the combinatorial classification methods for EEG signals (sensory motor rhythm and event-related potentials) as well as their optimization methods.

  17. Effect of metal selection and porcelain firing on the marginal accuracy of titanium-based metal ceramic restorations.

    Science.gov (United States)

    Shokry, Tamer E; Attia, Mazen; Mosleh, Ihab; Elhosary, Mohamed; Hamza, Tamer; Shen, Chiayi

    2010-01-01

    Titanium is the most biocompatible metal used for dental casting; however, there is concern about its marginal accuracy after porcelain application, since this aspect has a direct influence on marginal fit. The purpose of this study was to determine the effect that metal selection and the porcelain firing procedure have on the marginal accuracy of metal ceramic prostheses. Cast CP Ti, milled CP Ti, cast Ti-6Al-7Nb, and cast Ni-Cr copings (n=5) were fired with compatible porcelains (Triceram for titanium-based metals and VITA VMK 95 for Ni-Cr alloy). The Ni-Cr alloy fired with its porcelain served as the control. Photographs of metal copings placed on a master die were made. Marginal discrepancy was determined on the photographs using an image processing program at 8 predetermined locations before airborne-particle abrasion for porcelain application, after firing of the opaque layer, and after firing of the dentin layer. Repeated-measures 2-way ANOVA was used to investigate the effect of metal selection and firing stage, and paired t tests were used to determine the effect of each firing stage within each material group (alpha=.05). ANOVA showed that both metal selection and firing stage significantly influenced the measured marginal discrepancy (P<.05), with cast CP Ti showing a significantly larger discrepancy than cast Ti-6Al-7Nb alloy (P=.003). Titanium copings fabricated by CAD/CAM demonstrated the least marginal discrepancy among all groups, while the base metal (Ni-Cr) groups exhibited the most discrepancy of all groups tested. Copyright 2010 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  18. A Core Set Based Large Vector-Angular Region and Margin Approach for Novelty Detection

    Directory of Open Access Journals (Sweden)

    Jiusheng Chen

    2016-01-01

    Full Text Available A large vector-angular region and margin (LARM) approach is presented for novelty detection based on imbalanced data. The key idea is to construct the largest vector-angular region in the feature space to separate normal training patterns and, meanwhile, to maximize the vector-angular margin between the surface of this optimal vector-angular region and the abnormal training patterns. In order to improve the generalization performance of LARM, the vector-angular distribution is optimized by maximizing the vector-angular mean and minimizing the vector-angular variance, which separates the normal and abnormal examples well. However, the inherent computation of the quadratic programming (QP) solver takes O(n^3) training time and at least O(n^2) space, which might be computationally prohibitive for large scale problems. Using a (1+ε)- and (1-ε)-approximation algorithm, a core set based LARM algorithm is proposed for fast training of the LARM problem. Experimental results based on imbalanced datasets have validated the favorable efficiency of the proposed approach in novelty detection.

  19. Robust Combining of Disparate Classifiers Through Order Statistics

    Science.gov (United States)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.
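
    A minimal sketch of the order-statistics combiners analyzed above (median, maximum, and a trimmed mean over the ordered classifier outputs), with invented posterior outputs for a single sample:

```python
# Order-statistics combiners over the class-posterior outputs of several
# classifiers: median, maximum, and a trimmed mean. Outputs are synthetic.
import numpy as np

def combine(outputs, how="median", trim=1):
    # outputs: array of shape (n_classifiers, n_classes) for one sample
    ordered = np.sort(outputs, axis=0)  # order statistics per class
    if how == "median":
        return np.median(outputs, axis=0)
    if how == "max":
        return ordered[-1]
    if how == "trim":  # drop `trim` smallest and largest outputs, then average
        return ordered[trim:len(outputs) - trim].mean(axis=0)
    raise ValueError(how)

outputs = np.array([[0.7, 0.3], [0.6, 0.4], [0.9, 0.1], [0.2, 0.8], [0.8, 0.2]])
for how in ("median", "max", "trim"):
    probs = combine(outputs, how)
    print(how, probs, "-> class", int(np.argmax(probs)))
```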

  20. Can-Evo-Ens: Classifier stacking based evolutionary ensemble system for prediction of human breast cancer using amino acid sequences.

    Science.gov (United States)

    Ali, Safdar; Majid, Abdul

    2015-04-01

    The diagnosis of human breast cancer is an intricate process and specific indicators may produce negative results. In order to avoid misleading results, an accurate and reliable diagnostic system for breast cancer is indispensable. Recently, several interesting machine-learning (ML) approaches have been proposed for the prediction of breast cancer. To this end, we developed a novel classifier stacking based evolutionary ensemble system "Can-Evo-Ens" for predicting amino acid sequences associated with breast cancer. In this paper, first, we selected four diverse types of ML algorithms, Naïve Bayes, K-Nearest Neighbor, Support Vector Machines, and Random Forest, as base-level classifiers. These classifiers are trained individually in different feature spaces using physicochemical properties of amino acids. In order to exploit the decision spaces, the preliminary predictions of the base-level classifiers are stacked. Genetic programming (GP) is then employed to develop a meta-classifier that optimally combines the predictions of the base classifiers. The most suitable threshold value of the best-evolved predictor is computed using the Particle Swarm Optimization technique. Our experiments have demonstrated the robustness of the Can-Evo-Ens system on an independent validation dataset. The proposed system has achieved the highest area under the ROC curve (AUC) of 99.95% for cancer prediction. The comparative results revealed that the proposed approach is better than individual ML approaches and conventional ensemble approaches of AdaBoostM1, Bagging, GentleBoost, and Random Subspace. It is expected that the proposed novel system would have a major impact on the fields of Biomedical, Genomics, Proteomics, Bioinformatics, and Drug Development. Copyright © 2015 Elsevier Inc. All rights reserved.
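
    A simplified stacking sketch of the architecture described above: four heterogeneous base classifiers whose predictions feed a meta-classifier. Here scikit-learn's StackingClassifier with a logistic regression stands in for the GP-evolved combiner, and synthetic data replaces the amino-acid features:

```python
# Simplified stacking sketch: four base learners combined by a meta-learner.
# A logistic regression stands in for the paper's GP-evolved combiner, and
# make_classification data replaces the physicochemical features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
base = [("nb", GaussianNB()), ("knn", KNeighborsClassifier()),
        ("svm", SVC(probability=True)), ("rf", RandomForestClassifier())]
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression(),
                           stack_method="predict_proba", cv=5)
print("training accuracy:", stack.fit(X, y).score(X, y))
```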

  1. ConSpeciFix: Classifying prokaryotic species based on gene flow.

    Science.gov (United States)

    Bobay, Louis-Marie; Ellis, Brian Shin-Hua; Ochman, Howard

    2018-05-16

    Classification of prokaryotic species is usually based on sequence similarity thresholds, which are easy to apply but lack a biologically-relevant foundation. Here, we present ConSpeciFix, a program that classifies prokaryotes into species using criteria set forth by the Biological Species Concept, thereby unifying species definition in all domains of life. ConSpeciFix's webserver is freely available at www.conspecifix.com. The local version of the program can be freely downloaded from https://github.com/Bobay-Ochman/ConSpeciFix. ConSpeciFix is written in Python 2.7 and requires the following dependencies: Usearch, MCL, MAFFT and RAxML. ljbobay@uncg.edu.

  2. Comparative analysis of instance selection algorithms for instance-based classifiers in the context of medical decision support

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Tourassi, Georgia D; Malof, Jordan M

    2011-01-01

    When constructing a pattern classifier, it is important to make best use of the instances (a.k.a. cases, examples, patterns or prototypes) available for its development. In this paper we present an extensive comparative analysis of algorithms that, given a pool of previously acquired instances, attempt to select those that will be the most effective to construct an instance-based classifier in terms of classification performance, time efficiency and storage requirements. We evaluate seven previously proposed instance selection algorithms and compare their performance to simple random selection of instances. We perform the evaluation using k-nearest neighbor classifier and three classification problems: one with simulated Gaussian data and two based on clinical databases for breast cancer detection and diagnosis, respectively. Finally, we evaluate the impact of the number of instances available for selection on the performance of the selection algorithms and conduct initial analysis of the selected instances. The experiments show that for all investigated classification problems, it was possible to reduce the size of the original development dataset to less than 3% of its initial size while maintaining or improving the classification performance. Random mutation hill climbing emerges as the superior selection algorithm. Furthermore, we show that some previously proposed algorithms perform worse than random selection. Regarding the impact of the number of instances available for the classifier development on the performance of the selection algorithms, we confirm that the selection algorithms are generally more effective as the pool of available instances increases. In conclusion, instance selection is generally beneficial for instance-based classifiers as it can improve their performance, reduce their storage requirements and improve their response time. However, choosing the right selection algorithm is crucial.
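
    Random mutation hill climbing, the best-performing selection algorithm in this comparison, can be sketched as follows: maintain a bit mask over the instance pool, flip one randomly chosen bit per iteration, and keep the flip only if the accuracy of a 1-NN classifier built on the selected instances does not drop. A minimal sketch with synthetic data, not the paper's experimental setup:

```python
# Random mutation hill climbing (RMHC) for instance selection with a 1-NN
# classifier. Synthetic data; validation accuracy drives the search.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, random_state=0)
Xp, yp, Xv, yv = X[:200], y[:200], X[200:], y[200:]  # pool / validation

rng = np.random.default_rng(0)
mask = rng.random(len(Xp)) < 0.1  # start from a small random subset

def acc(m):
    if m.sum() < 2:
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=1).fit(Xp[m], yp[m])
    return knn.score(Xv, yv)

best = acc(mask)
for _ in range(500):
    i = rng.integers(len(mask))
    mask[i] = ~mask[i]          # flip one random bit
    new = acc(mask)
    if new >= best:
        best = new
    else:
        mask[i] = ~mask[i]      # revert the flip
print(f"kept {mask.sum()}/{len(mask)} instances, val accuracy {best:.3f}")
```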

  3. Error minimizing algorithms for nearest neighbor classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory]; Hush, Don [Los Alamos National Laboratory]; Zimmer, G. Beate [Texas A&M]

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.

  4. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    Directory of Open Access Journals (Sweden)

    Shehzad Khalid

    2014-01-01

    Full Text Available We present a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real life datasets. It is also compared with existing standard ensemble techniques such as Adaboost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method as compared to its competitors, especially in the presence of class label noise and imbalanced classes.

  5. Classifier Fusion With Contextual Reliability Evaluation.

    Science.gov (United States)

    Liu, Zhunga; Pan, Quan; Dezert, Jean; Han, Jun-Wei; He, You

    2018-05-01

    Classifier fusion is an efficient strategy to improve the classification performance for complex pattern recognition problems. In practice, the multiple classifiers to combine can have different reliabilities, and proper reliability evaluation plays an important role in the fusion process for getting the best classification performance. We propose a new method for classifier fusion with contextual reliability evaluation (CF-CRE) based on inner reliability and relative reliability concepts. The inner reliability, represented by a matrix, characterizes the probability of the object belonging to one class when it is classified to another class. The elements of this matrix are estimated from the K-nearest neighbors of the object. A cautious discounting rule is developed under the belief functions framework to revise the classification result according to the inner reliability. The relative reliability is evaluated based on a new incompatibility measure which allows the level of conflict between the classifiers to be reduced by applying the classical evidence discounting rule to each classifier before their combination. The inner reliability and relative reliability capture different aspects of the classification reliability. The discounted classification results are combined with Dempster-Shafer's rule for the final class decision making support. The performance of CF-CRE has been evaluated and compared with those of the main classical fusion methods using real data sets. The experimental results show that CF-CRE can produce substantially higher accuracy than other fusion methods in general. Moreover, CF-CRE is robust to changes in the number of nearest neighbors chosen for estimating the reliability matrix, which is appealing for applications.
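
    Two standard operations the method builds on are classical reliability discounting of a basic belief assignment and Dempster's rule of combination. A minimal sketch on a two-class frame; this is not the full CF-CRE procedure, just the textbook operations it refines:

```python
# Classical reliability discounting of a basic belief assignment (BBA)
# and Dempster's rule of combination, on a two-class frame {a, b}.
FRAME = frozenset({"a", "b"})

def discount(m, alpha):
    # m^alpha(A) = alpha*m(A) for A != FRAME; mass (1-alpha) moves to FRAME.
    out = {A: alpha * v for A, v in m.items() if A != FRAME}
    out[FRAME] = 1 - alpha + alpha * m.get(FRAME, 0.0)
    return out

def dempster(m1, m2):
    # Conjunctive combination with normalization by (1 - conflict).
    joint, conflict = {}, 0.0
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            C = A & B
            if C:
                joint[C] = joint.get(C, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {A: v / (1 - conflict) for A, v in joint.items()}

m1 = {frozenset({"a"}): 0.8, frozenset({"b"}): 0.1, FRAME: 0.1}
m2 = {frozenset({"a"}): 0.4, frozenset({"b"}): 0.5, FRAME: 0.1}
print(dempster(discount(m1, 0.9), discount(m2, 0.6)))
```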

  6. A comparison of rule-based and machine learning approaches for classifying patient portal messages.

    Science.gov (United States)

    Cronin, Robert M; Fabbri, Daniel; Denny, Joshua C; Rosenbloom, S Trent; Jackson, Gretchen Purcell

    2017-09-01

    Secure messaging through patient portals is an increasingly popular way that consumers interact with healthcare providers. The increasing burden of secure messaging can affect clinic staffing and workflows. Manual management of portal messages is costly and time consuming. Automated classification of portal messages could potentially expedite message triage and delivery of care. We developed automated patient portal message classifiers with rule-based and machine learning techniques using bag of words and natural language processing (NLP) approaches. To evaluate classifier performance, we used a gold standard of 3253 portal messages manually categorized using a taxonomy of communication types (i.e., main categories of informational, medical, logistical, social, and other communications, and subcategories including prescriptions, appointments, problems, tests, follow-up, contact information, and acknowledgement). We evaluated our classifiers' accuracies in identifying individual communication types within portal messages with the area under the receiver operating characteristic curve (AUC). Portal messages often contain more than one type of communication. To predict all communication types within single messages, we used the Jaccard Index. We extracted the variables of importance for the random forest classifiers. The best performing approaches to classification for the major communication types were: logistic regression for medical communications (AUC: 0.899); basic (rule-based) for informational communications (AUC: 0.842); and random forests for social communications and logistical communications (AUCs: 0.875 and 0.925, respectively). The best performing approach for individual communication subtypes was random forests for Logistical-Contact Information (AUC: 0.963). The Jaccard Indices by approach were: basic classifier, Jaccard Index: 0.674; Naïve Bayes, Jaccard Index: 0.799; random forests, Jaccard Index: 0.859; and logistic regression, Jaccard
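
    The per-message Jaccard index used for multi-label evaluation is the size of the intersection of the predicted and true label sets divided by the size of their union, averaged over messages. A small sketch with invented label sets:

```python
# Mean per-message Jaccard index: |pred ∩ true| / |pred ∪ true|,
# averaged over messages. Label sets below are invented for illustration.
def mean_jaccard(true_sets, pred_sets):
    scores = []
    for t, p in zip(true_sets, pred_sets):
        union = t | p
        scores.append(len(t & p) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

true_sets = [{"medical", "logistical"}, {"informational"}, {"social"}]
pred_sets = [{"medical"}, {"informational"}, {"social", "other"}]
print(mean_jaccard(true_sets, pred_sets))  # (0.5 + 1.0 + 0.5) / 3 ≈ 0.667
```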

  7. Ensemble Classifiers for Predicting HIV-1 Resistance from Three Rule-Based Genotypic Resistance Interpretation Systems.

    Science.gov (United States)

    Raposo, Letícia M; Nobre, Flavio F

    2017-08-30

    Resistance to antiretrovirals (ARVs) is a major problem faced by HIV-infected individuals. Different rule-based algorithms were developed to infer HIV-1 susceptibility to antiretrovirals from genotypic data. However, there is discordance between them, resulting in difficulties for clinical decisions about which treatment to use. Here, we developed ensemble classifiers integrating three interpretation algorithms: Agence Nationale de Recherche sur le SIDA (ANRS), Rega, and the genotypic resistance interpretation system from the Stanford HIV Drug Resistance Database (HIVdb). Three approaches were applied to develop a classifier with a single resistance profile: stacked generalization, a simple plurality vote scheme, and the selection of the interpretation system with the best performance. The strategies were compared with Friedman's test, and the performance of the classifiers was evaluated using the F-measure, sensitivity and specificity values. We found that the three strategies had similar performances for the selected antiretrovirals. For some cases, the stacking technique with naïve Bayes as the learning algorithm showed a statistically superior F-measure. This study demonstrates that ensemble classifiers can be an alternative tool for clinical decision-making, since they provide a single resistance profile from the most commonly used resistance interpretation systems.
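
    The simple plurality vote scheme can be sketched in a few lines: each interpretation system casts a call and the most frequent call wins. The per-system calls below are invented for illustration:

```python
# Plurality vote over the calls of several rule-based interpretation
# systems. The calls ("R" resistant, "S" susceptible, "I" intermediate)
# and the per-system assignments are invented for illustration.
from collections import Counter

def plurality_vote(calls):
    # Returns the most frequent call; ties go to the first encountered.
    return Counter(calls).most_common(1)[0][0]

print(plurality_vote({"ANRS": "R", "Rega": "R", "HIVdb": "I"}.values()))  # R
```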

  8. Detecting and classifying method based on similarity matching of Android malware behavior with profile.

    Science.gov (United States)

    Jang, Jae-Wook; Yun, Jaesung; Mohaisen, Aziz; Woo, Jiyoung; Kim, Huy Kang

    2016-01-01

    Mass-market mobile security threats have increased recently due to the growth of mobile technologies and the popularity of mobile devices. Accordingly, techniques have been introduced for identifying, classifying, and defending against mobile threats utilizing static, dynamic, on-device, and off-device techniques. Static techniques are easy to evade, while dynamic techniques are expensive. On-device techniques are vulnerable to evasion, while off-device techniques need to be always online. To address some of those shortcomings, we introduce Andro-profiler, a hybrid behavior-based analysis and classification system for mobile malware. Andro-profiler's main goals are efficiency, scalability, and accuracy. To that end, Andro-profiler classifies malware by exploiting behavior profiling extracted from the integrated system logs, including system calls. Andro-profiler executes a malicious application on an emulator in order to generate the integrated system logs, and creates human-readable behavior profiles by analyzing the integrated system logs. By comparing the behavior profile of a malicious application with the representative behavior profile of each malware family using a weighted similarity matching technique, Andro-profiler detects and classifies it into malware families. The experiment results demonstrate that Andro-profiler is scalable, performs well in detecting and classifying malware with accuracy greater than 98%, outperforms the existing state-of-the-art work, and is capable of identifying 0-day mobile malware samples.

  9. Action Recognition Using 3D Histograms of Texture and A Multi-Class Boosting Classifier.

    Science.gov (United States)

    Zhang, Baochang; Yang, Yun; Chen, Chen; Yang, Linlin; Han, Jungong; Shao, Ling

    2017-10-01

    Human action recognition is an important yet challenging task. This paper presents a low-cost descriptor called 3D histograms of texture (3DHoTs) to extract discriminant features from a sequence of depth maps. 3DHoTs are derived from projecting depth frames onto three orthogonal Cartesian planes, i.e., the frontal, side, and top planes, and thus compactly characterize the salient information of a specific action, on which texture features are calculated to represent the action. Besides this fast feature descriptor, a new multi-class boosting classifier (MBC) is also proposed to efficiently exploit different kinds of features in a unified framework for action classification. Compared with the existing boosting frameworks, we add a new multi-class constraint into the objective function, which helps to maintain a better margin distribution by maximizing the mean of the margin while still minimizing the variance of the margin. Experiments on the MSRAction3D, MSRGesture3D, MSRActivity3D, and UTD-MHAD data sets demonstrate that the proposed system combining 3DHoTs and MBC is superior to the state of the art.
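
    The margin statistics that the MBC objective trades off can be computed directly: for each sample, the margin is the score of the true class minus the highest score among the other classes, and the objective favors a large margin mean with a small margin variance. A sketch with synthetic scores:

```python
# Multi-class margins: margin = score(true class) - max score(other classes).
# A margin-distribution objective favors a large mean and a small variance
# of these values. Scores below are synthetic.
import numpy as np

def margins(scores, y):
    # scores: (n_samples, n_classes); y: integer labels
    true = scores[np.arange(len(y)), y]
    masked = scores.copy()
    masked[np.arange(len(y)), y] = -np.inf  # exclude the true class
    return true - masked.max(axis=1)

scores = np.array([[2.0, 0.5, -1.0], [0.2, 0.9, 0.1], [1.1, 1.0, 0.3]])
y = np.array([0, 1, 0])
m = margins(scores, y)
print("mean margin:", m.mean(), "margin variance:", m.var())
```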

  10. Measuring the impact of marginal tax rate reform on the revenue base of South Africa using a microsimulation tax model

    Directory of Open Access Journals (Sweden)

    Yolande Jordaan

    2015-08-01

    Full Text Available This paper is primarily concerned with the revenue and tax efficiency effects of adjustments to marginal tax rates on individual income as an instrument of possible tax reform. The hypothesis is that changes to marginal rates affect not only the revenue base, but also tax efficiency and the optimum level of taxes that supports economic growth. Using an optimal revenue-maximising rate (based on Laffer analysis), the elasticity of taxable income is derived with respect to marginal tax rates for each taxable-income category. These elasticities are then used to quantify the impact of changes in marginal rates on the revenue base and tax efficiency using a microsimulation (MS) tax model. In this first paper on the research results, much attention is paid to the structure of the model and the way in which the database has been compiled. The model allows for the dissemination of individual taxpayers by income groups, gender, educational level, age group, etc. Simulations include a scenario with higher marginal rates which is also more progressive (as in the 1998/1999 fiscal year), in which case tax revenue increases but the increase is overshadowed by a more than proportional decrease in tax efficiency as measured by its deadweight loss. On the other hand, a lowering of marginal rates (to bring South Africa's marginal rates more in line with those of its peers) improves tax efficiency but also results in a substantial revenue loss. The estimated optimal individual tax to gross domestic product (GDP) ratio in order to maximise economic growth (6.7 per cent) shows a strong response to changes in marginal rates, and the results from this research indicate that a lowering of marginal rates would also move the actual ratio closer to its optimum level. Thus, the trade-off between revenue collected and tax efficiency should be carefully monitored when personal income tax reform is being considered.

  11. Hyperspectral imaging based on compressive sensing to determine cancer margins in human pancreatic tissue ex vivo

    Science.gov (United States)

    Peller, Joseph; Thompson, Kyle J.; Siddiqui, Imran; Martinie, John; Iannitti, David A.; Trammell, Susan R.

    2017-02-01

    Pancreatic cancer is the fourth leading cause of cancer death in the US. Currently, surgery is the only treatment that offers a chance of cure; however, accurately identifying tumor margins in real time is difficult. Research has demonstrated that optical spectroscopy can be used to distinguish between healthy and diseased tissue. The design of a single-pixel imaging system for cancer detection is discussed. The system differentiates between healthy and diseased tissue based on differences in the optical reflectance spectra of these regions. In this study, pancreatic tissue samples from 6 patients undergoing Whipple procedures were imaged with the system (the total number of tissue samples imaged was N=11). Regions of healthy and unhealthy tissue were determined based on spectral angle mapper (SAM) analysis of these spectral images. Hyperspectral imaging results were then compared to white light imaging and histological analysis. Cancerous regions were clearly visible in the hyperspectral images. Margins determined via spectral imaging were in good agreement with margins identified by histology, indicating that the hyperspectral imaging system can differentiate between healthy and diseased tissue. The system was able to detect cancerous regions with a sensitivity of 74.50±5.89% and a specificity of 75.53±10.81%. Possible applications of this imaging system include determination of tumor margins during surgery/biopsy and assistance with cancer diagnosis and staging.
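
    Spectral angle mapper (SAM) classification assigns each pixel spectrum to the reference spectrum with which it makes the smallest angle, the arccosine of the normalized dot product. A minimal sketch with synthetic spectra, not the study's calibrated data:

```python
# Spectral angle mapper (SAM): classify a pixel spectrum by the reference
# spectrum with the smallest angle arccos(<s, r> / (|s||r|)). All spectra
# below are synthetic placeholders.
import numpy as np

def spectral_angle(s, r):
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

refs = {"healthy": np.array([0.8, 0.6, 0.4, 0.3]),
        "tumor":   np.array([0.3, 0.5, 0.7, 0.8])}
pixel = np.array([0.35, 0.5, 0.65, 0.75])
print(min(refs, key=lambda k: spectral_angle(pixel, refs[k])))  # tumor
```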

  12. MARGINS: Toward a novel science plan

    Science.gov (United States)

    Mutter, John C.

    A science plan to study continental margins has been in the works for the past 3 years, with almost 200 Earth scientists from a wide variety of disciplines gathering at meetings and workshops. Most geological hazards and resources are found at continental margins, yet our understanding of the processes that shape the margins is meager. In formulating this MARGINS research initiative, fundamental issues concerning our understanding of basic Earth-forming processes have arisen. It is clear that a business-as-usual approach will not solve the class of problems defined by the MARGINS program; the solutions demand approaches different from those used in the past. In many cases, a different class of experiment will be required, one that is well beyond the capability of individual principal investigators to undertake on their own. In most cases, broadly based interdisciplinary studies will be needed.

  13. Classifying Microorganisms

    DEFF Research Database (Denmark)

    Sommerlund, Julie

    2006-01-01

    This paper describes the coexistence of two systems for classifying organisms and species: a dominant genetic system and an older naturalist system. The former classifies species and traces their evolution on the basis of genetic characteristics, while the latter employs physiological characteristics. The coexistence of the classification systems does not lead to a conflict between them. Rather, the systems seem to co-exist in different configurations, through which they are complementary, contradictory and inclusive in different situations, sometimes simultaneously. The systems come

  14. Communication Behaviour-Based Big Data Application to Classify and Detect HTTP Automated Software

    Directory of Open Access Journals (Sweden)

    Manh Cong Tran

    2016-01-01

    Full Text Available HTTP is recognized as the most widely used protocol on the Internet as applications are transferred more and more by developers onto the web. Due to increasingly complex computer systems, a diversity of HTTP automated software (autoware) thrives. Unfortunately, besides normal autoware, HTTP malware and greyware are also spreading rapidly in the web environment. Consequently, network communication is not rigorously controlled by users' intentions alone. This raises the demand for analyzing HTTP autoware communication behaviour to detect and classify malicious and normal activities via HTTP traffic. Hence, in this paper, based on studies and analysis of autoware communication behaviour through access graphs, a new method to detect and classify HTTP autoware communication at the network level is presented. The proposed system combines MapReduce on Hadoop and the MarkLogic NoSQL database along with XQuery to deal with the huge HTTP traffic generated each day in a large network. The method is examined with real outbound HTTP traffic data collected through a proxy server of a private network. Experimental results for the proposed method are promising, since 95.1% of suspicious autoware were classified and detected. This finding may assist network and system administrators in inspecting early the internal threats caused by HTTP autoware.

  15. RRHGE: A Novel Approach to Classify the Estrogen Receptor Based Breast Cancer Subtypes

    Directory of Open Access Journals (Sweden)

    Ashish Saini

    2014-01-01

    Full Text Available Background. Breast cancer is the most common type of cancer among females, with a high mortality rate. It is essential to classify estrogen receptor based breast cancer subtypes into the correct subclasses, so that the right treatments can be applied to lower the mortality rate. Using gene signatures derived from gene interaction networks to classify breast cancers has proven to be more reproducible and can achieve higher classification performance. However, the interactions in a gene interaction network usually contain many false-positive interactions that do not have any biological meaning. Therefore, it is a challenge to incorporate the reliability assessment of interactions when deriving gene signatures from gene interaction networks. How to effectively extract gene signatures from available resources is critical to the success of cancer classification. Methods. We propose a novel method to measure and extract the reliable (biologically true or valid) interactions from gene interaction networks and incorporate the extracted reliable gene interactions into our proposed RRHGE algorithm to identify significant gene signatures from microarray gene expression data for classifying ER+ and ER− breast cancer samples. Results. The evaluation on real breast cancer samples showed that our RRHGE algorithm achieved higher classification accuracy than the existing approaches.

  16. A Novel Approach for Multi Class Fault Diagnosis in Induction Machine Based on Statistical Time Features and Random Forest Classifier

    Science.gov (United States)

    Sonje, M. Deepak; Kundu, P.; Chowdhury, A.

    2017-08-01

    Fault diagnosis and detection is an important area in the health monitoring of electrical machines. This paper applies a recently developed machine learning classifier to multi-class fault diagnosis in an induction machine. The classification is based on the random forest (RF) algorithm. Initially, stator currents are acquired from the induction machine under various conditions. After preprocessing the currents, fourteen statistical time features are estimated for each phase of the current. These parameters are considered as inputs to the classifier. The main scope of the paper is to evaluate the effectiveness of the RF classifier for individual and mixed fault diagnosis in an induction machine. The stator, rotor and mixed faults (stator and rotor faults) are classified using the proposed classifier. The obtained performance measures are compared with those of a multilayer perceptron neural network (MLPNN) classifier. The results show much better performance measures and higher accuracy than the MLPNN classifier. To demonstrate the proposed fault diagnosis algorithm, experimentally obtained results are used to build the classifier, making it more practical.
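
    A condensed sketch of the pipeline described above: statistical time features computed per current window feed a random forest. Only a handful of the fourteen features are shown, and the signals and labels are synthetic stand-ins for real stator currents:

```python
# Statistical time features per signal window feeding a random forest.
# The feature list is a subset of the paper's fourteen; signals/labels
# are synthetic (clean vs noisy sine waves standing in for fault states).
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.ensemble import RandomForestClassifier

def time_features(x):
    rms = np.sqrt(np.mean(x**2))
    return [x.mean(), x.std(), skew(x), kurtosis(x), rms,
            np.abs(x).max() / rms]  # last entry: crest factor

rng = np.random.default_rng(0)
X, y = [], []
for label, noise in [(0, 0.1), (1, 0.4)]:   # "healthy" vs "faulty"
    for _ in range(50):
        t = np.linspace(0, 1, 1000)
        x = np.sin(2 * np.pi * 50 * t) + noise * rng.standard_normal(1000)
        X.append(time_features(x)); y.append(label)
rf = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", rf.score(X, y))
```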

  17. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  18. Deconvolution When Classifying Noisy Data Involving Transformations.

    Science.gov (United States)

    Carroll, Raymond; Delaigle, Aurore; Hall, Peter

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  19. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond; Delaigle, Aurore; Hall, Peter

    2012-01-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  20. Classifying adolescent attention-deficit/hyperactivity disorder (ADHD) based on functional and structural imaging.

    Science.gov (United States)

    Iannaccone, Reto; Hauser, Tobias U; Ball, Juliane; Brandeis, Daniel; Walitza, Susanne; Brem, Silvia

    2015-10-01

    Attention-deficit/hyperactivity disorder (ADHD) is a common disabling psychiatric disorder associated with consistent deficits in error processing, inhibition and regionally decreased grey matter volumes. The diagnosis is based on clinical presentation, interviews and questionnaires, which are to some degree subjective and would benefit from verification through biomarkers. Here, pattern recognition of multiple discriminative functional and structural brain patterns was applied to classify adolescents with ADHD and controls. Functional activation features in a Flanker/NoGo task probing error processing and inhibition along with structural magnetic resonance imaging data served to predict group membership using support vector machines (SVMs). The SVM pattern recognition algorithm correctly classified 77.78% of the subjects with a sensitivity and specificity of 77.78% based on error processing. Predictive regions for controls were mainly detected in core areas for error processing and attention such as the medial and dorsolateral frontal areas reflecting deficient processing in ADHD (Hart et al., in Hum Brain Mapp 35:3083-3094, 2014), and overlapped with decreased activations in patients in conventional group comparisons. Regions more predictive for ADHD patients were identified in the posterior cingulate, temporal and occipital cortex. Interestingly despite pronounced univariate group differences in inhibition-related activation and grey matter volumes the corresponding classifiers failed or only yielded a poor discrimination. The present study corroborates the potential of task-related brain activation for classification shown in previous studies. It remains to be clarified whether error processing, which performed best here, also contributes to the discrimination of useful dimensions and subtypes, different psychiatric disorders, and prediction of treatment success across studies and sites.

  1. Feature extraction for dynamic integration of classifiers

    NARCIS (Netherlands)

    Pechenizkiy, M.; Tsymbal, A.; Puuronen, S.; Patterson, D.W.

    2007-01-01

    Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. In this paper, we present an algorithm for the dynamic integration of classifiers in the space of extracted features (FEDIC). It is based on the technique

  2. Heterogeneity wavelet kinetics from DCE-MRI for classifying gene expression based breast cancer recurrence risk.

    Science.gov (United States)

    Mahrooghy, Majid; Ashraf, Ahmed B; Daye, Dania; Mies, Carolyn; Feldman, Michael; Rosen, Mark; Kontos, Despina

    2013-01-01

    Breast tumors are heterogeneous lesions. Intra-tumor heterogeneity presents a major challenge for cancer diagnosis and treatment. Few studies have worked on capturing tumor heterogeneity from imaging. Most studies to date consider aggregate measures for tumor characterization. In this work we capture tumor heterogeneity by partitioning tumor pixels into subregions and extracting heterogeneity wavelet kinetic (HetWave) features from breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to obtain the spatiotemporal patterns of the wavelet coefficients and contrast agent uptake from each partition. Using a genetic algorithm for feature selection and a logistic regression classifier with leave-one-out cross-validation, we tested our proposed HetWave features for the task of classifying breast cancer recurrence risk. The classifier based on our features gave an ROC AUC of 0.78, outperforming previously proposed kinetic, texture, and spatial enhancement variance features, which give AUCs of 0.69, 0.64, and 0.65, respectively.
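
    The evaluation protocol, logistic regression scored by ROC AUC under leave-one-out cross-validation, can be sketched with scikit-learn; the features here are synthetic stand-ins for the HetWave features:

```python
# Logistic regression with leave-one-out cross-validation, scored by the
# ROC AUC of the held-out predictions. Synthetic features/labels stand in
# for the HetWave data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

X, y = make_classification(n_samples=60, n_features=8, random_state=0)
probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOOCV AUC:", roc_auc_score(y, probs))
```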

  3. Influence of incorrect application of a water-based adhesive system on the marginal adaptation of Class V restorations.

    Science.gov (United States)

    Peschke, A; Blunck, U; Roulet, J F

    2000-10-01

    To determine the influence of incorrectly performed steps during the application of the water-based adhesive system OptiBond FL on the marginal adaptation of Class V composite restorations. In 96 extracted human teeth, Class V cavities were prepared. Half of the margin length was situated in dentin. The teeth were randomly divided into 12 groups. The cavities were filled with Prodigy resin-based composite in combination with OptiBond FL according to the manufacturer's instructions (Group O) and including several incorrect application steps: Group A: prolonged etching (60 s); Group B: no etching of dentin; Group C: excessive drying after etching; Group D: short rewetting after excessive drying; Group E: air drying and rewetting; Group F: blot drying; Group G: saliva contamination; Group H: application of primer and immediate drying; Group I: application of only primer; Group J: application of only adhesive; Group K: no light curing of the adhesive before the application of composite. After thermocycling, replicas were taken and the margins were quantitatively analyzed in the SEM. Statistical analysis of the results was performed using non-parametric procedures. With the exception of the "rewetting" groups (D and E) and the group with saliva contamination (G), all other application procedures showed a significantly higher number of marginal openings in dentin compared to the control group (O). Margin quality in enamel was only affected when the primer was not applied.

  4. Naive Bayes as opinion classifier to evaluate students satisfaction based on student sentiment in Twitter Social Media

    Science.gov (United States)

    Candra Permana, Fahmi; Rosmansyah, Yusep; Setiawan Abdullah, Atje

    2017-10-01

    Students' activity on social media can provide implicit knowledge and new perspectives for an educational system. Sentiment analysis is a part of text mining that can help to analyze and classify opinion data. This research uses text mining and the naive Bayes method as an opinion classifier, to be used as an alternative method in the process of evaluating student satisfaction at an educational institution. Based on test results, this system can determine the opinion classification in Bahasa Indonesia using naive Bayes as the opinion classifier with an accuracy of 84%; comparing the existing system with the proposed system for evaluating student satisfaction with the learning process, there is only a difference of 16.49%.

  5. Difference in the Set-up Margin between 2D Conventional and 3D CT Based Planning in Patients with Early Breast Cancer

    International Nuclear Information System (INIS)

    Jo, Sun Mi; Chun, Mi Sun; Kim, Mi Hwa; Oh, Young Taek; Noh, O Kyu; Kang, Seung Hee

    2010-01-01

    Simulation using computed tomography (CT) is now widely available for radiation treatment planning for breast cancer. It is an important tool to help define the tumor target and normal tissue based on the anatomical features of an individual patient. In Korea, most patients have small-sized breasts, and the purpose of this study was to review the margin of the treatment field between conventional two-dimensional (2D) planning and CT-based three-dimensional (3D) planning in patients with small breasts. Twenty-five consecutive patients with early breast cancer undergoing breast conservation therapy were selected. All patients underwent 3D CT-based planning with a conventional breast tangential field design. In 2D planning, the treatment field margins were determined by palpation of the breast parenchyma (in general, superior: base of the clavicle; medial: midline; lateral: mid-axillary line; inferior: 2 cm below the inframammary fold). In 3D planning, the clinical target volume (CTV) was defined to comprise all glandular breast tissue, and the PTV was obtained by adding a 3D margin of 1 cm around the CTV except in the skin direction. The differences in treatment field margin and equivalent field size between 2D and 3D planning were evaluated. The association between radiation field margins and factors such as body mass index, menopause status, and bra size was determined. Lung volume and heart volume were examined on the basis of the prescribed breast radiation dose and the 3D dose distribution. The margins of the treatment field were smaller in the 3D planning except for two patients. The superior margin was especially variable (average, 2.5 cm; range, -2.5 to 4.5 cm; SD, 1.85). The margin of these targets did not vary equally across BMI class, menopause status, or bra size. The average irradiated lung volume was significantly lower for 3D planning. The average irradiated heart volume did not decrease significantly. The use of 3D CT based planning reduced the

  6. Difference in the Set-up Margin between 2D Conventional and 3D CT Based Planning in Patients with Early Breast Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Sun Mi; Chun, Mi Sun; Kim, Mi Hwa; Oh, Young Taek; Noh, O Kyu [Ajou University School of Medicine, Seoul (Korea, Republic of); Kang, Seung Hee [Inje University, Ilsan Paik Hospital, Ilsan (Korea, Republic of)

    2010-11-15

    Simulation using computed tomography (CT) is now widely available for radiation treatment planning for breast cancer. It is an important tool to help define the tumor target and normal tissue based on the anatomical features of an individual patient. In Korea, most patients have small-sized breasts, and the purpose of this study was to review the margin of the treatment field between conventional two-dimensional (2D) planning and CT-based three-dimensional (3D) planning in patients with small breasts. Twenty-five consecutive patients with early breast cancer undergoing breast conservation therapy were selected. All patients underwent 3D CT-based planning with a conventional breast tangential field design. In 2D planning, the treatment field margins were determined by palpation of the breast parenchyma (in general, superior: base of the clavicle; medial: midline; lateral: mid-axillary line; inferior: 2 cm below the inframammary fold). In 3D planning, the clinical target volume (CTV) was defined to comprise all glandular breast tissue, and the PTV was obtained by adding a 3D margin of 1 cm around the CTV except in the skin direction. The differences in treatment field margin and equivalent field size between 2D and 3D planning were evaluated. The association between radiation field margins and factors such as body mass index, menopause status, and bra size was determined. Lung volume and heart volume were examined on the basis of the prescribed breast radiation dose and the 3D dose distribution. The margins of the treatment field were smaller in the 3D planning except for two patients. The superior margin was especially variable (average, 2.5 cm; range, -2.5 to 4.5 cm; SD, 1.85). The margin of these targets did not vary equally across BMI class, menopause status, or bra size. The average irradiated lung volume was significantly lower for 3D planning. The average irradiated heart volume did not decrease significantly. The use of 3D CT based planning reduced the

  7. Supervised linear dimensionality reduction with robust margins for object recognition

    Science.gov (United States)

    Dornaika, F.; Assoum, A.

    2013-01-01

    Linear Dimensionality Reduction (LDR) techniques have become increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins in order to achieve good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit is crucial for obtaining robust performance in the presence of outliers.
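
    The median-based margins can be sketched as follows; this is an interpretation of the Median hit/Median miss idea (median within-class and median between-class distance per sample), with the embedding optimization itself omitted.

      # Robust per-sample margins: median miss minus median hit.
      import numpy as np
      from scipy.spatial.distance import cdist

      def robust_margins(X, y):
          D = cdist(X, X)
          n = len(y)
          margins = np.empty(n)
          for i in range(n):
              others = np.arange(n) != i
              hit = D[i, others & (y == y[i])]    # distances to same-class samples
              miss = D[i, others & (y != y[i])]   # distances to other-class samples
              margins[i] = np.median(miss) - np.median(hit)
          return margins

      X = np.random.RandomState(0).randn(40, 5)
      y = np.repeat([0, 1], 20)
      print(robust_margins(X, y)[:5])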

  8. Predicting protein subcellular locations using hierarchical ensemble of Bayesian classifiers based on Markov chains

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2006-06-01

    Full Text Available Abstract Background The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location of a given protein when only its amino acid sequence is known. Although many efforts have been made to predict subcellular location from sequence information only, further research is needed to improve prediction accuracy. Results A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets, among them a Gram-negative bacteria dataset, a dataset for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve prediction accuracy for classes with few sequences in training and is therefore useful for datasets with an imbalanced distribution of classes. Conclusion This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and competitive with previously reported approaches in terms of prediction accuracy, as empirical results indicate. The code for the software is available upon request.
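
    A minimal sketch of one base classifier of the kind described: a first-order Markov chain per class, with a query sequence assigned to the class of highest log-likelihood. The toy alphabet and sequences are placeholders, and the hierarchical ensemble construction of HensBC itself is not reproduced.

      # Per-class Markov chain models with Laplace smoothing; classify a
      # sequence by maximum log-likelihood over the class-specific chains.
      import numpy as np

      def train_chain(seqs, alphabet, alpha=1.0):
          idx = {c: i for i, c in enumerate(alphabet)}
          counts = np.full((len(alphabet), len(alphabet)), alpha)  # smoothing
          for s in seqs:
              for a, b in zip(s, s[1:]):
                  counts[idx[a], idx[b]] += 1
          return idx, counts / counts.sum(axis=1, keepdims=True)

      def log_likelihood(seq, model):
          idx, P = model
          return sum(np.log(P[idx[a], idx[b]]) for a, b in zip(seq, seq[1:]))

      alphabet = "ACDE"  # toy alphabet; real data would use all 20 amino acids
      models = {c: train_chain(seqs, alphabet)
                for c, seqs in {"cytoplasm": ["ACDE", "AACD"],
                                "membrane": ["EEDC", "EDCE"]}.items()}
      query = "ACDD"
      print(max(models, key=lambda c: log_likelihood(query, models[c])))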

  9. An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Xiong Jintao

    2016-01-01

    Full Text Available The fast compressive tracking (FCT) algorithm is a simple and efficient algorithm proposed in recent years, but it has difficulty dealing with factors such as occlusion, appearance changes, and pose variation. There are two reasons: first, although the naive Bayes classifier is fast to train, it is not robust to noise; second, the parameters must be tuned to each particular environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. First, we combine ideas from adaptive compressive sensing theory with weighted random projections to exploit both local and discriminative information of the object. Second, we use an online random forest classifier for online tracking, which is more robust to noise and computationally efficient. The experimental results show that the proposed algorithm performs better under occlusion, appearance changes, and pose variation than the fast compressive tracking algorithm.

  10. Marginal leakage of two newer glass-ionomer-based sealant materials assessed using micro-CT.

    NARCIS (Netherlands)

    Chen, X.; Cuijpers, V.M.J.I.; Fan, M.; Frencken, J.E.F.M.

    2010-01-01

    OBJECTIVES: To test newer glass-ionomer-based materials as sealant materials. One glass-ionomer sealant was light-cured to obtain an early setting reaction. The null-hypothesis tested was: there is no difference in marginal leakage of sealants produced with high-viscosity glass-ionomer, with and

  11. Effect of Margin Designs on the Marginal Adaptation of Zirconia Copings.

    Science.gov (United States)

    Habib, Syed Rashid; Al Ajmi, Mohammed Ginan; Al Dhafyan, Mohammed; Jomah, Abdulrehman; Abualsaud, Haytham; Almashali, Mazen

    2017-09-01

    The aim of this in vitro study was to investigate the effect of Shoulder versus Chamfer margin design on the marginal adaptation of zirconia (Zr) copings. 40 extracted molar teeth were mounted in resin and prepared for zirconia crowns with two margin preparation designs (20 Shoulder and 20 Chamfer). The copings were manufactured by Cercon® (DeguDent GmbH, Germany) using the CAD/CAM system for each tooth. They were tried on each tooth, cemented, thermocycled, re-embedded in resin and subsequently cross-sectioned centrally into two equal mesial and distal halves. They were examined under an electron microscope at 200× magnification and measurements were recorded at 5 predetermined points in micrometers (µm). The overall mean marginal gap for the two groups was found to be 206.98±42.78 µm, with the Shoulder margin design (marginal gap 199.50±40.72 µm) having better adaptation than the Chamfer design (marginal gap 214.46±44.85 µm). The independent-samples t-test showed a statistically non-significant difference (p=.113) between the mean marginal gaps of the Shoulder and Chamfer margin designs. The Chamfer margin design thus appeared to offer the same adaptation as the Shoulder margin design.

  12. Deep Classifiers-Based License Plate Detection, Localization and Recognition on GPU-Powered Mobile Platform

    Directory of Open Access Journals (Sweden)

    Syed Tahir Hussain Rizvi

    2017-10-01

    Full Text Available The realization of a deep neural architecture on a mobile platform is challenging, but can open up a number of possibilities for visual analysis applications. A neural network can be realized on a mobile platform by exploiting the computational power of the embedded GPU and simplifying the flow of a neural architecture trained on a desktop workstation or a GPU server. This paper presents an embedded platform-based Italian license plate detection and recognition system using deep neural classifiers. In this work, trained parameters of a highly precise automatic license plate recognition (ALPR) system are imported and used to replicate the same neural classifiers on a Nvidia Shield K1 tablet. A CUDA-based framework is used to realize these neural networks. The flow of the trained architecture is simplified to perform license plate recognition in real time. Results show that the tasks of plate and character detection and localization can be performed in real time on a mobile platform by simplifying the flow of the trained architecture. However, the accuracy of the simplified architecture decreases accordingly.

  13. Using Neural Networks to Classify Digitized Images of Galaxies

    Science.gov (United States)

    Goderya, S. N.; McGuire, P. C.

    2000-12-01

    Automated classification of galaxies into Hubble types is of paramount importance for studying the large-scale structure of the Universe, particularly as survey projects like the Sloan Digital Sky Survey complete their data acquisition of one million galaxies. At present it is not possible to find robust and efficient artificial intelligence based galaxy classifiers. In this study we summarize progress made in the development of automated galaxy classifiers using neural networks as machine learning tools. We explore the Bayesian linear algorithm, the higher order probabilistic network, the multilayer perceptron neural network, and the Support Vector Machine classifier. The performance of any machine classifier is dependent on the quality of the parameters that characterize the different groups of galaxies. Our effort is to develop geometric and invariant moment based parameters as input to the machine classifiers instead of the raw pixel data. Such an approach reduces the dimensionality of the classifier considerably, removes the effects of scaling and rotation, and makes it easier to solve for the unknown parameters in the galaxy classifier. To judge the quality of training and classification we develop the concept of Matthews coefficients for the galaxy classification community. Matthews coefficients are single numbers that quantify classifier performance even with unequal prior probabilities of the classes.
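
    The Matthews coefficient referred to above is, in standard usage, the Matthews correlation coefficient (MCC): a single number in [-1, 1] that remains informative under unequal class priors. A minimal computation from confusion-matrix counts:

      # MCC from true/false positives and negatives.
      import numpy as np

      def matthews_cc(tp, tn, fp, fn):
          num = tp * tn - fp * fn
          den = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
          return num / den if den else 0.0

      print(matthews_cc(tp=90, tn=85, fp=15, fn=10))  # ~0.75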

  14. Realistic respiratory motion margins for external beam partial breast irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Conroy, Leigh; Quirk, Sarah [Department of Medical Physics, Tom Baker Cancer Centre, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4 (Canada); Smith, Wendy L., E-mail: wendy.smith@albertahealthservices.ca [Department of Medical Physics, Tom Baker Cancer Centre, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4 (Canada); Department of Oncology, University of Calgary, Calgary, Alberta T2N 1N4 (Canada)

    2015-09-15

    Purpose: Respiratory margins for partial breast irradiation (PBI) have been largely based on geometric observations, which may overestimate the margin required for dosimetric coverage. In this study, dosimetric population-based respiratory margins and margin formulas for external beam partial breast irradiation are determined. Methods: Volunteer respiratory data and anterior–posterior (AP) dose profiles from 28 clinical 3D conformal radiotherapy (3DCRT) PBI treatment plans were used to determine population-based respiratory margins. The peak-to-peak amplitudes (A) of realistic respiratory motion data from healthy volunteers were scaled from A = 1 to 10 mm to create respiratory motion probability density functions. Dose profiles were convolved with the respiratory probability density functions to produce blurred dose profiles accounting for respiratory motion. The required margins were found by measuring the distance between the simulated treatment and original dose profiles at the 95% isodose level. Results: The symmetric dosimetric respiratory margins to cover 90%, 95%, and 100% of the simulated treatment population were 1.5, 2, and 4 mm, respectively. With the patient set up at end exhale, the required margins were larger in the anterior direction than the posterior. For respiratory amplitudes less than 5 mm, the population-based margins can be expressed as a fraction of the extent of respiratory motion. The derived formulas in the anterior/posterior directions for 90%, 95%, and 100% simulated population coverage were 0.45A/0.25A, 0.50A/0.30A, and 0.70A/0.40A. The differences in formulas for different population coverage criteria demonstrate that respiratory trace shape and baseline drift characteristics affect individual respiratory margins even for the same average peak-to-peak amplitude. Conclusions: A methodology for determining population-based respiratory margins using real respiratory motion patterns and dose profiles in the AP direction was
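
    As a worked example of the quoted formulas (illustrative numbers, not from the study): for a peak-to-peak amplitude A below 5 mm, the anterior/posterior margins follow directly from the coverage fractions.

      # Anterior/posterior margins as fractions of the respiratory amplitude A.
      coverage_factors = {          # (anterior, posterior) fractions of A
          "90%": (0.45, 0.25),
          "95%": (0.50, 0.30),
          "100%": (0.70, 0.40),
      }

      A = 4.0  # mm, example amplitude
      for coverage, (ant, post) in coverage_factors.items():
          print(f"{coverage} coverage: anterior {ant*A:.1f} mm, posterior {post*A:.1f} mm")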

  15. Realistic respiratory motion margins for external beam partial breast irradiation

    International Nuclear Information System (INIS)

    Conroy, Leigh; Quirk, Sarah; Smith, Wendy L.

    2015-01-01

    Purpose: Respiratory margins for partial breast irradiation (PBI) have been largely based on geometric observations, which may overestimate the margin required for dosimetric coverage. In this study, dosimetric population-based respiratory margins and margin formulas for external beam partial breast irradiation are determined. Methods: Volunteer respiratory data and anterior–posterior (AP) dose profiles from clinical treatment plans of 28 3D conformal radiotherapy (3DCRT) PBI patient plans were used to determine population-based respiratory margins. The peak-to-peak amplitudes (A) of realistic respiratory motion data from healthy volunteers were scaled from A = 1 to 10 mm to create respiratory motion probability density functions. Dose profiles were convolved with the respiratory probability density functions to produce blurred dose profiles accounting for respiratory motion. The required margins were found by measuring the distance between the simulated treatment and original dose profiles at the 95% isodose level. Results: The symmetric dosimetric respiratory margins to cover 90%, 95%, and 100% of the simulated treatment population were 1.5, 2, and 4 mm, respectively. With patient set up at end exhale, the required margins were larger in the anterior direction than the posterior. For respiratory amplitudes less than 5 mm, the population-based margins can be expressed as a fraction of the extent of respiratory motion. The derived formulas in the anterior/posterior directions for 90%, 95%, and 100% simulated population coverage were 0.45A/0.25A, 0.50A/0.30A, and 0.70A/0.40A. The differences in formulas for different population coverage criteria demonstrate that respiratory trace shape and baseline drift characteristics affect individual respiratory margins even for the same average peak-to-peak amplitude. Conclusions: A methodology for determining population-based respiratory margins using real respiratory motion patterns and dose profiles in the AP direction was

  16. A novel approach for fire recognition using hybrid features and manifold learning-based classifier

    Science.gov (United States)

    Zhu, Rong; Hu, Xueying; Tang, Jiajun; Hu, Sheng

    2018-03-01

    Although image/video based fire recognition has received growing attention, an efficient and robust fire detection strategy is rarely explored. In this paper, we propose a novel approach to automatically identify the flame or smoke regions in an image. It is composed of three stages: (1) block processing is applied to divide an image into several non-overlapping image blocks, and these image blocks are identified as suspicious fire regions or not by using two color models and a color histogram-based similarity matching method in the HSV color space; (2) because flame and smoke regions have distinctive visual characteristics, two kinds of image features are extracted for fire recognition, where local features are obtained based on the Scale Invariant Feature Transform (SIFT) descriptor and the Bags of Keypoints (BOK) technique, and texture features are extracted based on the Gray Level Co-occurrence Matrices (GLCM) and Wavelet-based Analysis (WA) methods; and (3) a manifold learning-based classifier is constructed based on two image manifolds, designed via an improved Globular Neighborhood Locally Linear Embedding (GNLLE) algorithm, and the extracted hybrid features are used as input feature vectors to train the classifier, which is used to decide between fire and non-fire images. Experiments and comparative analyses with four approaches are conducted on the collected image sets. The results show that the proposed approach is superior to the others, detecting fire with high recognition accuracy and a low error rate.
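
    A minimal sketch of the GLCM texture-feature step mentioned in stage (2), using scikit-image on a stand-in image block; the SIFT/BOK local features, wavelet analysis, and GNLLE classifier are omitted.

      # Gray-level co-occurrence matrix texture features for one image block.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      rng = np.random.default_rng(0)
      block = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # stand-in block

      glcm = graycomatrix(block, distances=[1], angles=[0, np.pi / 2],
                          levels=256, symmetric=True, normed=True)
      features = [graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "homogeneity", "energy", "correlation")]
      print(features)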

  17. Neural network classifier of attacks in IP telephony

    Science.gov (United States)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and for hardening the network. This analysis is typically based on statistical methods, and this article presents a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms, one of which is a multilayer perceptron neural network. The article describes the inner structure of the neural network and its implementation. The learning set for the neural network is based on real attack data collected from the IP telephony honeypot Dionaea; we prepare the learning set by collecting, cleaning and aggregating this information. After proper training, the neural network is capable of classifying the 6 most commonly used types of VoIP attacks. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of networks, which are logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precaution steps against attacks.
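
    A minimal sketch of such a multilayer-perceptron attack classifier; the feature encoding of honeypot events and the six class labels are random placeholders, since the exact feature set is not given here.

      # MLP classifier over placeholder honeypot-event features, six classes.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)
      X = rng.random((300, 8))            # stand-in features per attack event
      y = rng.integers(0, 6, size=300)    # six attack categories

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
      clf.fit(X, y)
      print(clf.predict(X[:5]))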

  18. Carbon classified?

    DEFF Research Database (Denmark)

    Lippert, Ingmar

    2012-01-01

    Using an actor-network theory (ANT) framework, the aim is to investigate the actors who bring together the elements needed to classify their carbon emission sources and unpack the heterogeneous relations drawn on. Based on an ethnographic study of corporate agents of ecological modernisation over a period of 13 months, this paper provides an exploration of three cases of enacting classification. Drawing on ANT, we problematise the silencing of a range of possible modalities of consumption facts and point to the ontological ethics involved in such performances. In a context of global warming...

  19. Tolerance to missing data using a likelihood ratio based classifier for computer-aided classification of breast cancer

    International Nuclear Information System (INIS)

    Bilska-Wolak, Anna O; Floyd, Carey E Jr

    2004-01-01

    While mammography is a highly sensitive method for detecting breast tumours, its ability to differentiate between malignant and benign lesions is low, which may result in as many as 70% of biopsies being unnecessary. The purpose of this study was to develop a highly specific computer-aided diagnosis algorithm to improve classification of mammographic masses. A classifier based on the likelihood ratio was developed to accommodate cases with missing data. Data for development included 671 biopsy cases (245 malignant) with biopsy-proved outcome. Sixteen features based on the BI-RADS™ lexicon and patient history had been recorded for the cases, with 1.3 ± 1.1 missing feature values per case. Classifier evaluation methods included receiver operating characteristic analysis and leave-one-out bootstrap sampling. The classifier achieved 32% specificity at 100% sensitivity on the 671 cases with 16 features that had missing values. Utilizing just the seven features present for all cases resulted in decreased performance at 100% sensitivity, with an average 19% specificity. No cases and no feature data were omitted during classifier development, showing that it is more beneficial to utilize cases with missing values than to discard incomplete cases that cannot be handled by many algorithms. Classification of mammographic masses was commendable at high sensitivity levels, indicating that benign cases could potentially be spared from biopsy.
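
    The missing-data handling can be sketched as a likelihood-ratio product over observed features only; the per-feature likelihood tables below are invented placeholders, not the BI-RADS statistics used in the study.

      # Likelihood-ratio classifier that skips missing features (None).
      # toy tables: feature -> value -> (P(value|malignant), P(value|benign))
      tables = {
          "margin": {"spiculated": (0.6, 0.1), "circumscribed": (0.1, 0.6)},
          "shape":  {"irregular": (0.5, 0.2), "oval": (0.2, 0.5)},
      }

      def likelihood_ratio(case):
          lr = 1.0
          for feat, value in case.items():
              if value is not None:               # missing features contribute nothing
                  p_mal, p_ben = tables[feat][value]
                  lr *= p_mal / p_ben
          return lr

      case = {"margin": "spiculated", "shape": None}   # shape is missing
      print(likelihood_ratio(case))                    # -> 6.0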

  20. Marginal Shape Deep Learning: Applications to Pediatric Lung Field Segmentation.

    Science.gov (United States)

    Mansoor, Awais; Cerrolaza, Juan J; Perez, Geovanny; Biggs, Elijah; Nino, Gustavo; Linguraru, Marius George

    2017-02-11

    Representation learning through deep learning (DL) architectures has shown tremendous potential for identification, localization, and texture classification in various medical imaging modalities. However, DL applications to the segmentation of objects, especially deformable objects, are rather limited and mostly restricted to pixel classification. In this work, we propose marginal shape deep learning (MaShDL), a framework that extends the application of DL to deformable shape segmentation by using deep classifiers to estimate the shape parameters. MaShDL combines the strength of statistical shape models with the automated feature learning architecture of DL. Unlike the iterative shape parameter estimation approach of classical shape models, which often leads to a local minimum, the proposed framework is robust to local minima and to illumination changes. Furthermore, since the direct application of a DL framework to a multi-parameter estimation problem results in very high complexity, our framework provides an excellent run-time performance solution by independently learning shape parameter classifiers in marginal eigenspaces in decreasing order of variation. We evaluated MaShDL for segmenting the lung field from 314 normal and abnormal pediatric chest radiographs and obtained a mean Dice similarity coefficient of 0.927 using only the four highest modes of variation (compared to 0.888 with the classical ASM (p-value=0.01) using the same configuration). To the best of our knowledge this is the first demonstration of using a DL framework for parametrized shape learning for the delineation of deformable objects.

  1. Consistency Analysis of Nearest Subspace Classifier

    OpenAIRE

    Wang, Yi

    2015-01-01

    The Nearest Subspace classifier (NSS) finds an estimate of the underlying subspace within each class and assigns data points to the class that corresponds to the nearest subspace. This paper mainly studies how well NSS can be generalized to new samples. It is proved that NSS is strongly consistent under certain assumptions. For completeness, NSS is evaluated through experiments on various simulated and real data sets, in comparison with some other linear model based classifiers. It is also ...
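
    A minimal sketch of a nearest subspace classifier under the usual formulation: fit a low-dimensional PCA subspace per class and assign a test point to the class with the smallest reconstruction residual. Dimensions and data are placeholders.

      # Per-class PCA subspaces; classify by smallest reconstruction residual.
      import numpy as np
      from sklearn.decomposition import PCA

      def fit_nss(X, y, dim=2):
          return {c: PCA(n_components=dim).fit(X[y == c]) for c in np.unique(y)}

      def predict_nss(models, X):
          classes = list(models)
          residuals = np.stack(
              [np.linalg.norm(X - m.inverse_transform(m.transform(X)), axis=1)
               for m in models.values()], axis=1)
          return np.array(classes)[residuals.argmin(axis=1)]

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(3, 1, (30, 5))])
      y = np.repeat([0, 1], 30)
      print(predict_nss(fit_nss(X, y), X[:3]))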

  2. Asymptotic performance of regularized quadratic discriminant analysis based classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-12-13

    This paper carries out a large dimensional analysis of the standard regularized quadratic discriminant analysis (QDA) classifier designed on the assumption that data arise from a Gaussian mixture model. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that depends only on the covariances and means associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized QDA and can be used to determine the optimal regularization parameter that minimizes the misclassification error probability. Despite being valid only for Gaussian data, our theoretical findings are shown to yield high accuracy in predicting the performance achieved with real data sets drawn from popular databases, thereby making an interesting connection between theory and practice.

  3. Three data partitioning strategies for building local classifiers (Chapter 14)

    NARCIS (Netherlands)

    Zliobaite, I.; Okun, O.; Valentini, G.; Re, M.

    2011-01-01

    The divide-and-conquer approach has been recognized in multiple classifier systems aiming to utilize the local expertise of individual classifiers. In this study we experimentally investigate three strategies for building local classifiers that are based on different routines of sampling data for training.

  4. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

    We consider the binary classification problem in the imbalanced case where the numbers of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias, and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...

  5. Fuzziness-based active learning framework to enhance hyperspectral image classification performance for discriminative and generative classifiers.

    Directory of Open Access Journals (Sweden)

    Muhammad Ahmad

    Full Text Available Hyperspectral image classification with a limited number of training samples and without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome this problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g., SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we show that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with a small amount of training data, as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which compares favorably with state-of-the-art methods.
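
    A hedged sketch of fuzziness-based sample selection in the spirit of the framework: the membership outputs of a seed classifier give a per-sample fuzziness score, and the fuzziest pool samples are picked for labeling. The model and selection rule are illustrative, not the authors' exact criterion.

      # Select the highest-fuzziness unlabeled samples for the training set.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def fuzziness(memberships):
          """Per-sample fuzziness of membership vectors (0 = crisp, 1 = maximal)."""
          m = np.clip(memberships, 1e-12, 1 - 1e-12)
          return -(m * np.log(m) + (1 - m) * np.log(1 - m)).mean(axis=1) / np.log(2)

      rng = np.random.default_rng(0)
      X_pool = rng.normal(size=(200, 4))
      seed_X, seed_y = X_pool[:20], (X_pool[:20, 0] > 0).astype(int)

      clf = LogisticRegression().fit(seed_X, seed_y)
      scores = fuzziness(clf.predict_proba(X_pool[20:]))
      picked = np.argsort(scores)[-10:]   # the 10 fuzziest candidates
      print(picked)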

  6. Development of the system based code. v. 5. Method of margin exchange. pt. 2. Determination of quality assurance index based on a 'Vector Method'

    International Nuclear Information System (INIS)

    Asayama, Tai

    2003-03-01

    For the commercialization of fast breeder reactors, the 'System Based Code', a completely new scheme of a code on structural integrity, is being developed. One of the distinguishing features of the System Based Code is that it is able to determine a reasonable total margin for a structure or system by allowing exchanges of margin among various technical items. Detailed estimation of the failure probability of a given combination of technical items and its comparison with a target value is one way to achieve this. However, simpler and easier methods that allow margin exchange without detailed calculation of failure probability are desirable in design. The authors have developed a simplified method, the 'design factor method', from this viewpoint. This report describes the 'Vector Method', which has been newly developed. The following points are reported: 1) The Vector Method allows margin-exchange evaluation on an 'equi-quality assurance plane' using vector calculation. Evaluation is easy and sufficient accuracy is achieved. The equi-quality assurance plane is obtained by projection of an 'equi-failure probability surface' in an n-dimensional space, which is calculated beforehand for typical combinations of design variables. 2) The Vector Method is considered to give the 'Quality Assurance Index Method' a probabilistic interpretation. 3) An algebraic method is proposed for the calculation of failure probabilities, which is necessary to obtain an equi-failure probability surface. This method calculates failure probabilities without using numerical methods such as Monte Carlo simulation or numerical integration. Under limited conditions, this method is quite effective compared to numerical methods. 4) An illustration of the margin-exchange evaluation procedure is given. It may be possible to use this method to optimize ISI plans, even if it is not fully implemented in the System Based Code. (author)

  7. PORTRAIT GRAFFITI IN MARGINS OF ANTIQUE LITHUANIAN BOOKS

    Directory of Open Access Journals (Sweden)

    Burba, Domininkas

    2006-12-01

    discharge. The marginal portraits in personal books are more artistic and their composition is more relaxed. Overall, the GDL marginal portraits reveal quite a few similarities to the graffiti (in Italian, scarabocchi) left in documents by the workers of the Naples bank archive. These were examined and classified by the artist and archivist Giuseppe Zevola. According to him, this documentary graffiti was born out of opposition to the grey everyday routine and the experience of the “pleasure of anxiety”.

  8. Neural Network Classifiers for Local Wind Prediction.

    Science.gov (United States)

    Kretzschmar, Ralf; Eckert, Pierre; Cattani, Daniel; Eggimann, Fritz

    2004-05-01

    This paper evaluates the quality of neural network classifiers for wind speed and wind gust prediction with prediction lead times between +1 and +24 h. The predictions were realized based on local time series and model data. The selection of appropriate input features was initiated by time series analysis and completed by empirical comparison of neural network classifiers trained on several choices of input features. The selected input features involved day time, yearday, features from a single wind observation device at the site of interest, and features derived from model data. The quality of the resulting classifiers was benchmarked against persistence for two different sites in Switzerland. The neural network classifiers exhibited superior quality compared with persistence, judged on specific performance measures (hit and false-alarm rates).

  9. A scaling transformation for classifier output based on likelihood ratio: Applications to a CAD workstation for diagnosis of breast cancer

    International Nuclear Information System (INIS)

    Horsch, Karla; Pesce, Lorenzo L.; Giger, Maryellen L.; Metz, Charles E.; Jiang Yulei

    2012-01-01

    Purpose: The authors developed scaling methods that monotonically transform the output of one classifier to the "scale" of another. Such transformations affect the distribution of classifier output while leaving the ROC curve unchanged. In particular, they investigated transformations between radiologists and computer classifiers, with the goal of addressing the problem of comparing and interpreting case-specific values of output from two classifiers. Methods: Using both simulated and radiologists' rating data of breast imaging cases, the authors investigated a likelihood-ratio-scaling transformation, based on "matching" classifier likelihood ratios. For comparison, three other scaling transformations were investigated that were based on matching classifier true positive fraction, false positive fraction, or cumulative distribution function, respectively. The authors explored modifying the computer output to reflect the scale of the radiologist, as well as modifying the radiologist's ratings to reflect the scale of the computer. They also evaluated how dataset size affects the transformations. Results: When the ROC curves of two classifiers differed substantially, the four transformations were found to be quite different. The likelihood-ratio-scaling transformation was found to vary widely from radiologist to radiologist. Similar results were found for the other transformations. Our simulations explored the effect of database size on the accuracy of the estimation of our scaling transformations. Conclusions: The likelihood-ratio-scaling transformation that the authors have developed and evaluated was shown to be capable of transforming computer and radiologist outputs to a common scale reliably, thereby allowing the comparison of the computer and radiologist outputs on the basis of a clinically relevant statistic.

  10. Intelligent Garbage Classifier

    Directory of Open Access Journals (Sweden)

    Ignacio Rodríguez Novelle

    2008-12-01

    Full Text Available IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  11. Limitations of "margin" in qualification tests

    International Nuclear Information System (INIS)

    Clough, R.L.; Gillen, K.T.

    1984-01-01

    We have carried out investigations of polymer radiation degradation behaviors which have brought to light a number of reasons why this concept of margin can break down. First of all, we have found that dose-rate effects vary greatly in magnitude. Thus, based on high dose-rate testing, poor materials with large dose-rate effects may be selected over better materials with small effects. Also, in certain cases, material properties have been found to level out (as with PVC) or reverse trend (as with buna-n) at high doses, so that "margin" may be ineffective, misleading, or counterproductive. For Viton, the material properties were found to change in opposite directions at high and low dose rates, making "margin" inappropriate. The underlying problem with the concept of "margin" is that differences in aging conditions can lead to fundamental differences in degradation mechanisms.

  12. Contemporary approaches to reducing the risks of central counterparties based on the use of marginal contributions

    Directory of Open Access Journals (Sweden)

    Utkin Viktor Sergeyevich

    2012-07-01

    Full Text Available To protect their own interests, central counterparties have developed a number of procedures, including the payment of guarantee margin by trading members as a means of securing their positions. This article discusses a number of approaches that attempt to model the risks of the central counterparty, as well as to calculate the amount of margin and other resources needed in the event of insolvency. These approaches are based on modeling of three main types: (a) statistical modeling; (b) optimization modeling; and (c) option pricing models. The author summarizes the basic provisions of these models.

  13. FEATURE SELECTION METHODS BASED ON MUTUAL INFORMATION FOR CLASSIFYING HETEROGENEOUS FEATURES

    Directory of Open Access Journals (Sweden)

    Ratri Enggar Pawening

    2016-06-01

    Full Text Available Datasets with heterogeneous features can yield inappropriate feature selection results, because it is difficult to evaluate heterogeneous features concurrently. Feature transformation (FT) is another way to handle heterogeneous feature subset selection. The transformation of non-numerical features into numerical features may produce redundancy with the original numerical features. In this paper, we propose a method to select feature subsets based on mutual information (MI) for classifying heterogeneous features. We use unsupervised feature transformation (UFT) and joint mutual information maximisation (JMIM) methods. The UFT method is used to transform non-numerical features into numerical features. The JMIM method is used to select a feature subset with consideration of the class label. The transformed and the original features are combined, a feature subset is determined using the JMIM method, and classification is performed using the support vector machine (SVM) algorithm. The classification accuracy is measured for each number of selected features and compared between the UFT-JMIM and Dummy-JMIM methods. The average classification accuracy over all experiments in this study is about 84.47% for the UFT-JMIM method and about 84.24% for the Dummy-JMIM method. This result shows that the UFT-JMIM method can minimize information loss between transformed and original features and select feature subsets that avoid redundant and irrelevant features.
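
    A hedged sketch of the selection-plus-SVM pipeline; the JMIM criterion (joint mutual information with already-selected features) is simplified here to plain mutual-information ranking, so this is an approximation of the method, not its exact form.

      # Rank features by mutual information with the label, keep top-k, train SVM.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import mutual_info_classif
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=200, n_features=15, n_informative=5,
                                 random_state=0)
      mi = mutual_info_classif(X, y, random_state=0)
      top_k = np.argsort(mi)[-5:]          # keep the 5 highest-MI features

      clf = SVC().fit(X[:, top_k], y)
      print("training accuracy:", clf.score(X[:, top_k], y))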

  14. Seismic Margin Assessment for Research Reactor using Fragility based Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kwag, Shinyoung; Oh, Jinho; Lee, Jong-Min; Ryu, Jeong-Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The research reactor is often subjected to external hazards during its design lifetime. In particular, a seismic event can be a significant threat leading to failure of the structural system of the research reactor, and this failure can extend to direct core damage of the reactor. For this purpose, the fault tree for structural system failure leading to core damage under an earthquake accident was developed. The failure probabilities of basic events were evaluated as fragility curves of log-normal distributions. Finally, the plant-level seismic margin was investigated by fault tree analysis combined with the fragility data, and the critical path was identified. This plant-level probabilistic seismic margin assessment using fragility-based fault tree analysis quantifies the safety of the research reactor against a seismic hazard.
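
    A minimal sketch of a log-normal fragility curve as used for the basic events: the conditional failure probability at peak ground acceleration a is Phi(ln(a/Am)/beta), with median capacity Am and log-standard deviation beta. The numbers are illustrative, not from the study.

      # Log-normal seismic fragility: P(failure | PGA = a).
      import numpy as np
      from scipy.stats import norm

      def fragility(a, Am=0.9, beta=0.4):
          """Failure probability at PGA a, with median capacity Am (in g)."""
          return norm.cdf(np.log(a / Am) / beta)

      for a in (0.3, 0.6, 0.9, 1.2):
          print(f"PGA {a:.1f} g -> failure probability {fragility(a):.3f}")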

  15. Asymmetric rifting, breakup and magmatism across conjugate margin pairs: insights from Newfoundland to Ireland

    Science.gov (United States)

    Peace, Alexander L.; Welford, J. Kim; Foulger, Gillian R.; McCaffrey, Ken J. W.

    2017-04-01

    Continental extension, subsequent rifting and eventual breakup result in the development of passive margins with transitional crust between extended continental crust and newly created oceanic crust. Globally, passive margins are typically classified as either magma-rich or magma-poor. Despite this simple classification, magma-poor margins like the West Orphan Basin, offshore Newfoundland, do exhibit some evidence of localized magmatism, as magmatism to some extent invariably accompanies all continental breakup. For example, on the Newfoundland margin, a small volcanic province has been interpreted near the termination of the Charlie Gibbs Fracture Zone, whereas on the conjugate Irish margin within the Rockall Basin, magmatism appears to be more widespread and has been documented both in the north and in the south. The broader region over which volcanism has been identified on the Irish margin is suggestive of magmatic asymmetry across this conjugate margin pair and this may have direct implications for the mechanisms governing the nature of rifting and breakup. Possible causes of the magmatic asymmetry include asymmetric rifting (simple shear), post-breakup thermal anomalies in the mantle, or pre-existing compositional zones in the crust that predispose one of the margins to more melting than its conjugate. A greater understanding of the mechanisms leading to conjugate margin asymmetry will enhance our fundamental understanding of rifting processes and will also reduce hydrocarbon exploration risk by better characterizing the structural and thermal evolution of hydrocarbon bearing basins on magma-poor margins where evidence of localized magmatism exists. Here, the latest results of a conjugate margin study of the Newfoundland-Ireland pair utilizing seismic interpretation integrated with other geological and geophysical datasets are presented. Our analysis has begun to reveal the nature and timing of rift-related magmatism and the degree to which magmatic asymmetry

  16. The effect of gingival wall location on the marginal seal of class ii restorations prepared with a flowable bulk-fill resin-based composite.

    Science.gov (United States)

    Segal, P; Candotto, V; Ben-Amar, A; Eger, M; Matalon, S; Lauritano, D; Ormianer, Z

    2018-01-01

    SureFil SDR is a flowable resin-based composite that allows single-increment bulk placement. The marginal seal of SureFil SDR at the gingival margins of Class II restorations located apical to the cemento-enamel junction (CEJ) has not been adequately evaluated compared to those located occlusal to the CEJ. Forty Class II cavities were prepared in human molars. The gingival margins of 20 preparations were located 0.5 mm occlusal to the CEJ, and those of the other 20 preparations were located 0.5 mm apical to the CEJ. The cavity surfaces were bonded with XenoV dental adhesive and filled with SDR in one bulk increment up to 4 mm, after which they were covered with CeramX. The teeth were subjected to thermo- and load-cycling, and their gingival margins were exposed to 0.5% basic fuchsin solution. The specimens were sectioned mesio-distally and scored for microleakage. A Wilcoxon test for pairwise comparison was performed to determine significance. Dye penetration was observed in 30% of the 20 restorations with cavo-surface margins located occlusal to the CEJ and in 55% of the 20 restorations with cavo-surface margins located apical to the CEJ. The bulk-fill flowable resin base SureFil SDR with XenoV dental adhesive provided a better marginal seal in Class II restorations with gingival margins above the CEJ than in restorations with gingival margins below the CEJ. SDR should not be recommended for Class II cavity preparations with gingival margins located below the CEJ.

  17. Reducing variability in the output of pattern classifiers using histogram shaping

    International Nuclear Information System (INIS)

    Gupta, Shalini; Kan, Chih-Wen; Markey, Mia K.

    2010-01-01

    Purpose: The authors present a novel technique based on histogram shaping to reduce the variability in the output and (sensitivity, specificity) pairs of pattern classifiers with identical ROC curves but differently distributed outputs. Methods: The authors identify different sources of variability in the output of linear pattern classifiers with identical ROC curves, which also result in classifiers with differently distributed outputs. They theoretically develop a novel technique based on the matching of the histograms of these differently distributed pattern classifier outputs to reduce the variability in their (sensitivity, specificity) pairs at fixed decision thresholds, and to reduce the variability in their actual output values. They empirically demonstrate the efficacy of the proposed technique by means of analyses on simulated data and real-world mammography data. Results: For the simulated data, with three different known sources of variability, and for the real-world mammography data with unknown sources of variability, the proposed classifier output calibration technique significantly reduced the variability in the classifiers' (sensitivity, specificity) pairs at fixed decision thresholds. Furthermore, for classifiers with monotonically or approximately monotonically related output variables, the histogram shaping technique also significantly reduced the variability in their actual output values. Conclusions: Classifier output calibration based on histogram shaping can be successfully employed to reduce the variability in the output values and (sensitivity, specificity) pairs of pattern classifiers with identical ROC curves but differently distributed outputs.
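
    A minimal sketch of one way to realize histogram shaping, via monotone quantile matching of one classifier's outputs onto another's empirical distribution (monotonicity leaves each ROC curve unchanged); the score distributions below are synthetic placeholders.

      # Map classifier B's outputs onto the empirical distribution of A's.
      import numpy as np

      def histogram_match(scores_b, scores_a):
          """Monotone map of scores_b onto the empirical distribution of scores_a."""
          ranks = np.searchsorted(np.sort(scores_b), scores_b, side="right")
          quantiles = np.clip(ranks / len(scores_b), 0, 1)
          return np.quantile(scores_a, quantiles)

      rng = np.random.default_rng(0)
      out_a = rng.beta(2, 5, size=1000)       # classifier A: skewed toward 0
      out_b = rng.normal(0, 1, size=1000)     # classifier B: roughly Gaussian

      matched = histogram_match(out_b, out_a)
      print(np.round([out_a.mean(), matched.mean()], 3))  # similar distributions now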

  18. Margins in breast conserving surgery: The financial cost & potential savings associated with the new margin guidelines.

    Science.gov (United States)

    Singer, Lauren; Brown, Eric; Lanni, Thomas

    2016-08-01

    In this study, we compare the indications for re-excision, the findings of additional tumor in the re-excision specimen as they relate to margin status, and the costs associated with re-excision based on recent consensus statements. A retrospective analysis was performed on 462 patients with invasive breast carcinoma who underwent at least one lumpectomy between January 2011 and December 2013. Postoperative data were analyzed based on where additional disease was found, as it relates to the margin status of the initial lumpectomy, and the additional direct costs associated with additional procedures. Of the 462 patients sampled, 149 underwent a re-excision surgery (32.2%). Four patients underwent mastectomy as their second operation. In the 40 patients with additional disease found on re-excision, 36 (90.0%) had a positive margin on their initial lumpectomy. None of the four mastectomy patients had residual disease. The mean cost of the initial lumpectomy for all 462 patients was $2118.01, plus an additional $1801.92 for those who underwent re-excision. A positive margin was most predictive of finding residual tumor on re-excision, as would be expected. Using the old criteria, only 6.6% (4/61) of patients who had undergone re-excision with a 'clear' margin had additional tumor found, at a total cost of $106,354.11. Thus, the new consensus guidelines will lead to less overall cost, at no clinical risk to patients, while reducing a patient's surgical risk and essentially eliminating delays in adjuvant care. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Construction of Pancreatic Cancer Classifier Based on SVM Optimized by Improved FOA

    Science.gov (United States)

    Ma, Xiaoqi

    2015-01-01

    A novel method is proposed to establish a pancreatic cancer classifier. First, the concepts of quantum computing and the fruit fly optimization algorithm (FOA) are introduced. Then FOA is improved by quantum coding and quantum operations, and a new smell concentration determination function is defined. Finally, the improved FOA is used to optimize the parameters of a support vector machine (SVM), and the classifier is established with the optimized SVM. In order to verify the effectiveness of the proposed method, SVM and other classification methods were chosen as comparison methods. The experimental results show that the proposed method improves classifier performance and costs less time. PMID:26543867
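
    A hedged sketch of a fruit-fly-style search over SVM hyper-parameters, where cross-validated accuracy plays the role of the smell concentration; the quantum coding and operations of the paper are not reproduced, and the search ranges here are arbitrary.

      # FOA-style search: sample (C, gamma) around the best point found so far.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=200, n_features=10, random_state=0)
      rng = np.random.default_rng(0)

      best, best_score = np.zeros(2), -np.inf   # position in log10(C), log10(gamma)
      for _ in range(20):                       # generations
          for _ in range(10):                   # flies per generation
              pos = best + rng.normal(0, 0.5, size=2)
              C, gamma = 10.0 ** pos
              score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
              if score > best_score:
                  best, best_score = pos, score

      print("best C=%.3g gamma=%.3g acc=%.3f" % (*10.0 ** best, best_score))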

  20. Heterogeneous classifier fusion for ligand-based virtual screening: or, how decision making by committee can be a good thing.

    Science.gov (United States)

    Riniker, Sereina; Fechner, Nikolas; Landrum, Gregory A

    2013-11-25

    The concept of data fusion - the combination of information from different sources describing the same object with the expectation of generating a more accurate representation - has found application in a very broad range of disciplines. In the context of ligand-based virtual screening (VS), data fusion has been applied to combine knowledge from either different active molecules or different fingerprints to improve similarity search performance. Machine-learning (ML) methods based on fusion of multiple homogeneous classifiers, in particular random forests, have also been widely applied in the ML literature. The heterogeneous version of classifier fusion - fusing the predictions from different model types - has been less explored. Here, we investigate heterogeneous classifier fusion for ligand-based VS using three different ML methods, RF, naïve Bayes (NB), and logistic regression (LR), with four 2D fingerprints: atom pairs, topological torsions, the RDKit fingerprint, and a circular fingerprint. The methods are compared using a previously developed benchmarking platform for 2D fingerprints, which is extended to ML methods in this article. The original data sets are filtered for difficulty, and a new set of challenging data sets from ChEMBL is added. Data sets were also generated for a second use case: starting from a small set of related actives instead of diverse actives. The final fused model consistently outperforms the other approaches across the broad variety of targets studied, indicating that heterogeneous classifier fusion is a very promising approach for ligand-based VS. The new data sets together with the adapted source code for ML methods are provided in the Supporting Information.
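
    A minimal sketch of heterogeneous classifier fusion: the three model types named above are trained on the same fingerprint features and their predicted probabilities averaged; the random bit vectors stand in for real chemical fingerprints.

      # Average the predicted probabilities of three different model types.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.naive_bayes import BernoulliNB

      rng = np.random.default_rng(0)
      X = rng.integers(0, 2, size=(300, 64))      # stand-in binary fingerprints
      y = (X[:, :8].sum(axis=1) > 4).astype(int)  # toy activity label

      models = [RandomForestClassifier(random_state=0),
                BernoulliNB(),
                LogisticRegression(max_iter=1000)]
      for m in models:
          m.fit(X, y)

      fused = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
      print(fused[:5].round(3))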

  1. Frog sound identification using extended k-nearest neighbor classifier

    Science.gov (United States)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

    Frog sound identification based on vocalization is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest neighbors and mutual sharing of neighborhood concepts, with the aim of improving classification performance. It makes a prediction based on which training samples are the nearest neighbors of the testing sample and which consider the testing sample as their nearest neighbor. In order to evaluate the classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN), and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds were segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while features were extracted by the Mel Frequency Cepstrum Coefficient (MFCC). The experimental results show that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers, KNN, FKNN, KGNN and MKNN, for all cases.
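
    A hedged sketch of an extended nearest-neighbor decision in the spirit described: a test sample is voted on both by its k nearest training samples and by the training samples whose own k-NN radius reaches it (reverse neighbors). This is an interpretation of the EKNN idea, not the authors' exact formulation.

      # Vote over forward neighbors plus reverse (mutual) neighbors.
      import numpy as np
      from scipy.spatial.distance import cdist

      def eknn_predict(X_train, y_train, x, k=3):
          d_to_train = cdist(x[None, :], X_train)[0]
          forward = set(np.argsort(d_to_train)[:k])        # k nearest to x
          # reverse neighbors: training points whose k-NN radius reaches x
          D = cdist(X_train, X_train)
          np.fill_diagonal(D, np.inf)
          radii = np.sort(D, axis=1)[:, k - 1]
          reverse = set(np.where(d_to_train <= radii)[0])
          voters = np.array(sorted(forward | reverse))
          return np.bincount(y_train[voters]).argmax()

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
      y = np.repeat([0, 1], 20)
      print(eknn_predict(X, y, X[0] + 0.1))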

  2. Demand Response Design and Use Based on Network Locational Marginal Prices

    DEFF Research Database (Denmark)

    Morais, Hugo; Faria, Pedro; Vale, Zita

    2014-01-01

    Power systems have been experiencing huge changes, mainly due to the substantial increase of distributed generation (DG) and operation in competitive environments. Virtual Power Players (VPP) can aggregate several players, namely a diversity of energy resources, including distributed generation (DG) based on several technologies, electric storage systems (ESS) and demand response (DR). Energy resources management gains increasing relevance in this competitive context. This makes the use of DR more interesting and flexible, giving rise to a wide range of new opportunities. This paper proposes a methodology to support VPPs in DR program management, considering all the existing energy resources (generation and storage units) and the distribution network. The proposed method is based on locational marginal price (LMP) values. The evaluation of the impact of using DR specific

  3. The EB factory project. I. A fast, neural-net-based, general purpose light curve classifier optimized for eclipsing binaries

    International Nuclear Information System (INIS)

    Paegert, Martin; Stassun, Keivan G.; Burger, Dan M.

    2014-01-01

    We describe a new neural-net-based light curve classifier and provide it with documentation as a ready-to-use tool for the community. While optimized for identification and classification of eclipsing binary stars, the classifier is general purpose, and has been developed for speed in the context of upcoming massive surveys such as the Large Synoptic Survey Telescope. A challenge for classifiers in the context of neural-net training and massive data sets is to minimize the number of parameters required to describe each light curve. We show that a simple and fast geometric representation that encodes the overall light curve shape, together with a chi-square parameter to capture higher-order morphology information, results in efficient yet robust light curve classification, especially for eclipsing binaries. Testing the classifier on the ASAS light curve database, we achieve a retrieval rate of 98% and a false-positive rate of 2% for eclipsing binaries. We achieve similarly high retrieval rates for most other periodic variable-star classes, including RR Lyrae, Mira, and delta Scuti. However, the classifier currently has difficulty discriminating between different sub-classes of eclipsing binaries, and suffers a relatively low (∼60%) retrieval rate for multi-mode delta Cepheid stars. We find that it is imperative to train the classifier's neural network with exemplars that include the full range of light curve quality on which the classifier will be expected to perform; the classifier performs well on noisy light curves only when trained with noisy exemplars. The classifier source code, ancillary programs, a trained neural net, and a guide for use are provided.

  4. Prevalence, risk factors and outcomes of velamentous and marginal cord insertions: a population-based study of 634,741 pregnancies.

    Directory of Open Access Journals (Sweden)

    Cathrine Ebbing

    Full Text Available OBJECTIVES: To determine the prevalence of, and risk factors for, anomalous insertions of the umbilical cord, and the risk of adverse outcomes in these pregnancies. DESIGN: Population-based registry study. SETTING: Medical Birth Registry of Norway 1999-2009. POPULATION: All births (gestational age >16 weeks to <45 weeks) in Norway (623,478 singletons and 11,263 pairs of twins). METHODS: Descriptive statistics and odds ratios (ORs) for risk factors and adverse outcomes based on logistic regressions adjusted for confounders. MAIN OUTCOME MEASURES: Velamentous or marginal cord insertion. Abruption of the placenta, placenta praevia, pre-eclampsia, preterm birth, operative delivery, low Apgar score, transferral to neonatal intensive care unit (NICU), malformations, birthweight, and perinatal death. RESULTS: The prevalence of abnormal cord insertion was 7.8% (1.5% velamentous, 6.3% marginal) in singleton pregnancies and 16.9% (6% velamentous, 10.9% marginal) in twins. The two conditions shared risk factors; twin gestation and pregnancies conceived with the aid of assisted reproductive technology were the most important, while bleeding in pregnancy, advanced maternal age, maternal chronic disease, female foetus and a previous pregnancy with anomalous cord insertion were other risk factors. Velamentous and marginal insertion were associated with an increased risk of adverse outcomes such as placenta praevia (OR = 3.7; 95% CI = 3.1-4.6) and placental abruption (OR = 2.6; 95% CI = 2.1-3.2). The risk of pre-eclampsia, preterm birth and delivery by acute caesarean was doubled, as was the risk of low Apgar score, transferral to NICU, low birthweight and malformations. For velamentous insertion the risk of perinatal death at term was tripled, OR = 3.3 (95% CI = 2.5-4.3). CONCLUSION: The prevalence of velamentous and marginal insertions of the umbilical cord was 7.8% in singletons and 16.9% in twin gestations, with marginal insertion being the more common of the two.

  5. An ensemble self-training protein interaction article classifier.

    Science.gov (United States)

    Chen, Yifei; Hou, Ping; Manderick, Bernard

    2014-01-01

    Protein-protein interaction (PPI) is essential to understanding the fundamental processes governing cell biology. The mining and curation of PPI knowledge are critical for analyzing proteomics data. Hence it is desirable to classify articles as PPI-related or not automatically. In order to build interaction article classification systems, an annotated corpus is needed. However, it is usually the case that only a small number of labeled articles can be obtained manually, while a large number of unlabeled articles are available. By combining ensemble learning and semi-supervised self-training, an ensemble self-training interaction classifier called EST_IACer is designed to classify PPI-related articles based on a small number of labeled articles and a large number of unlabeled articles. A biological-background-based feature weighting strategy is extended using the category information from both labeled and unlabeled data. Moreover, a heuristic constraint is put forward to select optimal instances from unlabeled data to further improve performance. Experimental results show that EST_IACer can classify PPI-related articles effectively and efficiently.

  6. A fixed incore based system for an on line core margin monitoring

    International Nuclear Information System (INIS)

    Mourlevat, J. L.; Carrasco, M.

    2002-01-01

    In order to comply with the needs of utilities for improvements in the economic competitiveness of nuclear energy, one of the solutions proposed is to reduce the cost of the fuel cycle. To this aim, increasing the lifetime of cycles by introducing so-called low-leakage fuel loading patterns to the reactor is a rather promising solution. However, these loading patterns lead to an increase in the core hot-spot factors and therefore to a reduction in the core operating margins. For many years FRAMATOME-ANP has developed and proposed solutions aiming at increasing and therefore restoring these margins, namely: the improvement of design methods based on three-dimensional modelling of the core, on kinetic representation of transients and on neutron-thermohydraulic coupling, or the improvement of the fuel with the introduction of intermediate mixing grids. A third approach is to improve the core instrumentation associated with the system for monitoring the core operating limits; it is this approach that is described in this presentation. The core operating limits monitoring function calls on real-time knowledge of the power distribution. At present, for most of the PWRs operated in the world, this knowledge is based on the measurement of the axial power distribution made by two-section neutron detectors located outside the pressure vessel. Detectors of this kind can only provide the operators with a coarse picture of the axial power distribution through the axial dissymmetry index, the so-called axial offset. During normal core operation, operators have to control the axial power distribution, that is, to keep the axial-offset value inside a pre-determined domain whose width is a function of the mean power level. This pre-determined domain is calculated or checked during the nuclear design phase of the reload and, owing to the methodology used to calculate it, a considerable potential for improving the core operating margin does exist. This is the reason why…

  7. Generalization in the XCSF classifier system: analysis, improvement, and extension.

    Science.gov (United States)

    Lanzi, Pier Luca; Loiacono, Daniele; Wilson, Stewart W; Goldberg, David E

    2007-01-01

    We analyze generalization in XCSF and introduce three improvements. We begin by showing that the types of generalizations evolved by XCSF can be influenced by the input range. To explain these results we present a theoretical analysis of the convergence of classifier weights in XCSF which highlights a broader issue. In XCSF, because of the mathematical properties of the Widrow-Hoff update, the convergence of classifier weights in a given subspace can be slow when the spread of the eigenvalues of the autocorrelation matrix associated with each classifier is large. As a major consequence, the system's accuracy pressure may act before classifier weights are adequately updated, so that XCSF may evolve piecewise constant approximations instead of the intended, and more efficient, piecewise linear ones. We propose three different ways to update classifier weights in XCSF so as to increase its generalization capabilities: one based on a condition-based normalization of the inputs, one based on linear least squares, and one based on the recursive version of linear least squares. Through a series of experiments we show that while all three approaches significantly improve XCSF, the least squares approaches appear to be the best performing and most robust. Finally we show how XCSF can be extended to include polynomial approximations.
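
    The contrast at issue can be made concrete with an illustrative sketch (arbitrary learning rate; not code from the paper): the Widrow-Hoff step converges slowly along directions with small eigenvalues of the input autocorrelation matrix, whereas a recursive least squares step normalizes for that spread.

        import numpy as np

        def widrow_hoff_step(w, x, target, eta=0.2):
            # LMS gradient step on the squared error; convergence along a
            # direction scales with the corresponding eigenvalue of E[x x^T]
            return w + eta * (target - w @ x) * x

        def rls_step(w, P, x, target, lam=1.0):
            # P tracks the inverse input autocorrelation matrix, so the step
            # is effectively normalized per direction (lam = forgetting factor)
            Px = P @ x
            gain = Px / (lam + x @ Px)
            w = w + gain * (target - w @ x)
            P = (P - np.outer(gain, Px)) / lam
            return w, P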

  8. Optimal beam margins in linac-based VMAT stereotactic ablative body radiotherapy: a Pareto front analysis for liver metastases.

    Science.gov (United States)

    Cilla, Savino; Ianiro, Anna; Deodato, Francesco; Macchia, Gabriella; Digesù, Cinzia; Valentini, Vincenzo; Morganti, Alessio G

    2017-11-27

    We explored the Pareto front mathematical strategy to determine the optimal block margin and prescription isodose for stereotactic body radiotherapy (SBRT) treatments of liver metastases using the volumetric-modulated arc therapy (VMAT) technique. Three targets (planning target volumes [PTVs] = 20, 55, and 101 cc) were selected. A single-fraction dose of 26 Gy was prescribed (prescription dose [PD]). VMAT plans were generated for 3 different beam energies. Pareto fronts based on (1) different multileaf collimator (MLC) block margins around the PTV and (2) different prescription isodose lines (IDL) were produced. For each block margin, the greatest IDL fulfilling the coverage criterion (95% of the PTV receiving 100% of the PD) was considered as providing the optimal clinical plan for PTV coverage. Liver Dmean, V7Gy, and V12Gy were used against the PTV coverage to generate the fronts. Gradient indexes (GI and mGI), the homogeneity index (HI), and healthy liver irradiation in terms of Dmean, V7Gy, and V12Gy were calculated to compare the different plans. In addition, each target was also optimized with a full inverse planning engine to obtain a direct comparison with the anatomy-based treatment planning system (TPS) results. About 900 plans were calculated to generate the fronts. GI and mGI show a U-shaped behavior as a function of beam margin, with minimal values obtained with a +1 mm MLC margin. For these plans, the IDL ranges from 74% to 86%. GI and mGI also show a V-shaped behavior with respect to the HI index, with minimum values at 1 mm for all metrics, independent of tumor dimensions and beam energy. Fully inverse-optimized plans reported worse results with respect to the Pareto plans. In conclusion, Pareto fronts provide a rigorous strategy for choosing clinically optimal plans in SBRT treatments. We show that a 1-mm MLC block margin provides the best results with regard to healthy liver tissue irradiation and steepness of dose fall-off.
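
    The selection step can be illustrated with a small non-domination filter (the plan metrics below are hypothetical; higher PTV coverage is better, lower healthy-liver dose is better):

        import numpy as np

        def pareto_front(coverage, liver_dose):
            # return indices of plans not dominated on (coverage up, dose down)
            pts = list(zip(coverage, liver_dose))
            front = []
            for i, (c, d) in enumerate(pts):
                dominated = any(
                    c2 >= c and d2 <= d and (c2 > c or d2 < d)
                    for j, (c2, d2) in enumerate(pts) if j != i
                )
                if not dominated:
                    front.append(i)
            return front

        cov = np.array([0.93, 0.95, 0.97, 0.95])   # fraction of PTV at PD
        dmean = np.array([4.1, 4.4, 5.2, 4.2])     # healthy liver Dmean, Gy
        print(pareto_front(cov, dmean))            # -> [0, 2, 3]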

  9. Adaptation in P300 brain-computer interfaces: A two-classifier cotraining approach

    DEFF Research Database (Denmark)

    Panicker, Rajesh C.; Sun, Ying; Puthusserypady, Sadasivan

    2010-01-01

    A cotraining-based approach is introduced for constructing high-performance classifiers for P300-based brain-computer interfaces (BCIs), which were trained from very little data. It uses two classifiers: Fisher's linear discriminant analysis and Bayesian linear discriminant analysis, progressively…

  10. New neural network classifier of fall-risk based on the Mahalanobis distance and kinematic parameters assessed by a wearable device

    International Nuclear Information System (INIS)

    Giansanti, Daniele; Macellari, Velio; Maccioni, Giovanni

    2008-01-01

    Fall prevention lacks easy, quantitative and wearable methods for the classification of fall-risk (FR). Efforts must thus be devoted to the choice of an ad hoc classifier, both to reduce the size of the sample used to train the classifier and to improve performance. A new methodology that uses a neural network (NN) and a wearable device is hereby proposed for this purpose. The NN uses kinematic parameters assessed by a wearable device with accelerometers and rate gyroscopes during a posturography protocol. The training of the NN was based on the Mahalanobis distance and was carried out on two groups of 30 elderly subjects with varying fall-risk Tinetti scores. The validation was done on two groups of 100 subjects with different fall-risk Tinetti scores and showed that, both in terms of specificity and sensitivity, the NN performed better than other classifiers (naive Bayes, Bayes net, multilayer perceptron, support vector machines, statistical classifiers). In particular, (i) the proposed NN methodology improved the specificity and sensitivity by a mean of 3% when compared to the statistical classifier based on the Mahalanobis distance (SCMD) described in Giansanti (2006 Physiol. Meas. 27 1081-90); (ii) the assessed specificity was 97%, the assessed sensitivity was 98%, and the area under the receiver operating characteristic curve was 0.965.
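
    For reference, the Mahalanobis-distance rule that the SCMD baseline relies on reduces to a few lines (a generic sketch; the kinematic features and group labels are placeholders):

        import numpy as np

        def fit_groups(groups):
            # groups: dict label -> (n_samples, n_features) array
            params = {}
            for label, X in groups.items():
                mean = X.mean(axis=0)
                cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
                params[label] = (mean, cov_inv)
            return params

        def mahalanobis(x, mean, cov_inv):
            diff = x - mean
            return float(np.sqrt(diff @ cov_inv @ diff))

        def classify(x, params):
            # assign to the fall-risk group whose distribution is closest
            return min(params, key=lambda g: mahalanobis(x, *params[g]))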

  11. Memory-Based Specification of Verbal Features for Classifying Animals into Super-Ordinate and Sub-Ordinate Categories

    OpenAIRE

    Takahiro Soshi; Norio Fujimaki; Atsushi Matsumoto; Aya S. Ihara

    2017-01-01

    Accumulating evidence suggests that category representations are based on features. Distinguishing features are considered to define categories, because of all-or-none responses for objects in different categories; however, it is unclear how distinguishing features actually classify objects at various category levels. The present study included 75 animals within three classes (mammal, bird, and fish), along with 195 verbal features. Healthy adults participated in memory-based feature-animal m...

  12. Reconstructing Rodinia by Fitting Neoproterozoic Continental Margins

    Science.gov (United States)

    Stewart, John H.

    2009-01-01

    Reconstructions of Phanerozoic tectonic plates can be closely constrained by lithologic correlations across conjugate margins, by paleontologic information, by correlation of orogenic belts, by paleomagnetic location of continents, and by ocean floor magmatic stripes. In contrast, Proterozoic reconstructions are hindered by the lack of some of these tools or by the lack of their precision. To overcome some of these difficulties, this report focuses on a different method of reconstruction, namely the use of the shape of continents to assemble the supercontinent of Rodinia, much like a jigsaw puzzle. Compared to the vast amount of information available for Phanerozoic systems, such a limited approach for Proterozoic rocks may seem suspect. However, using the assembly of the southern continents (South America, Africa, India, Arabia, Antarctica, and Australia) as an example, a very tight fit of the continents is apparent and illustrates the power of the jigsaw puzzle method. This report focuses on Neoproterozoic rocks, which are shown on two new detailed geologic maps that constitute the backbone of the study; younger and older rocks are either not discussed or discussed only in passing. The Neoproterozoic continents and continental margins are identified based on the distribution of continental-margin sedimentary and magmatic rocks that define the break-up margins of Rodinia. These Neoproterozoic continental exposures, as well as critical Neo- and Mesoproterozoic tectonic features shown on the two new map compilations, are used to reconstruct the Mesoproterozoic supercontinent of Rodinia. This approach differs from the common approach of using fold belts to define structural features deemed important in the Rodinian reconstruction. Fold belts are difficult to date, and many are significantly younger than the time frame considered here (1,200 to 850 Ma). Identifying Neoproterozoic continental margins, which are primarily…

  13. Three-Phase AC Optimal Power Flow Based Distribution Locational Marginal Price: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Rui; Zhang, Yingchen

    2017-05-17

    Designing market mechanisms for electricity distribution systems has been a hot topic due to the increased presence of smart loads and distributed energy resources (DERs) in distribution systems. The distribution locational marginal pricing (DLMP) methodology is one of the real-time pricing methods to enable such market mechanisms and provide economic incentives to active market participants. Determining the DLMP is challenging due to high power losses, the voltage volatility, and the phase imbalance in distribution systems. Existing DC Optimal Power Flow (OPF) approaches are unable to model power losses and the reactive power, while single-phase AC OPF methods cannot capture the phase imbalance. To address these challenges, in this paper, a three-phase AC OPF based approach is developed to define and calculate DLMP accurately. The DLMP is modeled as the marginal cost to serve an incremental unit of demand at a specific phase at a certain bus, and is calculated using the Lagrange multipliers in the three-phase AC OPF formulation. Extensive case studies have been conducted to understand the impact of system losses and the phase imbalance on DLMPs as well as the potential benefits of flexible resources.
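
    The pricing principle can be seen in a deliberately tiny stand-in (a one-bus, loss-free dispatch LP, not the paper's three-phase AC OPF; assumes SciPy's HiGHS-based linprog): the dual of the demand-balance constraint is the marginal cost of serving one more unit of demand.

        from scipy.optimize import linprog

        cost = [20.0, 50.0]        # $/MWh for a cheap and an expensive unit
        demand = 120.0             # MW
        # minimize cost @ p  s.t.  p1 + p2 = demand,  0 <= p_i <= 100
        res = linprog(cost,
                      A_eq=[[1.0, 1.0]], b_eq=[demand],
                      bounds=[(0, 100), (0, 100)],
                      method="highs")
        print(res.x)                   # dispatch: [100., 20.]
        # sensitivity of the optimal cost to demand = marginal price,
        # 50 $/MWh here (the expensive unit is on the margin)
        print(res.eqlin.marginals[0])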

  14. Time Safety Margin: Theory and Practice

    Science.gov (United States)

    2016-09-01

    This record consists of report front matter: Technical Information Handbook 412TW-TIH-16-01, "Time Safety Margin: Theory and Practice," by William R. Gray, III, Chief Test Pilot, USAF Test Pilot School, September 2016; submitted by the Commander, 412th Test Wing, Edwards AFB, California 93524-6843. It cites Air Education and Training Command Handbook 99-107, "T-38 Road to Wings," Randolph Air Force Base, Texas, July 2013.

  15. The effect of repeated preheating of dimethacrylate and silorane-based composite resins on marginal gap of class V restorations.

    Science.gov (United States)

    Alizadeh Oskoee, Parnian; Pournaghi Azar, Fatemeh; Jafari Navimipour, Elmira; Ebrahimi Chaharom, Mohammad Esmaeel; Naser Alavi, Fereshteh; Salari, Ashkan

    2017-01-01

    Background. One of the problems with composite resin restorations is gap formation at the resin‒tooth interface. The present study evaluated the effect of preheating cycles of silorane- and dimethacrylate-based composite resins on gap formation at the gingival margins of Class V restorations. Methods. In this in vitro study, standard Class V cavities were prepared on the buccal surfaces of 48 bovine incisors. For the restorative procedure, the samples were randomly divided into 2 groups based on the type of composite resin (group 1: dimethacrylate composite [Filtek Z250]; group 2: silorane composite [Filtek P90]) and each group was randomly divided into 2 subgroups based on the composite temperature (A: room temperature; B: after 40 preheating cycles up to 55°C). Marginal gaps were measured using a stereomicroscope at ×40 and analyzed with two-way ANOVA. Inter- and intra-group comparisons were analyzed with post-hoc Tukey tests. The significance level was defined at P<0.05. Results. The effects of composite resin type and preheating, and the interactive effect of these variables, on gap formation were significant (P<0.05). Marginal gaps differed significantly between the two composite resins (P<0.05) and were greater for composite resins at room temperature compared to composite resins after 40 preheating cycles (P<0.05). Conclusion. Preheating of silorane-based composites can result in the best marginal adaptation.

  16. Gene expression-based classifiers identify Staphylococcus aureus infection in mice and humans.

    Directory of Open Access Journals (Sweden)

    Sun Hee Ahn

    Full Text Available Staphylococcus aureus causes a spectrum of human infection. Diagnostic delays and uncertainty lead to treatment delays and inappropriate antibiotic use. A growing literature suggests the host's inflammatory response to the pathogen represents a potential tool to improve upon current diagnostics. The hypothesis of this study is that the host responds differently to S. aureus than to E. coli infection in a quantifiable way, providing a new diagnostic avenue. This study uses Bayesian sparse factor modeling and penalized binary regression to define peripheral blood gene-expression classifiers of murine and human S. aureus infection. The murine-derived classifier distinguished S. aureus infection from healthy controls and Escherichia coli-infected mice across a range of conditions (mouse and bacterial strain, time post infection) and was validated in outbred mice (AUC>0.97). A S. aureus classifier derived from a cohort of 94 human subjects distinguished S. aureus blood stream infection (BSI) from healthy subjects (AUC 0.99) and E. coli BSI (AUC 0.84). Murine and human responses to S. aureus infection share common biological pathways, allowing the murine model to classify S. aureus BSI in humans (AUC 0.84). Both murine and human S. aureus classifiers were validated in an independent human cohort (AUC 0.95 and 0.92, respectively). The approach described here lends insight into the conserved and disparate pathways utilized by mice and humans in response to these infections. Furthermore, this study advances our understanding of S. aureus infection and the host response to it, and identifies new diagnostic and therapeutic avenues.
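
    The penalized-regression component belongs to the same family as L1-regularised logistic regression; a generic stand-in with scikit-learn (random placeholders for the expression matrix and labels; the Bayesian sparse factor modeling step is not reproduced here):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(94, 500))   # subjects x gene-expression features
        y = rng.integers(0, 2, size=94)  # infected vs healthy labels

        # The L1 penalty drives most gene coefficients to exactly zero,
        # yielding a sparse, interpretable classifier.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()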

  17. Marginal Generation Technology in the Chinese Power Market towards 2030 Based on Consequential Life Cycle Assessment

    Directory of Open Access Journals (Sweden)

    Guangling Zhao

    2016-09-01

    Full Text Available Electricity consumption is often the hotspot of life cycle assessment (LCA) of products, industrial activities, or services. The objective of this paper is to provide a consistent, scientific, region-specific electricity-supply-based inventory of electricity generation technology for national and regional power grids. Marginal electricity generation technology is pivotal in assessing impacts related to additional consumption of electricity. China covers a large geographical area with regional supply grids that are arguably more or less integrated. Meanwhile, it is also a country with internal imbalances in regional energy supply and demand. Therefore, we suggest an approach to achieve a geographical subdivision of the Chinese electricity grid, corresponding to the interprovincial regional power grids, namely the North, the Northeast, the East, the Central, the Northwest, and the Southwest China Grids, and the China Southern Power Grid. The approach combines information from Chinese national plans for capacity changes in both production and distribution grids with knowledge of resource availability. The results show that, nationally, the marginal technology is coal-fired electricity generation, as is also the case in the North and Northwest China Grids. In the Northeast, East, and Central China Grids, nuclear power gradually replaces coal-fired electricity and becomes the marginal technology. In the Southwest China Grid and the China Southern Power Grid, the marginal electricity is hydropower towards 2030.

  18. Application of Alkenone 14C-Based chronostratigraphy in carbonate barren sediments on the Peru Margin.

    Science.gov (United States)

    Higginson, M. J.; Altabet, M. A.; Herbert, T. D.

    2003-04-01

    Despite the availability of high-quality sediment cores in key locations, little paleoclimatic information exists for the Peru margin, largely because poor carbonate preservation severely restricts the use of traditional carbonate-based proxies for stratigraphy, dating, and paleo-environmental reconstruction. Many sites also include hiatuses produced by the variable influence of undercurrents on sediment accumulation. To overcome these difficulties, we have developed (in collaboration with T. Eglinton, WHOI) a laboratory facility to successfully extract and purify haptophyte-derived alkenones for compound-specific 14C AMS dating (modified from OHKOUCHI et al., 2002). This avoids potential problems with dating bulk organic carbon which we assume, even in an upwelling environment as highly productive as the Peru margin, is not a priori solely of marine origin. In a recently collected, mid-Peru Margin core (ODP Leg 201 Site 1228D), comparison of our alkenone 14C dates with bulk sediment organic carbon dates and known stratigraphic markers produces a very well constrained, curvilinear age-depth relationship for at least the last 14 kyr. A discrete ash layer at Site 1228D with an adjacent alkenone 14C age of 3890 ± 350 yr is within error identical to the 14C age of a prominent ash layer (3800 ± 50 yr) found west of the large Peruvian El Misti volcano (16°18'S, 71°24'W). In summary, these results show that the Peru margin alkenones are autochthonous (i.e. not from an older, distant source) and provide sufficient dating precision to permit, for the first time, high-resolution paleoceanographic studies in this highly important marine province. Based upon this new chronology, synchronous changes in alkenone-derived SST estimates in two of our independently-dated records are the first to record at high resolution (a) a large LGM-Holocene SST range in the Tropics (up to 7.8 °C during brief events in this upwelling location); and (b) sharp coolings (4 °C) consistent with…

  19. Marginal and happy? The need for uniqueness predicts the adjustment of marginal immigrants.

    Science.gov (United States)

    Debrosse, Régine; de la Sablonnière, Roxane; Rossignac-Milon, Maya

    2015-12-01

    Marginalization is often presented as the strategy associated with the worst adjustment for immigrants. This study identifies a critical variable that buffers marginal immigrants from the negative effects of marginalization on adjustment: The need for uniqueness. In three studies, we surveyed immigrants recruited on university campuses (n = 119, n = 116) and in the field (n = 61). Among marginal immigrants, a higher need for uniqueness predicted higher self-esteem (Study 1), affect (Study 2), and life satisfaction (Study 3), and marginally higher happiness (Study 2) and self-esteem (Study 3). No relationship between the need for uniqueness and adjustment was found among non-marginal immigrants. The adaptive value of the need for uniqueness for marginal immigrants is discussed. © 2015 The British Psychological Society.

  20. Oblique decision trees using embedded support vector machines in classifier ensembles

    NARCIS (Netherlands)

    Menkovski, V.; Christou, I.; Efremidis, S.

    2008-01-01

    Classifier ensembles have emerged in recent years as a promising research area for boosting pattern recognition systems' performance. We present a new base classifier that utilizes oblique decision tree technology based on support vector machines for the construction of oblique (non-axis parallel)

  1. Buccal mucosa carcinoma: surgical margin less than 3 mm, not 5 mm, predicts locoregional recurrence

    Directory of Open Access Journals (Sweden)

    Chiou Wen-Yen

    2010-09-01

    Full Text Available Abstract Background Most treatment failures of buccal mucosal cancer after surgery are locoregional recurrences. We sought to determine how close a surgical margin can be before it is unsafe and requires further adjuvant treatment. Methods Between August 2000 and June 2008, a total of 110 patients with buccal mucosa carcinoma (25 with stage I, 31 with stage II, 11 with stage III, and 43 with stage IV, classified according to the American Joint Committee on Cancer 6th edition) were treated with surgery alone (n = 32), surgery plus postoperative radiotherapy (n = 38) or surgery plus adjuvant concurrent chemoradiotherapy (n = 40). Main outcome measures: The primary endpoint was locoregional disease control. Results The median follow-up time at analysis was 25 months (range, 4-104 months). The 3-year locoregional control rates were significantly different when a 3-mm surgical margin (≤3 versus >3 mm, 71% versus 95%, p = 0.04) but not a 5-mm margin (75% versus 92%, p = 0.22) was used as the cut-off level. We also found a quantitative correlation between surgical margin and locoregional failure (hazard ratio, 2.16; 95% confidence interval, 1.14-4.11; p = 0.019). Multivariate analysis identified pN classification and surgical margin as independent factors affecting disease-free survival and locoregional control. Conclusions A narrow surgical margin of ≤3 mm, but not 5 mm, is associated with high risk for locoregional recurrence of buccal mucosa carcinoma. More aggressive treatment after surgery is suggested.

  2. Buccal mucosa carcinoma: surgical margin less than 3 mm, not 5 mm, predicts locoregional recurrence

    International Nuclear Information System (INIS)

    Chiou, Wen-Yen; Hung, Shih-Kai; Lin, Hon-Yi; Hsu, Feng-Chun; Lee, Moon-Sing; Ho, Hsu-Chueh; Su, Yu-Chieh; Lee, Ching-Chih; Hsieh, Chen-Hsi; Wang, Yao-Ching

    2010-01-01

    Most treatment failures of buccal mucosal cancer after surgery are locoregional recurrences. We sought to determine how close a surgical margin can be before it is unsafe and requires further adjuvant treatment. Between August 2000 and June 2008, a total of 110 patients with buccal mucosa carcinoma (25 with stage I, 31 with stage II, 11 with stage III, and 43 with stage IV, classified according to the American Joint Committee on Cancer 6th edition) were treated with surgery alone (n = 32), surgery plus postoperative radiotherapy (n = 38) or surgery plus adjuvant concurrent chemoradiotherapy (n = 40). Main outcome measures: The primary endpoint was locoregional disease control. The median follow-up time at analysis was 25 months (range, 4-104 months). The 3-year locoregional control rates were significantly different when a 3-mm surgical margin (≤3 versus >3 mm, 71% versus 95%, p = 0.04) but not a 5-mm margin (75% versus 92%, p = 0.22) was used as the cut-off level. We also found a quantitative correlation between surgical margin and locoregional failure (hazard ratio, 2.16; 95% confidence interval, 1.14-4.11; p = 0.019). Multivariate analysis identified pN classification and surgical margin as independent factors affecting disease-free survival and locoregional control. A narrow surgical margin of ≤3 mm, but not 5 mm, is associated with high risk for locoregional recurrence of buccal mucosa carcinoma. More aggressive treatment after surgery is suggested.

  3. Classifying MCI Subtypes in Community-Dwelling Elderly Using Cross-Sectional and Longitudinal MRI-Based Biomarkers

    Directory of Open Access Journals (Sweden)

    Hao Guan

    2017-09-01

    Full Text Available Amnestic MCI (aMCI) and non-amnestic MCI (naMCI) are considered to differ in etiology and outcome. Accurately classifying MCI into meaningful subtypes would enable early intervention with targeted treatment. In this study, we employed structural magnetic resonance imaging (MRI) for MCI subtype classification. This was carried out in a sample of 184 community-dwelling individuals (aged 73–85 years). Cortical surface-based measurements were computed from longitudinal and cross-sectional scans. By introducing a feature selection algorithm, we identified a set of discriminative features, and further investigated the temporal patterns of these features. A voting classifier was trained and evaluated via 10 iterations of cross-validation. The best classification accuracies achieved were: 77% (naMCI vs. aMCI), 81% (aMCI vs. cognitively normal [CN]) and 70% (naMCI vs. CN). The best results for differentiating aMCI from naMCI were achieved with baseline features. The hippocampus, amygdala and frontal pole were found to be most discriminative for classifying MCI subtypes. Additionally, we observed the dynamics of classification of several MRI biomarkers. Learning the dynamics of atrophy may aid in the development of better biomarkers, as it may track the progression of cognitive impairment.

  4. Qualification of class 1e equipment: regulation, technological margins and test experience

    International Nuclear Information System (INIS)

    Pasco, Y.; Le Meur, M.; Henry, J.Y.; Droger, J.P.; Morange, E.; Roubault, J.

    1986-10-01

    French regulation requires licensees to qualify electrical equipment important to safety for service in nuclear power plants, to ensure that the equipment can perform its safety function under the set of plausible operating conditions. The French regulatory texts entitled Fundamental Safety Rules have classified safety-related electrical equipment into three main categories, k1, k2 and k3, according to location and operating conditions. The definition of a design basis accident test profile must account for margins applied to thermal-hydraulic code outputs. Specific safety margins were added to cover uncertainties in qualification test representativity. Up to now, accident sequence studies have shown the validity of such a qualification test profile. On the other hand, the results from post-accident simulation tests have shown that it is useful not only to validate post-accident operating life but also to reveal failures initiated during previous tests. [fr]

  5. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    Science.gov (United States)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
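
    The backward stepwise stage admits a compact sketch (one interpretation for illustration, not the authors' code): starting from the full pool, repeatedly drop the member whose removal costs the least in-sample accuracy, producing the nested collection of ensembles of descending size and fitness.

        import numpy as np

        def majority(preds):
            # preds: (n_members, n_samples) array of 0/1 votes
            return (preds.mean(axis=0) >= 0.5).astype(int)

        def backward_stepwise(preds, y):
            members = list(range(preds.shape[0]))
            collection = [list(members)]
            while len(members) > 1:
                scored = []
                for m in members:
                    rest = [i for i in members if i != m]
                    scored.append((np.mean(majority(preds[rest]) == y), m))
                _, drop = max(scored)   # removal keeping accuracy highest
                members = [i for i in members if i != drop]
                collection.append(list(members))
            return collection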

  6. Task Group on Safety Margins Action Plan (SMAP). Safety Margins Action Plan - Final Report

    International Nuclear Information System (INIS)

    Hrehor, Miroslav; Gavrilas, Mirela; Belac, Josef; Sairanen, Risto; Bruna, Giovanni; Reocreux, Michel; Touboul, Francoise; Krzykacz-Hausmann, B.; Park, Jong Seuk; Prosek, Andrej; Hortal, Javier; Sandervaag, Odbjoern; Zimmerman, Martin

    2007-01-01

    Chapter 3 looks at techniques for the deterministic calculation of safety margins and discusses the complementary probabilistic risk assessment techniques needed to generalize safety margins beyond design basis accidents. Chapter 4 examines the definition of safety margin, which is noted to take different meanings in different fields. For example, in civil engineering and applications that deal with the load-strength interference concept, safety margin describes the distance between the means of the load and strength probability density functions with regard to the standard deviation of both. However, in the nuclear industry, the term safety margin evolved to describe the goal of assuring the existence of adequate safety margin in deterministic calculations. Specifically, safety margin refers to keeping the value of a given safety variable under a pre-established safety limit in design basis accidents. Implicitly, safety margin in the nuclear industry is the distance from the safety limit to the onset of damage. The SMAP task group fulfilled its first objective by adopting a methodology for quantifying safety margins that merges the deterministic and probabilistic approaches. The methodology described in Chapter 5 is consistent with the definition of safety margin commonly used in the nuclear industry. The metrics of this methodology quantify the change in safety over a range of accident sequences that extend beyond the design bases. However, the methodology is not described in this report to a level that would meet guidance document requirements. This is in part because the methods and techniques needed to quantify safety margins in a global manner are evolving, and thus specific guidance rendered at this time would shortly become obsolete. This report presents the framework in sufficient detail to serve as the basis of an analysis and, thus, meets the second objective established for the SMAP group. A proof-of-concept application to further aid potential applicants…

  7. Distributed Classification of Localization Attacks in Sensor Networks Using Exchange-Based Feature Extraction and Classifier

    Directory of Open Access Journals (Sweden)

    Su-Zhe Wang

    2016-01-01

    Full Text Available Secure localization under different forms of attack has become an essential task in wireless sensor networks. Despite significant research efforts in detecting malicious nodes, the problem of localization attack type recognition has not yet been well addressed. Motivated by this concern, we propose a novel exchange-based attack classification algorithm. This is achieved by a distributed expectation maximization extractor integrated with the PECPR-MKSVM classifier. First, mixed distribution features based on probabilistic modeling are extracted using a distributed expectation maximization algorithm. After feature extraction, by introducing theory from support vector machines, an extensive contractive Peaceman-Rachford splitting method is derived to build the distributed classifier that diffuses the iteration calculation among neighbor sensors. To verify the efficiency of the distributed recognition scheme, four groups of experiments were carried out under various conditions. The average success rate of the proposed classification algorithm obtained in the presented experiments for external attacks is excellent, reaching about 93.9% in some cases. These testing results demonstrate that the proposed algorithm can produce a much higher recognition rate, and that it can be more robust and efficient even in the presence of an excessive malicious scenario.

  8. Effect of placement of droop based generators in distribution network on small signal stability margin and network loss

    DEFF Research Database (Denmark)

    Dheer, D.K.; Doolla, S.; Bandyopadhyay, S.

    2017-01-01

    For a utility-connected system, issues related to small signal stability with Distributed Generators (DGs) are insignificant due to the presence of a very strong grid. Optimally placed sources in a utility-connected microgrid system may not be optimal/stable in islanded condition. Among other issues, small signal stability margin comes to the fore. The present research studied the effect of the location of droop-controlled DGs on small signal stability margin and network loss in a modified IEEE 13-bus system, an IEEE 33-bus distribution system and a practical 22-bus radial distribution network. A complete… loss and stability margin is further investigated by identifying the Pareto fronts for the modified IEEE 13-bus, IEEE 33-bus and practical 22-bus radial distribution networks with application of the Reference-point-based Non-dominated Sorting Genetic Algorithm (R-NSGA). Results were validated by time domain…

  9. Representative Vector Machines: A Unified Framework for Classical Classifiers.

    Science.gov (United States)

    Gui, Jie; Liu, Tongliang; Tao, Dacheng; Sun, Zhenan; Tan, Tieniu

    2016-08-01

    Classifier design is a fundamental problem in pattern recognition. A variety of pattern classification methods such as the nearest neighbor (NN) classifier, support vector machine (SVM), and sparse representation-based classification (SRC) have been proposed in the literature. These typical and widely used classifiers were originally developed from different theory or application motivations and they are conventionally treated as independent and specific solutions for pattern classification. This paper proposes a novel pattern classification framework, namely, representative vector machines (or RVMs for short). The basic idea of RVMs is to assign the class label of a test example according to its nearest representative vector. The contributions of RVMs are twofold. On one hand, the proposed RVMs establish a unified framework of classical classifiers because NN, SVM, and SRC can be interpreted as special cases of RVMs with different definitions of representative vectors. Thus, the underlying relationship among a number of classical classifiers is revealed for better understanding of pattern classification. On the other hand, novel and advanced classifiers are inspired in the framework of RVMs. For example, a robust pattern classification method called discriminant vector machine (DVM) is motivated from RVMs. Given a test example, DVM first finds its k-NNs and then performs classification based on the robust M-estimator and manifold regularization. Extensive experimental evaluations on a variety of visual recognition tasks such as face recognition (Yale and Face Recognition Grand Challenge databases), object categorization (Caltech-101 dataset), and action recognition (Action Similarity LAbeliNg) demonstrate the advantages of DVM over other classifiers.
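
    The framework's simplest instance takes class means as the representative vectors, giving the nearest-class-mean rule (NN and SRC correspond to other choices of representatives); a minimal sketch:

        import numpy as np

        class NearestRepresentative:
            def fit(self, X, y):
                self.labels = np.unique(y)
                # class means serve as the representative vectors
                self.reps = np.stack([X[y == c].mean(axis=0) for c in self.labels])
                return self

            def predict(self, X):
                # label of the nearest representative vector wins
                d = np.linalg.norm(X[:, None, :] - self.reps[None, :, :], axis=2)
                return self.labels[d.argmin(axis=1)]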

  10. Effect of gingival fluid on marginal adaptation of Class II resin-based composite restorations.

    Science.gov (United States)

    Spahr, A; Schön, F; Haller, B

    2000-10-01

    To evaluate in vitro the marginal quality of Class II composite restorations at the gingival enamel margins as affected by contamination of the cavities with gingival fluid (GF) during different steps of resin bonding procedures. 70 Class II cavities were prepared in extracted human molars and restored with composite using a multi-component bonding system (OptiBond FL/Herculite XRV; OPTI) or a single-bottle adhesive (Syntac Sprint/Tetric Ceram; SYN). The cavities were contaminated with human GF: C1 after acid etching, C2 after application of the primer (OPTI) or light-curing of the primer-adhesive (SYN), and C3 after light-curing of the resin adhesive (OPTI). Uncontaminated cavities were used as the control (C0). The restored teeth were subjected to thermocycling (TC) and replicated for SEM analysis of marginal gap formation. Microleakage at the gingival margins was determined by dye penetration with basic fuchsin. Statistical analysis used non-parametric tests (Kruskal-Wallis test, Mann-Whitney test with Bonferroni correction). In both bonding systems, contamination with GF after acid etching (C1) did not impair the marginal quality; the mean percentages of continuous margin/mean depths of dye penetration were: OPTI: C0: 88.5%/0.10 mm, C1: 95.6%/0.04 mm; SYN: C0: 90.9%/0.08 mm, C1: 97.0%/0.05 mm. Marginal adaptation was adversely affected when GF contamination was performed after…

  11. Oil inventories should be based on margins, supply reliability

    International Nuclear Information System (INIS)

    Waguespack, K.; Cantor, B.D.

    1996-01-01

    US oil inventories have plummeted to their lowest recorded levels this year, leading industry observers to conclude that refiners have adopted new just-in-time (JIT) inventory policies. Total crude oil inventories are about 300 million bbl -- 8% below the 10-year average. Distillate inventories posted similar declines this year because of unusually cold winter temperatures and refiners' reluctance to build sufficient stocks in the autumn months. Gasoline stocks are 20% below the 10-year average at 200 million bbl, despite forecasts of record-high gasoline demand this summer. The sudden drop in crude and product inventories this year is widely considered a sign that refiners have implemented JIT, signaling a permanent shift to reduced stocks. The authors submit that the shift towards reduced oil inventories is not related to a concerted adoption of JIT by US refiners, and that oil inventory management decisions should instead be based on refining margins and supply reliability. The paper discusses the JIT revolution and the optimal-inventory model

  12. Naive Bayes classifiers for verbal autopsies: comparison to physician-based classification for 21,000 child and adult deaths.

    Science.gov (United States)

    Miasnikof, Pierre; Giannakeas, Vasily; Gomes, Mireille; Aleksandrowicz, Lukasz; Shestopaloff, Alexander Y; Alam, Dewan; Tollman, Stephen; Samarikhalaj, Akram; Jha, Prabhat

    2015-11-25

    Verbal autopsies (VA) are increasingly used in low- and middle-income countries where most causes of death (COD) occur at home without medical attention, and home deaths differ substantially from hospital deaths. Hence, there is no plausible "standard" against which VAs for home deaths may be validated. Previous studies have shown contradictory performance of automated methods compared to physician-based classification of CODs. We sought to compare the performance of the classic naive Bayes classifier (NBC) versus existing automated classifiers, using physician-based classification as the reference. We compared the performance of NBC, an open-source Tariff Method (OTM), and InterVA-4 on three datasets covering about 21,000 child and adult deaths: the ongoing Million Death Study in India, and health and demographic surveillance sites in Agincourt, South Africa and Matlab, Bangladesh. We applied several training and testing splits of the data to quantify the sensitivity and specificity compared to physician coding for individual CODs and to test the cause-specific mortality fractions at the population level. The NBC achieved comparable sensitivity (median 0.51, range 0.48-0.58) to OTM (median 0.50, range 0.41-0.51), with InterVA-4 having lower sensitivity (median 0.43, range 0.36-0.47) in all three datasets, across all CODs. Consistency of CODs was comparable for NBC and InterVA-4 but lower for OTM. NBC and OTM achieved better performance when using a local rather than a non-local training dataset. At the population level, NBC scored the highest cause-specific mortality fraction accuracy across the datasets (median 0.88, range 0.87-0.93), followed by InterVA-4 (median 0.66, range 0.62-0.73) and OTM (median 0.57, range 0.42-0.58). NBC outperforms current similar COD classifiers at the population level. Nevertheless, no current automated classifier adequately replicates physician classification for individual CODs. There is a need for further research on automated
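
    For orientation, a classic NBC over binary symptom indicators is a few lines in scikit-learn (synthetic placeholders for the VA symptom matrix and physician-coded labels):

        import numpy as np
        from sklearn.naive_bayes import BernoulliNB

        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(1000, 40))  # binary symptom indicators
        y = rng.integers(0, 5, size=1000)        # physician-coded COD classes

        clf = BernoulliNB(alpha=1.0).fit(X, y)   # alpha = Laplace smoothing
        # population-level cause-specific mortality fractions (CSMF)
        csmf = np.bincount(clf.predict(X), minlength=5) / len(y)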

  13. On probabilistically defined margins in radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Papiez, Lech; Langer, Mark [Department of Radiation Oncology, Indiana University, Indianapolis, IN (United States)]

    2006-08-21

    Margins about a target volume subject to external beam radiation therapy are designed to assure that the target volume of tissue to be sterilized by treatment is adequately covered by a lethal dose. Thus, margins are meant to guarantee that all potential variation in tumour position relative to beams allows the tumour to stay within the margin. Variation in tumour position can be broken into two types of dislocations, reducible and irreducible. Reducible variations in tumour position are those that can be accommodated with the use of modern image-guided techniques that derive parameters for compensating motions of patient bodies and/or motions of beams relative to patient bodies. Irreducible variations in tumour position are those random dislocations of a target that are related to errors intrinsic in the design and performance limitations of the software and hardware, as well as limitations of human perception and decision making. Thus, margins in the era of image-guided treatments will need to accommodate only random errors residual in patient setup accuracy (after image-guided setup corrections) and in the accuracy of systems designed to track moving and deforming tissues of the targeted regions of the patient's body. Therefore, construction of these margins will have to be based on purely statistical data. The characteristics of these data have to be determined through the central limit theorem and Gaussian properties of limiting error distributions. In this paper, we show how statistically determined margins are to be designed in the general case of correlated distributions of position errors in three-dimensional space. In particular, we show how the minimal margins for a given level of statistical confidence are found. Then, how they are to be used to determine geometrically minimal PTV that provides coverage of GTV at the assumed level of statistical confidence. Our results generalize earlier recommendations for statistical, central limit theorem-based
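
    For a Gaussian error vector with covariance Σ, the kind of statement involved can be made explicit (a standard multivariate-normal coverage result, given here for illustration rather than quoted from the paper):

        % For residual setup error e ~ N(0, \Sigma) in three dimensions,
        % e^T \Sigma^{-1} e is chi-squared with 3 degrees of freedom, so the
        % smallest region with coverage 1 - \alpha is the ellipsoid
        \[
          M_{\alpha} = \left\{ x \in \mathbb{R}^{3} :
              x^{\mathsf{T}} \Sigma^{-1} x \le \chi^{2}_{3,\,1-\alpha} \right\},
        \]
        % and the PTV is the GTV expanded (Minkowski sum) by M_{\alpha}.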

  14. On probabilistically defined margins in radiation therapy

    International Nuclear Information System (INIS)

    Papiez, Lech; Langer, Mark

    2006-01-01

    Margins about a target volume subject to external beam radiation therapy are designed to assure that the target volume of tissue to be sterilized by treatment is adequately covered by a lethal dose. Thus, margins are meant to guarantee that all potential variation in tumour position relative to beams allows the tumour to stay within the margin. Variation in tumour position can be broken into two types of dislocations, reducible and irreducible. Reducible variations in tumour position are those that can be accommodated with the use of modern image-guided techniques that derive parameters for compensating motions of patient bodies and/or motions of beams relative to patient bodies. Irreducible variations in tumour position are those random dislocations of a target that are related to errors intrinsic in the design and performance limitations of the software and hardware, as well as limitations of human perception and decision making. Thus, margins in the era of image-guided treatments will need to accommodate only random errors residual in patient setup accuracy (after image-guided setup corrections) and in the accuracy of systems designed to track moving and deforming tissues of the targeted regions of the patient's body. Therefore, construction of these margins will have to be based on purely statistical data. The characteristics of these data have to be determined through the central limit theorem and Gaussian properties of limiting error distributions. In this paper, we show how statistically determined margins are to be designed in the general case of correlated distributions of position errors in three-dimensional space. In particular, we show how the minimal margins for a given level of statistical confidence are found. Then, how they are to be used to determine geometrically minimal PTV that provides coverage of GTV at the assumed level of statistical confidence. Our results generalize earlier recommendations for statistical, central limit theorem-based

  15. Testing for marginal linear effects in quantile regression

    KAUST Repository

    Wang, Huixia Judy

    2017-10-23

    The paper develops a new marginal testing procedure to detect significant predictors that are associated with the conditional quantiles of a scalar response. The idea is to fit the marginal quantile regression on each predictor one at a time, and then to base the test on the t-statistics that are associated with the most predictive predictors. A resampling method is devised to calibrate this test statistic, which has non-regular limiting behaviour due to the selection of the most predictive variables. Asymptotic validity of the procedure is established in a general quantile regression setting in which the marginal quantile regression models can be misspecified. Even though a fixed dimension is assumed to derive the asymptotic results, the test proposed is applicable and computationally feasible for large dimensional predictors. The method is more flexible than existing marginal screening test methods based on mean regression and has the added advantage of being robust against outliers in the response. The approach is illustrated by using an application to a human immunodeficiency virus drug resistance data set.
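
    A stripped-down sketch of the procedure (with a naive permutation stand-in for the paper's resampling calibration; assumes statsmodels):

        import numpy as np
        import statsmodels.api as sm

        def max_marginal_t(y, X, tau=0.5):
            # fit the marginal quantile regression on each predictor one at
            # a time; return the largest absolute slope t-statistic
            tstats = []
            for j in range(X.shape[1]):
                Zj = sm.add_constant(X[:, j])
                res = sm.QuantReg(y, Zj).fit(q=tau)
                tstats.append(abs(res.tvalues[1]))
            return max(tstats)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 10))
        y = 0.5 * X[:, 3] + rng.standard_t(df=3, size=200)  # heavy-tailed noise
        obs = max_marginal_t(y, X)
        null = [max_marginal_t(rng.permutation(y), X) for _ in range(100)]
        pval = np.mean([n >= obs for n in null])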

  16. Testing for marginal linear effects in quantile regression

    KAUST Repository

    Wang, Huixia Judy; McKeague, Ian W.; Qian, Min

    2017-01-01

    The paper develops a new marginal testing procedure to detect significant predictors that are associated with the conditional quantiles of a scalar response. The idea is to fit the marginal quantile regression on each predictor one at a time, and then to base the test on the t-statistics that are associated with the most predictive predictors. A resampling method is devised to calibrate this test statistic, which has non-regular limiting behaviour due to the selection of the most predictive variables. Asymptotic validity of the procedure is established in a general quantile regression setting in which the marginal quantile regression models can be misspecified. Even though a fixed dimension is assumed to derive the asymptotic results, the test proposed is applicable and computationally feasible for large dimensional predictors. The method is more flexible than existing marginal screening test methods based on mean regression and has the added advantage of being robust against outliers in the response. The approach is illustrated by using an application to a human immunodeficiency virus drug resistance data set.

  17. Rats classified as low or high cocaine locomotor responders: A unique model involving striatal dopamine transporters that predicts cocaine addiction-like behaviors

    Science.gov (United States)

    Yamamoto, Dorothy J.; Nelson, Anna M.; Mandt, Bruce H.; Larson, Gaynor A.; Rorabaugh, Jacki M.; Ng, Christopher M.C.; Barcomb, Kelsey M.; Richards, Toni L.; Allen, Richard M.; Zahniser, Nancy R.

    2013-01-01

    Individual differences are a hallmark of drug addiction. Here, we describe a rat model based on differential initial responsiveness to low dose cocaine. Despite similar brain cocaine levels, individual outbred Sprague-Dawley rats exhibit markedly different magnitudes of acute cocaine-induced locomotor activity and, thereby, can be classified as low or high cocaine responders (LCRs or HCRs). LCRs and HCRs differ in drug-induced, but not novelty-associated, hyperactivity. LCRs have higher basal numbers of striatal dopamine transporters (DATs) than HCRs and exhibit marginal cocaine inhibition of in vivo DAT activity and cocaine-induced increases in extracellular DA. Importantly, lower initial cocaine response predicts greater locomotor sensitization, conditioned place preference and greater motivation to self-administer cocaine following low dose acquisition. Further, outbred Long-Evans rats classified as LCRs, versus HCRs, are more sensitive to cocaine’s discriminative stimulus effects. Overall, results to date with the LCR/HCR model underscore the contribution of striatal DATs to individual differences in initial cocaine responsiveness and the value of assessing the influence of initial drug response on subsequent expression of addiction-like behaviors. PMID:23850581

  18. Classifier-ensemble incremental-learning procedure for nuclear transient identification at different operational conditions

    Energy Technology Data Exchange (ETDEWEB)

    Baraldi, Piero, E-mail: piero.baraldi@polimi.i [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Razavi-Far, Roozbeh [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Zio, Enrico [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Ecole Centrale Paris-Supelec, Paris (France)

    2011-04-15

    An important requirement for the practical implementation of empirical diagnostic systems is the capability of classifying transients in all plant operational conditions. The present paper proposes an approach based on an ensemble of classifiers for incrementally learning transients under different operational conditions. New classifiers are added to the ensemble where transients occurring in new operational conditions are not satisfactorily classified. The construction of the ensemble is made by bagging; the base classifier is a supervised Fuzzy C Means (FCM) classifier whose outcomes are combined by majority voting. The incremental learning procedure is applied to the identification of simulated transients in the feedwater system of a Boiling Water Reactor (BWR) under different reactor power levels.
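
    A minimal sketch of the incremental-learning loop described above. A bagged k-nearest-neighbour model stands in for the supervised Fuzzy C-Means base classifier (no off-the-shelf implementation is assumed), integer class labels are assumed, and the acceptance threshold is illustrative.

```python
# Sketch of the incremental ensemble; bagged k-NN stands in for the paper's
# supervised Fuzzy C-Means base classifier. Integer class labels assumed.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier

class IncrementalEnsemble:
    def __init__(self, accept_threshold=0.9):
        self.members, self.threshold = [], accept_threshold

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.members])
        # majority vote over the ensemble members
        return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)

    def update(self, X, y):
        # add a bagged classifier only if the new condition's transients
        # are not already satisfactorily classified
        if self.members and (self.predict(X) == y).mean() >= self.threshold:
            return
        member = BaggingClassifier(KNeighborsClassifier(), n_estimators=10)
        self.members.append(member.fit(X, y))
```

    Feeding batches condition by condition with update(X_c, y_c) mimics the sequential arrival of new operational conditions: a new bagged member is added only when the current ensemble fails on that condition.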

  19. Classifier-ensemble incremental-learning procedure for nuclear transient identification at different operational conditions

    International Nuclear Information System (INIS)

    Baraldi, Piero; Razavi-Far, Roozbeh; Zio, Enrico

    2011-01-01

    An important requirement for the practical implementation of empirical diagnostic systems is the capability of classifying transients in all plant operational conditions. The present paper proposes an approach based on an ensemble of classifiers for incrementally learning transients under different operational conditions. New classifiers are added to the ensemble where transients occurring in new operational conditions are not satisfactorily classified. The construction of the ensemble is made by bagging; the base classifier is a supervised Fuzzy C Means (FCM) classifier whose outcomes are combined by majority voting. The incremental learning procedure is applied to the identification of simulated transients in the feedwater system of a Boiling Water Reactor (BWR) under different reactor power levels.

  20. Participatory democracy and Hillary Clinton’s marginalized fandom

    Directory of Open Access Journals (Sweden)

    Abigail De Kosnik

    2008-09-01

    After the drawn-out, heated contest for the Democratic Party presidential nomination and Senator Obama's victory over Senator Clinton, a segment of Clinton's supporters are threatening to leave the party rather than fall in line behind the nominee. This essay argues that the battle between Clinton's and Obama's followers is best understood as a war between fan bases, with Obama enthusiasts constituting the dominant fandom and Clinton voters occupying the position of marginalized fandom. Marginalized fandoms tend to blame the opposing fan base, intermediaries, and The Powers That Be for their fan campaigns' losses, and Clinton's fans are adhering to this pattern. However, the Clinton marginalized fandom's complaints can be regarded as valuable critiques that, if noted rather than dismissed, could greatly strengthen participatory democracy in the United States.

  1. Marginal adaptation, fracture load and macroscopic failure mode of adhesively luted PMMA-based CAD/CAM inlays.

    Science.gov (United States)

    Ender, Andreas; Bienz, Stefan; Mörmann, Werner; Mehl, Albert; Attin, Thomas; Stawarczyk, Bogna

    2016-02-01

    To evaluate marginal adaptation, fracture load and failure types of CAD/CAM polymeric inlays. Standardized prepared human molars (48) were divided into four groups (n=12): (A) PCG (positive control group); adhesively luted glass-ceramic inlays, (B) TRX; CAD/CAM polymeric inlays luted using a self-adhesive resin cement, (C) TAC; CAD/CAM polymeric inlays luted using a conventional resin cement, and (D) NCG (negative control group); direct-filled resin-based composite restorations. All specimens were subjected to a chewing simulator. Before and after chewing fatigue, marginal adaptation was assessed at two interfaces: (1) between dental hard tissues and luting cement and (2) between luting cement and restoration. Thereafter, the specimens were loaded and the fracture loads, as well as the failure types, were determined. The data were analysed using three- and one-way ANOVA with post hoc Scheffé test and two-sample Student's t-tests (p<0.05). Marginal adaptation for interface 1 showed significantly better results for TRX and PCG than for TAC (p=0.001-0.02) and NCG (p=0.001-0.047). For interface 2, marginal adaptation for TAC was significantly inferior to TRX (p<0.05); chewing fatigue also decreased the marginal adaptation of TAC and NCG. No significant differences in fracture load were found between all tested groups. Self-adhesive luted polymeric CAD/CAM inlays showed similar marginal adaptation and fracture load values compared to adhesively luted glass-ceramic inlays. Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  2. Current Directional Protection of Series Compensated Line Using Intelligent Classifier

    Directory of Open Access Journals (Sweden)

    M. Mollanezhad Heydarabadi

    2016-12-01

    Current inversion conditions lead to incorrect operation of current-based directional relays in power systems with series compensation. This paper proposes the application of an intelligent system for fault direction classification. A new current directional protection scheme based on an intelligent classifier is proposed for the series compensated line. The proposed classifier uses only half a cycle of pre-fault and post-fault current samples at the relay location. A large number of forward and backward fault simulations under different system conditions, on a transmission line with a fixed series capacitor, were carried out using PSCAD/EMTDC software. The applicability of decision trees (DT), probabilistic neural networks (PNN) and support vector machines (SVM) is investigated using data simulated under different system conditions. The performance comparison indicates that the SVM is the most suitable classifier for fault direction discrimination. Backward faults can be accurately distinguished from forward faults even under current inversion, without requiring detection of the current inversion condition.
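
    A hedged sketch of the classifier stage: the feature vector is half a cycle of pre-fault plus half a cycle of post-fault current at the relay, fed to an RBF-kernel SVM. The toy waveform generator below merely stands in for the PSCAD/EMTDC simulations; sampling rate, phase shifts and noise level are invented.

```python
# Toy stand-in for the PSCAD/EMTDC cases: half a cycle of pre-fault plus half
# a cycle of post-fault relay current, classified by an RBF-kernel SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
fs, f0 = 1600, 50                      # sampling and power frequency (Hz)
half_cycle = fs // (2 * f0)            # samples in half a cycle

def toy_case(direction):
    t = np.arange(2 * half_cycle) / fs
    pre = np.sin(2 * np.pi * f0 * t[:half_cycle])
    shift = np.pi / 3 if direction else -np.pi / 3   # crude direction cue
    post = 3 * np.sin(2 * np.pi * f0 * t[half_cycle:] + shift)
    return np.concatenate([pre, post]) + 0.05 * rng.normal(size=2 * half_cycle)

y = rng.integers(0, 2, 400)            # 0 = backward fault, 1 = forward fault
X = np.array([toy_case(d) for d in y])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10)).fit(Xtr, ytr)
print("direction accuracy:", clf.score(Xte, yte))
```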

  3. Evaluating a k-nearest neighbours-based classifier for locating faulty areas in power systems

    Directory of Open Access Journals (Sweden)

    Juan José Mora Flórez

    2008-09-01

    This paper reports a strategy for identifying and locating faults in a power distribution system, based on the K-nearest neighbours technique. This technique estimates the distance from the features describing a particular fault being classified to the faults presented during the training stage. If new data are presented to the proposed fault locator, they are classified according to the nearest example recovered. A characterisation of the voltage and current measurements obtained at one single line end is also presented in this document for assigning the faulted area in a power system. The proposed strategy was tested in a real power distribution system, obtaining average confidence indexes of 93%, a good indicator of the proposal's high performance. The results showed how a fault can be located using features obtained from voltage and current, improving utility response and thereby improving system continuity indexes in power distribution systems.
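
    The nearest-neighbour strategy reduces to a very small sketch. The features, area labels and data below are hypothetical placeholders for the single-end voltage/current descriptors used in the paper.

```python
# Hypothetical features per fault record, e.g. voltage sag depth, current
# rise and phase jump measured at the single line end; labels are area ids.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
n_areas, per_area = 4, 50
X = np.vstack([rng.normal(loc=k, scale=0.3, size=(per_area, 3))
               for k in range(n_areas)])          # toy feature clusters
y = np.repeat(np.arange(n_areas), per_area)

locator = KNeighborsClassifier(n_neighbors=5).fit(X, y)
new_fault = rng.normal(loc=2.0, scale=0.3, size=(1, 3))
print("predicted faulted area:", locator.predict(new_fault)[0])
```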

  4. Parameterization of a fuzzy classifier for the diagnosis of an industrial process

    International Nuclear Information System (INIS)

    Toscano, R.; Lyonnet, P.

    2002-01-01

    The aim of this paper is to present a classifier based on a fuzzy inference system. For this classifier, we propose a parameterization method that does not necessarily rely on iterative training. This approach can be seen as a pre-parameterization, which allows the determination of the rule base and the parameters of the membership functions. We also present a continuous and differentiable version of the previous classifier and suggest an iterative learning algorithm based on a gradient method. An example using the IRIS learning basis, a benchmark for classification problems, is presented to show the performance of this classifier. Finally, this classifier is applied to the diagnosis of a DC motor, showing the utility of the method. However, in many cases the total knowledge necessary for the synthesis of the fuzzy diagnosis system (FDS) is not directly available, and it must be extracted from an often considerable mass of information. For this reason, a general methodology for the design of an FDS is presented and illustrated on a non-linear plant.
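
    The pre-parameterization idea (deriving the rule base and membership parameters directly from data, without iterative training) can be illustrated as follows: one rule per class, with Gaussian membership functions whose centres and widths are the per-class means and standard deviations. This is a simplified sketch, not the paper's method; the IRIS data set is used only because the abstract mentions it.

```python
# Simplified pre-parameterization: one rule per class, Gaussian memberships
# with centres/widths set directly from per-class statistics, no iteration.
import numpy as np
from sklearn.datasets import load_iris

class FuzzyClassifier:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.sd = np.array([X[y == c].std(axis=0) + 1e-6 for c in self.classes_])
        return self

    def predict(self, X):
        # rule firing strength = product of Gaussian memberships (t-norm)
        d = (X[:, None, :] - self.mu[None]) / self.sd[None]
        strength = np.exp(-0.5 * d ** 2).prod(axis=2)
        return self.classes_[strength.argmax(axis=1)]

X, y = load_iris(return_X_y=True)
acc = (FuzzyClassifier().fit(X, y).predict(X) == y).mean()
print("training accuracy on IRIS:", acc)
```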

  5. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Directory of Open Access Journals (Sweden)

    M. Al-Rousan

    2005-08-01

    Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
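
    The non-iterative training property mentioned above follows from the fact that a polynomial classifier is linear in its expanded features, so the weights solve a single least-squares problem. A minimal sketch with invented stand-in features:

```python
# Polynomial classifier in closed form: expand features, solve one
# least-squares problem against one-hot targets. Features are invented
# stand-ins for the ArSL measurements.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

def fit_poly(X, y, degree=2):
    Phi = PolynomialFeatures(degree).fit_transform(X)
    T = np.eye(y.max() + 1)[y]                   # one-hot targets
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)  # no iterative training
    return W

def predict_poly(W, X, degree=2):
    return (PolynomialFeatures(degree).fit_transform(X) @ W).argmax(axis=1)

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(int)          # nonlinear class boundary
W = fit_poly(X, y)
print("training accuracy:", (predict_poly(W, X) == y).mean())
```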

  6. Comparison of Classifier Architectures for Online Neural Spike Sorting.

    Science.gov (United States)

    Saeed, Maryam; Khan, Amir Ali; Kamboh, Awais Mehmood

    2017-04-01

    High-density intracranial recordings from micro-electrode arrays need to undergo spike sorting in order to associate the recorded neuronal spikes with particular neurons. This involves spike detection, feature extraction, and classification. To reduce data transmission and power requirements, on-chip real-time processing is becoming very popular. However, high computational resources are required for classifiers in on-chip spike sorters, making scalability a great challenge. In this review paper, we analyze several popular classifiers to propose five new hardware architectures using the off-chip training with on-chip classification approach. These include support vector classification, fuzzy C-means classification, self-organizing maps classification, moving-centroid K-means classification, and cosine distance classification. The performance of these architectures is analyzed in terms of accuracy and resource requirements. We establish that the neural-network-based self-organizing maps classifier offers the most viable solution. A spike sorter based on the self-organizing maps classifier requires only 7.83% of the computational resources of the best-reported spike sorter, hierarchical adaptive means, while offering a 3% better accuracy at 7 dB SNR.

  7. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers.

    Directory of Open Access Journals (Sweden)

    Mansour Alsaleh

    Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents.

  8. A supervised contextual classifier based on a region-growth algorithm

    DEFF Research Database (Denmark)

    Lira, Jorge; Maletti, Gabriela Mariel

    2002-01-01

    A supervised classification scheme to segment optical multi-spectral images has been developed. In this classifier, an automated region-growth algorithm delineates the training sets. This algorithm handles three parameters: an initial pixel seed, a window size and a threshold for each class. A su...

  9. A heuristic ranking approach on capacity benefit margin determination using Pareto-based evolutionary programming technique.

    Science.gov (United States)

    Othman, Muhammad Murtadha; Abd Rahman, Nurulazmi; Musirin, Ismail; Fotuhi-Firuzabad, Mahmud; Rajabi-Ghahnavieh, Abbas

    2015-01-01

    This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by the load-serving entities (LSE) to estimate a certain margin of transfer capability so that a reliable access to generation through interconnected system could be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies are conducted on the modified IEEE-RTS79 and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is in terms of flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.

  10. A Heuristic Ranking Approach on Capacity Benefit Margin Determination Using Pareto-Based Evolutionary Programming Technique

    Directory of Open Access Journals (Sweden)

    Muhammad Murtadha Othman

    2015-01-01

    This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by the load-serving entities (LSE) to estimate a certain margin of transfer capability so that a reliable access to generation through interconnected system could be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies are conducted on the modified IEEE-RTS79 and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is in terms of flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.

  11. Reliability-Based Marginal Cost Pricing Problem Case with Both Demand Uncertainty and Travelers’ Perception Errors

    Directory of Open Access Journals (Sweden)

    Shaopeng Zhong

    2013-01-01

    Focusing on the first-best marginal cost pricing (MCP) in a stochastic network with both travel demand uncertainty and stochastic perception errors within the travelers’ route choice decision processes, this paper develops a perceived risk-based stochastic network marginal cost pricing (PRSN-MCP) model. Numerical examples based on an integrated method combining the moment analysis approach, the fitting distribution method, and the reliability measures are also provided to demonstrate the importance and properties of the proposed model. The main finding is that ignoring the effect of travel time reliability and travelers’ perception errors may significantly reduce the performance of the first-best MCP tolls, especially under high travelers’ confidence and network congestion levels. The analysis result could also enhance our understanding of (1) the effect of stochastic perception error (SPE) on the perceived travel time distribution and the components of road toll; (2) the effect of road toll on the actual travel time distribution and its reliability measures; (3) the effect of road toll on the total network travel time distribution and its statistics; and (4) the effect of travel demand level and the value of reliability (VoR) level on the components of road toll.

  12. Impact of organ shape variations on margin concepts for cervix cancer ART.

    Science.gov (United States)

    Seppenwoolde, Yvette; Stock, Markus; Buschmann, Martin; Georg, Dietmar; Bauer-Novotny, Kwei-Yuang; Pötter, Richard; Georg, Petra

    2016-09-01

    Target and organ movement motivate adaptive radiotherapy (ART) for cervix cancer patients. We investigated the dosimetric impact of margin concepts with different levels of complexity on both organ at risk (OAR) sparing and PTV coverage. Weekly CT and daily CBCT scans were delineated for 10 patients. The dosimetric impact of organ shape variations was evaluated for four (isotropic) margin concepts: two static PTVs (PTV6mm and PTV15mm), a PTV based on an ITV of the planning CT and the CBCTs of the first treatment week (PTV-ART-ITV) and an adaptive PTV based on a library approach (PTV-ART-Library). Using static concepts, OAR doses increased with large margins, while smaller margins compromised target coverage. The ART PTVs resulted in comparable target coverage and better sparing of bladder (V40Gy: 15% and 7% less), rectum (V40Gy: 18 and 6 cc less) and bowel (V40Gy: 106 and 15 cc less) compared to PTV15mm. Target coverage evaluation showed that for elective fields a static 5 mm margin sufficed. PTV-ART-Library achieved the best dosimetric results. However, when weighing clinical benefit against workload, ITV margins based on repetitive movement evaluation during the first week also provide improvements over static margin concepts. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Automated Detection of Driver Fatigue Based on AdaBoost Classifier with EEG Signals

    Directory of Open Access Journals (Sweden)

    Jianfeng Hu

    2017-08-01

    fatigue through the classification of EEG signals. Conclusion: by using the combination of FE features and an AdaBoost classifier to detect EEG-based driver fatigue, this paper establishes confidence in exploring the inherent physiological mechanisms and supports wearable applications.
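
    A minimal sketch of the detection stage, assuming the fuzzy-entropy (FE) features have already been computed per EEG channel. The feature matrix below is a random placeholder and the labels are synthesized from it, so the printed score only demonstrates the mechanics, not real performance.

```python
# AdaBoost over per-channel fuzzy-entropy (FE) features; FE matrix and
# labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
FE = rng.normal(size=(200, 32))                       # 200 epochs x 32 channels
fatigued = (FE[:, :5].mean(axis=1) > 0).astype(int)   # 1 = fatigued state

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, FE, fatigued, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```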

  14. Super resolution reconstruction of infrared images based on classified dictionary learning

    Science.gov (United States)

    Liu, Fei; Han, Pingli; Wang, Yi; Li, Xuan; Bai, Lu; Shao, Xiaopeng

    2018-05-01

    Infrared images often suffer from low resolution owing to the limitations of imaging devices. An economical approach to combat this problem involves reconstructing high-resolution images by reasonable methods without upgrading devices. Inspired by compressed sensing theory, this study presents and demonstrates a Classified Dictionary Learning method to reconstruct high-resolution infrared images. It classifies features of the samples into several reasonable clusters and trains a dictionary pair for each cluster. The optimal pair of dictionaries is chosen for each image reconstruction and, therefore, more satisfactory results are achieved without an increase in computational complexity and time cost. Experiments demonstrated that this is a viable method for infrared image reconstruction, since it improves image resolution and recovers detailed information of targets.

  15. "We call ourselves marginalized"

    DEFF Research Database (Denmark)

    Jørgensen, Nanna Jordt

    2014-01-01

    of the people we refer to as marginalized. In this paper, I discuss how young secondary school graduates from a pastoralist community in Kenya use and negotiate indigeneity, marginal identity, and experiences of marginalization in social navigations aimed at broadening their current and future opportunities. I...

  16. Reinforcing marginality? Maternal health interventions in rural Nicaragua.

    Science.gov (United States)

    Kvernflaten, Birgit

    2017-06-23

    To achieve Millennium Development Goal 5 on maternal health, many countries have focused on marginalized women who lack access to care. Promoting facility-based deliveries to ensure skilled birth attendance and emergency obstetric care has become a main measure for preventing maternal deaths, so women who opt for home births are often considered 'marginal' and in need of targeted intervention. Drawing upon ethnographic data from Nicaragua, this paper critically examines the concept of marginality in the context of official efforts to increase institutional delivery amongst the rural poor, and discusses lack of access to health services among women living in peripheral areas as a process of marginalization. The promotion of facility birth as the new norm, in turn, generates a process of 're-marginalization', whereby public health officials morally disapprove of women who give birth at home, viewing them as non-compliers and a problem to the system. In rural Nicaragua, there is a discrepancy between the public health norm and women's own preferences and desires for home birth. These women live at the margins also in spatial and societal terms, and must relate to a health system they find incapable of providing good, appropriate care. Strong public pressure for institutional delivery makes them feel distressed and pressured. Paradoxically then, the aim of including marginal groups in maternal health programmes engenders resistance to facility birth.

  17. Network Intrusion Detection System (NIDS in Cloud Environment based on Hidden Naïve Bayes Multiclass Classifier

    Directory of Open Access Journals (Sweden)

    Hafza A. Mahmood

    2018-04-01

    The cloud environment is a next-generation internet-based computing system that supplies customizable services to end users for work or access to various cloud applications. In order to provide security and decrease the damage to information systems, networks and computer systems, it is important to provide an intrusion detection system (IDS). Cloud environments are now under threat from network intrusions, one of the most prevalent and offensive being Denial of Service (DoS) attacks, which cause dangerous impacts on cloud computing systems. This paper proposes a Hidden Naïve Bayes (HNB) classifier to handle DoS attacks; HNB is a data mining (DM) model that relaxes the conditional independence assumption of the Naïve Bayes (NB) classifier. The proposed system uses the HNB classifier supported by discretization and feature selection, where selecting the best features enhances the performance of the system and reduces computing time. To evaluate the performance of the proposed system, the KDD CUP 99 and NSL-KDD datasets were used. The experimental results show that the HNB classifier improves the performance of the NIDS in terms of accuracy and detection of DoS attacks: the accuracy of DoS detection is 100% in three tests on the KDD CUP 99 dataset using only 12 features selected by gain ratio, while on the NSL-KDD dataset the accuracy of DoS detection is 90% in three experiments using only 10 selected features.
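
    sklearn ships no Hidden Naïve Bayes, so the sketch below only mirrors the pipeline shape: rank features (mutual information standing in for the gain-ratio ranking), keep 12 of them as in the KDD experiment, and fit a plain Naïve Bayes model. All data are synthetic placeholders.

```python
# Pipeline shape only: mutual information stands in for gain-ratio ranking,
# GaussianNB stands in for Hidden Naive Bayes, data are synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 41))                # 41 KDD-style features
y = (X[:, :3].sum(axis=1) + rng.normal(size=1000) > 0).astype(int)  # 1 = DoS

nids = make_pipeline(
    SelectKBest(mutual_info_classif, k=12),    # keep 12 features, as in KDD test
    GaussianNB(),
).fit(X, y)
print("training accuracy:", nids.score(X, y))
```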

  18. Dynamic integration of classifiers in the space of principal components

    NARCIS (Netherlands)

    Tsymbal, A.; Pechenizkiy, M.; Puuronen, S.; Patterson, D.W.; Kalinichenko, L.A.; Manthey, R.; Thalheim, B.; Wloka, U.

    2003-01-01

    Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. It was shown that, for an ensemble to be successful, it should consist of accurate and diverse base classifiers. However, it is also important that the

  19. The value of breast lumpectomy margin assessment as a predictor of residual tumor burden

    International Nuclear Information System (INIS)

    Wazer, David E.; Schmidt-Ullrich, Rupert K.; Schmid, Christopher H.; Ruthazer, Robin; Kramer, Bradley; Safaii, Homa; Graham, Roger

    1997-01-01

    Purpose: Margin assessment is commonly used as a guide to the relative aggressiveness of therapy for breast conserving treatment (BCT), though its value as a predictor of the presence, type, or extent of residual tumor has not been conclusively studied. Controversy continues to exist as to what constitutes a margin that is 'positive', 'close', or 'negative'. We attempt to address these issues through an analysis of re-excision specimens. Patients and Methods: As part of an institutional prospective practice approach for BCT, 265 cases with AJCC Stage I/II carcinoma with an initial excision margin that was ≤2 mm or indeterminate were subjected to re-excision. The probability of residual tumor (+RE) was evaluated with respect to tumor size, histopathologic subtype, relative closeness of the measured margin, the extent of margin positivity graded as focal, minimal, moderate, or extensive, and the extent of specimen processing as reflected in the number of cut sections per specimen volume (S:V ratio). The amount of residual tumor was graded as microscopic, small, medium, or large. The histopathologic subtype of tumor in the re-excision specimen was classified as having an invasive component (ICa) or pure DCIS (DCIS). Results: The primary excision margin was positive, >0≤1 mm, 1.1-2 mm, and indeterminate in 60%, 18%, 5%, and 17%, respectively. The predominant histopathologies in the initial excision specimens were invasive ductal (IDC) (50%) and tumors with an extensive intraductal component (EIC) (43%). The histopathology of the initial excision specimen was highly predictive of the histopathology of tumor found on re-excision, as residual DCIS was found in 60% of +RE specimens with initial histopathology of EIC compared to 26% for IDC (p<0.001). Neither the extent of margin positivity nor the extent of tumor in the re-excision was significantly related to the initial histopathologic subtype; however, a +RE was seen in 59% of EIC, 43% of IDC, and 32% of invasive

  20. Deregulated model and locational marginal pricing

    International Nuclear Information System (INIS)

    Sood, Yog Raj; Padhy, N.P.; Gupta, H.O.

    2007-01-01

    This paper presents a generalized optimal model that dispatches the pool in combination with privately negotiated bilateral and multilateral contracts while maximizing social benefit. The model determines locational marginal pricing (LMP) based on marginal cost theory. It also determines the size of non-firm transactions as well as pool demand and generations. Both firm and non-firm transactions are considered in this model. The proposed model has been applied to the IEEE 30-bus test system, in which different types of transactions are added for the analysis of the proposed model. (author)
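
    In the uncongested case, marginal-cost-based LMP reduces to simple merit-order arithmetic: the price at every bus equals the cost of the marginal (last dispatched) generator. A worked toy example with invented numbers, ignoring the congestion and losses that the paper's full model would capture:

```python
# Merit-order toy example: the marginal (last dispatched) generator sets the
# price. Numbers invented; congestion and losses ignored.
generators = [          # (name, capacity MW, marginal cost $/MWh)
    ("hydro", 100, 5.0),
    ("coal",  150, 20.0),
    ("gas",   120, 45.0),
]
demand = 210.0          # MW

remaining, lmp, dispatch = demand, None, {}
for name, cap, cost in sorted(generators, key=lambda g: g[2]):
    take = min(cap, remaining)
    if take > 0:
        dispatch[name] = take
        lmp = cost      # last producing unit sets the system price
    remaining -= take
    if remaining <= 0:
        break

print(dispatch)                     # hydro 100 MW, coal 110 MW; gas idle
print("LMP = $%.2f/MWh" % lmp)      # coal is marginal -> 20.00 $/MWh
```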

  1. Geological constraints on the evolution of the Angolan margin based on reflection and refraction seismic data (ZaïAngo project)

    Science.gov (United States)

    Moulin, Maryline; Aslanian, Daniel; Olivet, Jean-Louis; Contrucci, Isabelle; Matias, Luis; Géli, Louis; Klingelhoefer, Frauke; Nouzé, Hervé; Réhault, Jean-Pierre; Unternehr, Patrick

    2005-09-01

    Deep penetration multichannel reflection and Ocean Bottom Seismometer wide-angle seismic data from the Congo-Angola margin were collected in 2000 during the ZaïAngo cruise. These data help constrain the deep structure of the continental margin, the geometry of the pre-salt sediment layers and the geometry of the Aptian salt layer. Dating the deposition of the salt relative to the chronology of the margin formation is an issue of fundamental importance for reconstructing the evolution of the margin and for the understanding of the crustal thinning processes. The data show that the crust thins abruptly, from a 30-40 km thickness to less than 10 km, over a lateral distance of less than 50 km. The transitional domain is a 180-km-wide basin. The pre-salt sediment layering within this basin is parallel to the base of the salt and hardly affected by tectonic deformation. In addition, the presence of a continuous salt cover, from the continental platform down to the presumed oceanic boundary, provides indications on the conditions of salt deposition that constrain the geometry of the margin at that time. These crucial observations imply shallow deposition environments during the rifting and suggest that vertical motions, rather than horizontal motions, prevailed during the formation of the basin.

  2. Marginal Matter

    Science.gov (United States)

    van Hecke, Martin

    2013-03-01

    All around us, things are falling apart. The foam on our cappuccinos appears solid, but gentle stirring irreversibly changes its shape. Skin, a biological fiber network, is firm when you pinch it, but soft under light touch. Sand mimics a solid when we walk on the beach but a liquid when we pour it out of our shoes. Crucially, a marginal point separates the rigid or jammed state from the mechanical vacuum (freely flowing) state - at their marginal points, soft materials are neither solid nor liquid. Here I will show how the marginal point gives birth to a third sector of soft matter physics: intrinsically nonlinear mechanics. I will illustrate this with shock waves in weakly compressed granular media, the nonlinear rheology of foams, and the nonlinear mechanics of weakly connected elastic networks.

  3. Biological impact of geometric uncertainties: what margin is needed for intra-hepatic tumors?

    International Nuclear Information System (INIS)

    Kuo, Hsiang-Chi; Liu, Wen-Shan; Wu, Andrew; Mah, Dennis; Chuang, Keh-Shih; Hong, Linda; Yaparpalvi, Ravi; Guha, Chandan; Kalnicki, Shalom

    2010-01-01

    To evaluate and compare the biological impact of different proposed margin recipes for the same geometric uncertainties for intra-hepatic tumors with different tumor cell types or clinical stages. Three different margin recipes based on tumor motion were applied to sixteen IMRT plans with a total of twenty-two intra-hepatic tumors. One recipe used the full amplitude of motion measured from patients to generate margins. A second used 70% of the full amplitude of motion, while the third had no margin for motion. The biological effects of geometric uncertainty in these three situations were evaluated with Equivalent Uniform Doses (EUD) for various survival fractions at 2 Gy (SF2). There was no significant difference in the biological impact between the full motion margin and the 70% motion margin. Also, there was no significant difference between different tumor cell types. When the margin for motion was eliminated, the difference in the biological impact was significant among different cell types due to geometric uncertainties. Elimination of the motion margin requires dose escalation to compensate for the biological dose reduction due to the geometric misses during treatment. Both patient-based margins of full motion and of 70% motion are sufficient to prevent serious dosimetric error. Clinical implementation of margin reduction should consider the tumor sensitivity to radiation
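
    For reference, a common SF2-parameterized form of the EUD used in such comparisons is the Niemierko formulation below (stated here from the general literature, not from this paper): the EUD is the uniform dose, delivered in 2 Gy fractions, that yields the same mean cell survival as the actual voxel dose distribution D_i over N tumour voxels.

```latex
% Niemierko-style EUD over voxel doses D_i (Gy), N voxels, assuming 2 Gy
% fractions; quoted from the general literature, not from this paper.
\[
  \mathrm{EUD} \;=\; 2\,\mathrm{Gy}\,
  \frac{\ln\!\left(\frac{1}{N}\sum_{i=1}^{N}\left(\mathrm{SF}_2\right)^{D_i/2\,\mathrm{Gy}}\right)}
       {\ln \mathrm{SF}_2}
\]
```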

  4. Energy-Efficient Neuromorphic Classifiers.

    Science.gov (United States)

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2016-10-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumptions promised by neuromorphic engineering are extremely low, comparable to those of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.

  5. Convexity and Marginal Vectors

    NARCIS (Netherlands)

    van Velzen, S.; Hamers, H.J.M.; Norde, H.W.

    2002-01-01

    In this paper we construct sets of marginal vectors of a TU game with the property that if the marginal vectors from these sets are core elements, then the game is convex. This approach leads to new upper bounds on the number of marginal vectors needed to characterize convexity. Another result is that

  6. Contributions to knowledge of the continental margin of Uruguay. Uruguayan continental margin: Physiographic and seismic analysis

    International Nuclear Information System (INIS)

    Preciozzi, F

    2014-01-01

    This work concerns the types of continental margins: a) Atlantic-type passive margins, which can be hard or soft, and b) active or Pacific-type margins, which, because of very frequent earthquakes, develop a morphology dominated by tectonic processes. The Uruguayan continental margin is a soft Atlantic-type margin.

  7. Fixing soft margins

    NARCIS (Netherlands)

    P. Kofman (Paul); A. Vaal, de (Albert); C.G. de Vries (Casper)

    1993-01-01

    Non-parametric tolerance limits are employed to calculate soft margins such as those advocated in Williamson's target zone proposal. In particular, the trade-off between softness and zone width is quantified. This may be helpful in choosing appropriate margins. Furthermore, it offers

  8. Comparison of histologic margin status in low-grade cutaneous and subcutaneous canine mast cell tumours examined by radial and tangential sections.

    Science.gov (United States)

    Dores, C B; Milovancev, M; Russell, D S

    2018-03-01

    Radial sections are widely used to estimate adequacy of excision in canine cutaneous mast cell tumours (MCTs); however, this sectioning technique estimates only a small fraction of the total margin circumference. This study aimed to compare histologic margin status in grade II/low-grade MCTs sectioned using both radial and tangential sectioning techniques. A total of 43 circumferential margins were evaluated from 21 different tumours. Margins were first sectioned radially, followed by tangential sections. Tissues were examined by routine histopathology. Tangential margin status differed in 10 of 43 (23.3%) margins compared with their initial status on radial section. Of 39 margins, 9 (23.1%) categorized as having a histologic tumour-free margin (HTFM) >0 mm were positive on tangential sectioning. Tangential sections detected a significantly higher proportion of positive margins relative to radial sections (exact 2-tailed P-value = .0215). The HTFM was significantly longer in negative tangential margins than in positive tangential margins (mean 10.1 vs 3.2 mm; P = .0008). A receiver operating characteristic curve comparing HTFM and tangentially negative margins found an area under the curve of 0.83 (95% confidence interval: 0.71-0.96). Although correct classification peaked at the sixth cut-point of HTFM ≥1 mm, radial sections still incorrectly classified 50% of margins as lacking tumour cells. Radial sections had 100% specificity for predicting negative tangential margins at a cut-point of 10.9 mm. These data indicate that low-grade MCTs with an HTFM >0 mm should not be considered completely excised, particularly when the HTFM is <10.9 mm. This will inform future studies that use HTFM and overall excisional status as dependent variables in multivariable prognostic models. © 2017 John Wiley & Sons Ltd.
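
    The ROC analysis reported above is easy to reproduce in outline. The gamma-distributed HTFM values below are invented stand-ins for the 43 real margins (the paper's actual AUC was 0.83), and the cut-point search uses Youden's J for simplicity.

```python
# Invented gamma-distributed HTFM values stand in for the 43 real margins;
# the paper's actual AUC was 0.83. Cut-point chosen by Youden's J.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(6)
htfm_clean = rng.gamma(shape=4.0, scale=2.5, size=30)   # tangentially negative
htfm_pos = rng.gamma(shape=2.0, scale=1.6, size=13)     # tangentially positive
htfm = np.concatenate([htfm_clean, htfm_pos])
clean = np.r_[np.ones(30, int), np.zeros(13, int)]      # 1 = tumour-free

print("AUC:", round(roc_auc_score(clean, htfm), 2))
fpr, tpr, cuts = roc_curve(clean, htfm)
best = (tpr - fpr).argmax()                             # Youden's J
print("suggested HTFM cut-point: %.1f mm" % cuts[best])
```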

  9. SOCIAL MARGINALIZATION AND HEALTH

    Directory of Open Access Journals (Sweden)

    Marjana Bogdanović

    2007-04-01

    The 20th century was characterized by special improvement in health. The aim of WHO’s policy EQUITY IN HEALTH is to enable equal accessibility and equally high quality of health care for all citizens. To a greater or lesser extent, some social groups have remained outside many social systems, even the health care system, in a condition of social marginalization. The phenomenon of social marginalization is characterized by dynamics. Marginalized persons lack control over their life and available resources. Social marginalization constitutes an assault on health and worsens health status. Low socio-economic level dramatically influences people’s health status; therefore, poverty and illness work together. Characteristic marginalized groups are: Roma people, people with AIDS, prisoners, persons with developmental disorders, persons with mental health disorders, refugees, homosexual people, delinquents, prostitutes, drug consumers, the homeless… There is a mutual responsibility of the community and marginalized individuals in trying to resolve the problem. Health and other problems can be solved only by a multisector approach with well-designed programs.

  10. Pickering seismic safety margin

    International Nuclear Information System (INIS)

    Ghobarah, A.; Heidebrecht, A.C.; Tso, W.K.

    1992-06-01

    A study was conducted to recommend a methodology for the seismic safety margin review of existing Canadian CANDU nuclear generating stations such as Pickering A. The purpose of the seismic safety margin review is to determine whether the nuclear plant has sufficient seismic safety margin over its design basis to assure plant safety. In this review process, it is possible to identify the weak links which might limit the seismic performance of critical structures, systems and components. The proposed methodology is a modification of the EPRI (Electric Power Research Institute) approach. The methodology includes: the characterization of the site margin earthquake, the definition of the performance criteria for the elements of a success path, and the determination of the seismic withstand capacity. It is proposed that the margin earthquake be established on the basis of historical records and regional seismo-tectonic and site-specific evaluations. The ability of the components and systems to withstand the margin earthquake is determined by database comparisons, inspection, analysis or testing. An implementation plan for the application of the methodology to the Pickering A NGS is prepared.

  11. Quantum ensembles of quantum classifiers.

    Science.gov (United States)

    Schuld, Maria; Petruccione, Francesco

    2018-02-09

    Quantum machine learning witnesses an increasing amount of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which, similar to Bayesian learning, the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighed according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.

  12. Pixel Classification of SAR ice images using ANFIS-PSO Classifier

    Directory of Open Access Journals (Sweden)

    G. Vasumathi

    2016-12-01

    Synthetic Aperture Radar (SAR) plays a vital role in taking extremely high resolution radar images. It is widely used to monitor ice-covered ocean regions. Sea monitoring is important for various purposes, including global climate systems and ship navigation. Classification of the ice-infested area yields important features which are further useful for various monitoring processes around the ice regions. The main objective of this paper is to classify SAR ice images to help identify the regions around ice-infested areas. Three stages are considered in the classification of SAR ice images. It starts with preprocessing, in which the speckled SAR ice images are denoised using various speckle-removal filters; a comparison is made of all these filters to find the best filter for speckle removal. The second stage includes segmentation, in which different regions are segmented using the K-means and watershed segmentation algorithms; a comparison is made between these two algorithms to find the better one for segmenting SAR ice images. The last stage includes pixel-based classification, which identifies and classifies the segmented regions using various supervised learning classifiers. The algorithms include back propagation neural networks (BPN), a fuzzy classifier, an Adaptive Neuro Fuzzy Inference System (ANFIS) classifier and the proposed ANFIS with Particle Swarm Optimization (PSO) classifier; a comparison is made of all these classifiers to propose which classifier is best suited to classifying the SAR ice image. Various evaluation metrics are computed separately at all three stages.
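
    A sketch of the middle of this pipeline (despeckling followed by K-means segmentation) on a synthetic two-region image. The median filter, image and cluster count are placeholders, and the ANFIS-PSO pixel classification stage is not shown.

```python
# Despeckling + K-means segmentation on a synthetic two-region "SAR" image.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
img = np.where(np.arange(128)[:, None] < 64, 0.3, 0.8)        # two regions
img = img * rng.gamma(shape=4.0, scale=0.25, size=(128, 128)) # speckle noise

despeckled = median_filter(img, size=5)                       # speckle removal
km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(despeckled.reshape(-1, 1)).reshape(img.shape)
print("segment sizes:", np.bincount(labels.ravel()))
```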

  13. Indian Ocean margins

    Digital Repository Service at National Institute of Oceanography (India)

    Naqvi, S.W.A

    in the latter two areas. Some of these fluxes are expected to be substantial in the case of Indonesian continental margins and probably also across the eastern coasts of Africa not covered in this chapter. However, a dearth of information makes these margins...

  14. A Hierarchical Method for Transient Stability Prediction of Power Systems Using the Confidence of a SVM-Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Yanzhen Zhou

    2016-09-01

    Machine learning techniques have been widely used in transient stability prediction of power systems. When using the post-fault dynamic responses, it is difficult to draw a definite conclusion about how long the duration of response data should be in order to balance accuracy and speed. Moreover, previous studies lack consideration of the confidence level. To solve these problems, a hierarchical method for transient stability prediction based on the confidence of an ensemble classifier using multiple support vector machines (SVMs) is proposed. Firstly, multiple datasets are generated by bootstrap sampling, and features are randomly picked to compress the datasets. Secondly, confidence indices are defined and multiple SVMs are built on these generated datasets. By synthesizing the probabilistic outputs of the multiple SVMs, the prediction results and the confidence of the ensemble classifier are obtained. Finally, different ensemble classifiers with different response times are built to construct the different layers of the proposed hierarchical scheme. The simulation results show that the proposed hierarchical method can balance the accuracy and rapidity of transient stability prediction. Moreover, the hierarchical method can reduce misjudgments of unstable instances and cooperate with time domain simulation to ensure the security and stability of power systems.
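
    A hedged sketch of the core mechanism: bootstrap-and-feature-subsampled SVMs with probabilistic outputs, averaged into an ensemble confidence; a prediction is committed only when the confidence clears a threshold, and would otherwise be deferred to a longer-window layer. The data, subset sizes and the 0.9 threshold are placeholders.

```python
# Bootstrap + random-feature SVMs; averaged probability = ensemble confidence.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = rng.normal(size=(500, 20))                 # post-fault response features
y = (X[:, :3].sum(axis=1) > 0).astype(int)     # 1 = stable (toy rule)

members = []
for _ in range(7):
    rows = rng.integers(0, len(X), len(X))             # bootstrap sample
    cols = rng.choice(20, size=12, replace=False)      # random feature subset
    members.append((SVC(probability=True).fit(X[rows][:, cols], y[rows]), cols))

x_new = X[:5]
proba = np.mean([m.predict_proba(x_new[:, c]) for m, c in members], axis=0)
confident = proba.max(axis=1) >= 0.9     # otherwise defer to a later layer
print(proba.argmax(axis=1), confident)
```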

  15. A novel ultrasound based technique for classifying gas bubble sizes in liquids

    International Nuclear Information System (INIS)

    Hussein, Walid; Khan, Muhammad Salman; Zamorano, Juan; Espic, Felipe; Yoma, Nestor Becerra

    2014-01-01

    Characterizing gas bubbles in liquids is crucial to many biomedical, environmental and industrial applications. In this paper a novel method is proposed for the classification of bubble sizes using ultrasound analysis, which is widely acknowledged for being non-invasive, non-contact and inexpensive. This classification is based on 2D templates, i.e. the average spectrum of events representing the trace of bubbles when they cross an ultrasound field. The 2D patterns are obtained by capturing ultrasound signals reflected by bubbles. Frequency-domain based features are analyzed that provide discrimination between bubble sizes. These features are then fed to an artificial neural network, which is designed and trained to classify bubble sizes. The benefits of the proposed method are that it facilitates the processing of multiple bubbles simultaneously, the issues concerning masking interference among bubbles are potentially reduced and using a single sinusoidal component makes the transmitter–receiver electronics relatively simpler. Results from three bubble sizes indicate that the proposed scheme can achieve an accuracy in their classification that is as high as 99%. (paper)

  16. The stability margin on EAST tokamak

    International Nuclear Information System (INIS)

    Jin-Ping, Qian; Bao-Nian, Wan; Biao, Shen; Bing-Jia, Xiao; Walker, M.L.; Humphreys, D.A.

    2009-01-01

    The experimental advanced superconducting tokamak (EAST) is the first fully superconducting tokamak with a D-shaped cross-sectional plasma presently in operation. Its poloidal coils are relatively far from the plasma due to the necessary thermal isolation from the superconducting magnets, which leads to relatively weak coupling between the plasma and the poloidal field. This may cause more difficulties in controlling the vertical instability using the poloidal coils. The measured growth rates of the vertical instability are compared with theoretical calculations based on a rigid plasma model. Poloidal beta and internal inductance are varied to investigate their effects on the stability margin by changing the values of the parameters α_n and γ_n (Howl et al 1992 Phys. Fluids B 4 1724), with the plasma shape fixed to a configuration with κ = 1.9 and δ = 0.5. A number of ways of studying the stability margin are investigated. Among them, changing the values of κ and l_i is shown to be the most effective way to increase the stability margin. Finally, a guideline based on the stability margin M_s(κ, l_i, A), indicating whether plasmas in a new discharge scenario can be stabilized, is also presented in this paper.

  17. Locating and classifying defects using an hybrid data base

    Science.gov (United States)

    Luna-Avilés, A.; Hernández-Gómez, L. H.; Durodola, J. F.; Urriolagoitia-Calderón, G.; Urriolagoitia-Sosa, G.; Beltrán Fernández, J. A.; Díaz Pineda, A.

    2011-07-01

    A computational inverse technique was used in the localization and classification of defects. Postulated voids of two different sizes (2 mm and 4 mm diameter) were introduced in PMMA bars with and without a notch. The bar dimensions are 200×20×5 mm. One half of them were plain and the other half had a notch (3 mm × 4 mm) close to the defect area (19 mm × 16 mm). This analysis was done with an Artificial Neural Network (ANN) and its optimization was done with an Adaptive Neuro Fuzzy Procedure (ANFIS). A hybrid data base was developed with numerical and experimental results. Synthetic data were generated with the finite element method using the SOLID95 element of the ANSYS code. A parametric analysis was carried out. Only one defect in each bar was taken into account and the first five natural frequencies were calculated. 460 cases were evaluated; half of them were plain and the other half had a notch. All the input data were classified in two groups, each with 230 cases corresponding to one of the two sorts of voids mentioned above. In addition, experimental analysis was carried out on PMMA specimens of the same size. The first two natural frequencies of 40 cases with one void were obtained experimentally; the other three frequencies were obtained numerically. 20 of these bars were plain and the others had a notch. These experimental results were introduced into the synthetic data base. 400 cases were taken randomly and, with this information, the ANN was trained with the backpropagation algorithm. The accuracy of the results was tested with the 100 cases that were left. In the next stage of this work, the ANN output was optimized with ANFIS. Previous papers showed that the localization and classification of defects degraded as notches were introduced in such bars. In the case of this paper, improved results were obtained when a hybrid data base was used.
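
    A sketch of the classification step under stated assumptions: the first five natural frequencies are the inputs and the postulated void size is the target. The frequency generator below is a crude invented stand-in for the hybrid FEM/experimental database of 460 cases.

```python
# Backprop-trained network mapping the first five natural frequencies to the
# void size class (2 mm vs 4 mm). Frequencies are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
base = np.array([120.0, 330.0, 640.0, 1050.0, 1550.0])   # Hz, illustrative
is_4mm = rng.integers(0, 2, 460)                         # 1 = 4 mm void
freqs = (base * (1 - 0.01 * is_4mm[:, None])
         + rng.normal(scale=2.0, size=(460, 5)))         # bigger void, softer bar

Xtr, Xte, ytr, yte = train_test_split(freqs, is_4mm, random_state=0)
net = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0)).fit(Xtr, ytr)
print("test accuracy:", net.score(Xte, yte))
```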

  18. Locating and classifying defects using an hybrid data base

    Energy Technology Data Exchange (ETDEWEB)

    Luna-Aviles, A; Diaz Pineda, A [Tecnologico de Estudios Superiores de Coacalco. Av. 16 de Septiembre 54, Col. Cabecera Municipal. C.P. 55700 (Mexico); Hernandez-Gomez, L H; Urriolagoitia-Calderon, G; Urriolagoitia-Sosa, G [Instituto Politecnico Nacional. ESIME-SEPI. Unidad Profesional ' Adolfo Lopez Mateos' Edificio 5, 30 Piso, Colonia Lindavista. Gustavo A. Madero. 07738 Mexico D.F. (Mexico); Durodola, J F [School of Technology, Oxford Brookes University, Headington Campus, Gipsy Lane, Oxford OX3 0BP (United Kingdom); Beltran Fernandez, J A, E-mail: alelunaav@hotmail.com, E-mail: luishector56@hotmail.com, E-mail: jdurodola@brookes.ac.uk

    2011-07-19

    A computational inverse technique was used in the localization and classification of defects. Postulated voids of two different sizes (2 mm and 4 mm diameter) were introduced in PMMA bars with and without a notch. The bar dimensions are 200x20x5 mm. One half of them were plain and the other half had a notch (3 mm x 4 mm) close to the defect area (19 mm x 16 mm). This analysis was done with an Artificial Neural Network (ANN) and its optimization was done with an Adaptive Neuro Fuzzy Procedure (ANFIS). A hybrid data base was developed with numerical and experimental results. Synthetic data were generated with the finite element method using the SOLID95 element of the ANSYS code. A parametric analysis was carried out. Only one defect in each bar was taken into account and the first five natural frequencies were calculated. 460 cases were evaluated; half of them were plain and the other half had a notch. All the input data were classified in two groups, each with 230 cases corresponding to one of the two sorts of voids mentioned above. In addition, experimental analysis was carried out on PMMA specimens of the same size. The first two natural frequencies of 40 cases with one void were obtained experimentally; the other three frequencies were obtained numerically. 20 of these bars were plain and the others had a notch. These experimental results were introduced into the synthetic data base. 400 cases were taken randomly and, with this information, the ANN was trained with the backpropagation algorithm. The accuracy of the results was tested with the 100 cases that were left. In the next stage of this work, the ANN output was optimized with ANFIS. Previous papers showed that the localization and classification of defects degraded as notches were introduced in such bars. In the case of this paper, improved results were obtained when a hybrid data base was used.

  19. Locating and classifying defects using an hybrid data base

    International Nuclear Information System (INIS)

    Luna-Aviles, A; Diaz Pineda, A; Hernandez-Gomez, L H; Urriolagoitia-Calderon, G; Urriolagoitia-Sosa, G; Durodola, J F; Beltran Fernandez, J A

    2011-01-01

    A computational inverse technique was used in the localization and classification of defects. Postulated voids of two different sizes (2 mm and 4 mm diameter) were introduced in PMMA bars with and without a notch. The bar dimensions are 200x20x5 mm. One half of them were plain and the other half had a notch (3 mm x 4 mm) close to the defect area (19 mm x 16 mm). This analysis was done with an Artificial Neural Network (ANN) and its optimization was done with an Adaptive Neuro Fuzzy Procedure (ANFIS). A hybrid data base was developed with numerical and experimental results. Synthetic data were generated with the finite element method using the SOLID95 element of the ANSYS code. A parametric analysis was carried out. Only one defect in each bar was taken into account and the first five natural frequencies were calculated. 460 cases were evaluated; half of them were plain and the other half had a notch. All the input data were classified in two groups, each with 230 cases corresponding to one of the two sorts of voids mentioned above. In addition, experimental analysis was carried out on PMMA specimens of the same size. The first two natural frequencies of 40 cases with one void were obtained experimentally; the other three frequencies were obtained numerically. 20 of these bars were plain and the others had a notch. These experimental results were introduced into the synthetic data base. 400 cases were taken randomly and, with this information, the ANN was trained with the backpropagation algorithm. The accuracy of the results was tested with the 100 cases that were left. In the next stage of this work, the ANN output was optimized with ANFIS. Previous papers showed that the localization and classification of defects degraded as notches were introduced in such bars. In the case of this paper, improved results were obtained when a hybrid data base was used.

  20. Marginal kidney donor

    Directory of Open Access Journals (Sweden)

    Ganesh Gopalakrishnan

    2007-01-01

    Renal transplantation is the treatment of choice for a medically eligible patient with end stage renal disease. The number of renal transplants has increased rapidly over the last two decades. However, the demand for organs has increased even more. This disparity between the availability of organs and waitlisted patients for transplants has forced many transplant centers across the world to use marginal kidneys and donors. We performed a Medline search to establish the current status of marginal kidney donors in the world. The use of marginal deceased renal grafts in transplant programs is well established, and the focus is now on efforts to improve their results. Utilization of non-heart-beating donors is still in a plateau phase and comprises a minor percentage of deceased donations. The main concern is primary non-function of the renal graft, apart from legal and ethical issues. Transplants with living donors outnumbered cadaveric transplants at many centers in the last decade. There has been an increased use of marginal living kidney donors with some acceptable medical risks. Our primary concern is the safety of the living donor. There is not enough scientific data available to quantify the risks involved in such donation. The definition of a marginal living donor is still not clear and there are no uniform recommendations. The decision must be tailored to each donor, who in turn should be actively involved at all levels of the decision-making process. In the current circumstances, our responsibility is crucial in making decisions to either accept or reject a marginal living donor.

  1. Marginal adaptation of newer root canal sealers to dentin: A SEM study.

    Science.gov (United States)

    Polineni, Swapnika; Bolla, Nagesh; Mandava, Pragna; Vemuri, Sayesh; Mallela, Madhusudana; Gandham, Vijaya Madhuri

    2016-01-01

    This in vitro study evaluated and compared the marginal adaptation of three newer root canal sealers to root dentin. Thirty freshly extracted human single-rooted teeth with completely formed apices were taken. Teeth were decoronated, and root canals were instrumented. The specimens were randomly divided into three groups (n = 10) based upon the sealer used: Group 1 - teeth obturated with an epoxy resin sealer (MM-Seal); Group 2 - teeth obturated with a mineral trioxide aggregate (MTA) based sealer (MTA Fillapex); Group 3 - teeth obturated with a bioceramic sealer (EndoSequence BC sealer). Later, samples were vertically sectioned using a hard-tissue microtome, the marginal adaptation of the sealers to root dentin was evaluated in the coronal and apical halves using scanning electron microscopy (SEM), and marginal gap values were recorded. The data were statistically analyzed by two-way ANOVA and Tukey's multiple post hoc test. The highest marginal gap was seen in Group 2 (apical: 16680.00 nm; coronal: 10796 nm) and the lowest marginal gap was observed in Group 1 (apical: 599.42 nm; coronal: 522.72 nm). Coronal halves showed superior adaptation compared to apical halves in all the groups under SEM. Within the limitations of this study, the epoxy resin-based MM-Seal showed better marginal adaptation than the other materials tested.

  2. On-line detection of apnea/hypopnea events using SpO2 signal: a rule-based approach employing binary classifier models.

    Science.gov (United States)

    Koley, Bijoy Laxmi; Dey, Debangshu

    2014-01-01

    This paper presents an online method for automatic detection of apnea/hypopnea events with the help of the oxygen saturation (SpO2) signal, measured at the fingertip by a Bluetooth nocturnal pulse oximeter. Event detection is performed by identifying abnormal data segments from the recorded SpO2 signal, employing a binary classifier model based on a support vector machine (SVM). Thereafter, each abnormal segment is further analyzed to detect the different states within the segment, i.e., steady, desaturation, and resaturation, with the help of another SVM-based binary ensemble classifier model. Finally, a heuristically obtained rule-based system is used to identify the apnea/hypopnea events from the time-sequenced decisions of these classifier models. In the developmental phase, a set of 34 time-domain features was extracted from the segmented SpO2 signal using an overlapped windowing technique. Later, an optimal set of features was selected on the basis of a recursive feature elimination technique. A total of 34 subjects were included in the study. The results show average event detection accuracies of 96.7% and 93.8% for the offline and the online tests, respectively. The proposed system provides a direct estimation of the apnea/hypopnea index with the help of a relatively inexpensive and widely available pulse oximeter. Moreover, the system can be monitored and accessed by physicians through LAN/WAN/Internet and can be extended for deployment in Bluetooth-enabled mobile phones.
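
    A minimal sketch of the segment-classification core (random placeholder data; the RFE target of 15 features is an assumption, since the paper only states that an optimal subset of the 34 features was selected):

        import numpy as np
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        X = rng.normal(size=(600, 34))    # placeholder: 34 time-domain features per SpO2 segment
        y = rng.integers(0, 2, size=600)  # placeholder: 1 = abnormal (apnea/hypopnea) segment

        # Linear SVM wrapped in recursive feature elimination; RFE repeatedly drops
        # the weakest features based on the SVM weights, as described above.
        clf = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=15)
        print(cross_val_score(clf, X, y, cv=5).mean())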

  3. Snoring classified: The Munich-Passau Snore Sound Corpus.

    Science.gov (United States)

    Janott, Christoph; Schmitt, Maximilian; Zhang, Yue; Qian, Kun; Pandit, Vedhas; Zhang, Zixing; Heiser, Clemens; Hohenhorst, Winfried; Herzog, Michael; Hemmert, Werner; Schuller, Björn

    2018-03-01

    Snoring can be excited in different locations within the upper airways during sleep. It was hypothesised that the excitation locations are correlated with distinct acoustic characteristics of the snoring noise. To verify this hypothesis, a database of snore sounds was developed, labelled with the location of sound excitation. Video and audio recordings taken during drug-induced sleep endoscopy (DISE) examinations from three medical centres were semi-automatically screened for snore events, which subsequently were classified by ENT experts into four classes based on the VOTE classification. The resulting dataset, containing 828 snore events from 219 subjects, was split into Train, Development, and Test sets. An SVM classifier was trained using low-level descriptors (LLDs) related to energy, spectral features, mel frequency cepstral coefficients (MFCC), formants, voicing, harmonic-to-noise ratio (HNR), spectral harmonicity, pitch, and microprosodic features. An unweighted average recall (UAR) of 55.8% could be achieved using the full set of LLDs including formants. The best-performing subset was the MFCC-related set of LLDs. A strong difference in performance was observed between the permutations of the train, development, and test partitions, which may be caused by the relatively low number of subjects included in the smaller classes of the strongly unbalanced data set. A database of snoring sounds is thus presented, in which snore events are classified according to their sound excitation location based on objective criteria and verifiable video material. With the database, it could be demonstrated that machine classifiers can distinguish different excitation locations of snoring sounds in the upper airway based on acoustic parameters.
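
    The feature extraction and scoring could be sketched as follows (a simplified stand-in, not the authors' pipeline: only MFCC statistics are computed, and the corpus paths and labels are assumed to be available; UAR corresponds to scikit-learn's balanced accuracy):

        import numpy as np
        import librosa
        from sklearn.metrics import balanced_accuracy_score
        from sklearn.svm import SVC

        def mfcc_features(path):
            # Mean and standard deviation of 13 MFCCs over one snore event --
            # a simplified stand-in for the paper's full LLD set (energy,
            # formants, HNR, spectral features, ...).
            signal, sr = librosa.load(path, sr=16000)
            mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
            return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

        # train_paths/dev_paths and the VOTE labels are assumed to come from
        # the corpus partitions; UAR = unweighted average recall.
        # X_tr = np.stack([mfcc_features(p) for p in train_paths])
        # clf = SVC(kernel="linear", C=0.1).fit(X_tr, y_tr)
        # print(balanced_accuracy_score(y_dev, clf.predict(X_dev)))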

  4. Performances of the likelihood-ratio classifier based on different data modelings

    NARCIS (Netherlands)

    Chen, C.; Veldhuis, Raymond N.J.

    2008-01-01

    The classical likelihood ratio classifier easily collapses in many biometric applications, especially with independent training and test subjects. The reason lies in the inaccurate estimation of the underlying user-specific feature density. Firstly, the feature density estimation suffers from

  5. A Customizable Text Classifier for Text Mining

    Directory of Open Access Journals (Sweden)

    Yun-liang Zhang

    2007-12-01

    Text mining deals with complex and unstructured texts. Usually a particular collection of texts specific to one or more domains is necessary. We have developed a customizable text classifier that lets users mine such a collection automatically. It derives from the sentence category of the HNC theory and the corresponding techniques. It can start with a few texts, and it can adjust automatically or be adjusted by the user. The user can also control the number of domains chosen and decide the standard by which the texts are chosen, based on demand and the abundance of materials. The performance of the classifier varies with the user's choices.

  6. Seismic margin assessment and earthquake experience based methods for WWER-440/213 type NPPs

    International Nuclear Information System (INIS)

    Masopust, R.

    1996-01-01

    This report covers the review of the already completed studies, namely the safe shutdown system identification and classification for the Bohunice NPP and the comparative study of standards and criteria. It reports on currently ongoing studies concerning seismic margin assessment and earthquake-experience-based methods as applied to the seismic evaluation and verification of structures and equipment components of operating WWER-440/213 type NPPs, based on experience obtained from the Paks NPP. The work plan for the remaining period of the Benchmark CRP and the new proposals are included; these concern the seismic evaluation of selected safety-related mechanical equipment and pipes of the Paks NPP, and the actual seismic issues of the Temelin WWER-1000 type NPP.

  7. Marginal and Internal Discrepancies of Posterior Zirconia-Based Crowns Fabricated with Three Different CAD/CAM Systems Versus Metal-Ceramic.

    Science.gov (United States)

    Ortega, Rocio; Gonzalo, Esther; Gomez-Polo, Miguel; Suárez, María J

    2015-01-01

    The aim of this study was to analyze the marginal and internal fit of metal-ceramic and zirconia-based crowns. Forty standardized steel specimens were prepared to receive posterior crowns and randomly divided into four groups (n = 10): (1) metal-ceramic, (2) NobelProcera Zirconia, (3) Lava Zirconia, and (4) VITA In-Ceram YZ. All crowns were cemented with a glass-ionomer agent and sectioned buccolingually. A scanning electron microscope was used for the measurements. Kruskal-Wallis and Wilcoxon signed rank tests (α = .05) were used for the statistical analyses. Significant differences (P < .0001) in marginal discrepancies were observed between the metal-ceramic and zirconia groups. No differences were found for the axial wall fit (P = .057). Significant differences were shown among the groups in discrepancies at the occlusal cusp (P = .0012) and at the fossa (P = .0062). No differences were observed between surfaces. All zirconia groups showed smaller marginal discrepancies than the metal-ceramic group, with NobelProcera Zirconia showing the smallest gaps.

  8. How to make offshore marginal fields work for everyone

    International Nuclear Information System (INIS)

    Blandford, P.R.

    1995-01-01

    Marginal fields make a positive impact on certain oil and gas companies' financial performance. These developments are integrated into the operator's operational and philosophical mindset so that they optimize return and establish a reasonable reserve base for the company. Having a portfolio of marginal field developments is definitely a part of the offshore business, and oilfield suppliers and subcontractors will continue to develop technology and methods to ensure the fields are exploited. It goes without saying that the continued production of marginal fields helps many consumers and the companies that make up the energy chain that delivers to them. There are marginal fields all over the world, and the market can only grow as more and more of the resources decline and industrialization expands demand. Projections for 2020 state that fossil fuels will remain the major supply link to dependable and affordable energy, particularly as additional oil and gas infrastructure is built and installed. Like the commodity itself, oil and gas companies and the energy industry have slowly evolved to the point where they are making a difference for people worldwide. As long as there is product to produce, most companies and consumers do not really care what type of reservoir started it all; more often than not, it probably started out as a marginal prospect. The paper discusses the energy picture today, a marginal field update, offshore marginal field geography, and independent and marginal field developments.

  9. On marginal regeneration

    NARCIS (Netherlands)

    Stein, H.N.

    1991-01-01

    On applying the marginal regeneration concept to the drainage of free liquid films, problems are encountered: the films do not show a "neck" of minimum thickness at the film/border transition; and the causes of the direction dependence of the marginal regeneration are unclear. Both problems can be

  10. Identifying Different Transportation Modes from Trajectory Data Using Tree-Based Ensemble Classifiers

    Directory of Open Access Journals (Sweden)

    Zhibin Xiao

    2017-02-01

    Recognition of transportation modes can be used in different applications, including human behavior research, transport management and traffic control. Previous work on transportation mode recognition has often relied on using multiple sensors or matching Geographic Information System (GIS) information, which is not possible in many cases. In this paper, an approach based on ensemble learning is proposed to infer hybrid transportation modes using only Global Positioning System (GPS) data. First, in order to distinguish between different transportation modes, we used a statistical method to generate global features and extracted several local features from sub-trajectories after trajectory segmentation, before these features were combined in the classification stage. Second, to obtain a better performance, we used tree-based ensemble models (Random Forest, Gradient Boosting Decision Tree, and XGBoost) instead of traditional methods (K-Nearest Neighbor, Decision Tree, and Support Vector Machines) to classify the different transportation modes. The experimental results on the latter have shown the efficacy of our proposed approach. Among the ensemble models, the XGBoost model produced the best performance, with a classification accuracy of 90.77% on the GEOLIFE dataset, and we used a tree-based ensemble method to ensure accurate feature selection and reduce the model complexity.
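
    A minimal sketch of the approach (hypothetical planar trajectories and labels; scikit-learn's GradientBoostingClassifier stands in for XGBoost, and only a few global features are computed):

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import cross_val_score

        def global_features(seg):
            # seg: (n, 3) array of [time s, x m, y m] for one GPS sub-trajectory
            # (positions assumed already projected onto a planar grid).
            t, x, y = seg[:, 0], seg[:, 1], seg[:, 2]
            v = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)  # point-to-point speed
            a = np.abs(np.diff(v)) / np.diff(t)[1:]            # acceleration magnitude
            return np.array([v.mean(), v.max(), v.std(), a.mean()])

        rng = np.random.default_rng(2)
        segments = [np.column_stack([np.arange(60.0),
                                     np.cumsum(rng.normal(5, 1, 60)),
                                     np.cumsum(rng.normal(0, 1, 60))]) for _ in range(100)]
        X = np.stack([global_features(s) for s in segments])
        y = rng.integers(0, 4, size=100)                       # placeholder: walk/bike/bus/car

        gbdt = GradientBoostingClassifier()                    # stands in for XGBoost here
        print(cross_val_score(gbdt, X, y, cv=5).mean())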

  11. A margin-based analysis of the dosimetric impact of motion on step-and-shoot IMRT lung plans

    International Nuclear Information System (INIS)

    Waghorn, Benjamin J; Shah, Amish P; Rineer, Justin M; Langen, Katja M; Meeks, Sanford L

    2014-01-01

    Intrafraction motion during step-and-shoot (SNS) IMRT is known to affect the target dosimetry by a combination of dose blurring and interplay effects. These effects are typically managed by adding a margin around the target. A quantitative analysis was performed, assessing the relationship between target motion, margin size, and target dosimetry, with the goal of introducing new margin recipes. A computational algorithm was used to calculate 1,174 motion-encoded dose distributions and DVHs within the patient’s CT dataset. Sinusoidal motion tracks were used to simulate intrafraction motion for nine lung tumor patients, each with multiple margin sizes. D95% decreased by less than 3% when the target displacement beyond the margin was less than 5 mm in the superior-inferior direction and 15 mm in the anterior-posterior direction. For target displacements greater than this, D95% decreased rapidly: targets moving in excess of 5 mm outside the margin can experience significant dosimetric changes, and D95% decreased by up to 20% with target motion 10 mm outside the margin, with underdosing primarily limited to the target periphery. Multi-fractionated treatments were found to exacerbate target under-coverage. Margins several millimeters smaller than the maximum target displacement provided acceptable motion protection, while also allowing for reduced normal tissue morbidity.
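
    The key metric here, D95%, is the dose received by at least 95% of the target volume; a one-line computation from voxel doses (placeholder values) makes the definition concrete:

        import numpy as np

        def d95(voxel_doses):
            # D95% = dose received by at least 95% of the target volume,
            # i.e. the 5th percentile of the voxel dose distribution.
            return np.percentile(voxel_doses, 5)

        doses = np.random.default_rng(3).normal(60.0, 1.5, size=10_000)  # placeholder doses [Gy]
        print(f"D95% = {d95(doses):.1f} Gy")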

  12. Marginal adaptation of a low-shrinkage silorane-based composite: A SEM-analysis

    DEFF Research Database (Denmark)

    Schmidt, Malene; Bindslev, Preben Hørsted; Poulsen, Sven

    2012-01-01

    A new silorane-based composite, claimed to have low polymerization shrinkage, has been marketed. Objective. To investigate whether reduced polymerization shrinkage improves the marginal adaptation of composite restorations. Material and methods. A total of 156 scanning electron microscopy (SEM) pictures (78 baseline, 78 follow-up) of the occlusal part of Class II restorations ... casts of the restorations were used for SEM pictures at x16 magnification. Pictures from baseline and follow-up (398 days, SD 29 days) were randomized and the examiner was blinded to the material and the age of the restoration. Stereologic measurements were used to calculate the length and the width of the marginal...

  13. Neutropenia Prediction Based on First-Cycle Blood Counts Using a FOS-3NN Classifier

    Directory of Open Access Journals (Sweden)

    Elize A. Shirdel

    2011-01-01

    Background. Delivery of full doses of adjuvant chemotherapy on schedule is key to optimal breast cancer outcomes. Neutropenia is a serious complication of chemotherapy and a common barrier to this goal, leading to dose reductions or delays in treatment. While past research has observed correlations between complete blood count data and neutropenic events, a reliable method of classifying breast cancer patients into low- and high-risk groups remains elusive. Patients and Methods. Thirty-five patients receiving adjuvant chemotherapy for early-stage breast cancer under the care of a single oncologist are examined in this study. FOS-3NN stratifies patient risk based on complete blood count data after the first cycle of treatment. All classifications are independent of breast cancer subtype and clinical markers, with risk level determined by the kinetics of the patient's blood count response to the first cycle of treatment. Results. In an independent test set of patients unseen by FOS-3NN, 19 out of 21 patients were correctly classified (Fisher’s exact test probability P < 0.00023 [2-tailed], Matthews’ correlation coefficient +0.83). Conclusions. We have developed a model that accurately predicts neutropenic events in a population treated with adjuvant chemotherapy, using only data from the first cycle of a 6-cycle treatment.
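
    Stripped of the FOS feature-selection step, the 3-nearest-neighbour core of such a model is tiny; the sketch below uses random placeholder data with the same 14/21 train/test split sizes:

        import numpy as np
        from sklearn.metrics import matthews_corrcoef
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(4)
        X = rng.normal(size=(35, 4))     # placeholder: first-cycle blood-count kinetics per patient
        y = rng.integers(0, 2, size=35)  # placeholder: 1 = neutropenic event in a later cycle

        knn = KNeighborsClassifier(n_neighbors=3).fit(X[:14], y[:14])
        pred = knn.predict(X[14:])       # 21-patient held-out test set
        print(matthews_corrcoef(y[14:], pred))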

  14. A three-parameter model for classifying anurans into four genera based on advertisement calls.

    Science.gov (United States)

    Gingras, Bruno; Fitch, William Tecumseh

    2013-01-01

    The vocalizations of anurans are innate in structure and may therefore contain indicators of phylogenetic history. Thus, advertisement calls of species which are more closely related phylogenetically are predicted to be more similar than those of distant species. This hypothesis was evaluated by comparing several widely used machine-learning algorithms. Recordings of advertisement calls from 142 species belonging to four genera were analyzed. A logistic regression model, using mean values for dominant frequency, coefficient of variation of root-mean square energy, and spectral flux, correctly classified advertisement calls with regard to genus with an accuracy above 70%. Similar accuracy rates were obtained using these parameters with a support vector machine model, a K-nearest neighbor algorithm, and a multivariate Gaussian distribution classifier, whereas a Gaussian mixture model performed slightly worse. In contrast, models based on mel-frequency cepstral coefficients did not fare as well. Comparable accuracy levels were obtained on out-of-sample recordings from 52 of the 142 original species. The results suggest that a combination of low-level acoustic attributes is sufficient to discriminate efficiently between the vocalizations of these four genera, thus supporting the initial premise and validating the use of high-throughput algorithms on animal vocalizations to evaluate phylogenetic hypotheses.
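
    A hedged sketch of the winning model class (random placeholder feature values; the real predictors are the three acoustic parameters named above):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(5)
        # Placeholder predictors: dominant frequency, CV of RMS energy, spectral flux.
        X = rng.normal(size=(142, 3))
        y = rng.integers(0, 4, size=142)   # four genera

        model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        print(cross_val_score(model, X, y, cv=5).mean())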

  15. Case based reasoning applied to medical diagnosis using multi-class classifier: A preliminary study

    Directory of Open Access Journals (Sweden)

    D. Viveros-Melo

    2017-02-01

    Case-based reasoning (CBR) is a computational process that tries to mimic the behavior of a human expert in making decisions about a subject, learning from the experience of past cases. CBR has been shown to be appropriate for working with unstructured-domain data or difficult knowledge-acquisition situations, such as medical diagnosis, with tasks such as cancer diagnosis, epilepsy prediction and appendicitis diagnosis. Some of the trends that may be developed for CBR in the health sciences are oriented toward reducing the number of features in highly dimensional data. An important contribution may be the estimation of the probabilities of belonging to each class for new cases. In this paper, in order to represent the database adequately and to avoid the inconveniences caused by high dimensionality, noise and redundancy, a number of algorithms are used in the preprocessing stage to perform both variable selection and dimension reduction. Also, a comparison of the performance of some representative multi-class classifiers is carried out to identify the most effective one to include within a CBR scheme. In particular, four classification techniques and two reduction techniques are employed in a comparative study of multi-class classifiers for CBR.
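
    The preprocessing-plus-comparison loop described above can be sketched as follows (random placeholder data; PCA stands in for the reduction techniques, and two classifiers stand in for the four compared in the paper):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(6)
        X = rng.normal(size=(200, 60))    # placeholder high-dimensional case base
        y = rng.integers(0, 3, size=200)  # placeholder diagnostic classes

        # probability=True also yields class-membership probabilities for new
        # cases, one of the contributions mentioned in the abstract.
        for clf in (SVC(probability=True), KNeighborsClassifier()):
            pipe = make_pipeline(PCA(n_components=10), clf)
            print(type(clf).__name__, cross_val_score(pipe, X, y, cv=5).mean())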

  16. Diagnostics of synchronous motor based on analysis of acoustic signals with application of MFCC and Nearest Mean classifier

    OpenAIRE

    Adam Głowacz; Witold Głowacz; Andrzej Głowacz

    2010-01-01

    The paper presents a method for diagnosing imminent failure conditions of a synchronous motor. This method is based on a study of acoustic signals generated by the synchronous motor. The sound recognition system is based on algorithms of data processing, such as MFCC and the Nearest Mean classifier with cosine distance. Software to recognize the sounds of the synchronous motor was implemented. The studies were carried out for four imminent failure conditions of the synchronous motor. The results confirm that the sys...
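
    The Nearest Mean classifier with cosine distance is simple enough to state exactly; the sketch below assumes MFCC feature vectors have already been extracted (random placeholder data):

        import numpy as np

        def nearest_mean_cosine(X_train, y_train, X_test):
            # Nearest Mean classifier with cosine distance: assign each test
            # vector to the class whose mean feature vector has the highest
            # cosine similarity (equivalently, the smallest cosine distance).
            classes = np.unique(y_train)
            means = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
            means /= np.linalg.norm(means, axis=1, keepdims=True)
            Xn = X_test / np.linalg.norm(X_test, axis=1, keepdims=True)
            return classes[np.argmax(Xn @ means.T, axis=1)]

        rng = np.random.default_rng(7)
        X, y = rng.normal(size=(80, 13)), np.repeat([0, 1, 2, 3], 20)  # placeholder MFCC vectors
        print(nearest_mean_cosine(X, y, X[:5]))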

  17. Marginal-cost pricing for Hydro-Quebec residential customers

    International Nuclear Information System (INIS)

    Paquin, C.

    1994-02-01

    An option available to governments and to utilities such as Hydro-Quebec for meeting energy-efficiency objectives is the adoption of marginal cost pricing. Compared to currently used price structures, marginal cost pricing would improve price signals and assure an optimal utilization of the resource. That type of pricing could be economically beneficial but may not be desirable from the point of view of revenue distribution. Given Hydro-Quebec's cost structure, pure marginal cost pricing would generate income that would be strongly contested on equity grounds; for example, it would raise prices 60% for residential customers. Faced with this possibility, an analysis is presented of the impact of peak/off-peak pricing (or pure marginal cost pricing) on Hydro-Quebec residential customers' energy bills. The marginal costs of Hydro-Quebec are calculated by the method of Bernard and Chatel (1985), and the analysis of the results is based on Friedman and Weare (1993). A sample of 28,417 residential customers from a 1989 Hydro-Quebec survey is used in the study. Two scenarios are analyzed: the first compares the energy bill on the basis of marginal costs and of average costs only, and the second compares the impact of marginal cost pricing on the total bill. In the first scenario, the impact translates into a 31% increase in energy bills for the entire customer class considered; in addition, this impact is inversely proportional to the revenue class. In the second scenario, the increase is 24%. 33 refs., 10 figs., 53 tabs

  18. Indigenous women's voices: marginalization and health.

    Science.gov (United States)

    Dodgson, Joan E; Struthers, Roxanne

    2005-10-01

    Marginalization may affect health care delivery. Ways in which indigenous women experienced marginalization were examined. Data from 57 indigenous women (18 to 65 years) were analyzed for themes. Three themes emerged: historical trauma as lived marginalization, biculturalism experienced as marginalization, and interacting within a complex health care system. The marginalization experienced reflected the participants' unique perspectives and was congruent with previous research. It is necessary for health care providers to assess the detrimental impact of marginalization on the health status of individuals and/or communities.

  19. Pricing hospital care: Global budgets and marginal pricing strategies.

    Science.gov (United States)

    Sutherland, Jason M

    2015-08-01

    The Canadian province of British Columbia (BC) is adding financial incentives to increase the volume of surgeries provided by hospitals, using a marginal pricing approach. The objective of this study is to calculate the marginal costs of surgeries based on assumptions regarding hospitals' availability of labor and equipment. The study is based on observational clinical, administrative and financial data generated by hospitals. Hospital inpatient and outpatient discharge summaries from the province are linked with detailed activity-based costing information, stratified by assigned case mix categorizations. To reflect a range of operating constraints governing hospitals' ability to increase their volume of surgeries, a number of scenarios are proposed; under these scenarios, estimated marginal costs are calculated and compared to the prices being offered as incentives to hospitals. Existing data can be used to support alternative strategies for pricing hospital care. Prices for inpatient surgeries do not generate positive margins under a range of operating scenarios, whereas hip and knee surgeries generate surpluses for hospitals even under the most costly labor conditions and are expected to generate additional volume. In health systems that wish to fine-tune financial incentives, setting prices that create incentives for additional volume should reflect knowledge of hospitals' underlying cost structures. Possible implications of mis-pricing include no response to the incentives or uneven increases in supply.

  20. Major structural response methods used in the seismic safety margins research program

    International Nuclear Information System (INIS)

    Chou, C.K.; Lo, T.; Vagliente, V.

    1979-01-01

    In order to evaluate the conservatisms in present nuclear power plant seismic safety requirements, a probabilistically based systems model is being developed; this model will also be used to develop improved requirements. In Phase I of the Seismic Safety Margins Research Program (SSMRP), the methodology will be developed for a specific nuclear power plant and used to perform probabilistic sensitivity studies to gain engineering insights into seismic safety requirements. Random variables in the structural response analysis area, i.e., parameters which cause uncertainty in the response, are discussed and classified into three categories: material properties, structural dynamic characteristics and related modeling techniques, and analytical methods. The sensitivity studies are grouped into two categories: deterministic and probabilistic. In a systems analysis, transfer functions in simple form are needed, since too many responses have to be calculated in a Monte Carlo simulation to use the usual straightforward calculation approach. The development of these simple transfer functions is therefore one of the important tasks in the SSMRP. Simplified as well as classical transfer functions are discussed.

  1. Refining margins and prospects

    International Nuclear Information System (INIS)

    Baudouin, C.; Favennec, J.P.

    1997-01-01

    Refining margins throughout the world remained low in 1996. In Europe, in spite of an improvement, particularly during the last few weeks of the year, they are still not high enough to finance new investments. Although the demand for petroleum products is increasing, experts remain sceptical about any rapid recovery, owing to prevailing overcapacity and continuing capacity growth. After a historical review of margins and an analysis of margins by region, we analyse refining overcapacities in Europe and the imbalances between production and demand. We then discuss the current situation concerning barriers to rationalization, agreements between oil companies, and the consequences for the future of refining capacities and margins. (author)

  2. Safety margins in deterministic safety analysis

    International Nuclear Information System (INIS)

    Viktorov, A.

    2011-01-01

    The concept of safety margins has acquired a certain prominence in attempts to demonstrate quantitatively the level of nuclear power plant safety by means of deterministic analysis, especially when considering impacts from plant ageing and discovery issues. A number of international and industry publications exist that discuss various applications and interpretations of safety margins. The objective of this presentation is to bring together and examine in some detail, from the regulatory point of view, the safety margins that relate to deterministic safety analysis. In this paper, definitions of various safety margins are presented and discussed, along with the regulatory expectations for them. Interrelationships of analysis input and output parameters with their corresponding limits are explored. It is shown that the overall safety margin is composed of several components, each having different origins and potential uses; in particular, margins associated with analysis output parameters are contrasted with margins linked to the analysis input. While these are separate, it is possible to influence output margins through the analysis input and the analysis method. Preserving safety margins is tantamount to maintaining safety. At the same time, efficiency of operation requires optimization of safety margins, taking into account various technical and regulatory considerations. For this, basic definitions and rules for safety margins must first be established. (author)

  3. Marginal Models for Categorical Data

    NARCIS (Netherlands)

    Bergsma, W.P.; Rudas, T.

    2002-01-01

    Statistical models defined by imposing restrictions on marginal distributions of contingency tables have received considerable attention recently. This paper introduces a general definition of marginal log-linear parameters and describes conditions for a marginal log-linear parameter to be a smooth

  4. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.

  5. Development of System Based Code: Case Study of Life-Cycle Margin Evaluation

    International Nuclear Information System (INIS)

    Tai Asayama; Masaki Morishita; Masanori Tashimo

    2006-01-01

    For a leap of progress in the structural design of nuclear plant components, the late Professor Emeritus Yasuhide Asada proposed the System Based Code. The key concepts of the System Based Code are: (1) life-cycle margin optimization, (2) expansion of technical options as well as combinations of technical options beyond the current codes and standards, and (3) designing to clearly defined target reliabilities. These concepts are very new to most nuclear power plant designers, who are naturally obliged to design to current codes and standards; applying the concepts of the System Based Code to design will lead to an entire change of the practices that designers have long been accustomed to. On the other hand, experienced designers are supposed to have expertise that can support and accelerate the development of the System Based Code. Therefore, interfacing with experienced designers is of crucial importance for the development of the System Based Code. The authors conducted a survey on the acceptability of the System Based Code concept. The results were analyzed for the possibility of improving structural design, both in terms of reliability and cost effectiveness, by the introduction of the System Based Code concept. It was concluded that the System Based Code is beneficial for those purposes. Also described is the expertise elicited from the results of the survey that can be reflected in the development of the System Based Code. (authors)

  6. Continental transform margins : state of art and future milestones

    Science.gov (United States)

    Basile, Christophe

    2010-05-01

    Transform faults were defined 45 years ago as 'a new class of fault' (Wilson, 1965), and transform margins were consequently individualized as a new class of continental margins. While transform margins represent 20 to 25% of the total length of continent-ocean transitions, they have been poorly studied, especially when compared with the amount of data, interpretations, models and conceptual progress accumulated on divergent or convergent continental margins. The best studied examples of transform margins are located in the northern part of Norway, south of South Africa, in the Gulf of California and on both sides of the Equatorial Atlantic. Here lies the Côte d'Ivoire - Ghana margin, where the most complete data set was acquired, based on numerous geological and geophysical cruises, including ODP Leg 159. The first models that encompassed the structure and evolution of transform margins were mainly driven by plate kinematic reconstructions, and evidenced the diachronic end of tectonic activity and the non-cylindrical character of these margins, with a decreasing strike-slip deformation from the convex to the concave divergent-transform intersections. Further thermo-mechanical models were more specifically designed to explain the vertical displacements along transform margins, and especially the occurrence of high-standing marginal ridges. These thermo-mechanical models involved either heat transfer from oceanic to continental lithospheres across the transform faults, or tectonically or gravity-driven mass transfer in the upper crust. These models were far from fully fitting the observations, were frequently dedicated to a specific example, and were not easily generalizable. Future work on transform continental margins may be expected to fill some scientific gaps, and the definition of working directions can benefit from the studies dedicated to other types of margins. At the regional scale, the structural and sedimentological variability of transform continental margins has

  7. A Constrained Multi-Objective Learning Algorithm for Feed-Forward Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    M. Njah

    2017-06-01

    This paper proposes a new approach to the optimal design of a Feed-forward Neural Network (FNN) based classifier. The originality of the proposed methodology, called CMOA, lies in the use of a new constraint-handling technique based on a self-adaptive penalty procedure, in order to direct the entire search effort towards finding only Pareto optimal solutions that are acceptable. Neurons and connections of the FNN classifier are dynamically built during the learning process. The approach includes differential evolution to create new individuals and then keeps only the non-dominated ones as the basis for the next generation. The designed FNN classifier was applied to six binary classification benchmark problems obtained from the UCI repository, and the results indicated the advantages of the proposed approach over other existing multi-objective evolutionary neural network classifiers reported recently in the literature.
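
    The selection step (keeping only non-dominated individuals) is the heart of such multi-objective algorithms; a toy non-dominated filter, with invented objective values for classification error and network size, might look like:

        import numpy as np

        def pareto_front(costs):
            # costs: (n, m) array of objective values, lower is better everywhere.
            # A point is dropped if some other point is <= in all objectives and
            # < in at least one -- the non-dominated filtering step of a
            # multi-objective evolutionary algorithm.
            keep = np.ones(len(costs), dtype=bool)
            for i, c in enumerate(costs):
                others = np.delete(costs, i, axis=0)
                keep[i] = not np.any(np.all(others <= c, axis=1) &
                                     np.any(others < c, axis=1))
            return costs[keep]

        objectives = np.array([[0.10, 35], [0.12, 20], [0.15, 40], [0.09, 50]])  # [error, size]
        print(pareto_front(objectives))  # [0.15, 40] is dominated and removed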

  8. Study on structural seismic margin and probabilistic seismic risk. Development of a structural capacity-seismic risk diagram

    International Nuclear Information System (INIS)

    Nakajima, Masato; Ohtori, Yasuki; Hirata, Kazuta

    2010-01-01

    Seismic margin is an extremely important index and piece of information for quantitatively evaluating and accounting for the seismic safety of critical structures, systems and components. Electric power companies are therefore required to evaluate the seismic margin of each plant in the back-checks of nuclear power plants in Japan. The seismic margin of structures is usually defined as a structural capacity margin corresponding to the design earthquake ground motion. However, there is little agreement as to the definition of the seismic margin, and we have little knowledge about the relationship between the seismic margin and the seismic risk (annual failure probability) obtained in PSA (Probabilistic Safety Assessment). The purpose of this report is to discuss a definition of structural seismic margin and to develop a diagram which can identify the relation between seismic margin and seismic risk. The main results of this paper are as follows: (1) We develop a seismic margin defined in terms of the intensity of earthquake ground motion, which is more appropriate than the conventional response-based definition, for the following reasons: a seismic margin based on earthquake ground motion is invariant when structures of different types are considered, and stakeholders can understand it better than the response-based one. (2) The developed seismic margin-risk diagram makes it easy to judge whether a detailed probabilistic risk analysis or only a deterministic analysis is needed for a given reference risk level, even when information on the uncertainty parameter beta is not available. (3) We have performed numerical simulations based on the developed method for four sites in Japan. The structural capacity-risk diagram differs from location to location because it is greatly influenced by the seismic hazard information for the target site. Furthermore, the required structural capacity
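
    The seismic risk quantity referred to above, the annual failure probability, is the convolution of a site hazard curve with a structural fragility curve; a sketch with entirely hypothetical hazard and fragility parameters:

        import numpy as np
        from scipy.stats import norm

        pga = np.linspace(0.01, 3.0, 300)                  # ground-motion intensity [g]
        hazard = 1e-4 * (pga / 0.1) ** -2.0                # annual exceedance frequency (toy model)
        median, beta = 0.9, 0.4                            # fragility: median capacity [g], log-std
        fragility = norm.cdf(np.log(pga / median) / beta)  # P(failure | intensity)

        # P_f = integral of |dH/da| * F(a) over intensity a; dH/da is negative,
        # so the sign is flipped before integrating.
        annual_pf = np.trapz(-np.gradient(hazard, pga) * fragility, pga)
        print(f"annual failure probability: {annual_pf:.2e}")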

  9. SVM classifier on chip for melanoma detection.

    Science.gov (United States)

    Afifi, Shereen; GholamHosseini, Hamid; Sinha, Roopak

    2017-07-01

    Support Vector Machine (SVM) is a common classifier used for efficient classification with high accuracy. SVMs show high accuracy for classifying melanoma (skin cancer) clinical images within the computer-aided diagnosis systems used by skin cancer specialists to detect melanoma early and save lives. We aim to develop a low-cost handheld medical device that runs a real-time embedded SVM-based diagnosis system for use in primary care for early detection of melanoma. In this paper, an optimized SVM classifier is implemented on a recent FPGA platform, using the latest design methodology, to be embedded into the proposed device for realizing efficient online melanoma detection on a single system-on-chip/device. The hardware implementation results demonstrate a high classification accuracy of 97.9% and a significant acceleration factor of 26 over an equivalent software implementation on an embedded processor, with 34% resource utilization and 2 W power consumption. Consequently, the implemented system meets the crucial embedded-system constraints of high performance and low cost, resource utilization and power consumption, while achieving high classification accuracy.
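
    The paper's fixed-point FPGA design is not reproduced here, but the following sketch shows why a linear SVM maps naturally onto hardware: after training, inference reduces to one multiply-accumulate pass and a sign test (random placeholder features; the actual system may use a different kernel):

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(8)
        X = rng.normal(size=(200, 10))    # placeholder image-derived feature vectors
        y = rng.integers(0, 2, size=200)  # placeholder: 1 = melanoma

        svm = SVC(kernel="linear").fit(X, y)
        w, b = svm.coef_.ravel(), svm.intercept_[0]

        def predict(x):
            # One dot product plus a threshold -- the operation an FPGA
            # implementation pipelines into multiply-accumulate units.
            return int(x @ w + b > 0)

        print(predict(X[0]))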

  10. Digital Margins : How spatially and socially marginalized communities deal with digital exclusion

    NARCIS (Netherlands)

    Salemink, Koen

    2016-01-01

    The increasing importance of the Internet as a means of communication has transformed economies and societies. For spatially and socially marginalized communities, this transformation has resulted in digital exclusion and further marginalization. This book presents a study of two kinds of

  11. Volcanic passive margins: another way to break up continents.

    Science.gov (United States)

    Geoffroy, L; Burov, E B; Werner, P

    2015-10-07

    Two major types of passive margins are recognized, i.e. volcanic and non-volcanic, without distinctive mechanisms having been proposed for their formation. Volcanic passive margins (VPMs) are associated with the extrusion and intrusion of large volumes of magma, predominantly mafic, and represent distinctive features of Large Igneous Provinces, in which regional fissural volcanism predates localized syn-magmatic break-up of the lithosphere. In contrast with non-volcanic margins, continentward-dipping detachment faults accommodate crustal necking at both conjugate volcanic margins. These faults root on a two-layer deformed ductile crust that appears to be partly of igneous nature. This lower crust is exhumed up to the bottom of the syn-extension extrusives at the outer parts of the margin. Our numerical modelling suggests that strengthening of the deep continental crust during the early magmatic stages provokes a divergent flow of the ductile lithosphere away from a central continental block, which becomes thinner with time due to the flow-induced mechanical erosion acting at its base. Crustal-scale faults dipping continentward are rooted over this flowing material, thus isolating micro-continents within the future oceanic domain. Pure-shear type deformation affects the bulk lithosphere at VPMs until continental breakup, and the geometry of the margin is closely related to the dynamics of an active and melting mantle.

  12. Calculating the marginal costs of a district-heating utility

    International Nuclear Information System (INIS)

    Sjoedin, Joergen; Henning, Dag

    2004-01-01

    District heating plays an important role in the Swedish heat market. At the same time, the price of district heating varies considerably among district-heating utilities. A case study is performed here in which a Swedish utility is analysed using three different methods for calculating the marginal costs of heat supply: a manual spreadsheet method, an optimising linear-programming model, and a least-cost dispatch simulation model. The marginal costs obtained with the three methods turn out to be similar. The calculated marginal costs are also compared to the actual heat tariff used by the utility. Pricing based on marginal costs should be able to bring about an efficient resource allocation. It is found that the fixed rate the utility uses today should be replaced by a time-of-use rate, which would give customers a more accurate signal to change their heat consumption. (Author)
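
    The linear-programming view of marginal cost can be made concrete: in a least-cost dispatch model, the marginal cost of heat is the shadow price of the demand constraint. A sketch with hypothetical unit data:

        import numpy as np
        from scipy.optimize import linprog

        cost = np.array([18.0, 32.0, 55.0])  # variable cost per unit [EUR/MWh]
        cap = np.array([40.0, 30.0, 50.0])   # unit capacity [MW]
        demand = 65.0                        # heat load [MW]

        res = linprog(cost, A_eq=np.ones((1, 3)), b_eq=[demand],
                      bounds=list(zip(np.zeros(3), cap)), method="highs")

        # The dual value (shadow price) of the demand constraint is the cost of
        # serving one extra MW at this load level, i.e. the marginal cost
        # (here, the cost of the second, partly loaded unit).
        print("dispatch [MW]:", res.x)
        print("marginal cost [EUR/MWh]:", res.eqlin.marginals[0])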

  13. Ensembles of novelty detection classifiers for structural health monitoring using guided waves

    Science.gov (United States)

    Dib, Gerges; Karpenko, Oleksii; Koricho, Ermias; Khomenko, Anton; Haq, Mahmoodul; Udpa, Lalita

    2018-01-01

    Guided wave structural health monitoring uses sparse sensor networks embedded in sophisticated structures for defect detection and characterization. The biggest challenge for those sensor networks is developing robust techniques for reliable damage detection under changing environmental and operating conditions (EOC). To address this challenge, we develop a novelty classifier for damage detection based on one-class support vector machines. We identify appropriate features for damage detection and introduce a feature aggregation method which quadratically increases the number of available training observations. We adopt a two-level voting scheme using an ensemble of classifiers and predictions: each classifier is trained on a different segment of the guided wave signal, and each classifier makes an ensemble of predictions based on a single observation. Using this approach, the classifier can be trained using a small number of baseline signals. We study the performance using Monte Carlo simulations of an analytical model and data from impact damage experiments on a glass fiber composite plate. We also demonstrate the classifier performance using two types of baseline training sets: fixed and rolling. The former requires prior knowledge of baseline signals from all EOC, while the latter does not, and leverages the fact that EOC vary slowly over time and can be modeled as a Gaussian process.
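
    A minimal sketch of the two-level scheme (random placeholder features; the segment count, nu, and vote threshold are invented): one one-class SVM per signal segment, with a majority vote across segments:

        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(9)
        N, S, F = 50, 8, 6                      # baselines, signal segments, features
        baseline = rng.normal(size=(N, S, F))   # placeholder undamaged observations

        # One novelty detector per guided-wave signal segment (first level).
        models = [OneClassSVM(nu=0.05, gamma="scale").fit(baseline[:, s, :]) for s in range(S)]

        def is_damaged(obs, threshold=0.5):
            # obs: (S, F) features of a new measurement; vote across segments.
            votes = [models[s].predict(obs[s:s + 1])[0] == -1 for s in range(S)]  # -1 = novelty
            return np.mean(votes) > threshold

        print(is_damaged(rng.normal(size=(S, F)) + 3.0))  # shifted features -> likely flagged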

  14. Informing practice regarding marginalization: the application of the Koci Marginality Index.

    Science.gov (United States)

    Koci, Anne Floyd; McFarlane, Judith; Nava, Angeles; Gilroy, Heidi; Maddoux, John

    2012-12-01

    The 49th World Health Assembly of the World Health Organization (WHO) declared violence the leading worldwide public health problem, with a focus on the increase in the incidence of injuries to women (Krug et al., 2002). Violence against women is an international epidemic, and specific instruments are required to measure its impact on women's functioning. This article describes the application of the Koci Marginality Index (KMI), a 5-item scale measuring marginality, to the baseline data of a seven-year prospective study of 300 abused women: 150 first-time users of a shelter and 150 first-time applicants for a protection order from the justice system. The validity and reliability of the Koci Marginality Index and its usefulness for best clinical practice and for policy decisions concerning abused women's health are discussed. Violence against women in the form of intimate partner violence (IPV) is costly in terms of dollars and health. In the United States in 2003, estimated costs of IPV approached $8.3 billion (Centers for Disease Control and Prevention [CDC], 2011). Outcomes related to the severity of IPV vary, but in 2003 victims suffering severe IPV lost nearly 8 million days of paid work and greater than 5 million days of household productivity annually (CDC, 2011). Besides the evident financial cost of IPV, research confirms that exposure to IPV affects a woman's health immediately and in the long term (Breiding, Black, & Ryan, 2008; Campbell, 2002; CDC, 2011). Such sequelae adversely affect the health of women and may increase their marginalization, a concept akin to isolation that may further increase negative effects on health outcomes. Immigrant women are at high risk for IPV (Erez, 2002) and those without documentation are at higher risk for marginalization (Montalvo

  15. Geological Constraints on the Evolution of the Angolan Margin Based on Reflection and Refraction Seismic Data (ZaïAngo project)

    Science.gov (United States)

    Moulin, M.; Aslanian, D.; Olivet, J.; Contrucci, I.; Matias, L.; Geli, L.; Klingelhoefer, F.; Nouze, H.; Rabineau, M.; Labails, C.; Rehault, J.; Unternehr, P.

    2005-05-01

    Deep penetration multi-channel reflection and OBS wide-angle seismic data from the Congo-Angola margin were collected in 2000 during the ZaiAngo cruise (Ifremer and Total). These data help constrain the deep structure of the non-volcanic continental margin, the geometry of the pre-salt sediment layers, and the geometry of the Aptian salt layer. Dating the deposition of the salt relative to the chronology of the margin formation is an issue of fundamental importance for reconstructing the evolution of the margin and for understanding the crustal thinning processes. The data show that the crust thins abruptly, from a thickness of 30-40 km to less than 10 km, over a lateral distance of less than 50 km. The transitional domain is a 180 km wide basin with a thickness lower than 7 km. The pre-salt sediment layering within this basin is parallel to the base of the salt and hardly affected by tectonic deformation. In addition, the presence of a continuous salt cover, from the continental platform down to the presumed oceanic boundary, provides indications on the conditions of salt deposition that constrain the geometry of the margin at that time. These crucial observations imply shallow deposition environments during the rifting and suggest that vertical motions, rather than horizontal motions, prevailed during the formation of the basin.

  16. A novel statistical method for classifying habitat generalists and specialists

    DEFF Research Database (Denmark)

    Chazdon, Robin L; Chao, Anne; Colwell, Robert K

    2011-01-01

    The method classifies each species, based on its abundance in two habitats, into one of four groups: (1) generalist; (2) habitat A specialist; (3) habitat B specialist; and (4) too rare to classify with confidence. We illustrate our multinomial classification method using two contrasting data sets: (1) bird abundance in woodland and heath habitats in southeastern Australia and (2) tree abundance in second-growth (SG) and old-growth (OG) rain forests in the Caribbean lowlands of northeastern Costa Rica. We evaluate the multinomial model in detail for the tree data set. Our results for birds were highly concordant with a previous nonstatistical classification, but our method classified a higher fraction (57.7%) of bird species with statistical confidence. Based on a conservative specialization threshold and adjustment for multiple comparisons, 64.4% of tree species in the full sample were too rare to classify with confidence. Among the species classified, OG specialists constituted the largest...

  17. Ship localization in Santa Barbara Channel using machine learning classifiers.

    Science.gov (United States)

    Niu, Haiqiang; Ozanich, Emma; Gerstoft, Peter

    2017-11-01

    Machine learning classifiers are shown to outperform conventional matched field processing for a deep water (600 m depth) ocean acoustic-based ship range estimation problem in the Santa Barbara Channel Experiment when limited environmental information is known. Recordings of three different ships of opportunity on a vertical array were used as training and test data for the feed-forward neural network and support vector machine classifiers, demonstrating the feasibility of machine learning methods to locate unseen sources. The classifiers perform well up to 10 km range whereas the conventional matched field processing fails at about 4 km range without accurate environmental information.

  18. Breast conservation therapy based on liberal selection criteria and less extensive surgery. Analysis of cases with positive margins

    International Nuclear Information System (INIS)

    Amemiya, Atsushi; Kondo, Makoto

    1999-01-01

    The relationship between margin status and the risk of in-breast recurrence (IBR) is an important consideration in patients treated with breast conservation therapy but has not been defined adequately. To address this issue, 1533 clinical stage I and II patients who completed irradiation therapy between 1983 and 1998 were evaluated. The only selection criterion was whether the patient could be satisfied with the cosmesis after lumpectomy; size and location of the tumor, nodal status, histology and age were not primary considerations. The tumor was excised in such a way as to obtain macroscopically clear margins. The breast was treated with 50 Gy of external irradiation but without a boost. Margins were evaluated by serial sectioning of the specimen, and a margin was judged positive only when cancer cells were present on the inked surface. Margins were also evaluated by scratch cytology. Seventy-two IBRs were experienced within 5 years. Only age and margin status were found to be independent risk factors. The five-year IBR rate with negative and positive margins was 3.7% and 10.0%, respectively. In patients with positive margins, the number of positive sites and positive cytology were independent risk factors for IBR. The IBR rate among patients with focally involved margins was 0.0% for non-comedo DCIS, 3.5% for comedo DCIS, and 8.7% for invasive carcinoma; with more than focal involvement, the corresponding rates were 4.0%, 33.0%, and 30.0%. If a histologically positive margin was also cytologically positive, the IBR rate was 14.8%, whereas it was only 3.6% if cytologically negative. Even with liberal patient selection and less extensive local treatment, adequate local control can be obtained, provided that margins are histologically and/or cytologically negative. Focal margin involvement by DCIS, or more than focal involvement by non-comedo type DCIS, does not jeopardize local control. More than focal involvement by comedo DCIS, or involvement by invasive carcinoma, results in a high IBR rate.

  19. A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification

    Science.gov (United States)

    Zhang, Ce; Pan, Xin; Li, Huapeng; Gardiner, Andy; Sargent, Isabel; Hare, Jonathon; Atkinson, Peter M.

    2018-06-01

    The contextual-based convolutional neural network (CNN) with a deep architecture and the pixel-based multilayer perceptron (MLP) with a shallow structure are well-recognized neural network algorithms, representing the state-of-the-art deep learning method and the classical non-parametric machine learning approach, respectively. The two algorithms, which have very different behaviours, were integrated in a concise and effective way using a rule-based decision fusion approach for the classification of very fine spatial resolution (VFSR) remotely sensed imagery. The decision fusion rules, designed primarily based on the classification confidence of the CNN, reflect the generally complementary patterns of the individual classifiers. In consequence, the proposed ensemble classifier MLP-CNN harvests the complementary results acquired from the CNN, based on deep spatial feature representation, and from the MLP, based on spectral discrimination. Meanwhile, limitations of the CNN due to the adoption of convolutional filters, such as uncertainty in object boundary partition and loss of useful fine spatial resolution detail, were compensated for. The effectiveness of the ensemble MLP-CNN classifier was tested in both urban and rural areas using aerial photography together with an additional satellite sensor dataset. The MLP-CNN classifier achieved promising performance, consistently outperforming the pixel-based MLP, the spectral and textural-based MLP, and the contextual-based CNN in terms of classification accuracy. This research paves the way to effectively addressing the complicated problem of VFSR image classification.
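
    The fusion idea can be sketched in a few lines (a simplified stand-in, not the paper's exact rules: a single confidence threshold tau decides, per pixel, whether the CNN or the MLP output is used):

        import numpy as np

        def fuse(cnn_probs, mlp_probs, tau=0.9):
            # cnn_probs, mlp_probs: (H, W, n_classes) per-pixel class probabilities.
            # Trust the CNN where its classification confidence is high; fall
            # back to the pixel-based MLP elsewhere.
            cnn_conf = cnn_probs.max(axis=-1)
            fused = np.where(cnn_conf[..., None] >= tau, cnn_probs, mlp_probs)
            return fused.argmax(axis=-1)   # (H, W) fused label map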

  20. Margin Evaluation in the Presence of Deformation, Rotation, and Translation in Prostate and Entire Seminal Vesicle Irradiation With Daily Marker-Based Setup Corrections

    International Nuclear Information System (INIS)

    Mutanga, Theodore F.; Boer, Hans C.J. de; Wielen, Gerard J. van der; Hoogeman, Mischa S.; Incrocci, Luca; Heijmen, Ben J.M.

    2011-01-01

    Purpose: To develop a method for margin evaluation accounting for all measured displacements during treatment of prostate cancer. Methods and Materials: For 21 patients treated with stereographic targeting marker-based online translation corrections, dose distributions with varying margins and gradients were created. Sets of possible cumulative delivered dose distributions were simulated by moving voxels and accumulating dose per voxel. Voxel motion was simulated consistent with measured distributions of systematic and random displacements due to stereographic targeting inaccuracies, deformation, rotation, and intrafraction motion. The method of simulation maintained measured correlation of voxel motions due to organ deformation. Results: For the clinical target volume including prostate and seminal vesicles (SV), the probability that some part receives <95% of the prescribed dose, the changes in minimum dose, and volume receiving 95% of prescription dose compared with planning were 80.5% ± 19.2%, 9.0 ± 6.8 Gy, and 3.0% ± 3.7%, respectively, for the smallest studied margins (3 mm prostate, 5 mm SV) and steepest dose gradients. Corresponding values for largest margins (5 mm prostate, 8 mm SV) with a clinical intensity-modulated radiotherapy dose distribution were 46.5% ± 34.7%, 6.7 ± 5.8 Gy, and 1.6% ± 2.3%. For prostate-only clinical target volume, the values were 51.8% ± 17.7%, 3.3 ± 1.6 Gy, and 0.6% ± 0.5% with the smallest margins and 5.2% ± 7.4%, 1.8 ± 0.9 Gy, and 0.1% ± 0.1% for the largest margins. Addition of three-dimensional rotation corrections only improved these values slightly. All rectal planning constraints were met in the actual reconstructed doses for all studied margins. Conclusion: We developed a system for margin validation in the presence of deformations. In our population, a 5-mm margin provided sufficient dosimetric coverage for the prostate. In contrast, an 8-mm SV margin was still insufficient owing to deformations. Addition of

  1. Classifying features in CT imagery: accuracy for some single- and multiple-species classifiers

    Science.gov (United States)

    Daniel L. Schmoldt; Jing He; A. Lynn Abbott

    1998-01-01

    Our current approach to automatically label features in CT images of hardwood logs classifies each pixel of an image individually. These feature classifiers use a back-propagation artificial neural network (ANN) and feature vectors that include a small, local neighborhood of pixels and the distance of the target pixel to the center of the log. Initially, this type of...

  2. Using a Marginal Structural Model to Design a Theory-Based Mass Media Campaign.

    Directory of Open Access Journals (Sweden)

    Hiromu Nishiuchi

    The essential first step in the development of mass media health campaigns is to identify specific beliefs of the target audience. The challenge is to prioritize suitable beliefs derived from behavioral theory. The purpose of this study was to identify suitable beliefs to target in a mass media campaign to change behavior, using a new method to estimate the possible effect size of a small set of beliefs. Data were drawn from the 2010 Japanese Young Female Smoker Survey (n = 500), conducted by the Japanese Ministry of Health, Labor and Welfare. Survey measures included intention to quit smoking, psychological beliefs (attitude, norms, and perceived control) based on the theory of planned behavior, and socioeconomic status (age, education, household income, and marital status). To identify suitable candidate beliefs for a mass media health campaign, we estimated the possible effect size required to change the intention to quit smoking among the population of young Japanese women, using the population attributable fraction from a marginal structural model. Thirteen percent of study participants intended to quit smoking. The marginal structural model estimated the population attributable fraction of 47 psychological beliefs (21 attitudes, 6 norms, and 19 perceived controls) after controlling for socioeconomic status. The belief "I could quit smoking if my husband or significant other recommended it" suggested a promising target for a mass media campaign (population attributable fraction = 0.12, 95% CI = 0.02-0.23). Messages targeting this belief could possibly improve intention rates by up to 12% among this population. The analysis also suggested the potential for regulatory action. This study proposed a method by which campaign planners can develop theory-based mass communication strategies to change health behaviors at the population level. This method might contribute to improving the quality of future mass health communication strategies, and further research is needed.

  3. Using a Marginal Structural Model to Design a Theory-Based Mass Media Campaign.

    Science.gov (United States)

    Nishiuchi, Hiromu; Taguri, Masataka; Ishikawa, Yoshiki

    2016-01-01

    The essential first step in the development of mass media health campaigns is to identify specific beliefs of the target audience. The challenge is to prioritize suitable beliefs derived from behavioral theory. The purpose of this study was to identify suitable beliefs to target in a mass media campaign to change behavior using a new method to estimate the possible effect size of a small set of beliefs. Data were drawn from the 2010 Japanese Young Female Smoker Survey (n = 500), conducted by the Japanese Ministry of Health, Labor and Welfare. Survey measures included intention to quit smoking, psychological beliefs (attitude, norms, and perceived control) based on the theory of planned behavior and socioeconomic status (age, education, household income, and marital status). To identify suitable candidate beliefs for a mass media health campaign, we estimated the possible effect size required to change the intention to quit smoking among the population of young Japanese women using the population attributable fraction from a marginal structural model. Thirteen percent of study participants intended to quit smoking. The marginal structural model estimated a population attributable fraction of 47 psychological beliefs (21 attitudes, 6 norms, and 19 perceived controls) after controlling for socioeconomic status. The belief, "I could quit smoking if my husband or significant other recommended it" suggested a promising target for a mass media campaign (population attributable fraction = 0.12, 95% CI = 0.02-0.23). Messages targeting this belief could possibly improve intention rates by up to 12% among this population. The analysis also suggested the potential for regulatory action. This study proposed a method by which campaign planners can develop theory-based mass communication strategies to change health behaviors at the population level. This method might contribute to improving the quality of future mass health communication strategies and further research is needed.
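
    For orientation, the population attributable fraction (PAF) that drives the ranking above has a standard closed form in the simple exposed/unexposed case. A minimal sketch follows, using the textbook formula rather than the authors' marginal-structural-model estimator; the numbers are illustrative, not taken from the survey.

    ```python
    def population_attributable_fraction(p_exposed, relative_risk):
        """Textbook PAF: fraction of the outcome attributable to the exposure,
        given exposure prevalence and the relative risk of the outcome."""
        excess = p_exposed * (relative_risk - 1.0)
        return excess / (1.0 + excess)

    # Illustrative numbers only (not from the study):
    print(population_attributable_fraction(p_exposed=0.4, relative_risk=1.6))  # ~0.19
    ```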

  4. Marginal Abatement Cost of CO2 in China Based on Directional Distance Function: An Industry Perspective

    Directory of Open Access Journals (Sweden)

    Bowen Xiao

    2017-01-01

    Full Text Available Industrial sectors account for around 70% of the total energy-related CO2 emissions in China. It is therefore of great importance to measure the potential for CO2 emissions reduction and to calculate the carbon price in the industrial sectors covered by the Emissions Trading Scheme and the carbon tax. This paper employs the directional distance function to calculate the marginal abatement costs (MACs) of CO2 emissions during 2005–2011 and makes a comparative analysis between our study and the relevant literature. Our empirical results show that the marginal abatement costs vary greatly from industry to industry: high marginal abatement costs occur in industries with low carbon intensity, and vice versa. In the application of the marginal abatement cost, an abatement distribution scheme with minimum cost is established under different abatement targets. The conclusions of the abatement distribution scheme indicate that heavy industries with low MACs and high carbon intensity should take more responsibility for emissions reduction, and vice versa. Finally, policy implications of the marginal abatement cost are provided.

  5. Assessment of ablative margin after radiofrequency ablation for hepatocellular carcinoma; comparison between magnetic resonance imaging with ferucarbotran and enhanced CT with iodized oil deposition

    International Nuclear Information System (INIS)

    Koda, Masahiko; Tokunaga, Shiho; Fujise, Yuki; Kato, Jun; Matono, Tomomitsu; Sugihara, Takaaki; Nagahara, Takakazu; Ueki, Masaru; Murawaki, Yoshikazu; Kakite, Suguru; Yamashita, Eijiro

    2012-01-01

    Background and purpose: Our aim was to investigate whether magnetic resonance imaging (MRI) with ferucarbotran administered prior to radiofrequency ablation could accurately assess the ablative margin when compared with enhanced computed tomography (CT) with iodized oil marking. Materials and methods: We enrolled 27 patients with 32 hepatocellular carcinomas in which iodized oil deposits were visible throughout the nodule after transcatheter arterial chemoembolization. For these nodules, radiofrequency ablation was performed after ferucarbotran administration. We then performed T2-weighted MRI after 1 week and enhanced CT after 1 month. T2-weighted MRI demonstrated the ablative margin as a low-intensity rim. We classified the margin into three grades: margin (+), a high-intensity area with a continuous low-intensity rim; margin zero, a high-intensity area with a discontinuous low-intensity rim; and margin (−), a high-intensity area extending beyond the low-intensity rim. Results: In 28 (86%) of 32 nodules, there was agreement between MRI and CT. The overall agreement between the two modalities in the assessment of the ablative margin was good (κ = 0.759, 95% confidence interval: 0.480–1.000, p < 0.001). In four nodules, ablative margins on MRI were underestimated by one grade compared with CT. Conclusion: MRI using ferucarbotran is less invasive and allows earlier assessment than CT. The MRI technique performed similarly to enhanced CT with iodized oil marking in evaluating the ablative margin after radiofrequency ablation.
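
    The agreement statistic reported above (κ = 0.759) is a Cohen's kappa between the two modalities' per-nodule grades. A minimal sketch of such a computation, with hypothetical grade labels rather than the study's data:

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical per-nodule margin grades from the two modalities:
    # +1 = margin (+), 0 = margin zero, -1 = margin (-)
    mri = [1, 1, 0, 0, 1, -1, 0, 1]
    ct  = [1, 1, 0, 1, 1, -1, 0, 1]
    print(cohen_kappa_score(mri, ct))  # 1.0 means perfect agreement, 0 chance-level
    ```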

  6. A web-based non-intrusive ambient system to measure and classify activities of daily living.

    Science.gov (United States)

    Stucki, Reto A; Urwyler, Prabitha; Rampa, Luca; Müri, René; Mosimann, Urs P; Nef, Tobias

    2014-07-21

    The number of older adults in the global population is increasing. This demographic shift leads to an increasing prevalence of age-associated disorders, such as Alzheimer's disease and other types of dementia. With the progression of the disease, the risk for institutional care increases, which contrasts with the desire of most patients to stay in their home environment. Despite doctors' and caregivers' awareness of the patient's cognitive status, they are often uncertain about its consequences on activities of daily living (ADL). To provide effective care, they need to know how patients cope with ADL, in particular, the estimation of risks associated with the cognitive decline. The occurrence, performance, and duration of different ADL are important indicators of functional ability. The patient's ability to cope with these activities is traditionally assessed with questionnaires, an approach that has disadvantages (e.g., lack of reliability and sensitivity). Several groups have proposed sensor-based systems to recognize and quantify these activities in the patient's home. Combined with Web technology, these systems can inform caregivers about their patients in real-time (e.g., via smartphone). We hypothesize that a non-intrusive system, which does not use body-mounted sensors, video-based imaging, or microphone recordings, would be better suited for use in dementia patients. Because it does not require the patient's attention and compliance, such a system might be well accepted by patients. We present a passive, Web-based, non-intrusive, assistive technology system that recognizes and classifies ADL. The components of this novel assistive technology system were wireless sensors distributed in every room of the participant's home and a central computer unit (CCU). The environmental data were acquired for 20 days (per participant) and then stored and processed on the CCU. In consultation with medical experts, eight ADL were classified. In this study, 10 healthy participants (6 women

  7. A Naive-Bayes classifier for damage detection in engineering materials

    Energy Technology Data Exchange (ETDEWEB)

    Addin, O. [Laboratory of Intelligent Systems, Institute of Advanced Technology, Universiti Putra Malaysia, 43400 Serdang, Selangor (Malaysia); Sapuan, S.M. [Department of Mechanical and Manufacturing Engineering, Universiti Putra Malaysia, 43400 Serdang, Selangor (Malaysia)]. E-mail: sapuan@eng.upm.edu.my; Mahdi, E. [Department of Aerospace Engineering, Universiti Putra Malaysia, 43400 Serdang, Selangor (Malaysia); Othman, M. [Department of Communication Technology and Networks, Universiti Putra Malaysia, 43400 Serdang, Selangor (Malaysia)

    2007-07-01

    This paper is intended to introduce the Bayesian network in general, and the Naive-Bayes classifier in particular, as one of the most successful classification systems for simulating damage detection in engineering materials. A method for feature subset selection has also been introduced. The method is based on the mean and maximum values of the amplitudes of waves after dividing them into folds and then grouping them by a clustering algorithm (e.g., the k-means algorithm). The Naive-Bayes classifier and the feature subset selection method were analyzed and tested on two sets of data. The data sets were generated from artificial damage created in quasi-isotropic laminated composites of the AS4/3501-6 graphite/epoxy system and in a ball bearing of type 6204 with a steel cage. The Naive-Bayes classifier and the proposed feature subset selection algorithm were shown to be efficient techniques for damage detection in engineering materials.
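
    A minimal sketch of the described pipeline, fold-wise mean and maximum amplitudes as features feeding a Naive-Bayes classifier, is shown below on synthetic signals. The fold count, signal model, and class separation are assumptions for illustration; the paper's clustering-based feature grouping step is omitted.

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(1)

    def fold_features(signal, n_folds=8):
        """Mean and maximum amplitude per fold of the rectified signal."""
        folds = np.array_split(np.abs(signal), n_folds)
        return np.concatenate([[f.mean(), f.max()] for f in folds])

    # Synthetic stand-in for the wave data (0 = undamaged, 1 = damaged);
    # damaged signals simply have higher amplitude variance here.
    labels = np.array([0, 1] * 50)
    X = np.array([fold_features(rng.normal(0.0, 1.0 + 0.5 * c, 256)) for c in labels])

    clf = GaussianNB().fit(X[:80], labels[:80])
    print("held-out accuracy:", clf.score(X[80:], labels[80:]))
    ```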

  8. Relief for marginal wells is better than energy tax

    International Nuclear Information System (INIS)

    Swords, J.; Wilson, D.

    1993-01-01

    By increasing production costs and reducing petroleum prices, President Bill Clinton's proposed energy tax would increase marginal well abandonments and hasten the decline of the US oil and gas industry. Instead, the US needs tax law changes to help counteract the increasing number of oil and gas well abandonments in the lower 48 states. The proposed tax would create potential difficulties, while three incentives could be introduced to reduce abandonments and at the same time preserve US government tax revenues that otherwise would be lost. Eliminating the net income limitation on percentage depletion allowances on wells that would otherwise be abandoned would be a great help for marginal well operators. Extended enhanced oil recovery (EOR) credits and broader investment tax credits could also serve the dual purpose of keeping marginal wells operating longer and generating more federal tax revenues. A marginal well investment tax credit should be provided that is not just a credit for incremental investments that exceed investment in prior years. An investment tax credit based on out-of-pocket costs of production, targeted for marginal wells, would be an important incentive to invest in, and continue to maintain, these properties. (author)

  9. Criticality safety margins for mixtures of fissionable materials

    International Nuclear Information System (INIS)

    Williamson, T.G.; Mincey, J.F.

    1992-01-01

    In the determination of criticality safety margins, approximations for combinations of fissile and fissionable isotopes are sometimes used that go by names such as the rule of fractions or equivalency relations. Use of the rule of fractions to ensure criticality safety margins was discussed in an earlier paper. The purpose of this paper is to correct errors and to clarify some of the implications. Deviations of safety margins from those calculated by the rule of fractions are still noted; however, the deviations are less severe. Caution in applying such rules is still urged. In general, these approximations are based on American National Standard ANSI/ANS-8.15, Sec. 5.2. This section allows that ratios of material masses to their limits may be summed for fissile nuclides in aqueous solutions. It also allows the addition of nonfissile nuclides if an aqueous moderator is present and addresses the effects of infinite water or equivalent reflector. Water-reflected binary combinations of aqueous solutions of fissile materials, as well as binary combinations of fissile and fissionable metals, were considered. Some combinations were shown to significantly decrease the margin of subcriticality compared to the single-unit margins. In this study, it is confirmed that some combinations of metal units in an optimum geometry may significantly decrease the margin of subcriticality. For some combinations of aqueous solutions of fissile materials, the margin of subcriticality may also be reduced by very small amounts. The conclusion of Ref. 1 that analysts should be careful in applying equivalency relations for combining materials remains valid and sound advice. The ANSI/ANS standard, which allows the use of ratios of masses to their limits, applies to aqueous, fully water-reflected, single-unit solutions. Extensions to other situations should be considered with extreme care.

  10. Species-Level Differences in Hyperspectral Metrics among Tropical Rainforest Trees as Determined by a Tree-Based Classifier

    Directory of Open Access Journals (Sweden)

    Dar A. Roberts

    2012-06-01

    Full Text Available This study explores a method to classify seven tropical rainforest tree species from full-range (400–2,500 nm) hyperspectral data acquired at tissue (leaf and bark), pixel, and crown scales using laboratory and airborne sensors. Metrics that respond to vegetation chemistry and structure were derived using narrowband indices, derivative- and absorption-based techniques, and spectral mixture analysis. We then used the Random Forests tree-based classifier to discriminate species with minimally-correlated, importance-ranked metrics. At all scales, best overall accuracies were achieved with metrics derived from all four techniques that targeted chemical and structural properties across the visible to shortwave infrared spectrum (400–2,500 nm). For tissue spectra, overall accuracies were 86.8% for leaves, 74.2% for bark, and 84.9% for leaves plus bark. Variation in tissue metrics was best explained by an axis of red absorption related to photosynthetic leaves and an axis distinguishing bark water and other chemical absorption features. Overall accuracies for individual tree crowns were 71.5% for pixel spectra, 70.6% for crown-mean spectra, and 87.4% for a pixel-majority technique. At pixel and crown scales, tree structure and phenology at the time of image acquisition were important factors that determined species spectral separability.

  11. LCC: Light Curves Classifier

    Science.gov (United States)

    Vo, Martin

    2017-08-01

    Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample and can then be used for unsupervised clustering to visualize the natural separation of the sample. The package can also be used for automatically tuning the parameters of the methods used (for example, the number of hidden neurons or the binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curves Classifier can also be used for simple downloading of light curves and all available information on queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina, and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and the command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases, and filtering outputs by trained filters. Preimplemented descriptors, classifiers, and connectors can be picked by simple clicks, and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

  12. Discrimination-Aware Classifiers for Student Performance Prediction

    Science.gov (United States)

    Luo, Ling; Koprinska, Irena; Liu, Wei

    2015-01-01

    In this paper we consider discrimination-aware classification of educational data. Mining and using rules that distinguish groups of students based on sensitive attributes such as gender and nationality may lead to discrimination. It is desirable to keep the sensitive attributes during the training of a classifier to avoid information loss but…

  13. Two-categorical bundles and their classifying spaces

    DEFF Research Database (Denmark)

    Baas, Nils A.; Bökstedt, M.; Kro, T.A.

    2012-01-01

    -category is a classifying space for the associated principal 2-bundles. In the process of proving this we develop a lot of powerful machinery which may be useful in further studies of 2-categorical topology. As a corollary we get a new proof of the classification of principal bundles. A calculation based...

  14. Using multivariate machine learning methods and structural MRI to classify childhood onset schizophrenia and healthy controls

    Directory of Open Access Journals (Sweden)

    Deanna eGreenstein

    2012-06-01

    Full Text Available Introduction: Multivariate machine learning methods can be used to classify groups of schizophrenia patients and controls using structural magnetic resonance imaging (MRI). However, machine learning methods to date have not been extended beyond classification and contemporaneously applied in a meaningful way to clinical measures. We hypothesized that brain measures would classify groups, and that increased likelihood of being classified as a patient using regional brain measures would be positively related to illness severity, developmental delays, and genetic risk. Methods: Using 74 anatomic brain MRI subregions and Random Forest, we classified 98 childhood onset schizophrenia (COS) patients and 99 age-, sex-, and ethnicity-matched healthy controls. We also used Random Forest to determine the likelihood of being classified as a schizophrenia patient based on MRI measures. We then explored relationships between brain-based probability of illness and symptoms, premorbid development, and presence of copy number variation (CNV) associated with schizophrenia. Results: Brain regions jointly classified COS and control groups with 73.7% accuracy. Greater brain-based probability of illness was associated with worse functioning (p = 0.0004) and fewer developmental delays (p = 0.02). Presence of CNV was associated with lower probability of being classified as schizophrenia (p = 0.001). The regions that were most important in classifying groups included the left temporal lobes, bilateral dorsolateral prefrontal regions, and left medial parietal lobes. Conclusions: Schizophrenia and control groups can be well classified using Random Forest and anatomic brain measures, and brain-based probability of illness has a positive relationship with illness severity and a negative relationship with developmental delays/problems and CNV-based risk.
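
    The workflow above, a Random Forest on regional brain measures with a per-subject "probability of illness" read off the forest, can be sketched as follows. The data here are synthetic stand-ins for the 74 anatomic measures, and using the out-of-bag decision function as the probability is one reasonable choice, not necessarily the authors' exact procedure.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)

    # Synthetic stand-in for 74 regional brain volumes (no real MRI data).
    n_per_group = 99
    X = np.vstack([rng.normal(0.0, 1.0, size=(n_per_group, 74)),   # controls
                   rng.normal(0.3, 1.0, size=(n_per_group, 74))])  # patients
    y = np.array([0] * n_per_group + [1] * n_per_group)

    rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                random_state=0).fit(X, y)
    prob_illness = rf.oob_decision_function_[:, 1]  # out-of-bag P(patient) per subject
    print("OOB accuracy:", rf.oob_score_)
    print("most important regions (indices):", np.argsort(rf.feature_importances_)[-5:])
    ```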

  15. A Fuzzy Logic-Based Personalized Method to Classify Perceived Exertion in Workplaces Using a Wearable Heart Rate Sensor

    Directory of Open Access Journals (Sweden)

    Pablo Pancardo

    2018-01-01

    Full Text Available Knowing the perceived exertion of workers during their physical activities facilitates the decision-making of supervisors regarding allocation of workers to appropriate jobs, actions to prevent accidents, and reassignment of tasks, among others. However, although wearable heart rate sensors represent an effective way to capture perceived exertion, ergonomic methods are generic and do not consider the diffuse nature of the ranges that classify the efforts. Personalized monitoring is needed to enable a real and efficient classification of perceived individual efforts. In this paper, we propose a heart rate-based personalized method to assess perceived exertion; our method uses fuzzy logic as an option to manage imprecision and uncertainty in the variables involved. We conducted experiments with cleaning staff and obtained results that highlight the importance of a custom method to classify the perceived exertion of people doing physical work.
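
    A minimal sketch of the fuzzy-logic idea, soft, overlapping exertion classes over a personalized heart-rate scale, is given below. The use of heart-rate reserve and the triangular membership breakpoints are illustrative assumptions, not the paper's calibrated model.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular fuzzy membership function with support [a, c] and peak b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def perceived_exertion(hr, hr_rest, hr_max):
        """Fuzzy degrees of light/moderate/hard effort from heart-rate reserve;
        breakpoints below are illustrative, not calibrated values."""
        hrr = (hr - hr_rest) / (hr_max - hr_rest)   # Karvonen heart-rate reserve
        return {"light":    tri(hrr, -0.2, 0.2, 0.5),
                "moderate": tri(hrr,  0.2, 0.5, 0.8),
                "hard":     tri(hrr,  0.5, 0.8, 1.2)}

    # A reading of 130 bpm for this (hypothetical) worker is mostly "moderate",
    # partly "hard" -- the overlap is the point of the fuzzy representation.
    print(perceived_exertion(hr=130, hr_rest=60, hr_max=190))
    ```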

  16. A distributed approach for optimizing cascaded classifier topologies in real-time stream mining systems.

    Science.gov (United States)

    Foo, Brian; van der Schaar, Mihaela

    2010-11-01

    In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task, since changing the filtering process at one classifier can impact the feature values of data arriving at classifiers further downstream, and thus both the classification performance achieved by the ensemble of classifiers and the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.

  17. 15 CFR 4.8 - Classified Information.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily the...

  18. Evaluation of LDA Ensembles Classifiers for Brain Computer Interface

    International Nuclear Information System (INIS)

    Arjona, Cristian; Pentácolo, José; Gareis, Iván; Atum, Yanina; Gentiletti, Gerardo; Acevedo, Rubén; Rufiner, Leonardo

    2011-01-01

    The Brain Computer Interface (BCI) translates brain activity into computer commands. To increase the performance of a BCI in decoding user intentions, it is necessary to improve the feature extraction and classification techniques. In this article, the performance of an ensemble of three linear discriminant analysis (LDA) classifiers is studied. A system based on an ensemble can theoretically achieve better classification results than its individual members, depending on the algorithm used to generate the individual classifiers and the procedure for combining their outputs. Classic ensemble algorithms such as bagging and boosting are discussed here. For the application to BCI, it was concluded that the results generated using ER and AUC as performance indices do not give enough information to establish which configuration is better.

  19. Supervised learning with decision margins in pools of spiking neurons.

    Science.gov (United States)

    Le Mouel, Charlotte; Harris, Kenneth D; Yger, Pierre

    2014-10-01

    Learning to categorise sensory inputs by generalising from a few examples whose category is precisely known is a crucial step for the brain to produce appropriate behavioural responses. At the neuronal level, this may be performed by adaptation of synaptic weights under the influence of a training signal, in order to group spiking patterns impinging on the neuron. Here we describe a framework that allows spiking neurons to perform such "supervised learning", using principles similar to the Support Vector Machine, a well-established and robust classifier. Using a hinge-loss error function, we show that requesting a margin similar to that of the SVM improves performance on linearly non-separable problems. Moreover, we show that using pools of neurons to discriminate categories can also increase the performance by sharing the load among neurons.
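
    The core learning rule described, adapting weights under a training signal whenever an input falls inside a requested margin, as in the SVM's hinge loss, can be sketched in a few lines. This is a rate-based caricature of the spiking model; the learning rate, regularization constant, and synthetic data are illustrative.

    ```python
    import numpy as np

    def train_hinge(X, y, margin=1.0, lr=0.01, reg=1e-3, epochs=100, seed=0):
        """SVM-like hinge-loss learning: update w only when y * (w . x) < margin,
        with weight decay playing the role of the SVM's norm penalty."""
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for i in rng.permutation(len(X)):
                if y[i] * (X[i] @ w) < margin:   # example violates the margin
                    w += lr * (y[i] * X[i] - reg * w)
                else:
                    w -= lr * reg * w            # decay only, no error signal
        return w

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1, 1, (100, 20)), rng.normal(1, 1, (100, 20))])
    y = np.array([-1] * 100 + [1] * 100)
    w = train_hinge(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))
    ```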

  20. Excision margin status: does it impact on local control of breast cancer?

    International Nuclear Information System (INIS)

    Lamoury, G.; Morgan, G.; Ward, R.

    2003-01-01

    Whilst many patients treated with breast conservation undergo re-excision(s) to obtain clear margins, the relationship between clear margins and local recurrence remains unclear. We aimed to determine the impact of final pathological margin status on local recurrence and other breast cancer outcomes. Our study cohort consisted of 755 consecutive patients treated with breast conservation between January 1984 and December 1995. Pathology reports were available for review in 681 subjects (90%). Patients were stratified into 8 groups based on final pathological margin status: 1) negative (>3 mm, n = 307); 2) and 3) close (subdivided into two groups spanning >0 to <2 mm; n = 67); 4) positive (n = 79); 5) indeterminate (n = 144); 6) low-grade DCIS at the margin (n = 3); 7) high-grade DCIS at the margin (n = 23); and 8) LCIS at the margin (n = 4). There were no differences between the groups based on histology, T size, grade, LN positivity, or total radiation dose. At a median follow-up of 71 months, the breast relapse-free survival (BRFS) was 97%, the distant metastasis-free survival (DMFS) 78%, and the overall survival (OS) 86% for the entire cohort. There were no statistically significant differences between the negative, close, and positive groups in terms of BRFS: 96% vs. 94% vs. 93% (p = 0.59), DMFS: 98% vs. 97% vs. 98% (p = 0.87), or OS: 84% vs. 85% vs. 86% (p = 0.78). Although not statistically significant, the presence of EIC, in the context of close or positive margins, impacted adversely upon local and overall disease-free survival. Patients undergoing breast conservation carry a lifelong risk of local recurrence. It is still not clear whether obtaining a radical margin decreases this risk. Tissue microarray analysis will be performed to further elucidate the causes of ipsilateral recurrence at a molecular and genetic level

  1. ECLogger: Cross-Project Catch-Block Logging Prediction Using Ensemble of Classifiers

    Directory of Open Access Journals (Sweden)

    Sangeeta Lal

    2017-01-01

    Full Text Available Background: Software developers insert log statements in the source code to record program execution information. However, optimizing the number of log statements in the source code is challenging. Machine learning based within-project logging prediction tools, proposed in previous studies, may not be suitable for new or small software projects. For such software projects, we can use cross-project logging prediction. Aim: The aim of the study presented here is to investigate cross-project logging prediction methods and techniques. Method: The proposed method is ECLogger, a novel, ensemble-based, cross-project, catch-block logging prediction model. Nine base classifiers were used and combined using ensemble techniques. The performance of ECLogger was evaluated on three open-source Java projects: Tomcat, CloudStack, and Hadoop. Results: ECLogger Bagging, ECLogger AverageVote, and ECLogger MajorityVote show a considerable improvement in the average Logged F-measure (LF) on 3, 5, and 4 source -> target project pairs, respectively, compared to the baseline classifiers. ECLogger AverageVote performs best and shows improvements of 3.12% (average LF) and 6.08% (average accuracy, ACC). Conclusion: Classifiers based on ensemble techniques, such as bagging, average vote, and majority vote, outperform the baseline classifier. Overall, the ECLogger AverageVote model performs best. The results also show that the CloudStack project is more generalizable than the other projects.

  2. A Bayesian Classifier for X-Ray Pulsars Recognition

    Directory of Open Access Journals (Sweden)

    Hao Liang

    2016-01-01

    Full Text Available Recognition of X-ray pulsars is important for the problem of spacecraft attitude determination by X-ray Pulsar Navigation (XPNAV). By using the nonhomogeneous Poisson model of the received photons and the minimum recognition error criterion, a classifier based on the Bayesian theorem is proposed. For recognition of X-ray pulsars with unknown Doppler frequency and initial phase, the features of every X-ray pulsar are extracted and the unknown parameters are estimated using the Maximum Likelihood (ML) method. In addition, a method to recognize unknown X-ray pulsars or X-ray disturbances is proposed. Simulation results confirm the validity of the proposed Bayesian classifier.
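
    A minimal sketch of the likelihood comparison at the heart of such a classifier: score each candidate pulsar's (unit-mean) pulse-profile template by the nonhomogeneous-Poisson log-likelihood of the observed photon phases and pick the maximizer. The Doppler-frequency and initial-phase estimation step described above is omitted, and the templates are toy profiles.

    ```python
    import numpy as np

    def classify_pulsar(phases, templates):
        """Pick the template (binned rate profile, mean 1) that maximizes the
        Poisson log-likelihood of the photon phases; the exposure term is then
        common to all candidates and can be dropped."""
        scores = []
        for lam in templates:
            idx = np.floor(phases * len(lam)).astype(int) % len(lam)
            scores.append(np.sum(np.log(lam[idx])))
        return int(np.argmax(scores))

    rng = np.random.default_rng(3)
    nbins = 64
    t = np.arange(nbins) / nbins
    # Two toy unit-mean profiles with peaks at different phases:
    templates = [1 + 0.8 * np.cos(2 * np.pi * (t - peak)) for peak in (0.2, 0.7)]

    # Sample photon phases from template 1 by rejection sampling:
    phases = []
    while len(phases) < 500:
        u, v = rng.random(), rng.random() * 1.8
        if v < 1 + 0.8 * np.cos(2 * np.pi * (u - 0.7)):
            phases.append(u)
    print("classified as template:", classify_pulsar(np.array(phases), templates))
    ```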

  3. COMPARISON OF SVM AND FUZZY CLASSIFIER FOR AN INDIAN SCRIPT

    Directory of Open Access Journals (Sweden)

    M. J. Baheti

    2012-01-01

    Full Text Available With the advent of the technological era, conversion of scanned documents (handwritten or printed) into machine-editable format has attracted many researchers. This paper deals with the problem of recognition of Gujarati handwritten numerals. Gujarati numeral recognition requires performing some specific steps as a part of preprocessing. For preprocessing, digitization, segmentation, normalization, and thinning are done, assuming that the image has almost no noise. Further, an affine-invariant-moments-based model is used for feature extraction, and finally Support Vector Machine (SVM) and Fuzzy classifiers are used for numeral classification. The comparison of the SVM and Fuzzy classifiers shows that SVM achieved better results than the Fuzzy classifier.

  4. Radiotherapy margin design with particular consideration of high curvature CTVs

    International Nuclear Information System (INIS)

    Herschtal, Alan; Kron, Tomas; Fox, Chris

    2009-01-01

    In applying 3D conformal radiation therapy to a tumor clinical target volume (CTV), a margin is added around the CTV to account for any sources of error in the application of treatment which may result in misalignment between the CTV and the dose distribution actually delivered. The volume enclosed within the CTV plus the margin is known as the PTV, or planning target volume. The larger the errors are anticipated to be, the wider the margin will need to be to accommodate those errors. Based on the approach of van Herk et al. ["The probability of correct target dosage: Dose-population histograms for deriving treatment margins in radiotherapy," Int. J. Radiat. Oncol. Biol. Phys. 47(4), 1121-1135 (2000)] this paper develops the mathematical theory behind the calculation of the margin width required to ensure that the entire CTV receives sufficiently high dose with sufficiently high probability. The margin recipe developed not only considers the magnitude of the errors but also includes a term to adjust for curved CTV surfaces. In doing so, the accuracy of the margin recipe is enhanced yet remains mathematically concise enough to be readily implemented in the clinical setting. The results are particularly relevant for clinical situations in which the uncertainties in treatment are large relative to the size of the CTV.
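
    For context, the van Herk et al. result referenced here leads to the widely quoted flat-surface margin recipe M = 2.5Σ + 0.7σ (minimum CTV dose of 95% for 90% of patients); the curvature correction this paper adds is not reproduced. A minimal sketch, with an illustrative error budget:

    ```python
    import numpy as np

    def van_herk_margin(sigma_sys, sigma_rand):
        """CTV-to-PTV margin M = 2.5 * Sigma + 0.7 * sigma (van Herk et al. 2000).
        Inputs are per-source SDs (mm) of systematic and random errors along one
        axis; independent sources combine in quadrature."""
        Sigma = np.sqrt(np.sum(np.square(sigma_sys)))
        sigma = np.sqrt(np.sum(np.square(sigma_rand)))
        return 2.5 * Sigma + 0.7 * sigma

    # Illustrative budget (mm): setup + delineation systematic; setup + organ random.
    print(van_herk_margin(sigma_sys=[1.5, 1.0], sigma_rand=[2.0, 1.5]))  # ~6.3 mm
    ```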

  5. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods. 10; Chapter

    Science.gov (United States)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps to analyze the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They range from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference though is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post

  6. Comparison of Different Features and Classifiers for Driver Fatigue Detection Based on a Single EEG Channel

    Directory of Open Access Journals (Sweden)

    Jianfeng Hu

    2017-01-01

    Full Text Available Driver fatigue has become an important factor in traffic accidents worldwide, and effective detection of driver fatigue has major significance for public health. The proposed method employs entropy measures for feature extraction from a single electroencephalogram (EEG) channel. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed for the analysis of the original EEG signal and compared using ten state-of-the-art classifiers. Results indicate that optimal single-channel performance is achieved using a combination of channel CP4, feature FE, and the Random Forest (RF) classifier. The highest accuracy reached 96.6%, which meets the needs of real applications. The best combination of channel + feature + classifier is subject-specific. In this work, the accuracy with FE as the feature is far greater than with the other features. The accuracy using the RF classifier is the best, while that of the SVM classifier with a linear kernel is the worst. Channel selection has a large impact on accuracy, and the performance of the various channels differs considerably.
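
    Of the four features, spectral entropy (PE) is the simplest to sketch: treat the normalized power spectrum of the channel as a probability distribution and take its Shannon entropy. The sampling rate, window length, and the two toy signals below are illustrative assumptions.

    ```python
    import numpy as np

    def spectral_entropy(x):
        """Normalized spectral entropy of one EEG epoch: Shannon entropy of the
        power spectral density treated as a probability distribution."""
        psd = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
        p = psd / psd.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p)) / np.log2(len(psd))

    rng = np.random.default_rng(4)
    t = np.arange(0, 4, 1 / 128.0)                     # 4 s epoch at 128 Hz (assumed)
    alert = rng.normal(0, 1, t.size)                   # broadband-like activity
    drowsy = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(0, 1, t.size)  # alpha-heavy
    print(spectral_entropy(alert), spectral_entropy(drowsy))  # drowsy scores lower
    ```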

  7. Fingerprint prediction using classifier ensembles

    CSIR Research Space (South Africa)

    Molale, P

    2011-11-01

    Full Text Available logistic discrimination (LgD), k-nearest neighbour (k-NN), artificial neural network (ANN), association rules (AR), decision tree (DT), naive Bayes classifier (NBC), and the support vector machine (SVM). The performance of several multiple classifier systems...

  8. New examples of marginally trapped surfaces and tubes in warped spacetimes

    International Nuclear Information System (INIS)

    Flores, J L; Haesen, S; Ortega, M

    2010-01-01

    In this paper we provide new examples of marginally trapped surfaces and tubes in FLRW spacetimes by using a basic relation between these objects and CMC surfaces in 3-manifolds. We also provide a new method to construct marginally trapped surfaces in closed FLRW spacetimes, which is based on the classical Hopf map. The utility of this method is illustrated by providing marginally trapped surfaces crossing the expanding and collapsing regions of a closed FLRW spacetime. The approach introduced in this paper is also extended to twisted spaces.

  9. An Active Learning Classifier for Further Reducing Diabetic Retinopathy Screening System Cost

    Directory of Open Access Journals (Sweden)

    Yinan Zhang

    2016-01-01

    Full Text Available The diabetic retinopathy (DR) screening system raises a financial problem. To further reduce DR screening cost, an active learning classifier is proposed in this paper. Our approach identifies retinal images based on features extracted by anatomical part recognition and lesion detection algorithms. The kernel extreme learning machine (KELM) is a rapid classifier for solving classification problems in high-dimensional space. Both active learning and the ensemble technique elevate the performance of KELM when using a small training dataset. The committee proposes only the necessary manual work to the doctor, saving cost. On the publicly available Messidor database, our classifier is trained with 20%–35% of the labeled retinal images, while comparative classifiers are trained with 80% of the labeled retinal images. Results show that our classifier can achieve better classification accuracy than Classification and Regression Tree, radial basis function SVM, Multilayer Perceptron SVM, Linear SVM, and K Nearest Neighbor. Empirical experiments suggest that our active learning classifier is efficient for further reducing DR screening cost.
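
    For orientation, a minimal extreme learning machine is sketched below: a fixed random hidden layer followed by ridge-regression output weights, which is the structure KELM kernelizes. Hidden-layer size, regularization, and the synthetic data are illustrative; the paper's active-learning committee is not reproduced.

    ```python
    import numpy as np

    class ELM:
        """Minimal extreme learning machine: random hidden layer, closed-form
        ridge solution for the output weights (one-vs-all targets)."""
        def __init__(self, n_hidden=200, C=1.0, seed=0):
            self.n_hidden, self.C, self.seed = n_hidden, C, seed

        def fit(self, X, y):
            rng = np.random.default_rng(self.seed)
            self.W = rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)                       # hidden activations
            self.classes_ = np.unique(y)
            Y = np.where(np.equal.outer(y, self.classes_), 1.0, -1.0)
            self.beta = np.linalg.solve(H.T @ H + np.eye(self.n_hidden) / self.C,
                                        H.T @ Y)                   # ridge solution
            return self

        def predict(self, X):
            H = np.tanh(X @ self.W + self.b)
            return self.classes_[np.argmax(H @ self.beta, axis=1)]

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(-1, 1, (100, 10)), rng.normal(1, 1, (100, 10))])
    y = np.array([0] * 100 + [1] * 100)
    print("training accuracy:", np.mean(ELM().fit(X, y).predict(X) == y))
    ```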

  10. Margin improvement initiatives: realistic approaches

    Energy Technology Data Exchange (ETDEWEB)

    Chan, P.K.; Paquette, S. [Royal Military College of Canada, Chemistry and Chemical Engineering Dept., Kingston, ON (Canada); Cunning, T.A. [Department of National Defence, Ottawa, ON (Canada); French, C.; Bonin, H.W. [Royal Military College of Canada, Chemistry and Chemical Engineering Dept., Kingston, ON (Canada); Pandey, M. [Univ. of Waterloo, Waterloo, ON (Canada); Murchie, M. [Cameco Fuel Manufacturing, Port Hope, ON (Canada)

    2014-07-01

    With reactor core aging, safety margins are particularly tight. Two realistic and practical approaches are proposed here to recover margins. The first project is related to the use of a small amount of neutron absorbers in CANDU Natural Uranium (NU) fuel bundles. Preliminary results indicate that the fuelling transient and subsequent reactivity peak can be lowered to improve the reactor's operating margins, with minimal impact on burnup when less than 1000 mg of absorbers is added to a fuel bundle. The second project involves the statistical analysis of fuel manufacturing data to demonstrate safety margins. Probability distributions are fitted to actual fuel manufacturing datasets provided by Cameco Fuel Manufacturing, Inc. They are used to generate input for ELESTRES and ELOCA. It is found that the fuel response distributions are far below industrial failure limits, implying that margin exists in the current fuel design. (author)

  11. A Semiparametric Marginalized Model for Longitudinal Data with Informative Dropout

    Directory of Open Access Journals (Sweden)

    Mengling Liu

    2012-01-01

    Full Text Available We propose a marginalized joint-modeling approach for marginal inference on the association between longitudinal responses and covariates when longitudinal measurements are subject to informative dropouts. The proposed model is motivated by the idea of linking longitudinal responses and dropout times by latent variables while focusing on marginal inferences. We develop a simple inference procedure based on a series of estimating equations, and the resulting estimators are consistent and asymptotically normal with a sandwich-type covariance matrix ready to be estimated by the usual plug-in rule. The performance of our approach is evaluated through simulations and illustrated with a renal disease data application.

  12. Constraints Imposed by Rift Inheritance on the Compressional Reactivation of a Hyperextended Margin: Mapping Rift Domains in the North Iberian Margin and in the Cantabrian Mountains

    Science.gov (United States)

    Cadenas, P.; Fernández-Viejo, G.; Pulgar, J. A.; Tugend, J.; Manatschal, G.; Minshull, T. A.

    2018-03-01

    The Alpine Pyrenean-Cantabrian orogen developed along the plate boundary between Iberia and Europe, involving the inversion of Mesozoic hyperextended basins along the southern Biscay margin. Thus, this margin represents a natural laboratory to analyze the control of structural rift inheritance on the compressional reactivation of a continental margin. With the aim to identify former rift domains and investigate their role during the subsequent compression, we performed a structural analysis of the central and western North Iberian margin, based on the interpretation of seismic reflection profiles and local constraints from drill-hole data. Seismic interpretations and published seismic velocity models enabled the development of crustal thickness maps that helped to constrain further the offshore and onshore segmentation. Based on all these constraints, we present a rift domain map across the central and western North Iberian margin, as far as the adjacent western Cantabrian Mountains. Furthermore, we provide a first-order description of the margin segmentation resulting from its polyphase tectonic evolution. The most striking result is the presence of a hyperthinned domain (e.g., Asturian Basin) along the central continental platform that is bounded to the north by the Le Danois High, interpreted as a rift-related continental block separating two distinctive hyperextended domains. From the analysis of the rift domain map and the distribution of reactivation structures, we conclude that the landward limit of the necking domain and the hyperextended domains, respectively, guide and localize the compressional overprint. The Le Danois block acted as a local buttress, conditioning the inversion of the Asturian Basin.

  13. Real analytic solutions for marginal deformations in open superstring field theory

    International Nuclear Information System (INIS)

    Okawa, Yuji

    2007-01-01

    We construct analytic solutions for marginal deformations satisfying the reality condition in open superstring field theory formulated by Berkovits when operator products made of the marginal operator and the associated superconformal primary field are regular. Our strategy is based on the recent observation by Erler that the problem of finding solutions for marginal deformations in open superstring field theory can be reduced to a problem in the bosonic theory of finding a finite gauge parameter for a certain pure-gauge configuration labeled by the parameter of the marginal deformation. We find a gauge transformation generated by a real gauge parameter which infinitesimally changes the deformation parameter and construct a finite gauge parameter by its path-ordered exponential. The resulting solution satisfies the reality condition by construction

  14. Real analytic solutions for marginal deformations in open superstring field theory

    International Nuclear Information System (INIS)

    Okawa, Y.

    2007-04-01

    We construct analytic solutions for marginal deformations satisfying the reality condition in open superstring field theory formulated by Berkovits when operator products made of the marginal operator and the associated superconformal primary field are regular. Our strategy is based on the recent observation by Erler that the problem of finding solutions for marginal deformations in open superstring field theory can be reduced to a problem in the bosonic theory of finding a finite gauge parameter for a certain pure-gauge configuration labeled by the parameter of the marginal deformation. We find a gauge transformation generated by a real gauge parameter which infinitesimally changes the deformation parameter and construct a finite gauge parameter by its path-ordered exponential. The resulting solution satisfies the reality condition by construction. (orig.)

  15. The marginal costs of greenhouse gas emissions

    International Nuclear Information System (INIS)

    Tol, R.S.J.

    1999-01-01

    Estimates of the marginal costs of greenhouse gas emissions are an important input to the decision of how much society would want to spend on greenhouse gas emission reduction. Marginal cost estimates in the literature range between $5 and $25 per ton of carbon. Using similar assumptions, the FUND model finds marginal costs of $9–23/tC, depending on the discount rate. If the aggregation of impacts over countries accounts for inequalities in income distribution or for risk aversion, marginal costs would rise by about a factor of 3. Marginal costs per region are an order of magnitude smaller than global marginal costs. The ratios between the marginal costs of CO2 and those of CH4 and N2O are roughly equal to the global warming potentials of these gases. The uncertainty about the marginal costs is large and right-skewed. The expected value of the marginal costs lies about 35% above the best guess, the 95th percentile about 250% above it.

  16. Gravity Matching Aided Inertial Navigation Technique Based on Marginal Robust Unscented Kalman Filter

    Directory of Open Access Journals (Sweden)

    Ming Liu

    2015-01-01

    Full Text Available This paper is concerned with gravity matching aided inertial navigation technology using a Kalman filter. The dynamic state-space model for the Kalman filter is constructed as follows: the error equation of the inertial navigation system is employed as the process equation, while the local gravity model based on 9-point surface interpolation is employed as the observation equation. The unscented Kalman filter is employed to address the nonlinearity of the observation equation. The filter is refined in two ways. First, the marginalization technique is employed to exploit the conditionally linear substructure and reduce the computational load; specifically, the number of needed sigma points is reduced from 15 to 5 with this technique. Second, a robust technique based on a Chi-square test is employed to make the filter insensitive to uncertainties in the observation model constructed above. Numerical simulation is carried out, and the efficacy of the proposed method is validated by the simulation results.
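
    The robustification described, a Chi-square test that desensitizes the filter to observation-model uncertainty, is commonly realized as a gate on the normalized innovation squared. A generic sketch under that assumption (not necessarily the authors' exact test statistic):

    ```python
    import numpy as np
    from scipy.stats import chi2

    def innovation_gate(innovation, S, alpha=0.01):
        """Chi-square consistency test on a Kalman-filter innovation: reject the
        measurement if the normalized innovation squared exceeds the gate."""
        nis = float(innovation @ np.linalg.solve(S, innovation))
        gate = chi2.ppf(1.0 - alpha, df=len(innovation))
        return nis <= gate, nis, gate

    S = np.diag([4.0, 4.0])                            # innovation covariance (toy)
    print(innovation_gate(np.array([1.0, -2.0]), S))   # consistent -> accepted
    print(innovation_gate(np.array([9.0,  8.0]), S))   # outlier    -> rejected
    ```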

  17. Colorado Basin Structure and Rifting, Argentine passive margin

    Science.gov (United States)

    Autin, Julia; Scheck-Wenderoth, Magdalena; Loegering, Markus; Anka, Zahie; Vallejo, Eduardo; Rodriguez, Jorge; Marchal, Denis; Reichert, Christian; di Primio, Rolando

    2010-05-01

    partly supports this hypothesis and shows two main directions of faulting: margin-parallel faults (~N30°) and rift-parallel faults (~N125°). A specific distribution of the two fault sets is observed: margin-parallel faults are restrained to the most distal part of the margin. Starting with a 3D structural model of the basin fill based on seismic and well data, the deeper structure of the crust beneath the Colorado Basin can be evaluated using isostatic and thermal modelling. Franke, D., et al. (2002), Deep Crustal Structure Of The Argentine Continental Margin From Seismic Wide-Angle And Multichannel Reflection Seismic Data, paper presented at AAPG Hedberg Conference "Hydrocarbon Habitat of Volcanic Rifted Passive Margins", Stavanger, Norway. Franke, D., et al. (2006), Crustal structure across the Colorado Basin, offshore Argentina, Geophysical Journal International 165, 850-864. Gladczenko, T. P., et al. (1997), South Atlantic volcanic margins, Journal of the Geological Society, London 154, 465-470. Hinz, K., et al. (1999), The Argentine continental margin north of 48°S: sedimentary successions, volcanic activity during breakup, Marine and Petroleum Geology 16, 1-25. Hirsch, K. K., et al. (2009), Tectonic subsidence history and thermal evolution of the Orange Basin, Marine and Petroleum Geology, in press, doi:10.1016/j.marpetgeo.2009.1006.1009

  18. Human Activity Recognition by Combining a Small Number of Classifiers.

    Science.gov (United States)

    Nazabal, Alfredo; Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Ghahramani, Zoubin

    2016-09-01

    We consider the problem of daily human activity recognition (HAR) using multiple wireless inertial sensors, and specifically, HAR systems with a very low number of sensors, each one providing an estimation of the performed activities. We propose new Bayesian models to combine the output of the sensors. The models are based on a soft outputs combination of individual classifiers to deal with the small number of sensors. We also incorporate the dynamic nature of human activities as a first-order homogeneous Markov chain. We develop both inductive and transductive inference methods for each model to be employed in supervised and semisupervised situations, respectively. Using different real HAR databases, we compare our classifiers combination models against a single classifier that employs all the signals from the sensors. Our models consistently exhibit a reduction of the error rate and an increase of robustness against sensor failures. Our models also outperform other classifiers combination models that do not consider soft outputs and a Markovian structure of the human activities.
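
    A minimal sketch of combining per-step classifier soft outputs with a first-order Markov prior over activities: run Viterbi decoding over the log-probabilities. The transition matrix and soft outputs below are toy values, and the authors' full Bayesian combination model is richer than this.

    ```python
    import numpy as np

    def viterbi_smooth(log_probs, log_A, log_pi):
        """Most likely activity sequence given per-step classifier soft outputs
        (log p(activity | sensors)) and a first-order Markov transition prior."""
        T, K = log_probs.shape
        delta = log_pi + log_probs[0]
        back = np.zeros((T, K), dtype=int)
        for t in range(1, T):
            trans = delta[:, None] + log_A          # prev-state x next-state scores
            back[t] = np.argmax(trans, axis=0)
            delta = trans[back[t], np.arange(K)] + log_probs[t]
        path = [int(np.argmax(delta))]
        for t in range(T - 1, 0, -1):               # backtrack
            path.append(back[t][path[-1]])
        return path[::-1]

    # Toy example: 2 activities, sticky transitions, noisy soft outputs.
    log_A = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
    log_pi = np.log(np.array([0.5, 0.5]))
    soft = np.log(np.array([[0.8, 0.2], [0.6, 0.4], [0.4, 0.6], [0.7, 0.3]]))
    print(viterbi_smooth(soft, log_A, log_pi))      # smoothed activity sequence
    ```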

  19. Influence of Different Implant Geometry in Clinical Longevity and Maintenance of Marginal Bone: A Systematic Review.

    Science.gov (United States)

    Lovatto, Sabrina Telles; Bassani, Rafaela; Sarkis-Onofre, Rafael; Dos Santos, Mateus Bertolini Fernandes

    2018-03-26

    To assess, through a systematic review, the influence of different implant geometries on clinical longevity and maintenance of marginal bone tissue. An electronic search was conducted in MEDLINE, Scopus, and Web of Science databases, limited to studies written in English from 1996 to 2017 using specific search strategies. Only randomized controlled trials (RCTs) that compared dental implants and their geometries were included. Two reviewers independently selected studies, extracted data, and assessed the risk of bias of included studies. From the 4006 references identified by the search, 24 were considered eligible for full-text analysis, after which 10 studies were included in this review. A similar behavior of marginal bone loss between tapered and cylindrical geometries was observed; however, implants that had micro-threads in the neck presented a slight decrease of marginal bone loss compared to implants with straight or smooth neck. Success and survival rates were high, with cylindrical implants presenting higher success and survival rates than tapered ones. Implant geometry seems to have little influence on marginal bone loss (MBL) and survival and success rates after 1 year of implant placement; however, the evidence in this systematic review was classified as very low due to limitations such as study design, sample size, and publication bias. Thus, more well-designed RCTs should be conducted to provide evidence regarding the influence of implant geometry on MBL and survival and success rates after 1 year of implant placement. © 2018 by the American College of Prosthodontists.

  20. Modification of prostate implants based on postimplant treatment margin assessment.

    Science.gov (United States)

    Mueller, Amy; Wallner, Kent; Merrick, Gregory; Courveau, Jacques; Sutlief, Steven; Butler, Wayne; Gong, Lixin; Cho, Paul

    2002-12-01

    To quantify the extent of additional source placement needed to perfect an implant after execution by standard techniques, assuming that uniform 5 mm treatment margins (TMs) are the criterion for perfection. Ten consecutive, unselected patients treated with I-125 brachytherapy were studied. Source placement is planned just inside or outside of the prostatic margin, to achieve a minimum 5 mm TM and a central dose of 150%-200% of the prescription dose. The preimplant prostate volumes ranged from 24 to 85 cc (median: 35 cc). The number of sources implanted ranged from 48 to 102 (median: 63). Axial CT images were acquired within 2 h postoperatively for postimplant dosimetry. After completion of standard dosimetric calculations, the TMs were measured and tabulated at 45° intervals around the prostate periphery at the 0.0, 1.0, 2.0, and 3.0 cm planes. Sources were then added to the periphery to bring the TMs to a minimum of 5 mm at each measured TM, resulting in a modified implant. All margin modifications were done manually, without the aid of automated software. Patients' original (unmodified) D90s ranged from 111% to 154%, with a median of 116%. The original V100s ranged from 94% to 99%, with a median of 96%. No patient required placement of additional sources to meet a minimum D90 of 90% or a minimum V100 of 80%. In contrast, patients required from 7 to 17 additional sources (median: 11) to achieve minimum 5 mm TMs around the entire prostatic periphery. Additional sources equaled from 12% to 24% of the initial number of sources placed (median: 17%). By adding sufficient peripheral sources to bring the TMs to a minimum 5 mm, patients' average V100 increased from 96% to 100%, and the average D90 increased from 124% to 160% of prescription dose. In the course of achieving a minimum 5 mm TM, the average treatment margin for all patients combined increased from 5.5 to 9.9 mm. The number of sources needed to bring the TMs to a minimum 5 mm was loosely correlated with the

  1. Modification of prostate implants based on postimplant treatment margin assessment

    International Nuclear Information System (INIS)

    Mueller, Amy; Wallner, Kent; Merrick, Gregory; Couriveau, Jacques; Sutlief, Steven; Butler, Wayne; Gong, Lixin; Cho, Paul

    2002-01-01

    Purpose: To quantify the extent of additional source placement needed to perfect an implant after execution by standard techniques, assuming that uniform 5 mm treatment margins (TMs) are the criterion for perfection. Materials and Methods: Ten consecutive, unselected patients treated with I-125 brachytherapy were studied. Source placement is planned just inside or outside of the prostatic margin, to achieve a minimum 5 mm TM and a central dose of 150%-200% of the prescription dose. The preimplant prostate volumes ranged from 24 to 85 cc (median: 35 cc). The number of sources implanted ranged from 48 to 102 (median: 63). Axial CT images were acquired within 2 h postoperatively for postimplant dosimetry. After completion of standard dosimetric calculations, the TMs were measured and tabulated at 45 deg. intervals around the prostate periphery at 0.0, 1.0, 2.0, and 3.0 cm planes. Sources were then added to the periphery to bring the TMs to a minimum of 5 mm at each measured TM, resulting in a modified implant. All margin modifications were done manually, without the aid of automated software. Results: Patients' original (unmodified) D90s ranged from 111% to 154%, with a median of 116%. The original V100s ranged from 94% to 99%, with a median of 96%. No patient required placement of additional sources to meet a minimum D90 of 90% or a minimum V100 of 80%. In contrast, patients required from 7 to 17 additional sources (median: 11) to achieve minimum 5 mm TMs around the entire prostatic periphery. Additional sources equaled from 12% to 24% of the initial number of sources placed (median: 17%). By adding sufficient peripheral sources to bring the TMs to a minimum 5 mm, patients' average V100 increased from 96% to 100%, and the average D90 increased from 124% to 160% of prescription dose. In the course of achieving a minimum 5 mm TM, the average treatment margin for all patients combined increased from 5.5 to 9.9 mm. The number of sources needed to bring the TMs to a minimum

  2. Refining margins: recent trends

    International Nuclear Information System (INIS)

    Baudoin, C.; Favennec, J.P.

    1999-01-01

    Despite a business environment that was globally mediocre, due primarily to the Asian crisis and to a mild winter in the northern hemisphere, the signs of improvement noted in the refining activity in 1996 were borne out in 1997. But the situation is not yet satisfactory in this sector: the low return on invested capital and the financing of environmental protection expenditure are giving cause for concern. In 1998, the drop in crude oil prices and the concomitant fall in petroleum product prices were ultimately rather favorable to margins. Two elements tended to put a damper on this relative optimism. First, margins continue to be extremely volatile; second, the worsening of the economic and financial crisis observed during the summer made for a sharp decline in margins in all geographic regions, especially Asia. Since the beginning of 1999, refining margins have been weak and utilization rates of refining capacities have decreased. (authors)

  3. Short text sentiment classification based on feature extension and ensemble classifier

    Science.gov (United States)

    Liu, Yang; Zhu, Xie

    2018-05-01

    With the rapid development of Internet social media, mining the emotional tendencies of short texts from the Internet to acquire useful information has attracted the attention of researchers. At present, the commonly used approaches can be grouped into rule-based classification and statistical machine learning methods. Although micro-blog sentiment analysis has made good progress, shortcomings remain: accuracy is not high enough, and the classification effect depends strongly on the training data. Aiming at the characteristics of Chinese short texts, such as limited information, sparse features, and diverse expressions, this paper expands the original text by mining related semantic information from comments, forwarding, and other related information. First, Word2vec is used to compute word similarity and extend the feature words; an ensemble classifier composed of SVM, KNN and HMM then analyzes the sentiment of the micro-blog short texts. The experimental results show that the proposed method makes good use of the comment and forwarding information to extend the original features. Compared with the traditional method, the accuracy, recall and F1 value obtained by this method are all improved.
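
    A minimal sketch of the voting-ensemble idea (Python/scikit-learn); the TF-IDF features, the toy texts, and the soft-voting SVM+KNN pair are illustrative assumptions, and the paper's Word2vec feature extension and HMM member are not reproduced here:

      # Hypothetical sketch: voting ensemble for short-text sentiment.
      from sklearn.ensemble import VotingClassifier
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC

      texts = ["great phone, love it", "terrible service", "pretty good overall", "awful, do not buy"]
      labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

      vec = TfidfVectorizer()
      X = vec.fit_transform(texts)

      ensemble = VotingClassifier(
          estimators=[("svm", SVC(probability=True)), ("knn", KNeighborsClassifier(n_neighbors=3))],
          voting="soft",  # average the members' predicted class probabilities
      )
      ensemble.fit(X, labels)
      print(ensemble.predict(vec.transform(["good value"])))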

  4. SU-F-J-102: Lower Esophagus Margin Implications Based On Rapid Computational Algorithm for SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Cardenas, M; Mazur, T; Li, H; Mutic, S; Bradley, J; Tsien, C; Green, O [Washington University School of Medicine, Saint Louis, MO (United States)

    2016-06-15

    Purpose: To quantify inter-fraction esophagus variation. Methods: Computed tomography and daily on-treatment 0.3-T MRI data sets for 7 patients were analyzed using a novel Matlab-based (Mathworks, Natick, MA) rapid computational method. Rigid registration was performed from the cricoid to the gastro-esophageal junction. CT and MR-based contours were compared at slice intervals of 3 mm. Variation was quantified by “expansion,” defined as additional length in any radial direction from CT contour to MR contour. Expansion computations were performed with 360° of freedom in each axial slice. We partitioned expansions into left anterior, right anterior, right posterior, and left posterior quadrants (LA, RA, RP, and LP, respectively). Sample means were compared by analysis of variance (ANOVA) and Fisher’s Protected Least Significant Difference test. Results: Fifteen fractions and 1121 axial slices from 7 patients undergoing SBRT for primary lung cancer (3) and metastatic lung disease (4) were analyzed, generating 41,970 measurements. Mean LA, RA, RP, and LP expansions were 4.30±0.05 mm, 3.71±0.05 mm, 3.17±0.07 mm, and 3.98±0.06 mm, respectively. 50.13% of all axial slices showed variation > 5 mm in one or more directions. Variation was greatest in the lower esophagus, with mean LA, RA, RP, and LP expansions of 5.98±0.09 mm, 4.59±0.09 mm, 4.04±0.16 mm, and 5.41±0.16 mm, respectively. The difference was significant compared to the mid and upper esophagus (p < 0.0001). The 95th percentiles of expansion for LA, RA, RP, LP were 13.36 mm, 9.97 mm, 11.29 mm, and 12.19 mm, respectively. Conclusion: Analysis of on-treatment MR imaging of the lower esophagus during thoracic SBRT suggests margin expansions of 13.36 mm LA, 9.97 mm RA, 11.29 mm RP, 12.19 mm LP would account for 95% of measurements. Our novel algorithm for rapid assessment of margin expansion for critical structures with 360° of freedom in each axial slice enables continuously adaptive patient-specific margins which may
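
    A small sketch of how such per-quadrant expansion measurements can be reduced to the mean ± SE and 95th-percentile summaries reported above; the synthetic samples below merely stand in for the study's 41,970 contour measurements:

      # Hypothetical sketch: summarizing radial "expansion" measurements per quadrant.
      import numpy as np

      rng = np.random.default_rng(0)
      expansions_mm = {"LA": rng.gamma(2.0, 2.15, 10000),   # synthetic samples
                       "RA": rng.gamma(2.0, 1.85, 10000)}

      for quadrant, x in expansions_mm.items():
          mean = x.mean()
          sem = x.std(ddof=1) / np.sqrt(x.size)   # standard error of the mean
          p95 = np.percentile(x, 95)              # margin covering 95% of measurements
          print(f"{quadrant}: {mean:.2f}±{sem:.2f} mm, 95th percentile {p95:.2f} mm")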

  5. Reliability-based approaches for safety margin assessment in the French nuclear industry

    International Nuclear Information System (INIS)

    Ardillon, E.; Barthelet, B.; Meister, E.; Cambefort, P.; Hornet, P.; Le Delliou, P.

    2003-01-01

    The prevention of the fast fracture damage of the mechanical equipment important for the safety of nuclear islands of the French PWR relies on deterministic rules. These rules include flaw acceptance criteria involving safety factors applied to characteristic values (implicit margins) of the physical variables. The sets of safety factors that are currently under application in the industrial analyses with the agreement of the Safety Authority, are distributed across the two main physical parameters and have partly been based on a semi-probabilistic approach. After presenting the generic probabilistic pro-codification approach this paper shows its application to the evaluation of the performances of the existing regulatory flaw acceptance criteria. This application can be carried out in a realistic manner or in a more simplified one. These two approaches are applied to representative mechanical components. Their results are consistent. (author)

  6. Systems considerations in seismic margin evaluations

    International Nuclear Information System (INIS)

    Buttermer, D.R.

    1987-01-01

    Increasing knowledge in the geoscience field has led to the understanding that, although highly unlikely, it is possible for a nuclear power plant to be subjected to earthquake ground motion greater than that for which the plant was designed. While it is recognized that there are conservatisms inherent in current design practices, interest has developed in evaluating the seismic risk of operating plants. Several plant-specific seismic probabilistic risk assessments (SPRA) have been completed to address questions related to the seismic risk of a plant. The results from such SPRAs are quite informative, but such studies may entail a considerable amount of expensive analysis of large portions of the plant. As an alternative to an SPRA, it may be more practical to select an earthquake level above the design basis for which plant survivability is to be demonstrated. The principal question to be addressed in a seismic margin evaluation is: At what ground motion levels does one have a high confidence that the probability of seismically induced core damage is sufficiently low? In a seismic margin evaluation, an earthquake level is selected (based on site-specific geoscience considerations) for which a stable, long-term safe shutdown condition is to be demonstrated. This prespecified earthquake level is commonly referred to as the seismic margin earthquake (SME). The Electric Power Research Institute is currently supporting a research project to develop procedures for use by the utilities to allow them to perform nuclear plant seismic margin evaluations. This paper describes the systems-related aspects of these procedures.

  7. Medium-term marginal costs in competitive generation power markets

    International Nuclear Information System (INIS)

    Reneses, J.; Centeno, E.; Barquin, J.

    2004-01-01

    The meaning and significance of medium-term marginal costs for a generation company in a competitive power market are analysed. A methodology to compute and decompose medium-term generation marginal costs in a competitive environment is proposed. The methodology is based on a market equilibrium model. The aim is to provide a useful tool for generation companies so that they can manage their resources in an optimal way, helping them with their operation, decision-making processes, asset valuations or contract assessments. (author)

  8. Margin Requirements and Equity Option Returns

    DEFF Research Database (Denmark)

    Hitzemann, Steffen; Hofmann, Michael; Uhrig-Homburg, Marliese

    In equity option markets, traders face margin requirements both for the options themselves and for hedging-related positions in the underlying stock market. We show that these requirements carry a significant margin premium in the cross-section of equity option returns. The sign of the margin premium depends on demand pressure: If end-users are on the long side of the market, option returns decrease with margins, while they increase otherwise. Our results are statistically and economically significant and robust to different margin specifications and various control variables. We explain our findings by a model of funding-constrained derivatives dealers that require compensation for satisfying end-users’ option demand.

  10. Classifying Sluice Occurrences in Dialogue

    DEFF Research Database (Denmark)

    Baird, Austin; Hamza, Anissa; Hardt, Daniel

    2018-01-01

    perform manual annotation with acceptable inter-coder agreement. We build classifier models with Decision Trees and Naive Bayes, with accuracy of 67%. We deploy a classifier to automatically classify sluice occurrences in OpenSubtitles, resulting in a corpus with 1.7 million occurrences. This will support.... Despite this, the corpus can be of great use in research on sluicing and development of systems, and we are making the corpus freely available on request. Furthermore, we are in the process of improving the accuracy of sluice identification and annotation for the purpose of creating a subsequent version...

  11. Nonlinear dynamics near the stability margin in rotating pipe flow

    Science.gov (United States)

    Yang, Z.; Leibovich, S.

    1991-01-01

    The nonlinear evolution of marginally unstable wave packets in rotating pipe flow is studied. These flows depend on two control parameters, which may be taken to be the axial Reynolds number R and a Rossby number, q. Marginal stability is realized on a curve in the (R, q)-plane, and the entire marginal stability boundary is explored. As the flow passes through any point on the marginal stability curve, it undergoes a supercritical Hopf bifurcation and the steady base flow is replaced by a traveling wave. The envelope of the wave system is governed by a complex Ginzburg-Landau equation. The Ginzburg-Landau equation admits Stokes waves, which correspond to standing modulations of the linear traveling wavetrain, as well as traveling wave modulations of the linear wavetrain. Bands of wavenumbers are identified in which the nonlinear modulated waves are subject to a sideband instability.
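
    For reference, the envelope equation referred to above has the generic complex Ginzburg-Landau form written below (in a rescaled frame moving with the group velocity; the real coefficients α and β depend on R and q). This standard form is an assumption taken from general weakly nonlinear theory, not quoted from the paper:

      % Generic complex Ginzburg--Landau envelope equation (rescaled frame);
      % the real coefficients alpha and beta depend on (R, q).
      \[
        \frac{\partial A}{\partial t}
          = \mu A
          + (1 + i\alpha)\,\frac{\partial^{2} A}{\partial x^{2}}
          - (1 + i\beta)\,\lvert A \rvert^{2} A
      \]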

  12. Nonparametric, Coupled, Bayesian Dictionary and Classifier Learning for Hyperspectral Classification.

    Science.gov (United States)

    Akhtar, Naveed; Mian, Ajmal

    2017-10-03

    We present a principled approach to learn a discriminative dictionary along with a linear classifier for hyperspectral classification. Our approach places Gaussian Process priors over the dictionary to account for the relative smoothness of the natural spectra, whereas the classifier parameters are sampled from multivariate Gaussians. We employ two Beta-Bernoulli processes to jointly infer the dictionary and the classifier. These processes are coupled under the same sets of Bernoulli distributions. In our approach, these distributions signify the frequency of the dictionary atom usage in representing class-specific training spectra, which also makes the dictionary discriminative. Due to the coupling between the dictionary and the classifier, the popularity of the atoms for representing different classes gets encoded into the classifier. This helps in predicting the class labels of test spectra that are first represented over the dictionary by solving a simultaneous sparse optimization problem. The labels of the spectra are predicted by feeding the resulting representations to the classifier. Our approach exploits the nonparametric Bayesian framework to automatically infer the dictionary size, the key parameter in discriminative dictionary learning. Moreover, it also has the desirable property of adaptively learning the association between the dictionary atoms and the class labels by itself. We use Gibbs sampling to infer the posterior probability distributions over the dictionary and the classifier under the proposed model, for which we derive analytical expressions. To establish the effectiveness of our approach, we test it on benchmark hyperspectral images. The classification performance is compared with the state-of-the-art dictionary learning-based classification methods.

  13. Scoring and Classifying Examinees Using Measurement Decision Theory

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner

    2009-04-01

    This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the classification accuracy of tests scored using decision theory; (2) the effectiveness of different sequential testing procedures; and (3) the number of items needed to make a classification. A large percentage of examinees can be classified accurately with very few items using decision theory. A Java Applet for self instruction and software for generating, calibrating and scoring MDT data are provided.
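
    A minimal sketch of the scoring step (Bayes' rule over mastery states); the probabilities and the response pattern below are illustrative, not taken from the paper:

      # Hypothetical sketch of measurement decision theory scoring via Bayes' rule.
      import numpy as np

      # p_correct[k, i] = P(item i answered correctly | examinee in category k)
      p_correct = np.array([[0.8, 0.7, 0.9],    # master
                            [0.4, 0.3, 0.5]])   # non-master
      prior = np.array([0.5, 0.5])
      responses = np.array([1, 0, 1])           # observed right/wrong pattern

      likelihood = np.prod(np.where(responses == 1, p_correct, 1 - p_correct), axis=1)
      posterior = prior * likelihood
      posterior /= posterior.sum()
      print(posterior)  # classify into the category with the highest posterior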

  14. An IGRT margin concept for pelvic lymph nodes in high-risk prostate cancer

    International Nuclear Information System (INIS)

    Groher, M.; Kopp, P.; Deutschmann, H.; Sedlmayer, F.; Wolf, Frank; Drerup, M.

    2017-01-01

    Gold-marker-based image-guided radiation therapy (IGRT) of the prostate makes it possible to correct for inter- and intrafraction motion and therefore to safely reduce margins for the prostate planning target volume (PTV). However, pelvic PTVs, when coadministered in a single plan (registered to gold markers [GM]), require reassessment of the margin concept, since prostate movement is independent of the pelvic bony anatomy to which the lymphatics are usually referenced. We have therefore revisited prostate translational movement relative to the bony anatomy to obtain adequate margins for the pelvic PTVs, compensating the mismatch that results from referencing pelvic target volumes to GMs in the prostate. Prostate movement was analyzed in a set of 28 patients (25 fractions each, totaling 684 fractions) and the required margins calculated for the pelvic PTVs according to Van Herk's margin formula M = 2.5Σ + 1.64(σ′ − σ_p). The overall mean prostate movement relative to bony anatomy was 0.9 ± 3.1, 0.6 ± 3.4, and 0.0 ± 0.7 mm in the anterior/posterior (A/P), inferior/superior (I/S), and left/right (L/R) directions, respectively. Calculated margins to compensate for the resulting mismatch to bony anatomy were 9/9/2 mm in the A/P, I/S, and L/R directions, and 10/11/6 mm if an additional residual error of 2 mm was assumed. GM-based IGRT for pelvic PTVs is feasible if margins are adapted accordingly. Margins could be reduced further if systematic errors introduced during the planning CT were eliminated. (orig.)
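
    A small sketch of the margin recipe quoted above, where Σ is the standard deviation of systematic errors, σ′ the total random error, and σ_p the penumbra width; all numeric values are illustrative, and combining the 2 mm residual error in quadrature is one plausible reading of the abstract:

      # Van Herk-style PTV margin: M = 2.5*Sigma + 1.64*(sigma_prime - sigma_p).
      import math

      def van_herk_margin(Sigma, sigma_prime, sigma_p):
          """Margin (mm) from systematic (Sigma) and random (sigma_prime) errors."""
          return 2.5 * Sigma + 1.64 * (sigma_prime - sigma_p)

      Sigma_ap = 3.1                                    # mm, reported A/P motion SD
      Sigma_with_residual = math.hypot(Sigma_ap, 2.0)   # add 2 mm residual in quadrature
      print(van_herk_margin(Sigma_ap, 3.4, 3.4))             # no residual error
      print(van_herk_margin(Sigma_with_residual, 3.4, 3.4))  # with residual error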

  15. Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems

    Directory of Open Access Journals (Sweden)

    DimitrisG. Stavrakoudis

    2012-04-01

    This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC), a Genetic Fuzzy Rule-Based Classification System (GFRBCS) which aims at reducing the structural complexity of the resulting rule base, as well as its learning algorithm's computational requirements, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL) approach, whereby a rule extraction algorithm (REA) is invoked in an iterative fashion, producing one fuzzy rule at a time. The REA is performed in two successive steps: the first one selects the relevant features of the currently extracted rule, whereas the second one decides the antecedent part of the fuzzy rule, using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results in a hyperspectral remote sensing classification as well as in 12 real-world classification datasets indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.

  16. A simple model for enamel fracture from margin cracks.

    Science.gov (United States)

    Chai, Herzl; Lee, James J-W; Kwon, Jae-Young; Lucas, Peter W; Lawn, Brian R

    2009-06-01

    We present results of in situ fracture tests on extracted human molar teeth showing failure by margin cracking. The teeth are mounted into an epoxy base and loaded with a rod indenter capped with a Teflon insert, as representative of food modulus. In situ observations of cracks extending longitudinally upward from the cervical margins are recorded in real time with a video camera. The cracks appear above some threshold and grow steadily within the enamel coat toward the occlusal surface in a configuration reminiscent of channel-like cracks in brittle films. Substantially higher loading is required to delaminate the enamel from the dentin, attesting to the resilience of the tooth structure. A simplistic fracture mechanics analysis is applied to determine the critical load relation for traversal of the margin crack along the full length of the side wall. The capacity of any given tooth to resist failure by margin cracking is predicted to increase with greater enamel thickness and cuspal radius. Implications in relation to dentistry and evolutionary biology are briefly considered.

  17. Margins Associated with Loss of Assured Safety for Systems with Multiple Time-Dependent Failure Modes.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon C. [Arizona State Univ., Tempe, AZ (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sallaberry, Cedric Jean-Marie. [Engineering Mechanics Corp. of Columbus, OH (United States)

    2018-02-01

    Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property value at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.
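
    A minimal sketch of the sampling-based approximation strategy mentioned in point (ii) of the verification discussion, applied to the failure-time margin (SL system failure time) minus (WL system failure time); the link failure-time models and the first-WL/last-SL failure convention are made up for illustration:

      # Hypothetical sketch: sampling-based CDF of the failure-time margin.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000
      wl_fail = np.minimum(rng.normal(10, 2, n), rng.normal(12, 2, n))  # first WL failure
      sl_fail = np.maximum(rng.normal(20, 3, n), rng.normal(22, 3, n))  # last SL failure

      margin = sl_fail - wl_fail
      for t in np.linspace(margin.min(), margin.max(), 7):
          print(f"P(margin <= {t:6.2f}) ~ {np.mean(margin <= t):.3f}")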

  18. A support vector machine classifier reduces interscanner variation in the HRCT classification of regional disease pattern in diffuse lung disease: Comparison to a Bayesian classifier

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Yongjun; Lim, Jonghyuck; Kim, Namkug; Seo, Joon Beom [Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 138-736 (Korea, Republic of); Lynch, David A. [Department of Radiology, National Jewish Medical and Research Center, Denver, Colorado 80206 (United States)

    2013-05-15

    Purpose: To investigate the effect of using different computed tomography (CT) scanners on the accuracy of high-resolution CT (HRCT) images in classifying regional disease patterns in patients with diffuse lung disease, support vector machine (SVM) and Bayesian classifiers were applied to multicenter data. Methods: Two experienced radiologists marked sets of 600 rectangular 20 × 20 pixel regions of interest (ROIs) on HRCT images obtained from two scanners (GE and Siemens), including 100 ROIs for each of the local patterns of lungs: normal lung and five regional pulmonary disease patterns (ground-glass opacity, reticular opacity, honeycombing, emphysema, and consolidation). Each ROI was assessed using 22 quantitative features belonging to one of the following descriptors: histogram, gradient, run-length, gray level co-occurrence matrix, low-attenuation area cluster, and top-hat transform. For automatic classification, a Bayesian classifier and a SVM classifier were compared under three different conditions. First, classification accuracies were estimated using data from each scanner. Next, data from the GE and Siemens scanners were used for training and testing, respectively, and vice versa. Finally, all ROI data were integrated regardless of the scanner type and were then trained and tested together. All experiments were performed based on forward feature selection and fivefold cross-validation with 20 repetitions. Results: For each scanner, better classification accuracies were achieved with the SVM classifier than the Bayesian classifier (92% and 82%, respectively, for the GE scanner; and 92% and 86%, respectively, for the Siemens scanner). The classification accuracies were 82%/72% for training with GE data and testing with Siemens data, and 79%/72% for the reverse. The use of training and test data obtained from the HRCT images of different scanners lowered the classification accuracy compared to the use of HRCT images from the same scanner. For
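
    A minimal sketch of the cross-scanner evaluation protocol (train on one scanner's ROI features, test on the other's) with an SVM and a Gaussian naive Bayes classifier; the synthetic 22-feature data and the scanner-dependent shift are stand-ins, not the study's texture features:

      # Hypothetical sketch of the cross-scanner train/test protocol.
      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      def make_rois(shift):              # 600 ROIs x 22 features, 6 classes
          X = rng.normal(shift, 1.0, (600, 22))
          y = np.repeat(np.arange(6), 100)
          X += y[:, None] * 0.5          # class-dependent offset
          return X, y

      X_ge, y_ge = make_rois(0.0)
      X_si, y_si = make_rois(0.3)        # scanner-dependent shift degrades transfer

      for clf in (SVC(), GaussianNB()):
          clf.fit(X_ge, y_ge)
          print(type(clf).__name__, "GE->Siemens accuracy:", clf.score(X_si, y_si))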

  20. Landscape object-based analysis of wetland plant functional types: the effects of spatial scale, vegetation classes and classifier methods

    Science.gov (United States)

    Dronova, I.; Gong, P.; Wang, L.; Clinton, N.; Fu, W.; Qi, S.

    2011-12-01

    Remote sensing-based vegetation classifications representing plant function such as photosynthesis and productivity are challenging in wetlands with complex cover and difficult field access. Recent advances in object-based image analysis (OBIA) and machine-learning algorithms offer new classification tools; however, few comparisons of different algorithms and spatial scales have been discussed to date. We applied OBIA to delineate wetland plant functional types (PFTs) for Poyang Lake, the largest freshwater lake in China and a Ramsar wetland conservation site, from a 30-m Landsat TM scene at the peak of the spring growing season. We targeted major PFTs (C3 grasses, C3 forbs and different types of C4 grasses and aquatic vegetation) that are both key players in the system's biogeochemical cycles and critical providers of waterbird habitat. Classification results were compared among: a) several object segmentation scales (with average object sizes 900-9000 m2); b) several families of statistical classifiers (including Bayesian, Logistic, Neural Network, Decision Trees and Support Vector Machines) and c) two hierarchical levels of vegetation classification, a generalized 3-class set and a more detailed 6-class set. We found that classification benefited from the object-based approach, which allowed including object shape, texture and context descriptors in classification. While a number of classifiers achieved high accuracy at the finest pixel-equivalent segmentation scale, the highest accuracies and best agreement among algorithms occurred at coarser object scales. No single classifier was consistently superior across all scales, although selected algorithms of the Neural Network, Logistic and K-Nearest Neighbors families frequently provided the best discrimination of classes at different scales. The choice of vegetation categories also affected classification accuracy. The 6-class set allowed for higher individual class accuracies but lower overall accuracies than the 3-class set because

  1. Optimization of short amino acid sequences classifier

    Science.gov (United States)

    Barcz, Aleksy; Szymański, Zbigniew

    This article describes processing methods used for the classification of short amino acid sequences. The data processed are 9-symbol string representations of amino acid sequences, divided into 49 data sets, each one containing samples labeled as reacting or not with a given enzyme. The goal of the classification is to determine, for a single enzyme, whether an amino acid sequence would react with it or not. Each data set is processed separately. Feature selection is performed to reduce the number of dimensions for each data set. The method used for feature selection consists of two phases. During the first phase, significant positions are selected using Classification and Regression Trees. Afterwards, symbols appearing at the selected positions are substituted with numeric values of amino acid properties taken from the AAindex database. In the second phase the new set of features is reduced using a correlation-based ranking formula and Gram-Schmidt orthogonalization. Finally, the preprocessed data is used for training LS-SVM classifiers. SPDE, an evolutionary algorithm, is used to obtain optimal hyperparameters for the LS-SVM classifier, such as the error penalty parameter C and kernel-specific hyperparameters. A simple score penalty is used to adapt the SPDE algorithm to the task of selecting classifiers with the best performance measure values.

  2. Marginal Generation Technology in the Chinese Power Market towards 2030 Based on Consequential Life Cycle Assessment

    DEFF Research Database (Denmark)

    Zhao, Guangling; Guerrero, Josep M.; Pei, Yingying

    2016-01-01

    Electricity consumption is often the hotspot of life cycle assessment (LCA) of products, industrial activities, or services. The objective of this paper is to provide a consistent, scientific, region-specific electricity-supply-based inventory of electricity generation technology for national and regional power grids. Marginal electricity generation technology is pivotal in assessing impacts related to additional consumption of electricity. China covers a large geographical area with regional supply grids; these are arguably equally or less integrated. Meanwhile, it is also a country with internal...

  3. Splenic marginal zone lymphoma.

    Science.gov (United States)

    Piris, Miguel A; Onaindía, Arantza; Mollejo, Manuela

    Splenic marginal zone lymphoma (SMZL) is an indolent small B-cell lymphoma involving the spleen and bone marrow characterized by a micronodular tumoral infiltration that replaces the preexisting lymphoid follicles and shows marginal zone differentiation as a distinctive finding. SMZL cases are characterized by prominent splenomegaly and bone marrow and peripheral blood infiltration. Cells in peripheral blood show a villous cytology. Bone marrow and peripheral blood characteristic features usually allow a diagnosis of SMZL to be performed. The mutational spectrum of SMZL identifies specific findings, such as 7q loss and NOTCH2 and KLF2 mutations, both genes related to marginal zone differentiation. There is a striking clinical variability in SMZL cases, dependent on the tumoral load and performance status. Specific molecular markers such as 7q loss, p53 loss/mutation, NOTCH2 and KLF2 mutations have been found to be associated with the clinical variability. Distinction from monoclonal B-cell lymphocytosis with marginal zone phenotype is still an open issue that requires identification of precise and specific thresholds with clinical meaning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Categorical marginal models: quite extensive package for the estimation of marginal models for categorical data

    OpenAIRE

    Wicher Bergsma; Andries van der Ark

    2015-01-01

    A package accompanying the book Marginal Models for Dependent, Clustered, and Longitudinal Categorical Data by Bergsma, Croon, & Hagenaars, 2009. Its purpose is the fitting and testing of marginal models.

  5. Stable Sparse Classifiers Identify qEEG Signatures that Predict Learning Disabilities (NOS) Severity.

    Science.gov (United States)

    Bosch-Bayard, Jorge; Galán-García, Lídice; Fernandez, Thalia; Lirio, Rolando B; Bringas-Vega, Maria L; Roca-Stappung, Milene; Ricardo-Garcell, Josefina; Harmony, Thalía; Valdes-Sosa, Pedro A

    2017-01-01

    In this paper, we present a novel methodology to solve the classification problem, based on sparse (data-driven) regressions, combined with techniques for ensuring stability, especially useful for high-dimensional datasets and small sample sizes. The sensitivity and specificity of the classifiers are assessed by a stable ROC procedure, which uses a non-parametric algorithm for estimating the area under the ROC curve. This method allows assessing the performance of the classification by the ROC technique when more than two groups are involved in the classification problem, i.e., when the gold standard is not binary. We apply this methodology to the EEG spectral signatures to find biomarkers that allow discriminating between (and predicting pertinence to) different subgroups of children diagnosed with Not Otherwise Specified Learning Disabilities (LD-NOS) disorder. Children with LD-NOS have notable learning difficulties, which affect education but cannot be put into some specific category such as reading (Dyslexia), Mathematics (Dyscalculia), or Writing (Dysgraphia). By using the EEG spectra, we aim to identify EEG patterns that may be related to specific learning disabilities in an individual case. This could be useful to develop subject-based methods of therapy, based on information provided by the EEG. Here we study 85 LD-NOS children, divided into three subgroups previously selected by a clustering technique over the scores of cognitive tests. The classification equation produced stable marginal areas under the ROC of 0.71 for discrimination between Group 1 vs. Group 2; 0.91 for Group 1 vs. Group 3; and 0.75 for Group 2 vs. Group 1. A discussion of the EEG characteristics of each group related to the cognitive scores is also presented.

  6. Characterizing Convexity of Games using Marginal Vectors

    NARCIS (Netherlands)

    van Velzen, S.; Hamers, H.J.M.; Norde, H.W.

    2003-01-01

    In this paper we study the relation between convexity of TU games and marginal vectors. We show that if specific marginal vectors are core elements, then the game is convex. We characterize sets of marginal vectors satisfying this property, and we derive the formula for the minimum number of marginal

  7. 76 FR 34761 - Classified National Security Information

    Science.gov (United States)

    2011-06-14

    ... MARINE MAMMAL COMMISSION Classified National Security Information [Directive 11-01] AGENCY: Marine... Commission's (MMC) policy on classified information, as directed by Information Security Oversight Office... of Executive Order 13526, ``Classified National Security Information,'' and 32 CFR part 2001...

  8. Margination of Fluorescent Polylactic Acid–Polyaspartamide based Nanoparticles in Microcapillaries In Vitro: the Effect of Hematocrit and Pressure

    Directory of Open Access Journals (Sweden)

    Emanuela Fabiola Craparo

    2017-10-01

    The last decade has seen the emergence of vascular-targeted drug delivery systems as a promising approach for the treatment of many diseases, such as cardiovascular diseases and cancer. In this field, one of the major challenges is carrier margination propensity (i.e., particle migration from blood flow to vessel walls); indeed, binding of these particles to targeted cells and tissues is only possible if there is direct carrier–wall interaction. Here, a microfluidic system mimicking the hydrodynamic conditions of human microcirculation in vitro is used to investigate the effect of red blood cells (RBCs) on carrier margination in relation to RBC concentration (hematocrit) and pressure drop. As model drug carriers, fluorescent polymeric nanoparticles (FNPs) were chosen, which were obtained by using as starting material a pegylated polylactic acid–polyaspartamide copolymer. The latter was synthesized by derivatization of α,β-poly(N-2-hydroxyethyl-d,l-aspartamide) (PHEA) with Rhodamine (RhB), polylactic acid (PLA) and then poly(ethylene glycol) (PEG) chains. It was found that the carrier concentration near the wall increases with increasing pressure drop, independently of RBC concentration, and that the tendency for FNP margination decreases with increasing hematocrit. This work highlights the importance of taking into account RBC–drug carrier interactions and physiological conditions in microcirculation when planning a drug delivery strategy based on systemically administered carriers.

  9. Refining MARGINS Mini-Lessons Using Classroom Observations

    Science.gov (United States)

    Iverson, E. A.; Manduca, C. A.; McDaris, J. R.; Lee, S.

    2009-12-01

    One of the challenges that we face in developing teaching materials or activities from research findings is testing the materials to determine that they work as intended. Traditionally faculty develop material for their own class, notice what worked and didn’t, and improve them the next year. However, as we move to a community process of creating and sharing teaching materials, a community-based process for testing materials is appropriate. The MARGINS project has piloted such a process for testing teaching materials and activities developed as part of its mini-lesson project (http://serc.carleton.edu/margins/index.html). Building on prior work developing mechanisms for community review of teaching resources (e.g. Kastens, 2002; Hancock and Manduca, 2005; Mayhew and Hall, 2007), the MARGINS evaluation team developed a structured classroom observation protocol. The goals of field testing are to a) gather structured, consistent feedback for the lesson authors based on classroom use; b) guide reviewers of these lessons to reflect on research-based educational practice as a framework for their comments; c) collect information on the data and observations that the reviewer used to underpin their review; d) determine which mini-lessons are ready to be made widely available on the website. The protocol guides faculty observations on why they used the activity, the effectiveness of the activity in their classroom, the success of the activity in leading to the desired learning, and what other faculty need to successfully use the activity. Available online (http://serc.carleton.edu/margins/protocol.html), the protocol can be downloaded and completed during instruction with the activity. In order to encourage review of mini-lessons using the protocol, a workshop focused on review and revision of activities was held in May 2009. In preparation for the workshop, 13 of the 28 participants chose to field test a mini-lesson prior to the workshop and reported that they found this

  10. An Improved Ensemble Learning Method for Classifying High-Dimensional and Imbalanced Biomedicine Data.

    Science.gov (United States)

    Yu, Hualong; Ni, Jun

    2014-01-01

    Training classifiers on skewed data is a technically challenging task, and it becomes more difficult when the data is simultaneously high-dimensional. Skewed data often appear in the biomedical field. In this study, we deal with this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedicine data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the Accuracy, F-measure, G-mean and AUC evaluation criteria, and thus it can be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
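
    A minimal sketch of asymmetric bagging with random feature subspaces and SVM base classifiers; the paper's FSS balancing strategy is simplified here to a plain random subspace, and the class-1-is-minority convention is an assumption:

      # Hypothetical sketch: each bag keeps every minority sample, undersamples
      # the majority class, and trains an SVM on a random feature subset.
      import numpy as np
      from sklearn.svm import SVC

      def asbagging_fit(X, y, n_bags=11, n_feats=None, seed=0):
          rng = np.random.default_rng(seed)
          minority = np.where(y == 1)[0]         # assumes class 1 is the rare class
          majority = np.where(y == 0)[0]
          n_feats = n_feats or max(1, X.shape[1] // 2)
          models = []
          for _ in range(n_bags):
              maj = rng.choice(majority, size=minority.size, replace=False)
              idx = np.concatenate([minority, maj])
              feats = rng.choice(X.shape[1], size=n_feats, replace=False)
              models.append((feats, SVC().fit(X[np.ix_(idx, feats)], y[idx])))
          return models

      def asbagging_predict(models, X):
          votes = np.mean([clf.predict(X[:, feats]) for feats, clf in models], axis=0)
          return (votes >= 0.5).astype(int)      # majority vote over the bags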

  11. Cervical and incisal marginal discrepancy in ceramic laminate veneering materials: A SEM analysis

    Directory of Open Access Journals (Sweden)

    Hemalatha Ranganathan

    2017-01-01

    Context: Marginal discrepancy, influenced by the choice of processing material used for ceramic laminate veneers, needs to be explored further for better clinical application. Aims: This study aimed to evaluate the amount of cervical and incisal marginal discrepancy associated with different ceramic laminate veneering materials. Settings and Design: This was an experimental, single-blinded, in vitro trial. Subjects and Methods: Ten central incisors were prepared for laminate veneers with 2 mm uniform reduction and a heavy chamfer finish line. Ceramic laminate veneers fabricated over the prepared teeth using four different processing materials were categorized into four groups: Group I - aluminous porcelain veneers, Group II - lithium disilicate ceramic veneers, Group III - lithium disilicate-leucite-based veneers, Group IV - zirconia-based ceramic veneers. The cervical and incisal marginal discrepancy was measured using a scanning electron microscope. Statistical Analysis Used: ANOVA and post hoc Tukey honest significant difference (HSD) tests were used for statistical analysis. Results: The cervical and incisal marginal discrepancies for the four groups were Group I - 114.6 ± 4.3 μm, 132.5 ± 6.5 μm; Group II - 86.1 ± 6.3 μm, 105.4 ± 5.3 μm; Group III - 71.4 ± 4.4 μm, 91.3 ± 4.7 μm; and Group IV - 123.1 ± 4.1 μm, 142.0 ± 5.4 μm. ANOVA and post hoc Tukey HSD tests observed a statistically significant difference between the four test specimens with regard to cervical marginal discrepancy. The cervical and incisal marginal discrepancy scored F = 243.408, P < 0.001 and F = 180.844, P < 0.001, respectively. Conclusion: This study concluded that veneers fabricated using leucite-reinforced lithium disilicate exhibited the least marginal discrepancy, followed by lithium disilicate ceramic, aluminous porcelain, and zirconia-based ceramics. The marginal discrepancy was more in the incisal region than in the cervical region in all the groups.

  12. Portable optical fiber probe-based spectroscopic scanner for rapid cancer diagnosis: a new tool for intraoperative margin assessment.

    Directory of Open Access Journals (Sweden)

    Niyom Lue

    Full Text Available There continues to be a significant clinical need for rapid and reliable intraoperative margin assessment during cancer surgery. Here we describe a portable, quantitative, optical fiber probe-based, spectroscopic tissue scanner designed for intraoperative diagnostic imaging of surgical margins, which we tested in a proof of concept study in human tissue for breast cancer diagnosis. The tissue scanner combines both diffuse reflectance spectroscopy (DRS and intrinsic fluorescence spectroscopy (IFS, and has hyperspectral imaging capability, acquiring full DRS and IFS spectra for each scanned image pixel. Modeling of the DRS and IFS spectra yields quantitative parameters that reflect the metabolic, biochemical and morphological state of tissue, which are translated into disease diagnosis. The tissue scanner has high spatial resolution (0.25 mm over a wide field of view (10 cm × 10 cm, and both high spectral resolution (2 nm and high spectral contrast, readily distinguishing tissues with widely varying optical properties (bone, skeletal muscle, fat and connective tissue. Tissue-simulating phantom experiments confirm that the tissue scanner can quantitatively measure spectral parameters, such as hemoglobin concentration, in a physiologically relevant range with a high degree of accuracy (<5% error. Finally, studies using human breast tissues showed that the tissue scanner can detect small foci of breast cancer in a background of normal breast tissue. This tissue scanner is simpler in design, images a larger field of view at higher resolution and provides a more physically meaningful tissue diagnosis than other spectroscopic imaging systems currently reported in literatures. We believe this spectroscopic tissue scanner can provide real-time, comprehensive diagnostic imaging of surgical margins in excised tissues, overcoming the sampling limitation in current histopathology margin assessment. As such it is a significant step in the development of a

  13. 12 CFR 220.4 - Margin account.

    Science.gov (United States)

    2010-01-01

    ... Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM CREDIT BY... securities. The required margin on a net long or net short commitment in a when-issued security is the margin...) Interest charged on credit maintained in the margin account; (ii) Premiums on securities borrowed in...

  14. Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure

    Directory of Open Access Journals (Sweden)

    Xiaodong Zeng

    2014-01-01

    A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of a classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis; that is, a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutual restraint factors; that is, an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms, genetic algorithm and forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases.
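
    One standard weighted harmonic mean consistent with the description above; the paper's exact weighting scheme may differ, so this is only a sketch:

      # Hypothetical sketch: weighted harmonic mean of ensemble accuracy and diversity.
      def wad_score(accuracy, diversity, w_acc=1.0, w_div=1.0):
          """Weighted harmonic mean: (w_a + w_d) * A * D / (w_a * D + w_d * A)."""
          return (w_acc + w_div) * accuracy * diversity / (w_acc * diversity + w_div * accuracy)

      print(wad_score(0.9, 0.4))            # balanced weighting
      print(wad_score(0.9, 0.4, w_acc=2))   # favor accuracy over diversity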

  15. Feature Import Vector Machine: A General Classifier with Flexible Feature Selection.

    Science.gov (United States)

    Ghosh, Samiran; Wang, Yazhen

    2015-02-01

    The support vector machine (SVM) and other reproducing kernel Hilbert space (RKHS) based classifier systems are drawing much attention recently due to their robustness and generalization capability. The general theme here is to construct classifiers based on the training data in a high dimensional space by using all available dimensions. The SVM achieves huge data compression by selecting only the few observations which lie close to the boundary of the classifier function. However, when the number of observations is not very large (small n) but the number of dimensions/features is large (large p), it is not necessary that all available features are of equal importance in the classification context. Selection of a useful fraction of the available features may result in huge data compression. In this paper we propose an algorithmic approach by means of which such an optimal set of features can be selected. In short, we reverse the traditional sequential observation selection strategy of SVM to that of sequential feature selection. To achieve this we have modified the solution proposed by Zhu and Hastie (2005) in the context of the import vector machine (IVM), to select an optimal sub-dimensional model to build the final classifier with sufficient accuracy.

  16. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels

    2009-10-01

    In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by Codon Structure Factor (CSF) and by a method that we called Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
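
    A minimal sketch of two of the listed ingredients, the stop-codon frequency (i) and the per-position purine probabilities (ii), computed for a toy ORF; the full UFM score combines all five terms:

      # Hypothetical sketch: stop-codon frequency and purine bias per codon position.
      def codon_stats(orf):
          codons = [orf[i:i + 3] for i in range(0, len(orf) - 2, 3)]
          stops = sum(c in ("TAA", "TAG", "TGA") for c in codons) / len(codons)
          purine = [sum(c[p] in "AG" for c in codons) / len(codons) for p in range(3)]
          return stops, purine   # stop-codon frequency, P(purine) at positions 1-3

      orf = "ATGGCTGGAAAGGGATCGGCTGGATGA"
      print(codon_stats(orf))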

  17. Mercury⊕: An evidential reasoning image classifier

    Science.gov (United States)

    Peddle, Derek R.

    1995-12-01

    MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.

  18. Defining and Classifying Interest Groups

    DEFF Research Database (Denmark)

    Baroni, Laura; Carroll, Brendan; Chalmers, Adam

    2014-01-01

    The interest group concept is defined in many different ways in the existing literature and a range of different classification schemes are employed. This complicates comparisons between different studies and their findings. One of the important tasks faced by interest group scholars engaged in large-N studies is therefore to define the concept of an interest group and to determine which classification scheme to use for different group types. After reviewing the existing literature, this article sets out to compare different approaches to defining and classifying interest groups with a sample ... in the organizational attributes of specific interest group types. As expected, our comparison of coding schemes reveals a closer link between group attributes and group type in narrower classification schemes based on group organizational characteristics than those based on a behavioral definition of lobbying...

  19. Can scientific journals be classified based on their citation profiles?

    Directory of Open Access Journals (Sweden)

    Sayed-Amir Marashi

    2015-03-01

    Classification of scientific publications is of great importance in biomedical research evaluation. However, accurate classification of research publications is challenging and normally is performed in a rather subjective way. In the present paper, we propose to classify biomedical publications into superfamilies by analysing their citation profiles, i.e. the location of citations in the structure of citing articles. Such a classification may help authors to find the appropriate biomedical journal for publication, may make journal comparisons more rational, and may even help planners to better track the consequences of their policies on biomedical research.

  20. Developing tools to identify marginal lands and assess their potential for bioenergy production

    Science.gov (United States)

    Galatsidas, Spyridon; Gounaris, Nikolaos; Dimitriadis, Elias; Rettenmaier, Nils; Schmidt, Tobias; Vlachaki, Despoina

    2017-04-01

    The term "marginal land" is currently intertwined in discussions about bioenergy although its definition is neither specific nor firm. The uncertainty arising from marginal land classification and quantification is one of the major constraining factors for its potential use. The clarification of political aims, i.e. "what should be supported?" is also an important constraining factor. Many approaches have been developed to identify marginal lands, based on various definitions according to the management goals. Concerns have been frequently raised regarding the impacts of marginal land use on environment, ecosystem services and sustainability. Current tools of soil quality and land potentials assessment fail to meet the needs of marginal land identification and exploitation for biomass production, due to the lack of comprehensive analysis of interrelated land functions and their quantitative evaluation. Land marginality is determined by dynamic characteristics in many cases and may therefore constitute a transitional state, which requires reassessment in due time. Also, marginal land should not be considered simply a dormant natural resource waiting to be used, since it may already provide multiple benefits and services to society relating to wildlife, biodiversity, carbon sequestration, etc. The consequences of cultivating such lands need to be fully addressed to present a balanced view of their sustainable potential for bioenergy. This framework is the basis for the development of the SEEMLA tools, which aim at supporting the identification, assessment, management of marginal lands in Europe and the decision-making for sustainable biomass production of them using appropriate bioenergy crops. The tools comprise two applications, a web-based one (independent of spatial data) and a GIS-based application (land regionalization on the basis of spatial data), which both incorporate: - Land resource characteristics, restricting the cultivation of agricultural crops but

  1. A method to select aperture margin in collimated spot scanning proton therapy

    International Nuclear Information System (INIS)

    Wang, Dongxu; Smith, Blake R; Gelover, Edgar; Flynn, Ryan T; Hyer, Daniel E

    2015-01-01

    The use of a collimator or aperture may sharpen the lateral dose gradient in spot scanning proton therapy. However, to date, there has not been a standard method to determine the aperture margin for a single field in collimated spot scanning proton therapy. This study describes a theoretical framework for selecting the optimal aperture margin for a single field, and also presents the spot spacing limit required for the optimal aperture margin to exist. Since, for a proton pencil beam partially intercepted by a collimator, the maximum point dose (spot center) shifts away from the original pencil beam central axis, we propose that the optimal margin should be equal to the maximum pencil beam center shift, under the condition that spot spacing is small with respect to the maximum pencil beam center shift, which can be numerically determined based on beam modeling data. A test case is presented which demonstrates agreement with the prediction made based on the proposed methods. This method may be implemented when apertures are applied in a commercial treatment planning system. (note)
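
    A small numerical sketch of the idea: model a collimated pencil beam as a Gaussian fluence cut by an aperture edge and blurred by a Gaussian scatter kernel, then locate the shifted dose maximum. All beam-model widths below are made up, so this only illustrates the geometry of the proposed margin recipe:

      # Hypothetical sketch: maximum dose-peak shift of a collimated pencil beam.
      import numpy as np

      sigma_source, sigma_scatter = 3.0, 2.0     # mm, made-up beam model widths
      x = np.linspace(-20.0, 20.0, 2001)         # mm, lateral coordinate

      def peak_shift(axis_pos, edge=0.0):
          fluence = np.exp(-(x - axis_pos) ** 2 / (2 * sigma_source ** 2))
          fluence[x > edge] = 0.0                # aperture blocks x > edge
          kernel = np.exp(-x ** 2 / (2 * sigma_scatter ** 2))
          dose = np.convolve(fluence, kernel, mode="same")
          return x[np.argmax(dose)] - axis_pos   # shift of the dose maximum

      shifts = [peak_shift(p) for p in np.linspace(-6.0, 3.0, 10)]
      print(f"max |shift| = {max(abs(s) for s in shifts):.2f} mm")  # candidate margin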

  2. Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection.

    Science.gov (United States)

    Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad

    2014-11-01

    Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, the dimensionality of the dataset is extremely high, or computational power is limited to perform more complicated methods. In practice, it is recommended to start dimension reduction via simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such a feature ranking method. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has a dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide more unbiased rankings and whether they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss for using the same classifier in SVC feature ranking and final classification, relative to the optimal choices.
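
    A minimal sketch of SVC ranking: score each feature by the cross-validated accuracy of a classifier trained on that feature alone. The shallow decision tree and the standard sklearn dataset are illustrative choices, not the paper's setup:

      # Hypothetical sketch: Single Variable Classifier (SVC) feature ranking.
      import numpy as np
      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      X, y = load_breast_cancer(return_X_y=True)
      scores = [cross_val_score(DecisionTreeClassifier(max_depth=3), X[:, [j]], y, cv=5).mean()
                for j in range(X.shape[1])]
      ranking = np.argsort(scores)[::-1]   # best single features first
      print(ranking[:5], [round(scores[j], 3) for j in ranking[:5]])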

  3. NRC Seismic Design Margins Program Plan

    International Nuclear Information System (INIS)

    Cummings, G.E.; Johnson, J.J.; Budnitz, R.J.

    1985-08-01

    Recent studies estimate that seismically induced core melt comes mainly from earthquakes in the peak ground acceleration range from 2 to 4 times the safe shutdown earthquake (SSE) acceleration used in plant design. However, from the licensing perspective of the US Nuclear Regulatory Commission, there is a continuing need for consideration of the inherent quantitative seismic margins because of, among other things, the changing perceptions of the seismic hazard. This paper discusses a Seismic Design Margins Program Plan, developed under the auspices of the US NRC, that provides the technical basis for assessing the significance of design margins in terms of overall plant safety. The Plan will also identify potential weaknesses that might have to be addressed, and will recommend technical methods for assessing margins at existing plants. For the purposes of this program, a general definition of seismic design margin is expressed in terms of how much larger than the design basis earthquake an earthquake must be to compromise plant safety. In this context, margin needs to be determined at the plant, system/function, structure, and component levels. 14 refs., 1 fig

  4. Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions

    Science.gov (United States)

    Khoury, Mehdi; Liu, Honghai

    This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.

  5. The marginal band system in nymphalid butterfly wings.

    Science.gov (United States)

    Taira, Wataru; Kinjo, Seira; Otaki, Joji M

    2015-01-01

    Butterfly wing color patterns are highly complex and diverse, but they are believed to be derived from the nymphalid groundplan, which is composed of several color pattern systems. Among these pattern systems, the marginal band system, including marginal and submarginal bands, has rarely been studied. Here, we examined the color pattern diversity of the marginal band system among nymphalid butterflies. Marginal and submarginal bands are usually expressed as a pair of linear bands aligned with the wing margin. However, a submarginal band can be expressed as a broken band, an elongated oval, or a single dot. The marginal focus, usually a white dot at the middle of a wing compartment along the wing edge, corresponds to the pupal edge spot, one of the pupal cuticle spots that signify the locations of color pattern organizing centers. A marginal band can be expressed as a semicircle, an elongated oval, or a pair of eyespot-like structures, which suggest the organizing activity of the marginal focus. Physical damage at the pupal edge spot leads to distal dislocation of the submarginal band in Junonia almana and in Vanessa indica, suggesting that the marginal focus functions as an organizing center for the marginal band system. Taken together, we conclude that the marginal band system is developmentally equivalent to other symmetry systems. Additionally, the marginal band is likely a core element and the submarginal band a paracore element of the marginal band system, and both bands are primarily specified by the marginal focus organizing center.

  6. Comparing cosmic web classifiers using information theory

    Energy Technology Data Exchange (ETDEWEB)

    Leclercq, Florent [Institute of Cosmology and Gravitation (ICG), University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth PO1 3FX (United Kingdom); Lavaux, Guilhem; Wandelt, Benjamin [Institut d' Astrophysique de Paris (IAP), UMR 7095, CNRS – UPMC Université Paris 6, Sorbonne Universités, 98bis boulevard Arago, F-75014 Paris (France); Jasche, Jens, E-mail: florent.leclercq@polytechnique.org, E-mail: lavaux@iap.fr, E-mail: j.jasche@tum.de, E-mail: wandelt@iap.fr [Excellence Cluster Universe, Technische Universität München, Boltzmannstrasse 2, D-85748 Garching (Germany)

    2016-08-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  7. Comparing cosmic web classifiers using information theory

    International Nuclear Information System (INIS)

    Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin; Jasche, Jens

    2016-01-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  8. Final Validation of the ProMisE Molecular Classifier for Endometrial Carcinoma in a Large Population-based Case Series.

    Science.gov (United States)

    Kommoss, S; McConechy, M K; Kommoss, F; Leung, S; Bunz, A; Magrill, J; Britton, H; Kommoss, F; Grevenkamp, F; Karnezis, A; Yang, W; Lum, A; Krämer, B; Taran, F; Staebler, A; Lax, S; Brucker, S Y; Huntsman, D G; Gilks, C B; McAlpine, J N; Talhouk, A

    2018-02-07

    Based on The Cancer Genome Atlas, we previously developed and confirmed a pragmatic molecular classifier for endometrial cancers: ProMisE (Proactive Molecular Risk Classifier for Endometrial Cancer). ProMisE identifies four prognostically distinct molecular subtypes, and can be applied to diagnostic specimens (biopsy/curettings), enabling earlier informed decision-making. We have strictly adhered to the Institute of Medicine (IOM) guidelines for the development of genomic biomarkers, and herein present the final validation step of a locked-down classifier prior to clinical application. We assessed a retrospective cohort of women from the Tübingen University Women's Hospital treated for endometrial carcinoma between 2003 and 2013. Primary outcomes of overall, disease-specific and progression-free survival were evaluated for clinical, pathological, and molecular features. Complete clinical and molecular data were evaluable for 452 women. Patient age ranged from 29 to 93 (median 65) years, and 87.8% of cases were of endometrioid histotype. Grade distribution included 282 (62.4%) G1, 75 (16.6%) G2, and 95 (21.0%) G3 tumors. 276 (61.1%) patients had stage IA disease, with the remainder stage IB (89 (19.7%)), stage II (26 (5.8%)), and stage III/IV (61 (13.5%)). ProMisE molecular classification yielded 127 (28.1%) MMR-D, 42 (9.3%) POLE, 55 (12.2%) p53abn, and 228 (50.4%) p53wt. ProMisE was a prognostic marker for progression-free (P=0.001) and disease-specific (P=0.03) survival even after adjusting for known risk factors. Concordance between diagnostic and surgical specimens was highly favorable: accuracy 0.91, kappa 0.88. We have developed, confirmed and now validated a pragmatic molecular classification tool (ProMisE) that provides consistent categorization of tumors and identifies four distinct prognostic molecular subtypes. ProMisE can be applied to diagnostic samples and thus could be used to inform surgical procedure(s) and/or need for adjuvant therapy. Based on the IOM

  9. Assessment of seismic margin calculation methods

    International Nuclear Information System (INIS)

    Kennedy, R.P.; Murray, R.C.; Ravindra, M.K.; Reed, J.W.; Stevenson, J.D.

    1989-03-01

    Seismic margin review of nuclear power plants requires that the High Confidence of Low Probability of Failure (HCLPF) capacity be calculated for certain components. The candidate methods for calculating the HCLPF capacity as recommended by the Expert Panel on Quantification of Seismic Margins are the Conservative Deterministic Failure Margin (CDFM) method and the Fragility Analysis (FA) method. The present study evaluated these two methods using some representative components in order to provide further guidance in conducting seismic margin reviews. It is concluded that either of the two methods could be used for calculating HCLPF capacities. 21 refs., 9 figs., 6 tabs

  10. Development of the Canadian Marginalization Index: a new tool for the study of inequality.

    Science.gov (United States)

    Matheson, Flora I; Dunn, James R; Smith, Katherine L W; Moineddin, Rahim; Glazier, Richard H

    2012-04-30

    Area-based measures of socio-economic status are increasingly used in population health research. Based on previous research and theory, the Canadian Marginalization Index (CAN-Marg) was created to reflect four dimensions of marginalization: residential instability, material deprivation, dependency and ethnic concentration. The objective of this paper was threefold: to describe CAN-Marg; to illustrate its stability across geographic area and time; and to describe its association with health and behavioural problems. CAN-Marg was created at the dissemination area (DA) and census tract level for census years 2001 and 2006, using factor analysis. Descriptions of 18 health and behavioural problems were selected using individual-level data from the Canadian Community Health Survey (CCHS) 3.1 and 2007/08. CAN-Marg quintiles created at the DA level (2006) were assigned to individual CCHS records. Multilevel logistic regression modeling was conducted to examine associations between marginalization and CCHS health and behavioural problems. The index demonstrated marked stability across time and geographic area. Each of the four dimensions showed strong and significant associations with the selected health and behavioural problems, and these associations differed depending on which of the dimensions of marginalization was examined. CAN-Marg is a census-based, empirically derived and theoretically informed tool designed to reflect a broader conceptualization of Canadian marginalization.

  11. A hybrid approach to select features and classify diseases based on medical data

    Science.gov (United States)

    AbdelLatif, Hisham; Luo, Jiawei

    2018-03-01

    Feature selection is a popular problem in the classification of diseases in clinical medicine. Here, we develop a hybrid methodology to classify diseases, based on three medical datasets: the Arrhythmia, Breast cancer, and Hepatitis datasets. This methodology, called k-means ANOVA Support Vector Machine (K-ANOVA-SVM), uses k-means clustering together with the ANOVA statistic to preprocess the data and select the significant features, and Support Vector Machines in the classification process. To compare and evaluate the performance, we chose three classification algorithms, decision tree, Naïve Bayes and Support Vector Machines, and applied the medical datasets directly to these algorithms. Our methodology gave a much better classification accuracy of 98% on the Arrhythmia dataset, 92% on the Breast cancer dataset and 88% on the Hepatitis dataset, compared to using the medical data directly with decision tree, Naïve Bayes, and Support Vector Machines. The ROC curve and precision achieved with K-ANOVA-SVM were also better than those of the other algorithms.
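
    A minimal sketch of such a pipeline, with the role of the k-means step taken as an assumption (here its cluster labels are appended as an extra feature before ANOVA-based selection) and a public dataset standing in for the medical data:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)   # stand-in for the medical datasets

    # k-means preprocessing (assumed form): append cluster labels as a feature.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    X_aug = np.column_stack([X, km.labels_])

    # ANOVA F-test keeps the most class-discriminative features; SVM classifies.
    model = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC(kernel="rbf"))
    print("CV accuracy:", cross_val_score(model, X_aug, y, cv=5).mean())
    ```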

  12. Least Square Support Vector Machine Classifier vs a Logistic Regression Classifier on the Recognition of Numeric Digits

    Directory of Open Access Journals (Sweden)

    Danilo A. López-Sarmiento

    2013-11-01

    Full Text Available This paper compares the performance of a multi-class least squares support vector machine (LS-SVM) against a multi-class logistic regression classifier on the problem of recognizing handwritten numeric digits (0-9). To develop the comparison, a data set consisting of 5000 images of handwritten numeric digits was used (500 images for each number from 0-9), each image of 20 x 20 pixels. The inputs to each of the systems were vectors of 400 dimensions corresponding to each image (no feature extraction was performed). Both classifiers used the one-vs-all strategy to enable multi-classification and a random cross-validation function for the process of minimizing the cost function. The metrics of comparison were precision and training time under the same computational conditions. Both techniques evaluated showed a precision above 95%, with LS-SVM slightly more accurate. In terms of computational cost, however, we found a marked difference: LS-SVM training requires 16.42% less time than that required by the logistic regression model under the same computational conditions.
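
    The comparison can be reproduced in spirit on scikit-learn's small digits set; an ordinary kernel SVM stands in for the LS-SVM here, so the numbers will differ from the paper's:

    ```python
    import time
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)   # 8x8 digits, a small stand-in for the 20x20 set
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, clf in [("logistic regression", LogisticRegression(max_iter=5000)),
                      ("SVM (stand-in for LS-SVM)", SVC(kernel="rbf"))]:
        t0 = time.time()
        clf.fit(Xtr, ytr)
        print(f"{name}: accuracy={clf.score(Xte, yte):.3f}, train time={time.time() - t0:.2f}s")
    ```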

  13. Competent person for radiation protection. Practical radiation protection for base nuclear installations and installations classified for the environment protection

    International Nuclear Information System (INIS)

    Pin, A.; Perez, S.; Videcoq, J.; Ammerich, M.

    2008-01-01

    This book corresponds to the practical module devoted to base nuclear installations and to installations classified for environment protection, that is to say permanent nuclear installations susceptible of presenting risks for the public, the environment or workers. In compliance with the legislation, which stipulates that this module must allow the acquired theoretical training to be applied to practical work situations, it includes seven chapters, as follows: generalities on access conditions in regulated areas of base nuclear installations or installations classified for environment protection, and clothing against contamination; use of control devices and management of damaged situations; methodology of working place studies, completed by the application to a real case of a study on an intervention on a containment wall. A part entitled 'take stock of the situation' ends every chapter and invites the reader to check their understanding and acquisition of the material covered. (N.C.)

  14. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    Full Text Available The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization in order to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  15. Northeastern Brazilian margin: Regional tectonic evolution based on integrated analysis of seismic reflection and potential field data and modelling

    Science.gov (United States)

    Blaich, Olav A.; Tsikalas, Filippos; Faleide, Jan Inge

    2008-10-01

    Integration of regional seismic reflection and potential field data along the northeastern Brazilian margin, complemented by crustal-scale gravity modelling, is used to reveal and illustrate onshore-offshore crustal structure correlation, the character of the continent-ocean boundary, and the relationship of crustal structure to regional variation of potential field anomalies. The study reveals distinct along-margin structural and magmatic changes that are spatially related to a number of conjugate Brazil-West Africa transfer systems, governing the margin segmentation and evolution. Several conceptual tectonic models are invoked to explain the structural evolution of the different margin segments in a conjugate margin context. Furthermore, the constructed transects, the observed and modelled Moho relief, and the potential field anomalies indicate that the Recôncavo, Tucano and Jatobá rift system may reflect a polyphase deformation rifting-mode associated with a complex time-dependent thermal structure of the lithosphere. The constructed transects and available seismic reflection profiles, indicate that the northern part of the study area lacks major breakup-related magmatic activity, suggesting a rifted non-volcanic margin affinity. In contrast, the southern part of the study area is characterized by abrupt crustal thinning and evidence for breakup magmatic activity, suggesting that this region evolved, partially, with a rifted volcanic margin affinity and character.

  16. Application of SVM classifier in thermographic image classification for early detection of breast cancer

    Science.gov (United States)

    Oleszkiewicz, Witold; Cichosz, Paweł; Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz; Nowak, Robert M.; Okuniewski, Rafał

    2016-09-01

    This article presents the application of machine learning algorithms for early detection of breast cancer on the basis of thermographic images. A supervised learning model, the support vector machine (SVM), was implemented, with the Sequential Minimal Optimization (SMO) algorithm used for training the SVM classifier. The SVM classifier was included in a client-server application which enables users to create a training set of examinations and to apply classifiers (including SVM) for the diagnosis and early detection of breast cancer. The sensitivity and specificity of the SVM classifier were calculated based on the thermographic images from the studies. Furthermore, a heuristic method for tuning the SVM's parameters was proposed.

  17. Reliability and operating margins of LWR fuels

    International Nuclear Information System (INIS)

    Strasser, A.A.; Lindquist, K.O.

    1977-01-01

    The margins to fuel thermal operating limits under normal and accident conditions are key to plant operating flexibility and have an impact on availability and capacity factor. Fuel performance problems that do not result in clad breach can reduce these margins; however, most have been or can be solved with design changes. Regulatory changes have been major factors in eroding these margins. Various methods for regaining the margins are discussed

  18. Possible world based consistency learning model for clustering and classifying uncertain data.

    Science.gov (United States)

    Liu, Han; Zhang, Xianchao; Zhang, Xiaotong

    2018-06-01

    The possible world model has been shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible world based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, which means that their effectiveness relies heavily on the post-processing method and their efficiency is also not very good. In this paper, we propose a novel possible world based consistency learning model for uncertain data, which can be extended both for clustering and for classifying uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which makes full use of the information across different possible worlds and thereby improves clustering and classification performance. Meanwhile, the model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, thereby ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be obtained directly, without any post-processing procedure. Furthermore, for the clustering and classification tasks, we respectively derive efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real world uncertain datasets show that the proposed model outperforms the state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency. Copyright © 2018 Elsevier Ltd. All rights reserved.
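
    The consensus-affinity idea can be sketched as follows; the paper's rank constraint is replaced here by ordinary spectral clustering on the averaged affinity, so this conveys only the flavor of the model, not its optimization:

    ```python
    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(0)

    # Toy uncertain objects: each object is a mean plus Gaussian noise (its uncertainty model).
    means = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
    n_worlds, noise = 30, 0.5

    # Consensus affinity: average per-world affinities instead of clustering
    # each possible world independently.
    A = np.zeros((len(means), len(means)))
    for _ in range(n_worlds):
        world = means + rng.normal(0, noise, means.shape)    # one sampled possible world
        d2 = ((world[:, None, :] - world[None, :, :]) ** 2).sum(-1)
        A += np.exp(-d2 / 2.0)
    A /= n_worlds

    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(A)
    print(labels)
    ```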

  19. 36 CFR 1256.46 - National security-classified information.

    Science.gov (United States)

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false National security-classified... Restrictions § 1256.46 National security-classified information. In accordance with 5 U.S.C. 552(b)(1), NARA... properly classified under the provisions of the pertinent Executive Order on Classified National Security...

  20. Overview of seismic margin insights gained from seismic PRA results

    International Nuclear Information System (INIS)

    Kennedy, R.P.; Sues, R.H.; Campbell, R.D.

    1986-01-01

    This paper presents the findings of a study conducted under NRC and EPRI sponsorship in which published seismic PRAs were reviewed in order to gain insight to the seismic margins inherent in existing nuclear plants. The approach taken was to examine the fragilities of those components which have been found to be dominant contributors to seismic risk at plants in low-to-moderate seismic regions (SSE levels between 0.12g and 0.25g). It is concluded that there is significant margin inherent in the capacity of most critical components above the plant design basis. For ground motions less than about 0.3g, the predominant sources of seismic risk are loss of offsite power coupled with random failure of the emergency diesels, non-recoverable circuit breaker trip due to relay chatter, unanchored equipment, unreinforced non-load bearing block walls, vertical water storage tanks, systems interactions and possibly soil liquefaction. Recommendations as to which components should be reviewed in seismic margin studies for margin earthquakes less than 0.3g, between 0.3g and 0.5g, and greater than 0.5g, developed by the NRC expert panel on the quantification of seismic margins (based on the review of past PRA data, earthquake experience data, and their own personal experience) are presented

  1. Bayes classifiers for imbalanced traffic accidents datasets.

    Science.gov (United States)

    Mujalli, Randa Oqab; López, Griselda; Garach, Laura

    2016-03-01

    Traffic accident data sets are usually imbalanced, where the number of instances classified under the killed or severe injuries class (minority) is much lower than the number classified under the slight injuries class (majority). This poses a challenging problem for classification algorithms and may yield a model that covers the slight injuries instances well while frequently misclassifying the killed or severe injuries instances. Based on traffic accident data collected on urban and suburban roads in Jordan over three years (2009-2011), three different data balancing techniques were used: under-sampling, which removes some instances of the majority class; oversampling, which creates new instances of the minority class; and a mixed technique that combines both. In addition, different Bayes classifiers were compared on the imbalanced and balanced data sets: Averaged One-Dependence Estimators, Weightily Averaged One-Dependence Estimators, and Bayesian networks, in order to identify factors that affect the severity of an accident. The results indicated that using the balanced data sets, especially those created using oversampling techniques, with Bayesian networks improved the classification of a traffic accident according to its severity and reduced the misclassification of killed and severe injuries instances. In addition, the following variables were found to contribute to the occurrence of a fatality or a severe injury in a traffic accident: number of vehicles involved, accident pattern, number of directions, accident type, lighting, surface condition, and speed limit. This work, to the knowledge of the authors, is the first that aims at analyzing historical data records for traffic accidents occurring in Jordan and the first to apply balancing techniques to analyze the injury severity of traffic accidents. Copyright © 2015 Elsevier Ltd. All rights reserved.
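
    A toy version of the oversampling route, with synthetic data and a Gaussian naive Bayes model standing in for the paper's Bayesian-network classifiers:

    ```python
    import numpy as np
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)

    # Imbalanced toy data: a small "severe" class (1) against a large "slight" class (0).
    X = rng.normal(size=(2000, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 2.2).astype(int)
    Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

    # Random oversampling: resample the minority class with replacement
    # until both classes have the same size.
    minority = np.flatnonzero(ytr == 1)
    extra = rng.choice(minority, size=(ytr == 0).sum() - minority.size, replace=True)
    Xbal, ybal = np.vstack([Xtr, Xtr[extra]]), np.concatenate([ytr, ytr[extra]])

    clf = GaussianNB().fit(Xbal, ybal)   # stand-in for the paper's Bayesian networks
    print(classification_report(yte, clf.predict(Xte), digits=3))
    ```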

  2. Solid waste bin detection and classification using Dynamic Time Warping and MLP classifier

    Energy Technology Data Exchange (ETDEWEB)

    Islam, Md. Shafiqul, E-mail: shafique@eng.ukm.my [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Hannan, M.A., E-mail: hannan@eng.ukm.my [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Basri, Hassan [Dept. of Civil and Structural Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Hussain, Aini; Arebey, Maher [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia)

    2014-02-15

    Highlights: • Solid waste bin level detection using Dynamic Time Warping (DTW). • Gabor wavelet filter is used to extract the solid waste image features. • Multi-Layer Perceptron classifier network is used for bin image classification. • The classification performance is evaluated by ROC curve analysis. - Abstract: The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-frequency Identification (RFID), or sensor-equipped intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, when capturing the bin image, it is challenging to position the camera so that the bin area is centered in the image. As yet, there is no ideal system which can correctly estimate the amount of SW. This paper briefly discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area and the Gabor wavelet (GW) was introduced for feature extraction of the waste bin image. The image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curve was used to statistically evaluate classifier performance. The results of this developed system are comparable to previous image processing based systems. The system demonstration using DTW with GW for feature extraction and an MLP classifier led to promising results with respect to the accuracy of waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level.
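
    The DTW distance at the heart of the detection step is the classic dynamic program; a minimal 1-D version, matching the intensity profile of a candidate image region against a hypothetical bin template:

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """O(len(a) * len(b)) dynamic time warping distance between two 1-D sequences."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # The region whose profile minimizes the DTW distance to the template locates the bin.
    template = np.array([0, 1, 4, 6, 6, 5, 2, 0], dtype=float)
    candidate = np.array([0, 0, 2, 5, 6, 6, 4, 1, 0], dtype=float)
    print(dtw_distance(template, candidate))
    ```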

  3. Novel Two-Step Classifier for Torsades de Pointes Risk Stratification from Direct Features

    Directory of Open Access Journals (Sweden)

    Jaimit Parikh

    2017-11-01

    Full Text Available While pre-clinical Torsades de Pointes (TdP) risk classifiers had initially been based on drug-induced block of hERG potassium channels, it is now well established that improved risk prediction can be achieved by considering block of non-hERG ion channels. Current multi-channel TdP classifiers can be categorized into two classes: first, classifiers that take as input the values of drug-induced block of ion channels (direct features); second, classifiers that are built on features extracted from the output of drug-induced multi-channel blockage simulations in in-silico models (derived features). Classifiers built on derived features have thus far not consistently provided increased prediction accuracies, which casts doubt on the value of such approaches given the cost of including biophysical detail. Here, we propose a new two-step method for TdP risk classification, referred to as Multi-Channel Blockage at Early After Depolarization (MCB@EAD). In the first step, we classify a compound that produces insufficient hERG block as non-torsadogenic. In the second step, the role of non-hERG channels in modulating TdP risk is considered by constructing classifiers based on direct or derived features at critical hERG block concentrations that generate EADs in computational cardiac cell models. MCB@EAD provides comparable or superior TdP risk classification of the drugs from the direct features in tests against published methods. TdP risk for the drugs was highly correlated with the propensity to generate EADs in the model. However, the derived features of the biophysical models did not improve the predictive capability for TdP risk assessment.
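
    The two-step structure can be sketched as below; the threshold, features and training data are all invented for illustration and do not reflect the paper's calibration:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def classify_tdp(herg_block, other_features, herg_threshold, second_stage):
        """Step 1: insufficient hERG block => non-torsadogenic (label 0).
        Step 2: remaining compounds go to a classifier on non-hERG features."""
        labels = np.zeros(len(herg_block), dtype=int)
        risky = herg_block >= herg_threshold
        if risky.any():
            labels[risky] = second_stage.predict(other_features[risky])
        return labels

    # Hypothetical training data: fractional block of hERG and two other channels.
    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1, size=(200, 3))                  # [hERG, CaL, NaL] block
    y = ((X[:, 0] > 0.5) & (X[:, 1] < 0.4)).astype(int)   # toy torsadogenic label

    gate = X[:, 0] > 0.5
    stage2 = LogisticRegression().fit(X[gate][:, 1:], y[gate])
    print(classify_tdp(X[:, 0], X[:, 1:], 0.5, stage2))
    ```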

  4. Designing a Web Spam Classifier Based on Feature Fusion in the Layered Multi-Population Genetic Programming Framework

    Directory of Open Access Journals (Sweden)

    Amir Hosein KEYHANIPOUR

    2013-11-01

    Full Text Available Nowadays, Web spam pages are a critical challenge for Web retrieval systems and have a drastic influence on the performance of such systems. Although these systems try to combat the impact of spam pages on their final results lists, spammers increasingly use more sophisticated techniques to increase the number of views for their intended pages in order to have more commercial success. This paper employs the recently proposed Layered Multi-population Genetic Programming model for the Web spam detection task, as well as the application of correlation coefficient analysis for feature space reduction. Based on our tentative results, the designed classifier, which is based on a combination of easy-to-compute features, has a very reasonable performance in comparison with similar methods.

  5. Just-in-time classifiers for recurrent concepts.

    Science.gov (United States)

    Alippi, Cesare; Boracchi, Giacomo; Roveri, Manuel

    2013-04-01

    Just-in-time (JIT) classifiers operate in evolving environments by classifying instances and reacting to concept drift. In stationary conditions, a JIT classifier improves its accuracy over time by exploiting additional supervised information coming from the field. In nonstationary conditions, however, the classifier reacts as soon as concept drift is detected; the current classification setup is discarded and a suitable one is activated to keep the accuracy high. We present a novel generation of JIT classifiers able to deal with recurrent concept drift by means of a practical formalization of the concept representation and the definition of a set of operators working on such representations. The concept-drift detection activity, which is crucial in promptly reacting to changes exactly when needed, is advanced by considering change-detection tests monitoring both input and class distributions.
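
    A toy rendering of the recurrent-concept idea, with a simple accuracy-drop rule standing in for the paper's change-detection tests:

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    class JITClassifier:
        """Keeps a library of per-concept models; on drift, recalls a stored
        concept that explains the recent window or starts a new one."""

        def __init__(self, window=200, drop=0.15):
            self.library, self.current = [], None
            self.window, self.drop = window, drop
            self.buf_X, self.buf_y = [], []

        def observe(self, x, y):
            self.buf_X.append(x)
            self.buf_y.append(y)
            if len(self.buf_X) < self.window:
                return
            X, t = np.array(self.buf_X), np.array(self.buf_y)
            self.buf_X, self.buf_y = [], []
            if self.current is None or self.current.score(X, t) < 1 - self.drop:
                self.current = self._recall_or_new(X, t)   # drift (or cold start)

        def _recall_or_new(self, X, t):
            # Recall the stored concept that best explains the recent window ...
            if self.library:
                best = max(self.library, key=lambda m: m.score(X, t))
                if best.score(X, t) >= 1 - self.drop:
                    return best
            # ... or treat the window as a genuinely new concept.
            model = GaussianNB().fit(X, t)
            self.library.append(model)
            return model
    ```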

  6. Ranking production units according to marginal efficiency contribution

    DEFF Research Database (Denmark)

    Ghiyasi, Mojtaba; Hougaard, Jens Leth

    League tables associated with various forms of service activities, from schools to hospitals, illustrate the public need for ranking institutions by their productive performance. We present a new method for ranking production units which is based on each unit's marginal contribution to the technical

  7. Two Classifiers Based on Serum Peptide Pattern for Prediction of HBV-Induced Liver Cirrhosis Using MALDI-TOF MS

    Directory of Open Access Journals (Sweden)

    Yuan Cao

    2013-01-01

    Full Text Available Chronic infection with hepatitis B virus (HBV) is associated with the majority of cases of liver cirrhosis (LC) in China. Although liver biopsy is the reference method for evaluation of cirrhosis, it is an invasive procedure with inherent risk. The aim of this study is to discover novel noninvasive serum biomarkers specific for the diagnosis of HBV-induced LC. We performed bead fractionation/MALDI-TOF MS analysis on sera from patients with LC. Thirteen feature peaks with optimal discriminatory performance were obtained using a support-vector-machine (SVM)-based strategy. Based on these results, five supervised machine learning methods were employed to construct classifiers that discriminated the proteomic spectra of patients with HBV-induced LC from those of controls. Here, we describe two novel methods for prediction of HBV-induced LC, termed LC-NB and LC-MLP, respectively. We obtained a sensitivity of 90.9%, a specificity of 94.9%, and an overall accuracy of 93.8% on an independent test set. Comparisons with the existing methods showed that LC-NB and LC-MLP achieved better accuracy. Our study suggests that potential serum biomarkers can be determined for discriminating LC and non-LC cohorts by using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. These two classifiers could be used in clinical practice for HBV-induced LC assessment.

  8. MARGINALIZATION OF SOUTH ASIANS BASED ON THE RACE AND SKIN COLOR IN BHARATI MUKHERJEE’S "JASMINE" AND CHITRA B. DIVAKARUNI’S "THE MISTRESS OF SPICES"

    Directory of Open Access Journals (Sweden)

    Iwona Filipczak

    2016-04-01

    Full Text Available The purpose of this article is to focus on the issue of marginalization of South Asians in the United States as portrayed in two novels written by writers of Indian origin: Bharati Mukherjee’s "Jasmine" and Chitra Bannerjee Divakaruni’s "The Mistress of Spices". It is investigated how race or skin color are the reasons for the marginalization of Indian immigrants in the United States. While "Jasmine" shows white Americans’ inability to embrace the racial difference of an Indian immigrant, which may be read as a reflection of the relative newness of this ethnic group in the United States and its shifting racial classification, "The Mistress of Spices" shows that the patterns of marginalization based on skin color may be developed already in the homeland, India, and then transferred to the US and confronted with the country’s racial diversity. Divakaruni’s novel raises a discussion of how the appreciation of whiteness developed in the country of birth leads to the hierarchical relations between the members of the Indian diaspora, and how it affects their relations with other American minorities. In this way, it shows that marginalization based on skin color is not only the outcome of inter-ethnic encounters but it can be an internal problem of this ethnic group as well.

  9. Classifying cognitive profiles using machine learning with privileged information in Mild Cognitive Impairment

    Directory of Open Access Journals (Sweden)

    Hanin Hamdan Alahmadi

    2016-11-01

    Full Text Available Early diagnosis of dementia is critical for assessing disease progression and potential treatment. State-of-the-art machine learning techniques have been increasingly employed to take on this diagnostic task. In this study, we employed Generalised Matrix Learning Vector Quantization (GMLVQ) classifiers to discriminate patients with Mild Cognitive Impairment (MCI) from healthy controls based on their cognitive skills. Further, we adopted a "learning with privileged information" approach to combine cognitive and fMRI data for the classification task. The resulting classifier operates solely on the cognitive data while it incorporates the fMRI data as privileged information (PI) during training. This novel classifier is of practical use, as the collection of brain imaging data is not always possible with patients and older participants. MCI patients and healthy age-matched controls were trained to extract structure from temporal sequences. We ask whether machine learning classifiers can be used to discriminate patients from controls based on the learning performance and whether differences between these groups relate to individual cognitive profiles. To this end, we tested participants on four cognitive tasks: working memory, cognitive inhibition, divided attention, and selective attention. We also collected fMRI data before and after training on the learning task and extracted fMRI responses and connectivity as features for machine learning classifiers. Our results show that the PI-guided GMLVQ classifiers outperform the baseline classifier that only used the cognitive data. In addition, we found that for the baseline classifier, divided attention is the only relevant cognitive feature. When PI was incorporated, divided attention remained the most relevant feature while cognitive inhibition also became relevant for the task. Interestingly, this analysis for the fMRI GMLVQ classifier suggests that (1) when overall fMRI signal for structured stimuli is

  10. WSEAT Shock Testing Margin Assessment Using Energy Spectra Final Report.

    Energy Technology Data Exchange (ETDEWEB)

    Sisemore, Carl; Babuska, Vit; Booher, Jason

    2018-02-01

    Several programs at Sandia National Laboratories have adopted energy spectra as a metric to relate the severity of mechanical insults to structural capacity, the purpose being to gain insight into the system's capability and reliability and to quantify the ultimate margin between the normal operating envelope and the likely system failure point: a system margin assessment. The fundamental concern with the use of energy metrics was that the applicability domain and implementation details were not completely defined for many problems of interest. The goal of this WSEAT project was to examine that domain of applicability, work out the necessary implementation details, and provide experimental validation for the energy spectra based methods in the context of margin assessment as they relate to shock environments. The extensive test results showed that failure predictions using energy methods did not agree with failure predictions using S-N data. As a result, a modification to the energy methods was developed, following the form of Basquin's equation, to incorporate the power law exponent for fatigue damage. This update to the energy-based framework brings the energy-based metrics into agreement with experimental data and historical S-N data.
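
    For orientation, Basquin's relation in its standard fatigue notation (these symbols are conventional, not the report's own) reads

    ```latex
    \sigma_a = \sigma'_f \, (2N_f)^{b}
    ```

    where \sigma_a is the stress amplitude, \sigma'_f the fatigue strength coefficient, N_f the number of cycles to failure, and b the (negative) power-law exponent of the kind the modified energy method incorporates.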

  11. Safety margins associated with containment structures under dynamic loading

    International Nuclear Information System (INIS)

    Lu, S.C.

    1978-01-01

    A technical basis for assessing the true safety margins of containment structures involved with MARK I boiling water reactor reevaluation activities is presented. It is based on the results of a plane-strain, large displacement, elasto-plastic, finite-element analysis of a thin cylindrical shell subjected to external and internal pressure pulses. An analytical procedure is presented for estimating the ultimate load capacity of the thin shell structure, and subsequently, for quantifying the design margins of safety for the type of loads under consideration. For defining failure of structures, a finite strain failure criterion is derived that accounts for multiaxiality effects

  12. Evans Syndrome Presented with Marginal Zone Lymphoma and Duodenal Neuroendocrine Tumor in an Elderly Woman

    Directory of Open Access Journals (Sweden)

    Daniele D'Ambrosio

    2016-12-01

    Full Text Available Evans syndrome (ES) is an autoimmune disorder characterized by simultaneous or sequential development of autoimmune hemolytic anemia, immune thrombocytopenia, and/or neutropenia. ES can be classified as a primary (idiopathic) or secondary (associated with an underlying disease) syndrome. We report a case of ES in an elderly patient in the presence of multiple trigger factors, namely a recent influenza vaccination, marginal zone lymphoma, and a G1 neuroendocrine tumor. Whether this association is casual or causal remains a matter of speculation. It is, however, necessary to carry out a thorough work-up in newly diagnosed ES and a more careful search for miscellaneous factors, especially in elderly patients.

  13. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud.

    Science.gov (United States)

    Munisamy, Shyamala Devi; Chokkalingam, Arun

    2015-01-01

    Cloud computing has pioneered the emerging world by manifesting itself as a service through the internet and facilitates third party infrastructure and applications. While customers have no visibility on how their data is stored on the service provider's premises, the cloud offers great benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make data access effective through fuzzy searching techniques. In this paper, we discuss the existing fuzzy searching techniques and focus on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for multiple-keyword requests, which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using a BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.

  14. Research on the margin of futures markets and the policy spillover effect

    Institute of Scientific and Technical Information of China (English)

    Difang Wan; Yang Yang; Dong Fang; Guang Yang

    2011-01-01

    Purpose - The purpose of this paper is to investigate whether the change of margin in Chinese futures markets has policy spillover effects. Design/methodology/approach - The paper constructs a model based on Hartzmark and on Chng, taking into consideration the status quo of Chinese futures markets (a single species and restrictions on foreign investment) and introducing the assumption of a spillover effect among speculators, from which the investors' demand function is obtained. Subsequently, the effects of the Shanghai Futures Exchange's 11 instances of margin changes are analyzed. Findings - The paper finds that in the Chinese futures market, margin changes have an impact on open interest (OI), and the speculator spillover effect is validated. Research limitations/implications - The irrational behavior of investors in markets is not taken into account in the model, and data about spillover speculators were not directly available. Originality/value - The paper usefully analyzes the effects of the Shanghai Futures Exchange's 11 instances of margin changes from 2000 to 2007 and examines the actual effects of margin-changing policy from the viewpoints of OI, trading volume and externality, the results showing that margin changes have an impact on investor structure and validate the existence of the assumed speculator spillover effect.

  15. Deep Structures of The Angola Margin

    Science.gov (United States)

    Moulin, M.; Contrucci, I.; Olivet, J.-L.; Aslanian, D.; Géli, L.; Sibuet, J.-C.

    Deep reflection and refraction seismic data were collected in April 2000 on the West African margin, offshore Angola, within the framework of the Zaiango Joint Project, conducted by Ifremer and Total Fina Elf Production. Vertical multichannel reflection seismic data generated by a « single-bubble » air gun array (Avedik et al., 1993) were recorded on a 4.5 km long digital streamer, while refraction and wide-angle reflection seismic data were acquired on OBSs (Ocean Bottom Seismometers). Despite the complexity of the margin (5 s TWT of sediment, salt tectonics), the combination of seismic reflection and refraction methods yields an image and a velocity model of the structures below the Aptian salt layer. Three large seismic units appear in the reflection seismic section in the deep part of the margin under the base of the salt. The upper seismic unit is layered, with reflectors parallel to the base of the salt; it represents unstructured sediments filling a basin. The middle unit is seismically transparent. The lower unit is characterized by highly energetic reflectors. According to the OBS refraction data, these two units correspond to the continental crust, and the base of the highly energetic unit corresponds to the Moho. The margin appears to be divided into three domains, from east to west: i) a domain with an unthinned, 30 km thick continental crust; ii) a domain located between the hinge line and the foot of the continental slope, where the crust thins sharply from 30 km to less than 7 km; this domain is underlain by an anomalous layer with velocities between 7.2 and 7

  16. Marginal cost application in the power industry

    International Nuclear Information System (INIS)

    Twardy, L.; Rusak, H.

    1994-01-01

    Two kinds of marginal costs, the short-run and the long-run, are defined. The former are applied in conditions when the load increase is accompanied neither by an increase of the transmission capacity nor of the installed capacity, while the latter assume new investments to expand the power system. The long-run marginal costs can be used to forecast optimized development of the system. They contain two main components: the marginal costs of capacity and the marginal costs of energy. When the long-run marginal costs are calculated, each component is considered for particular voltage levels, seasons of the year and hours of the day, selected depending on the system reliability factor as well as on its load level. In market economy countries the long-run marginal costs can be used for setting electric energy tariffs. (author). 7 refs, 11 figs

  17. [Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].

    Science.gov (United States)

    Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-03-01

    Model selection for the support vector machine (SVM), involving selection of the kernel and margin parameter values, is usually time-consuming and greatly impacts both the training efficiency of the SVM model and the final classification accuracy of an SVM hyperspectral remote sensing image classifier. First, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM kernel parameter and margin parameter C (CSSVM), to improve the training efficiency of the SVM model. Then an experiment classifying AVIRIS data of the Indian Pines site in the USA was performed to test the novel CSSVM, alongside a traditional SVM classifier tuned by the usual grid-search cross-validation method (GSSVM) for comparison. Evaluation indexes, including SVM model training time, classification overall accuracy (OA) and Kappa index, of both CSSVM and GSSVM were analyzed quantitatively. It is demonstrated that the OA of CSSVM on the test samples and the whole image is 85.1% and 81.58%, respectively, with the differences from GSSVM both within 0.08%; the Kappa indexes reach 0.8213 and 0.7728, with the differences from GSSVM both within 0.001; and the model training time of CSSVM is between 1/6 and 1/10 that of GSSVM. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.
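
    A schematic of the clonal selection search over the two SVM hyperparameters, with a public dataset replacing the hyperspectral imagery and the algorithm settings chosen arbitrarily:

    ```python
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = load_digits(return_X_y=True)   # stand-in for AVIRIS hyperspectral pixels

    def fitness(logC, logg):
        """Cross-validated accuracy of an SVM with the candidate (C, gamma)."""
        return cross_val_score(SVC(C=10**logC, gamma=10**logg), X, y, cv=3).mean()

    # Clonal selection over (log10 C, log10 gamma): select the best antibodies,
    # clone them, hypermutate the clones, and iterate.
    pop = rng.uniform([-1, -5], [3, -1], size=(6, 2))
    for gen in range(5):
        fits = np.array([fitness(c, g) for c, g in pop])
        elite = pop[np.argsort(fits)[-3:]]
        clones = np.repeat(elite, 3, axis=0) + rng.normal(0, 0.3, (9, 2))
        pop = np.vstack([elite, clones])
    best = max(pop, key=lambda p: fitness(p[0], p[1]))
    print("best (log10 C, log10 gamma):", best)
    ```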

  18. Classifier-guided sampling for discrete variable, discontinuous design space exploration: Convergence and computational performance

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shahan, David W. [HRL Labs., LLC, Malibu, CA (United States); Seepersad, Carolyn Conner [Univ. of Texas, Austin, TX (United States)

    2014-04-22

    A classifier-guided sampling (CGS) method is introduced for solving engineering design optimization problems with discrete and/or continuous variables and continuous and/or discontinuous responses. The method merges concepts from metamodel-guided sampling and population-based optimization algorithms. The CGS method uses a Bayesian network classifier for predicting the performance of new designs based on a set of known observations or training points. Unlike most metamodeling techniques, however, the classifier assigns a categorical class label to a new design, rather than predicting the resulting response in continuous space, and thereby accommodates nondifferentiable and discontinuous functions of discrete or categorical variables. The CGS method uses these classifiers to guide a population-based sampling process towards combinations of discrete and/or continuous variable values with a high probability of yielding preferred performance. Accordingly, the CGS method is appropriate for discrete/discontinuous design problems that are ill-suited for conventional metamodeling techniques and too computationally expensive to be solved by population-based algorithms alone. In addition, the rates of convergence and computational properties of the CGS method are investigated when applied to a set of discrete variable optimization problems. Results show that the CGS method significantly improves the rate of convergence towards known global optima, on average, when compared to genetic algorithms.
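
    A compact sketch of the CGS loop on a toy discrete problem, with Gaussian naive Bayes standing in for the Bayesian network classifier:

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)

    def objective(x):                       # toy discrete design problem (minimize)
        return np.sum((x - 3) ** 2, axis=1)

    # Initial training points, labeled "good" (1) if in the best half observed.
    X = rng.integers(0, 7, size=(40, 4)).astype(float)
    f = objective(X)
    y = (f <= np.median(f)).astype(int)

    for it in range(10):
        clf = GaussianNB().fit(X, y)                              # learn good vs bad
        cand = rng.integers(0, 7, size=(500, 4)).astype(float)    # candidate designs
        picks = cand[np.argsort(clf.predict_proba(cand)[:, 1])[-5:]]  # likely-good picks
        X = np.vstack([X, picks])
        f = objective(X)
        y = (f <= np.median(f)).astype(int)                       # relabel vs new median
    print("best design found:", X[np.argmin(f)], "objective:", f.min())
    ```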

  19. Predicting membrane protein types using various decision tree classifiers based on various modes of general PseAAC for imbalanced datasets.

    Science.gov (United States)

    Sankari, E Siva; Manimegalai, D

    2017-12-21

    Predicting membrane protein types is an important and challenging research area in bioinformatics and proteomics. Traditional biophysical methods used to classify membrane protein types are, given the large number of uncharacterized protein sequences in databases, very time consuming, expensive and susceptible to errors. Hence, it is highly desirable to develop a robust, reliable, and efficient method to predict membrane protein types. Imbalanced and large datasets are often handled well by decision tree classifiers. Since imbalanced datasets are used, the performance of various decision tree classifiers such as the Decision Tree (DT), Classification And Regression Tree (CART), C4.5, Random tree and REP (Reduced Error Pruning) tree, and of ensemble methods such as Adaboost, RUSBoost (Random Under Sampling boost), Rotation forest and Random forest, is analysed. Among the various decision tree classifiers, Random forest performs well, achieving a good accuracy of 96.35% in less time. Another finding is that the RUSBoost decision tree classifier is able to classify one or two samples in classes with very few samples, while the other classifiers such as DT, Adaboost, Rotation forest and Random forest are not sensitive to classes with fewer samples. The performance of the decision tree classifiers is also compared with SVM (Support Vector Machine) and Naive Bayes classifiers. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy.

    Science.gov (United States)

    S K, Somasundaram; P, Alli

    2017-11-09

    of DR screening system using Bagging Ensemble Classifier (BEC) is investigated. With the help of the voting process in ML-BEC, bagging minimizes the error due to the variance of the base classifier. Using the publicly available retinal image databases, our classifier is trained with 25% of the retinal images (RI). Results show that the ensemble classifier can achieve better classification accuracy (CA) than single classification models. Empirical experiments suggest that the machine learning-based ensemble classifier is efficient for further reducing DR classification time (CT).
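
    The bagging step can be illustrated with scikit-learn's stock ensemble, using an arbitrary public dataset in place of the retinal-image features:

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)   # stand-in for extracted image features

    single = DecisionTreeClassifier(random_state=0)
    bagged = BaggingClassifier(n_estimators=25, random_state=0)   # bags decision trees by default

    print("single tree :", cross_val_score(single, X, y, cv=5).mean())
    print("bagged trees:", cross_val_score(bagged, X, y, cv=5).mean())
    ```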

  1. Workers' marginal costs of commuting

    DEFF Research Database (Denmark)

    van Ommeren, Jos; Fosgerau, Mogens

    2009-01-01

    This paper applies a dynamic search model to estimate workers' marginal costs of commuting, including monetary and time costs. Using data on workers' job search activity as well as moving behaviour, for the Netherlands, we provide evidence that, on average, workers' marginal costs of one hour...

  2. The effects of frequency-encoding gradient upon detectability of the margins and height measurements of normal adult pituitary glands

    International Nuclear Information System (INIS)

    Taketomi, A.; Sato, N.; Aoki, J.; Endo, K.

    2004-01-01

    We investigated the effects of frequency-encoding gradient (FEG) upon detectability and height measurements of the normal adult pituitary gland. We obtained two sets of T1-weighted sagittal images of the pituitary gland from 70 adult subjects without known pituitary dysfunction using 1.5 tesla imagers; one with an inferior-superior FEG, and one with an anterior-posterior FEG. We classified the subjects into three types according to the distribution of fatty marrow in the clivus. Each set of images was assessed for pituitary height on midline sagittal images, and detectability of pituitary margins. Height measurements and detectability scores were evaluated for significant difference between the two FEGs. In subjects with fatty marrow in the clivus, there was significant difference between pituitary height measurements (P<0.005) and pituitary margin detectability (P<0.001). Care should be taken to image the pituitary gland using an anterior-posterior FEG. (orig.)

  3. Comparing classifiers for pronunciation error detection

    NARCIS (Netherlands)

    Strik, H.; Truong, K.; Wet, F. de; Cucchiarini, C.

    2007-01-01

    Providing feedback on pronunciation errors in computer assisted language learning systems requires that pronunciation errors be detected automatically. In the present study we compare four types of classifiers that can be used for this purpose: two acoustic-phonetic classifiers (one of which employs

  4. The intertemporal stability of the concentration-margins relationship in Dutch and U.S. manufacturing

    NARCIS (Netherlands)

    Y.M. Prince (Yvonne); A.R. Thurik (Roy)

    1994-01-01

    Factors influencing price-cost margins are investigated using a rich panel data base of the Dutch manufacturing sector. Attention is devoted to the intertemporal stability of the relationship explaining price-cost margins and to a comparison with U.S. results. Our results indicate that

  5. Setup uncertainties in linear accelerator based stereotactic radiosurgery and a derivation of the corresponding setup margin for treatment planning.

    Science.gov (United States)

    Zhang, Mutian; Zhang, Qinghui; Gan, Hua; Li, Sicong; Zhou, Su-min

    2016-02-01

    In the present study, clinical stereotactic radiosurgery (SRS) setup uncertainties from image-guidance data are analyzed, and the corresponding setup margin is estimated for treatment planning purposes. Patients undergoing single-fraction SRS at our institution were localized using an invasive head ring or non-invasive thermoplastic masks. Setup discrepancies were obtained from an in-room x-ray patient position monitoring system. Post-treatment re-planning using the measured setup errors was performed to estimate the individual target margins sufficient to compensate for the actual setup errors. A formula for the setup margin of a general SRS patient population was derived by proposing a correlation between the three-dimensional setup error and the required minimal margin. Setup errors of 104 brain lesions were analyzed, of which 81 lesions were treated using an invasive head ring and 23 were treated using non-invasive masks. In the mask cases with image guidance, the translational setup uncertainties achieved the same level as those in the head ring cases. Re-planning results showed that the margins for individual patients could be smaller than the clinical three-dimensional setup errors. The derivation of a setup margin adequate to address the patient setup errors was demonstrated using the arbitrary planning goal of treating 95% of the lesions with sufficient doses. With image guidance, the patient setup accuracy of mask cases can be comparable to that of invasive head rings. The SRS setup margin can be derived for a patient population with the proposed margin formula to compensate for institution-specific setup errors. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  6. Pathology of nodal marginal zone lymphomas.

    Science.gov (United States)

    Pileri, Stefano; Ponzoni, Maurilio

    Nodal marginal zone B-cell lymphomas (NMZLs) are a rare group of lymphoid disorders that forms part of the spectrum of marginal zone B-cell lymphomas, which encompasses splenic marginal zone B-cell lymphoma (SMZL) and extranodal marginal zone B-cell lymphoma (EMZL), often of MALT type. Two clinicopathological forms of NMZL are recognized: adult-type and pediatric-type. NMZLs show overlapping features with other types of MZL, but distinctive features as well. In this review, we focus on the salient distinguishing features of NMZL, mostly from morphological, immunophenotypical and molecular perspectives, in view of recent findings and the forthcoming updated 2016 WHO classification of lymphoid malignancies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Quantification and Assessment of Interfraction Setup Errors Based on Cone Beam CT and Determination of Safety Margins for Radiotherapy.

    Directory of Open Access Journals (Sweden)

    Macarena Cubillos Mesías

    Full Text Available To quantify interfraction patient setup errors for radiotherapy based on cone-beam computed tomography and to suggest safety margins accordingly. Positioning vectors of pre-treatment cone-beam computed tomography for different treatment sites were collected (n = 9504). For each patient group the total average and standard deviation were calculated, and the overall mean, systematic and random errors as well as safety margins were determined. The systematic (and random) errors in the superior-inferior, left-right and anterior-posterior directions were: for prostate, 2.5 (3.0), 2.6 (3.9) and 2.9 (3.9) mm; for prostate bed, 1.7 (2.0), 2.2 (3.6) and 2.6 (3.1) mm; for cervix, 2.8 (3.4), 2.3 (4.6) and 3.2 (3.9) mm; for rectum, 1.6 (3.1), 2.1 (2.9) and 2.5 (3.8) mm; for anal, 1.7 (3.7), 2.1 (5.1) and 2.5 (4.8) mm; for head and neck, 1.9 (2.3), 1.4 (2.0) and 1.7 (2.2) mm; for brain, 1.0 (1.5), 1.1 (1.4) and 1.0 (1.1) mm; and for mediastinum, 3.3 (4.6), 2.6 (3.7) and 3.5 (4.0) mm. The CTV-to-PTV margins had the smallest values for brain (3.6, 3.7 and 3.3 mm) and the largest for mediastinum (11.5, 9.1 and 11.6 mm). For pelvic treatments the means (and standard deviations) were 7.3 (1.6), 8.5 (0.8) and 9.6 (0.8) mm. Systematic and random setup errors were smaller than 5 mm. The largest errors were found for organs with higher motion probability. The suggested safety margins were comparable to values published in previous, but often smaller, studies.
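
    The reported CTV-to-PTV margins are numerically consistent with the widely used van Herk recipe, margin = 2.5Σ + 0.7σ. The short sketch below (values copied from the abstract) reproduces the quoted brain and mediastinum margins:

    # Sketch: the margins quoted above are consistent with the van Herk recipe
    # M = 2.5*Sigma + 0.7*sigma (systematic and random errors in mm).
    def ctv_to_ptv_margin(systematic_mm, random_mm):
        return 2.5 * systematic_mm + 0.7 * random_mm

    # (Sigma, sigma) per direction, taken from the abstract.
    sites = {
        "brain":       [(1.0, 1.5), (1.1, 1.4), (1.0, 1.1)],  # -> ~3.6, 3.7, 3.3 mm
        "mediastinum": [(3.3, 4.6), (2.6, 3.7), (3.5, 4.0)],  # -> ~11.5, 9.1, 11.6 mm
    }
    for site, errors in sites.items():
        margins = [round(ctv_to_ptv_margin(S, s), 1) for S, s in errors]
        print(site, margins)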

  8. Improving safety margin of LWRs by rethinking the emergency core cooling system criteria and safety system capacity

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Youho, E-mail: euo@kaist.ac.kr; Kim, Bokyung, E-mail: bkkim2@kaist.ac.kr; NO, Hee Cheon, E-mail: hcno@kaist.ac.kr

    2016-10-15

    Highlights: • Zircaloy embrittlement criteria can increase to 1370 °C for CP-ECR lower than 13%. • The draft ECCS criteria of the U.S. NRC allow less than 5% in power margin. • The Japanese fracture-based criteria allow around 5% in power margin. • Increasing SIT inventory is effective in assuring safety margin for power uprates. - Abstract: This study investigates the engineering compatibility between emergency core cooling system (ECCS) criteria and safety water injection systems, in pursuit of increased safety margins for light water reactors. Based on an extensive literature review, it proposes raising the acceptable cladding temperature to 1370 °C as long as the equivalent cladding reacted, calculated by the Cathcart–Pawel equation, is below 13%. The influence of different ECCS criteria on the safety margin during a large-break loss-of-coolant accident is investigated for OPR-1000 with the system code MARS-KS, implemented with the KINS-REM method. The fracture-based ECCS criteria proposed in this study are shown to enable power margins of up to 10%. Meanwhile, the draft U.S. NRC embrittlement criteria (burnup-sensitive) and the Japanese fracture-based criteria are shown to allow less than 5% and around 5% of power margin, respectively. Increasing the safety injection tank (SIT) water inventory is a key, yet convenient, way of assuring safety margin for power increases. More than a 20% increase in the SIT water inventory is required to allow 15% power margins under the U.S. NRC's burnup-dependent embrittlement criteria. Controlling the SIT water inventory would be a useful option that could satisfy the industrial desire to pursue power margins even under the recent atmosphere of imposing stricter ECCS criteria to account for considerable burnup effects.

  9. Risk Informed Safety Margin Characterization Case Study: Selection of Electrical Equipment To Be Subjected to Environmental Qualification

    Energy Technology Data Exchange (ETDEWEB)

    D. Blanchard; R. Youngblood

    2012-04-01

    In general, the margins-based safety case helps the decision-maker manage plant margins most effectively. It tells the plant decision-maker such things as what margin is present (at the plant level, at the functional level, at the barrier level, at the component level), and where margin is thin or perhaps just degrading. If the plant is safe, it tells the decision-maker why the plant is safe and where margin needs to be maintained, and perhaps where the plant can afford to relax.

  10. An Unobtrusive Fall Detection and Alerting System Based on Kalman Filter and Bayes Network Classifier.

    Science.gov (United States)

    He, Jian; Bai, Shuang; Wang, Xiaoyi

    2017-06-16

    Falls are one of the main health risks among the elderly. A fall detection system based on inertial sensors can automatically detect fall events and alert a caregiver for immediate assistance, so as to reduce injuries caused by falls. Nevertheless, most inertial-sensor-based fall detection technologies have focused on the accuracy of detection while neglecting the quantization noise caused by the inertial sensor. In this paper, an activity model based on tri-axial acceleration and gyroscope data is proposed, and the difference between activities of daily living (ADLs) and falls is analyzed. A Kalman filter is proposed to preprocess the raw data so as to reduce noise. A sliding window and a Bayes network classifier are introduced to develop a wearable fall detection system, which is composed of a wearable motion sensor and a smart phone. Experiments show that the proposed system distinguishes simulated falls from ADLs with a high accuracy of 95.67%, while sensitivity and specificity are 99.0% and 95.0%, respectively. Furthermore, the smart phone can issue an alarm to caregivers so as to provide timely and accurate help for the elderly as soon as the system detects a fall.
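
    A minimal sketch of the noise-reduction step, assuming a scalar random-walk Kalman filter applied to the acceleration magnitude (the paper's exact state model and noise parameters are not reproduced here; all values below are illustrative):

    import numpy as np

    def kalman_smooth(z, q=1e-3, r=0.05):
        """Scalar random-walk Kalman filter: smooths noisy samples z.
        q: process-noise variance, r: measurement-noise variance."""
        x, p = z[0], 1.0            # state estimate and its variance
        out = np.empty_like(z)
        for i, meas in enumerate(z):
            p = p + q               # predict (random-walk state model)
            k = p / (p + r)         # Kalman gain
            x = x + k * (meas - x)  # update with the new measurement
            p = (1.0 - k) * p
            out[i] = x
        return out

    # Example: noisy acceleration magnitude around 1 g with a fall-like spike.
    t = np.linspace(0, 4, 400)
    signal = 1.0 + 2.5 * np.exp(-((t - 2.0) ** 2) / 0.001)
    noisy = signal + np.random.normal(0, 0.2, t.size)
    smoothed = kalman_smooth(noisy)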

  11. Stable Sparse Classifiers Identify qEEG Signatures that Predict Learning Disabilities (NOS) Severity

    Directory of Open Access Journals (Sweden)

    Jorge Bosch-Bayard

    2018-01-01

    Full Text Available In this paper, we present a novel methodology for solving the classification problem, based on sparse (data-driven) regressions combined with techniques for ensuring stability, which is especially useful for high-dimensional datasets and small sample numbers. The sensitivity and specificity of the classifiers are assessed by a stable ROC procedure, which uses a non-parametric algorithm for estimating the area under the ROC curve. This method allows assessing the performance of the classification by the ROC technique when more than two groups are involved in the classification problem, i.e., when the gold standard is not binary. We apply this methodology to EEG spectral signatures to find biomarkers that allow discriminating between (and predicting pertinence to) different subgroups of children diagnosed with Not Otherwise Specified Learning Disabilities (LD-NOS) disorder. Children with LD-NOS have notable learning difficulties which affect education but cannot be placed in a specific category such as reading (Dyslexia), mathematics (Dyscalculia), or writing (Dysgraphia). By using the EEG spectra, we aim to identify EEG patterns that may be related to specific learning disabilities in an individual case. This could be useful for developing subject-based methods of therapy based on information provided by the EEG. Here we study 85 LD-NOS children, divided into three subgroups previously selected by a clustering technique over the scores of cognitive tests. The classification equation produced stable marginal areas under the ROC of 0.71 for discrimination between Group 1 vs. Group 2; 0.91 for Group 1 vs. Group 3; and 0.75 for Group 2 vs. Group 3. A discussion of the EEG characteristics of each group related to the cognitive scores is also presented.

  12. Non-Mutually Exclusive Deep Neural Network Classifier for Combined Modes of Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Bach Phi Duong

    2018-04-01

    Full Text Available The simultaneous occurrence of various types of defects in bearings makes their diagnosis more challenging owing to the resultant complexity of the constituent parts of the acoustic emission (AE signals. To address this issue, a new approach is proposed in this paper for the detection of multiple combined faults in bearings. The proposed methodology uses a deep neural network (DNN architecture to effectively diagnose the combined defects. The DNN structure is based on the stacked denoising autoencoder non-mutually exclusive classifier (NMEC method for combined modes. The NMEC-DNN is trained using data for a single fault and it classifies both single faults and multiple combined faults. The results of experiments conducted on AE data collected through an experimental test-bed demonstrate that the DNN achieves good classification performance with a maximum accuracy of 95%. The proposed method is compared with a multi-class classifier based on support vector machines (SVMs. The NMEC-DNN yields better diagnostic performance in comparison to the multi-class classifier based on SVM. The NMEC-DNN reduces the number of necessary data collections and improves the bearing fault diagnosis performance.
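
    The defining trait of a non-mutually exclusive classifier is one independent sigmoid output per fault label, so several defects can be flagged at once. A PyTorch sketch under that reading follows (layer sizes and data are invented, and the stacked denoising autoencoder pretraining is omitted):

    import torch
    import torch.nn as nn

    # Non-mutually exclusive classifier: one sigmoid per fault type, so the
    # network can flag several bearing defects simultaneously.
    n_features, n_faults = 64, 4
    model = nn.Sequential(
        nn.Linear(n_features, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, n_faults),             # raw logits, one per fault label
    )
    loss_fn = nn.BCEWithLogitsLoss()         # independent binary losses per label
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    x = torch.randn(32, n_features)          # stand-in AE feature vectors
    y = torch.randint(0, 2, (32, n_faults)).float()  # multi-hot fault labels
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    pred = (torch.sigmoid(model(x)) > 0.5).int()  # combined faults allowed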

  13. Non-Mutually Exclusive Deep Neural Network Classifier for Combined Modes of Bearing Fault Diagnosis.

    Science.gov (United States)

    Duong, Bach Phi; Kim, Jong-Myon

    2018-04-07

    The simultaneous occurrence of various types of defects in bearings makes their diagnosis more challenging owing to the resultant complexity of the constituent parts of the acoustic emission (AE) signals. To address this issue, a new approach is proposed in this paper for the detection of multiple combined faults in bearings. The proposed methodology uses a deep neural network (DNN) architecture to effectively diagnose the combined defects. The DNN structure is based on the stacked denoising autoencoder non-mutually exclusive classifier (NMEC) method for combined modes. The NMEC-DNN is trained using data for a single fault and it classifies both single faults and multiple combined faults. The results of experiments conducted on AE data collected through an experimental test-bed demonstrate that the DNN achieves good classification performance with a maximum accuracy of 95%. The proposed method is compared with a multi-class classifier based on support vector machines (SVMs). The NMEC-DNN yields better diagnostic performance in comparison to the multi-class classifier based on SVM. The NMEC-DNN reduces the number of necessary data collections and improves the bearing fault diagnosis performance.

  14. Non-Mutually Exclusive Deep Neural Network Classifier for Combined Modes of Bearing Fault Diagnosis

    Science.gov (United States)

    Kim, Jong-Myon

    2018-01-01

    The simultaneous occurrence of various types of defects in bearings makes their diagnosis more challenging owing to the resultant complexity of the constituent parts of the acoustic emission (AE) signals. To address this issue, a new approach is proposed in this paper for the detection of multiple combined faults in bearings. The proposed methodology uses a deep neural network (DNN) architecture to effectively diagnose the combined defects. The DNN structure is based on the stacked denoising autoencoder non-mutually exclusive classifier (NMEC) method for combined modes. The NMEC-DNN is trained using data for a single fault and it classifies both single faults and multiple combined faults. The results of experiments conducted on AE data collected through an experimental test-bed demonstrate that the DNN achieves good classification performance with a maximum accuracy of 95%. The proposed method is compared with a multi-class classifier based on support vector machines (SVMs). The NMEC-DNN yields better diagnostic performance in comparison to the multi-class classifier based on SVM. The NMEC-DNN reduces the number of necessary data collections and improves the bearing fault diagnosis performance. PMID:29642466

  15. Hierarchical mixtures of naive Bayes classifiers

    NARCIS (Netherlands)

    Wiering, M.A.

    2002-01-01

    Naive Bayes classifiers tend to perform very well on a large number of problem domains, although their representation power is quite limited compared to more sophisticated machine learning algorithms. In this paper we study combining multiple naive Bayes classifiers by using the hierarchical

  16. mPLR-Loc: an adaptive decision multi-label classifier based on penalized logistic regression for protein subcellular localization prediction.

    Science.gov (United States)

    Wan, Shibiao; Mak, Man-Wai; Kung, Sun-Yuan

    2015-03-15

    Proteins must be localized in the appropriate cellular compartments to exert their biological functions. Prediction of protein subcellular localization by computational methods is required in the post-genomic era. Recent studies have focused on predicting not only single-location proteins but also multi-location proteins. However, most existing predictors are far from effective at tackling the challenges of multi-label proteins. This article proposes an efficient multi-label predictor, mPLR-Loc, based on penalized logistic regression and adaptive decisions, for predicting both single- and multi-location proteins. Specifically, for each query protein, mPLR-Loc exploits information from the Gene Ontology (GO) database by using its accession number (AC) or the ACs of its homologs obtained via BLAST. The frequencies of GO occurrences are used to construct feature vectors, which are then classified by an adaptive-decision-based multi-label penalized logistic regression classifier. Experimental results based on two recent stringent benchmark datasets (virus and plant) show that mPLR-Loc remarkably outperforms existing state-of-the-art multi-label predictors. In addition to rapidly and accurately predicting the subcellular localization of single- and multi-label proteins, mPLR-Loc can also provide probabilistic confidence scores for its prediction decisions. For readers' convenience, the mPLR-Loc server is available online (http://bioinfo.eie.polyu.edu.hk/mPLRLocServer). Copyright © 2014 Elsevier Inc. All rights reserved.
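
    A hedged sketch of the two main ingredients, penalized logistic regression per location plus an adaptive decision threshold, using scikit-learn as a stand-in (the GO feature extraction, the paper's exact penalty and its adaptive rule are not reproduced; theta below is an illustrative parameter):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier

    # One penalized logistic regression per subcellular location (L2 penalty
    # as a stand-in); features emulate GO-term occurrence-frequency vectors.
    rng = np.random.default_rng(0)
    X = rng.poisson(1.0, size=(200, 50)).astype(float)   # GO occurrence counts
    Y = rng.integers(0, 2, size=(200, 4))                # multi-label targets

    clf = OneVsRestClassifier(LogisticRegression(C=0.5, max_iter=1000)).fit(X, Y)
    probs = clf.predict_proba(X[:5])

    # Adaptive decision: accept every label whose score is within a factor
    # theta of the top score (an illustrative reading of "adaptive decisions").
    theta = 0.7
    labels = probs >= theta * probs.max(axis=1, keepdims=True)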

  17. Buying on margin, selling short in an agent-based market model

    Science.gov (United States)

    Zhang, Ting; Li, Honggang

    2013-09-01

    Credit trading, or leverage trading, which includes buying on margin and selling short, plays an important role in financial markets, where agents tend to increase their leverage for increased profits. This paper presents an agent-based asset market model to study the effect of the permissible leverage level on traders' wealth and overall market indicators. In this model, heterogeneous agents can hold fundamental value-converging expectations or trend-persistence expectations, and their effective demand for assets depends both on demand willingness and on wealth constraints, where leverage can relieve the wealth constraints to some extent. The asset market price is determined by a market maker, who watches the market excess demand, and is influenced by noise factors. By simulation, we examine market outcomes for different leverage ratios. At the individual level, we focus on how the leverage ratio influences agents' wealth accumulation. At the market level, we focus on how the leverage ratio influences changes in the asset price, volatility, and trading volume. Qualitatively, our model produces some meaningful results supported by empirical facts. More importantly, we find a continuous phase transition as we increase the leverage threshold, which may provide a further perspective on credit trading.
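
    A toy version of the described mechanics, with invented parameters, shows how a leverage cap relaxes the wealth constraint on demand and how a market maker moves the price with excess demand:

    import numpy as np

    # Toy sketch (not the authors' model): demand is capped by wealth times a
    # permissible leverage ratio; a market maker adjusts price with excess demand.
    rng = np.random.default_rng(42)
    n_agents, steps, leverage = 100, 500, 2.0
    fundamental, price = 100.0, 100.0
    wealth = np.full(n_agents, 1000.0)
    is_fundamentalist = rng.random(n_agents) < 0.5
    prices = [price]

    for t in range(steps):
        trend = prices[-1] - prices[-2] if t > 0 else 0.0
        desire = np.where(is_fundamentalist,
                          0.05 * (fundamental - price),   # value-converging
                          0.5 * trend)                    # trend-persistence
        # Wealth constraint, relaxed by leverage (negative demand = short sale).
        cap = leverage * wealth / price
        demand = np.clip(desire, -cap, cap)
        excess = demand.sum()
        price *= np.exp(1e-4 * excess + 1e-3 * rng.normal())  # market maker + noise
        wealth += demand * (price - prices[-1])               # mark-to-market P&L
        prices.append(price)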

  18. Yellowfin Tuna (Thunnus albacares) Fishing Ground Forecasting Model Based On Bayes Classifier In The South China Sea

    Directory of Open Access Journals (Sweden)

    Zhou Wei-feng

    2017-08-01

    Full Text Available Using the yellowfin tuna (Thunnus albacares, YFT) longline fishing catch data for the open South China Sea (SCS) provided by WCPFC, the optimum interpolation sea surface temperature (OISST) from CPC/NOAA, and the multi-satellite altimetric monthly averaged sea surface height (SSH) product released by CNES, eight alternative options based on a Bayes classifier were constructed in this paper, according to different strategies for the choice of environmental factors and the levels of fishing zones, to classify the YFT fishing ground in the open SCS. The classification results were compared with the actual ones for validation and analysed to determine how the different plans impact classification results and precision. The validation showed that the precisions of the eight options were 71.4%, 75%, 70.8%, 74.4%, 66.7%, 68.5%, 57.7% and 63.7% in sequence; the first to sixth of these, being above 65%, would basically meet practical application needs. The alternatives which use SST and SSH simultaneously as environmental factors have higher precision than those which use the single SST factor, and adding SSH can improve the model precision to a certain extent. The options which use the CPUE mean ± standard deviation as the threshold have higher precision than those which use the CPUE 33.3%- and 66.7%-quantiles as the threshold.

  19. Marginal cost and congestion in the Italian electricity market: An indirect estimation approach

    International Nuclear Information System (INIS)

    Bigerna, Simona; Andrea Bollino, Carlo; Polinori, Paolo

    2015-01-01

    In this paper we construct an indirect measure of the supply marginal cost function for the main generators from observed bid data in the Italian electricity market over the period 2004–2007. We compute the residual demand function for each generator, taking explicitly into account the issue of transmission line congestion. This procedure allows recovering the correct zonal Lerner index and an implied measure of the marginal cost function. We find evidence of a stable U-shaped marginal cost function for three main Italian generators, but a flat function for ENEL, the former national monopolist. The policy relevance of our approach lies in the possibility of offering empirical knowledge of each generator's marginal cost function to the regulator, to design appropriate policy measures geared to the promotion of competitive market conditions. We propose a new market surveillance mechanism based on the principle of sanctioning excessive deviations from the estimated marginal cost function presented in this work. -- Highlights: •We construct an indirect measure of the supply marginal cost function. •We compute the residual demand function taking into account transmission line congestion. •We find general evidence of a stable U-shaped marginal cost function for Italian generators. •We find a flat marginal cost function for the former national monopolist. •We use excessive deviations from the estimated marginal cost function as a new market surveillance mechanism
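
    The indirect-estimation logic can be made concrete: at a profit-maximizing bid, the Lerner index (P - MC)/P equals the inverse elasticity of the residual demand, so marginal cost can be backed out as MC = P(1 - 1/|ε|). A numeric sketch with made-up values:

    # Indirect marginal-cost recovery: with profit-maximizing bidding,
    # (P - MC) / P = 1 / |elasticity of residual demand|, hence
    # MC = P * (1 - 1 / |eps|). Numbers below are illustrative only.
    def implied_marginal_cost(price, residual_demand_elasticity):
        lerner = 1.0 / abs(residual_demand_elasticity)
        return price * (1.0 - lerner)

    price = 60.0   # EUR/MWh, observed zonal clearing price
    eps = -4.0     # estimated residual-demand elasticity at the optimum
    print(implied_marginal_cost(price, eps))  # -> 45.0 EUR/MWh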

  20. A Supervised Multiclass Classifier for an Autocoding System

    Directory of Open Access Journals (Sweden)

    Yukako Toko

    2017-11-01

    Full Text Available Classification is often required in various contexts, including in the field of official statistics. In a previous study, we developed a multiclass classifier that can classify short text descriptions with high accuracy. The algorithm borrows the concept of the naïve Bayes classifier and is so simple that its structure is easily understandable. The proposed classifier has two advantages. First, the processing times for both learning and classifying are extremely practical. Second, the proposed classifier yields high-accuracy results for a large portion of a dataset. We previously developed an autocoding system with a better-performing classifier for the Family Income and Expenditure Survey in Japan. While the original system was developed in Perl to improve the efficiency of the coding process for short Japanese texts, the proposed system is implemented in the R programming language to explore versatility, and it has been modified to be easily applicable to English text descriptions, in consideration of the increasing number of R users in the field of official statistics. We are planning to publish the proposed classifier as an R package. The proposed classifier is generally applicable to other classification tasks, including coding activities in the field of official statistics, and it would contribute greatly to improving their efficiency.

  1. Classifier for gravitational-wave inspiral signals in nonideal single-detector data

    Science.gov (United States)

    Kapadia, S. J.; Dent, T.; Dal Canton, T.

    2017-11-01

    We describe a multivariate classifier for candidate events in a templated search for gravitational-wave (GW) inspiral signals from neutron-star-black-hole (NS-BH) binaries, in data from ground-based detectors where sensitivity is limited by non-Gaussian noise transients. The standard signal-to-noise ratio (SNR) and chi-squared test for inspiral searches use only properties of a single matched filter at the time of an event; instead, we propose a classifier using features derived from a bank of inspiral templates around the time of each event, and also from a search using approximate sine-Gaussian templates. The classifier thus extracts additional information from strain data to discriminate inspiral signals from noise transients. We evaluate a random forest classifier on a set of single-detector events obtained from realistic simulated advanced LIGO data, using simulated NS-BH signals added to the data. The new classifier detects a factor of 1.5-2 more signals at low false positive rates as compared to the standard "reweighted SNR" statistic, and does not require the chi-squared test to be computed. Conversely, if only the SNR and chi-squared values of single-detector events are available, random forest classification performs nearly identically to the reweighted SNR.

  2. A decision support system using combined-classifier for high-speed data stream in smart grid

    Science.gov (United States)

    Yang, Hang; Li, Peng; He, Zhian; Guo, Xiaobin; Fong, Simon; Chen, Huajun

    2016-11-01

    Large volumes of high-speed streaming data are generated continuously by big power grids. In order to detect and avoid power grid failures, decision support systems (DSSs) are commonly adopted in power grid enterprises. Among decision-making algorithms, the incremental decision tree is the most widely used. In this paper, we propose a combined classifier that is a composite of a cache-based classifier (CBC) and a main tree classifier (MTC). We integrate this classifier into a stream processing engine on top of the DSS such that high-speed streaming data can be transformed into operational intelligence efficiently. Experimental results show that our proposed classifier returns more accurate answers than existing ones.

  3. Deoxycholic Acid and the Marginal Mandibular Nerve: A Cadaver Study.

    Science.gov (United States)

    Blandford, Alexander D; Ansari, Waseem; Young, Jason M; Maley, Bruce; Plesec, Thomas P; Hwang, Catherine J; Perry, Julian D

    2018-06-04

    One of the rare but serious complications observed with deoxycholic acid administration is damage to the marginal mandibular nerve. In this study, we evaluated if deoxycholic acid directly induces histologic damage to fresh cadaveric marginal mandibular nerve. A segment of marginal mandibular nerve was harvested from 12 hemifaces of 6 fresh cadavers. The nerve specimen was exposed to either 0.9% sterile saline for 24 h, deoxycholic acid (10 mg/ml) for 20 min, or deoxycholic acid (10 mg/ml) for 24 h. The nerve specimens were then fixed in glutaraldehyde for a minimum of 24 h. Toluidine blue stained sections were evaluated for stain intensity using light microscopy and color deconvolution image analysis. Supraplatysmal fat was harvested as a positive control and exposed to the same treatments as the marginal mandibular nerve specimens, then evaluated using transmission electron microscopy. Toluidine blue staining was less in the marginal mandibular nerve exposed to deoxycholic acid when compared to saline. The specimen exposed to deoxycholic acid for 24 h showed less toluidine blue staining than that of the nerve exposed to deoxycholic acid for 20 min. Transmission electron microscopy of submental fat exposed to deoxycholic acid revealed disruption of adipocyte cell membrane integrity and loss of cellular organelles when compared to specimens only exposed to saline. Deoxycholic acid (10 mg/ml) damages the marginal mandibular nerve myelin sheath in fresh human cadaver specimens. Direct deoxycholic acid neurotoxicity may cause marginal mandibular nerve injury clinically. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  4. Biodiversity and agro-ecology in field margins.

    Science.gov (United States)

    De Cauwer, B; Reheul, D; Nijs, I; Milbau, A

    2005-01-01

    This multidisciplinary study investigates the agro-ecological functions (nature conservation, agriculture, environment) and implications of newly created, mown, sown and unsown field margin strips installed on ex-arable land to increase biodiversity. From a conservation perspective, the development of species-rich field margin strips was not strongly affected by the installed type of margin strip, since species diversity converged over time whether strips were sown or not. Convergence between unsown and sown margin strips also occurred in terms of species composition: unsown and sown strips became similar over time. Mowing without removal of cuttings significantly reduced species richness, yielded more grassy margin strips and delayed similarity in species composition between sown and unsown margin strips. Species richness in the longer term was not significantly affected by light regime or by disturbance, despite significant temporary effects shortly after the disturbance event. On the contrary, vegetation composition in terms of the importance of functional groups changed after disturbance: the share of spontaneous species within functional groups increased, resulting in higher similarity between the sown and unsown vegetation. Furthermore, the risk of invasion was highest in the disturbed unsown community on the unshaded side of a tree lane. A positive effect of botanical diversity on insect number and diversity was found. However, the effect of botanical diversity on insect number was mediated by light regime. At high light availability, differences between plant communities were more pronounced compared to low light availability. The abundance of some insect families was dependent on the vegetation composition. Furthermore, light availability significantly influenced insect diversity as well as the spatial distribution of families. From an agricultural perspective, installing margin strips by sowing a species mixture and a mowing regime with removal of cuttings are good practices to

  5. Detection of microaneurysms in retinal images using an ensemble classifier

    Directory of Open Access Journals (Sweden)

    M.M. Habib

    2017-01-01

    Full Text Available This paper introduces, and reports on the performance of, a novel combination of algorithms for automated microaneurysm (MA) detection in retinal images. The presence of MAs in retinal images is a pathognomonic sign of Diabetic Retinopathy (DR), one of the leading causes of blindness among the working-age population. An extensive survey of the literature is presented and current techniques in the field are summarised. The proposed technique first detects an initial set of candidates using a Gaussian Matched Filter and then classifies this set to reduce the number of false positives. A Tree Ensemble classifier is used with a set of 70 features (the most common features in the literature). A new set of 32 MA ground-truth images (with a total of 256 labelled MAs) based on images from the MESSIDOR dataset is introduced as a public dataset for benchmarking MA detection algorithms. We evaluate our algorithm on this dataset as well as on another public dataset (DIARETDB1 v2.1) and compare it against the best available alternative. Results show that the proposed classifier is superior in terms of eliminating false positive MA detections from the initial set of candidates. The proposed method achieves an ROC score of 0.415 compared to 0.2636 achieved by the best available technique. Furthermore, results show that the classifier model maintains consistent performance across datasets, illustrating the generalisability of the classifier and that overfitting does not occur.
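
    A minimal sketch of the candidate-detection stage, assuming MAs appear as small dark Gaussian-like blobs on the green channel (the kernel size and threshold here are invented, and the 70-feature classification stage is omitted):

    import numpy as np
    from scipy.ndimage import convolve

    def gaussian_kernel(size=11, sigma=1.5):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return k - k.mean()   # zero-mean so flat regions respond with ~0

    def ma_candidates(green_channel, threshold=0.5):
        # MAs are dark blobs on the green channel, so match the inverted image.
        response = convolve(1.0 - green_channel, gaussian_kernel(), mode="nearest")
        return response > threshold * response.max()

    img = np.ones((64, 64))
    img[30:33, 30:33] = 0.4     # a small dark blob (synthetic stand-in for an MA)
    mask = ma_candidates(img)
    print(mask.sum(), "candidate pixels")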

  6. A method of accurate determination of voltage stability margin

    Energy Technology Data Exchange (ETDEWEB)

    Wiszniewski, A.; Rebizant, W. [Wroclaw Univ. of Technology, Wroclaw (Poland); Klimek, A. [AREVA Transmission and Distribution, Stafford (United Kingdom)

    2008-07-01

    In a developing power system disturbance, voltage instability at the receiving substations often contributes to deteriorating system stability, which may eventually lead to severe blackouts. The voltage stability margin at receiving substations may be used to determine measures to prevent voltage collapse, primarily by operating or blocking the transformer tap-changing device, or by load shedding. The best measure of the stability margin is the actual load-to-source impedance ratio and its critical value, which is unity. This paper presented an accurate method of calculating the load-to-source impedance ratio, derived from the Thevenin equivalent circuit of the system, which leads to calculation of the stability margin. The paper described the calculation of the load-to-source impedance ratio, including the supporting equations. The calculation was based on the very definition of voltage stability, which says that system stability is maintained as long as the change in power that follows an increase in admittance is positive. The testing of the stability margin assessment method was performed by simulation for a number of power network structures and simulation scenarios. Results of the simulations revealed that this method is accurate and stable for all possible events occurring downstream of the device location. 3 refs., 8 figs.
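
    A sketch of the impedance-ratio idea (not the paper's exact equations): the Thevenin source impedance is estimated from two consecutive phasor snapshots at the bus, and the load-to-source impedance ratio is tracked against its critical value of one:

    # Sketch of the load-to-source impedance-ratio stability indicator.
    # Two consecutive phasor snapshots (V, I) at the bus let us estimate the
    # Thevenin source impedance: V = E - Zs*I  =>  Zs = -(V2 - V1)/(I2 - I1).
    def impedance_ratio(v1, i1, v2, i2):
        zs = -(v2 - v1) / (i2 - i1)   # estimated source impedance
        zl = v2 / i2                  # apparent load impedance
        return abs(zl) / abs(zs)      # collapse is approached as this -> 1

    # Illustrative complex phasors (per unit) as the load slowly increases.
    print(impedance_ratio(1.00 + 0.00j, 0.50 - 0.10j,
                          0.98 + 0.01j, 0.55 - 0.12j))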

  7. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS were developed and tested on a set of benchmark problems. As a domain-specific case study, CGS was used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
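
    The core CGS loop can be sketched as follows, with scikit-learn's Gaussian naive Bayes standing in for the report's Bayesian network classifier and a toy objective standing in for the expensive simulation:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    def objective(x):                       # expensive black box (toy here)
        return np.sum((x - 0.7) ** 2, axis=1)

    X = rng.random((30, 5))                 # initial random designs
    f = objective(X)
    for generation in range(20):
        labels = f <= np.median(f)          # "promising" = better than median
        clf = GaussianNB().fit(X, labels)
        candidates = rng.random((200, 5))   # cheap candidate pool
        keep = clf.predict_proba(candidates)[:, 1] > 0.5  # filter pre-evaluation
        chosen = candidates[keep][:10] if keep.any() else candidates[:10]
        X = np.vstack([X, chosen])
        f = np.concatenate([f, objective(chosen)])
    print("best objective found:", f.min())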

  8. Systematic Review of Studies Reporting Positive Surgical Margins After Bladder Neck Sparing Radical Prostatectomy.

    Science.gov (United States)

    Bellangino, Mariangela; Verrill, Clare; Leslie, Tom; Bell, Richard W; Hamdy, Freddie C; Lamb, Alastair D

    2017-11-07

    Bladder neck preservation (BNP) during radical prostatectomy (RP) has been proposed as a method to improve early recovery of urinary continence after radical prostatectomy. However, there is concern over a possible increase in the risk of positive surgical margins and the prostate cancer recurrence rate. A recent systematic review and meta-analysis reported improved early recovery and overall long-term urinary continence without compromising oncologic control. The aim of our study was to perform a critical review of the literature to assess the impact on bladder neck and base margins after bladder neck sparing radical prostatectomy. We carried out a systematic review of the literature using the Pubmed, Scopus and Cochrane library databases in May 2017, using medical subject headings and a free-text protocol according to PRISMA guidelines. We used the following search terms: bladder neck preservation, prostate cancer, radical prostatectomy and surgical margins. Studies focusing on positive surgical margins (PSM) in bladder neck sparing RP pertinent to the objective of this review were included. Overall, we found 15 relevant studies reporting overall and site-specific positive surgical margin rates after bladder neck sparing radical prostatectomy. These included two RCTs, seven prospective comparative studies, two retrospective comparative studies and four case series. All studies were published between 1993 and 2015 with sample sizes ranging between 50 and 1067. Surgical approaches included open, laparoscopic and robot-assisted radical prostatectomy. The overall and base-specific PSM rates ranged between 7-36% and 0-16.3%, respectively. The mean base PSM rate was 4.9% in patients in whom bladder neck sparing was performed, but only 1.85% in those without sparing. Bladder neck preservation during radical prostatectomy may increase base-positive margins. Further studies are needed to better investigate the impact of this technique on oncological outcomes. A future paradigm could

  9. Exact marginality in open string field theory. A general framework

    International Nuclear Information System (INIS)

    Kiermaier, M.

    2007-07-01

    We construct analytic solutions of open bosonic string field theory for any exactly marginal deformation in any boundary conformal field theory when properly renormalized operator products of the marginal operator are given. We explicitly provide such renormalized operator products for a class of marginal deformations which include the deformations of flat D-branes in flat backgrounds by constant massless modes of the gauge field and of the scalar fields on the D-branes, the cosine potential for a space-like coordinate, and the hyperbolic cosine potential for the time-like coordinate. In our construction we use integrated vertex operators, which are closely related to finite deformations in boundary conformal field theory, while previous analytic solutions were based on unintegrated vertex operators. We also introduce a modified star product to formulate string field theory around the deformed background. (orig.)

  10. Vandalism Detection in Wikipedia: a Bag-of-Words Classifier Approach

    OpenAIRE

    Belani, Amit

    2010-01-01

    A bag-of-words based probabilistic classifier is trained using regularized logistic regression to detect vandalism in the English Wikipedia. Isotonic regression is used to calibrate the class membership probabilities. Learning curve, reliability, ROC, and cost analysis are performed.
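
    A compact sketch of the described recipe, bag-of-words features, regularized logistic regression and isotonic calibration of the class probabilities, using scikit-learn on toy edit texts (the real classifier is trained on Wikipedia edit data):

    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Bag-of-words + regularized logistic regression, with isotonic regression
    # to calibrate the vandalism probabilities, mirroring the abstract's recipe.
    edits = ["added sourced paragraph on history",
             "YOU ALL SUCK lol lol lol",
             "fixed typo in infobox",
             "asdfasdf page blanked hahaha"]
    labels = [0, 1, 0, 1]                  # 1 = vandalism

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        CalibratedClassifierCV(LogisticRegression(C=1.0), method="isotonic", cv=2),
    )
    model.fit(edits, labels)
    print(model.predict_proba(["lol lol blanked the page"])[:, 1])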

  11. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud

    Directory of Open Access Journals (Sweden)

    Shyamala Devi Munisamy

    2015-01-01

    Full Text Available Cloud computing has pioneered the emerging world by manifesting itself as a service through the internet, facilitating third-party infrastructure and applications. While customers have no visibility into how their data are stored on the service provider's premises, it offers great benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the internet, the focus has now shifted towards effective data utilization on the cloud without compromising security. In the pursuit of increasing data utilization on public cloud storage, the key is to enable effective data access through fuzzy searching techniques. In this paper, we discuss existing fuzzy searching techniques and focus on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for multiple-keyword requests, which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using a B-tree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.

  12. An Ensemble of Classifiers based Approach for Prediction of Alzheimer's Disease using fMRI Images based on Fusion of Volumetric, Textural and Hemodynamic Features

    Directory of Open Access Journals (Sweden)

    MALIK, F.

    2018-02-01

    Full Text Available Alzheimer's is a neurodegenerative disease caused by the destruction and death of brain neurons, resulting in memory loss, impaired thinking ability, and certain behavioral changes. Alzheimer's disease is a major cause of dementia and, eventually, death around the world. Early diagnosis of the disease is crucial; it can help victims maintain their level of independence for a comparatively longer time and live the best life possible. For early detection of Alzheimer's disease, we propose a novel approach based on the fusion of multiple types of features, including hemodynamic, volumetric and textural features of the brain. Our approach uses non-invasive fMRI with an ensemble of classifiers for the classification of normal controls and Alzheimer's patients. For performance evaluation, ten-fold cross-validation is used. Individual feature sets and fusions of features have been investigated with ensemble classifiers for successful classification of Alzheimer's patients from normal controls. It is observed that the fusion of features resulted in improved accuracy, specificity and sensitivity.

  13. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    Generalized classifier neural network is introduced as an efficient classifier among the others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different datasets, and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic approach and its derivative take continuous values. This makes it possible to exploit the fast convergence of the logarithmic cost with the proposed learning method. Due to this fast convergence, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution to the time requirement problem of the generalized classifier neural network, it may also improve classification accuracy. The proposed method can be considered an efficient way of reducing the time requirement of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
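
    The abstract does not give the exact cost function, but the general mechanism can be illustrated generically: near the target, a logarithmic (cross-entropy-style) cost keeps a much larger corrective gradient than squared error, which is what speeds up convergence (this is an assumed illustration, not the paper's formulation):

    import numpy as np

    # Gradient of a squared-error cost vanishes linearly as y -> t, while a
    # logarithmic cross-entropy-style cost keeps a larger corrective gradient.
    y = np.linspace(0.01, 0.99, 5)   # network output
    t = 1.0                          # target

    squared_grad = 2 * (y - t)                   # d/dy (y - t)^2
    log_grad = -(t / y) + (1 - t) / (1 - y)      # d/dy of -[t ln y + (1-t) ln(1-y)]
    for yi, g1, g2 in zip(y, squared_grad, log_grad):
        print(f"y={yi:.2f}  squared grad={g1:+.3f}  log grad={g2:+.3f}")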

  14. A methodology to determine margins by EPID measurements of patient setup variation and motion as applied to immobilization devices

    International Nuclear Information System (INIS)

    Prisciandaro, Joann I.; Frechette, Christina M.; Herman, Michael G.; Brown, Paul D.; Garces, Yolanda I.; Foote, Robert L.

    2004-01-01

    Assessment of clinic- and site-specific margins is essential for the effective use of three-dimensional and intensity modulated radiation therapy. An electronic portal imaging device (EPID) based methodology is introduced which allows individual and population-based CTV-to-PTV margins to be determined and compared with the traditional margins prescribed during treatment. This method was applied to a patient cohort receiving external beam head and neck radiotherapy under an IRB-approved protocol. Although the full study involved the use of an EPID-based method to assess the impact of (1) simulation technique, (2) immobilization, and (3) surgical intervention on inter- and intrafraction variations of individual and population-based CTV-to-PTV margins, the focus of this paper is on the technique. As an illustration, the methodology is used to examine the influence of two immobilization devices, the UON™ thermoplastic mask and the Type-S™ head/neck shoulder immobilization system, on margins. Daily through-port images were acquired for selected fields for each patient with an EPID. To analyze these images, simulation films or digitally reconstructed radiographs (DRRs) were imported into the EPID software. Up to five anatomical landmarks were identified and outlined by the clinician, and up to three of these structures were matched for each reference image. Once the individual-based errors were quantified, the patient results were grouped into populations by matched anatomical structures and immobilization device. The variation within each subgroup was quantified by calculating the systematic and random errors (Σ_sub and σ_sub). Individual patient margins were approximated as 1.65 times the individual-based random error and ranged from 1.1 to 6.3 mm (A-P) and 1.1 to 12.3 mm (S-I) for fields matched on skull and cervical structures, and 1.7 to 10.2 mm (L-R) and 2.0 to 13.8 mm (S-I) for supraclavicular fields. Population-based margins ranging from 5.1 to 6.6 mm (A

  15. Unified framework for triaxial accelerometer-based fall event detection and classification using cumulants and hierarchical decision tree classifier.

    Science.gov (United States)

    Kambhampati, Satya Samyukta; Singh, Vishal; Manikandan, M Sabarimalai; Ramkumar, Barathram

    2015-08-01

    In this Letter, the authors present a unified framework for fall event detection and classification using cumulants extracted from the acceleration (ACC) signals acquired with a single waist-mounted triaxial accelerometer. The main objective of this Letter is to find suitable representative cumulants and classifiers for effectively detecting and classifying different types of fall and non-fall events. The first level of the proposed hierarchical decision tree algorithm implements fall detection using fifth-order cumulants and a support vector machine (SVM) classifier. In the second level, the fall event classification algorithm uses the fifth-order cumulants and SVM. Finally, human activity classification is performed using the second-order cumulants and SVM. The detection and classification results are compared with those of decision tree, naive Bayes, multilayer perceptron and SVM classifiers with different types of time-domain features, including the second-, third-, fourth- and fifth-order cumulants and the signal magnitude vector and signal magnitude area. The experimental results demonstrate that the second- and fifth-order cumulant features with the SVM classifier achieve optimal detection and classification rates above 95%, as well as the lowest false alarm rate of 1.03%.
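
    The cumulant features themselves are straightforward to compute from central moments; a sketch for orders two to five on a simulated acceleration window follows (the formulas are the standard moment-to-cumulant relations, not code from the Letter):

    import numpy as np

    def cumulants_2_to_5(x):
        """Sample cumulants of orders 2-5 from central moments.
        For centered data: k2=m2, k3=m3, k4=m4-3*m2^2, k5=m5-10*m3*m2."""
        xc = x - x.mean()
        m = {n: np.mean(xc ** n) for n in range(2, 6)}
        return {
            2: m[2],
            3: m[3],
            4: m[4] - 3 * m[2] ** 2,
            5: m[5] - 10 * m[3] * m[2],
        }

    # Example: cumulants of a simulated acceleration window (features that
    # would feed the SVM stages described above).
    window = np.random.default_rng(0).normal(0.0, 1.0, 256)
    print(cumulants_2_to_5(window))   # Gaussian data: k3, k4, k5 near zero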

  16. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    Science.gov (United States)

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Infrared dim and small target tracking is a challenging task. The main difficulty in target tracking is accounting for the appearance change of an object that is submerged in a cluttered background. An efficient appearance model that exploits both a global template and a local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse-representation-based algorithm is adopted to calculate the confidence value, assigning more weight to target templates than to negative background templates. In the CNGM model, simple cell feature maps are obtained by computing the convolution between target templates and fixed filters, which are extracted from the target region in the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, thereby encoding its local structural information. All the maps then form a representation preserving the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model, and the same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. Finally, the collaborative confidence values of particles are used to generate the particles' importance weights. Experiments on various infrared sequences have validated the tracking capability of the presented algorithm. Experimental results show that this algorithm runs in real time and provides higher accuracy than state-of-the-art algorithms.

  17. Risk insights from seismic margin reviews

    International Nuclear Information System (INIS)

    Budnitz, R.J.

    1990-01-01

    This paper discusses the information that has been derived from the three seismic-margin reviews conducted so far, and the information that is potentially available from using the seismic-margin method more generally. There are two different methodologies for conducting seismic margin reviews of nuclear power plants, one developed under NRC sponsorship and one developed under sponsorship of the Electric Power Research Institute. Both methodologies will be covered in this paper. The paper begins with a summary of the steps necessary to complete a margin review, and will then outline the key technical difficulties that need to be addressed. After this introduction, the paper covers the safety and operational insights derived from the three seismic-margin reviews already completed: the NRC-sponsored review at Maine Yankee; the EPRI-sponsored review at Catawba; and the joint EPRI/NRC/utility effort at Hatch. The emphasis is on engineering insights, with attention to the aspects of the reviews that are easiest to perform and that provide the most readily available insights

  18. DECISION TREE CLASSIFIERS FOR STAR/GALAXY SEPARATION

    International Nuclear Information System (INIS)

    Vasconcellos, E. C.; Ruiz, R. S. R.; De Carvalho, R. R.; Capelato, H. V.; Gal, R. R.; LaBarbera, F. L.; Frago Campos Velho, H.; Trevisan, M.

    2011-01-01

    We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 ≤ r ≤ 21 (85.2%) and r ≥ 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 ≤ r ≤ 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (>80%) while simultaneously achieving low contamination (∼2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 ≤ r ≤ 21.
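
    The parametric cut referred to above can be written down directly; SDSS labels an object a galaxy when psfMag - modelMag exceeds 0.145, and adjusting that cut is the kind of modification the authors suggest (the magnitudes below are toy values):

    import numpy as np

    # SDSS-style parametric star/galaxy separator: objects are called galaxies
    # when psfMag - modelMag exceeds a cut (0.145 in SDSS).
    def classify(psf_mag, model_mag, cut=0.145):
        return np.where(psf_mag - model_mag > cut, "galaxy", "star")

    psf = np.array([18.20, 19.40, 21.05])
    model = np.array([18.18, 18.90, 20.99])
    print(classify(psf, model))          # ['star' 'galaxy' 'star']
    # Raising `cut` at faint magnitudes is one way to implement the suggested
    # adjustment that reduces stars in close pairs misclassified as galaxies.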

  19. Exactly marginal deformations from exceptional generalised geometry

    Energy Technology Data Exchange (ETDEWEB)

    Ashmore, Anthony [Merton College, University of Oxford,Merton Street, Oxford, OX1 4JD (United Kingdom); Mathematical Institute, University of Oxford,Andrew Wiles Building, Woodstock Road, Oxford, OX2 6GG (United Kingdom); Gabella, Maxime [Institute for Advanced Study,Einstein Drive, Princeton, NJ 08540 (United States); Graña, Mariana [Institut de Physique Théorique, CEA/Saclay,91191 Gif-sur-Yvette (France); Petrini, Michela [Sorbonne Université, UPMC Paris 05, UMR 7589, LPTHE,75005 Paris (France); Waldram, Daniel [Department of Physics, Imperial College London,Prince Consort Road, London, SW7 2AZ (United Kingdom)

    2017-01-27

    We apply exceptional generalised geometry to the study of exactly marginal deformations of N=1 SCFTs that are dual to generic AdS{sub 5} flux backgrounds in type IIB or eleven-dimensional supergravity. In the gauge theory, marginal deformations are parametrised by the space of chiral primary operators of conformal dimension three, while exactly marginal deformations correspond to quotienting this space by the complexified global symmetry group. We show how the supergravity analysis gives a geometric interpretation of the gauge theory results. The marginal deformations arise from deformations of generalised structures that solve moment maps for the generalised diffeomorphism group and have the correct charge under the generalised Reeb vector, generating the R-symmetry. If this is the only symmetry of the background, all marginal deformations are exactly marginal. If the background possesses extra isometries, there are obstructions that come from fixed points of the moment maps. The exactly marginal deformations are then given by a further quotient by these extra isometries. Our analysis holds for any N=2 AdS{sub 5} flux background. Focussing on the particular case of type IIB Sasaki-Einstein backgrounds we recover the result that marginal deformations correspond to perturbing the solution by three-form flux at first order. In various explicit examples, we show that our expression for the three-form flux matches those in the literature and the obstruction conditions match the one-loop beta functions of the dual SCFT.

  20. RISK-INFORMED SAFETY MARGIN CHARACTERIZATION

    International Nuclear Information System (INIS)

    Dinh, Nam; Szilard, Ronaldo

    2009-01-01

    The concept of safety margins has served as a fundamental principle in the design and operation of commercial nuclear power plants (NPPs). Defined as the minimum distance between a system's 'loading' and its 'capacity', plant design and operation is predicated on ensuring an adequate safety margin for safety-significant parameters (e.g., fuel cladding temperature, containment pressure, etc.) is provided over the spectrum of anticipated plant operating, transient and accident conditions. To meet the anticipated challenges associated with extending the operational lifetimes of the current fleet of operating NPPs, the United States Department of Energy (USDOE), the Idaho National Laboratory (INL) and the Electric Power Research Institute (EPRI) have developed a collaboration to conduct coordinated research to identify and address the technological challenges and opportunities that likely would affect the safe and economic operation of the existing NPP fleet over the postulated long-term time horizons. In this paper we describe a framework for developing and implementing a Risk-Informed Safety Margin Characterization (RISMC) approach to evaluate and manage changes in plant safety margins over long time horizons

  1. A novel implementation of kNN classifier based on multi-tupled meteorological input data for wind power prediction

    International Nuclear Information System (INIS)

    Yesilbudak, Mehmet; Sagiroglu, Seref; Colak, Ilhami

    2017-01-01

    Highlights: • An accurate wind power prediction model is proposed for very short-term horizons. • The k-nearest neighbor classifier is implemented based on multi-tupled inputs. • The variation of wind power prediction errors is evaluated in various aspects. • Our approach shows superior prediction performance over the persistence method. - Abstract: With the growing share of wind power production in electric power grids, many critical challenges have emerged for grid operators in terms of power balance, power quality, voltage support, frequency stability, load scheduling, unit commitment and spinning reserve calculations. To overcome such problems, numerous studies have been conducted to predict wind power production, but only a small number of them have attempted to improve prediction accuracy by employing multidimensional meteorological input data. The novelties of this study lie in the proposal of an efficient and easy-to-implement very short-term wind power prediction model based on the k-nearest neighbor classifier (kNN); in the usage of wind speed, wind direction, barometric pressure and air temperature parameters as the multi-tupled meteorological inputs; and in the comparison of wind power prediction results against the persistence reference model. Based on the patterns obtained, we characterize the variation of wind power prediction errors according to input tuples, distance measures and neighbor numbers, and identify the most influential and the least effective meteorological parameters for optimizing wind power prediction results.
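
    The record describes the predictor only at a high level. The sketch below is a minimal reading of it using scikit-learn: a k-nearest-neighbor regressor over the same four-tuple of meteorological inputs. All data values, the choice of k, and the distance metric are illustrative assumptions, not the authors' settings.

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.preprocessing import StandardScaler

        # Hypothetical training tuples: (wind speed [m/s], wind direction [deg],
        # barometric pressure [hPa], air temperature [C]) -> measured power [kW].
        X_train = np.array([[6.1, 210.0, 1012.0, 14.2],
                            [8.3, 195.0, 1008.5, 13.1],
                            [4.7, 250.0, 1015.2, 16.0],
                            [9.9, 188.0, 1005.1, 12.4]])
        y_train = np.array([310.0, 540.0, 180.0, 690.0])

        # Scale features so that pressure values do not dominate the distance measure.
        scaler = StandardScaler().fit(X_train)

        # The study varies neighbor numbers and distance measures; k=2 with the
        # Euclidean metric is an arbitrary choice here.
        knn = KNeighborsRegressor(n_neighbors=2, metric="euclidean")
        knn.fit(scaler.transform(X_train), y_train)

        # Very short-term prediction from the latest meteorological tuple.
        x_now = np.array([[7.2, 205.0, 1010.3, 13.8]])
        print(knn.predict(scaler.transform(x_now)))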

  2. A New Adaptive Structural Signature for Symbol Recognition by Using a Galois Lattice as a Classifier.

    Science.gov (United States)

    Coustaty, M; Bertet, K; Visani, M; Ogier, J

    2011-08-01

    In this paper, we propose a new approach for symbol recognition using structural signatures and a Galois lattice as a classifier. The structural signatures are based on topological graphs computed from segments that are extracted from the symbol images by using an adapted Hough transform. These structural signatures, which can be seen as dynamic paths carrying high-level information, are robust to various transformations. They are classified by using a Galois lattice as a classifier. The performance of the proposed approach is evaluated on the GREC'03 symbol database, and the experimental results we obtain are encouraging.
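
    The authors' adapted Hough transform is not spelled out in the record. As a rough illustration of the segment-extraction step only, the sketch below uses OpenCV's standard probabilistic Hough transform and reduces the segments to a crude orientation histogram; the image path and every parameter value are hypothetical, and the Galois-lattice classification itself is not shown.

        import cv2
        import numpy as np

        # Hypothetical binary symbol image (white strokes on black background).
        img = cv2.imread("symbol.png", cv2.IMREAD_GRAYSCALE)
        edges = cv2.Canny(img, 50, 150)

        # Standard probabilistic Hough transform as a stand-in for the adapted
        # variant: returns line segments as (x1, y1, x2, y2).
        segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                   threshold=30, minLineLength=10, maxLineGap=3)
        if segments is None:
            segments = np.empty((0, 1, 4), dtype=int)

        # A crude structural signature: a histogram of segment orientations that
        # could feed a lattice-based (or any other) classifier.
        angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180
                  for x1, y1, x2, y2 in segments[:, 0]]
        hist, _ = np.histogram(angles, bins=8, range=(0, 180))
        print(hist)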

  3. 32 CFR 2400.28 - Dissemination of classified information.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense, Vol. 6 (2010-07-01). Dissemination of classified information, 2400.28 ... SECURITY PROGRAM, Safeguarding, § 2400.28 Dissemination of classified information. Heads of OSTP offices ... originating official may prescribe specific restrictions on dissemination of classified information when ...

  4. How well Can We Classify SWOT-derived Water Surface Profiles?

    Science.gov (United States)

    Frasson, R. P. M.; Wei, R.; Picamilh, C.; Durand, M. T.

    2015-12-01

    The upcoming Surface Water Ocean Topography (SWOT) mission will detect water bodies and measure water surface elevation throughout the globe. Within its continental high-resolution mask, SWOT is expected to deliver measurements of river width, water elevation and slope for rivers wider than ~50 m. The definition of river reaches is an integral step in the computation of discharge from SWOT's observables. As poorly defined reaches can negatively affect the accuracy of discharge estimates, we seek strategies to break rivers into physically meaningful sections. In the present work, we investigate how accurately we can classify water surface profiles based on simulated SWOT observations. We assume that most river sections can be classified as either M1 (mild slope, with depth larger than the normal depth) or A1 (adverse slope, with depth larger than the critical depth). This assumption allows the classification to be based solely on the second derivative of the water surface profile, with convex profiles classified as A1 and concave profiles as M1. We consider a HEC-RAS model of the Sacramento River as a representation of the true state of the river. We employ the SWOT instrument simulator to generate a synthetic pass of the river, which includes our best estimates of height measurement noise and geolocation errors. We process the resulting point cloud of water surface heights with the RiverObs package, which delineates the river center line and draws the water surface profile. Next, we identify inflection points in the water surface profile and classify the sections between them. Finally, we compare our limited classification of the simulated SWOT-derived water surface profile to the "exact" classification of the modeled Sacramento River. With this exercise, we expect to determine whether SWOT observations can be used to find inflection points in water surface profiles, which would bring knowledge of flow regimes into the definition of river reaches.
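
    As a toy illustration of the classification rule described above (convex profiles as A1, concave as M1), the sketch below applies a second derivative and a sign test to a synthetic water surface profile. The profile, the absence of noise and smoothing, and the grid spacing are all simplifying assumptions.

        import numpy as np

        # Hypothetical along-stream profile: distance s [m], water surface height h [m].
        s = np.linspace(0.0, 50000.0, 501)
        h = 12.0 - 2e-4 * s + 0.5 * np.sin(s / 8000.0)

        # Second derivative of the (ideally denoised) water surface profile.
        d2h = np.gradient(np.gradient(h, s), s)

        # Convex sections (d2h > 0) are classified A1, concave sections (d2h < 0) M1.
        labels = np.where(d2h > 0, "A1", "M1")

        # Inflection points are sign changes of the second derivative; candidate
        # reaches are the sections between consecutive inflection points.
        inflections = np.where(np.diff(np.sign(d2h)) != 0)[0]
        print(labels[:5], inflections)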

  5. Systematic reviews: I. The correlation between laboratory tests on marginal quality and bond strength. II. The correlation between marginal quality and clinical outcome.

    Science.gov (United States)

    Heintze, Siegward D

    2007-01-01

    topics, about 80% of the studies revealed that there was no correlation between the two methods. For the correlation between quantitative marginal analysis and clinical outcome, laboratory data were compared to the outcomes of 11 selected clinical studies. In only 2 of the 11 studies (18%) did the clinical outcome match the prognosis based on the laboratory tests; the remaining studies showed no correlation. When pooling data on 20 adhesive systems, no correlation was found between the percentage of continuous margin of restorations placed in extracted premolars and the percentage of teeth in clinical studies that showed no retention loss, no discoloured margins, acceptable margins, or absence of secondary caries. With regard to the correlation between dye penetration and clinical outcome, too few studies matching the inclusion criteria were found. However, literature data suggest that there is no correlation between microleakage as measured in the laboratory and clinical parameters. The results of bond strength tests did not correlate with laboratory tests that evaluated the marginal seal of restorations, such as microleakage or gap analysis. The quantitative marginal analysis of Class V fillings in the laboratory was unable to predict the performance of the same materials in vivo. Therefore, microleakage tests and quantitative marginal analysis should be abandoned, and research should focus on laboratory tests that are validated with regard to their ability to satisfactorily predict the clinical performance of restorative materials.

  6. Professional Commitment and Professional Marginalism in Teachers

    Directory of Open Access Journals (Sweden)

    Kalashnikov A.I.

    2017-11-01

    Full Text Available The article reviews teachers' attitudes towards the teaching profession, which can be expressed both as professional commitment and as professional marginalism. The dominance of professional marginalism can affect both students and the teacher's personality destructively; hence the importance of studying the content of the personal position of marginal individuals and the prevalence of marginalism among teachers. It was suggested that marginalism could be revealed through the study of professional commitment. The study involved 81 teachers of Sverdlovsk secondary schools aged 21-60 years, with work experience ranging from 1 month to 39 years. The Professional Commitment Questionnaire was used as the study technique. The results showed that negative emotional attitude towards the profession and reluctance to leave the profession were grouped as a separate factor, accounting for 12.5% of the variance, with factor loadings ranging from 0.42 to 0.84. The study indicated that professional marginalism in teachers includes dissatisfaction with work, feelings of resentment against the profession, and an unwillingness to leave the profession.

  7. A quantitative analysis of transtensional margin width

    Science.gov (United States)

    Jeanniot, Ludovic; Buiter, Susanne J. H.

    2018-06-01

    Continental rifted margins show variations between a few hundred and almost a thousand kilometres in their conjugate widths, from the relatively undisturbed continent to the oceanic crust. Analogue and numerical modelling results suggest that the conjugate width of rifted margins may be related to their obliquity of divergence, with narrower margins occurring at higher obliquity. Here we test this prediction by analysing the obliquity and rift width for 26 segments of transtensional conjugate rifted margins in the Atlantic and Indian Oceans. We use the plate reconstruction software GPlates (http://www.gplates.org) with different plate rotation models to estimate the direction and magnitude of rifting from the initial phases of continental rifting until breakup. Our rift width corresponds to the distance between the onshore maximum topography and the last identified continental crust. We find a weak positive correlation between the obliquity of rifting and rift width. Highly oblique margins are narrower than orthogonal margins, as expected from analogue and numerical models. We find no relationship between rift obliquity and either rift duration or the presence or absence of Large Igneous Provinces (LIPs).

  8. Classifying Transition Behaviour in Postural Activity Monitoring

    Directory of Open Access Journals (Sweden)

    James BRUSEY

    2009-10-01

    Full Text Available A few accelerometers positioned on different parts of the body can be used to accurately classify steady state behaviour, such as walking, running, or sitting. Such systems are usually built using supervised learning approaches. Transitions between postures are, however, difficult to deal with using the posture classification systems proposed to date, since there is no label set for intermediary postures and the exact point at which a transition occurs can be hard to pinpoint. The usual workaround when using supervised learning to train such systems is to discard a section of the dataset around each transition. This leads to poorer classification performance when the systems are deployed out of the laboratory and used on-line, particularly if the regimes monitored involve fast-paced activity changes. Time-based filtering that takes advantage of sequential patterns is a potential mechanism to improve posture classification accuracy in such real-life applications. Such filtering should also reduce the number of event messages that need to be sent across a wireless network to track posture remotely, hence extending the system's life. To support time-based filtering, understanding transitions, which are the major event generators in a classification system, is key. This work examines three approaches to post-processing the output of a posture classifier using time-based filtering: a naïve voting scheme, an exponentially weighted voting scheme, and a Bayes filter. Best performance is obtained from the exponentially weighted voting scheme, although it is suspected that a more sophisticated treatment of the Bayes filter might yield better results.
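
    The record does not give the filter equations; the sketch below is one plausible reading of an exponentially weighted voting scheme, in which each class keeps a score that decays at every step and is bumped by the raw classifier's current vote. The decay constant and the label stream are invented for illustration.

        def exp_weighted_vote(labels, classes, alpha=0.7):
            """Smooth a stream of per-frame posture labels by decaying votes."""
            scores = {c: 0.0 for c in classes}
            filtered = []
            for lab in labels:
                for c in scores:
                    scores[c] *= alpha      # exponential decay of past votes
                scores[lab] += 1.0          # current raw classifier vote
                filtered.append(max(scores, key=scores.get))
            return filtered

        raw = ["sit", "sit", "walk", "sit", "walk", "walk", "walk"]
        print(exp_weighted_vote(raw, classes=["sit", "walk", "run"]))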

  9. How large a training set is needed to develop a classifier for microarray data?

    Science.gov (United States)

    Dobbin, Kevin K; Zhao, Yingdong; Simon, Richard M

    2008-01-01

    A common goal of gene expression microarray studies is the development of a classifier that can be used to divide patients into groups with different prognoses, or with different expected responses to a therapy. These types of classifiers are developed on a training set, which is the set of samples used to train a classifier. The question of how many samples are needed in the training set to produce a good classifier from high-dimensional microarray data is challenging. We present a model-based approach to determining the sample size required to adequately train a classifier. It is shown that sample size can be determined from three quantities: standardized fold change, class prevalence, and number of genes or features on the arrays. Numerous examples and important experimental design issues are discussed. The method is adapted to address ex post facto determination of whether the size of a training set used to develop a classifier was adequate. An interactive web site for performing the sample size calculations is provided. We showed that sample size calculations for classifier development from high-dimensional microarray data are feasible, discussed numerous important considerations, and presented examples.
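
    The paper derives the required sample size analytically from the three quantities named above; the sketch below takes the cruder simulation route under comparable assumptions (two Gaussian classes separated by a standardized fold change in a few informative genes). It is not the authors' method, but it conveys how cross-validated accuracy saturates as the training set grows.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        def simulated_accuracy(n, n_genes=1000, n_informative=20, fold=1.0, prev=0.5):
            """Cross-validated accuracy for a simulated training set of size n."""
            y = (np.arange(n) < prev * n).astype(int)      # class prevalence `prev`
            X = rng.standard_normal((n, n_genes))
            X[y == 1, :n_informative] += fold              # standardized fold change
            return cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()

        # Increase n until accuracy stops improving appreciably.
        for n in (20, 40, 80, 160):
            print(n, round(simulated_accuracy(n), 3))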

  10. Classifying smoking urges via machine learning.

    Science.gov (United States)

    Dumortier, Antoine; Beckjord, Ellen; Shiffman, Saul; Sejdić, Ervin

    2016-12-01

    Smoking is the largest preventable cause of death and disease in the developed world, and advances in modern electronics and machine learning can help us deliver real-time interventions to smokers in novel ways. In this paper, we examine different machine learning approaches that use situational features associated with having or not having urges to smoke during a quit attempt in order to accurately classify high-urge states. To test our machine learning approaches, specifically Bayes, discriminant analysis and decision tree learning methods, we used a dataset collected from over 300 participants who had initiated a quit attempt. The three classification approaches are evaluated by observing sensitivity, specificity, accuracy and precision. The outcome of the analysis showed that algorithms based on feature selection make it possible to obtain high classification rates with only a few features selected from the entire dataset. The classification tree method outperformed the naive Bayes and discriminant analysis methods, with a classification accuracy of up to 86%. These numbers suggest that machine learning may be a suitable approach to deal with smoking cessation matters and to predict smoking urges, outlining a potential use for mobile health applications. In conclusion, machine learning classifiers can help identify smoking situations, and the search for the best features and classifier parameters significantly improves the algorithms' performance. In addition, this study also supports the usefulness of new technologies in improving the effect of smoking cessation interventions, the management of time and patients by therapists, and thus the optimization of available health care resources. Future studies should focus on providing more adaptive and personalized support to people who really need it, in a minimum amount of time, by developing novel expert systems capable of delivering real-time interventions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
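
    A minimal sketch of the comparison described above, with scikit-learn stand-ins for the three families (naive Bayes, discriminant analysis, decision tree); the synthetic features are a placeholder for the situational features collected in the study, and the parameter choices are arbitrary.

        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_validate
        from sklearn.naive_bayes import GaussianNB
        from sklearn.tree import DecisionTreeClassifier

        # Placeholder for situational features labelled high-urge vs. low-urge.
        X, y = make_classification(n_samples=300, n_features=12, n_informative=5,
                                   random_state=0)

        for clf in (GaussianNB(), LinearDiscriminantAnalysis(),
                    DecisionTreeClassifier(max_depth=5, random_state=0)):
            cv = cross_validate(clf, X, y, cv=5,
                                scoring=("accuracy", "precision", "recall"))
            print(type(clf).__name__,
                  round(cv["test_accuracy"].mean(), 3),
                  round(cv["test_precision"].mean(), 3),
                  round(cv["test_recall"].mean(), 3))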

  11. TRABAJO, REMUNERACIÓN Y PRODUCTIVIDAD (Cómo establecer la cuota de remuneración justa al trabajo en base a su productividad marginal) [Work, Remuneration and Productivity (How to establish a fair remuneration rate for work based on its marginal productivity)]

    Directory of Open Access Journals (Sweden)

    Jorge Rionda Ramírez

    2012-02-01

    Full Text Available This work makes a methodological proposal on how to measure a just remuneration rate for work on the basis of its marginal contribution to product value. It is applied to the production of tangible goods and starts from a linear programming model whose dual stipulates the shadow prices that indicate the marginal contribution to the value produced of each input involved in production. This is a novel approach, given that marginalist work contains no deliberate treatment of the rate of exploitation or the remuneration rate for work. It also presents an interesting approach to opportunity prices in the substitution of productive factors, based on the degree of intensity with which each factor is used as well as on its contribution to the value produced. This shows that even within the utilitarian thesis the subject of exploitation has a place; theorists of this school simply avoid the topic in their attempt to frame the economic problem in a scientistic, positive manner, free of political-normative implications.
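
    The proposal rests on linear programming duality: the shadow price of a binding resource constraint measures that resource's marginal contribution to the objective value. The toy sketch below shows the mechanics with SciPy's HiGHS backend, which reports the dual values of the constraints; the production figures are invented, and the sign convention of the reported marginals should be checked against the SciPy documentation.

        from scipy.optimize import linprog

        # Toy production plan: maximize 3*x1 + 5*x2 (product value); linprog
        # minimizes, so the objective is negated.
        c = [-3.0, -5.0]
        A_ub = [[2.0, 4.0],     # labour hours needed per unit of each product
                [1.0, 1.0]]     # material needed per unit of each product
        b_ub = [100.0, 40.0]    # available labour hours and material

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")

        # With HiGHS the duals of the inequality constraints are exposed as
        # res.ineqlin.marginals; up to sign, the labour dual is the marginal
        # contribution of one extra labour hour to the produced value.
        print(res.x, res.ineqlin.marginals)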

  12. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    Science.gov (United States)

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when classifiers are applied to nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods that resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits only from the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and the methods perform uniformly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
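
    The sketch below illustrates generic inverse-probability oversampling: records are redrawn with weights proportional to the inverse of their inclusion probabilities, so a classifier trained on the result sees approximately the population class balance. It is a simplified stand-in for the stochastic and parametric variants implemented in sambia, and the inclusion probabilities are invented.

        import numpy as np

        rng = np.random.default_rng(1)

        def ip_oversample(X, y, incl_prob):
            """Resample a biased sample towards the population distribution."""
            w = 1.0 / np.asarray(incl_prob)            # inverse-probability weights
            idx = rng.choice(len(X), size=len(X), replace=True, p=w / w.sum())
            return X[idx], y[idx]

        # Hypothetical two-phase study: cases sampled with inclusion probability
        # 0.9, controls with 0.1, so cases are heavily enriched.
        X = rng.standard_normal((8, 3))
        y = np.array([1, 1, 1, 1, 0, 0, 0, 0])
        p = np.where(y == 1, 0.9, 0.1)
        X_res, y_res = ip_oversample(X, y, p)
        print(y_res)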

  13. Geomorphic response of a continental margin to tectonic and eustatic variations, the Levant margin during the Messinian Salinity Crisis

    Science.gov (United States)

    Ben Moshe, Liran; Ben-Avraham, Zvi; Enzel, Yehouda; Schattner, Uri

    2017-04-01

    During the Messinian Salinity Crisis (MSC, 5.97±0.01-5.33 Ma) the Mediterranean Levant margin experienced major eustatic and sedimentary cycles as well as tectonic motion along the nearby Dead Sea fault plate boundary. New structures formed along this margin, with morphology responding to these changes. Our study focuses on changes in this morphology across the margin. It is based on interpretation of three 3D seismic reflection volumes from offshore Israel. Multi-attribute analysis aided the extraction of key reflectors. Morphologic analysis of these data quantified the interaction of eustasy, sedimentation, and tectonics. Late Messinian morphologic domains include: (a) continental shelf; (b) 'Delta' anticline, forming a ridge diagonal to the strike of the margin; (c) southward-dipping 'Hadera' valley, separating (a) and (b); (d) 'Delta Gap' - a water gap crossing perpendicular to the anticline axis, exhibiting a sinuous thalweg; (e) continental slope. Drainage across the margin developed in several stages. Remains of turbidite flows crossing the margin down-slope were identified across the 'Delta' anticline. These flows accumulated together with the MSC evaporite sequence, prior to folding of the anticline. Rise of the anticline above the contemporaneous bathymetry either blocked or diverted the turbidites; it also defined the Hadera valley. In-situ evaporites covering the valley floor are, in turn, covered by a fan-delta at the distal end of the valley. The fan-delta complex contains eroded evaporites and Lago-Mare fauna. Its top is truncated by dendritic fluvial channels that drained towards the Delta Gap. The Delta Gap was carved through the Delta ridge in a morphological and structural transition zone. We propose that during the first stages of the MSC (5.97±0.01-5.59 Ma) destabilization of the continental slope due to oscillating sea level produced gravity currents that flowed through the pre-existing Delta anticline. Subsequent folding of the Delta anticline

  14. A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    WANG Shaoyu

    2016-12-01

    Full Text Available Remote sensing imagery contains abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take this spatial information into account, so their results are often poor. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification information of adjacent pixels constrains the classification of the center pixel, thereby exploiting spatial correlation. Spectral information and spatial correlation information are considered simultaneously when clustering based on a second-order conditional random field. Moreover, the globally optimal inference of each pixel's posterior classification probability can be obtained using loopy belief propagation. Experiments show that the proposed algorithm effectively maintains the shape features of objects and achieves higher classification accuracy than traditional algorithms.
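
    For orientation, the sketch below implements plain fuzzy c-means on pixel feature vectors; the paper's contribution, a conditional-random-field term coupling each pixel's membership to its neighbours' prior labels, is noted in a comment but not implemented, and all sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)

        def fcm(X, k=3, m=2.0, n_iter=50):
            """Plain fuzzy c-means; the CRF-constrained variant would add a
            neighbourhood penalty to the membership update below."""
            U = rng.random((len(X), k))
            U /= U.sum(axis=1, keepdims=True)               # fuzzy memberships
            for _ in range(n_iter):
                W = U ** m
                C = (W.T @ X) / W.sum(axis=0)[:, None]      # cluster centres
                d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-9
                U = 1.0 / d ** (2.0 / (m - 1.0))
                U /= U.sum(axis=1, keepdims=True)
            return U, C

        X = rng.random((200, 3))                            # e.g. pixel spectra
        U, C = fcm(X)
        print(U.argmax(axis=1)[:10])                        # hard labels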

  15. Classification Identification of Acoustic Emission Signals from Underground Metal Mine Rock by ICIMF Classifier

    Directory of Open Access Journals (Sweden)

    Hongyan Zuo

    2014-01-01

    Full Text Available To overcome the drawback that fuzzy classifiers are sensitive to noise and outliers, a Mamdani fuzzy classifier based on an improved chaos immune algorithm was developed, in which the parameters of the bilateral Gaussian membership functions were set as constraint conditions, and the index of fuzzy classification effectiveness and the number of correctly classified samples were used as subgoals of the fitness function. The Iris database was used for simulation experiments, and the method was applied to the classification and recognition of acoustic emission signals and interference signals from the stope wall rock of underground metal mines. The results showed that the Mamdani fuzzy classifier based on the improved chaos immune algorithm effectively improved classification accuracy on data sets with noise and outliers, and that the classification accuracy for acoustic emission signals and interference signals from the stope wall rock of underground metal mines was 90.00%. The improved chaos immune Mamdani fuzzy (ICIMF) classifier is thus useful for accurate diagnosis of acoustic emission signals and interference signals from the stope wall rock of underground metal mines.
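
    A minimal sketch of Mamdani-style rule firing with Gaussian memberships. The feature names, class parameters, and the use of symmetric rather than bilateral Gaussians are simplifying assumptions; in the paper these parameters are tuned by the improved chaos immune algorithm rather than fixed by hand.

        import numpy as np

        def gauss(x, mu, sigma):
            return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

        # Hypothetical per-class (mu, sigma) for two features of a signal,
        # e.g. peak frequency [kHz] and ring-down count.
        params = {
            "rock AE":      [(150.0, 30.0), (40.0, 10.0)],
            "interference": [(60.0, 20.0), (15.0, 5.0)],
        }

        def classify(x):
            # Rule firing: AND of per-feature memberships (min operator);
            # the decision is the rule with the highest firing strength.
            strengths = {c: min(gauss(xi, mu, s) for xi, (mu, s) in zip(x, ps))
                         for c, ps in params.items()}
            return max(strengths, key=strengths.get)

        print(classify([140.0, 38.0]))    # -> "rock AE"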

  16. Binary classifiers and latent sequence models for emotion detection in suicide notes.

    Science.gov (United States)

    Cherry, Colin; Mohammad, Saif M; de Bruijn, Berry

    2012-01-01

    This paper describes the National Research Council of Canada's submission to the 2011 i2b2 NLP challenge on the detection of emotions in suicide notes. In this task, each sentence of a suicide note is annotated with zero or more emotions, making it a multi-label sentence classification task. We employ two distinct large-margin models capable of handling multiple labels. The first uses one classifier per emotion, and is built to simplify label balance issues and to allow extremely fast development. This approach is very effective, scoring an F-measure of 55.22 and placing fourth in the competition, making it the best system that does not use web-derived statistics or re-annotated training data. Second, we present a latent sequence model, which learns to segment the sentence into a number of emotion regions. This model is intended to gracefully handle sentences that convey multiple thoughts and emotions. Preliminary work with the latent sequence model shows promise, resulting in comparable performance using fewer features.
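
    The first system, one large-margin classifier per emotion, corresponds to a standard one-vs-rest decomposition over a multi-label target; a minimal scikit-learn sketch follows. The sentences and label set are invented, and the NRC system itself used different features and learners.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.preprocessing import MultiLabelBinarizer
        from sklearn.svm import LinearSVC

        # Invented sentences, each annotated with zero or more emotion labels.
        sentences = ["I am so sorry for everything.",
                     "Tell mother I love her.",
                     "Nobody ever listened to me."]
        labels = [{"guilt"}, {"love"}, {"hopelessness", "anger"}]

        mlb = MultiLabelBinarizer().fit(labels)
        Y = mlb.transform(labels)

        # One large-margin (linear SVM) classifier per emotion label.
        X = TfidfVectorizer().fit_transform(sentences)
        clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)
        print(mlb.inverse_transform(clf.predict(X)))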

  17. Memory-Based Specification of Verbal Features for Classifying Animals into Super-Ordinate and Sub-Ordinate Categories

    Directory of Open Access Journals (Sweden)

    Takahiro Soshi

    2017-09-01

    Full Text Available Accumulating evidence suggests that category representations are based on features. Distinguishing features are considered to define categories because of their all-or-none responses to objects in different categories; however, it is unclear how distinguishing features actually classify objects at various category levels. The present study included 75 animals within three classes (mammal, bird, and fish), along with 195 verbal features. Healthy adults participated in memory-based feature-animal matching verification tests. Analyses included hierarchical clustering, a support vector machine, and independent component analysis to specify the features effective for classification. Quantitative and qualitative comparisons of significant features were conducted between the super-ordinate and sub-ordinate levels. The number of significant features was larger at the super-ordinate level than at the sub-ordinate level. Qualitatively, the proportion of biological features was larger than that of cultural/affective features at both levels, while the proportion of affective features increased at the sub-ordinate level. In summary, the two types of features function differentially in establishing category representations.

  18. On the evaluation of marginal expected shortfall

    DEFF Research Database (Denmark)

    Caporin, Massimiliano; Santucci de Magistris, Paolo

    2012-01-01

    In the analysis of systemic risk, the Marginal Expected Shortfall may be used to evaluate the marginal impact of a single stock on the market Expected Shortfall. These quantities are generally computed using log-returns, in particular when there is also a focus on the conditional distribution of returns. In this case, the market log-return is only approximately equal to the weighted sum of the equity log-returns. We show that the approximation error is large during turbulent market phases, with a subsequent impact on the Marginal Expected Shortfall. We then suggest how to improve the evaluation of the Marginal Expected Shortfall.
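
    For context, the baseline empirical estimator the argument concerns: the Marginal Expected Shortfall of a stock is its average return on days when the market return falls at or below the market's alpha-quantile. The simulated returns below are illustrative only; the paper's point about the aggregation error of log-returns is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(3)

        def marginal_expected_shortfall(stock_ret, market_ret, alpha=0.05):
            """Average stock return on days when the market is in its alpha-tail."""
            var = np.quantile(market_ret, alpha)      # market tail threshold
            return stock_ret[market_ret <= var].mean()

        # Simulated daily log-returns for one stock and the market index.
        market = rng.normal(0.0, 0.01, 2500)
        stock = 1.2 * market + rng.normal(0.0, 0.008, 2500)
        print(marginal_expected_shortfall(stock, market))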

  19. [Resection margins in conservative breast cancer surgery].

    Science.gov (United States)

    Medina Fernández, Francisco Javier; Ayllón Terán, María Dolores; Lombardo Galera, María Sagrario; Rioja Torres, Pilar; Bascuñana Estudillo, Guillermo; Rufián Peña, Sebastián

    2013-01-01

    Conservative breast cancer surgery is facing a new problem: the potential tumour involvement of resection margins. This eventuality has been closely and negatively associated with disease-free survival. Various factors may influence the likelihood of margins being affected, mostly related to the characteristics of the tumour, the patient, or the surgical technique. In the last decade, many studies have attempted to find predictive factors for margin involvement. However, it is currently the new techniques used in the study of margins and tumour localisation that are significantly reducing reoperations in conservative breast cancer surgery. Copyright © 2012 AEC. Published by Elsevier España. All rights reserved.

  20. Spectral classification based on the color of live corals and dead corals covered with algae

    Science.gov (United States)

    Nurdin, Nurjannah; Komatsu, Teruhisa; Barille, Laurent; Akbar, A. S. M.; Sawayama, Shuhei; Fitrah, Muh. Nur; Prasyad, Hermansyah

    2016-05-01

    Pigments in the host tissues of corals can make a significant contribution to their spectral signature and can affect their apparent color as perceived by a human observer. The aim of this study is to classify the spectral reflectance of corals based on their different colors, so that the spectra can be used as references for discriminating between live corals and dead corals covered with algae. Spectral reflectance data were collected at three small islands in the Spermonde Archipelago, Indonesia, using an underwater hyperspectral radiometer. First- and second-derivative analyses resolved the wavelength locations of the dominant features contributing to coral reflectance and supported the distinct differences in spectra among the colors present. Spectral derivative analysis was used to determine the specific wavelength regions ideal for remote identification of substrate type. The results showed that yellow, green, brown and violet live corals are spectrally separable from each other, but that their spectra are similar to those of dead corals covered with algae.
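
    A minimal sketch of first- and second-derivative analysis on a single reflectance spectrum, using Savitzky-Golay filtering; the synthetic spectrum, sampling interval, and window settings are illustrative assumptions, not the study's processing chain.

        import numpy as np
        from scipy.signal import savgol_filter

        # Hypothetical reflectance spectrum sampled every 1 nm from 400 to 700 nm.
        wl = np.arange(400, 701)
        refl = 0.05 + 0.04 * np.exp(-(((wl - 575.0) / 25.0) ** 2))

        # Smoothed first and second derivatives, as commonly used to locate the
        # wavelengths of dominant reflectance features.
        d1 = savgol_filter(refl, window_length=15, polyorder=3, deriv=1)
        d2 = savgol_filter(refl, window_length=15, polyorder=3, deriv=2)

        # Reflectance peaks sit at downward zero crossings of the first
        # derivative; second-derivative extrema mark shoulders and inflections.
        peaks = wl[1:][np.diff(np.sign(d1)) < 0]
        print(peaks)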