WorldWideScience

Sample records for margin based classifiers

  1. Maximum margin classifier working in a set of strings.

    Science.gov (United States)

    Koyano, Hitoshi; Hayashida, Morihiro; Akutsu, Tatsuya

    2016-03-01

    Numbers and numerical vectors account for a large portion of data. However, recently, the amount of string data generated has increased dramatically. Consequently, classifying string data is a common problem in many fields. The most widely used approach to this problem is to convert strings into numerical vectors using string kernels and subsequently apply a support vector machine that works in a numerical vector space. However, this non-one-to-one conversion involves a loss of information and makes it impossible to evaluate, using probability theory, the generalization error of a learning machine, considering that the given data to train and test the machine are strings generated according to probability laws. In this study, we approach this classification problem by constructing a classifier that works in a set of strings. To evaluate the generalization error of such a classifier theoretically, probability theory for strings is required. Therefore, we first extend a limit theorem for a consensus sequence of strings demonstrated by one of the authors and co-workers in a previous study. Using the obtained result, we then demonstrate that our learning machine classifies strings in an asymptotically optimal manner. Furthermore, we demonstrate the usefulness of our machine in practical data analysis by applying it to predicting protein-protein interactions using amino acid sequences and classifying RNAs by the secondary structure using nucleotide sequences.
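
    The record contrasts working directly in a set of strings with the usual route of mapping strings to vectors via a string kernel and applying a support vector machine. As a point of reference only, below is a minimal sketch of that conventional route, assuming a simple character k-mer (spectrum) representation and the scikit-learn API; the toy sequences and k-mer length are illustrative and this is not the authors' method.

```python
# Sketch: spectrum-kernel-style string classification with an SVM.
# Assumes scikit-learn is available; the k-mer length and toy data are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy string data (e.g., short sequence fragments) with binary labels.
strings = ["ACGTAC", "ACGTGG", "TTTACG", "TTGACG"]
labels = [0, 0, 1, 1]

# Map each string to counts of its character 3-mers (a crude spectrum-kernel feature map).
kmer_counts = CountVectorizer(analyzer="char", ngram_range=(3, 3))
clf = make_pipeline(kmer_counts, SVC(kernel="linear"))
clf.fit(strings, labels)

print(clf.predict(["ACGTAA", "TTGACC"]))  # predicted class labels
```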

  2. Correlation Dimension-Based Classifier

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    2014-01-01

    Roč. 44, č. 12 (2014), s. 2253-2263 ISSN 2168-2267 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords: classifier * multidimensional data * correlation dimension * scaling exponent * polynomial expansion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014

  3. Aggregation Operator Based Fuzzy Pattern Classifier Design

    DEFF Research Database (Denmark)

    Mönks, Uwe; Larsen, Henrik Legind; Lohweg, Volker

    2009-01-01

    This paper presents a novel modular fuzzy pattern classifier design framework for intelligent automation systems, developed on the basis of the established Modified Fuzzy Pattern Classifier (MFPC), which allows the design of novel classifier models that can be implemented hardware-efficiently. … The performances of novel classifiers using substitutes of MFPC's geometric mean aggregator are benchmarked, in the scope of an image processing application, against the MFPC to reveal classification improvement potentials for obtaining higher classification rates. …

  4. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on a dynamic programming approach, we design algorithms for the sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).
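
    To make the length/coverage terminology concrete, the sketch below shows one plausible way a classifier might apply decision rules ordered first by length and then by coverage; the rule format, tie-breaking and default label are assumptions chosen for illustration, not the algorithm from [3, 4].

```python
# Sketch: applying decision rules ordered by (length, -coverage).
# A rule is (conditions, label, coverage); conditions map attribute -> required value.
# The ordering and the default label are illustrative assumptions.

rules = [
    ({"colour": "red"}, "stop", 40),                      # length 1, coverage 40
    ({"colour": "green", "shape": "round"}, "go", 25),    # length 2, coverage 25
]

def classify(instance, rules, default="unknown"):
    # Prefer shorter rules; among rules of equal length, prefer larger coverage.
    for conditions, label, _cov in sorted(rules, key=lambda r: (len(r[0]), -r[2])):
        if all(instance.get(attr) == val for attr, val in conditions.items()):
            return label
    return default

print(classify({"colour": "red", "shape": "square"}, rules))   # -> "stop"
print(classify({"colour": "green", "shape": "round"}, rules))  # -> "go"
```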

  5. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    Based on a dynamic programming approach, we design algorithms for the sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  6. Reinforcement Learning Based Artificial Immune Classifier

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

    Artificial immune systems are among the widely used methods for classification, which is a decision-making process. Artificial immune systems, based on the natural immune system, can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. The approach uses reinforcement learning to find better antibodies with immune operators. Compared with other methods in the literature, the proposed approach offers several contributions, such as effectiveness, fewer memory cells, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and on an FPGA. Benchmark data and remote image data are used for the experimental results. Comparative results with supervised/unsupervised artificial immune systems, a negative selection classifier, and a resource limited artificial immune classifier are given to demonstrate the effectiveness of the proposed new method.

  7. Double Ramp Loss Based Reject Option Classifier

    Science.gov (United States)

    2015-05-22

    The proposed double ramp loss L_DR is a difference of convex (DC) functions; to minimize it, a DC programming approach is used [1]. L_DR does not place any restriction on ρ for it to be an upper bound of the loss L_{0-d-1}. Experimental results are reported for a reject option classifier learned with the L_DR based approach (C = 100, μ = 1, d = 0.2), with filled circles and triangles representing the support vectors.

  8. Hybrid Neuro-Fuzzy Classifier Based On Nefclass Model

    Directory of Open Access Journals (Sweden)

    Bogdan Gliwa

    2011-01-01

    The paper presents a hybrid neuro-fuzzy classifier, based on the NEFCLASS model, which was modified. The presented classifier was compared to popular classifiers – neural networks and k-nearest neighbours. The efficiency of the modifications in the classifier was compared with the methods used in the original NEFCLASS model (learning methods). The accuracy of the classifier was tested using 3 datasets from the UCI Machine Learning Repository: iris, wine and breast cancer wisconsin. Moreover, the influence of ensemble classification methods on classification accuracy was presented.

  9. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Roč. 10, č. 3 (2000), s. 417-428 ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant - others:MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords: neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  10. Ensemble of classifiers based network intrusion detection system performance bound

    CSIR Research Space (South Africa)

    Mkuzangwe, Nenekazi NP

    2017-11-01

    Full Text Available This paper provides a performance bound of a network intrusion detection system (NIDS) that uses an ensemble of classifiers. Currently researchers rely on implementing the ensemble of classifiers based NIDS before they can determine the performance...

  11. Spectrum estimation method based on marginal spectrum

    International Nuclear Information System (INIS)

    Cai Jianhua; Hu Weiwen; Wang Xianchun

    2011-01-01

    The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The procedure for obtaining the marginal spectrum in the HHT method is given, and the linear property of the marginal spectrum is demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum are further analyzed. Then the Hilbert spectrum estimation algorithm is discussed in detail, and simulation results are given at last. The theory and simulations show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)
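
    As a rough illustration of the marginal spectrum described above, the sketch below integrates a Hilbert amplitude-frequency distribution over time, assuming the intrinsic mode functions (IMFs) from empirical mode decomposition are already available; it uses NumPy/SciPy, with bin count and test signal chosen arbitrarily, and is not the authors' implementation.

```python
# Sketch: marginal spectrum h(f) = integral over time of the Hilbert spectrum H(f, t),
# approximated by binning instantaneous amplitude against instantaneous frequency.
# Assumes `imfs` (a list of 1-D arrays) comes from some EMD implementation.
import numpy as np
from scipy.signal import hilbert

def marginal_spectrum(imfs, fs, n_bins=256):
    freq_edges = np.linspace(0.0, fs / 2.0, n_bins + 1)
    h = np.zeros(n_bins)
    for imf in imfs:
        analytic = hilbert(imf)
        amplitude = np.abs(analytic)[:-1]
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # Hz, one sample shorter
        # Accumulate amplitude into frequency bins and integrate over time (dt = 1/fs).
        hist, _ = np.histogram(inst_freq, bins=freq_edges, weights=amplitude)
        h += hist / fs
    centres = 0.5 * (freq_edges[:-1] + freq_edges[1:])
    return centres, h

# Example with a single synthetic "IMF": a 50 Hz tone sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
freqs, h = marginal_spectrum([np.sin(2 * np.pi * 50 * t)], fs)
print(freqs[np.argmax(h)])  # peak near 50 Hz
```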

  12. Entropy Based Classifier Combination for Sentence Segmentation

    Science.gov (United States)

    2007-01-01

    A speaker diarization system is used to divide the audio data into hypothetical speakers [17]. The prosodic features also include turn-based features, which describe the position of a word in relation to the diarization segmentation. Cited works include the ICSI-SRI fall 2004 diarization system for robust speaker segmentation (Proc. RT-04F Workshop, 2004) [17] and the rich transcription fall 2003 evaluation plan, http://nist.gov/speech/tests/rt/rt2003/fall/docs/rt03-fall-eval-plan-v9.pdf [18].

  13. Machine Learning Based Classifier for Falsehood Detection

    Science.gov (United States)

    Mallikarjun, H. M.; Manimegalai, P., Dr.; Suresh, H. N., Dr.

    2017-08-01

    The investigation of physiological techniques for falsehood (lie) detection tests using emotional disturbances began in the early 1900s. The need for falsehood detection has been part of our society for centuries. Various requirements across society have raised the need to develop foolproof methodologies for falsehood detection. The established comparative questioning tests have tended to give inconclusive results, against which new robust strategies are being explored for obtaining a more effective falsehood detection setup. Electroencephalography (EEG) is a non-invasive technique to measure the activity of the brain through electrodes attached to the scalp of a subject. An electroencephalogram is a record of the electric signals generated by the synchronous activity of brain cells over a time frame. The fundamental goal is to gather and identify the relevant information in this activity, which can be used to provide inference for falsehood detection in future analysis. This work proposes a strategy for falsehood detection using an EEG database recorded on random people of various age groups and social backgrounds. The statistical analysis is conducted using MATLAB v-14, a language for technical computing that saves a considerable amount of time through streamlined analysis routines. The focus of this work is on falsehood classification by a Support Vector Machine (SVM). 72 samples were prepared by asking questions from a standard questionnaire, with right and wrong replies given at different times by individuals wearing a wearable head unit. 52 samples are used for training and 20 for testing. Using the Bluetooth-based Neurosky Mindwave kit, brain waves are recorded and their values are processed accordingly. In this work the confusion matrix is derived by MATLAB programs and an accuracy of 56.25% is achieved.
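
    The workflow described (72 labelled EEG samples, 52 for training and 20 for testing, an SVM classifier, and a confusion matrix) can be reproduced generically with scikit-learn; the sketch below uses random synthetic features in place of the recorded brain-wave data and is only an outline of that pipeline, not the authors' code.

```python
# Sketch: SVM classification of EEG-derived feature vectors with a confusion matrix.
# Feature values are random stand-ins for the recorded brain-wave features.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 8))          # 72 samples, 8 illustrative features
y = rng.integers(0, 2, size=72)       # 1 = truthful answer, 0 = falsehood (toy labels)

X_train, y_train = X[:52], y[:52]     # 52 training samples, as in the record
X_test, y_test = X[52:], y[52:]       # 20 test samples

clf = SVC(kernel="rbf").fit(X_train, y_train)
y_pred = clf.predict(X_test)

print(confusion_matrix(y_test, y_pred))
print("accuracy:", accuracy_score(y_test, y_pred))
```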

  14. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    International Nuclear Information System (INIS)

    Blanco, A; Rodriguez, R; Martinez-Maranon, I

    2014-01-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrosted) to find differences between the sexes. Several linear and non linear classifiers such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA) can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.

  15. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    Science.gov (United States)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrosted) to find differences between the sexes. Several linear and non linear classifiers such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA) can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.
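
    One straightforward reading of "combining classifiers built on complementary dissimilarities" is to train the same base learner under several distance measures and fuse their votes; the sketch below does this with k-NN and three standard metrics on a stock dataset, which illustrates the idea rather than reproducing the authors' algorithm.

```python
# Sketch: majority vote over k-NN classifiers that use different dissimilarity measures.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Complementary (non-Euclidean) dissimilarities induce diversity among the members.
metrics = ["euclidean", "manhattan", "chebyshev"]
members = [KNeighborsClassifier(n_neighbors=5, metric=m).fit(X_tr, y_tr) for m in metrics]

votes = np.stack([m.predict(X_te) for m in members])          # shape: (n_members, n_test)
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print("ensemble accuracy:", (majority == y_te).mean())
```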

  16. CRBRP structural and thermal margin beyond the design base

    International Nuclear Information System (INIS)

    Strawbridge, L.E.

    1979-01-01

    Prudent margins beyond the design base have been included in the design of Clinch River Breeder Reactor Plant to further reduce the risk to the public from highly improbable occurrences. These margins include Structural Margin Beyond the Design Base to address the energetics aspects and Thermal Margin Beyond the Design Base to address the longer term thermal and radiological consequences. The assessments that led to the specification of these margins are described, along with the experimental support for those assessments. 8 refs

  17. A systems biology-based classifier for hepatocellular carcinoma diagnosis.

    Directory of Open Access Journals (Sweden)

    Yanqiong Zhang

    AIM: The diagnosis of hepatocellular carcinoma (HCC) in the early stage is crucial to the application of curative treatments, which are the only hope for increasing the life expectancy of patients. Recently, several large-scale studies have shed light on this problem through analysis of gene expression profiles to identify markers correlated with HCC progression. However, those marker sets shared few genes in common and were poorly validated using independent data. Therefore, we developed a systems biology based classifier by combining the differential gene expression with topological features of human protein interaction networks to enhance the ability of HCC diagnosis. METHODS AND RESULTS: In the Oncomine platform, genes differentially expressed in HCC tissues relative to their corresponding normal tissues were filtered by a corrected Q value cut-off and Concept filters. The identified genes that are common to different microarray datasets were chosen as the candidate markers. Then, their networks were analyzed by GeneGO Meta-Core software and the hub genes were chosen. After that, an HCC diagnostic classifier was constructed by Partial Least Squares modeling based on the microarray gene expression data of the hub genes. Validations of diagnostic performance showed that this classifier had high predictive accuracy (85.88∼92.71%) and area under the ROC curve (approximating 1.0), and that the network topological features integrated into this classifier contribute greatly to improving the predictive performance. Furthermore, it has been demonstrated that this modeling strategy is not only applicable to HCC, but also to other cancers. CONCLUSION: Our analysis suggests that the systems biology-based classifier, which combines differential gene expression and topological features of the human protein interaction network, may enhance the diagnostic performance of an HCC classifier.
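
    The record builds its diagnostic classifier by Partial Least Squares modelling on hub-gene expression values. A generic version of that step, using scikit-learn's PLSRegression on synthetic expression data and thresholding the continuous output, is sketched below; the threshold, component count and data are assumptions, not taken from the paper.

```python
# Sketch: PLS-based two-class diagnosis from (synthetic) hub-gene expression profiles.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_samples, n_genes = 120, 30
X = rng.normal(size=(n_samples, n_genes))        # expression of hub genes (stand-in data)
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)  # 1 = HCC

pls = PLSRegression(n_components=3).fit(X[:80], y[:80])
scores = pls.predict(X[80:]).ravel()             # continuous PLS output
y_pred = (scores > 0.5).astype(int)              # assumed decision threshold

print("AUC:", roc_auc_score(y[80:], scores))
print("accuracy:", (y_pred == y[80:]).mean())
```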

  18. Data Stream Classification Based on the Gamma Classifier

    Directory of Open Access Journals (Sweden)

    Abril Valeria Uriarte-Arcia

    2015-01-01

    The ever increasing data generation confronts us with the problem of handling online massive amounts of information. One of the biggest challenges is how to extract valuable information from these massive continuous data streams during single scanning. In a data stream context, data arrive continuously at high speed; therefore the algorithms developed to address this context must be efficient regarding memory and time management and capable of detecting changes over time in the underlying distribution that generated the data. This work describes a novel method for the task of pattern classification over a continuous data stream based on an associative model. The proposed method is based on the Gamma classifier, which is inspired by the Alpha-Beta associative memories; both are supervised pattern recognition models. The proposed method is capable of handling the space and time constraints inherent to data stream scenarios. The Data Streaming Gamma classifier (DS-Gamma classifier) implements a sliding window approach to provide concept drift detection and a forgetting mechanism. In order to test the classifier, several experiments were performed using different data stream scenarios with real and synthetic data streams. The experimental results show that the method exhibits competitive performance when compared to other state-of-the-art algorithms.

  19. A support vector machine (SVM) based voltage stability classifier

    Energy Technology Data Exchange (ETDEWEB)

    Dosano, R.D.; Song, H. [Kunsan National Univ., Kunsan, Jeonbuk (Korea, Republic of); Lee, B. [Korea Univ., Seoul (Korea, Republic of)

    2007-07-01

    Power system stability has become even more complex and critical with the advent of deregulated energy markets and the growing desire to fully utilize existing transmission infrastructure. The economic pressure on electricity markets forces the operation of power systems and components to their limits of capacity and performance. System conditions can be more exposed to instability due to greater uncertainty in day to day system operations and an increase in the number of potential components for system disturbances, potentially resulting in voltage instability. This paper proposed a support vector machine (SVM) based power system voltage stability classifier using local measurements of voltage and active power of load. It described the procedure for fast classification of long-term voltage stability using the SVM algorithm. The application of the SVM based voltage stability classifier was presented with reference to the choice of input parameters; input data preconditioning; the moving window for the feature vector; determination of learning samples; and other considerations in SVM applications. The paper presented a case study with numerical examples of an 11-bus test system. The test results for the feasibility study demonstrated that the classifier could offer excellent performance in classification with time-series measurements in terms of long-term voltage stability. 9 refs., 14 figs.
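
    The classifier described assembles local voltage and load active-power measurements into feature vectors with a moving window before feeding an SVM; the sketch below shows one way such windowed feature vectors could be formed, with the window length, labelling rule and synthetic measurement series invented purely for illustration.

```python
# Sketch: moving-window feature vectors from (V, P) time series feeding an SVM
# long-term voltage stability classifier. Data, window size and labels are illustrative.
import numpy as np
from sklearn.svm import SVC

def window_features(v, p, width):
    """Stack the last `width` samples of voltage and active power into one vector."""
    feats = [np.concatenate([v[i - width:i], p[i - width:i]])
             for i in range(width, len(v) + 1)]
    return np.array(feats)

rng = np.random.default_rng(2)
v = 1.0 - np.cumsum(rng.uniform(0, 0.002, size=200))   # slowly sagging bus voltage (p.u.)
p = 0.8 + np.cumsum(rng.uniform(0, 0.001, size=200))   # slowly rising load power (p.u.)

X = window_features(v, p, width=10)
y = (v[9:] < 0.9).astype(int)                          # toy label: 1 = heading to instability

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[-3:]))                              # classify the latest windows
```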

  20. Nonlinear Knowledge in Kernel-Based Multiple Criteria Programming Classifier

    Science.gov (United States)

    Zhang, Dongling; Tian, Yingjie; Shi, Yong

    The Kernel-based Multiple Criteria Linear Programming (KMCLP) model is used as a classification method that can learn from training examples, whereas in the traditional machine learning area, data sets are classified only by prior knowledge. Some works combine the above two classification principles to overcome the shortcomings of each approach. In this paper, we propose a model that incorporates nonlinear knowledge into KMCLP in order to solve the problem when the input consists not only of training examples, but also of nonlinear prior knowledge. In dealing with a real world case, breast cancer diagnosis, the model shows better performance than the model based solely on training data.

  1. Interface Prostheses With Classifier-Feedback-Based User Training.

    Science.gov (United States)

    Fang, Yinfeng; Zhou, Dalin; Li, Kairu; Liu, Honghai

    2017-11-01

    It is evident that user training significantly affects performance of pattern-recognition-based myoelectric prosthetic device control. Despite plausible classification accuracy on offline datasets, online accuracy usually suffers from the changes in physiological conditions and electrode displacement. The user ability in generating consistent electromyographic (EMG) patterns can be enhanced via proper user training strategies in order to improve online performance. This study proposes a clustering-feedback strategy that provides real-time feedback to users by means of a visualized online EMG signal input as well as the centroids of the training samples, whose dimensionality is reduced to minimal number by dimension reduction. Clustering feedback provides a criterion that guides users to adjust motion gestures and muscle contraction forces intentionally. The experiment results have demonstrated that hand motion recognition accuracy increases steadily along the progress of the clustering-feedback-based user training, while conventional classifier-feedback methods, i.e., label feedback, hardly achieve any improvement. The result concludes that the use of proper classifier feedback can accelerate the process of user training, and implies prosperous future for the amputees with limited or no experience in pattern-recognition-based prosthetic device manipulation.

  2. Double-sided Moral Hazard and Margin-based Royalty

    OpenAIRE

    NARIU, Tatsuhiko; UEDA, Kaoru; LEE, DongJoon

    2009-01-01

    This paper analyzes royalty modes in the franchise arrangements of convenience stores under double-sided moral hazard. In Japan, the majority of franchisors charge margin-based royalties based on net margins rather than sales-based royalties based on sales. We show that the franchisor can attain the first-best outcome by adopting margin-based royalties under double-sided moral hazard. We consider a case where a franchisee sells two kinds of goods; one is shipped from its franchisor and the ot...

  3. Speaker gender identification based on majority vote classifiers

    Science.gov (United States)

    Mezghani, Eya; Charfeddine, Maha; Nicolas, Henri; Ben Amar, Chokri

    2017-03-01

    Speaker gender identification is considered among the most important tools in several multimedia applications, namely in automatic speech recognition, interactive voice response systems and audio browsing systems. Gender identification system performance is closely linked to the selected feature set and the employed classification model. Typical techniques are based on selecting the best performing classification method or searching for the optimum tuning of one classifier's parameters through experimentation. In this paper, we consider a relevant and rich set of features involving pitch, MFCCs as well as other temporal and frequency-domain descriptors. Five classification models, including decision tree, discriminant analysis, naïve Bayes, support vector machine and k-nearest neighbor, were experimented with. The three best performing classifiers among the five contribute by majority voting between their scores. Experiments were performed on three different datasets spoken in three languages, English, German and Arabic, in order to validate the language independency of the proposed scheme. Results confirm that the presented system has reached a satisfying accuracy rate and promising classification performance thanks to the discriminating abilities and diversity of the used features combined with mid-level statistics.
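
    The scheme selects the three best of five classifiers and fuses them by majority voting; a minimal scikit-learn sketch of that selection-plus-voting step is given below, with the dataset, cross-validation settings and default hyperparameters chosen only for illustration, not taken from the paper.

```python
# Sketch: pick the 3 best of 5 classifiers by cross-validation, then hard-vote.
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)

candidates = {
    "tree": DecisionTreeClassifier(random_state=0),
    "lda": LinearDiscriminantAnalysis(),
    "nb": GaussianNB(),
    "svm": SVC(),
    "knn": KNeighborsClassifier(),
}

# Rank candidates by mean cross-validated accuracy and keep the top three.
ranked = sorted(candidates.items(),
                key=lambda kv: cross_val_score(kv[1], X, y, cv=5).mean(),
                reverse=True)
top3 = ranked[:3]

ensemble = VotingClassifier(estimators=top3, voting="hard").fit(X, y)
print("members:", [name for name, _ in top3])
print("training accuracy:", ensemble.score(X, y))
```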

  4. SpectraClassifier 1.0: a user friendly, automated MRS-based classifier-development system

    Directory of Open Access Journals (Sweden)

    Julià-Sapé Margarida

    2010-02-01

    Background SpectraClassifier (SC) is a Java solution for designing and implementing Magnetic Resonance Spectroscopy (MRS)-based classifiers. The main goal of SC is to allow users with minimum background knowledge of multivariate statistics to perform a fully automated pattern recognition analysis. SC incorporates feature selection (greedy stepwise approach, either forward or backward) and feature extraction (PCA). Fisher Linear Discriminant Analysis is the method of choice for classification. Classifier evaluation is performed through various methods: display of the confusion matrix of the training and testing datasets; K-fold cross-validation, leave-one-out and bootstrapping, as well as Receiver Operating Characteristic (ROC) curves. Results SC is composed of the following modules: Classifier design, Data exploration, Data visualisation, Classifier evaluation, Reports, and Classifier history. It is able to read low resolution in-vivo MRS (single-voxel and multi-voxel) and high resolution tissue MRS (HRMAS), processed with existing tools (jMRUI, INTERPRET, 3DiCSI or TopSpin). In addition, to facilitate exchanging data between applications, a standard format capable of storing all the information needed for a dataset was developed. Each functionality of SC has been specifically validated with real data with the purpose of bug-testing and methods validation. Data from the INTERPRET project was used. Conclusions SC is a user-friendly software designed to fulfil the needs of potential users in the MRS community. It accepts all kinds of pre-processed MRS data types and classifies them semi-automatically, allowing spectroscopists to concentrate on interpretation of results with the use of its visualisation tools.

  5. Hyperspectral image classifier based on beach spectral feature

    International Nuclear Information System (INIS)

    Liang, Zhang; Lianru, Gao; Bing, Zhang

    2014-01-01

    The seashore, especially the coral bank, is sensitive to human activities and environmental changes. A multispectral image, with coarse spectral resolution, is inadequate for identifying subtle spectral distinctions between various beaches. On the contrary, a hyperspectral image with narrow and consecutive channels increases our capability to retrieve minor spectral features, which is suitable for the identification and classification of surface materials on the shore. Herein, this paper used airborne hyperspectral data, in addition to ground spectral data, to study the beaches in Qingdao. The image data first went through pretreatment to deal with the disturbance of noise, radiation inconsistency and distortion. Subsequently, the reflection spectrum, the derivative spectrum and the spectral absorption features of the beach surface were inspected in search of diagnostic features. Hence, spectral indices specific to the unique environment of the seashore were developed. According to expert decisions based on image spectra, the beaches are ultimately classified into sand beach, rock beach, vegetation beach, mud beach, bare land and water. In situ reflection spectra surveyed with a GER1500 field spectrometer validated the classification product. In conclusion, the classification approach under expert decision based on feature spectra is proved to be feasible for beaches.

  6. Locating and classifying defects using an hybrid data base

    Energy Technology Data Exchange (ETDEWEB)

    Luna-Aviles, A; Diaz Pineda, A [Tecnologico de Estudios Superiores de Coacalco. Av. 16 de Septiembre 54, Col. Cabecera Municipal. C.P. 55700 (Mexico); Hernandez-Gomez, L H; Urriolagoitia-Calderon, G; Urriolagoitia-Sosa, G [Instituto Politecnico Nacional. ESIME-SEPI. Unidad Profesional ' Adolfo Lopez Mateos' Edificio 5, 30 Piso, Colonia Lindavista. Gustavo A. Madero. 07738 Mexico D.F. (Mexico); Durodola, J F [School of Technology, Oxford Brookes University, Headington Campus, Gipsy Lane, Oxford OX3 0BP (United Kingdom); Beltran Fernandez, J A, E-mail: alelunaav@hotmail.com, E-mail: luishector56@hotmail.com, E-mail: jdurodola@brookes.ac.uk

    2011-07-19

    A computational inverse technique was used in the localization and classification of defects. Postulated voids of two different sizes (2 mm and 4 mm diameter) were introduced in PMMA bars with and without a notch. The bar dimensions are 200x20x5 mm. One half of them were plain and the other half had a notch (3 mm x 4 mm) which is close to the defect area (19 mm x 16 mm). This analysis was done with an Artificial Neural Network (ANN) and its optimization was done with an Adaptive Neuro Fuzzy Procedure (ANFIS). A hybrid data base was developed with numerical and experimental results. Synthetic data was generated with the finite element method using the SOLID95 element of the ANSYS code. A parametric analysis was carried out. Only one defect in such bars was taken into account and the first five natural frequencies were calculated. 460 cases were evaluated. Half of them were plain and the other half had a notch. All the input data was classified in two groups. Each one has 230 cases and corresponds to one of the two sorts of voids mentioned above. On the other hand, experimental analysis was carried out with PMMA specimens of the same size. The first two natural frequencies of 40 cases were obtained with one void. The other three frequencies were obtained numerically. 20 of these bars were plain and the others had a notch. These experimental results were introduced in the synthetic data base. 400 cases were taken randomly and, with this information, the ANN was trained with the backpropagation algorithm. The accuracy of the results was tested with the 100 cases that were left. In the next stage of this work, the ANN output was optimized with ANFIS. Previous papers showed that the accuracy of localization and classification of defects was reduced as notches were introduced in such bars. In the case of this paper, improved results were obtained when a hybrid data base was used.

  7. Locating and classifying defects using an hybrid data base

    Science.gov (United States)

    Luna-Avilés, A.; Hernández-Gómez, L. H.; Durodola, J. F.; Urriolagoitia-Calderón, G.; Urriolagoitia-Sosa, G.; Beltrán Fernández, J. A.; Díaz Pineda, A.

    2011-07-01

    A computational inverse technique was used in the localization and classification of defects. Postulated voids of two different sizes (2 mm and 4 mm diameter) were introduced in PMMA bars with and without a notch. The bar dimensions are 200×20×5 mm. One half of them were plain and the other half had a notch (3 mm × 4 mm) which is close to the defect area (19 mm × 16 mm). This analysis was done with an Artificial Neural Network (ANN) and its optimization was done with an Adaptive Neuro Fuzzy Procedure (ANFIS). A hybrid data base was developed with numerical and experimental results. Synthetic data was generated with the finite element method using the SOLID95 element of the ANSYS code. A parametric analysis was carried out. Only one defect in such bars was taken into account and the first five natural frequencies were calculated. 460 cases were evaluated. Half of them were plain and the other half had a notch. All the input data was classified in two groups. Each one has 230 cases and corresponds to one of the two sorts of voids mentioned above. On the other hand, experimental analysis was carried out with PMMA specimens of the same size. The first two natural frequencies of 40 cases were obtained with one void. The other three frequencies were obtained numerically. 20 of these bars were plain and the others had a notch. These experimental results were introduced in the synthetic data base. 400 cases were taken randomly and, with this information, the ANN was trained with the backpropagation algorithm. The accuracy of the results was tested with the 100 cases that were left. In the next stage of this work, the ANN output was optimized with ANFIS. Previous papers showed that the accuracy of localization and classification of defects was reduced as notches were introduced in such bars. In the case of this paper, improved results were obtained when a hybrid data base was used.

  8. Locating and classifying defects using an hybrid data base

    International Nuclear Information System (INIS)

    Luna-Aviles, A; Diaz Pineda, A; Hernandez-Gomez, L H; Urriolagoitia-Calderon, G; Urriolagoitia-Sosa, G; Durodola, J F; Beltran Fernandez, J A

    2011-01-01

    A computational inverse technique was used in the localization and classification of defects. Postulated voids of two different sizes (2 mm and 4 mm diameter) were introduced in PMMA bars with and without a notch. The bar dimensions are 200x20x5 mm. One half of them were plain and the other half had a notch (3 mm x 4 mm) which is close to the defect area (19 mm x 16 mm). This analysis was done with an Artificial Neural Network (ANN) and its optimization was done with an Adaptive Neuro Fuzzy Procedure (ANFIS). A hybrid data base was developed with numerical and experimental results. Synthetic data was generated with the finite element method using the SOLID95 element of the ANSYS code. A parametric analysis was carried out. Only one defect in such bars was taken into account and the first five natural frequencies were calculated. 460 cases were evaluated. Half of them were plain and the other half had a notch. All the input data was classified in two groups. Each one has 230 cases and corresponds to one of the two sorts of voids mentioned above. On the other hand, experimental analysis was carried out with PMMA specimens of the same size. The first two natural frequencies of 40 cases were obtained with one void. The other three frequencies were obtained numerically. 20 of these bars were plain and the others had a notch. These experimental results were introduced in the synthetic data base. 400 cases were taken randomly and, with this information, the ANN was trained with the backpropagation algorithm. The accuracy of the results was tested with the 100 cases that were left. In the next stage of this work, the ANN output was optimized with ANFIS. Previous papers showed that the accuracy of localization and classification of defects was reduced as notches were introduced in such bars. In the case of this paper, improved results were obtained when a hybrid data base was used.
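
    The three records above describe training a backpropagation ANN on the first five natural frequencies to classify postulated voids; the sketch below shows the corresponding supervised step with scikit-learn's MLPClassifier on synthetic frequency data (the ANFIS optimization stage is omitted, and all frequency values and effect sizes are placeholders).

```python
# Sketch: ANN trained by backpropagation on the first five natural frequencies
# to classify the postulated void size (2 mm vs 4 mm). Data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_cases = 460
freqs = rng.normal(loc=[120, 310, 560, 820, 1100], scale=5.0, size=(n_cases, 5))  # Hz
void_size = rng.integers(0, 2, size=n_cases)      # 0 -> 2 mm void, 1 -> 4 mm void
freqs[void_size == 1] -= 2.0                       # toy effect of the larger void

# 400 cases for training, the rest for testing, as in the records above.
X_tr, X_te, y_tr, y_te = train_test_split(freqs, void_size, train_size=400, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", ann.score(X_te, y_te))
```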

  9. Asymptotic performance of regularized quadratic discriminant analysis based classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-12-13

    This paper carries out a large dimensional analysis of the standard regularized quadratic discriminant analysis (QDA) classifier designed on the assumption that data arise from a Gaussian mixture model. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that depends only on the covariances and means associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized QDA and can be used to determine the optimal regularization parameter that minimizes the misclassification error probability. Despite being valid only for Gaussian data, our theoretical findings are shown to yield a high accuracy in predicting the performances achieved with real data sets drawn from popular real databases, thereby making an interesting connection between theory and practice.
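
    For readers unfamiliar with the classifier being analysed, a plain NumPy sketch of regularized QDA on two Gaussian classes is given below; the regularization form used here (adding γI to each sample covariance) is one common choice and is assumed for illustration, not taken from the paper.

```python
# Sketch: regularized quadratic discriminant analysis on two Gaussian classes.
import numpy as np

def fit_rqda(X, y, gamma=0.1):
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False) + gamma * np.eye(X.shape[1])  # regularized covariance
        params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1], np.log(len(Xc) / len(X)))
    return params

def predict_rqda(X, params):
    scores = []
    for c, (mu, cov_inv, logdet, logprior) in params.items():
        d = X - mu
        # Quadratic discriminant score (up to an additive constant).
        scores.append(-0.5 * np.einsum("ij,jk,ik->i", d, cov_inv, d) - 0.5 * logdet + logprior)
    classes = list(params.keys())
    return np.array(classes)[np.argmax(scores, axis=0)]

rng = np.random.default_rng(4)
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=200)
X1 = rng.multivariate_normal([1.5, 1.0], [[1.5, -0.2], [-0.2, 0.5]], size=200)
X = np.vstack([X0, X1]); y = np.array([0] * 200 + [1] * 200)

params = fit_rqda(X, y, gamma=0.1)
print("training accuracy:", (predict_rqda(X, params) == y).mean())
```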

  10. Case base classification on digital mammograms: improving the performance of case base classifier

    Science.gov (United States)

    Raman, Valliappan; Then, H. H.; Sumari, Putra; Venkatesa Mohan, N.

    2011-10-01

    Breast cancer continues to be a significant public health problem in the world. Early detection is the key for improving breast cancer prognosis. The aim of the research presented here is twofold. The first stage of the research involves machine learning techniques, which segment and extract features from the mass in digital mammograms. The second level is a problem solving approach which includes classification of the mass by a performance based case base classifier. In this paper we build a case-based classifier in order to diagnose mammographic images. We explain the different methods and behaviors that have been added to the classifier to improve its performance. The initial performance based classifier with bagging is proposed in the paper, and its implementation shows an improvement in specificity and sensitivity.

  11. Entropy based classifier for cross-domain opinion mining

    Directory of Open Access Journals (Sweden)

    Jyoti S. Deshmukh

    2018-01-01

    In recent years, the growth of social networks has increased the interest of people in analyzing reviews and opinions for products before they buy them. Consequently, this has given rise to domain adaptation as a prominent area of research in sentiment analysis. A classifier trained on one domain often gives poor results on data from another domain. Expression of sentiment is different in every domain. The labeling cost of each domain separately is very high as well as time consuming. Therefore, this study has proposed an approach that extracts and classifies opinion words from one domain called the source domain and predicts opinion words of another domain called the target domain using a semi-supervised approach, which combines modified maximum entropy and bipartite graph clustering. A comparison of opinion classification on reviews from four different product domains is presented. The results demonstrate that the proposed method performs relatively well in comparison to the other methods. Comparison of SentiWordNet of domain-specific and domain-independent words reveals that on average 72.6% and 88.4% of words, respectively, are correctly classified.

  12. Can scientific journals be classified based on their citation profiles?

    Directory of Open Access Journals (Sweden)

    Sayed-Amir Marashi

    2015-03-01

    Classification of scientific publications is of great importance in biomedical research evaluation. However, accurate classification of research publications is challenging and normally is performed in a rather subjective way. In the present paper, we propose to classify biomedical publications into superfamilies, by analysing their citation profiles, i.e. the location of citations in the structure of citing articles. Such a classification may help authors to find the appropriate biomedical journal for publication, may make journal comparisons more rational, and may even help planners to better track the consequences of their policies on biomedical research.

  13. Model-Based Systems Engineering Approach to Managing Mass Margin

    Science.gov (United States)

    Chung, Seung H.; Bayer, Todd J.; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Christopher; Lam, Doris

    2012-01-01

    When designing a flight system from concept through implementation, one of the fundamental systems engineering tasks is managing the mass margin and a mass equipment list (MEL) of the flight system. While generating a MEL and computing a mass margin is conceptually a trivial task, maintaining consistent and correct MELs and mass margins can be challenging due to the current practices of maintaining duplicate information in various forms, such as diagrams and tables, and in various media, such as files and emails. We have overcome this challenge through a model-based systems engineering (MBSE) approach within which we allow only a single-source-of-truth. In this paper we describe the modeling patterns used to capture the single-source-of-truth and the views that have been developed for the Europa Habitability Mission (EHM) project, a mission concept study, at the Jet Propulsion Laboratory (JPL).

  14. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    OpenAIRE

    M. Favorskaya; A. Nosov; A. Popov

    2015-01-01

    Generally, the dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers including the trajectory classifiers in any time instants and the posture classifiers of sub-gestures in selected time instants. The trajectory classifiers contain skin dete...

  15. Effective Heart Disease Detection Based on Quantitative Computerized Traditional Chinese Medicine Using Representation Based Classifiers

    Directory of Open Access Journals (Sweden)

    Ting Shu

    2017-01-01

    At present, heart disease is the number one cause of death worldwide. Traditionally, heart disease is commonly detected using blood tests, electrocardiogram, cardiac computerized tomography scan, cardiac magnetic resonance imaging, and so on. However, these traditional diagnostic methods are time consuming and/or invasive. In this paper, we propose an effective noninvasive computerized method based on facial images to quantitatively detect heart disease. Specifically, facial key block color features are extracted from facial images and analyzed using the Probabilistic Collaborative Representation Based Classifier. The idea of facial key block color analysis is founded in Traditional Chinese Medicine. A new dataset consisting of 581 heart disease and 581 healthy samples was used to evaluate the proposed method. In order to optimize the Probabilistic Collaborative Representation Based Classifier, an analysis of its parameters was performed. According to the experimental results, the proposed method obtains the highest accuracy compared with other classifiers and is proven to be effective at heart disease detection.

  16. The scenario-based generalization of radiation therapy margins

    International Nuclear Information System (INIS)

    Fredriksson, Albin; Bokrantz, Rasmus

    2016-01-01

    We give a scenario-based treatment plan optimization formulation that is equivalent to planning with geometric margins if the scenario doses are calculated using the static dose cloud approximation. If the scenario doses are instead calculated more accurately, then our formulation provides a novel robust planning method that overcomes many of the difficulties associated with previous scenario-based robust planning methods. In particular, our method protects only against uncertainties that can occur in practice, it gives a sharp dose fall-off outside high dose regions, and it avoids underdosage of the target in ‘easy’ scenarios. The method shares the benefits of the previous scenario-based robust planning methods over geometric margins for applications where the static dose cloud approximation is inaccurate, such as irradiation with few fields and irradiation with ion beams. These properties are demonstrated on a suite of phantom cases planned for treatment with scanned proton beams subject to systematic setup uncertainty. (paper)
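
    The contrast drawn above between geometric margins and scenario-based robust optimization can be stated compactly; the generic worst-case formulation below is illustrative only, with x denoting the plan variables, f the planning objective, d_s(x) the dose computed in scenario s, and S the set of setup-error scenarios (notation chosen here, not taken from the paper).

```latex
% Margin-based planning optimizes a nominal dose to an expanded target (CTV + margin),
% whereas scenario-based robust planning protects against the worst case over a
% scenario set S of setup errors. Under the static dose cloud approximation, d_s(x)
% is the nominal dose rigidly shifted by scenario s, and the record states that the
% two formulations become equivalent.
\[
  \min_{x}\; \max_{s \in S}\; f\bigl(d_s(x);\,\mathrm{CTV}\bigr)
  \quad\text{vs.}\quad
  \min_{x}\; f\bigl(d(x);\,\mathrm{CTV}+\text{margin}\bigr)
\]
```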

  17. Information Gain Based Dimensionality Selection for Classifying Text Documents

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel, genetic algorithm based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
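
    The key idea, using per-dimension information gain computed a priori to modulate the GA's mutation probability, can be sketched as follows; the exact scaling between gain and mutation rate is not specified in the abstract, so the linear mapping below is an assumption, and mutual information is used as a stand-in for information gain.

```python
# Sketch: GA mutation step whose per-gene mutation probability depends on the
# information gain of the corresponding dimension (computed once, a priori).
# Mutual information is used here as a stand-in for information gain.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(5)
X = rng.integers(0, 5, size=(300, 40)).astype(float)   # toy term-count matrix
y = (X[:, 0] + X[:, 1] > 4).astype(int)                # labels driven by dimensions 0 and 1

gain = mutual_info_classif(X, y, discrete_features=True, random_state=0)
gain = gain / gain.max() if gain.max() > 0 else gain    # normalize to [0, 1]

base_rate = 0.05
# Assumed scaling: informative dimensions are mutated less, so useful genes survive.
mutation_prob = base_rate * (1.0 - 0.9 * gain)

chromosome = rng.integers(0, 2, size=X.shape[1])        # 1 = keep dimension, 0 = drop it
flip = rng.random(X.shape[1]) < mutation_prob
chromosome[flip] ^= 1
print("dimensions kept after mutation:", int(chromosome.sum()))
```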

  18. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Consumer behavior has been observed to be largely influenced by image data with the increasing familiarity of smart phones and the World Wide Web. The traditional technique of browsing through product varieties on the Internet with text keywords has been gradually replaced by easily accessible image data. The importance of image data has shown a steady growth in application orientation for the business domain with the advent of different image capturing devices and social media. The paper describes a methodology of feature extraction by an image binarization technique for enhancing identification and retrieval of information using content based image recognition. The proposed algorithm was tested on two public datasets, namely, the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images on the whole. It outperformed the state-of-the-art techniques in performance measures and showed statistical significance.

  19. Analysis and minimization of overtraining effect in rule-based classifiers for computer-aided diagnosis

    International Nuclear Information System (INIS)

    Li Qiang; Doi Kunio

    2006-01-01

    Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists detect various lesions in medical images. In CAD schemes, classifiers play a key role in achieving a high lesion detection rate and a low false-positive rate. Although many popular classifiers such as linear discriminant analysis and artificial neural networks have been employed in CAD schemes for reduction of false positives, a rule-based classifier has probably been the simplest and most frequently used one since the early days of development of various CAD schemes. However, with existing rule-based classifiers, there are major disadvantages that significantly reduce their practicality and credibility. The disadvantages include manual design, poor reproducibility, poor evaluation methods such as resubstitution, and a large overtraining effect. An automated rule-based classifier with a minimized overtraining effect can overcome or significantly reduce the extent of the above-mentioned disadvantages. In this study, we developed an 'optimal' method for the selection of cutoff thresholds and a fully automated rule-based classifier. Experimental results performed with Monte Carlo simulation and a real lung nodule CT data set demonstrated that the automated threshold selection method can completely eliminate overtraining effect in the procedure of cutoff threshold selection, and thus can minimize overall overtraining effect in the constructed rule-based classifier. We believe that this threshold selection method is very useful in the construction of automated rule-based classifiers with minimized overtraining effect

  20. Heart Sound Biometric System Based on Marginal Spectrum Analysis

    Science.gov (United States)

    Zhao, Zhidong; Shen, Qinqin; Ren, Fangqin

    2013-01-01

    This work presents a heart sound biometric system based on marginal spectrum analysis, which is a new feature extraction technique for identification purposes. This heart sound identification system is comprised of signal acquisition, pre-processing, feature extraction, training, and identification. Experiments on the selection of the optimal values for the system parameters are conducted. The results indicate that the new spectrum coefficients result in a significant increase in the recognition rate of 94.40% compared with that of the traditional Fourier spectrum (84.32%) based on a database of 280 heart sounds from 40 participants. PMID:23429515

  1. Conference Report: The New Discovery of Margins: Theory-Based Excursions in Marginal Social Fields

    Directory of Open Access Journals (Sweden)

    Babette Kirchner

    2014-05-01

    At this year's spring conference of the Sociology of Knowledge Section of the German Sociological Association, a diverse range of theoretical concepts and multiple empirical insights into different marginal social fields were presented. As in everyday life, drawing a line between center and margin can be seen as an important challenge that must equally be faced in sociology. The socially constructed borderline appears to be highly variable. Therefore it has to be delineated or fixed somehow. The construction of margins is necessary for society in general and smaller social groupings alike to confirm one's own "normal" identity, or one's own membership on the fringes. The different contributions exemplify what was established at the beginning of the conference: Namely that society and its margins are defined differently according to the empirical as well as conceptual focus. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1402148

  2. Conference Report: The New Discovery of Margins: Theory-Based Excursions in Marginal Social Fields

    OpenAIRE

    Kirchner, Babette; Lorenzen, Jule-Marie; Striffler, Christine

    2014-01-01

    At this year's spring conference of the Sociology of Knowledge Section of the German Sociological Association, a diverse range of theoretical concepts and multiple empirical insights into different marginal social fields were presented. As in everyday life, drawing a line between center and margin can be seen as an important challenge that must equally be faced in sociology. The socially constructed borderline appears to be highly variable. Therefore it has to be delineated or fixed somehow. ...

  3. Lung Nodule Image Classification Based on Local Difference Pattern and Combined Classifier.

    Science.gov (United States)

    Mao, Keming; Deng, Zhuofu

    2016-01-01

    This paper proposes a novel lung nodule classification method for low-dose CT images. The method includes two stages. First, Local Difference Pattern (LDP) is proposed to encode the feature representation, which is extracted by comparing intensity differences along circular regions centered at the lung nodule. Then, the single-center classifier is trained based on LDP. Due to the diversity of feature distributions for different classes, the training images are further clustered into multiple cores and the multicenter classifier is constructed. The two classifiers are combined to make the final decision. Experimental results on a public dataset show the superior performance of LDP and the combined classifier.

  4. Lung Nodule Image Classification Based on Local Difference Pattern and Combined Classifier

    Directory of Open Access Journals (Sweden)

    Keming Mao

    2016-01-01

    This paper proposes a novel lung nodule classification method for low-dose CT images. The method includes two stages. First, Local Difference Pattern (LDP) is proposed to encode the feature representation, which is extracted by comparing intensity differences along circular regions centered at the lung nodule. Then, the single-center classifier is trained based on LDP. Due to the diversity of feature distributions for different classes, the training images are further clustered into multiple cores and the multicenter classifier is constructed. The two classifiers are combined to make the final decision. Experimental results on a public dataset show the superior performance of LDP and the combined classifier.
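
    Local Difference Pattern is described as comparing intensity differences along circular regions centred at the nodule; without the paper's exact definition, the sketch below builds an LBP-like binary code from intensities sampled on a circle around the centre pixel, purely to illustrate the kind of encoding involved (the sampling radius, number of points and toy region are assumptions).

```python
# Sketch: a binary code from intensity differences sampled on a circle around the
# nodule centre (an LBP-like stand-in for the Local Difference Pattern described above).
import numpy as np

def circular_difference_code(image, center, radius, n_points=8):
    cy, cx = center
    angles = 2 * np.pi * np.arange(n_points) / n_points
    code = 0
    for k, a in enumerate(angles):
        y = int(round(cy + radius * np.sin(a)))
        x = int(round(cx + radius * np.cos(a)))
        # Set bit k if the sampled intensity is at least the centre intensity.
        if image[y, x] >= image[cy, cx]:
            code |= 1 << k
    return code

rng = np.random.default_rng(6)
roi = rng.integers(0, 256, size=(21, 21))        # toy CT region of interest
print(circular_difference_code(roi, center=(10, 10), radius=5))
```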

  5. SVM Classifiers: The Objects Identification on the Base of Their Hyperspectral Features

    Directory of Open Access Journals (Sweden)

    Demidova Liliya

    2017-01-01

    The problem of identifying objects on the basis of their hyperspectral features has been considered. It is proposed to use SVM classifiers based on a modified PSO algorithm, adapted to the specifics of the problem of identifying objects from their hyperspectral features. Results of object identification on the basis of hyperspectral features using the SVM classifiers are presented.

  6. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Khan, L.; Israël, Menno; Petrushin, V.A.; van den Broek, Egon; van der Putten, Peter

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  7. A NEW FRAMEWORK FOR OBJECT-BASED IMAGE ANALYSIS BASED ON SEGMENTATION SCALE SPACE AND RANDOM FOREST CLASSIFIER

    Directory of Open Access Journals (Sweden)

    A. Hadavand

    2015-12-01

    In this paper a new object-based framework is developed to automate scale selection in image segmentation. The quality of image objects has an important impact on further analyses. Due to the strong dependency of segmentation results on the scale parameter, choosing the best value for this parameter, for each class, becomes a main challenge in object-based image analysis. We propose a new framework which employs a pixel-based land cover map to estimate the initial scale dedicated to each class. These scales are used to build a segmentation scale space (SSS), a hierarchy of image objects. Optimization of the SSS with respect to NDVI and DSM values in each super object is used to get the best scale in local regions of the image scene. Optimized SSS segmentations are finally classified to produce the final land cover map. Very high resolution aerial imagery and the digital surface model provided by the ISPRS 2D semantic labelling dataset are used in our experiments. The result of our proposed method is comparable to that of the ESP tool, a well-known method to estimate the scale of segmentation, and marginally improved the overall accuracy of classification from 79% to 80%.

  8. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    Science.gov (United States)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, being the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution because it may work very well on one set of images with, say, illumination changes but may not work properly on another set of image variations like expression variations. This study is motivated by the fact that any single classifier cannot claim to show generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also addressing the question of the suitability of any classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, which perform better on one task or the other. These classifiers are then combined into an ensemble classifier by two different strategies of weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.
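
    Of the two combination strategies named, the weighted-sum rule is simple to show concretely; the sketch below fuses per-classifier similarity scores after min-max normalization, with the member names, weights and scores invented for illustration (the re-ranking strategy is not shown).

```python
# Sketch: weighted-sum fusion of matching scores from several face classifiers.
# Scores are similarities of one probe image to each of four gallery identities.
import numpy as np

scores = {
    "pca_cosine": np.array([0.62, 0.55, 0.80, 0.40]),
    "lda_euclid": np.array([0.30, 0.20, 0.90, 0.35]),
    "ica_cityblock": np.array([0.50, 0.45, 0.70, 0.60]),
}
weights = {"pca_cosine": 0.3, "lda_euclid": 0.5, "ica_cityblock": 0.2}  # assumed weights

def minmax(v):
    return (v - v.min()) / (v.max() - v.min())

fused = sum(weights[name] * minmax(s) for name, s in scores.items())
print("fused scores:", np.round(fused, 3))
print("predicted identity:", int(np.argmax(fused)))
```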

  9. Learning to Detect Traffic Incidents from Data Based on Tree Augmented Naive Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Dawei Li

    2017-01-01

    Full Text Available This study develops a tree augmented naive Bayesian (TAN) classifier based incident detection algorithm. Compared with the Bayesian network based detection algorithms developed in previous studies, this algorithm has less dependency on experts' knowledge. The structure of the TAN classifier for incident detection is learned from data. The discretization of continuous attributes is processed automatically using an entropy-based method. A simulation dataset on a section of the Ayer Rajah Expressway (AYE) in Singapore is used to demonstrate the development of the proposed algorithm, including wavelet denoising, normalization, entropy-based discretization, and structure learning. The performance of the TAN based algorithm is evaluated against the previously developed Bayesian network (BN) based and multilayer feed-forward (MLF) neural network based algorithms on the same AYE data. The experimental results show that the TAN based algorithm performs better than the BN classifiers and has a similar performance to the MLF based algorithm. However, the TAN based algorithm would have a wider range of applications because the theory of TAN classifiers is much less complicated than that of MLF. The experiments also show that the TAN classifier based algorithm is significantly faster in model training and calibration than MLF.

  10. Simulation-based marginal likelihood for cluster strong lensing cosmology

    Science.gov (United States)

    Killedar, M.; Borgani, S.; Fabjan, D.; Dolag, K.; Granato, G.; Meneghetti, M.; Planelles, S.; Ragone-Figueroa, C.

    2018-01-01

    Comparisons between observed and predicted strong lensing properties of galaxy clusters have been routinely used to claim either tension or consistency with Λ cold dark matter cosmology. However, standard approaches to such cosmological tests are unable to quantify the preference for one cosmology over another. We advocate approximating the relevant Bayes factor using a marginal likelihood that is based on the following summary statistic: the posterior probability distribution function for the parameters of the scaling relation between Einstein radii and cluster mass, α and β. We demonstrate, for the first time, a method of estimating the marginal likelihood using the X-ray selected z > 0.5 Massive Cluster Survey clusters as a case in point and employing both N-body and hydrodynamic simulations of clusters. We investigate the uncertainty in this estimate, and the consequent ability to compare competing cosmologies, which arises from incomplete descriptions of baryonic processes, discrepancies in cluster selection criteria, redshift distribution and dynamical state. The relation between triaxial cluster masses at various overdensities provides a promising alternative to the strong lensing test.

  11. Multiple classifier systems in texton-based approach for the classification of CT images of Lung

    DEFF Research Database (Denmark)

    Gangeh, Mehrdad J.; Sørensen, Lauge; Shaker, Saher B.

    2010-01-01

    In this paper, we propose using texton signatures based on raw pixel representation along with a parallel multiple classifier system for the classification of emphysema in computed tomography images of the lung. The multiple classifier system is composed of support vector machines on the texton... i.e., texton size and k value in k-means. Our results show that while aggregation of single decisions by SVMs over various k values using multiple classifier systems helps to improve the results compared to single SVMs, combining over different texton sizes is not beneficial. The performance of the proposed...

  12. Automatic construction of a recurrent neural network based classifier for vehicle passage detection

    Science.gov (United States)

    Burnaev, Evgeny; Koptelov, Ivan; Novikov, German; Khanipov, Timur

    2017-03-01

    Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for the automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for detection of a vehicle passage through a checkpoint. As input to the classifier we use multidimensional signals of the various sensors that are installed on the checkpoint. The obtained results demonstrate that the previous approach of handcrafting a classifier, consisting of a set of deterministic rules, can be successfully replaced by automatic RNN training on appropriately labelled data.
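
    To make the construction concrete, the sketch below shows a minimal LSTM-based binary classifier of the kind described above, written in PyTorch; the number of sensor channels, sequence length, hidden size, and the synthetic inputs are illustrative assumptions, not values from the paper.

      import torch
      import torch.nn as nn

      class PassageClassifier(nn.Module):
          """Binary classifier over multidimensional sensor sequences (illustrative sizes)."""
          def __init__(self, n_sensors=8, hidden=64):
              super().__init__()
              self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, x):                  # x: (batch, time, n_sensors)
              _, (h_n, _) = self.lstm(x)         # final hidden state summarises the sequence
              return self.head(h_n[-1])          # raw logit; apply a sigmoid for a probability

      model = PassageClassifier()
      logits = model(torch.randn(4, 100, 8))     # four synthetic sequences of 100 time steps
      loss = nn.BCEWithLogitsLoss()(logits.squeeze(1), torch.tensor([1.0, 0.0, 1.0, 0.0]))

    In practice, labelled passage/no-passage windows from the checkpoint sensors would replace the random tensors.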

  13. SAR Target Recognition Based on Multi-feature Multiple Representation Classifier Fusion

    Directory of Open Access Journals (Sweden)

    Zhang Xinzheng

    2017-10-01

    Full Text Available In this paper, we present a Synthetic Aperture Radar (SAR) image target recognition algorithm based on multi-feature multiple representation learning classifier fusion. First, it extracts three features from the SAR images, namely principal component analysis, wavelet transform, and Two-Dimensional Slice Zernike Moments (2DSZM) features. Second, we harness the sparse representation classifier and the cooperative representation classifier with the above-mentioned features to get six predictive labels. Finally, we adopt classifier fusion to obtain the final recognition decision. We investigated three different classifier fusion algorithms in our experiments, and the results demonstrate that using Bayesian decision fusion gives the best recognition performance. The method based on multi-feature multiple representation learning classifier fusion integrates the discrimination of multiple features and combines the sparse and cooperative representation classification performance to gain complementary advantages and to improve recognition accuracy. The experiments are based on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, and they demonstrate the effectiveness of the proposed approach.

  14. Feature and score fusion based multiple classifier selection for iris recognition.

    Science.gov (United States)

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al.
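
    The fusion step is essentially plurality voting over the four subsystem decisions. A minimal, generic sketch of that step is given below; the subsystem outputs are mocked, not produced by the paper's HMM classifiers.

      from collections import Counter

      def plurality_vote(decisions):
          """Combine identity decisions from several recognition subsystems by voting."""
          winner, _ = Counter(decisions).most_common(1)[0]
          return winner

      # hypothetical outputs of the four subsystems for one probe sample
      print(plurality_vote(["subject_12", "subject_12", "subject_07", "subject_12"]))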

  15. FERAL : Network-based classifier with application to breast cancer outcome prediction

    NARCIS (Netherlands)

    Allahyar, A.; De Ridder, J.

    2015-01-01

    Motivation: Breast cancer outcome prediction based on gene expression profiles is an important strategy for personalized patient care. To improve the performance and consistency of discovered markers of the initial molecular classifiers, network-based outcome prediction methods (NOPs) have been proposed.

  16. Detecting Dutch political tweets : A classifier based on voting system using supervised learning

    NARCIS (Netherlands)

    de Mello Araújo, Eric Fernandes; Ebbelaar, Dave

    The task of classifying political tweets has been shown to be very difficult, with controversial results in many works and with non-replicable methods. Most of the works with this goal use rule-based methods to identify political tweets. We propose here two methods, one being a rule-based approach,

  17. Thai Finger-Spelling Recognition Using a Cascaded Classifier Based on Histogram of Orientation Gradient Features

    Directory of Open Access Journals (Sweden)

    Kittasil Silanon

    2017-01-01

    Full Text Available Hand posture recognition is an essential module in applications such as human-computer interaction (HCI), games, and sign language systems, in which performance and robustness are the primary requirements. In this paper, we propose automatic classification to recognize 21 hand postures that represent letters in Thai finger-spelling, based on the Histogram of Orientation Gradient (HOG) feature (which focuses on the information within certain regions of the image rather than each single pixel) and the Adaptive Boost (AdaBoost) learning technique, which selects the best weak classifiers and constructs a strong classifier consisting of several weak classifiers to be cascaded in a detection architecture. We collected 21 static hand posture images from 10 subjects for testing and training in Thai letter finger-spelling. The parameters of the training process were adjusted in three experiments, covering false positive rates (FPR), true positive rates (TPR), and number of training stages (N), to achieve the most suitable training model for each hand posture. All cascaded classifiers are loaded into the system simultaneously to classify different hand postures. A correlation coefficient is computed to distinguish hand postures that are similar. The system achieves approximately 78% accuracy on average over all classifier experiments.
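
    As a rough illustration of the feature/learning combination named above, the snippet below extracts HOG descriptors with scikit-image and trains an AdaBoost ensemble with scikit-learn. It fits a single boosted classifier on synthetic images rather than the per-posture cascaded detectors of the paper; image sizes and HOG parameters are assumptions.

      import numpy as np
      from skimage.feature import hog
      from sklearn.ensemble import AdaBoostClassifier

      rng = np.random.default_rng(0)
      images = rng.random((40, 64, 64))           # stand-ins for 64x64 grayscale hand images
      labels = rng.integers(0, 2, size=40)        # 1 = target posture, 0 = background

      # HOG summarises local gradient orientations over cells and blocks, not raw pixels
      X = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                        cells_per_block=(2, 2)) for img in images])

      clf = AdaBoostClassifier(n_estimators=200)  # boosting selects and weights weak learners
      clf.fit(X, labels)
      print(clf.score(X, labels))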

  18. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations.

    Science.gov (United States)

    Zhang, Yi; Ren, Jinchang; Jiang, Jianmin

    2015-01-01

    Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization based nonparametric method in this context. Recently, it has been found that SVM in some cases is equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitates soft decision making. In total, four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized as Gaussian/non-Gaussian distributed and balanced/unbalanced, and these characteristics are then used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported that indicate how the combined classifier may work under various conditions.
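
    The paper's specific SVM-MLC coupling is not reproduced here; the snippet below only illustrates the general idea of giving an SVM probabilistic outputs (Platt-style calibration in scikit-learn) so that soft decisions become possible, using one of the data domains mentioned above.

      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      X, y = load_breast_cancer(return_X_y=True)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # probability=True fits a sigmoid calibration on top of the SVM decision values
      svm = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)

      proba = svm.predict_proba(X_te)             # class posteriors enable soft decision making
      print(proba[:3])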

  19. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations

    Directory of Open Access Journals (Sweden)

    Yi Zhang

    2015-01-01

    Full Text Available Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization based nonparametric method in this context. Recently, it has been found that SVM in some cases is equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitates soft decision making. In total, four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized as Gaussian/non-Gaussian distributed and balanced/unbalanced, and these characteristics are then used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported that indicate how the combined classifier may work under various conditions.

  20. Evaluation of motion management strategies based on required margins

    International Nuclear Information System (INIS)

    Sawkey, D; Svatos, M; Zankowski, C

    2012-01-01

    Strategies for delivering radiation to a moving lesion each require a margin to compensate for uncertainties in treatment. These motion margins have been determined here by separating the total uncertainty into components. Probability density functions for the individual sources of uncertainty were calculated for ten motion traces obtained from the literature. Motion margins required to compensate for the center of mass motion of the clinical treatment volume were found by convolving the individual sources of uncertainty. For measurements of position at a frequency of 33 Hz, system latency was the dominant source of positional uncertainty. Averaged over the ten motion traces, the motion margin for tracking with a latency of 200 ms was 4.6 mm. Gating with a duty cycle of 33% required a mean motion margin of 3.2–3.4 mm, and tracking with a latency of 100 ms required a motion margin of 3.1 mm. Feasible reductions in the effects of the sources of uncertainty, for example by using a simple prediction algorithm to anticipate the lesion position at the end of the latency period, resulted in a mean motion margin of 1.7 mm for tracking with a latency of 100 ms, 2.4 mm for tracking with a latency of 200 ms, and 2.1–2.2 mm for the gating strategies with duty cycles of 33%. A crossover tracking latency of 150 ms was found, below which tracking strategies could take advantage of narrower motion margins than gating strategies. The methods described here provide a means to guide selection of a motion management strategy for a given patient. (paper)
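
    A toy numerical illustration of the convolution step described above: two independent positional-uncertainty PDFs are convolved and the margin is read off as the interval containing a chosen fraction of the total probability. The Gaussian/uniform shapes, their widths, and the 95% coverage level are assumptions made for the sketch, not the paper's values.

      import numpy as np

      dx = 0.01                                   # position grid spacing in mm
      x = np.arange(-15, 15, dx)

      latency_err = np.exp(-0.5 * (x / 1.5) ** 2)          # e.g. residual motion during latency
      localise_err = np.where(np.abs(x) <= 1.0, 1.0, 0.0)  # e.g. uniform localisation error

      total = np.convolve(latency_err, localise_err, mode="same")
      total /= total.sum() * dx                   # normalise the combined PDF

      cdf = np.cumsum(total) * dx
      lo, hi = np.interp([0.025, 0.975], cdf, x)  # central 95% interval of the combined error
      print("required motion margin ~ +/- %.1f mm" % max(abs(lo), abs(hi)))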

  1. Oil inventories should be based on margins, supply reliability

    International Nuclear Information System (INIS)

    Waguespack, K.; Cantor, B.D.

    1996-01-01

    US oil inventories have plummeted to their lowest recorded levels this year, leading industry observers to conclude that refiners have adopted new just-in-time (JIT) inventory policies. Total crude oil inventories are about 300 million bbl -- 8% below the 10-year average. Distillate inventories posted similar declines this year because of unusually cold winter temperatures and refiners' reluctance to build sufficient stocks in the autumn months. Gasoline stocks are 20% below the 10-year average at 200 million bbl, despite forecasts of record-high gasoline demand this summer. The sudden drop in crude and product inventories this year is widely considered a sign that refiners have implemented JIT, signaling a permanent shift to reduced stocks. The authors submit that the shift towards reduced oil inventories is not related to a concerted adoption of JIT by US refiners, and that oil inventory management decisions should instead be based on refining margins and supply reliability. The paper discusses the JIT revolution and the optimal-inventory model

  2. A bench-top hyperspectral imaging system to classify beef from Nellore cattle based on tenderness

    Science.gov (United States)

    Nubiato, Keni Eduardo Zanoni; Mazon, Madeline Rezende; Antonelo, Daniel Silva; Calkins, Chris R.; Naganathan, Govindarajan Konda; Subbiah, Jeyamkondan; da Luz e Silva, Saulo

    2018-03-01

    The aim of this study was to evaluate the accuracy of classification of Nellore beef aged for 0, 7, 14, or 21 days and classification based on tenderness and aging period using a bench-top hyperspectral imaging system. A hyperspectral imaging system (λ = 928-2524 nm) was used to collect hyperspectral images of the Longissimus thoracis et lumborum (aging n = 376 and tenderness n = 345) of Nellore cattle. The image processing steps included selection of the region of interest, extraction of spectra, and identification and evaluation of selected wavelengths for classification. Six linear discriminant models were developed to classify samples based on tenderness and aging period. The model using the first derivative of partial absorbance spectra (give wavelength range spectra) was able to classify steaks based on tenderness with an overall accuracy of 89.8%. The model using the first derivative of full absorbance spectra was able to classify steaks based on aging period with an overall accuracy of 84.8%. The results demonstrate that hyperspectral imaging may be a viable technology for classifying beef based on tenderness and aging period.

  3. Multiobjective optimization of classifiers by means of 3D convex-hull-based evolutionary algorithms

    NARCIS (Netherlands)

    Zhao, J.; Basto, Fernandes V.; Jiao, L.; Yevseyeva, I.; Asep, Maulana A.; Li, R.; Bäck, T.H.W.; Tang, T.; Michael, Emmerich T. M.

    2016-01-01

    The receiver operating characteristic (ROC) and detection error tradeoff (DET) curves are frequently used in the machine learning community to analyze the performance of binary classifiers. Recently, the convex-hull-based multiobjective genetic programming algorithm was proposed and successfully

  4. Localization and Recognition of Dynamic Hand Gestures Based on Hierarchy of Manifold Classifiers

    Science.gov (United States)

    Favorskaya, M.; Nosov, A.; Popov, A.

    2015-05-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers for any time instants and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized through predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples which do not satisfy the current trajectory. The posture classifiers involve the normalized skeleton representation of palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results", including 393 dynamic hand gestures, was chosen. The proposed method yielded 84-91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  5. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    Directory of Open Access Journals (Sweden)

    M. Favorskaya

    2015-05-01

    Full Text Available Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers for any time instants and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized through predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples which do not satisfy the current trajectory. The posture classifiers involve the normalized skeleton representation of palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset “Multi-modal Gesture Recognition Challenge 2013: Dataset and Results”, including 393 dynamic hand gestures, was chosen. The proposed method yielded 84–91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  6. Heterogeneity wavelet kinetics from DCE-MRI for classifying gene expression based breast cancer recurrence risk.

    Science.gov (United States)

    Mahrooghy, Majid; Ashraf, Ahmed B; Daye, Dania; Mies, Carolyn; Feldman, Michael; Rosen, Mark; Kontos, Despina

    2013-01-01

    Breast tumors are heterogeneous lesions. Intra-tumor heterogeneity presents a major challenge for cancer diagnosis and treatment. Few studies have worked on capturing tumor heterogeneity from imaging; most studies to date consider aggregate measures for tumor characterization. In this work we capture tumor heterogeneity by partitioning tumor pixels into subregions and extracting heterogeneity wavelet kinetic (HetWave) features from breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to obtain the spatiotemporal patterns of the wavelet coefficients and contrast agent uptake from each partition. Using a genetic algorithm for feature selection and a logistic regression classifier with leave-one-out cross validation, we tested our proposed HetWave features on the task of classifying breast cancer recurrence risk. The classifier based on our features gave an ROC AUC of 0.78, outperforming previously proposed kinetic, texture, and spatial enhancement variance features, which give AUCs of 0.69, 0.64, and 0.65, respectively.
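
    The evaluation protocol (logistic regression with leave-one-out cross-validation, scored by ROC AUC) can be sketched with scikit-learn as below; the HetWave features themselves and the genetic-algorithm selection step are not reproduced, so random numbers stand in for the selected features.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      rng = np.random.default_rng(1)
      X = rng.normal(size=(60, 12))               # stand-in for GA-selected HetWave features
      y = rng.integers(0, 2, size=60)             # 1 = high recurrence risk (illustrative)

      scores = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                                 cv=LeaveOneOut(), method="predict_proba")[:, 1]
      print("LOOCV ROC AUC:", roc_auc_score(y, scores))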

  7. Combining Biometric Fractal Pattern and Particle Swarm Optimization-Based Classifier for Fingerprint Recognition

    Directory of Open Access Journals (Sweden)

    Chia-Hung Lin

    2010-01-01

    Full Text Available This paper proposes combining a biometric fractal pattern and a particle swarm optimization (PSO) based classifier for fingerprint recognition. Fingerprints have arch, loop, whorl, and accidental morphologies, and embed singular points, resulting in the establishment of fingerprint individuality. An automatic fingerprint identification system consists of two stages: digital image processing (DIP) and pattern recognition. DIP is used to convert images to binary form, filter out noise, and locate the reference point. For the binary images, Katz's algorithm is employed to estimate the fractal dimension (FD) from a two-dimensional (2D) image. Biometric features are extracted as fractal patterns using different FDs. A probabilistic neural network (PNN) is used as a classifier to compare the fractal patterns within the small-scale database. A PSO algorithm is used to tune the optimal parameters and heighten the accuracy. For 30 subjects in the laboratory, the proposed classifier demonstrates greater efficiency and higher accuracy in fingerprint recognition.

  8. Effective Sequential Classifier Training for SVM-Based Multitemporal Remote Sensing Image Classification

    Science.gov (United States)

    Guo, Yiqing; Jia, Xiuping; Paull, David

    2018-06-01

    The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the number of training samples required for classifier training on an incoming image. For each incoming image, a rough classifier is first predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies compared with two state-of-the-art model transfer algorithms. When training data are insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with that obtained without the assistance of previous images. These results demonstrate that leveraging a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
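
    The core idea can be sketched in a few lines of NumPy under simplifying assumptions: extrapolate the linear trend of earlier classifiers' weight vectors to predict a starting classifier for the new image, then fine-tune it with a handful of hinge-loss (SVM-style) updates on the new samples. The toy weight histories and samples below are invented for illustration and do not reproduce the paper's SVM formulation.

      import numpy as np

      def predict_next(weight_history):
          """Linearly extrapolate the temporal trend of previous classifiers' weights."""
          t = np.arange(len(weight_history))
          slope = np.polyfit(t, np.stack(weight_history), 1)[0]
          return weight_history[-1] + slope

      def fine_tune(w, X, y, lr=0.1, epochs=20):
          """A few hinge-loss gradient steps using the scarce new training samples."""
          for _ in range(epochs):
              for xi, yi in zip(X, y):            # yi in {-1, +1}
                  if yi * xi.dot(w) < 1.0:        # margin violated -> update the weights
                      w = w + lr * yi * xi
          return w

      history = [np.array([0.8, -0.2]), np.array([0.9, -0.3]), np.array([1.0, -0.4])]
      w0 = predict_next(history)                  # rough classifier for the incoming image
      w = fine_tune(w0, np.array([[0.9, 0.1], [0.1, 0.8]]), np.array([1, -1]))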

  9. Finger vein identification using fuzzy-based k-nearest centroid neighbor classifier

    Science.gov (United States)

    Rosdi, Bakhtiar Affendi; Jaafar, Haryati; Ramli, Dzati Athiar

    2015-02-01

    In this paper, a new approach for personal identification using finger vein image is presented. Finger vein is an emerging type of biometrics that attracts attention of researchers in biometrics area. As compared to other biometric traits such as face, fingerprint and iris, finger vein is more secured and hard to counterfeit since the features are inside the human body. So far, most of the researchers focus on how to extract robust features from the captured vein images. Not much research was conducted on the classification of the extracted features. In this paper, a new classifier called fuzzy-based k-nearest centroid neighbor (FkNCN) is applied to classify the finger vein image. The proposed FkNCN employs a surrounding rule to obtain the k-nearest centroid neighbors based on the spatial distributions of the training images and their distance to the test image. Then, the fuzzy membership function is utilized to assign the test image to the class which is frequently represented by the k-nearest centroid neighbors. Experimental evaluation using our own database which was collected from 492 fingers shows that the proposed FkNCN has better performance than the k-nearest neighbor, k-nearest-centroid neighbor and fuzzy-based-k-nearest neighbor classifiers. This shows that the proposed classifier is able to identify the finger vein image effectively.
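
    A bare-bones NumPy illustration of the k-nearest centroid neighbour ("surrounding") rule that FkNCN builds on: each new neighbour is chosen so that the centroid of the selected neighbours stays as close as possible to the test sample. The fuzzy membership step and the real vein features are omitted; the tiny 2D points are invented.

      import numpy as np

      def k_nearest_centroid_neighbors(x, X_train, k):
          """Select k training samples whose running centroid best surrounds x."""
          chosen, remaining = [], list(range(len(X_train)))
          for _ in range(k):
              best, best_dist = None, np.inf
              for i in remaining:
                  centroid = X_train[chosen + [i]].mean(axis=0)
                  d = np.linalg.norm(centroid - x)
                  if d < best_dist:
                      best, best_dist = i, d
              chosen.append(best)
              remaining.remove(best)
          return chosen

      X_train = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [5.0, 5.0]])
      print(k_nearest_centroid_neighbors(np.array([1.0, 1.0]), X_train, k=3))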

  10. Textual and shape-based feature extraction and neuro-fuzzy classifier for nuclear track recognition

    Science.gov (United States)

    Khayat, Omid; Afarideh, Hossein

    2013-04-01

    Track counting algorithms, as one of the fundamental tools of nuclear science, have received renewed emphasis in recent years. Accurate measurement of nuclear tracks on solid-state nuclear track detectors is the aim of track counting systems. Commonly, track counting systems comprise a hardware system for the imaging task and software for analysing the track images. In this paper, a track recognition algorithm based on 12 defined textual and shape-based features and a neuro-fuzzy classifier is proposed. The features are defined so as to discern the tracks from the background and small objects. Then, according to the defined features, tracks are detected using a trained neuro-fuzzy system. The features and the classifier are finally validated on 100 alpha track images and 40 training samples. It is shown that the principal textual and shape-based features jointly yield a high rate of track detection compared with single-feature based methods.

  11. A method of distributed avionics data processing based on SVM classifier

    Science.gov (United States)

    Guo, Hangyu; Wang, Jinyan; Kang, Minyang; Xu, Guojing

    2018-03-01

    In a system-level combat environment, in order to solve the problem of managing and analyzing the massive heterogeneous data of a multi-platform avionics system, this paper proposes a management solution called the avionics "resource cloud", based on big data technology, and designs a decision-aiding classifier based on the SVM algorithm. We designed an experiment with an STK simulation; the results show that this method has high accuracy and a broad application prospect.

  12. Fuzzy prototype classifier based on items and its application in recommender system

    Directory of Open Access Journals (Sweden)

    Mei Cai

    2017-01-01

    Full Text Available Currently, recommender systems (RS) are incorporating implicit information from the social circles of the Internet. The implicit social information in human minds is not easy to reflect in appropriate decision making techniques. This paper makes two contributions. First, we develop an item-based prototype classifier (IPC) in which a prototype represents a social circle's preferences, as a pattern classification technique. We assume the social circle is distinguished from others by the items its members like. The prototype structure of the classifier is defined by two 2-dimensional matrices. We use information gain and an OWA aggregator to construct a feature space. The item-based classifier assigns a new item to some prototypes with different prototypicalities. We reform a typical data set, the Iris data set in the UCI Machine Learning Repository, to verify our fuzzy prototype classifier. The second contribution of this paper is the application of IPC in a recommender system to solve new-item cold-start problems. We modify the MovieLens dataset to perform experimental demonstrations of the proposed ideas.

  13. Classifier models and architectures for EEG-based neonatal seizure detection

    International Nuclear Information System (INIS)

    Greene, B R; Marnane, W P; Lightbody, G; Reilly, R B; Boylan, G B

    2008-01-01

    Neonatal seizures are the most common neurological emergency in the neonatal period and are associated with a poor long-term outcome. Early detection and treatment may improve prognosis. This paper aims to develop an optimal set of parameters and a comprehensive scheme for patient-independent multi-channel EEG-based neonatal seizure detection. We employed a dataset containing 411 neonatal seizures. The dataset consists of multi-channel EEG recordings with a mean duration of 14.8 h from 17 neonatal patients. Early-integration and late-integration classifier architectures were considered for the combination of information across EEG channels. Three classifier models based on linear discriminants, quadratic discriminants and regularized discriminants were employed. Furthermore, the effect of electrode montage was considered. The best performing seizure detection system was found to be an early integration configuration employing a regularized discriminant classifier model. A referential EEG montage was found to outperform the more standard bipolar electrode montage for automated neonatal seizure detection. A cross-fold validation estimate of the classifier performance for the best performing system yielded 81.03% of seizures correctly detected with a false detection rate of 3.82%. With post-processing, the false detection rate was reduced to 1.30% with 59.49% of seizures correctly detected. These results represent a comprehensive illustration that robust reliable patient-independent neonatal seizure detection is possible using multi-channel EEG
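
    The early- versus late-integration contrast can be sketched schematically as below, with scikit-learn's quadratic discriminant (its reg_param playing the role of regularization) standing in for the paper's discriminant models and random per-channel features standing in for the EEG features.

      import numpy as np
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA

      rng = np.random.default_rng(2)
      n, channels, feats = 200, 8, 5
      X = rng.normal(size=(n, channels, feats))   # per-channel EEG feature vectors (synthetic)
      y = rng.integers(0, 2, size=n)              # 1 = seizure epoch (illustrative labels)

      # Early integration: concatenate channel features and train a single classifier
      early = QDA(reg_param=0.1).fit(X.reshape(n, -1), y)

      # Late integration: one classifier per channel, then combine the channel probabilities
      late = [QDA(reg_param=0.1).fit(X[:, c, :], y) for c in range(channels)]
      p = np.mean([m.predict_proba(X[:, c, :])[:, 1] for c, m in enumerate(late)], axis=0)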

  14. WEB-BASED ADAPTIVE TESTING SYSTEM (WATS FOR CLASSIFYING STUDENTS ACADEMIC ABILITY

    Directory of Open Access Journals (Sweden)

    Jaemu LEE,

    2012-08-01

    Full Text Available Computer Adaptive Testing (CAT) has been highlighted as a promising assessment method to fulfill two testing purposes: estimating student academic ability and classifying student academic level. In this paper, we introduce the Web-based Adaptive Testing System (WATS), developed to support a cost effective assessment for classifying students' ability into different academic levels. Instead of using a traditional paper and pencil test, the WATS is expected to serve as an alternative method to promptly diagnose and identify underachieving students through Web-based testing. The WATS can also help provide students with appropriate learning content and necessary academic support in time. In this paper, the theoretical background and structure of WATS, the item construction process based upon item response theory, and the user interfaces of WATS are discussed.

  15. Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2014-01-01

    Full Text Available The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al.

  16. Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers

    OpenAIRE

    Rosenberg, Ishai; Shabtai, Asaf; Rokach, Lior; Elovici, Yuval

    2017-01-01

    In this paper, we present a black-box attack against API call based machine learning malware classifiers, focusing on generating adversarial sequences combining API calls and static features (e.g., printable strings) that will be misclassified by the classifier without affecting the malware functionality. We show that this attack is effective against many classifiers due to the transferability principle between RNN variants, feed forward DNNs, and traditional machine learning classifiers such...

  17. Fusion of classifiers for REIS-based detection of suspicious breast lesions

    Science.gov (United States)

    Lederman, Dror; Wang, Xingwei; Zheng, Bin; Sumkin, Jules H.; Tublin, Mitchell; Gur, David

    2011-03-01

    After developing a multi-probe resonance-frequency electrical impedance spectroscopy (REIS) system aimed at detecting women with breast abnormalities that may indicate a developing breast cancer, we have been conducting a prospective clinical study to explore the feasibility of applying this REIS system to classify younger women at higher than average risk of having or developing breast cancer. The system comprises one central probe placed in contact with the nipple and six additional probes uniformly distributed along an outside circle to be placed in contact with six points on the outer breast skin surface. In this preliminary study, we selected an initial set of 174 examinations of participants that had completed REIS examinations and had clinical status verification. Among these, 66 examinations were recommended for biopsy due to findings of a highly suspicious breast lesion ("positives"), and 108 were determined to be negative during imaging-based procedures ("negatives"). A set of REIS-based features, extracted using a mirror-matched approach, was computed and fed into five machine learning classifiers. A genetic algorithm was used to select an optimal subset of features for each of the five classifiers. Three fusion rules, namely the sum rule, weighted sum rule and weighted median rule, were used to combine the results of the classifiers. Performance evaluation was performed using a leave-one-case-out cross-validation method. The results indicated that REIS may provide a new technology to identify younger women with higher than average risk of having or developing breast cancer. Furthermore, it was shown that fusion rules, such as a weighted median fusion rule and a weighted sum fusion rule, may improve performance as compared with the highest performing single classifier.

  18. Ensemble Classifiers for Predicting HIV-1 Resistance from Three Rule-Based Genotypic Resistance Interpretation Systems.

    Science.gov (United States)

    Raposo, Letícia M; Nobre, Flavio F

    2017-08-30

    Resistance to antiretrovirals (ARVs) is a major problem faced by HIV-infected individuals. Different rule-based algorithms were developed to infer HIV-1 susceptibility to antiretrovirals from genotypic data. However, there is discordance between them, resulting in difficulties for clinical decisions about which treatment to use. Here, we developed ensemble classifiers integrating three interpretation algorithms: Agence Nationale de Recherche sur le SIDA (ANRS), Rega, and the genotypic resistance interpretation system from Stanford HIV Drug Resistance Database (HIVdb). Three approaches were applied to develop a classifier with a single resistance profile: stacked generalization, a simple plurality vote scheme and the selection of the interpretation system with the best performance. The strategies were compared with the Friedman's test and the performance of the classifiers was evaluated using the F-measure, sensitivity and specificity values. We found that the three strategies had similar performances for the selected antiretrovirals. For some cases, the stacking technique with naïve Bayes as the learning algorithm showed a statistically superior F-measure. This study demonstrates that ensemble classifiers can be an alternative tool for clinical decision-making since they provide a single resistance profile from the most commonly used resistance interpretation systems.
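
    The stacked-generalization strategy can be sketched with scikit-learn as below, with naive Bayes as the meta-learner; the three base models merely stand in for the ANRS, Rega, and HIVdb rule-based interpretations, which are not reimplemented here, and the data are synthetic.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import StackingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.naive_bayes import GaussianNB
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=300, n_features=10, random_state=0)

      stack = StackingClassifier(
          estimators=[("sys1", LogisticRegression(max_iter=1000)),
                      ("sys2", DecisionTreeClassifier(max_depth=3)),
                      ("sys3", LogisticRegression(C=0.1, max_iter=1000))],
          final_estimator=GaussianNB())           # naive Bayes combines the base predictions
      stack.fit(X, y)
      print(stack.score(X, y))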

  19. Detecting and classifying method based on similarity matching of Android malware behavior with profile.

    Science.gov (United States)

    Jang, Jae-Wook; Yun, Jaesung; Mohaisen, Aziz; Woo, Jiyoung; Kim, Huy Kang

    2016-01-01

    Mass-market mobile security threats have increased recently due to the growth of mobile technologies and the popularity of mobile devices. Accordingly, techniques have been introduced for identifying, classifying, and defending against mobile threats utilizing static, dynamic, on-device, and off-device techniques. Static techniques are easy to evade, while dynamic techniques are expensive. On-device techniques are prone to evasion, while off-device techniques need to be always online. To address some of those shortcomings, we introduce Andro-profiler, a hybrid behavior based analysis and classification system for mobile malware. Andro-profiler's main goals are efficiency, scalability, and accuracy. To that end, Andro-profiler classifies malware by exploiting behavior profiles extracted from the integrated system logs, including system calls. Andro-profiler executes a malicious application on an emulator in order to generate the integrated system logs, and creates human-readable behavior profiles by analyzing the integrated system logs. By comparing the behavior profile of a malicious application with a representative behavior profile for each malware family using a weighted similarity matching technique, Andro-profiler detects and classifies it into malware families. The experimental results demonstrate that Andro-profiler is scalable, performs well in detecting and classifying malware with accuracy greater than 98%, outperforms the existing state-of-the-art work, and is capable of identifying 0-day mobile malware samples.

  20. Predicting protein subcellular locations using hierarchical ensemble of Bayesian classifiers based on Markov chains

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2006-06-01

    Full Text Available Abstract Background: The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location of a given protein when only the amino acid sequence of the protein is known. Although many efforts have been made to predict subcellular location from sequence information only, further research is needed to improve the accuracy of prediction. Results: A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets; among them are a Gram-negative bacteria dataset, data for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve the prediction accuracy of classes with few sequences in training and is therefore useful for datasets with an imbalanced distribution of classes. Conclusion: This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and competitive with the previously reported approaches in terms of prediction accuracy, as empirical results indicate. The code for the software is available upon request.

  1. Graphic Symbol Recognition using Graph Based Signature and Bayesian Network Classifier

    OpenAIRE

    Luqman, Muhammad Muzzamil; Brouard, Thierry; Ramel, Jean-Yves

    2010-01-01

    We present a new approach for recognition of complex graphic symbols in technical documents. Graphic symbol recognition is a well known challenge in the field of document image analysis and is at heart of most graphic recognition systems. Our method uses structural approach for symbol representation and statistical classifier for symbol recognition. In our system we represent symbols by their graph based signatures: a graphic symbol is vectorized and is converted to an attributed relational g...

  2. Exemplar-based optical neural net classifier for color pattern recognition

    Science.gov (United States)

    Yu, Francis T. S.; Uang, Chii-Maw; Yang, Xiangyang

    1992-10-01

    We present a color exemplar-based neural network that can be used as an optimum image classifier or an associative memory. Color decomposition and composition technique is used for constructing the polychromatic interconnection weight matrix (IWM). The Hamming net algorithm is modified to relax the dynamic range requirement of the spatial light modulator and to reduce the number of iteration cycles in the winner-take-all layer. Computer simulation results demonstrated the feasibility of this approach

  3. A Novel Approach for Multi Class Fault Diagnosis in Induction Machine Based on Statistical Time Features and Random Forest Classifier

    Science.gov (United States)

    Sonje, M. Deepak; Kundu, P.; Chowdhury, A.

    2017-08-01

    Fault diagnosis and detection is an important area in the health monitoring of electrical machines. This paper applies a recently developed machine learning classifier to multi-class fault diagnosis in an induction machine. The classification is based on the random forest (RF) algorithm. Initially, stator currents are acquired from the induction machine under various conditions. After preprocessing the currents, fourteen statistical time features are estimated for each phase of the current. These parameters are used as inputs to the classifier. The main scope of the paper is to evaluate the effectiveness of the RF classifier for individual and mixed fault diagnosis in the induction machine. Stator, rotor, and mixed (stator and rotor) faults are classified using the proposed classifier. The obtained performance measures are compared with those of a multilayer perceptron neural network (MLPNN) classifier. The results show much better performance measures and higher accuracy than the MLPNN classifier. To demonstrate the proposed fault diagnosis algorithm, experimentally obtained results are used to build the classifier, making it more practical.
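
    An illustrative feature-extraction-plus-random-forest pipeline is sketched below; the paper's exact fourteen statistical time features are not listed in the abstract, so a representative subset is computed per current phase, and the three-phase current signals are synthetic.

      import numpy as np
      from scipy import stats
      from sklearn.ensemble import RandomForestClassifier

      def time_features(phase):
          """A representative subset of statistical time features for one current phase."""
          return [phase.mean(), phase.std(), phase.min(), phase.max(),
                  stats.skew(phase), stats.kurtosis(phase), np.sqrt((phase ** 2).mean())]

      rng = np.random.default_rng(3)
      signals = rng.normal(size=(120, 3, 2000))   # 120 recordings x 3 phases x samples
      labels = rng.integers(0, 4, size=120)       # healthy / stator / rotor / mixed (illustrative)

      X = np.array([np.concatenate([time_features(ph) for ph in rec]) for rec in signals])
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)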

  4. Identifying and Classifying Mobile Business Models Based on Meta-Synthesis Approach

    Directory of Open Access Journals (Sweden)

    Porrandokht Niroomand

    2012-03-01

    Full Text Available The appearance of mobile technology has provided unique opportunities and fields for the development and creation of businesses and has been able to create new job opportunities. The current research tries to familiarize entrepreneurs who are running businesses, especially in the area of mobile services, with business models. These business models can prepare them for implementing new ideas and designs as they enter the business market. A search of the literature shows that there are no prior papers or studies that identify, categorize and analyze mobile business models; consequently, this paper is novel in that respect. The first part of this paper presents a review of the concepts and theories about the different mobile generations, mobile commerce and business models. Afterwards, 92 models drawn from 33 papers and books are compared, interpreted, translated and combined based on two different criteria: an expert criterion and a kind-of-product criterion. In the classification according to the models presented by experts, the models are classified based on criteria such as business fields, business partners, the rate of dynamism, the kind of activity, the focus areas, the mobile generations, transparency, the type of operator activities, marketing and advertisements. The models classified based on the kind of product have been analyzed and grouped into four different areas of mobile commerce: content production, technology (software and hardware), network, and synthetic.

  5. Accountable Accounting: Carbon-Based Management on Marginal Lands

    Directory of Open Access Journals (Sweden)

    Tara L. DiRocco

    2014-04-01

    Full Text Available Substantial discussion exists concerning the best land use options for mitigating greenhouse gas (GHG) emissions on marginal land. Emissions-mitigating land use options include displacement of fossil fuels via biofuel production and afforestation. Comparing C recovery dynamics under these different options is crucial to assessing the efficacy of offset programs. In this paper, we focus on forest recovery on marginal land and show that there is substantial inaccuracy and discrepancy in the literature concerning carbon accumulation. We find that uncertainty in carbon accumulation arises both in estimates of carbon stocks and in models of carbon dynamics over time. We suggest that analyses to date have been largely unsuccessful at determining reliable trends in site recovery due to broad land use categories, a failure to consider the effects of current and post-restoration management, and problems with meta-analysis. Understanding of C recovery could be greatly improved with increased data collection on pre-restoration site quality, prior land use history, and management practices, as well as increased methodological standardization. Finally, given the current and likely future uncertainty in C dynamics, we recommend that carbon mitigation potential should not be the only environmental service driving land use decisions on marginal lands.

  6. Training Classifiers with Shadow Features for Sensor-Based Human Activity Recognition.

    Science.gov (United States)

    Fong, Simon; Song, Wei; Cho, Kyungeun; Wong, Raymond; Wong, Kelvin K L

    2017-02-27

    In this paper, a novel training/testing process for building/using a classification model based on human activity recognition (HAR) is proposed. Traditionally, HAR has been accomplished by a classifier that learns the activities of a person by training with skeletal data obtained from a motion sensor, such as Microsoft Kinect. These skeletal data are the spatial coordinates (x, y, z) of different parts of the human body. The numeric information forms time series, temporal records of movement sequences that can be used for training a classifier. In addition to the spatial features that describe current positions in the skeletal data, new features called 'shadow features' are used to improve the supervised learning efficacy of the classifier. Shadow features are inferred from the dynamics of body movements, thereby modelling the underlying momentum of the performed activities. They provide extra dimensions of information for characterising activities in the classification process, and thereby significantly improve the classification accuracy. Two cases of HAR are tested using a classification model trained with shadow features: one using a wearable sensor and the other a Kinect-based remote sensor. Our experiments demonstrate the advantages of the new method, which will have an impact on human activity detection research.
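
    A minimal sketch of how dynamics-derived 'shadow features' might be appended to raw skeletal coordinates is given below: frame-to-frame differences act as velocity-like extra dimensions. The windowing and the exact shadow-feature definition used in the paper are not reproduced, and the joint trajectories are synthetic.

      import numpy as np

      def with_shadow_features(skeleton_seq):
          """Append first-difference (velocity-like) channels to a joint-coordinate sequence.

          skeleton_seq: array of shape (frames, joints * 3) holding x, y, z per joint.
          """
          velocity = np.diff(skeleton_seq, axis=0, prepend=skeleton_seq[:1])
          return np.hstack([skeleton_seq, velocity])   # positions plus their momentum proxy

      frames = np.cumsum(np.random.default_rng(4).normal(size=(50, 60)), axis=0)  # 20 joints
      features = with_shadow_features(frames)
      print(features.shape)                            # (50, 120): spatial + shadow dimensions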

  7. Training Classifiers with Shadow Features for Sensor-Based Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2017-02-01

    Full Text Available In this paper, a novel training/testing process for building/using a classification model based on human activity recognition (HAR) is proposed. Traditionally, HAR has been accomplished by a classifier that learns the activities of a person by training with skeletal data obtained from a motion sensor, such as Microsoft Kinect. These skeletal data are the spatial coordinates (x, y, z) of different parts of the human body. The numeric information forms time series, temporal records of movement sequences that can be used for training a classifier. In addition to the spatial features that describe current positions in the skeletal data, new features called ‘shadow features’ are used to improve the supervised learning efficacy of the classifier. Shadow features are inferred from the dynamics of body movements, thereby modelling the underlying momentum of the performed activities. They provide extra dimensions of information for characterising activities in the classification process, and thereby significantly improve the classification accuracy. Two cases of HAR are tested using a classification model trained with shadow features: one using a wearable sensor and the other a Kinect-based remote sensor. Our experiments demonstrate the advantages of the new method, which will have an impact on human activity detection research.

  8. Deep Convolutional Neural Networks for Classifying Body Constitution Based on Face Image.

    Science.gov (United States)

    Huan, Er-Yang; Wen, Gui-Hua; Zhang, Shi-Jun; Li, Dan-Yang; Hu, Yang; Chang, Tian-Yuan; Wang, Qing; Huang, Bing-Lin

    2017-01-01

    Body constitution classification is the basis and core content of traditional Chinese medicine constitution research. It aims to extract the relevant laws from the complex constitution phenomenon and finally build a constitution classification system. Traditional identification methods, for instance questionnaires, have the disadvantages of inefficiency and low accuracy. This paper proposes a body constitution recognition algorithm based on a deep convolutional neural network, which can classify individual constitution types according to face images. The proposed model first uses the convolutional neural network to extract features from the face image and then combines the extracted features with color features. Finally, the fused features are input to a Softmax classifier to obtain the classification result. Different comparison experiments show that the algorithm proposed in this paper achieves an accuracy of 65.29% for constitution classification, and its performance was accepted by Chinese medicine practitioners.

  9. Classification of EEG signals using a genetic-based machine learning classifier.

    Science.gov (United States)

    Skinner, B T; Nguyen, H T; Liu, D K

    2007-01-01

    This paper investigates the efficacy of the genetic-based learning classifier system XCS for the classification of noisy, artefact-inclusive human electroencephalogram (EEG) signals represented using large condition strings (108 bits). EEG signals from three participants were recorded while they performed four mental tasks designed to elicit hemispheric responses. Autoregressive (AR) models and Fast Fourier Transform (FFT) methods were used to form feature vectors with which mental tasks can be discriminated. XCS achieved a maximum classification accuracy of 99.3% and a best average of 88.9%. The relative classification performance of XCS was then compared against four non-evolutionary classifier systems originating from different learning techniques. The experimental results will be used as part of our larger research effort investigating the feasibility of using EEG signals as an interface to allow paralysed persons to control a powered wheelchair or other devices.

  10. EVALUATING A COMPUTER BASED SKILLS ACQUISITION TRAINER TO CLASSIFY BADMINTON PLAYERS

    Directory of Open Access Journals (Sweden)

    Minh Vu Huynh

    2011-09-01

    Full Text Available The aim of the present study was to compare the statistical ability of both neural networks and discriminant function analysis on the newly developed SATB program. Using these statistical tools, we identified the accuracy of the SATB in classifying badminton players into different skill level groups. Forty-one participants, classified as advanced, intermediate, or beginner skill level, participated in this study. Results indicated that neural networks are more effective in predicting group membership and displayed higher predictive validity when compared to discriminant analysis. Using these outcomes, in conjunction with the physiological and biomechanical variables of the participants, we assessed the authenticity and accuracy of the SATB and commented on the overall effectiveness of the visual-based training approach to training badminton athletes.

  11. Effective diagnosis of Alzheimer’s disease by means of large margin-based methodology

    Directory of Open Access Journals (Sweden)

    Chaves Rosa

    2012-07-01

    Full Text Available Abstract Background: Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in the Alzheimer's Disease (AD) diagnosis. However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. Methods: A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small sample-size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA), or Partial Least Squares (PLS) (the two latter also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and Energy-based metrics were compared. Results: Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: (i) a linear transformation of the PLS- or PCA-reduced data, (ii) a feature reduction technique, and (iii) a classifier (with Euclidean, Mahalanobis or Energy-based metrics). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when an NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. Conclusions: All the proposed methods turned out to be valid solutions for the presented problem. One of the advances is the robustness of the LMNN algorithm, which not only provides a higher separation rate between
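
    A compressed sketch of the best-performing pipeline reported above (PLS dimensionality reduction followed by an SVM) is given below with scikit-learn; the NMSE feature extraction, the LMNN metric learning stage, and the k-fold protocol are not reimplemented, and a simple train/test split on synthetic data stands in for the evaluation.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(5)
      X = rng.normal(size=(90, 500))              # stand-in for voxel-wise NMSE features
      y = rng.integers(0, 2, size=90)             # 1 = AD, 0 = control (illustrative labels)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      pls = PLSRegression(n_components=10).fit(X_tr, y_tr)    # supervised dimension reduction
      svm = SVC(kernel="linear").fit(pls.transform(X_tr), y_tr)
      print("test accuracy:", svm.score(pls.transform(X_te), y_te))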

  12. Classifying dysmorphic syndromes by using artificial neural network based hierarchical decision tree.

    Science.gov (United States)

    Özdemir, Merve Erkınay; Telatar, Ziya; Eroğul, Osman; Tunca, Yusuf

    2018-05-01

    Dysmorphic syndromes have different facial malformations. These malformations are significant for an early diagnosis of dysmorphic syndromes and contain distinctive information for face recognition. In this study we define the characteristic features of each syndrome by considering facial malformations and classify Fragile X, Hurler, Prader Willi, Down, Wolf Hirschhorn syndromes and healthy groups automatically. Reference points are marked on the face images and ratios between the points' distances are taken as features. We suggest a neural network based hierarchical decision tree structure in order to classify the syndrome types. We also implement k-nearest neighbor (k-NN) and artificial neural network (ANN) classifiers to compare classification accuracy with our hierarchical decision tree. The classification accuracies are 50%, 73% and 86.7% with the k-NN, ANN and hierarchical decision tree methods, respectively. The same images were also shown to a clinical expert, who achieved a recognition rate of 46.7%. We develop an efficient system to recognize different syndrome types automatically from simple, non-invasive imaging data, independent of the patient's age, sex and race, at high accuracy. The promising results indicate that our method can be used for pre-diagnosis of the dysmorphic syndromes by clinical experts.

  13. Communication Behaviour-Based Big Data Application to Classify and Detect HTTP Automated Software

    Directory of Open Access Journals (Sweden)

    Manh Cong Tran

    2016-01-01

    Full Text Available HTTP is recognized as the most widely used protocol on the Internet as applications are increasingly transferred onto the web by developers. Due to increasingly complex computer systems, diverse HTTP automated software (autoware) thrives. Unfortunately, besides normal autoware, HTTP malware and greyware are also spreading rapidly in the web environment. Consequently, network communication is no longer rigorously controlled by users' intentions. This raises the demand for analyzing HTTP autoware communication behaviour to detect and classify malicious and normal activities via HTTP traffic. Hence, in this paper, based on extensive study and analysis of autoware communication behaviour through access graphs, a new method to detect and classify HTTP autoware communication at the network level is presented. The proposed system combines Hadoop MapReduce and a MarkLogic NoSQL database along with XQuery to deal with the huge HTTP traffic generated each day in a large network. The method is examined with real outbound HTTP traffic data collected through a proxy server of a private network. Experimental results obtained for the proposed method showed promising outcomes, with 95.1% of suspicious autoware classified and detected. This finding may assist network and system administrators in inspecting early the internal threats caused by HTTP autoware.

  14. Discovering mammography-based machine learning classifiers for breast cancer diagnosis.

    Science.gov (United States)

    Ramos-Pollán, Raúl; Guevara-López, Miguel Angel; Suárez-Ortega, Cesar; Díaz-Herrero, Guillermo; Franco-Valiente, Jose Miguel; Rubio-Del-Solar, Manuel; González-de-Posada, Naimy; Vaz, Mario Augusto Pires; Loureiro, Joana; Ramos, Isabel

    2012-08-01

    This work explores the design of mammography-based machine learning classifiers (MLC) and proposes a new method to build MLC for breast cancer diagnosis. We massively evaluated MLC configurations to classify feature vectors extracted from segmented regions (pathological lesion or normal tissue) on craniocaudal (CC) and/or mediolateral oblique (MLO) mammography image views, providing BI-RADS diagnoses. Beforehand, appropriate combinations of image processing and normalization techniques were applied to reduce image artifacts and enhance mammogram details. The method can be used under different data acquisition circumstances and exploits computer clusters to select well-performing MLC configurations. We evaluated 286 cases extracted from the repository owned by HSJ-FMUP, where specialized radiologists segmented regions on CC and/or MLO images (biopsies provided the gold standard). Around 20,000 MLC configurations were evaluated, obtaining classifiers achieving an area under the ROC curve of 0.996 when combining feature vectors extracted from CC and MLO views of the same case.
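
    In the spirit of the large-scale configuration search described above, the hedged sketch below scans a small grid of SVM configurations and ranks them by area under the ROC curve; the feature matrix, labels and parameter grid are placeholders, not the repository data or the roughly 20,000 configurations evaluated in the paper.

```python
# Sketch: scanning classifier configurations and ranking them by ROC AUC.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.standard_normal((286, 40))   # hypothetical lesion feature vectors
y = rng.integers(0, 2, size=286)     # benign vs malignant (placeholder labels)

pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC())])
grid = {"svm__C": [0.1, 1, 10, 100], "svm__gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(pipe, grid, scoring="roc_auc", cv=5).fit(X, y)
print("best AUC:", search.best_score_, "best config:", search.best_params_)
```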

  15. RRHGE: A Novel Approach to Classify the Estrogen Receptor Based Breast Cancer Subtypes

    Directory of Open Access Journals (Sweden)

    Ashish Saini

    2014-01-01

    Full Text Available Background. Breast cancer is the most common type of cancer among females with a high mortality rate. It is essential to classify the estrogen receptor based breast cancer subtypes into correct subclasses, so that the right treatments can be applied to lower the mortality rate. Using gene signatures derived from gene interaction networks to classify breast cancers has proven to be more reproducible and can achieve higher classification performance. However, the interactions in the gene interaction network usually contain many false-positive interactions that do not have any biological meanings. Therefore, it is a challenge to incorporate the reliability assessment of interactions when deriving gene signatures from gene interaction networks. How to effectively extract gene signatures from available resources is critical to the success of cancer classification. Methods. We propose a novel method to measure and extract the reliable (biologically true or valid) interactions from gene interaction networks and incorporate the extracted reliable gene interactions into our proposed RRHGE algorithm to identify significant gene signatures from microarray gene expression data for classifying ER+ and ER− breast cancer samples. Results. The evaluation on real breast cancer samples showed that our RRHGE algorithm achieved higher classification accuracy than the existing approaches.

  16. Carbon classified?

    DEFF Research Database (Denmark)

    Lippert, Ingmar

    2012-01-01

    . Using an actor-network theory (ANT) framework, the aim is to investigate the actors who bring together the elements needed to classify their carbon emission sources and unpack the heterogeneous relations drawn on. Based on an ethnographic study of corporate agents of ecological modernisation over...... a period of 13 months, this paper provides an exploration of three cases of enacting classification. Drawing on ANT, we problematise the silencing of a range of possible modalities of consumption facts and point to the ontological ethics involved in such performances. In a context of global warming...

  17. Contributions to knowledge of the continental margin of Uruguay. Uruguayan continental margin: morphology, geology and identification of the base of the slope

    International Nuclear Information System (INIS)

    Preciozzi, F.

    2014-01-01

    This work deals with the morphology, geology and the identification of the base of the slope of the Uruguayan continental margin, which corresponds to the divergent, volcanic and segmented margin type. Morphologically, it consists of a clearly defined continental shelf, as well as a continental slope whose configuration changes from north to south and which passes directly to the abyssal plain.

  18. Sequence Based Prediction of Antioxidant Proteins Using a Classifier Selection Strategy.

    Directory of Open Access Journals (Sweden)

    Lina Zhang

    Full Text Available Antioxidant proteins perform significant functions in maintaining oxidation/antioxidation balance and have potential therapies for some diseases. Accurate identification of antioxidant proteins could contribute to revealing physiological processes of oxidation/antioxidation balance and developing novel antioxidation-based drugs. In this study, an ensemble method is presented to predict antioxidant proteins with hybrid features, incorporating SSI (Secondary Structure Information), PSSM (Position Specific Scoring Matrix), RSA (Relative Solvent Accessibility), and CTD (Composition, Transition, Distribution). The prediction results of the ensemble predictor are determined by an average of prediction results of multiple base classifiers. Based on a classifier selection strategy, we obtain an optimal ensemble classifier composed of RF (Random Forest), SMO (Sequential Minimal Optimization), NNA (Nearest Neighbor Algorithm), and J48 with an accuracy of 0.925. A Relief combined with IFS (Incremental Feature Selection) method is adopted to obtain optimal features from hybrid features. With the optimal features, the ensemble method achieves improved performance with a sensitivity of 0.95, a specificity of 0.93, an accuracy of 0.94, and an MCC (Matthew's Correlation Coefficient) of 0.880, far better than the existing method. To evaluate the prediction performance objectively, the proposed method is compared with existing methods on the same independent testing dataset. Encouragingly, our method performs better than previous studies. In addition, our method achieves more balanced performance with a sensitivity of 0.878 and a specificity of 0.860. These results suggest that the proposed ensemble method can be a potential candidate for antioxidant protein prediction. For public access, we develop a user-friendly web server for antioxidant protein identification that is freely accessible at http://antioxidant.weka.cc.
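
    The ensemble above averages the outputs of RF, SMO, NNA and J48. As a hedged analogue (not the authors' original setup), the sketch below averages class probabilities from scikit-learn stand-ins for those four base classifiers; the hybrid sequence features and labels are synthetic placeholders.

```python
# Sketch: probability-averaging ensemble with stand-ins for RF, SMO, NNA and J48.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 60))   # hypothetical SSI/PSSM/RSA/CTD feature vectors
y = rng.integers(0, 2, size=500)     # antioxidant vs non-antioxidant (placeholder)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier()),
                ("svm", SVC(probability=True)),      # stand-in for SMO
                ("knn", KNeighborsClassifier()),     # stand-in for NNA
                ("tree", DecisionTreeClassifier())], # stand-in for J48
    voting="soft")                                   # average the class probabilities
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```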

  19. Modification of prostate implants based on postimplant treatment margin assessment.

    Science.gov (United States)

    Mueller, Amy; Wallner, Kent; Merrick, Gregory; Courveau, Jacques; Sutlief, Steven; Butler, Wayne; Gong, Lixin; Cho, Paul

    2002-12-01

    To quantify the extent of additional source placement needed to perfect an implant after execution by standard techniques, assuming that uniform 5 mm treatment margins (TMs) are the criterion for perfection. Ten consecutive, unselected patients treated with I-125 brachytherapy were studied. Source placement is planned just inside or outside of the prostatic margin, to achieve a minimum 5 mm TM and a central dose of 150%-200% of the prescription dose. The preimplant prostate volumes ranged from 24 to 85 cc (median: 35 cc). The number of sources implanted ranged from 48 to 102 (median: 63). Axial CT images were acquired within 2 h postoperatively for postimplant dosimetry. After completion of standard dosimetric calculations, the TMs were measured and tabulated at 45-degree intervals around the prostate periphery at 0.0, 1.0, 2.0, and 3.0 cm planes. Sources were then added to the periphery to bring the TMs to a minimum of 5 mm at each measured TM, resulting in a modified implant. All margin modifications were done manually, without the aid of automated software. Patients' original (unmodified) D90s ranged from 111% to 154%, with a median of 116%. The original V100s ranged from 94% to 99%, with a median of 96%. No patient required placement of additional sources to meet a minimum D90 of 90% or a minimum V100 of 80%. In contrast, patients required from 7 to 17 additional sources (median: 11) to achieve minimum 5 mm TMs around the entire prostatic periphery. Additional sources equaled from 12% to 24% of the initial number of sources placed (median: 17%). By adding sufficient peripheral sources to bring the TMs to a minimum 5 mm, patients' average V100 increased from 96% to 100%, and the average D90 increased from 124% to 160% of prescription dose. In the course of achieving a minimum 5 mm TM, the average treatment margin for all patients combined increased from 5.5 to 9.9 mm. The number of sources needed to bring the TMs to a minimum 5 mm was loosely correlated with the

  20. Modification of prostate implants based on postimplant treatment margin assessment

    International Nuclear Information System (INIS)

    Mueller, Amy; Wallner, Kent; Merrick, Gregory; Couriveau, Jacques; Sutlief, Steven; Butler, Wayne; Gong, Lixin; Cho, Paul

    2002-01-01

    Purpose: To quantify the extent of additional source placement needed to perfect an implant after execution by standard techniques, assuming that uniform 5 mm treatment margins (TMs) are the criterion for perfection. Materials and Methods: Ten consecutive, unselected patients treated with I-125 brachytherapy were studied. Source placement is planned just inside or outside of the prostatic margin, to achieve a minimum 5 mm TM and a central dose of 150%-200% of the prescription dose. The preimplant prostate volumes ranged from 24 to 85 cc (median: 35 cc). The number of sources implanted ranged from 48 to 102 (median: 63). Axial CT images were acquired within 2 h postoperatively for postimplant dosimetry. After completion of standard dosimetric calculations, the TMs were measured and tabulated at 45 deg. intervals around the prostate periphery at 0.0, 1.0, 2.0, and 3.0 cm planes. Sources were then added to the periphery to bring the TMs to a minimum of 5 mm at each measured TM, resulting in a modified implant. All margin modifications were done manually, without the aid of automated software. Results: Patients' original (unmodified) D90s ranged from 111% to 154%, with a median of 116%. The original V100s ranged from 94% to 99%, with a median of 96%. No patient required placement of additional sources to meet a minimum D90 of 90% or a minimum V100 of 80%. In contrast, patients required from 7 to 17 additional sources (median: 11) to achieve minimum 5 mm TMs around the entire prostatic periphery. Additional sources equaled from 12% to 24% of the initial number of sources placed (median: 17%). By adding sufficient peripheral sources to bring the TMs to a minimum 5 mm, patients' average V100 increased from 96% to 100%, and the average D90 increased from 124% to 160% of prescription dose. In the course of achieving a minimum 5 mm TM, the average treatment margin for all patients combined increased from 5.5 to 9.9 mm. The number of sources needed to bring the TMs to a minimum

  1. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    International Nuclear Information System (INIS)

    Jiang Li; Shi Tielin; Xuan Jianping

    2012-01-01

    Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a major challenge to extract optimal features that improve classification while simultaneously reducing feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small sample size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. To extract nonlinear features directly from the original high-dimensional vibration signals, RKMFA constructs two graphs describing intra-class compactness and inter-class separability, by combining a traditional manifold learning algorithm with the Fisher criterion. The optimal low-dimensional features are thereby obtained for better classification and finally fed into a simple K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms the other conventional approaches.

  2. Intelligent Recognition of Lung Nodule Combining Rule-based and C-SVM Classifiers

    Directory of Open Access Journals (Sweden)

    Bin Li

    2011-10-01

    Full Text Available Computer-aided detection (CAD) systems for lung nodules play an important role in the diagnosis of lung cancer. In this paper, an improved intelligent recognition method for lung nodules in HRCT, combining rule-based and cost-sensitive support vector machine (C-SVM) classifiers, is proposed for detecting both solid nodules and ground-glass opacity (GGO) nodules (part solid and nonsolid). This method consists of several steps. Firstly, segmentation of regions of interest (ROIs), including pulmonary parenchyma and lung nodule candidates, is a difficult task. On one side, the presence of noise lowers the visibility of low-contrast objects. On the other side, different types of nodules, including small nodules, nodules connecting to vasculature or other structures, and part-solid or nonsolid nodules, are complex, noisy, have weak edges or are difficult to delineate. In order to overcome the difficulties of boundary leakage and slow evolution speed in the segmentation of weak edges, an overall segmentation method is proposed: the lung parenchyma is extracted based on threshold and morphologic segmentation; image denoising and enhancement is realized by a nonlinear anisotropic diffusion filtering (NADF) method; candidate pulmonary nodules are segmented by an improved C-V level set method, in which the segmentation result of an EM-based fuzzy threshold method is used as the initial contour of the active contour model and a constrained energy term is added to the PDE of the level set function. Then, lung nodules are classified by using the intelligent classifiers combining rules and C-SVM. Rule-based classification is first used to remove easily dismissible non-nodule objects, then C-SVM classification is used to further classify nodule candidates and reduce the number of false positive (FP) objects. In order to increase the efficiency of the SVM, an improved training method is used, which uses the grid search method to search the optimal parameters

  3. Intelligent Recognition of Lung Nodule Combining Rule-based and C-SVM Classifiers

    Directory of Open Access Journals (Sweden)

    Bin Li

    2012-02-01

    Full Text Available Computer-aided detection (CAD) systems for lung nodules play an important role in the diagnosis of lung cancer. In this paper, an improved intelligent recognition method for lung nodules in HRCT, combining rule-based and cost-sensitive support vector machine (C-SVM) classifiers, is proposed for detecting both solid nodules and ground-glass opacity (GGO) nodules (part solid and nonsolid). This method consists of several steps. Firstly, segmentation of regions of interest (ROIs), including pulmonary parenchyma and lung nodule candidates, is a difficult task. On one side, the presence of noise lowers the visibility of low-contrast objects. On the other side, different types of nodules, including small nodules, nodules connecting to vasculature or other structures, and part-solid or nonsolid nodules, are complex, noisy, have weak edges or are difficult to delineate. In order to overcome the difficulties of boundary leakage and slow evolution speed in the segmentation of weak edges, an overall segmentation method is proposed: the lung parenchyma is extracted based on threshold and morphologic segmentation; image denoising and enhancement is realized by a nonlinear anisotropic diffusion filtering (NADF) method; candidate pulmonary nodules are segmented by an improved C-V level set method, in which the segmentation result of an EM-based fuzzy threshold method is used as the initial contour of the active contour model and a constrained energy term is added to the PDE of the level set function. Then, lung nodules are classified by using the intelligent classifiers combining rules and C-SVM. Rule-based classification is first used to remove easily dismissible non-nodule objects, then C-SVM classification is used to further classify nodule candidates and reduce the number of false positive (FP) objects. In order to increase the efficiency of the SVM, an improved training method is used, which uses the grid search method to search the optimal

  4. A comparison of rule-based and machine learning approaches for classifying patient portal messages.

    Science.gov (United States)

    Cronin, Robert M; Fabbri, Daniel; Denny, Joshua C; Rosenbloom, S Trent; Jackson, Gretchen Purcell

    2017-09-01

    Secure messaging through patient portals is an increasingly popular way that consumers interact with healthcare providers. The increasing burden of secure messaging can affect clinic staffing and workflows. Manual management of portal messages is costly and time consuming. Automated classification of portal messages could potentially expedite message triage and delivery of care. We developed automated patient portal message classifiers with rule-based and machine learning techniques using bag of words and natural language processing (NLP) approaches. To evaluate classifier performance, we used a gold standard of 3253 portal messages manually categorized using a taxonomy of communication types (i.e., main categories of informational, medical, logistical, social, and other communications, and subcategories including prescriptions, appointments, problems, tests, follow-up, contact information, and acknowledgement). We evaluated our classifiers' accuracies in identifying individual communication types within portal messages with area under the receiver-operator curve (AUC). Portal messages often contain more than one type of communication. To predict all communication types within single messages, we used the Jaccard Index. We extracted the variables of importance for the random forest classifiers. The best performing approaches to classification for the major communication types were: logistic regression for medical communications (AUC: 0.899); basic (rule-based) for informational communications (AUC: 0.842); and random forests for social communications and logistical communications (AUCs: 0.875 and 0.925, respectively). The best performing approach for individual communication subtypes was random forests for Logistical-Contact Information (AUC: 0.963). The Jaccard Indices by approach were: basic classifier, Jaccard Index: 0.674; Naïve Bayes, Jaccard Index: 0.799; random forests, Jaccard Index: 0.859; and logistic regression, Jaccard
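
    As a hedged, loosely analogous sketch of the bag-of-words approach above, the example below trains one logistic regression per communication type on TF-IDF features and scores the multi-label predictions with the Jaccard index; the messages, labels and taxonomy below are invented, not the study's gold-standard corpus.

```python
# Sketch: multi-label message classification with TF-IDF + per-label logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import jaccard_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

messages = ["please refill my prescription", "can I move my appointment to friday",
            "thanks for sending the test results", "my new phone number is attached"]
labels = [["medical"], ["logistical"], ["informational", "social"], ["logistical"]]

Y = MultiLabelBinarizer().fit_transform(labels)
X = TfidfVectorizer().fit_transform(messages)               # bag-of-words features
clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)   # one classifier per type
print(jaccard_score(Y, clf.predict(X), average="samples"))  # multi-label overlap score
```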

  5. nRC: non-coding RNA Classifier based on structural features.

    Science.gov (United States)

    Fiannaca, Antonino; La Rosa, Massimo; La Paglia, Laura; Rizzo, Riccardo; Urso, Alfonso

    2017-01-01

    Non-coding RNAs (ncRNAs) are small non-coding sequences involved in gene expression regulation of many biological processes and diseases. The recent discovery of a large set of different ncRNAs with biologically relevant roles has opened the way to developing methods able to discriminate between the different ncRNA classes. Moreover, the lack of knowledge about the complete mechanisms in regulative processes, together with the development of high-throughput technologies, has made bioinformatics tools necessary to provide biologists and clinicians with a deeper comprehension of the functional roles of ncRNAs. In this work, we introduce a new ncRNA classification tool, nRC (non-coding RNA Classifier). Our approach is based on feature extraction from the ncRNA secondary structure together with a supervised classification algorithm implementing a deep learning architecture based on convolutional neural networks. We tested our approach on the classification of 13 different ncRNA classes. We obtained classification scores using the most common statistical measures; in particular, we reach an accuracy and sensitivity of about 74%. The proposed method outperforms other similar classification methods based on secondary structure features and machine learning algorithms, including the RNAcon tool that, to date, is the reference classifier. The nRC tool is freely available as a docker image at https://hub.docker.com/r/tblab/nrc/. The source code of the nRC tool is also available at https://github.com/IcarPA-TBlab/nrc.

  6. Improved Collaborative Representation Classifier Based on l2-Regularized for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    Shirui Huo

    2017-01-01

    Full Text Available Human action recognition is an important and challenging task. Projecting depth images onto three depth motion maps (DMMs) and extracting deep convolutional neural network (DCNN) features are discriminant descriptors that characterize the spatiotemporal information of a specific action from a sequence of depth images. In this paper, a unified improved collaborative representation framework is proposed in which the probability that a test sample belongs to the collaborative subspace of all classes can be well defined and calculated. The improved collaborative representation classifier (ICRC) based on l2-regularization for human action recognition is presented to maximize the likelihood that a test sample belongs to each class; theoretical investigation into ICRC shows that it obtains a final classification by computing the likelihood for each class. Coupled with the DMM and DCNN features, experiments on depth image-based action recognition, including the MSRAction3D and MSRGesture3D datasets, demonstrate that the proposed approach, using a distance-based representation classifier, achieves superior performance over state-of-the-art methods, including SRC, CRC, and SVM.
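
    The l2-regularized collaborative representation idea above has a simple closed form: code the test sample over all training samples with ridge regression, then assign the class whose samples reconstruct it with the smallest residual. The hedged numpy sketch below shows that mechanism with random placeholder features rather than DMM/DCNN descriptors.

```python
# Sketch: l2-regularized collaborative representation classification (CRC-style).
import numpy as np

def crc_predict(X_train, y_train, x_test, lam=0.01):
    D = X_train.T                                     # dictionary: columns are training samples
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x_test)
    classes = np.unique(y_train)
    residuals = [np.linalg.norm(x_test - D[:, y_train == c] @ alpha[y_train == c])
                 for c in classes]                    # class-wise reconstruction error
    return classes[int(np.argmin(residuals))]

rng = np.random.default_rng(4)
X_train = rng.standard_normal((120, 300))             # placeholder action descriptors
y_train = rng.integers(0, 6, size=120)                # six hypothetical action classes
print(crc_predict(X_train, y_train, rng.standard_normal(300)))
```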

  7. Classifying adolescent attention-deficit/hyperactivity disorder (ADHD) based on functional and structural imaging.

    Science.gov (United States)

    Iannaccone, Reto; Hauser, Tobias U; Ball, Juliane; Brandeis, Daniel; Walitza, Susanne; Brem, Silvia

    2015-10-01

    Attention-deficit/hyperactivity disorder (ADHD) is a common disabling psychiatric disorder associated with consistent deficits in error processing, inhibition and regionally decreased grey matter volumes. The diagnosis is based on clinical presentation, interviews and questionnaires, which are to some degree subjective and would benefit from verification through biomarkers. Here, pattern recognition of multiple discriminative functional and structural brain patterns was applied to classify adolescents with ADHD and controls. Functional activation features in a Flanker/NoGo task probing error processing and inhibition along with structural magnetic resonance imaging data served to predict group membership using support vector machines (SVMs). The SVM pattern recognition algorithm correctly classified 77.78% of the subjects with a sensitivity and specificity of 77.78% based on error processing. Predictive regions for controls were mainly detected in core areas for error processing and attention such as the medial and dorsolateral frontal areas reflecting deficient processing in ADHD (Hart et al., in Hum Brain Mapp 35:3083-3094, 2014), and overlapped with decreased activations in patients in conventional group comparisons. Regions more predictive for ADHD patients were identified in the posterior cingulate, temporal and occipital cortex. Interestingly despite pronounced univariate group differences in inhibition-related activation and grey matter volumes the corresponding classifiers failed or only yielded a poor discrimination. The present study corroborates the potential of task-related brain activation for classification shown in previous studies. It remains to be clarified whether error processing, which performed best here, also contributes to the discrimination of useful dimensions and subtypes, different psychiatric disorders, and prediction of treatment success across studies and sites.

  8. A novel approach for fire recognition using hybrid features and manifold learning-based classifier

    Science.gov (United States)

    Zhu, Rong; Hu, Xueying; Tang, Jiajun; Hu, Sheng

    2018-03-01

    Although image/video based fire recognition has received growing attention, an efficient and robust fire detection strategy is rarely explored. In this paper, we propose a novel approach to automatically identify the flame or smoke regions in an image. It is composed of three stages: (1) block processing divides an image into several non-overlapping image blocks, and these blocks are identified as suspicious fire regions or not by using two color models and a color histogram-based similarity matching method in the HSV color space; (2) because flame and smoke regions have distinctive visual characteristics compared to other image content, two kinds of image features are extracted for fire recognition, where local features are obtained with the Scale Invariant Feature Transform (SIFT) descriptor and the Bags of Keypoints (BOK) technique, and texture features are extracted with the Gray Level Co-occurrence Matrices (GLCM) and Wavelet-based Analysis (WA) methods; and (3) a manifold learning-based classifier is constructed from two image manifolds, designed via an improved Globular Neighborhood Locally Linear Embedding (GNLLE) algorithm, and the extracted hybrid features are used as input feature vectors to train the classifier, which decides whether an image is a fire image or not. Experiments and comparative analyses with four approaches are conducted on the collected image sets. The results show that the proposed approach is superior to the other ones in detecting fire, achieving high recognition accuracy and a low error rate.

  9. Distributed Classification of Localization Attacks in Sensor Networks Using Exchange-Based Feature Extraction and Classifier

    Directory of Open Access Journals (Sweden)

    Su-Zhe Wang

    2016-01-01

    Full Text Available Secure localization under different forms of attack has become an essential task in wireless sensor networks. Despite the significant research efforts in detecting malicious nodes, the problem of localization attack type recognition has not yet been well addressed. Motivated by this concern, we propose a novel exchange-based attack classification algorithm. This is achieved by a distributed expectation maximization extractor integrated with the PECPR-MKSVM classifier. First, the mixed distribution features based on probabilistic modeling are extracted using a distributed expectation maximization algorithm. After feature extraction, by introducing theory from the support vector machine, an extensive contractive Peaceman-Rachford splitting method is derived to build the distributed classifier that diffuses the iterative calculation among neighbor sensors. To verify the efficiency of the distributed recognition scheme, four groups of experiments were carried out under various conditions. The average success rate of the proposed classification algorithm obtained in the presented experiments for external attacks is excellent, reaching about 93.9% in some cases. These testing results demonstrate that the proposed algorithm can produce a much greater recognition rate, and that it is also more robust and efficient, even in the presence of an excessive malicious scenario.

  10. Deep Classifiers-Based License Plate Detection, Localization and Recognition on GPU-Powered Mobile Platform

    Directory of Open Access Journals (Sweden)

    Syed Tahir Hussain Rizvi

    2017-10-01

    Full Text Available The realization of a deep neural architecture on a mobile platform is challenging, but can open up a number of possibilities for visual analysis applications. A neural network can be realized on a mobile platform by exploiting the computational power of the embedded GPU and simplifying the flow of a neural architecture trained on the desktop workstation or a GPU server. This paper presents an embedded platform-based Italian license plate detection and recognition system using deep neural classifiers. In this work, trained parameters of a highly precise automatic license plate recognition (ALPR system are imported and used to replicate the same neural classifiers on a Nvidia Shield K1 tablet. A CUDA-based framework is used to realize these neural networks. The flow of the trained architecture is simplified to perform the license plate recognition in real-time. Results show that the tasks of plate and character detection and localization can be performed in real-time on a mobile platform by simplifying the flow of the trained architecture. However, the accuracy of the simplified architecture would be decreased accordingly.

  11. Joint Feature Extraction and Classifier Design for ECG-Based Biometric Recognition.

    Science.gov (United States)

    Gutta, Sandeep; Cheng, Qi

    2016-03-01

    Traditional biometric recognition systems often utilize physiological traits such as fingerprint, face, iris, etc. Recent years have seen a growing interest in electrocardiogram (ECG)-based biometric recognition techniques, especially in the field of clinical medicine. In existing ECG-based biometric recognition methods, feature extraction and classifier design are usually performed separately. In this paper, a multitask learning approach is proposed, in which feature extraction and classifier design are carried out simultaneously. Weights are assigned to the features within the kernel of each task. We decompose the matrix consisting of all the feature weights into sparse and low-rank components. The sparse component determines the features that are relevant to identify each individual, and the low-rank component determines the common feature subspace that is relevant to identify all the subjects. A fast optimization algorithm is developed, which requires only the first-order information. The performance of the proposed approach is demonstrated through experiments using the MIT-BIH Normal Sinus Rhythm database.

  12. Automatic discrimination between safe and unsafe swallowing using a reputation-based classifier

    Directory of Open Access Journals (Sweden)

    Nikjoo Mohammad S

    2011-11-01

    Full Text Available Abstract Background Swallowing accelerometry has been suggested as a potential non-invasive tool for bedside dysphagia screening. Various vibratory signal features and complementary measurement modalities have been put forth in the literature for the potential discrimination between safe and unsafe swallowing. To date, automatic classification of swallowing accelerometry has exclusively involved a single axis of vibration although a second axis is known to contain additional information about the nature of the swallow. Furthermore, the only published attempt at automatic classification in adult patients has been based on a small sample of swallowing vibrations. Methods In this paper, a large corpus of dual-axis accelerometric signals were collected from 30 older adults (aged 65.47 ± 13.4 years, 15 male) referred to videofluoroscopic examination on the suspicion of dysphagia. We invoked a reputation-based classifier combination to automatically categorize the dual-axis accelerometric signals into safe and unsafe swallows, as labeled via videofluoroscopic review. From these participants, a total of 224 swallowing samples were obtained, 164 of which were labeled as unsafe swallows (swallows where the bolus entered the airway) and 60 as safe swallows. Three separate support vector machine (SVM) classifiers and eight different features were selected for classification. Results With selected time, frequency and information theoretic features, the reputation-based algorithm distinguished between safe and unsafe swallowing with promising accuracy (80.48 ± 5.0%), high sensitivity (97.1 ± 2%) and modest specificity (64 ± 8.8%). Interpretation of the most discriminatory features revealed that in general, unsafe swallows had lower mean vibration amplitude and faster autocorrelation decay, suggestive of decreased hyoid excursion and compromised coordination, respectively. Further, owing to its performance-based weighting of component classifiers, the static
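
    A hedged, simplified sketch of the performance-weighted ("reputation-based") combination idea follows: each component SVM votes with a weight proportional to its validation accuracy. The dual-axis accelerometry features, labels and kernels below are assumptions, not the study's actual feature set.

```python
# Sketch: reputation-weighted combination of SVM classifiers.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.standard_normal((224, 8))    # placeholder time/frequency/information features
y = rng.integers(0, 2, size=224)     # 0 = safe swallow, 1 = unsafe swallow (placeholder)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
members = [SVC(kernel=k).fit(X_tr, y_tr) for k in ("linear", "rbf", "poly")]
reputation = np.array([m.score(X_val, y_val) for m in members])   # validation accuracy

def combined_predict(x):
    votes = np.array([m.predict(x.reshape(1, -1))[0] for m in members])
    return int(np.dot(reputation, votes) / reputation.sum() >= 0.5)  # weighted vote

print(combined_predict(X_val[0]))
```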

  13. Analysis of Web Spam for Non-English Content: Toward More Effective Language-Based Classifiers.

    Directory of Open Access Journals (Sweden)

    Mansour Alsaleh

    Full Text Available Web spammers aim to obtain higher ranks for their web pages by including spam contents that deceive search engines in order to include their pages in search results even when they are not related to the search terms. Search engines continue to develop new web spam detection mechanisms, but spammers also aim to improve their tools to evade detection. In this study, we first explore the effect of the page language on spam detection features and we demonstrate how the best set of detection features varies according to the page language. We also study the performance of Google Penguin, a newly developed anti-web spamming technique for their search engine. Using spam pages in Arabic as a case study, we show that unlike similar English pages, Google anti-spamming techniques are ineffective against a high proportion of Arabic spam pages. We then explore multiple detection features for spam pages to identify an appropriate set of features that yields a high detection accuracy compared with the integrated Google Penguin technique. In order to build and evaluate our classifier, as well as to help researchers to conduct consistent measurement studies, we collected and manually labeled a corpus of Arabic web pages, including both benign and spam pages. Furthermore, we developed a browser plug-in that utilizes our classifier to warn users about spam pages after clicking on a URL and by filtering out search engine results. Using Google Penguin as a benchmark, we provide an illustrative example to show that language-based web spam classifiers are more effective for capturing spam contents.

  14. Automated Detection of Driver Fatigue Based on AdaBoost Classifier with EEG Signals

    Directory of Open Access Journals (Sweden)

    Jianfeng Hu

    2017-08-01

    fatigue through the classification of EEG signals. Conclusion: By using a combination of FE features and an AdaBoost classifier to detect EEG-based driver fatigue, this paper provides confidence in exploring the inherent physiological mechanisms and wearable applications.

  15. Neutropenia Prediction Based on First-Cycle Blood Counts Using a FOS-3NN Classifier

    Directory of Open Access Journals (Sweden)

    Elize A. Shirdel

    2011-01-01

    Full Text Available Background. Delivery of full doses of adjuvant chemotherapy on schedule is key to optimal breast cancer outcomes. Neutropenia is a serious complication of chemotherapy and a common barrier to this goal, leading to dose reductions or delays in treatment. While past research has observed correlations between complete blood count data and neutropenic events, a reliable method of classifying breast cancer patients into low- and high-risk groups remains elusive. Patients and Methods. Thirty-five patients receiving adjuvant chemotherapy for early-stage breast cancer under the care of a single oncologist are examined in this study. FOS-3NN stratifies patient risk based on complete blood count data after the first cycle of treatment. All classifications are independent of breast cancer subtype and clinical markers, with risk level determined by the kinetics of patient blood count response to the first cycle of treatment. Results. In an independent test set of patients unseen by FOS-3NN, 19 out of 21 patients were correctly classified (Fisher’s exact test probability P<0.00023 [2 tailed], Matthews’ correlation coefficient +0.83). Conclusions. We have developed a model that accurately predicts neutropenic events in a population treated with adjuvant chemotherapy in the first cycle of a 6-cycle treatment.
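
    The internals of FOS-3NN are not given in the abstract; as a loose, hedged sketch of the evaluation style only, the example below trains a 3-nearest-neighbour risk classifier on synthetic first-cycle blood-count features and applies Fisher's exact test to the resulting 2x2 table. Every feature, split and label here is invented.

```python
# Sketch: 3-NN risk classification with Fisher's exact test on the confusion table.
import numpy as np
from scipy.stats import fisher_exact
from sklearn.metrics import confusion_matrix
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(6)
X = rng.standard_normal((35, 5))     # placeholder first-cycle blood-count features
y = rng.integers(0, 2, size=35)      # 0 = low risk, 1 = high risk (placeholder)

knn = KNeighborsClassifier(n_neighbors=3).fit(X[:20], y[:20])
pred = knn.predict(X[20:])
table = confusion_matrix(y[20:], pred, labels=[0, 1])
odds_ratio, p_value = fisher_exact(table)
print(table, "two-tailed p =", p_value)
```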

  16. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    Science.gov (United States)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.

  17. A three-parameter model for classifying anurans into four genera based on advertisement calls.

    Science.gov (United States)

    Gingras, Bruno; Fitch, William Tecumseh

    2013-01-01

    The vocalizations of anurans are innate in structure and may therefore contain indicators of phylogenetic history. Thus, advertisement calls of species which are more closely related phylogenetically are predicted to be more similar than those of distant species. This hypothesis was evaluated by comparing several widely used machine-learning algorithms. Recordings of advertisement calls from 142 species belonging to four genera were analyzed. A logistic regression model, using mean values for dominant frequency, coefficient of variation of root-mean square energy, and spectral flux, correctly classified advertisement calls with regard to genus with an accuracy above 70%. Similar accuracy rates were obtained using these parameters with a support vector machine model, a K-nearest neighbor algorithm, and a multivariate Gaussian distribution classifier, whereas a Gaussian mixture model performed slightly worse. In contrast, models based on mel-frequency cepstral coefficients did not fare as well. Comparable accuracy levels were obtained on out-of-sample recordings from 52 of the 142 original species. The results suggest that a combination of low-level acoustic attributes is sufficient to discriminate efficiently between the vocalizations of these four genera, thus supporting the initial premise and validating the use of high-throughput algorithms on animal vocalizations to evaluate phylogenetic hypotheses.
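
    The three-parameter model above maps directly onto a multinomial logistic regression with three acoustic predictors. The hedged sketch below uses synthetic feature values and genus labels purely to show the model's shape; it does not reproduce the study's recordings or accuracy.

```python
# Sketch: multinomial logistic regression on three acoustic call parameters.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = np.column_stack([
    rng.uniform(0.5, 6.0, 142),   # mean dominant frequency (kHz, placeholder range)
    rng.uniform(0.1, 1.2, 142),   # coefficient of variation of RMS energy
    rng.uniform(0.0, 1.0, 142),   # mean spectral flux
])
genus = rng.integers(0, 4, size=142)   # four hypothetical genera

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, genus, cv=5).mean())
```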

  18. Yellowfin Tuna (Thunnus albacares) Fishing Ground Forecasting Model Based On Bayes Classifier In The South China Sea

    Directory of Open Access Journals (Sweden)

    Zhou Wei-feng

    2017-08-01

    Full Text Available Using the yellowfin tuna (Thunnus albacares, YFT) longline fishing catch data in the open South China Sea (SCS) provided by WCPFC, the optimum interpolation sea surface temperature (OISST) from CPC/NOAA and the multi-satellite altimetric monthly averaged sea surface height (SSH) product released by CNES, eight alternative options based on a Bayes classifier were developed in this paper, according to different strategies for the choice of environmental factors and the levels of fishing zones, to classify the YFT fishing ground in the open SCS. The classification results were compared with the actual ones for validation and analyzed to determine how the different plans affect classification results and precision. The validation results showed that the precision of the eight options was 71.4%, 75%, 70.8%, 74.4%, 66.7%, 68.5%, 57.7% and 63.7% in sequence; the first to sixth, all above 65%, would basically meet practical application needs. The alternatives which use SST and SSH simultaneously as environmental factors have higher precision than those which use the single SST factor, and adding SSH can improve the model precision to a certain extent. The options which use the CPUE mean ± standard deviation as threshold have higher precision than those which use the CPUE 33.3%- and 66.7%-quantiles as the threshold.

  19. An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Xiong Jintao

    2016-01-01

    Full Text Available The fast compressive tracking (FCT) algorithm is a simple and efficient algorithm proposed in recent years. However, it has difficulty dealing with factors such as occlusion, appearance changes and pose variation. The reasons are that, firstly, even though the naive Bayes classifier is fast to train, it is not robust to noise, and secondly, the parameters must vary with the particular environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. Firstly, we combine ideas from adaptive compressive sensing theory regarding the weighted random projection to exploit both local and discriminative information of the object. Secondly, the online random forest classifier used for online tracking is shown to be more robust to noise and to have high computational efficiency. The experimental results show that the proposed algorithm performs better than the fast compressive tracking algorithm in the presence of occlusion, appearance changes and pose variation.

  20. FEATURE SELECTION METHODS BASED ON MUTUAL INFORMATION FOR CLASSIFYING HETEROGENEOUS FEATURES

    Directory of Open Access Journals (Sweden)

    Ratri Enggar Pawening

    2016-06-01

    Full Text Available Datasets with heterogeneous features can affect feature selection results in ways that are not appropriate, because it is difficult to evaluate heterogeneous features concurrently. Feature transformation (FT) is another way to handle heterogeneous feature subset selection. The results of transformation from non-numerical into numerical features may produce redundancy with the original numerical features. In this paper, we propose a method to select a feature subset based on mutual information (MI) for classifying heterogeneous features. We use an unsupervised feature transformation (UFT) method and the joint mutual information maximisation (JMIM) method. The UFT method is used to transform non-numerical features into numerical features. The JMIM method is used to select the feature subset with consideration of the class label. The transformed and the original features are combined entirely, then the feature subset is determined using the JMIM method and classified using the support vector machine (SVM) algorithm. The classification accuracy is measured for each number of selected features and compared between the UFT-JMIM method and the Dummy-JMIM method. The average classification accuracy over all experiments in this study is about 84.47% for the UFT-JMIM method and about 84.24% for the Dummy-JMIM method. This result shows that the UFT-JMIM method can minimize information loss between transformed and original features, and select a feature subset that avoids redundant and irrelevant features.
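
    As a hedged, simplified stand-in for the selection step above (a plain mutual-information ranking rather than full JMIM), the sketch below selects the top-scoring features from a mixed feature matrix and classifies with an SVM; the data dimensions and the number of retained features are assumptions.

```python
# Sketch: mutual-information feature ranking (simplified JMIM) followed by an SVM.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = rng.standard_normal((300, 25))   # combined transformed + original numerical features
y = rng.integers(0, 2, size=300)

model = make_pipeline(SelectKBest(mutual_info_classif, k=10), SVC())
print(cross_val_score(model, X, y, cv=5).mean())
```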

  1. Evaluating a k-nearest neighbours-based classifier for locating faulty areas in power systems

    Directory of Open Access Journals (Sweden)

    Juan José Mora Flórez

    2008-09-01

    Full Text Available This paper reports a strategy for identifying and locating faults in a power distribution system. The strategy is based on the K-nearest neighbours technique, which estimates the distance from the features describing the particular fault being classified to the faults presented during the training stage. If new data is presented to the proposed fault locator, it is classified according to the nearest example recovered. A characterisation of the voltage and current measurements obtained at a single line end is also presented for assigning the faulted area in a power system. The proposed strategy was tested in a real power distribution system, with average confidence indexes of 93% being obtained, a good indicator of the proposal's high performance. The results showed how a fault can be located by using features obtained from voltage and current, improving utility response and thereby improving system continuity indexes in power distribution systems.

  2. Case based reasoning applied to medical diagnosis using multi-class classifier: A preliminary study

    Directory of Open Access Journals (Sweden)

    D. Viveros-Melo

    2017-02-01

    Full Text Available Case-based reasoning (CBR) is a computational process that tries to mimic the behaviour of a human expert in making decisions about a subject and to learn from the experience of past cases. CBR has been shown to be appropriate for working with unstructured domain data or difficult knowledge acquisition situations, such as medical diagnosis, where it has been applied to problems such as cancer diagnosis, epilepsy prediction and appendicitis diagnosis. Some of the trends that may be developed for CBR in the health sciences are oriented towards reducing the number of features in high-dimensional data. An important contribution may be the estimation of the probabilities of belonging to each class for new cases. In this paper, in order to adequately represent the database and to avoid the inconveniences caused by high dimensionality, noise and redundancy, a number of algorithms are used in the preprocessing stage to perform both variable selection and dimension reduction. Also, a comparison of the performance of some representative multi-class classifiers is carried out to identify the most effective one to include within a CBR scheme. In particular, four classification techniques and two reduction techniques are employed to make a comparative study of multiclass classifiers on CBR.

  3. ConSpeciFix: Classifying prokaryotic species based on gene flow.

    Science.gov (United States)

    Bobay, Louis-Marie; Ellis, Brian Shin-Hua; Ochman, Howard

    2018-05-16

    Classification of prokaryotic species is usually based on sequence similarity thresholds, which are easy to apply but lack a biologically-relevant foundation. Here, we present ConSpeciFix, a program that classifies prokaryotes into species using criteria set forth by the Biological Species Concept, thereby unifying species definition in all domains of life. ConSpeciFix's webserver is freely available at www.conspecifix.com. The local version of the program can be freely downloaded from https://github.com/Bobay-Ochman/ConSpeciFix. ConSpeciFix is written in Python 2.7 and requires the following dependencies: Usearch, MCL, MAFFT and RAxML. ljbobay@uncg.edu.

  4. A hybrid approach to select features and classify diseases based on medical data

    Science.gov (United States)

    AbdelLatif, Hisham; Luo, Jiawei

    2018-03-01

    Feature selection is a popular problem in the classification of diseases in clinical medicine. Here, we develop a hybrid methodology to classify diseases, based on three medical datasets: the Arrhythmia, Breast cancer, and Hepatitis datasets. This methodology, called K-means ANOVA Support Vector Machine (K-ANOVA-SVM), uses K-means clustering with the ANOVA statistic to preprocess the data and select the significant features, and Support Vector Machines in the classification process. To compare and evaluate the performance, we chose three classification algorithms, decision tree, Naïve Bayes and Support Vector Machines, and applied the medical datasets directly to these algorithms. Our methodology gave much better classification accuracy: 98% on the Arrhythmia dataset, 92% on the Breast cancer dataset and 88% on the Hepatitis dataset, compared to applying the medical data directly to decision tree, Naïve Bayes and Support Vector Machines. Also, in terms of ROC curve and precision, K-ANOVA-SVM achieved better results than the other algorithms.
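
    A hedged sketch of the ANOVA-plus-SVM part of the pipeline follows: features are ranked by ANOVA F-scores, the top ones are kept, and an SVM classifies; the K-means preprocessing step is omitted and the data below are synthetic, not the Arrhythmia, Breast cancer or Hepatitis datasets.

```python
# Sketch: ANOVA F-score feature selection followed by an SVM, versus a plain SVM baseline.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(9)
X = rng.standard_normal((350, 40))
y = rng.integers(0, 2, size=350)

selected = make_pipeline(SelectKBest(f_classif, k=15), SVC(kernel="rbf"))
baseline = SVC(kernel="rbf")
print("with ANOVA selection:", cross_val_score(selected, X, y, cv=5).mean())
print("baseline SVM:        ", cross_val_score(baseline, X, y, cv=5).mean())
```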

  5. Spectral classifying base on color of live corals and dead corals covered with algae

    Science.gov (United States)

    Nurdin, Nurjannah; Komatsu, Teruhisa; Barille, Laurent; Akbar, A. S. M.; Sawayama, Shuhei; Fitrah, Muh. Nur; Prasyad, Hermansyah

    2016-05-01

    Pigments in the host tissues of corals can make a significant contribution to their spectral signature and can affect their apparent color as perceived by a human observer. The aim of this study is to classify the spectral reflectance of corals based on different colors, so that the spectra can be used as references for discriminating between live corals and dead corals covered with algae. Spectral reflectance data were collected at three small islands in the Spermonde Archipelago, Indonesia, using an underwater hyperspectral radiometer. First- and second-derivative analysis resolved the wavelength locations of the dominant features contributing to reflectance in corals and supported the distinct differences in spectra among the colors present. Spectral derivative analysis was used to determine the specific wavelength regions ideal for remote identification of substrate type. The results show that yellow, green, brown and violet live corals are spectrally separable from each other, but their spectra are similar to those of dead corals covered with algae.

  6. Research on classified real-time flood forecasting framework based on K-means cluster and rough set.

    Science.gov (United States)

    Xu, Wei; Peng, Yong

    2015-01-01

    This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by a K-means cluster according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classified results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of different categories within the conceptual hydrological model are calibrated using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares the framework with the traditional flood forecasting method. It finds that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be considered in a catchment with fewer historical floods.

  7. An Unobtrusive Fall Detection and Alerting System Based on Kalman Filter and Bayes Network Classifier.

    Science.gov (United States)

    He, Jian; Bai, Shuang; Wang, Xiaoyi

    2017-06-16

    Falls are one of the main health risks among the elderly. A fall detection system based on inertial sensors can automatically detect a fall event and alert a caregiver for immediate assistance, so as to reduce injuries caused by falls. Nevertheless, most inertial sensor-based fall detection technologies have focused on the accuracy of detection while neglecting the quantization noise caused by the inertial sensor. In this paper, an activity model based on tri-axial acceleration and gyroscope data is proposed, and the difference between activities of daily living (ADLs) and falls is analyzed. Meanwhile, a Kalman filter is proposed to preprocess the raw data so as to reduce noise. A sliding window and a Bayes network classifier are introduced to develop a wearable fall detection system, which is composed of a wearable motion sensor and a smart phone. The experiments show that the proposed system distinguishes simulated falls from ADLs with a high accuracy of 95.67%, while sensitivity and specificity are 99.0% and 95.0%, respectively. Furthermore, the smart phone can issue an alarm to caregivers so as to provide timely and accurate help for the elderly, as soon as the system detects a fall.
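
    As a hedged sketch of the signal path described above, the example below smooths an acceleration-magnitude stream with a scalar Kalman filter, extracts simple sliding-window features, and labels windows with a naive Bayes classifier (a simplification of the paper's Bayes network); the signal, window length, noise terms and labels are all assumptions.

```python
# Sketch: scalar Kalman smoothing + sliding-window features + naive Bayes labelling.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def kalman_smooth(z, q=1e-3, r=0.1):
    """Constant-state Kalman filter over a 1-D measurement sequence z."""
    x, p, out = z[0], 1.0, []
    for meas in z:
        p += q                      # predict: inflate uncertainty
        k = p / (p + r)             # Kalman gain
        x += k * (meas - x)         # update state estimate
        p *= (1 - k)
        out.append(x)
    return np.array(out)

def window_features(signal, width=50):
    wins = [signal[i:i + width] for i in range(0, len(signal) - width, width)]
    return np.array([[w.mean(), w.std(), w.max() - w.min()] for w in wins])

rng = np.random.default_rng(10)
acc_magnitude = kalman_smooth(rng.standard_normal(5000) + 9.8)  # placeholder |a| stream
X = window_features(acc_magnitude)
y = rng.integers(0, 2, size=len(X))                             # 0 = ADL, 1 = fall (placeholder)
clf = GaussianNB().fit(X, y)
print(clf.predict(X[:3]))
```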

  8. Identifying Different Transportation Modes from Trajectory Data Using Tree-Based Ensemble Classifiers

    Directory of Open Access Journals (Sweden)

    Zhibin Xiao

    2017-02-01

    Full Text Available Recognition of transportation modes can be used in different applications including human behavior research, transport management and traffic control. Previous work on transportation mode recognition has often relied on using multiple sensors or matching Geographic Information System (GIS) information, which is not possible in many cases. In this paper, an approach based on ensemble learning is proposed to infer hybrid transportation modes using only Global Positioning System (GPS) data. First, in order to distinguish between different transportation modes, we used a statistical method to generate global features and extracted several local features from sub-trajectories after trajectory segmentation, before these features were combined in the classification stage. Second, to obtain better performance, we used tree-based ensemble models (Random Forest, Gradient Boosting Decision Tree, and XGBoost) instead of traditional methods (K-Nearest Neighbor, Decision Tree, and Support Vector Machines) to classify the different transportation modes. The experimental results on the latter have shown the efficacy of our proposed approach. Among them, the XGBoost model produced the best performance with a classification accuracy of 90.77% obtained on the GEOLIFE dataset, and we used a tree-based ensemble method to ensure accurate feature selection and reduce model complexity.
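
    A hedged sketch of the global-feature idea above: compute simple per-segment statistics from point-wise speeds and classify the mode with a tree-based ensemble (scikit-learn's GradientBoosting as a stand-in for XGBoost); trajectories, feature choices and labels are synthetic, not the GEOLIFE data.

```python
# Sketch: per-segment speed statistics classified by a tree-based ensemble.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def segment_features(speeds):
    """speeds: 1-D array of point-wise speeds (m/s) for one trajectory segment."""
    acc = np.diff(speeds)
    return [speeds.mean(), speeds.max(), speeds.std(), np.abs(acc).mean()]

rng = np.random.default_rng(11)
segments = [rng.uniform(0, 30, size=rng.integers(50, 200)) for _ in range(400)]
X = np.array([segment_features(s) for s in segments])
y = rng.integers(0, 4, size=400)    # walk / bike / bus / car (placeholder labels)

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict(X[:5]))
```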

  9. A comparison of graph- and kernel-based -omics data integration algorithms for classifying complex traits.

    Science.gov (United States)

    Yan, Kang K; Zhao, Hongyu; Pang, Herbert

    2017-12-06

    High-throughput sequencing data are widely collected and analyzed in the study of complex diseases in the quest to improve human health. Well-studied algorithms mostly deal with a single data source and cannot fully utilize the potential of these multi-omics data sources. In order to provide a holistic understanding of human health and diseases, it is necessary to integrate multiple data sources. Several algorithms have been proposed so far; however, a comprehensive comparison of data integration algorithms for classification of binary traits is currently lacking. In this paper, we focus on two common classes of integration algorithms: graph-based methods, which represent subjects as nodes and relationships as edges, and kernel-based methods, which generate a classifier in a feature space. Our paper provides a comprehensive comparison of their performance in terms of various measurements of classification accuracy and computation time. Seven different integration algorithms, including graph-based semi-supervised learning, graph sharpening integration, composite association network, Bayesian network, semi-definite programming-support vector machine (SDP-SVM), relevance vector machine (RVM) and Ada-boost relevance vector machine are compared and evaluated with hypertension and two cancer data sets in our study. In general, kernel-based algorithms create more complex models and require longer computation time, but they tend to perform better than graph-based algorithms; graph-based algorithms, in turn, have the advantage of being computationally faster. The empirical results demonstrate that composite association network, relevance vector machine, and Ada-boost RVM are the better performers. We provide recommendations on how to choose an appropriate algorithm for integrating data from multiple sources.

  10. A novel ultrasound based technique for classifying gas bubble sizes in liquids

    International Nuclear Information System (INIS)

    Hussein, Walid; Khan, Muhammad Salman; Zamorano, Juan; Espic, Felipe; Yoma, Nestor Becerra

    2014-01-01

    Characterizing gas bubbles in liquids is crucial to many biomedical, environmental and industrial applications. In this paper a novel method is proposed for the classification of bubble sizes using ultrasound analysis, which is widely acknowledged for being non-invasive, non-contact and inexpensive. This classification is based on 2D templates, i.e. the average spectrum of events representing the trace of bubbles when they cross an ultrasound field. The 2D patterns are obtained by capturing ultrasound signals reflected by bubbles. Frequency-domain based features are analyzed that provide discrimination between bubble sizes. These features are then fed to an artificial neural network, which is designed and trained to classify bubble sizes. The benefits of the proposed method are that it facilitates the processing of multiple bubbles simultaneously, the issues concerning masking interference among bubbles are potentially reduced and using a single sinusoidal component makes the transmitter–receiver electronics relatively simpler. Results from three bubble sizes indicate that the proposed scheme can achieve an accuracy in their classification that is as high as 99%. (paper)
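
    A toy sketch of the general idea (frequency-domain features fed to a small neural network), using synthetic echoes rather than real ultrasound data; the signal model, frequencies and network architecture are assumptions, not the authors' design.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)
      fs = 100_000                                  # assumed sampling rate (Hz)
      t = np.arange(1024) / fs

      def synth_echo(f0):
          """Toy stand-in for an ultrasound echo reflected by a bubble."""
          return np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

      # Hypothetical classes: small / medium / large bubbles with shifted spectra.
      signals, labels = [], []
      for label, f0 in enumerate((20_000, 25_000, 30_000)):
          for _ in range(40):
              signals.append(synth_echo(f0 + rng.normal(0, 300)))
              labels.append(label)

      # Frequency-domain features: magnitude spectrum of each echo.
      X = np.abs(np.fft.rfft(np.array(signals), axis=1))
      y = np.array(labels)

      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
      clf.fit(X, y)
      print("training accuracy:", clf.score(X, y))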

  11. Double Marginalization in Performance-Based Advertising: Implications and Solutions

    OpenAIRE

    Chrysanthos Dellarocas

    2012-01-01

    An important current trend in advertising is the replacement of traditional pay-per-exposure (pay-per-impression) pricing models with performance-based mechanisms in which advertisers pay only for measurable actions by consumers. Such pay-per-action (PPA) mechanisms are becoming the predominant method of selling advertising on the Internet. Well-known examples include pay-per-click, pay-per-call, and pay-per-sale. This work highlights an important, and hitherto unrecognized, side effect of PP...

  12. Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors

    International Nuclear Information System (INIS)

    Herschtal, A; Te Marvelde, L; Mengersen, K; Foroudi, F; Ball, D; Devereux, T; Pham, D; Greer, P B; Pichler, P; Eade, T; Kneebone, A; Bell, L; Caine, H; Hindson, B; Kron, T; Hosseinifard, Z

    2015-01-01

    Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts −19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements. (paper)
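
    For context, a widely cited traditional margin recipe of the kind contrasted above (quoted from general knowledge, not from the paper itself) combines the population systematic and random error standard deviations as

      M_{\mathrm{CTV\text{-}to\text{-}PTV}} \approx 2.5\,\Sigma + 0.7\,\sigma,

    where Σ is the standard deviation of systematic errors and σ that of random errors, assumed identical for every patient; the inverse gamma model above instead assigns each patient an individual random-error standard deviation.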

  13. Classification of Multiple Chinese Liquors by Means of a QCM-based E-Nose and MDS-SVM Classifier.

    Science.gov (United States)

    Li, Qiang; Gu, Yu; Jia, Jing

    2017-01-30

    Chinese liquors are internationally well-known fermentative alcoholic beverages. They have unique flavors attributable to the use of various bacteria and fungi, raw materials, and production processes. Developing a novel, rapid, and reliable method to identify multiple Chinese liquors is of positive significance. This paper presents a pattern recognition system for classifying ten brands of Chinese liquors based on multidimensional scaling (MDS) and support vector machine (SVM) algorithms in a quartz crystal microbalance (QCM)-based electronic nose (e-nose) we designed. We evaluated the comprehensive performance of the MDS-SVM classifier that predicted all ten brands of Chinese liquors individually. The prediction accuracy (98.3%) showed superior performance of the MDS-SVM classifier over the back-propagation artificial neural network (BP-ANN) classifier (93.3%) and moving average-linear discriminant analysis (MA-LDA) classifier (87.6%). The MDS-SVM classifier has reasonable reliability, good fitting and prediction (generalization) performance in classification of the Chinese liquors. Taking both application of the e-nose and validation of the MDS-SVM classifier into account, we have thus created a useful method for the classification of multiple Chinese liquors.
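
    A minimal sketch of an MDS-plus-SVM pipeline in scikit-learn on synthetic stand-in data; the real system uses QCM e-nose responses, and the dimensionality and kernel settings here are assumptions.

      import numpy as np
      from sklearn.manifold import MDS
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      # Synthetic stand-in for e-nose responses: rows = samples, columns = sensor features.
      X = rng.normal(size=(100, 16))
      y = np.repeat(np.arange(10), 10)              # toy labels for ten liquor brands

      # Step 1: embed the sensor responses in a low-dimensional space with MDS.
      X_low = MDS(n_components=3, random_state=0).fit_transform(X)

      # Step 2: train and cross-validate an SVM on the embedded features.
      svm = SVC(kernel="rbf", C=10.0, gamma="scale")
      print(cross_val_score(svm, X_low, y, cv=5).mean())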

  14. Classification of Multiple Chinese Liquors by Means of a QCM-based E-Nose and MDS-SVM Classifier

    Directory of Open Access Journals (Sweden)

    Qiang Li

    2017-01-01

    Full Text Available Chinese liquors are internationally well-known fermentative alcoholic beverages. They have unique flavors attributable to the use of various bacteria and fungi, raw materials, and production processes. Developing a novel, rapid, and reliable method to identify multiple Chinese liquors is of positive significance. This paper presents a pattern recognition system for classifying ten brands of Chinese liquors based on multidimensional scaling (MDS) and support vector machine (SVM) algorithms in a quartz crystal microbalance (QCM)-based electronic nose (e-nose) we designed. We evaluated the comprehensive performance of the MDS-SVM classifier that predicted all ten brands of Chinese liquors individually. The prediction accuracy (98.3%) showed superior performance of the MDS-SVM classifier over the back-propagation artificial neural network (BP-ANN) classifier (93.3%) and moving average-linear discriminant analysis (MA-LDA) classifier (87.6%). The MDS-SVM classifier has reasonable reliability, good fitting and prediction (generalization) performance in classification of the Chinese liquors. Taking both application of the e-nose and validation of the MDS-SVM classifier into account, we have thus created a useful method for the classification of multiple Chinese liquors.

  15. Implementing Computer-Based Procedures: Thinking Outside the Paper Margins

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna; Bly, Aaron

    2017-06-01

    In the past year there has been increased interest from the nuclear industry in adopting the use of electronic work packages and computer-based procedures (CBPs) in the field. The goal is to incorporate the use of technology in order to meet the Nuclear Promise requirements of reducing costs, improving efficiency, and decreasing human error rates in plant operations. Researchers, together with the nuclear industry, have been investigating the benefits an electronic work package system, and specifically CBPs, would have over current paper-based procedure practices. There are several classifications of CBPs ranging from a straight copy of the paper-based procedure in PDF format to a more intelligent dynamic CBP. A CBP system offers a vast variety of improvements, such as context driven job aids, integrated human performance tools (e.g., placekeeping and correct component verification), and dynamic step presentation. The latter means that the CBP system could display only the relevant steps based on operating mode, plant status, and the task at hand. The improvements can lead to a reduction in the worker’s workload and in human error by allowing the worker to focus more on the task at hand. A team of human factors researchers at the Idaho National Laboratory studied and developed design concepts for CBPs for field workers between 2012 and 2016. The focus of the research was to present information in a procedure in a manner that leveraged the dynamic and computational capabilities of a handheld device, allowing the worker to focus more on the task at hand than on the administrative processes currently applied when conducting work in the plant. As a part of the research the team identified types of work, instructions, and scenarios where the transition to a dynamic CBP system might not be as beneficial as it would for other types of work in the plant. In most cases the decision to use a dynamic CBP system and utilize the dynamic capabilities gained will be beneficial to the worker

  16. Quantifying the margin sharpness of lesions on radiological images for content-based image retrieval

    International Nuclear Information System (INIS)

    Xu Jiajing; Napel, Sandy; Greenspan, Hayit; Beaulieu, Christopher F.; Agrawal, Neeraj; Rubin, Daniel

    2012-01-01

    . Equivalence across deformations was assessed using Schuirmann's paired two one-sided tests. Results: In simulated images, the concordance correlation between measured gradient and actual gradient was 0.994. The mean (s.d.) NDCG scores for the retrieval of K images, K = 5, 10, and 15, were 84% (8%), 85% (7%), and 85% (7%) for CT images containing liver lesions, and 82% (7%), 84% (6%), and 85% (4%) for CT images containing lung nodules, respectively. The authors’ proposed method outperformed the two existing margin characterization methods in average NDCG scores over all K, by 1.5% and 3% in datasets containing liver lesions, and 4.5% and 5% in datasets containing lung nodules. Equivalence testing showed that the authors’ feature is more robust across all margin deformations (p < 0.05) than the two existing methods for margin sharpness characterization in both simulated and clinical datasets. Conclusions: The authors have described a new image feature to quantify the margin sharpness of lesions. It has strong correlation with known margin sharpness in simulated images and in clinical CT images containing liver lesions and lung nodules. This image feature has excellent performance for retrieving images with similar margin characteristics, suggesting potential utility, in conjunction with other lesion features, for content-based image retrieval applications.

  17. Construction of Pancreatic Cancer Classifier Based on SVM Optimized by Improved FOA

    Science.gov (United States)

    Ma, Xiaoqi

    2015-01-01

    A novel method is proposed to establish a pancreatic cancer classifier. Firstly, the concepts of quantum coding and the fruit fly optimization algorithm (FOA) are introduced. Then the FOA is improved by quantum coding and quantum operations, and a new smell concentration determination function is defined. Finally, the improved FOA is used to optimize the parameters of a support vector machine (SVM), and the classifier is established from the optimized SVM. To verify the effectiveness of the proposed method, standard SVM and other classification methods were chosen for comparison. The experimental results show that the proposed method improves classifier performance and costs less time. PMID:26543867

  18. Classifying Microorganisms

    DEFF Research Database (Denmark)

    Sommerlund, Julie

    2006-01-01

    This paper describes the coexistence of two systems for classifying organisms and species: a dominant genetic system and an older naturalist system. The former classifies species and traces their evolution on the basis of genetic characteristics, while the latter employs physiological characteristics. The coexistence of the classification systems does not lead to a conflict between them. Rather, the systems seem to co-exist in different configurations, through which they are complementary, contradictory and inclusive in different situations, sometimes simultaneously. The systems come

  19. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    Science.gov (United States)

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Infrared dim and small target tracking is a highly challenging task. The main challenge for target tracking is to account for the appearance change of an object that is submerged in a cluttered background. An efficient appearance model that exploits both the global template and local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse representation-based algorithm is adopted to calculate the confidence value that assigns more weights to target templates than negative background templates. In the CNGM model, simple cell feature maps are obtained by calculating the convolution between target templates and fixed filters, which are extracted from the target region at the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, therefore encoding its local structural information. Then, all the maps form a representation, preserving the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model. The same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. Finally, collaborative confidence values of particles are utilized to generate the particles' importance weights. Experiments on various infrared sequences have validated the tracking capability of the presented algorithm. Experimental results show that this algorithm runs in real time and provides higher accuracy than state-of-the-art algorithms.

  20. Possible world based consistency learning model for clustering and classifying uncertain data.

    Science.gov (United States)

    Liu, Han; Zhang, Xianchao; Zhang, Xiaotong

    2018-06-01

    The possible world model has been shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible world based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, so the effectiveness relies heavily on the post-processing method and the efficiency also suffers. In this paper, we propose a novel possible world based consistency learning model for uncertain data, which can be extended both for clustering and classifying uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which can make full use of the information across different possible worlds and then improve the clustering and classification performance. Meanwhile, this model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, thereby ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be directly obtained without any post-processing procedure. Furthermore, for the clustering and classification tasks, we respectively derive efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real world uncertain datasets show that the proposed model outperforms the state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Classifying Aerosols Based on Fuzzy Clustering and Their Optical and Microphysical Properties Study in Beijing, China

    Directory of Open Access Journals (Sweden)

    Wenhao Zhang

    2017-01-01

    Full Text Available Classification of Beijing aerosol is carried out based on clustering optical properties obtained from three Aerosol Robotic Network (AERONET) sites. The fuzzy c-means (FCM) clustering algorithm is used to classify fourteen years (2001–2014) of observations, 6,732 records in total, into six aerosol types. They are identified as fine particle nonabsorbing, two kinds of fine particle moderately absorbing (fine-MA1 and fine-MA2), fine particle highly absorbing, polluted dust, and desert dust aerosol. These aerosol types exhibit obvious differences in optical characteristics. While five of them show similarities with aerosol types identified elsewhere, the polluted dust aerosol has no comparable prototype. Then the membership degree, a significant parameter provided by fuzzy clustering, is used to analyze the internal variation of optical properties of each aerosol type. Finally, temporal variations of aerosol types are investigated. The dominant aerosol types are polluted dust and desert dust in spring, fine particle nonabsorbing aerosol in summer, and fine particle highly absorbing aerosol in winter. The fine particle moderately absorbing aerosol occurs throughout the whole year. Optical properties of the six types can also be used for radiative forcing estimation and satellite aerosol retrieval. Additionally, the methodology of this study can be applied to identify aerosol types on a global scale.
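
    A compact, self-contained fuzzy c-means sketch in NumPy illustrating the membership matrix the study analyzes; the features and cluster count here are placeholders, not the actual AERONET inputs.

      import numpy as np

      def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
          """Minimal fuzzy c-means: returns cluster centres and the membership
          matrix U (rows sum to 1), the quantity used to analyse how firmly
          each observation belongs to its aerosol type."""
          rng = np.random.default_rng(seed)
          U = rng.random((X.shape[0], n_clusters))
          U /= U.sum(axis=1, keepdims=True)
          for _ in range(n_iter):
              Um = U ** m
              centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
              dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
              U_new = 1.0 / (dist ** (2 / (m - 1)))
              U_new /= U_new.sum(axis=1, keepdims=True)
              if np.abs(U_new - U).max() < tol:
                  U = U_new
                  break
              U = U_new
          return centres, U

      # Hypothetical AERONET-style optical properties per record (e.g. AOD, SSA, Angstrom exponent).
      X = np.random.default_rng(1).normal(size=(200, 3))
      centres, U = fuzzy_c_means(X, n_clusters=6)
      print(U[:3].round(2))   # membership degrees of the first three records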

  2. Short text sentiment classification based on feature extension and ensemble classifier

    Science.gov (United States)

    Liu, Yang; Zhu, Xie

    2018-05-01

    With the rapid development of Internet social media, mining the emotional tendencies of short texts from the Internet to acquire useful information has attracted the attention of researchers. At present, the commonly used approaches fall into rule-based classification and statistical machine learning methods. Although micro-blog sentiment analysis has made good progress, shortcomings remain, such as insufficient accuracy and a strong dependence of the sentiment classification effect on the available features. Aiming at the characteristics of Chinese short texts, such as little information, sparse features, and diverse expressions, this paper considers expanding the original text by mining related semantic information from comments, forwarded posts and other related information. First, this paper uses Word2vec to compute word similarity to extend the feature words. It then uses an ensemble classifier composed of SVM, KNN and HMM to analyze the emotion of micro-blog short texts. The experimental results show that the proposed method can make good use of the comment and forwarding information to extend the original features. Compared with the traditional method, the accuracy, recall and F1 value obtained by this method are all improved.
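
    A toy sketch of a voting ensemble over TF-IDF features; logistic regression stands in for the paper's HMM member, the Word2vec-based feature extension is omitted, and the corpus is invented.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import SVC
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.ensemble import VotingClassifier
      from sklearn.pipeline import make_pipeline

      # Toy corpus standing in for (already feature-extended) micro-blog posts.
      texts = ["great product love it", "terrible service never again",
               "really happy with this", "worst experience ever",
               "so satisfied with the result", "very disappointed and angry"]
      labels = [1, 0, 1, 0, 1, 0]                   # 1 = positive, 0 = negative

      # Majority-vote ensemble of three simple classifiers over TF-IDF features.
      ensemble = make_pipeline(
          TfidfVectorizer(),
          VotingClassifier([("svm", SVC()),
                            ("knn", KNeighborsClassifier(n_neighbors=3)),
                            ("lr", LogisticRegression(max_iter=1000))],
                           voting="hard"))
      ensemble.fit(texts, labels)
      print(ensemble.predict(["really happy with the service"]))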

  3. Performances of the likelihood-ratio classifier based on different data modelings

    NARCIS (Netherlands)

    Chen, C.; Veldhuis, Raymond N.J.

    2008-01-01

    The classical likelihood ratio classifier easily collapses in many biometric applications especially with independent training-test subjects. The reason lies in the inaccurate estimation of the underlying user-specific feature density. Firstly, the feature density estimation suffers from

  4. Emergency Load Shedding Strategy Based on Sensitivity Analysis of Relay Operation Margin against Cascading Events

    DEFF Research Database (Denmark)

    Liu, Zhou; Chen, Zhe; Sun, Haishun Sun

    2012-01-01

    In order to prevent long term voltage instability and induced cascading events, a load shedding strategy based on the sensitivity of relay operation margin to load powers is discussed and proposed in this paper. The operation margin of critical impedance backup relay is defined to identify the runtime emergent states of related system components. Based on sensitivity analysis between the relay operation margin and power system state variables, an optimal load shedding strategy is applied to adjust the emergent states timely before the unwanted relay operation. Load dynamics is also taken into account to compensate load shedding amount calculation. And the multi-agent technology is applied for the whole strategy implementation. A test system is built in real time digital simulator (RTDS) and has demonstrated the effectiveness of the proposed strategy.

  5. Hot roller embossing system equipped with a temperature margin-based controller

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seyoung, E-mail: seyoungkim@kimm.re.kr; Son, Youngsu; Lee, Sunghee; Ham, Sangyong; Kim, Byungin [Department of Robotics and Mechatronics, Korea Institute of Machinery and Materials (KIMM), Daejeon (Korea, Republic of)

    2014-08-15

    A temperature control system was proposed for hot roller embossing. The roll surface was heated using induction coils and cooled with a circulating chilled water system. The temperature of the roll surface was precisely controlled by a temperature margin-based control algorithm that we developed. Implementation of the control system reduced deviations in the roll surface temperature to less than ±2 °C. The tight temperature control and the ability to rapidly increase and decrease the roll temperature will allow optimum operating parameters to be developed quickly. The temperature margin-based controller could also be used to optimize the time course of electrical power and shorten the cooling time by choosing an appropriate temperature margin, possibly for limited power consumption. The chiller-equipped heating roll with the proposed control algorithm is expected to decrease the time needed to determine the optimal embossing process.

  6. Hot roller embossing system equipped with a temperature margin-based controller

    International Nuclear Information System (INIS)

    Kim, Seyoung; Son, Youngsu; Lee, Sunghee; Ham, Sangyong; Kim, Byungin

    2014-01-01

    A temperature control system was proposed for hot roller embossing. The roll surface was heated using induction coils and cooled with a circulating chilled water system. The temperature of the roll surface was precisely controlled by a temperature margin-based control algorithm that we developed. Implementation of the control system reduced deviations in the roll surface temperature to less than ±2 °C. The tight temperature control and the ability to rapidly increase and decrease the roll temperature will allow optimum operating parameters to be developed quickly. The temperature margin-based controller could also be used to optimize the time course of electrical power and shorten the cooling time by choosing an appropriate temperature margin, possibly for limited power consumption. The chiller-equipped heating roll with the proposed control algorithm is expected to decrease the time needed to determine the optimal embossing process

  7. Distance and Density Similarity Based Enhanced k-NN Classifier for Improving Fault Diagnosis Performance of Bearings

    Directory of Open Access Journals (Sweden)

    Sharif Uddin

    2016-01-01

    Full Text Available An enhanced k-nearest neighbor (k-NN) classification algorithm is presented, which uses a density based similarity measure in addition to a distance based similarity measure to improve the diagnostic performance in bearing fault diagnosis. Due to its use of distance based similarity measure alone, the classification accuracy of traditional k-NN deteriorates in case of overlapping samples and outliers and is highly susceptible to the neighborhood size, k. This study addresses these limitations by proposing the use of both distance and density based measures of similarity between training and test samples. The proposed k-NN classifier is used to enhance the diagnostic performance of a bearing fault diagnosis scheme, which classifies different fault conditions based upon hybrid feature vectors extracted from acoustic emission (AE) signals. Experimental results demonstrate that the proposed scheme, which uses the enhanced k-NN classifier, yields better diagnostic performance and is more robust to variations in the neighborhood size, k.
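
    A rough NumPy sketch of the idea of weighting neighbours by both distance and local density; the exact similarity measures of the paper are not reproduced, and the data are synthetic.

      import numpy as np

      def knn_distance_density(X_train, y_train, x, k=5):
          """Toy k-NN that augments the usual distance-based vote with a density
          term (inverse of each neighbour's mean distance to its own k neighbours),
          loosely inspired by the distance-and-density idea described above."""
          d = np.linalg.norm(X_train - x, axis=1)
          neighbours = np.argsort(d)[:k]
          scores = {}
          for idx in neighbours:
              # density of the neighbour: mean distance to its k nearest training points
              d_local = np.sort(np.linalg.norm(X_train - X_train[idx], axis=1))[1:k + 1]
              density = 1.0 / (d_local.mean() + 1e-12)
              weight = density / (d[idx] + 1e-12)   # close AND located in a dense region
              scores[y_train[idx]] = scores.get(y_train[idx], 0.0) + weight
          return max(scores, key=scores.get)

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (30, 4))])
      y = np.array([0] * 30 + [1] * 30)    # two hypothetical bearing-fault classes
      print(knn_distance_density(X, y, X[0]))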

  8. An advanced method for classifying atmospheric circulation types based on prototypes connectivity graph

    Science.gov (United States)

    Zagouras, Athanassios; Argiriou, Athanassios A.; Flocas, Helena A.; Economou, George; Fotopoulos, Spiros

    2012-11-01

    Classification of weather maps at various isobaric levels as a methodological tool has been used for many years in several problems related to meteorology, climatology, atmospheric pollution and other fields. Initially the classification was performed manually. The criteria used by the person performing the classification are features of isobars or isopleths of geopotential height, depending on the type of maps to be classified. Although manual classifications integrate the perceptual experience and other unquantifiable qualities of the meteorology specialists involved, these are typically subjective and time consuming. Furthermore, during the last years different approaches of automated methods for atmospheric circulation classification have been proposed, which present automated and so-called objective classifications. In this paper a new method of atmospheric circulation classification of isobaric maps is presented. The method is based on graph theory. It starts with an intelligent prototype selection using an over-partitioning mode of the fuzzy c-means (FCM) algorithm, proceeds to a graph formulation for the entire dataset and produces the clusters based on the contemporary dominant sets clustering method. Graph theory is a novel mathematical approach, allowing a more efficient representation of spatially correlated data, compared to the classical Euclidean space representation approaches used in conventional classification methods. The method has been applied to the classification of 850 hPa atmospheric circulation over the Eastern Mediterranean. The evaluation of the automated methods is performed by statistical indexes; results indicate that the classification is adequately comparable with other state-of-the-art automated map classification methods, for a variable number of clusters.

  9. The EB factory project. I. A fast, neural-net-based, general purpose light curve classifier optimized for eclipsing binaries

    International Nuclear Information System (INIS)

    Paegert, Martin; Stassun, Keivan G.; Burger, Dan M.

    2014-01-01

    We describe a new neural-net-based light curve classifier and provide it with documentation as a ready-to-use tool for the community. While optimized for identification and classification of eclipsing binary stars, the classifier is general purpose, and has been developed for speed in the context of upcoming massive surveys such as the Large Synoptic Survey Telescope. A challenge for classifiers in the context of neural-net training and massive data sets is to minimize the number of parameters required to describe each light curve. We show that a simple and fast geometric representation that encodes the overall light curve shape, together with a chi-square parameter to capture higher-order morphology information results in efficient yet robust light curve classification, especially for eclipsing binaries. Testing the classifier on the ASAS light curve database, we achieve a retrieval rate of 98% and a false-positive rate of 2% for eclipsing binaries. We achieve similarly high retrieval rates for most other periodic variable-star classes, including RR Lyrae, Mira, and delta Scuti. However, the classifier currently has difficulty discriminating between different sub-classes of eclipsing binaries, and suffers a relatively low (∼60%) retrieval rate for multi-mode delta Cepheid stars. We find that it is imperative to train the classifier's neural network with exemplars that include the full range of light curve quality to which the classifier will be expected to perform; the classifier performs well on noisy light curves only when trained with noisy exemplars. The classifier source code, ancillary programs, a trained neural net, and a guide for use, are provided.

  10. Comparison of Different Features and Classifiers for Driver Fatigue Detection Based on a Single EEG Channel

    Directory of Open Access Journals (Sweden)

    Jianfeng Hu

    2017-01-01

    Full Text Available Driver fatigue has become an important factor in traffic accidents worldwide, and effective detection of driver fatigue has major significance for public health. The proposed method employs entropy measures for feature extraction from a single electroencephalogram (EEG) channel. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed for the analysis of the original EEG signal and compared by ten state-of-the-art classifiers. Results indicate that the optimal performance of a single channel is achieved using a combination of channel CP4, feature FE, and the Random Forest (RF) classifier. The highest accuracy can be up to 96.6%, which is able to meet the needs of real applications. The best combination of channel + feature + classifier is subject-specific. In this work, the accuracy of FE as the feature is far greater than that of the other features. The accuracy using the RF classifier is the best, while that of the SVM classifier with a linear kernel is the worst. Channel selection also has a large impact on accuracy, and the performance of the various channels differs considerably.
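
    A simple sample entropy implementation, one of the four entropy features compared above, applied to a synthetic single-channel segment; the parameters m and r follow common defaults, not necessarily those of the study.

      import numpy as np

      def sample_entropy(x, m=2, r=None):
          """Straightforward O(N^2) sample entropy (not an optimized implementation)."""
          x = np.asarray(x, dtype=float)
          if r is None:
              r = 0.2 * x.std()
          def count(mm):
              templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
              c = 0
              for i in range(len(templates)):
                  dist = np.max(np.abs(templates - templates[i]), axis=1)
                  c += np.sum(dist <= r) - 1          # exclude the self-match
              return c
          B, A = count(m), count(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      # Hypothetical single-channel EEG segment (e.g. channel CP4), 2 s at 128 Hz.
      rng = np.random.default_rng(0)
      eeg = rng.standard_normal(256)
      print(sample_entropy(eeg))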

  11. Contemporary approaches to reducing the risks of central counterparties based on the use of marginal contributions

    Directory of Open Access Journals (Sweden)

    Utkin Viktor Sergeyevich

    2012-07-01

    Full Text Available To protect their own interests, central counterparties have developed a number of procedures, including the payment of a guarantee margin by trading members as a means to secure their positions. This article discusses a number of approaches that attempt to model the risks of the central counterparty, as well as to calculate the amount of margin and other resources needed in the event of insolvency. These approaches are based on three main types of modeling: (a) statistical modeling; (b) optimization modeling; and (c) option pricing models. The author summarizes the basic provisions of these models.

  12. Marginal leakage of two newer glass-ionomer-based sealant materials assessed using micro-CT.

    NARCIS (Netherlands)

    Chen, X.; Cuijpers, V.M.J.I.; Fan, M.; Frencken, J.E.F.M.

    2010-01-01

    OBJECTIVES: To test newer glass-ionomer-based materials as sealant materials. One glass-ionomer sealant was light-cured to obtain an early setting reaction. The null-hypothesis tested was: there is no difference in marginal leakage of sealants produced with high-viscosity glass-ionomer, with and

  13. A Visual Basic program to classify sediments based on gravel-sand-silt-clay ratios

    Science.gov (United States)

    Poppe, L.J.; Eliason, A.H.; Hastings, M.E.

    2003-01-01

    Nomenclature describing size distributions is important to geologists because grain size is the most basic attribute of sediments. Traditionally, geologists have divided sediments into four size fractions that include gravel, sand, silt, and clay, and classified these sediments based on ratios of the various proportions of the fractions. Definitions of these fractions have long been standardized to the grade scale described by Wentworth (1922), and two main classification schemes have been adopted to describe the approximate relationship between the size fractions.Specifically, according to the Wentworth grade scale gravel-sized particles have a nominal diameter of ⩾2.0 mm; sand-sized particles have nominal diameters from <2.0 mm to ⩾62.5 μm; silt-sized particles have nominal diameters from <62.5 to ⩾4.0 μm; and clay is <4.0 μm. As for sediment classification, most sedimentologists use one of the systems described either by Shepard (1954) or Folk (1954, 1974). The original scheme devised by Shepard (1954) utilized a single ternary diagram with sand, silt, and clay in the corners to graphically show the relative proportions among these three grades within a sample. This scheme, however, does not allow for sediments with significant amounts of gravel. Therefore, Shepard's classification scheme (Fig. 1) was subsequently modified by the addition of a second ternary diagram to account for the gravel fraction (Schlee, 1973). The system devised by Folk (1954, 1974) is also based on two triangular diagrams (Fig. 2), but it has 23 major categories, and uses the term mud (defined as silt plus clay). The patterns within the triangles of both systems differ, as does the emphasis placed on gravel. For example, in the system described by Shepard, gravelly sediments have more than 10% gravel; in Folk's system, slightly gravelly sediments have as little as 0.01% gravel. Folk's classification scheme stresses gravel because its concentration is a function of
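
    A heavily simplified, illustrative ratio-based naming rule in Python (not the program described above and not the full Shepard or Folk diagrams).

      def classify_sediment(gravel, sand, silt, clay):
          """Very simplified illustration of ratio-based naming; inputs are
          percentages summing to roughly 100."""
          if gravel >= 10:
              return "gravelly sediment"
          total = sand + silt + clay
          frac = {"sand": sand / total, "silt": silt / total, "clay": clay / total}
          ordered = sorted(frac, key=frac.get, reverse=True)
          if frac[ordered[0]] >= 0.75:
              return ordered[0]                      # one fraction clearly dominates
          adjective = {"sand": "sandy", "silt": "silty", "clay": "clayey"}
          return f"{adjective[ordered[1]]} {ordered[0]}"

      print(classify_sediment(gravel=2, sand=70, silt=20, clay=8))   # e.g. 'silty sand'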

  14. A Critical Evaluation of Network and Pathway-Based Classifiers for Outcome Prediction in Breast Cancer

    NARCIS (Netherlands)

    C. Staiger (Christine); S. Cadot; R Kooter; M. Dittrich (Marcus); T. Müller (Tobias); G.W. Klau (Gunnar); L.F.A. Wessels (Lodewyk)

    2012-01-01

    Recently, several classifiers that combine primary tumor data, like gene expression data, and secondary data sources, such as protein-protein interaction networks, have been proposed for predicting outcome in breast cancer. In these approaches, new composite features are typically

  15. A supervised contextual classifier based on a region-growth algorithm

    DEFF Research Database (Denmark)

    Lira, Jorge; Maletti, Gabriela Mariel

    2002-01-01

    A supervised classification scheme to segment optical multi-spectral images has been developed. In this classifier, an automated region-growth algorithm delineates the training sets. This algorithm handles three parameters: an initial pixel seed, a window size and a threshold for each class. A su...

  16. Gas chimney detection based on improving the performance of combined multilayer perceptron and support vector classifier

    NARCIS (Netherlands)

    Hashemi, H.; Tax, D.M.J.; Duin, R.P.W.; Javaherian, A.; De Groot, P.

    2008-01-01

    Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a

  17. Gene expression-based classifiers identify Staphylococcus aureus infection in mice and humans.

    Directory of Open Access Journals (Sweden)

    Sun Hee Ahn

    Full Text Available Staphylococcus aureus causes a spectrum of human infection. Diagnostic delays and uncertainty lead to treatment delays and inappropriate antibiotic use. A growing literature suggests the host's inflammatory response to the pathogen represents a potential tool to improve upon current diagnostics. The hypothesis of this study is that the host responds differently to S. aureus than to E. coli infection in a quantifiable way, providing a new diagnostic avenue. This study uses Bayesian sparse factor modeling and penalized binary regression to define peripheral blood gene-expression classifiers of murine and human S. aureus infection. The murine-derived classifier distinguished S. aureus infection from healthy controls and Escherichia coli-infected mice across a range of conditions (mouse and bacterial strain, time post infection) and was validated in outbred mice (AUC > 0.97). A S. aureus classifier derived from a cohort of 94 human subjects distinguished S. aureus blood stream infection (BSI) from healthy subjects (AUC 0.99) and E. coli BSI (AUC 0.84). Murine and human responses to S. aureus infection share common biological pathways, allowing the murine model to classify S. aureus BSI in humans (AUC 0.84). Both murine and human S. aureus classifiers were validated in an independent human cohort (AUC 0.95 and 0.92, respectively). The approach described here lends insight into the conserved and disparate pathways utilized by mice and humans in response to these infections. Furthermore, this study advances our understanding of S. aureus infection; the host response to it; and identifies new diagnostic and therapeutic avenues.

  18. A scaling transformation for classifier output based on likelihood ratio: Applications to a CAD workstation for diagnosis of breast cancer

    International Nuclear Information System (INIS)

    Horsch, Karla; Pesce, Lorenzo L.; Giger, Maryellen L.; Metz, Charles E.; Jiang Yulei

    2012-01-01

    Purpose: The authors developed scaling methods that monotonically transform the output of one classifier to the "scale" of another. Such transformations affect the distribution of classifier output while leaving the ROC curve unchanged. In particular, they investigated transformations between radiologists and computer classifiers, with the goal of addressing the problem of comparing and interpreting case-specific values of output from two classifiers. Methods: Using both simulated and radiologists' rating data of breast imaging cases, the authors investigated a likelihood-ratio-scaling transformation, based on "matching" classifier likelihood ratios. For comparison, three other scaling transformations were investigated that were based on matching classifier true positive fraction, false positive fraction, or cumulative distribution function, respectively. The authors explored modifying the computer output to reflect the scale of the radiologist, as well as modifying the radiologist's ratings to reflect the scale of the computer. They also evaluated how dataset size affects the transformations. Results: When ROC curves of two classifiers differed substantially, the four transformations were found to be quite different. The likelihood-ratio scaling transformation was found to vary widely from radiologist to radiologist. Similar results were found for the other transformations. Our simulations explored the effect of database sizes on the accuracy of the estimation of our scaling transformations. Conclusions: The likelihood-ratio-scaling transformation that the authors have developed and evaluated was shown to be capable of transforming computer and radiologist outputs to a common scale reliably, thereby allowing the comparison of the computer and radiologist outputs on the basis of a clinically relevant statistic.

  19. Polsar Land Cover Classification Based on Hidden Polarimetric Features in Rotation Domain and Svm Classifier

    Science.gov (United States)

    Tao, C.-S.; Chen, S.-W.; Li, Y.-Z.; Xiao, S.-P.

    2017-09-01

    Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR) data utilization. Roll-invariant polarimetric features such as H / Ani / ᾱ / Span are commonly adopted in PolSAR land cover classification. However, the target orientation diversity effect makes PolSAR image understanding and interpretation difficult. Only using the roll-invariant polarimetric features may introduce ambiguity in the interpretation of targets' scattering mechanisms and limit the subsequent classification accuracy. To address this problem, this work firstly focuses on hidden polarimetric feature mining in the rotation domain along the radar line of sight using the recently reported uniform polarimetric matrix rotation theory and the visualization and characterization tool of polarimetric coherence pattern. The former rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix. Sets of new polarimetric features are derived to describe the hidden scattering information of the target in the rotation domain. The latter extends the traditional polarimetric coherence at a given rotation angle to the rotation domain for complete interpretation. A visualization and characterization tool is established to derive new polarimetric features for hidden information exploration. Then, a classification scheme is developed combining both the selected new hidden polarimetric features in the rotation domain and the commonly used roll-invariant polarimetric features with a support vector machine (SVM) classifier. Comparison experiments based on AIRSAR and multi-temporal UAVSAR data demonstrate that compared with the conventional classification scheme which only uses the roll-invariant polarimetric features, the proposed classification scheme achieves both higher classification accuracy and better robustness. For AIRSAR data, the overall classification

  20. POLSAR LAND COVER CLASSIFICATION BASED ON HIDDEN POLARIMETRIC FEATURES IN ROTATION DOMAIN AND SVM CLASSIFIER

    Directory of Open Access Journals (Sweden)

    C.-S. Tao

    2017-09-01

    Full Text Available Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR) data utilization. Roll-invariant polarimetric features such as H / Ani / α / Span are commonly adopted in PolSAR land cover classification. However, the target orientation diversity effect makes PolSAR image understanding and interpretation difficult. Only using the roll-invariant polarimetric features may introduce ambiguity in the interpretation of targets’ scattering mechanisms and limit the subsequent classification accuracy. To address this problem, this work firstly focuses on hidden polarimetric feature mining in the rotation domain along the radar line of sight using the recently reported uniform polarimetric matrix rotation theory and the visualization and characterization tool of polarimetric coherence pattern. The former rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix. Sets of new polarimetric features are derived to describe the hidden scattering information of the target in the rotation domain. The latter extends the traditional polarimetric coherence at a given rotation angle to the rotation domain for complete interpretation. A visualization and characterization tool is established to derive new polarimetric features for hidden information exploration. Then, a classification scheme is developed combining both the selected new hidden polarimetric features in the rotation domain and the commonly used roll-invariant polarimetric features with a support vector machine (SVM) classifier. Comparison experiments based on AIRSAR and multi-temporal UAVSAR data demonstrate that compared with the conventional classification scheme which only uses the roll-invariant polarimetric features, the proposed classification scheme achieves both higher classification accuracy and better robustness. For AIRSAR data, the overall classification accuracy

  1. Hyperspectral imaging based on compressive sensing to determine cancer margins in human pancreatic tissue ex vivo

    Science.gov (United States)

    Peller, Joseph; Thompson, Kyle J.; Siddiqui, Imran; Martinie, John; Iannitti, David A.; Trammell, Susan R.

    2017-02-01

    Pancreatic cancer is the fourth leading cause of cancer death in the US. Currently, surgery is the only treatment that offers a chance of cure; however, accurately identifying tumor margins in real time is difficult. Research has demonstrated that optical spectroscopy can be used to distinguish between healthy and diseased tissue. The design of a single-pixel imaging system for cancer detection is discussed. The system differentiates between healthy and diseased tissue based on differences in the optical reflectance spectra of these regions. In this study, pancreatic tissue samples from 6 patients undergoing Whipple procedures are imaged with the system (total number of tissue samples imaged: N=11). Regions of healthy and unhealthy tissue are determined based on spectral angle mapper (SAM) analysis of these spectral images. Hyperspectral imaging results are then compared to white light imaging and histological analysis. Cancerous regions were clearly visible in the hyperspectral images. Margins determined via spectral imaging were in good agreement with margins identified by histology, indicating that the hyperspectral imaging system can differentiate between healthy and diseased tissue. After imaging, the system was able to detect cancerous regions with a sensitivity of 74.50±5.89% and a specificity of 75.53±10.81%. Possible applications of this imaging system include determination of tumor margins during surgery/biopsy and assistance with cancer diagnosis and staging.
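
    A minimal sketch of spectral-angle comparison between a pixel spectrum and reference spectra; the reflectance values are invented and the decision rule is a simplification of SAM-based classification.

      import numpy as np

      def spectral_angle(spectrum, reference):
          """Spectral angle (radians) between a measured reflectance spectrum and a
          reference spectrum; smaller angles mean more similar spectra."""
          cos = np.dot(spectrum, reference) / (
              np.linalg.norm(spectrum) * np.linalg.norm(reference))
          return np.arccos(np.clip(cos, -1.0, 1.0))

      # Hypothetical mean reflectance spectra for healthy and cancerous tissue,
      # and one measured pixel spectrum from the hyperspectral cube.
      healthy_ref = np.array([0.42, 0.45, 0.50, 0.55, 0.60])
      tumour_ref  = np.array([0.30, 0.28, 0.33, 0.40, 0.52])
      pixel       = np.array([0.31, 0.29, 0.35, 0.41, 0.50])

      label = ("tumour" if spectral_angle(pixel, tumour_ref) < spectral_angle(pixel, healthy_ref)
               else "healthy")
      print(label)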

  2. Can-Evo-Ens: Classifier stacking based evolutionary ensemble system for prediction of human breast cancer using amino acid sequences.

    Science.gov (United States)

    Ali, Safdar; Majid, Abdul

    2015-04-01

    The diagnosis of human breast cancer is an intricate process, and specific indicators may produce negative results. In order to avoid misleading results, an accurate and reliable diagnostic system for breast cancer is indispensable. Recently, several interesting machine-learning (ML) approaches have been proposed for prediction of breast cancer. To this end, we developed a novel classifier-stacking based evolutionary ensemble system, "Can-Evo-Ens", for predicting amino acid sequences associated with breast cancer. In this paper, first, we selected four diverse types of ML algorithms, Naïve Bayes, K-Nearest Neighbor, Support Vector Machines, and Random Forest, as base-level classifiers. These classifiers are trained individually in different feature spaces using physicochemical properties of amino acids. In order to exploit the decision spaces, the preliminary predictions of the base-level classifiers are stacked. Genetic programming (GP) is then employed to develop a meta-classifier that optimally combines the predictions of the base classifiers. The most suitable threshold value of the best-evolved predictor is computed using the Particle Swarm Optimization technique. Our experiments have demonstrated the robustness of the Can-Evo-Ens system on an independent validation dataset. The proposed system has achieved the highest area under the ROC curve (AUC), 99.95%, for cancer prediction. The comparative results revealed that the proposed approach is better than individual ML approaches and the conventional ensemble approaches of AdaBoostM1, Bagging, GentleBoost, and Random Subspace. It is expected that the proposed novel system would have a major impact on the fields of Biomedicine, Genomics, Proteomics, Bioinformatics, and Drug Development. Copyright © 2015 Elsevier Inc. All rights reserved.
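
    A minimal stacking sketch with scikit-learn using the same four base learner families; a logistic-regression meta-learner stands in for the paper's genetic-programming meta-classifier, and the data are synthetic.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier, StackingClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in for physicochemical features of amino acid sequences.
      X, y = make_classification(n_samples=300, n_features=20, random_state=0)

      base = [("nb", GaussianNB()),
              ("knn", KNeighborsClassifier()),
              ("svm", SVC(probability=True)),
              ("rf", RandomForestClassifier(n_estimators=100, random_state=0))]

      # Stack the base-level predictions and learn how to combine them.
      stack = StackingClassifier(estimators=base,
                                 final_estimator=LogisticRegression(max_iter=1000))
      print(cross_val_score(stack, X, y, cv=5).mean())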

  3. Super resolution reconstruction of infrared images based on classified dictionary learning

    Science.gov (United States)

    Liu, Fei; Han, Pingli; Wang, Yi; Li, Xuan; Bai, Lu; Shao, Xiaopeng

    2018-05-01

    Infrared images always suffer from low-resolution problems resulting from limitations of imaging devices. An economical approach to combat this problem involves reconstructing high-resolution images by reasonable methods without updating devices. Inspired by compressed sensing theory, this study presents and demonstrates a Classified Dictionary Learning method to reconstruct high-resolution infrared images. It classifies features of the samples into several reasonable clusters and trains a dictionary pair for each cluster. The optimal pair of dictionaries is chosen for each image reconstruction, and therefore more satisfactory results are achieved without an increase in computational complexity and time cost. Experiments and results demonstrate that it is a viable method for infrared image reconstruction, since it improves image resolution and recovers detailed information of targets.

  4. Hybrid Radar Emitter Recognition Based on Rough k-Means Classifier and Relevance Vector Machine

    Science.gov (United States)

    Yang, Zhutian; Wu, Zhilu; Yin, Zhendong; Quan, Taifan; Sun, Hongjian

    2013-01-01

    Due to the increasing complexity of electromagnetic signals, there exists a significant challenge for recognizing radar emitter signals. In this paper, a hybrid recognition approach is presented that classifies radar emitter signals by exploiting the different separability of samples. The proposed approach comprises two steps, namely the primary signal recognition and the advanced signal recognition. In the former step, a novel rough k-means classifier, which comprises three regions, i.e., certain area, rough area and uncertain area, is proposed to cluster the samples of radar emitter signals. In the latter step, the samples within the rough boundary are used to train the relevance vector machine (RVM). Then RVM is used to recognize the samples in the uncertain area; therefore, the classification accuracy is improved. Simulation results show that, for recognizing radar emitter signals, the proposed hybrid recognition approach is more accurate, and presents lower computational complexity than traditional approaches. PMID:23344380

  5. Tabular data base construction and analysis from thematic classified Landsat imagery of Portland, Oregon

    Science.gov (United States)

    Bryant, N. A.; George, A. J., Jr.; Hegdahl, R.

    1977-01-01

    A systematic verification of Landsat data classifications of the Portland, Oregon metropolitan area has been undertaken on the basis of census tract data. The degree of systematic misclassification due to the Bayesian classifier used to process the Landsat data was noted for the various suburban, industrialized and central business districts of the metropolitan area. The Landsat determinations of residential land use were employed to estimate the number of automobile trips generated in the region and to model air pollution hazards.

  6. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud.

    Science.gov (United States)

    Munisamy, Shyamala Devi; Chokkalingam, Arun

    2015-01-01

    Cloud computing has pioneered the emerging world by manifesting itself as a service through internet and facilitates third party infrastructure and applications. While customers have no visibility on how their data is stored on service provider's premises, it offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make effective data access through several fuzzy searching techniques. In this paper, we have discussed the existing fuzzy searching techniques and focused on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides classifier search server that creates universal keyword classifier for the multiple keyword request which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.

  7. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud

    Directory of Open Access Journals (Sweden)

    Shyamala Devi Munisamy

    2015-01-01

    Full Text Available Cloud computing has pioneered the emerging world by manifesting itself as a service through internet and facilitates third party infrastructure and applications. While customers have no visibility on how their data is stored on service provider’s premises, it offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make effective data access through several fuzzy searching techniques. In this paper, we have discussed the existing fuzzy searching techniques and focused on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides classifier search server that creates universal keyword classifier for the multiple keyword request which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.

  8. Marginal Generation Technology in the Chinese Power Market towards 2030 Based on Consequential Life Cycle Assessment

    Directory of Open Access Journals (Sweden)

    Guangling Zhao

    2016-09-01

    Full Text Available Electricity consumption is often the hotspot of life cycle assessment (LCA) of products, industrial activities, or services. The objective of this paper is to provide a consistent, scientific, region-specific electricity-supply-based inventory of electricity generation technology for national and regional power grids. Marginal electricity generation technology is pivotal in assessing impacts related to additional consumption of electricity. China covers a large geographical area with regional supply grids; these are arguably equally or less integrated. Meanwhile, it is also a country with internal imbalances in regional energy supply and demand. Therefore, we suggest an approach to achieve a geographical subdivision of the Chinese electricity grid, corresponding to the interprovincial regional power grids, namely the North, the Northeast, the East, the Central, the Northwest, and the Southwest China Grids, and the China Southern Power Grid. The approach combines information from the Chinese national plans for capacity changes in both production and distribution grids, and knowledge of resource availability. The results show that nationally, marginal technology is coal-fired electricity generation, which is the same scenario in the North and Northwest China Grid. In the Northeast, East, and Central China Grid, nuclear power gradually replaces coal-fired electricity and becomes the marginal technology. In the Southwest China Grid and the China Southern Power Grid, the marginal electricity is hydropower towards 2030.

  9. Naive Bayes as opinion classifier to evaluate students satisfaction based on student sentiment in Twitter Social Media

    Science.gov (United States)

    Candra Permana, Fahmi; Rosmansyah, Yusep; Setiawan Abdullah, Atje

    2017-10-01

    Students' activity on social media can provide implicit knowledge and new perspectives for an educational system. Sentiment analysis is a part of text mining that can help to analyze and classify opinion data. This research uses text mining and the naive Bayes method as an opinion classifier, to be used as an alternative method in the process of evaluating students' satisfaction with an educational institution. Based on test results, this system can determine the opinion classification in Bahasa Indonesia using naive Bayes as the opinion classifier with an accuracy of 84%, and in the comparison between the existing system and the proposed system for evaluating students' satisfaction with the learning process, there is only a difference of 16.49%.
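    The pipeline described in this abstract (text mining plus a naive Bayes opinion classifier) can be sketched in a few lines. The snippet below is a minimal illustration using scikit-learn; the example texts and labels are invented placeholders, not the Twitter data or preprocessing used by the authors.

```python
# Minimal sketch of a naive Bayes opinion classifier for short student posts,
# assuming a small hand-labelled corpus; texts/labels are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["dosen menjelaskan dengan baik",      # positive (Bahasa Indonesia)
         "materi kuliah membosankan",          # negative
         "fasilitas kampus sangat membantu",   # positive
         "jadwal ujian kacau sekali"]          # negative
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["dosen sangat membantu"]))  # expected: ['positive']
```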

  10. Evaluation of a rapid LMP-based approach for calculating marginal unit emissions

    International Nuclear Information System (INIS)

    Rogers, Michelle M.; Wang, Yang; Wang, Caisheng; McElmurry, Shawn P.; Miller, Carol J.

    2013-01-01

    Highlights: • Pollutant emissions estimated based on locational marginal price and eGRID data. • Stochastic model using IEEE RTS-96 system used to evaluate LMP approach. • Incorporating membership function enhanced reliability of pollutant estimate. • Error in pollutant estimate typically small for NOx and SO2. - Abstract: To evaluate the sustainability of systems that draw power from electrical grids there is a need to rapidly and accurately quantify pollutant emissions associated with power generation. Air emissions resulting from electricity generation vary widely among power plants based on the types of fuel consumed, the efficiency of the plant, and the type of pollution control systems in service. To address this need, methods for estimating real-time air emissions from power generation based on locational marginal prices (LMPs) have been developed. Based on LMPs, the type of the marginal generating unit can be identified and pollutant emissions are estimated. While conceptually demonstrated, this LMP approach has not been rigorously tested. The purpose of this paper is to (1) improve the LMP method for predicting pollutant emissions and (2) evaluate the reliability of this technique through power system simulations. Previous LMP methods were expanded to include marginal emissions estimates using an LMP Emissions Estimation Method (LEEM). The accuracy of emission estimates was further improved by incorporating a probability distribution function that characterizes generator fuel costs and a membership function (MF) capable of accounting for multiple marginal generation units. Emission estimates were compared to those predicted from power flow simulations. The improved LEEM was found to predict the marginal generation type approximately 70% of the time based on typical system conditions (e.g. loads and fuel costs) without the use of an MF. With the addition of an MF, the LEEM was found to provide emission estimates with

  11. Switchgrass-Based Bioethanol Productivity and Potential Environmental Impact from Marginal Lands in China

    Directory of Open Access Journals (Sweden)

    Xun Zhang

    2017-02-01

    Full Text Available Switchgrass displays an excellent potential to serve as a non-food bioenergy feedstock for bioethanol production in China due to its high potential yield on marginal lands. However, few studies have been conducted on the spatial distribution of switchgrass-based bioethanol production potential in China. This study created a land surface process model, the GIS (Geographic Information System)-based Environmental Policy Integrated Climate (GEPIC) model, coupled with a life cycle analysis (LCA), to explore the spatial distribution of potential bioethanol production and present a comprehensive analysis of energy efficiency and environmental impacts throughout its whole life cycle. It provides a new approach to study the bioethanol productivity and potential environmental impact from marginal lands based on high spatial resolution GIS data, and this applies not only to China, but also to other regions and to other types of energy plant. The results indicate that approximately 59 million ha of marginal land in China are suitable for planting switchgrass, and 22 million tons of ethanol can be produced from this land. Additionally, a potential net energy gain (NEG) of 1.75 × 10^6 million MJ will be achieved if all of the marginal land can be used in China, and Yunnan Province offers the most significant contribution, accounting for 35% of the total. Finally, this study found that the total environmental effect index of switchgrass-based bioethanol is the equivalent of a population of approximately 20,300, and a reduction in the global warming potential (GWP) is the most significant environmental impact.

  12. Design of a high-sensitivity classifier based on a genetic algorithm: application to computer-aided diagnosis

    International Nuclear Information System (INIS)

    Sahiner, Berkman; Chan, Heang-Ping; Petrick, Nicholas; Helvie, Mark A.; Goodsitt, Mitchell M.

    1998-01-01

    A genetic algorithm (GA) based feature selection method was developed for the design of high-sensitivity classifiers, which were tailored to yield high sensitivity with high specificity. The fitness function of the GA was based on the receiver operating characteristic (ROC) partial area index, which is defined as the average specificity above a given sensitivity threshold. The designed GA evolved towards the selection of feature combinations which yielded high specificity in the high-sensitivity region of the ROC curve, regardless of the performance at low sensitivity. This is a desirable quality of a classifier used for breast lesion characterization, since the focus in breast lesion characterization is to diagnose correctly as many benign lesions as possible without missing malignancies. The high-sensitivity classifier, formulated as the Fisher's linear discriminant using GA-selected feature variables, was employed to classify 255 biopsy-proven mammographic masses as malignant or benign. The mammograms were digitized at a pixel size of 0.1 mm × 0.1 mm, and regions of interest (ROIs) containing the biopsied masses were extracted by an experienced radiologist. A recently developed image transformation technique, referred to as the rubber-band straightening transform, was applied to the ROIs. Texture features extracted from the spatial grey-level dependence and run-length statistics matrices of the transformed ROIs were used to distinguish malignant and benign masses. The classification accuracy of the high-sensitivity classifier was compared with that of linear discriminant analysis with stepwise feature selection (LDAsfs). With proper GA training, the ROC partial area of the high-sensitivity classifier above a true-positive fraction of 0.95 was significantly larger than that of LDAsfs, although the latter provided a higher total area (Az) under the ROC curve. By setting an appropriate decision threshold, the high-sensitivity classifier and LDAsfs correctly
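    The fitness function described here, the ROC partial area index (average specificity above a sensitivity threshold), can be sketched as follows. This is a minimal illustration of the metric only: the GA loop, the Fisher discriminant and the exact numerical convention of the original study are not reproduced, and the 0.95 threshold simply mirrors the true-positive fraction quoted in the abstract.

```python
# Coarse, discrete sketch of a partial-area-style fitness: mean specificity
# over the ROC operating points whose sensitivity exceeds a threshold.
import numpy as np
from sklearn.metrics import roc_curve

def partial_area_index(y_true, scores, min_sensitivity=0.95):
    fpr, tpr, _ = roc_curve(y_true, scores)
    mask = tpr >= min_sensitivity
    if not mask.any():
        return 0.0
    return float(np.mean(1.0 - fpr[mask]))   # mean specificity in the region

# Toy demonstration with synthetic scores; a GA would call this as the fitness
# of each candidate feature subset after fitting a linear discriminant on it.
rng = np.random.default_rng(0)
y = np.r_[np.zeros(100, dtype=int), np.ones(100, dtype=int)]
scores = np.r_[rng.normal(0, 1, 100), rng.normal(1.5, 1, 100)]
print(partial_area_index(y, scores))
```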

  13. Memory-Based Specification of Verbal Features for Classifying Animals into Super-Ordinate and Sub-Ordinate Categories

    OpenAIRE

    Takahiro Soshi; Norio Fujimaki; Atsushi Matsumoto; Aya S. Ihara

    2017-01-01

    Accumulating evidence suggests that category representations are based on features. Distinguishing features are considered to define categories, because of all-or-none responses for objects in different categories; however, it is unclear how distinguishing features actually classify objects at various category levels. The present study included 75 animals within three classes (mammal, bird, and fish), along with 195 verbal features. Healthy adults participated in memory-based feature-animal m...

  14. Diagnostics of synchronous motor based on analysis of acoustic signals with application of MFCC and Nearest Mean classifier

    OpenAIRE

    Adam Głowacz; Witold Głowacz; Andrzej Głowacz

    2010-01-01

    The paper presents a method of diagnostics of imminent failure conditions of a synchronous motor. This method is based on a study of acoustic signals generated by the synchronous motor. The sound recognition system is based on algorithms of data processing, such as MFCC and the Nearest Mean classifier with cosine distance. Software to recognize the sounds of the synchronous motor was implemented. The studies were carried out for four imminent failure conditions of the synchronous motor. The results confirm that the sys...

  15. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After associated regularization, the time sequences from the trait changes in space-time under complete expressional production are then arranged line by line in a matrix. Next, the matrix dimensionality is reduced by a method of manifold learning of neighborhood-preserving embedding. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF) and support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former contains more structural characteristics of the data to be classified in space-time.

  16. Prediction of small molecule binding property of protein domains with Bayesian classifiers based on Markov chains.

    Science.gov (United States)

    Bulashevska, Alla; Stein, Martin; Jackson, David; Eils, Roland

    2009-12-01

    Accurate computational methods that can help to predict the biological function of a protein from its sequence are of great interest to research biologists and pharmaceutical companies. One approach to inferring the function of proteins is to predict the interactions between proteins and other molecules. In this work, we propose a machine learning method that uses the primary sequence of a domain to predict its propensity for interaction with small molecules. By curating the Pfam database with respect to the small molecule binding ability of its component domains, we have constructed a dataset of small molecule binding and non-binding domains. This dataset was then used as a training set to learn a Bayesian classifier, which should distinguish members of each class. The domain sequences of both classes are modelled with Markov chains. In a Jack-knife test, our classification procedure achieved predictive accuracies of 77.2% and 66.7% for the binding and non-binding classes, respectively. We demonstrate the applicability of our classifier by using it to identify previously unknown small molecule binding domains. Our predictions are available as supplementary material and can provide very useful information to drug discovery specialists. Given the ubiquitous and essential role small molecules play in biological processes, our method is important for identifying pharmaceutically relevant components of complete proteomes. The software is available from the author upon request.
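    A minimal sketch of the core idea, class-conditional first-order Markov chains over domain sequences combined in a Bayesian decision rule, is given below. The alphabet handling, pseudocounts and toy sequences are assumptions for illustration and do not reproduce the authors' curated Pfam dataset or model order.

```python
import numpy as np

# Class-conditional first-order Markov chains over an amino-acid alphabet,
# compared by log-likelihood (plus optional log-priors) to classify a sequence.
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(ALPHABET)}

def fit_markov(seqs, pseudo=1.0):
    n = len(ALPHABET)
    start = np.full(n, pseudo)          # smoothed initial-symbol counts
    trans = np.full((n, n), pseudo)     # smoothed transition counts
    for s in seqs:
        start[IDX[s[0]]] += 1
        for a, b in zip(s, s[1:]):
            trans[IDX[a], IDX[b]] += 1
    return np.log(start / start.sum()), np.log(trans / trans.sum(axis=1, keepdims=True))

def log_likelihood(seq, model):
    log_start, log_trans = model
    ll = log_start[IDX[seq[0]]]
    for a, b in zip(seq, seq[1:]):
        ll += log_trans[IDX[a], IDX[b]]
    return ll

def classify(seq, model_pos, model_neg, log_prior_pos=0.0, log_prior_neg=0.0):
    return "binding" if (log_likelihood(seq, model_pos) + log_prior_pos >
                         log_likelihood(seq, model_neg) + log_prior_neg) else "non-binding"

binding = fit_markov(["ACDKLM", "ACDKIV", "ACEKLM"])        # toy training sequences
non_binding = fit_markov(["GGSTPW", "GGSTQW", "GASTPW"])
print(classify("ACDKLV", binding, non_binding))
```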

  17. ELM BASED CAD SYSTEM TO CLASSIFY MAMMOGRAMS BY THE COMBINATION OF CLBP AND CONTOURLET

    Directory of Open Access Journals (Sweden)

    S Venkatalakshmi

    2017-05-01

    Full Text Available Breast cancer is a serious life threat to womanhood worldwide. Mammography is a promising screening tool, which can show the abnormality being detected. However, physicians find it difficult to detect the affected regions, as the size of microcalcifications is very small. Hence it would be better if a CAD system could accompany the physician in detecting the malicious regions. Taking this as a challenge, this paper presents a CAD system for mammogram classification which is proven to be accurate and reliable. The entire work is decomposed into four different stages and the outcome of a phase is passed as the input of the following phase. Initially, the mammogram is pre-processed by an adaptive median filter and the segmentation is done by GHFCM. The features are extracted by combining the texture feature descriptors Completed Local Binary Pattern (CLBP) and contourlet to frame the feature sets. In the training phase, the Extreme Learning Machine (ELM) is trained with the feature sets. During the testing phase, the ELM can classify between normal, malignant and benign types of cancer. The performance of the proposed approach is analysed by varying the classifier, feature extractors and parameters of the feature extractor. From the experimental analysis, it is evident that the proposed work outperforms the analogous techniques in terms of accuracy, sensitivity and specificity.

  18. Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure

    Directory of Open Access Journals (Sweden)

    Xiaodong Zeng

    2014-01-01

    Full Text Available A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of the classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis; that is, a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutual restraint factors; that is, an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms, a genetic algorithm and a forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to others in most cases.
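    The quality score sketched below illustrates the idea of a weighted harmonic mean of ensemble accuracy and diversity; the exact algebraic form and the equal default weights are assumptions, since the abstract does not spell out the formula.

```python
def wad_score(accuracy, diversity, w_acc=0.5, w_div=0.5):
    """Weighted-harmonic-mean quality score for a classifier ensemble.

    Sketch of a WAD-style measure: a harmonic mean of ensemble accuracy and
    diversity balanced by two weights (this particular form is an assumption).
    """
    if accuracy <= 0 or diversity <= 0:
        return 0.0
    return (w_acc + w_div) / (w_acc / accuracy + w_div / diversity)

# Example: an accurate but homogeneous ensemble vs. a more balanced one.
print(wad_score(0.90, 0.10))  # low score: diversity is the bottleneck
print(wad_score(0.82, 0.60))  # higher score despite lower raw accuracy
```

The harmonic mean penalises whichever of the two factors is smaller, which matches the abstract's point that accuracy and diversity restrain each other.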

  19. Tolerance to missing data using a likelihood ratio based classifier for computer-aided classification of breast cancer

    International Nuclear Information System (INIS)

    Bilska-Wolak, Anna O; Floyd, Carey E Jr

    2004-01-01

    While mammography is a highly sensitive method for detecting breast tumours, its ability to differentiate between malignant and benign lesions is low, which may result in as many as 70% of biopsies being unnecessary. The purpose of this study was to develop a highly specific computer-aided diagnosis algorithm to improve classification of mammographic masses. A classifier based on the likelihood ratio was developed to accommodate cases with missing data. Data for development included 671 biopsy cases (245 malignant), with biopsy-proved outcome. Sixteen features based on the BI-RADS™ lexicon and patient history had been recorded for the cases, with 1.3 ± 1.1 missing feature values per case. Classifier evaluation methods included receiver operating characteristic and leave-one-out bootstrap sampling. The classifier achieved 32% specificity at 100% sensitivity on the 671 cases with 16 features that had missing values. Utilizing just the seven features present for all cases resulted in decreased performance at 100% sensitivity with average 19% specificity. No cases and no feature data were omitted during classifier development, showing that it is more beneficial to utilize cases with missing values than to discard incomplete cases that cannot be handled by many algorithms. Classification of mammographic masses was commendable at high sensitivity levels, indicating that benign cases could be potentially spared from biopsy.

  20. Comparative analysis of instance selection algorithms for instance-based classifiers in the context of medical decision support

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Tourassi, Georgia D; Malof, Jordan M

    2011-01-01

    When constructing a pattern classifier, it is important to make best use of the instances (a.k.a. cases, examples, patterns or prototypes) available for its development. In this paper we present an extensive comparative analysis of algorithms that, given a pool of previously acquired instances, attempt to select those that will be the most effective to construct an instance-based classifier in terms of classification performance, time efficiency and storage requirements. We evaluate seven previously proposed instance selection algorithms and compare their performance to simple random selection of instances. We perform the evaluation using k-nearest neighbor classifier and three classification problems: one with simulated Gaussian data and two based on clinical databases for breast cancer detection and diagnosis, respectively. Finally, we evaluate the impact of the number of instances available for selection on the performance of the selection algorithms and conduct initial analysis of the selected instances. The experiments show that for all investigated classification problems, it was possible to reduce the size of the original development dataset to less than 3% of its initial size while maintaining or improving the classification performance. Random mutation hill climbing emerges as the superior selection algorithm. Furthermore, we show that some previously proposed algorithms perform worse than random selection. Regarding the impact of the number of instances available for the classifier development on the performance of the selection algorithms, we confirm that the selection algorithms are generally more effective as the pool of available instances increases. In conclusion, instance selection is generally beneficial for instance-based classifiers as it can improve their performance, reduce their storage requirements and improve their response time. However, choosing the right selection algorithm is crucial.
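    Random mutation hill climbing, the selection algorithm this study found superior, can be sketched as a bit-flip search over an inclusion mask scored by a k-nearest-neighbor classifier. The snippet below is a minimal illustration under assumed settings (a separate validation split, k = 1, a fixed iteration budget); it is not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def rmhc_instance_selection(X_pool, y_pool, X_val, y_val, n_iter=2000, k=1, seed=0):
    """Random-mutation hill climbing over an instance-inclusion mask (sketch)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(len(X_pool)) < 0.05          # start from a small random subset
    mask[rng.integers(len(X_pool))] = True         # make sure it is non-empty

    def score(m):
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_pool[m], y_pool[m])
        return clf.score(X_val, y_val)

    best = score(mask)
    for _ in range(n_iter):
        i = rng.integers(len(X_pool))
        mask[i] = ~mask[i]                         # flip one instance in/out
        if mask.any():
            new = score(mask)
            if new >= best:                        # keep non-worsening mutations
                best = new
                continue
        mask[i] = ~mask[i]                         # otherwise revert the flip
    return mask, best

# Example (synthetic data):
# from sklearn.datasets import make_classification
# X, y = make_classification(n_samples=600, n_features=10, random_state=0)
# mask, acc = rmhc_instance_selection(X[:400], y[:400], X[400:], y[400:])
```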

  1. Heterogeneous classifier fusion for ligand-based virtual screening: or, how decision making by committee can be a good thing.

    Science.gov (United States)

    Riniker, Sereina; Fechner, Nikolas; Landrum, Gregory A

    2013-11-25

    The concept of data fusion - the combination of information from different sources describing the same object with the expectation of generating a more accurate representation - has found application in a very broad range of disciplines. In the context of ligand-based virtual screening (VS), data fusion has been applied to combine knowledge from either different active molecules or different fingerprints to improve similarity search performance. Machine-learning (ML) methods based on fusion of multiple homogeneous classifiers, in particular random forests, have also been widely applied in the ML literature. The heterogeneous version of classifier fusion - fusing the predictions from different model types - has been less explored. Here, we investigate heterogeneous classifier fusion for ligand-based VS using three different ML methods (RF, naïve Bayes (NB), and logistic regression (LR)) with four 2D fingerprints (atom pairs, topological torsions, the RDKit fingerprint, and a circular fingerprint). The methods are compared using a previously developed benchmarking platform for 2D fingerprints which is extended to ML methods in this article. The original data sets are filtered for difficulty, and a new set of challenging data sets from ChEMBL is added. Data sets were also generated for a second use case: starting from a small set of related actives instead of diverse actives. The final fused model consistently outperforms the other approaches across the broad variety of targets studied, indicating that heterogeneous classifier fusion is a very promising approach for ligand-based VS. The new data sets together with the adapted source code for ML methods are provided in the Supporting Information.
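    Heterogeneous classifier fusion of the three model types named in the abstract can be sketched with scikit-learn's soft-voting ensemble, which averages predicted probabilities. The random binary matrix below merely stands in for fingerprint bit vectors; in a real ligand-based VS setting the fingerprints (e.g. computed with RDKit) and the fusion rule of the paper would be substituted.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB

# Stand-in data: binary vectors playing the role of 2D fingerprint bits.
X, y = make_classification(n_samples=400, n_features=128, n_informative=20,
                           random_state=0)
X = (X > 0).astype(int)

fused = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("nb", BernoulliNB()),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")                            # fuse by averaging class probabilities
fused.fit(X, y)
print(fused.predict_proba(X[:3]))
```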

  2. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    Science.gov (United States)

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis or medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; and the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was jointly combined with the SVM classifier in order to optimize the parameters C and γ to increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can be more than 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.
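    A minimal sketch of the two ingredients, SVM-RFE feature ranking followed by a search over C and γ for a multiclass RBF SVM, is shown below. A plain grid search stands in for the Taguchi design, and the small iris dataset stands in for the Dermatology and Zoo databases.

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=2)),  # rank by linear-SVM weights
    ("svm", SVC(kernel="rbf")),
])
# Plain (C, gamma) grid search as a stand-in for the Taguchi parameter design.
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10, 100],
                           "svm__gamma": [0.01, 0.1, 1]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```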

  3. Application of Machine Learning Approaches for Classifying Sitting Posture Based on Force and Acceleration Sensors

    Directory of Open Access Journals (Sweden)

    Roland Zemp

    2016-01-01

    Full Text Available Occupational musculoskeletal disorders, particularly chronic low back pain (LBP), are ubiquitous due to prolonged static sitting or nonergonomic sitting positions. Therefore, the aim of this study was to develop an instrumented chair with force and acceleration sensors to determine the accuracy of automatically identifying the user’s sitting position by applying five different machine learning methods (Support Vector Machines, Multinomial Regression, Boosting, Neural Networks, and Random Forest). Forty-one subjects were requested to sit four times in seven different prescribed sitting positions (total 1148 samples). Sixteen force sensor values and the backrest angle were used as the explanatory variables (features) for the classification. The different classification methods were compared by means of a Leave-One-Out cross-validation approach. The best performance was achieved using the Random Forest classification algorithm, producing a mean classification accuracy of 90.9% for subjects with which the algorithm was not familiar. The classification accuracy varied between 81% and 98% for the seven different sitting positions. The present study showed the possibility of accurately classifying different sitting positions by means of the introduced instrumented office chair combined with machine learning analyses. The use of such novel approaches for the accurate assessment of chair usage could offer insights into the relationships between sitting position, sitting behaviour, and the occurrence of musculoskeletal disorders.

  4. Application of Machine Learning Approaches for Classifying Sitting Posture Based on Force and Acceleration Sensors.

    Science.gov (United States)

    Zemp, Roland; Tanadini, Matteo; Plüss, Stefan; Schnüriger, Karin; Singh, Navrag B; Taylor, William R; Lorenzetti, Silvio

    2016-01-01

    Occupational musculoskeletal disorders, particularly chronic low back pain (LBP), are ubiquitous due to prolonged static sitting or nonergonomic sitting positions. Therefore, the aim of this study was to develop an instrumented chair with force and acceleration sensors to determine the accuracy of automatically identifying the user's sitting position by applying five different machine learning methods (Support Vector Machines, Multinomial Regression, Boosting, Neural Networks, and Random Forest). Forty-one subjects were requested to sit four times in seven different prescribed sitting positions (total 1148 samples). Sixteen force sensor values and the backrest angle were used as the explanatory variables (features) for the classification. The different classification methods were compared by means of a Leave-One-Out cross-validation approach. The best performance was achieved using the Random Forest classification algorithm, producing a mean classification accuracy of 90.9% for subjects with which the algorithm was not familiar. The classification accuracy varied between 81% and 98% for the seven different sitting positions. The present study showed the possibility of accurately classifying different sitting positions by means of the introduced instrumented office chair combined with machine learning analyses. The use of such novel approaches for the accurate assessment of chair usage could offer insights into the relationships between sitting position, sitting behaviour, and the occurrence of musculoskeletal disorders.
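    The evaluation protocol used in both versions of this study, training a Random Forest and testing it on subjects it has never seen, corresponds to leave-one-group-out cross-validation with the subject as the group. The sketch below illustrates that protocol on random stand-in data shaped like the study (41 subjects, 17 features, 7 postures); on such random data the printed accuracy is of course only chance level.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Random stand-in for the 16 force-sensor values plus backrest angle (17
# features) and 7 posture labels; each subject's samples are held out together.
rng = np.random.default_rng(0)
n_subjects, n_reps = 41, 28                    # 41 subjects x 28 samples = 1148
X = rng.normal(size=(n_subjects * n_reps, 17))
y = rng.integers(0, 7, size=n_subjects * n_reps)
groups = np.repeat(np.arange(n_subjects), n_reps)

scores = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                         X, y, groups=groups, cv=LeaveOneGroupOut())
print(scores.mean())                           # ~1/7 on random data
```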

  5. Application of Alkenone 14C-Based chronostratigraphy in carbonate barren sediments on the Peru Margin.

    Science.gov (United States)

    Higginson, M. J.; Altabet, M. A.; Herbert, T. D.

    2003-04-01

    Despite the availability of high-quality sediment cores in key locations, little paleoclimatic information exists for the Peru margin largely because poor carbonate preservation severely restricts the use of traditional carbonate-based proxies for stratigraphy, dating, and paleo-environmental reconstruction. Many sites also include hiatuses produced by the variable influence of undercurrents on sediment accumulation. To overcome these difficulties, we have developed (in collaboration with T. Eglinton, WHOI) a laboratory facility to successfully extract and purify haptophyte-derived alkenones for compound specific 14C AMS dating (modified from OHKOUCHI et al., 2002). This avoids potential problems with dating bulk organic carbon which we assume, even in an upwelling environment as highly productive as the Peru margin, is not a priori solely of marine origin. In a recently collected, mid-Peru Margin core (ODP Leg 201 Site 1228D), comparison of our alkenone 14C dates with bulk sediment organic carbon dates and known stratigraphic markers produces a very well constrained, curvilinear age-depth relationship for at least the last 14 Kyr. A discrete ash layer at Site 1228D with an adjacent alkenone 14C age of 3890 ± 350 yr, is within error identical to the 14C age of a prominent ash layer (3800 ± 50 yr) found west of the large Peruvian El Misti volcano (16°18'S, 71°24'W). In summary, these results show that the Peru margin alkenones are autochthonous (i.e. not from an older, distant source) and provide sufficient dating precision to permit, for the first time, high-resolution paleoceanographic studies in this highly important marine province. Based upon this new chronology, synchronous changes in alkenone-derived SST estimates in two of our independently-dated records are the first to record at high-resolution (a) a large LGM-Holocene SST range in the Tropics (up to 7.8 °C during brief events in this upwelling location); and (b) sharp coolings (4 °C) consistent with

  6. Marginal microleakage of class V resin-based composite restorations bonded with six one-step self-etch systems

    Directory of Open Access Journals (Sweden)

    Alfonso Sánchez-Ayala

    2013-06-01

    Full Text Available This study compared the microleakage of class V restorations bonded with various one-step self-etching adhesives. Seventy class V resin-based composite restorations were prepared on the buccal and lingual surfaces of 35 premolars, by using: Clearfil S3 Bond, G-Bond, iBond, One Coat 7.0, OptiBond All-In-One, or Xeno IV. The Adper Single Bond etch-and-rinse two-step adhesive was employed as a control. Specimens were thermocycled for 500 cycles in separate water baths at 5°C and 55°C and loaded under 40 to 70 N for 50,000 cycles. Marginal microleakage was measured based on the penetration of a tracer agent. Although the control showed no microleakage at the enamel margins, there were no differences between groups (p = 0.06). None of the adhesives avoided microleakage at the dentin margins, and they displayed similar performances (p = 0.76). When both margins were compared, iBond® presented higher microleakage (p < 0.05) at the enamel margins (median, 1.00; Q3–Q1, 1.25–0.00) compared to the dentin margins (median, 0.00; Q3–Q1, 0.25–0.00). The study adhesives showed similar abilities to seal the margins of class V restorations, except for iBond®, which presented lower performance at the enamel margin.

  7. Fuzziness-based active learning framework to enhance hyperspectral image classification performance for discriminative and generative classifiers.

    Directory of Open Access Journals (Sweden)

    Muhammad Ahmad

    Full Text Available Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome the said problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g., SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we showed that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with the small amount of training data as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which compares favorably with the state-of-the-art methods.
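    The sample-selection step can be sketched as computing a fuzziness value from each unlabelled sample's class-membership vector and keeping the most ambiguous samples. The fuzziness formula below is the standard entropy-like measure of a membership matrix, and the selection rule is simplified (it ignores the boundary-distance criterion described in the abstract), so treat it as an illustration rather than the authors' procedure.

```python
import numpy as np

def fuzziness(memberships, eps=1e-12):
    """Per-sample fuzziness of a class-membership (probability) matrix."""
    mu = np.clip(memberships, eps, 1 - eps)
    return -np.mean(mu * np.log(mu) + (1 - mu) * np.log(1 - mu), axis=1)

def select_candidates(probabilities, n_select):
    """Pick the unlabelled samples the current model is most 'fuzzy' about."""
    f = fuzziness(probabilities)
    return np.argsort(f)[-n_select:]            # indices of the most ambiguous samples

# Example: probabilities from any probabilistic classifier (SVM with Platt
# scaling, KNN vote fractions, ...), shape (n_unlabelled, n_classes).
probs = np.array([[0.98, 0.01, 0.01],           # confident -> low fuzziness
                  [0.40, 0.35, 0.25]])          # ambiguous -> high fuzziness
print(fuzziness(probs), select_candidates(probs, 1))
```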

  8. A Hierarchical Method for Transient Stability Prediction of Power Systems Using the Confidence of a SVM-Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Yanzhen Zhou

    2016-09-01

    Full Text Available Machine learning techniques have been widely used in transient stability prediction of power systems. When using the post-fault dynamic responses, it is difficult to draw a definite conclusion about how long the duration of response data used should be in order to balance the accuracy and speed. Besides, previous studies have the problem of lacking consideration for the confidence level. To solve these problems, a hierarchical method for transient stability prediction based on the confidence of an ensemble classifier using multiple support vector machines (SVMs) is proposed. Firstly, multiple datasets are generated by bootstrap sampling, then features are randomly picked to compress the datasets. Secondly, the confidence indices are defined and multiple SVMs are built based on these generated datasets. By synthesizing the probabilistic outputs of multiple SVMs, the prediction results and confidence of the ensemble classifier will be obtained. Finally, different ensemble classifiers with different response times are built to construct different layers of the proposed hierarchical scheme. The simulation results show that the proposed hierarchical method can balance the accuracy and rapidity of the transient stability prediction. Moreover, the hierarchical method can reduce the misjudgments of unstable instances and cooperate with the time domain simulation to ensure the security and stability of power systems.
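    The first two steps, bootstrap resampling with random feature subsets and probability-output SVMs whose averaged output doubles as a confidence index, can be sketched as below. The data, ensemble size and the 0.8 confidence threshold are assumptions; the hierarchical layering over different response-time windows is only indicated by the deferred-decision flag.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=30, random_state=0)
rng = np.random.default_rng(0)

members = []
for _ in range(10):
    idx = rng.integers(0, len(X), len(X))                     # bootstrap sample
    feats = rng.choice(X.shape[1], size=15, replace=False)    # random feature subset
    svm = SVC(probability=True, random_state=0).fit(X[idx][:, feats], y[idx])
    members.append((svm, feats))

def predict_with_confidence(x_row):
    # Average the probabilistic outputs of the SVM members.
    p = np.mean([m.predict_proba(x_row[feats].reshape(1, -1))[0, 1]
                 for m, feats in members])
    label = int(p >= 0.5)
    confident = max(p, 1 - p) >= 0.8             # otherwise defer to a later layer
    return label, p, confident

print(predict_with_confidence(X[0]))
```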

  9. A Fuzzy Logic-Based Personalized Method to Classify Perceived Exertion in Workplaces Using a Wearable Heart Rate Sensor

    Directory of Open Access Journals (Sweden)

    Pablo Pancardo

    2018-01-01

    Full Text Available Knowing the perceived exertion of workers during their physical activities facilitates the decision-making of supervisors regarding worker allocation to the appropriate job, actions to prevent accidents, and reassignment of tasks, among others. However, although wearable heart rate sensors represent an effective way to capture perceived exertion, ergonomic methods are generic and they do not consider the diffuse nature of the ranges that classify the efforts. Personalized monitoring is needed to enable a real and efficient classification of perceived individual efforts. In this paper, we propose a heart rate-based personalized method to assess perceived exertion; our method uses fuzzy logic as an option to manage imprecision and uncertainty in the involved variables. We conducted experiments with cleaning staff and obtained results that highlight the importance of a custom method to classify the perceived exertion of people doing physical work.
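    A minimal sketch of the idea, fuzzy membership functions over a personalised heart-rate variable, is given below. The use of percentage of heart-rate reserve, the triangular membership shapes and the three exertion labels are illustrative assumptions, not the rule base of the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def perceived_exertion(hr, hr_rest, hr_max):
    # Personalised variable: percentage of heart-rate reserve (%HRR).
    hrr = 100.0 * (hr - hr_rest) / (hr_max - hr_rest)
    memberships = {
        "light":    tri(hrr, -10, 20, 45),
        "moderate": tri(hrr, 30, 50, 70),
        "hard":     tri(hrr, 55, 80, 110),
    }
    return max(memberships, key=memberships.get), memberships

label, mu = perceived_exertion(hr=128, hr_rest=62, hr_max=185)
print(label, {k: round(float(v), 2) for k, v in mu.items()})
```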

  10. Competent person for radiation protection. Practical radiation protection for base nuclear installations and installations classified for the environment protection

    International Nuclear Information System (INIS)

    Pin, A.; Perez, S.; Videcoq, J.; Ammerich, M.

    2008-01-01

    This book corresponds to the practical module devoted to base nuclear installations and to installations classified for environment protection, that is to say permanent nuclear installations likely to present risks for the public, the environment or workers. In compliance with the legislation, which stipulates that this module must allow the acquired theoretical training to be applied to practical work situations, it includes seven chapters as follows: generalities on access conditions in regulated areas of base nuclear installations or installations classified for environment protection, and clothing against contamination; use of control devices and management of damaged situations; methodology of workplace studies, completed by the application to a real case of a study of an intervention on a containment wall; a part entitled 'take stock of the situation' ends every chapter and invites the reader to check his or her understanding and acquisition of the knowledge treated. (N.C.)

  11. Marginal Generation Technology in the Chinese Power Market towards 2030 Based on Consequential Life Cycle Assessment

    DEFF Research Database (Denmark)

    Zhao, Guangling; Guerrero, Josep M.; Pei, Yingying

    2016-01-01

    Electricity consumption is often the hotspot of life cycle assessment (LCA) of products, industrial activities, or services. The objective of this paper is to provide a consistent, scientific, region-specific electricity-supply-based inventory of electricity generation technology for national...... and regional power grids. Marginal electricity generation technology is pivotal in assessing impacts related to additional consumption of electricity. China covers a large geographical area with regional supply grids; these are arguably equally or less integrated. Meanwhile, it is also a country with internal...

  12. Design of a Fuzzy Rule Base Expert System to Predict and Classify ...

    African Journals Online (AJOL)

    The main objective of the design of a rule-based expert system using a fuzzy logic approach is to predict and forecast the risk level of cardiac patients in order to avoid sudden death. In this proposed system, uncertainty is captured using the rule base, and classification using fuzzy c-means clustering is discussed to overcome the risk level, ...

  13. Naive Bayes classifiers for verbal autopsies: comparison to physician-based classification for 21,000 child and adult deaths.

    Science.gov (United States)

    Miasnikof, Pierre; Giannakeas, Vasily; Gomes, Mireille; Aleksandrowicz, Lukasz; Shestopaloff, Alexander Y; Alam, Dewan; Tollman, Stephen; Samarikhalaj, Akram; Jha, Prabhat

    2015-11-25

    Verbal autopsies (VA) are increasingly used in low- and middle-income countries where most causes of death (COD) occur at home without medical attention, and home deaths differ substantially from hospital deaths. Hence, there is no plausible "standard" against which VAs for home deaths may be validated. Previous studies have shown contradictory performance of automated methods compared to physician-based classification of CODs. We sought to compare the performance of the classic naive Bayes classifier (NBC) versus existing automated classifiers, using physician-based classification as the reference. We compared the performance of NBC, an open-source Tariff Method (OTM), and InterVA-4 on three datasets covering about 21,000 child and adult deaths: the ongoing Million Death Study in India, and health and demographic surveillance sites in Agincourt, South Africa and Matlab, Bangladesh. We applied several training and testing splits of the data to quantify the sensitivity and specificity compared to physician coding for individual CODs and to test the cause-specific mortality fractions at the population level. The NBC achieved comparable sensitivity (median 0.51, range 0.48-0.58) to OTM (median 0.50, range 0.41-0.51), with InterVA-4 having lower sensitivity (median 0.43, range 0.36-0.47) in all three datasets, across all CODs. Consistency of CODs was comparable for NBC and InterVA-4 but lower for OTM. NBC and OTM achieved better performance when using a local rather than a non-local training dataset. At the population level, NBC scored the highest cause-specific mortality fraction accuracy across the datasets (median 0.88, range 0.87-0.93), followed by InterVA-4 (median 0.66, range 0.62-0.73) and OTM (median 0.57, range 0.42-0.58). NBC outperforms current similar COD classifiers at the population level. Nevertheless, no current automated classifier adequately replicates physician classification for individual CODs. There is a need for further research on automated
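    The population-level metric reported here, cause-specific mortality fraction (CSMF) accuracy, has a standard definition in the verbal-autopsy literature and is easy to compute; a small sketch follows. The toy fractions are invented for illustration and are not figures from the study.

```python
import numpy as np

def csmf_accuracy(true_fractions, predicted_fractions):
    """Cause-specific mortality fraction (CSMF) accuracy.

    Standard definition: 1 - sum(|pred - true|) / (2 * (1 - min(true))).
    """
    t = np.asarray(true_fractions, dtype=float)
    p = np.asarray(predicted_fractions, dtype=float)
    return 1.0 - np.abs(p - t).sum() / (2.0 * (1.0 - t.min()))

true_csmf = [0.40, 0.35, 0.15, 0.10]   # illustrative cause fractions
pred_csmf = [0.45, 0.30, 0.15, 0.10]
print(round(csmf_accuracy(true_csmf, pred_csmf), 3))   # 0.944
```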

  14. Adaptive Marginal Costs-Based Distributed Economic Control of Microgrid Clusters Considering Line Loss

    Directory of Open Access Journals (Sweden)

    Xiaoqian Zhou

    2017-12-01

    Full Text Available When several microgrids (MG) are interconnected into microgrid clusters (MGC), they have great potential to improve their reliability. Traditional droop control tends to make the total operating costs higher as the power is distributed by capacity ratios of distributed energy resources (DERs). This paper proposes an adaptive distributed economic control for islanded microgrids which considers line loss: specifically, an interesting marginal-costs-based economic droop control is proposed, and a consensus-based adaptive controller is applied to deal with power limits and capacity constraints for storage. The whole expense can be effectively lowered by achieving identical marginal costs for DERs in the MGC. Specifically, the capacity constraints for storage only are also included for further optimization. Moreover, consensus-based distributed secondary controllers are used to rapidly restore system frequency and voltage magnitudes. The above controllers only need to interact with neighbor DERs by a sparse communication network, eliminating the necessity of a central controller and enhancing the stability.

  15. A Core Set Based Large Vector-Angular Region and Margin Approach for Novelty Detection

    Directory of Open Access Journals (Sweden)

    Jiusheng Chen

    2016-01-01

    Full Text Available A large vector-angular region and margin (LARM) approach is presented for novelty detection based on imbalanced data. The key idea is to construct the largest vector-angular region in the feature space to separate normal training patterns; meanwhile, maximize the vector-angular margin between the surface of this optimal vector-angular region and abnormal training patterns. In order to improve the generalization performance of LARM, the vector-angular distribution is optimized by maximizing the vector-angular mean and minimizing the vector-angular variance, which separates the normal and abnormal examples well. However, the inherent computation of the quadratic programming (QP) solver takes O(n^3) training time and at least O(n^2) space, which might be computationally prohibitive for large-scale problems. By a (1+ε)- and (1-ε)-approximation algorithm, the core set based LARM algorithm is proposed for fast training of the LARM problem. Experimental results based on imbalanced datasets have validated the favorable efficiency of the proposed approach in novelty detection.

  16. Classifying MCI Subtypes in Community-Dwelling Elderly Using Cross-Sectional and Longitudinal MRI-Based Biomarkers

    Directory of Open Access Journals (Sweden)

    Hao Guan

    2017-09-01

    Full Text Available Amnestic MCI (aMCI) and non-amnestic MCI (naMCI) are considered to differ in etiology and outcome. Accurately classifying MCI into meaningful subtypes would enable early intervention with targeted treatment. In this study, we employed structural magnetic resonance imaging (MRI) for MCI subtype classification. This was carried out in a sample of 184 community-dwelling individuals (aged 73–85 years). Cortical surface-based measurements were computed from longitudinal and cross-sectional scans. By introducing a feature selection algorithm, we identified a set of discriminative features, and further investigated the temporal patterns of these features. A voting classifier was trained and evaluated via 10 iterations of cross-validation. The best classification accuracies achieved were: 77% (naMCI vs. aMCI), 81% (aMCI vs. cognitively normal (CN)) and 70% (naMCI vs. CN). The best results for differentiating aMCI from naMCI were achieved with baseline features. Hippocampus, amygdala and frontal pole were found to be most discriminative for classifying MCI subtypes. Additionally, we observed the dynamics of classification of several MRI biomarkers. Learning the dynamics of atrophy may aid in the development of better biomarkers, as it may track the progression of cognitive impairment.

  17. Network Intrusion Detection System (NIDS in Cloud Environment based on Hidden Naïve Bayes Multiclass Classifier

    Directory of Open Access Journals (Sweden)

    Hafza A. Mahmood

    2018-04-01

    Full Text Available Cloud Environment is a next-generation internet-based computing system that supplies customizable services to the end user to work or access the various cloud applications. In order to provide security and decrease the damage to information systems, networks and computer systems, it is important to provide an intrusion detection system (IDS). Cloud environments are now under threat from network intrusions, with Denial of Service (DoS) attacks, as one of the most prevalent and offensive means, causing a dangerous impact on cloud computing systems. This paper proposes a Hidden Naïve Bayes (HNB) classifier to handle DoS attacks, which is a data mining (DM) model used to relax the conditional independence assumption of the Naïve Bayes classifier (NB). The proposed system uses the HNB classifier supported with discretization and feature selection, where selecting the best features enhances the performance of the system and reduces the consumed time. To evaluate the performance of the proposed system, the KDD CUP 99 and NSL-KDD datasets have been used. The experimental results show that the HNB classifier improves the performance of NIDS in terms of accuracy and detecting DoS attacks, where the accuracy of detecting DoS is 100% on three KDD CUP 99 test datasets using only 12 features selected by gain ratio, while on the NSL-KDD dataset the accuracy of detecting DoS attacks is 90% on three experimental NSL-KDD datasets using only 10 selected features.
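    The preprocessing chain described, discretisation, feature selection and a naive-Bayes-family classifier, can be sketched as below. Because Hidden Naïve Bayes and gain ratio are not available in scikit-learn, a plain CategoricalNB and mutual-information scoring stand in for them, and random data stand in for the KDD/NSL-KDD records; this is an illustration of the pipeline shape, not the paper's classifier.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import KBinsDiscretizer

# Stand-in for network-traffic records with 41 continuous features.
X, y = make_classification(n_samples=1000, n_features=41, n_informative=12,
                           random_state=0)

# 1) Discretise the continuous features into ordinal bins.
Xd = KBinsDiscretizer(n_bins=10, encode="ordinal",
                      strategy="quantile").fit_transform(X).astype(int)
# 2) Keep the 12 most informative features (mutual information as a stand-in
#    for the gain-ratio ranking used in the paper).
selector = SelectKBest(mutual_info_classif, k=12).fit(Xd, y)
Xs = selector.transform(Xd)
# 3) Fit a categorical naive Bayes model on the reduced, discretised data.
nb = CategoricalNB().fit(Xs, y)
print(nb.score(Xs, y))
```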

  18. Classifying Normal and Abnormal Status Based on Video Recordings of Epileptic Patients

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-01-01

    Full Text Available Based on video recordings of the movement of patients with epilepsy, this paper proposed a human action recognition scheme to detect distinct motion patterns and to distinguish the normal status from the abnormal status of epileptic patients. The scheme first extracts local features and holistic features, which are complementary to each other. Afterwards, a support vector machine is applied to classification. Based on the experimental results, this scheme obtains a satisfactory classification result and provides a fundamental analysis towards the human-robot interaction with socially assistive robots in caring for patients with epilepsy (or other patients with brain disorders) in order to protect them from injury.

  19. A heuristic ranking approach on capacity benefit margin determination using Pareto-based evolutionary programming technique.

    Science.gov (United States)

    Othman, Muhammad Murtadha; Abd Rahman, Nurulazmi; Musirin, Ismail; Fotuhi-Firuzabad, Mahmud; Rajabi-Ghahnavieh, Abbas

    2015-01-01

    This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by the load-serving entities (LSE) to estimate a certain margin of transfer capability so that a reliable access to generation through interconnected system could be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies are conducted on the modified IEEE-RTS79 and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is in terms of flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.

  20. A Heuristic Ranking Approach on Capacity Benefit Margin Determination Using Pareto-Based Evolutionary Programming Technique

    Directory of Open Access Journals (Sweden)

    Muhammad Murtadha Othman

    2015-01-01

    Full Text Available This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by the load-serving entities (LSE) to estimate a certain margin of transfer capability so that a reliable access to generation through interconnected system could be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies are conducted on the modified IEEE-RTS79 and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is in terms of flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.

  1. Reliability-Based Marginal Cost Pricing Problem Case with Both Demand Uncertainty and Travelers’ Perception Errors

    Directory of Open Access Journals (Sweden)

    Shaopeng Zhong

    2013-01-01

    Full Text Available Focusing on the first-best marginal cost pricing (MCP) in a stochastic network with both travel demand uncertainty and stochastic perception errors within the travelers’ route choice decision processes, this paper develops a perceived risk-based stochastic network marginal cost pricing (PRSN-MCP) model. Numerical examples based on an integrated method combining the moment analysis approach, the fitting distribution method, and the reliability measures are also provided to demonstrate the importance and properties of the proposed model. The main finding is that ignoring the effect of travel time reliability and travelers’ perception errors may significantly reduce the performance of the first-best MCP tolls, especially under high travelers’ confidence and network congestion levels. The analysis result could also enhance our understanding of (1) the effect of stochastic perception error (SPE) on the perceived travel time distribution and the components of road toll; (2) the effect of road toll on the actual travel time distribution and its reliability measures; (3) the effect of road toll on the total network travel time distribution and its statistics; and (4) the effect of travel demand level and the value of reliability (VoR) level on the components of road toll.

  2. Measuring the impact of marginal tax rate reform on the revenue base of South Africa using a microsimulation tax model

    Directory of Open Access Journals (Sweden)

    Yolande Jordaan

    2015-08-01

    Full Text Available This paper is primarily concerned with the revenue and tax efficiency effects of adjustments to marginal tax rates on individual income as an instrument of possible tax reform. The hypothesis is that changes to marginal rates affect not only the revenue base, but also tax efficiency and the optimum level of taxes that supports economic growth. Using an optimal revenue-maximising rate (based on Laffer analysis), the elasticity of taxable income is derived with respect to marginal tax rates for each taxable-income category. These elasticities are then used to quantify the impact of changes in marginal rates on the revenue base and tax efficiency using a microsimulation (MS) tax model. In this first paper on the research results, much attention is paid to the structure of the model and the way in which the database has been compiled. The model allows for the dissemination of individual taxpayers by income groups, gender, educational level, age group, etc. Simulations include a scenario with higher marginal rates which is also more progressive (as in the 1998/1999 fiscal year), in which case tax revenue increases but the increase is overshadowed by a more than proportional decrease in tax efficiency as measured by its deadweight loss. On the other hand, a lowering of marginal rates (to bring South Africa’s marginal rates more in line with those of its peers) improves tax efficiency but also results in a substantial revenue loss. The estimated optimal individual tax to gross domestic product (GDP) ratio in order to maximise economic growth (6.7 per cent) shows a strong response to changes in marginal rates, and the results from this research indicate that a lowering of marginal rates would also move the actual ratio closer to its optimum level. Thus, the trade-off between revenue collected and tax efficiency should be carefully monitored when personal income tax reform is being considered.
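    The mechanism the paper quantifies, taxable income responding to marginal-rate changes, can be illustrated with the standard constant-elasticity formula with respect to the net-of-tax rate. The elasticity value and bracket figures below are invented for illustration and are unrelated to the paper's microsimulation estimates.

```python
# Sketch of a constant-elasticity behavioural response of taxable income to a
# change in the marginal rate; all numbers are illustrative assumptions.
def new_taxable_income(base, old_rate, new_rate, elasticity):
    # Taxable income scales with the net-of-tax rate (1 - t) raised to the
    # assumed elasticity.
    return base * ((1 - new_rate) / (1 - old_rate)) ** elasticity

base, old_rate, new_rate, e = 100_000.0, 0.40, 0.45, 0.4
shrunk_base = new_taxable_income(base, old_rate, new_rate, e)
print(round(shrunk_base), round(new_rate * shrunk_base), round(old_rate * base))
# The mechanical revenue gain from the higher rate is partly offset by the
# behavioural shrinkage of the base.
```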

  3. hMuLab: A Biomedical Hybrid MUlti-LABel Classifier Based on Multiple Linear Regression.

    Science.gov (United States)

    Wang, Pu; Ge, Ruiquan; Xiao, Xuan; Zhou, Manli; Zhou, Fengfeng

    2017-01-01

    Many biomedical classification problems are multi-label by nature, e.g., a gene involved in a variety of functions and a patient with multiple diseases. The majority of existing classification algorithms assumes each sample with only one class label, and the multi-label classification problem remains to be a challenge for biomedical researchers. This study proposes a novel multi-label learning algorithm, hMuLab, by integrating both feature-based and neighbor-based similarity scores. The multiple linear regression modeling techniques make hMuLab capable of producing multiple label assignments for a query sample. The comparison results over six commonly-used multi-label performance measurements suggest that hMuLab performs accurately and stably for the biomedical datasets, and may serve as a complement to the existing literature.

  4. Use of machine learning methods to classify Universities based on the income structure

    Science.gov (United States)

    Terlyga, Alexandra; Balk, Igor

    2017-10-01

    In this paper we discuss the use of machine learning methods such as self-organizing maps, k-means and Ward’s clustering to classify universities based on their income. This classification allows us to quantitatively characterize universities as teaching, research, entrepreneurial, etc., which is an important tool for government, corporations and the general public alike in setting expectations and selecting universities to achieve different goals.
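    Two of the methods mentioned, k-means and Ward's agglomerative clustering, can be sketched directly on a matrix of income shares. The toy matrix below (shares of tuition, public funding, research grants and industry contracts) is an invented illustration of the kind of income-structure features such a classification might use.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative income shares per university:
# [tuition, public funding, research grants, industry contracts]
income_shares = np.array([
    [0.70, 0.20, 0.05, 0.05],   # tuition-driven ("teaching")
    [0.65, 0.25, 0.05, 0.05],
    [0.20, 0.30, 0.45, 0.05],   # grant-driven ("research")
    [0.15, 0.35, 0.45, 0.05],
    [0.25, 0.15, 0.20, 0.40],   # contract-driven ("entrepreneurial")
    [0.20, 0.20, 0.20, 0.40],
])
X = StandardScaler().fit_transform(income_shares)

print(KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X))
print(AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(X))
```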

  5. Development of System Based Code: Case Study of Life-Cycle Margin Evaluation

    International Nuclear Information System (INIS)

    Tai Asayama; Masaki Morishita; Masanori Tashimo

    2006-01-01

    For a leap of progress in the structural design of nuclear plant components, the late Professor Emeritus Yasuhide Asada proposed the System Based Code. The key concepts of the System Based Code are: (1) life-cycle margin optimization, (2) expansion of technical options as well as combinations of technical options beyond the current codes and standards, and (3) designing to clearly defined target reliabilities. Those concepts are very new to most nuclear power plant designers, who are naturally obliged to design to current codes and standards; the application of the concepts of the System Based Code to design will lead to an entire change of the practices that designers have long been accustomed to. On the other hand, experienced designers are supposed to have expertise that can support and accelerate the development of the System Based Code. Therefore, interfacing with experienced designers is of crucial importance for the development of the System Based Code. The authors conducted a survey on the acceptability of the System Based Code concept. The results were analyzed in terms of the possibility of improving structural design both in reliability and in cost effectiveness through the introduction of the System Based Code concept. It was concluded that the System Based Code is beneficial for those purposes. Also described is the expertise elicited from the results of the survey that can be reflected in the development of the System Based Code. (authors)

  6. Two-stage Framework for a Topology-Based Projection and Visualization of Classified Document Collections

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick; Scheuermann, Gerik; Teresniak, Sven; Heyer, Gerhard; Koch, Steffen; Ertl, Thomas; Weber, Gunther H.

    2010-07-19

    During the last decades, electronic textual information has become the world's largest and most important information source available. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter. In this paper, we extend, improve and combine existing individual approaches into an overall framework that supports topological analysis of high dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show that traditional distance-based approaches fail in very high dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high dimensional information space to both two dimensional (2-D) and three dimensional (3-D) visualizations. To show the accuracy and usability of this framework, we compare it to methods introduced recently and apply it to complex document and patent collections.

  7. A Novel Algorithm for Feature Level Fusion Using SVM Classifier for Multibiometrics-Based Person Identification

    Directory of Open Access Journals (Sweden)

    Ujwalla Gawande

    2013-01-01

    Full Text Available Recent times witnessed many advancements in the fields of biometrics and multimodal biometrics, typically in the areas of security, privacy, and forensics. Even for the best unimodal biometric systems, it is often not possible to achieve a high recognition rate. Multimodal biometric systems overcome various limitations of unimodal biometric systems, such as nonuniversality, and achieve lower false acceptance and higher genuine acceptance rates. More reliable recognition performance is achievable because multiple pieces of evidence of the same identity are available. The work presented in this paper focuses on a multimodal biometric system using fingerprint and iris. Distinct textural features of the iris and fingerprint are extracted using a Haar wavelet-based technique. A novel feature-level fusion algorithm is developed to combine these unimodal features using the Mahalanobis distance technique. A support-vector-machine-based learning algorithm is used to train the system on the extracted features. The performance of the proposed algorithms is validated and compared with other algorithms using the CASIA iris database and a real fingerprint database. From the simulation results, it is evident that our algorithm has a higher recognition rate and a much lower false rejection rate compared to existing approaches.
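
    A hedged sketch of the feature-level fusion idea in this record: unimodal feature vectors are combined together with Mahalanobis-distance scores before training an SVM. The data, the exact fusion rule and all parameter choices below are illustrative assumptions, not the authors' algorithm.

      import numpy as np
      from scipy.spatial.distance import mahalanobis
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      iris_feats   = rng.normal(size=(100, 8))   # stand-ins for Haar-wavelet iris features
      finger_feats = rng.normal(size=(100, 8))   # stand-ins for fingerprint features
      y = rng.integers(0, 2, size=100)           # toy identity labels

      def mahal_scores(F):
          """Distance of each sample to the modality mean under its covariance."""
          mu = F.mean(axis=0)
          VI = np.linalg.inv(np.cov(F, rowvar=False) + 1e-6 * np.eye(F.shape[1]))
          return np.array([mahalanobis(f, mu, VI) for f in F])

      # fuse: raw features from both modalities plus their Mahalanobis scores
      fused = np.column_stack([iris_feats, finger_feats,
                               mahal_scores(iris_feats), mahal_scores(finger_feats)])

      clf = SVC(kernel="rbf").fit(fused, y)
      print(clf.score(fused, y))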

  8. Landscape object-based analysis of wetland plant functional types: the effects of spatial scale, vegetation classes and classifier methods

    Science.gov (United States)

    Dronova, I.; Gong, P.; Wang, L.; Clinton, N.; Fu, W.; Qi, S.

    2011-12-01

    Remote sensing-based vegetation classifications representing plant function such as photosynthesis and productivity are challenging in wetlands with complex cover and difficult field access. Recent advances in object-based image analysis (OBIA) and machine-learning algorithms offer new classification tools; however, few comparisons of different algorithms and spatial scales have been discussed to date. We applied OBIA to delineate wetland plant functional types (PFTs) for Poyang Lake, the largest freshwater lake in China and a Ramsar wetland conservation site, from a 30-m Landsat TM scene at the peak of the spring growing season. We targeted major PFTs (C3 grasses, C3 forbs and different types of C4 grasses and aquatic vegetation) that are both key players in the system's biogeochemical cycles and critical providers of waterbird habitat. Classification results were compared among: a) several object segmentation scales (with average object sizes 900-9000 m2); b) several families of statistical classifiers (including Bayesian, Logistic, Neural Network, Decision Trees and Support Vector Machines); and c) two hierarchical levels of vegetation classification, a generalized 3-class set and a more detailed 6-class set. We found that classification benefited from the object-based approach, which allowed the inclusion of object shape, texture and context descriptors in classification. While a number of classifiers achieved high accuracy at the finest pixel-equivalent segmentation scale, the highest accuracies and best agreement among algorithms occurred at coarser object scales. No single classifier was consistently superior across all scales, although selected algorithms of the Neural Network, Logistic and K-Nearest Neighbors families frequently provided the best discrimination of classes at different scales. The choice of vegetation categories also affected classification accuracy. The 6-class set allowed for higher individual class accuracies but lower overall accuracies than the 3-class set because

  9. Condition Assessment of Metal Oxide Surge Arrester Based on Multi-Layer SVM Classifier

    Directory of Open Access Journals (Sweden)

    M Khodsuz

    2015-12-01

    Full Text Available This paper introduces indicators for surge arrester condition assessment based on leakage current analysis. The maximum amplitude of the fundamental harmonic of the resistive leakage current, the maximum amplitude of the third harmonic of the resistive leakage current and the maximum amplitude of the fundamental harmonic of the capacitive leakage current were used as indicators for surge arrester condition monitoring. Also, the effects of operating voltage fluctuation, the third harmonic of the voltage, overvoltage and surge arrester aging on these indicators were studied. Then, the obtained data are applied to a multi-layer support vector machine for recognizing surge arrester conditions. The obtained results show that the introduced indicators have a high ability to evaluate surge arrester conditions.

  10. Demand Response Design and Use Based on Network Locational Marginal Prices

    DEFF Research Database (Denmark)

    Morais, Hugo; Faria, Pedro; Vale, Zita

    2014-01-01

    Power systems have been experiencing huge changes mainly due to the substantial increase of distributed generation (DG) and the operation in competitive environments. Virtual Power Players (VPP) can aggregate several players, namely a diversity of energy resources, including distributed generation...... (DG) based on several technologies, electric storage systems (ESS) and demand response (DR). Energy resources management gains an increasing relevance in this competitive context. This makes the DR use more interesting and flexible, giving place to a wide range of new opportunities. This paper...... proposes a methodology to support VPPs in the DR programs’ management, considering all the existing energy resources (generation and storage units) and the distribution network. The proposed method is based on locational marginal prices (LMP) values. The evaluation of the impact of using DR specific...

  11. Seismic margin assessment and earthquake experience based methods for WWER-440/213 type NPPs

    International Nuclear Information System (INIS)

    Masopust, R.

    1996-01-01

    This report covers the review of the already completed studies, namely, the safe shutdown system identification and classification for Bohunice NPP and the comparative study of standards and criteria. It contains a report on currently ongoing studies concerning seismic margin assessment and earthquake experience-based methods as applied to the seismic evaluation and verification of structures and equipment components of operating WWER-440/213 type NPPs. This is based on the experience obtained from Paks NPP. The work plan for the remaining period of the Benchmark CRP and the new proposals are included. These are concerned with the seismic evaluation of selected safety-related mechanical equipment and pipes of Paks NPP, and the actual seismic issues of the Temelin WWER-1000 type NPP

  12. Three-Phase AC Optimal Power Flow Based Distribution Locational Marginal Price: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Rui; Zhang, Yingchen

    2017-05-17

    Designing market mechanisms for electricity distribution systems has been a hot topic due to the increased presence of smart loads and distributed energy resources (DERs) in distribution systems. The distribution locational marginal pricing (DLMP) methodology is one of the real-time pricing methods to enable such market mechanisms and provide economic incentives to active market participants. Determining the DLMP is challenging due to high power losses, the voltage volatility, and the phase imbalance in distribution systems. Existing DC Optimal Power Flow (OPF) approaches are unable to model power losses and the reactive power, while single-phase AC OPF methods cannot capture the phase imbalance. To address these challenges, in this paper, a three-phase AC OPF based approach is developed to define and calculate DLMP accurately. The DLMP is modeled as the marginal cost to serve an incremental unit of demand at a specific phase at a certain bus, and is calculated using the Lagrange multipliers in the three-phase AC OPF formulation. Extensive case studies have been conducted to understand the impact of system losses and the phase imbalance on DLMPs as well as the potential benefits of flexible resources.

  13. Marginal Contribution-Based Distributed Subchannel Allocation in Small Cell Networks.

    Science.gov (United States)

    Shah, Shashi; Kittipiyakul, Somsak; Lim, Yuto; Tan, Yasuo

    2018-05-10

    The paper presents a game theoretic solution for distributed subchannel allocation problem in small cell networks (SCNs) analyzed under the physical interference model. The objective is to find a distributed solution that maximizes the welfare of the SCNs, defined as the total system capacity. Although the problem can be addressed through best-response (BR) dynamics, the existence of a steady-state solution, i.e., a pure strategy Nash equilibrium (NE), cannot be guaranteed. Potential games (PGs) ensure convergence to a pure strategy NE when players rationally play according to some specified learning rules. However, such a performance guarantee comes at the expense of complete knowledge of the SCNs. To overcome such requirements, properties of PGs are exploited for scalable implementations, where we utilize the concept of marginal contribution (MC) as a tool to design learning rules of players’ utility and propose the marginal contribution-based best-response (MCBR) algorithm of low computational complexity for the distributed subchannel allocation problem. Finally, we validate and evaluate the proposed scheme through simulations for various performance metrics.
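
    The following toy sketch illustrates the marginal-contribution best-response idea described above: each cell's utility is its marginal contribution to total capacity, and cells best-respond in turn until no one can improve. The gains, noise level and interference model below are synthetic assumptions, not the paper's system model.

      import numpy as np

      rng = np.random.default_rng(1)
      n_cells, n_channels, noise = 5, 3, 1e-3
      gain = rng.uniform(0.5, 1.5, size=n_cells)                # direct-link gains
      cross = rng.uniform(0.01, 0.2, size=(n_cells, n_cells))   # cross-interference gains
      np.fill_diagonal(cross, 0.0)

      def welfare(choice, active=None):
          """Total capacity of the active cells under co-channel interference."""
          active = list(range(n_cells)) if active is None else active
          total = 0.0
          for i in active:
              interf = sum(cross[j, i] for j in active if j != i and choice[j] == choice[i])
              total += np.log2(1.0 + gain[i] / (noise + interf))
          return total

      choice = rng.integers(0, n_channels, size=n_cells)
      changed = True
      while changed:
          changed = False
          for i in range(n_cells):
              others = [j for j in range(n_cells) if j != i]
              base = welfare(choice, others)            # welfare without player i
              mc = []                                   # marginal contribution of each channel
              for c in range(n_channels):
                  trial = choice.copy()
                  trial[i] = c
                  mc.append(welfare(trial) - base)
              best = int(np.argmax(mc))
              if mc[best] > mc[choice[i]] + 1e-12:      # switch only on a strict improvement
                  choice[i] = best
                  changed = True
      print(choice, welfare(choice))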

  14. Marginal Contribution-Based Distributed Subchannel Allocation in Small Cell Networks

    Directory of Open Access Journals (Sweden)

    Shashi Shah

    2018-05-01

    Full Text Available The paper presents a game theoretic solution for distributed subchannel allocation problem in small cell networks (SCNs) analyzed under the physical interference model. The objective is to find a distributed solution that maximizes the welfare of the SCNs, defined as the total system capacity. Although the problem can be addressed through best-response (BR) dynamics, the existence of a steady-state solution, i.e., a pure strategy Nash equilibrium (NE), cannot be guaranteed. Potential games (PGs) ensure convergence to a pure strategy NE when players rationally play according to some specified learning rules. However, such a performance guarantee comes at the expense of complete knowledge of the SCNs. To overcome such requirements, properties of PGs are exploited for scalable implementations, where we utilize the concept of marginal contribution (MC) as a tool to design learning rules of players’ utility and propose the marginal contribution-based best-response (MCBR) algorithm of low computational complexity for the distributed subchannel allocation problem. Finally, we validate and evaluate the proposed scheme through simulations for various performance metrics.

  15. Seismic Margin Assessment for Research Reactor using Fragility based Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kwag, Shinyoung; Oh, Jinho; Lee, Jong-Min; Ryu, Jeong-Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The research reactor is often subjected to external hazards during its design lifetime. In particular, a seismic event can be one of the significant threats leading to failure of the structural system of the research reactor, and such a failure may extend directly to core damage of the reactor. For this purpose, the fault tree for structural system failure leading to core damage under an earthquake accident is developed. The failure probabilities of basic events are evaluated as fragility curves of log-normal distributions. Finally, the plant-level seismic margin is investigated by fault tree analysis combined with the fragility data, and the critical path is identified. The plant-level probabilistic seismic margin assessment using the fragility-based fault tree analysis was performed to quantify the safety of the research reactor against a seismic hazard. For this, the fault tree for structural system failure leading to core damage of the reactor under a seismic accident was developed, and the failure probabilities of basic events were evaluated as fragility curves of log-normal distributions.
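
    As a minimal illustration of the fragility curves mentioned above, the sketch below evaluates a log-normal fragility (probability of failure versus peak ground acceleration); the median capacity, the log-standard deviation and the simplified HCLPF expression are invented example values, not figures from the study.

      import numpy as np
      from scipy.stats import norm

      def fragility(pga, Am=0.6, beta=0.4):
          """P(failure | PGA) = Phi(ln(pga / Am) / beta) for a log-normal fragility."""
          return norm.cdf(np.log(pga / Am) / beta)

      for a in (0.1, 0.3, 0.6, 1.0):
          print(f"PGA {a:.1f} g -> P(fail) = {fragility(a):.3f}")

      # A common composite-curve margin measure (an assumption here): HCLPF ~ Am * exp(-1.65 * beta)
      print("HCLPF ~", 0.6 * np.exp(-1.65 * 0.4), "g")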

  16. Predicting Protein-Protein Interaction Sites with a Novel Membership Based Fuzzy SVM Classifier.

    Science.gov (United States)

    Sriwastava, Brijesh K; Basu, Subhadip; Maulik, Ujjwal

    2015-01-01

    Predicting residues that participate in protein-protein interactions (PPI) helps to identify which amino acids are located at the interface. In this paper, we show that the performance of the classical support vector machine (SVM) algorithm can be further improved with the use of a custom-designed fuzzy membership function for the partner-specific PPI interface prediction problem. We evaluated the performances of both the classical SVM and the fuzzy SVM (F-SVM) on the PPI databases of three different model proteomes of Homo sapiens, Escherichia coli and Saccharomyces cerevisiae and calculated the statistical significance of the developed F-SVM over the classical SVM algorithm. We also compared our performance with the available state-of-the-art fuzzy methods in this domain and observed significant performance improvements. To predict interaction sites in protein complexes, the local composition of amino acids together with their physico-chemical characteristics is used, where the F-SVM based prediction method exploits the membership function for each pair of sequence fragments. The average F-SVM performance (area under the ROC curve) on the test samples in the 10-fold cross-validation experiment is measured as 77.07, 78.39, and 74.91 percent for the aforementioned organisms, respectively. Performances on independent test sets are obtained as 72.09, 73.24 and 82.74 percent, respectively. The software is available for free download from http://code.google.com/p/cmater-bioinfo.

  17. Classifying orofacial pains: a new proposal of taxonomy based on ontology

    Science.gov (United States)

    NIXDORF, D. R.; DRANGSHOLT, M. T.; ETTLIN, D. A.; GAUL, C.; DE LEEUW, R.; SVENSSON, P.; ZAKRZEWSKA, J. M.; DE LAAT, A.; CEUSTERS, W.

    2012-01-01

    SUMMARY: We propose a new taxonomy model based on ontological principles for disorders that manifest themselves through the symptom of persistent orofacial pain and are commonly seen in clinical practice and difficult to manage. A consensus meeting of eight experts from various geographic areas representing different perspectives (orofacial pain, headache, oral medicine and ontology) was held as an initial step towards improving the taxonomy. Ontological principles were introduced, reviewed and applied during the consensus building process. Diagnostic criteria for persistent dento-alveolar pain disorder (PDAP) were formulated as an example to be used to model the taxonomical structure of all orofacial pain conditions. These criteria have the advantage of being (i) anatomically defined, (ii) in accordance with other classification systems for the provision of clinical care, (iii) descriptive and succinct, (iv) easy to adapt for applications in varying settings, (v) scalable and (vi) transferable for the description of pain disorders in other orofacial regions of interest. Limitations are that the criteria introduce new terminology, do not have widespread acceptance and have yet to be tested. These results were presented to the greater conference membership and were unanimously accepted. Consensus for the diagnostic criteria of PDAP was established within this working group. This is an initial first step towards developing a coherent taxonomy for orofacial pain disorders, which is needed to improve clinical research and care. PMID:21848527

  18. Classifying orofacial pains: a new proposal of taxonomy based on ontology.

    Science.gov (United States)

    Nixdorf, D R; Drangsholt, M T; Ettlin, D A; Gaul, C; De Leeuw, R; Svensson, P; Zakrzewska, J M; De Laat, A; Ceusters, W

    2012-03-01

    We propose a new taxonomy model based on ontological principles for disorders that manifest themselves through the symptom of persistent orofacial pain and are commonly seen in clinical practice and difficult to manage. A consensus meeting of eight experts from various geographic areas representing different perspectives (orofacial pain, headache, oral medicine and ontology) was held as an initial step towards improving the taxonomy. Ontological principles were introduced, reviewed and applied during the consensus building process. Diagnostic criteria for persistent dento-alveolar pain disorder (PDAP) were formulated as an example to be used to model the taxonomical structure of all orofacial pain conditions. These criteria have the advantage of being (i) anatomically defined, (ii) in accordance with other classification systems for the provision of clinical care, (iii) descriptive and succinct, (iv) easy to adapt for applications in varying settings, (v) scalable and (vi) transferable for the description of pain disorders in other orofacial regions of interest. Limitations are that the criteria introduce new terminology, do not have widespread acceptance and have yet to be tested. These results were presented to the greater conference membership and were unanimously accepted. Consensus for the diagnostic criteria of PDAP was established within this working group. This is an initial first step towards developing a coherent taxonomy for orofacial pain disorders, which is needed to improve clinical research and care. © 2011 Blackwell Publishing Ltd.

  19. Classifying eating disorders based on "healthy" and "unhealthy" perfectionism and impulsivity.

    Science.gov (United States)

    Slof-Op't Landt, Margarita C T; Claes, Laurence; van Furth, Eric F

    2016-07-01

    Perfectionism and impulsivity are associated with eating disorders (EDs). The current study examines whether clinically relevant subgroups of women with EDs can be identified based on "healthy" and "unhealthy" perfectionism and impulsivity. Latent profile analyses (LPA) were performed on data of 844 patients (DSM-IV diagnosis: 381 anorexia nervosa, 146 bulimia nervosa, 56 binge-eating disorder, 261 ED not otherwise specified). "Healthy" and "unhealthy" forms of perfectionism and impulsivity were assessed by the Frost Multidimensional Perfectionism Scale and the Dickman Impulsivity Inventory, respectively. The Eating Disorder Examination Questionnaire was completed to assess ED psychopathology. Furthermore, in 229 patients additional ED symptoms, depression, self-esteem, obsessive-compulsive symptoms, and personality features were assessed. The LPA revealed four profiles; 1. "Healthy Impulsivity" (HI; n = 191), 2. "Unhealthy Impulsivity" (UI; n = 238), 3. "Healthy and Unhealthy Perfectionism" (HP + UP; n = 153), 4. "Healthy Perfectionism" (HP; n = 262). Patients belonging to the "HP + UP" and the "UI" classes reported higher levels of ED psychopathology. More severe comorbid symptoms (depressive, obsessive-compulsive and self-esteem) were found in the patients belonging to the "HP + UP" class. Patients from the "HP + UP" and "HP" classes had higher scores for the personality features Harm Avoidance, Persistence and Cooperativeness. Women with EDs could be meaningfully grouped according to perfectionism and impulsivity. These findings can be used to improve treatment matching and intervention strategies. The use of dimensional features, like perfectionism and impulsivity, in ED research, may enable the identification of fundamental underlying mechanisms and provide more insight into potential mechanisms that may drive or maintain disordered eating. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:673-680). © 2016 Wiley

  20. Multi-Probe Based Artificial DNA Encoding and Matching Classifier for Hyperspectral Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Ke Wu

    2016-08-01

    Full Text Available In recent years, a novel matching classification strategy inspired by the artificial deoxyribonucleic acid (DNA) technology has been proposed for hyperspectral remote sensing imagery. Such a method can describe brightness and shape information of a spectrum by encoding the spectral curve into a DNA strand, providing a more comprehensive way for spectral similarity comparison. However, it suffers from two problems: data volume is amplified when all of the bands participate in the encoding procedure, and full-band comparison degrades the importance of bands carrying key information. In this paper, a new multi-probe based artificial DNA encoding and matching (MADEM) method is proposed. In this method, spectral signatures are first transformed into DNA code words with a spectral feature encoding operation. After that, multiple probes for the classes of interest are extracted to represent specific fragments of the DNA strands. During the course of spectral matching, the different probes are compared to obtain the similarity of different types of land covers. By computing the absolute vector distance (AVD) between the probes of an unclassified spectrum and the typical DNA code words from the database, the class property of each pixel is set as the minimum distance class. The main benefit of this strategy is that the risk of redundant bands can be greatly reduced and critical spectral discrepancies can be enlarged. Two hyperspectral image datasets were tested. Compared with the other classification methods, the overall accuracy can be improved from 1.22% to 10.09% and 1.19% to 15.87%, respectively. Furthermore, the kappa coefficient can be improved from 2.05% to 15.29% and 1.35% to 19.59%, respectively. This demonstrated that the proposed algorithm outperformed other traditional classification methods.
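
    A hedged sketch of the DNA-style encoding and probe matching idea: band-to-band changes of a spectrum are quantized into four letters, probe fragments are cut out at selected band ranges, and similarity is scored by per-letter mismatches. The thresholds, probe positions and distance below are illustrative stand-ins, not the MADEM parameters.

      import numpy as np

      LETTERS = np.array(list("ACGT"))

      def encode(spectrum, thresholds=(-0.05, 0.0, 0.05)):
          """Map each band-to-band difference to one of four letters."""
          idx = np.digitize(np.diff(spectrum), thresholds)   # values 0..3
          return "".join(LETTERS[idx])

      def probe_distance(code_a, code_b, probes=((10, 30), (60, 80))):
          """Mismatch count over the probe fragments only, not the full strand."""
          return sum(sum(x != y for x, y in zip(code_a[s:e], code_b[s:e]))
                     for s, e in probes)

      rng = np.random.default_rng(0)
      reference = rng.random(120)                       # stand-in for a library spectrum
      candidate = reference + rng.normal(0, 0.01, 120)  # pixel spectrum to classify
      print(probe_distance(encode(reference), encode(candidate)))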

  1. Species-Level Differences in Hyperspectral Metrics among Tropical Rainforest Trees as Determined by a Tree-Based Classifier

    Directory of Open Access Journals (Sweden)

    Dar A. Roberts

    2012-06-01

    Full Text Available This study explores a method to classify seven tropical rainforest tree species from full-range (400–2,500 nm) hyperspectral data acquired at tissue (leaf and bark), pixel and crown scales using laboratory and airborne sensors. Metrics that respond to vegetation chemistry and structure were derived using narrowband indices, derivative- and absorption-based techniques, and spectral mixture analysis. We then used the Random Forests tree-based classifier to discriminate species with minimally-correlated, importance-ranked metrics. At all scales, best overall accuracies were achieved with metrics derived from all four techniques and that targeted chemical and structural properties across the visible to shortwave infrared spectrum (400–2,500 nm). For tissue spectra, overall accuracies were 86.8% for leaves, 74.2% for bark, and 84.9% for leaves plus bark. Variation in tissue metrics was best explained by an axis of red absorption related to photosynthetic leaves and an axis distinguishing bark water and other chemical absorption features. Overall accuracies for individual tree crowns were 71.5% for pixel spectra, 70.6% for crown-mean spectra, and 87.4% for a pixel-majority technique. At pixel and crown scales, tree structure and phenology at the time of image acquisition were important factors that determined species spectral separability.
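
    The sketch below illustrates the tree-based classification step described in this record: rank candidate spectral metrics by Random Forest importance and keep a minimally correlated subset. The synthetic metrics, labels and the 0.8 correlation cut-off are assumptions for illustration only.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 12))     # 12 candidate spectral metrics per sample (synthetic)
      y = rng.integers(0, 7, size=300)   # 7 species labels (toy)

      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
      ranked = np.argsort(rf.feature_importances_)[::-1]
      print("metrics ranked by importance:", ranked[:5])

      # drop metrics highly correlated (|r| > 0.8) with a better-ranked one
      corr = np.corrcoef(X, rowvar=False)
      kept = []
      for m in ranked:
          if all(abs(corr[m, k]) <= 0.8 for k in kept):
              kept.append(m)
      print("minimally correlated subset:", kept)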

  2. A novel implementation of kNN classifier based on multi-tupled meteorological input data for wind power prediction

    International Nuclear Information System (INIS)

    Yesilbudak, Mehmet; Sagiroglu, Seref; Colak, Ilhami

    2017-01-01

    Highlights: • An accurate wind power prediction model is proposed for the very short-term horizon. • The k-nearest neighbor classifier is implemented based on the multi-tupled inputs. • The variation of wind power prediction errors is evaluated in various aspects. • Our approach shows superior prediction performance over the persistence method. - Abstract: With the growing share of wind power production in electric power grids, many critical challenges for grid operators have emerged in terms of power balance, power quality, voltage support, frequency stability, load scheduling, unit commitment and spinning reserve calculations. To overcome such problems, numerous studies have been conducted to predict wind power production, but only a small number of them have attempted to improve the prediction accuracy by employing multidimensional meteorological input data. The novelties of this study lie in the proposal of an efficient and easy-to-implement very short-term wind power prediction model based on the k-nearest neighbor classifier (kNN), in the usage of wind speed, wind direction, barometric pressure and air temperature parameters as the multi-tupled meteorological inputs, and in the comparison of wind power prediction results with respect to the persistence reference model. Based on the achieved patterns, we characterize the variation of wind power prediction errors according to the input tuples, distance measures and neighbor numbers, and uncover the most influential and the most ineffective meteorological parameters for the optimization of wind power prediction results.
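
    A hedged sketch of the multi-tupled kNN idea: predict very short-term wind power from (speed, direction, pressure, temperature) tuples of past observations. The toy power curve, neighbour count and Manhattan distance are illustrative choices, not the study's configuration.

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      n = 500
      speed, direction = rng.uniform(0, 20, n), rng.uniform(0, 360, n)
      pressure, temp   = rng.uniform(980, 1030, n), rng.uniform(-5, 30, n)
      power = np.clip(0.4 * speed**3, 0, 2000) + rng.normal(0, 30, n)   # toy power curve (kW)

      X = np.column_stack([speed, direction, pressure, temp])
      scaler = StandardScaler().fit(X[:400])
      knn = KNeighborsRegressor(n_neighbors=5, metric="manhattan")
      knn.fit(scaler.transform(X[:400]), power[:400])

      pred = knn.predict(scaler.transform(X[400:]))
      print(f"MAE on held-out samples: {np.mean(np.abs(pred - power[400:])):.1f} kW")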

  3. Two Classifiers Based on Serum Peptide Pattern for Prediction of HBV-Induced Liver Cirrhosis Using MALDI-TOF MS

    Directory of Open Access Journals (Sweden)

    Yuan Cao

    2013-01-01

    Full Text Available Chronic infection with hepatitis B virus (HBV) is associated with the majority of cases of liver cirrhosis (LC) in China. Although liver biopsy is the reference method for evaluation of cirrhosis, it is an invasive procedure with inherent risk. The aim of this study is to discover novel noninvasive specific serum biomarkers for the diagnosis of HBV-induced LC. We performed bead fractionation/MALDI-TOF MS analysis on sera from patients with LC. Thirteen feature peaks which had optimal discriminatory performance were obtained by using a support-vector-machine (SVM)-based strategy. Based on the previous results, five supervised machine learning methods were employed to construct classifiers that discriminated proteomic spectra of patients with HBV-induced LC from those of controls. Here, we describe two novel methods for prediction of HBV-induced LC, termed LC-NB and LC-MLP, respectively. We obtained a sensitivity of 90.9%, a specificity of 94.9%, and overall accuracy of 93.8% on an independent test set. Comparisons with the existing methods showed that LC-NB and LC-MLP achieved better accuracy. Our study suggests that potential serum biomarkers can be determined for discriminating LC and non-LC cohorts by using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. These two classifiers could be used for clinical practice in HBV-induced LC assessment.

  4. A fixed incore based system for an on line core margin monitoring

    International Nuclear Information System (INIS)

    Mourlevat, J. L.; Carrasco, M.

    2002-01-01

    In order to comply with the needs of Utilities for improvements in the economic competitiveness of nuclear energy, one of the solutions proposed is to reduce the cost of the fuel cycle. To this aim, increasing the lifetime of cycles by introducing so-called low-leakage fuel loading patterns to the reactor is a rather promising solution. However, these loading patterns lead to an increase in the core hotspot factors and therefore to a reduction in the core operating margins. For many years FRAMATOME-ANP has developed and proposed solutions aiming at increasing and therefore restoring these margins, namely: the improvement of design methods based on three-dimensional modelling of the core, on kinetic representation of transients and on neutron-thermohydraulic coupling, or the improvement of the fuel with the introduction of intermediate mixing grids. A third approach is to improve the core instrumentation associated with the system for monitoring the core operating limits: it is this approach that is described in this presentation. The core operating limits monitoring function calls on real-time knowledge of the power distribution. At the present time, for most of the PWRs operated in the world, this knowledge is based on the measurement of the axial power distribution made by two-section neutron detectors located outside the pressure vessel. This kind of detector is only able to provide the operators with a coarse picture of the axial power distribution through the axial dissymmetry index, the so-called axial offset. During normal core operation, operators have to control the axial power distribution, which means keeping the axial-offset value inside a pre-determined domain whose width is a function of the mean power level. This pre-determined domain is calculated or checked during the nuclear design phase of the reload and, due to the methodology used to calculate it, a considerable potential for improving the core operating margin does exist. This is the reason why

  5. Comparison of several chemometric methods of libraries and classifiers for the analysis of expired drugs based on Raman spectra.

    Science.gov (United States)

    Gao, Qun; Liu, Yan; Li, Hao; Chen, Hui; Chai, Yifeng; Lu, Feng

    2014-06-01

    Some expired drugs are difficult to detect by conventional means. If they are repackaged and sold back into the market, they will constitute a new public health challenge. For the detection of repackaged expired drugs within specification, a paracetamol tablet from a manufacturer was used as a model drug in this study for comparison of Raman spectra-based library verification and classification methods. Raman spectra of different batches of paracetamol tablets were collected, and a library including standard spectra of unexpired batches of tablets was established. The Raman spectrum of each sample was identified by cosine similarity and correlation with the standard spectrum. The average HQI between the suspicious samples and the standard spectrum was calculated. The optimum threshold values were 0.997 and 0.998, respectively, as a result of ROC analysis and four evaluations, for which the accuracy was up to 97%. Three supervised classifiers, PLS-DA, SVM and k-NN, were chosen to establish two-class classification models and compared subsequently. They were used to classify expired batches and an unexpired batch, and to predict the suspect samples. The average accuracy was 90.12%, 96.80% and 89.37%, respectively. Different pre-processing techniques were tried; the first derivative was found to be optimal for the library methods and max-min normalization optimal for the classifiers. The results obtained from these studies indicated that both library and classifier methods could detect the expired drugs effectively, and they should be used complementarily in fast screening. Copyright © 2014 Elsevier B.V. All rights reserved.
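
    As an illustration of the library-verification step, the sketch below scores a query Raman spectrum against a library spectrum with a hit quality index (taken here as the squared correlation coefficient, an assumption about the exact definition) and applies the 0.997 threshold reported above.

      import numpy as np

      def hqi(query, reference):
          r = np.corrcoef(query, reference)[0, 1]
          return r * r

      rng = np.random.default_rng(0)
      library = rng.random(1024)                        # stand-in for an unexpired-batch spectrum
      sample = library + rng.normal(0, 0.002, 1024)     # suspicious sample, slightly perturbed

      score = hqi(sample, library)
      print(f"HQI = {score:.4f} ->", "pass" if score >= 0.997 else "flag for review")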

  6. SU-F-J-102: Lower Esophagus Margin Implications Based On Rapid Computational Algorithm for SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Cardenas, M; Mazur, T; Li, H; Mutic, S; Bradley, J; Tsien, C; Green, O [Washington University School of Medicine, Saint Louis, MO (United States)

    2016-06-15

    Purpose: To quantify inter-fraction esophagus-variation. Methods: Computed tomography and daily on-treatment 0.3-T MRI data sets for 7 patients were analyzed using a novel Matlab-based (Mathworks, Natick, MA) rapid computational method. Rigid registration was performed from the cricoid to the gastro-esophageal junction. CT and MR-based contours were compared at slice intervals of 3 mm. Variation was quantified by “expansion,” defined as additional length in any radial direction from CT contour to MR contour. Expansion computations were performed with 360° of freedom in each axial slice. We partitioned expansions into left anterior, right anterior, right posterior, and left posterior quadrants (LA, RA, RP, and LP, respectively). Sample means were compared by analysis of variance (ANOVA) and Fisher’s Protected Least Significant Difference test. Results: Fifteen fractions and 1121 axial slices from 7 patients undergoing SBRT for primary lung cancer (3) and metastatic lung disease (4) were analyzed, generating 41,970 measurements. Mean LA, RA, RP, and LP expansions were 4.30±0.05 mm, 3.71±0.05 mm, 3.17±0.07 mm, and 3.98±0.06 mm, respectively. 50.13% of all axial slices showed variation > 5 mm in one or more directions. Variation was greatest in the lower esophagus with mean LA, RA, RP, and LP expansion (5.98±0.09 mm, 4.59±0.09 mm, 4.04±0.16 mm, and 5.41±0.16 mm, respectively). The difference was significant compared to mid and upper esophagus (p<.0001). The 95th percentiles of expansion for LA, RA, RP, LP were 13.36 mm, 9.97 mm, 11.29 mm, and 12.19 mm, respectively. Conclusion: Analysis of on-treatment MR imaging of the lower esophagus during thoracic SBRT suggests margin expansions of 13.36 mm LA, 9.97 mm RA, 11.29 mm RP, 12.19 mm LP would account for 95% of measurements. Our novel algorithm for rapid assessment of margin expansion for critical structures with 360° of freedom in each axial slice enables continuously adaptive patient-specific margins which may

  7. Designing a Web Spam Classifier Based on Feature Fusion in the Layered Multi-Population Genetic Programming Framework

    Directory of Open Access Journals (Sweden)

    Amir Hosein KEYHANIPOUR

    2013-11-01

    Full Text Available Nowadays, Web spam pages are a critical challenge for Web retrieval systems and have a drastic influence on the performance of such systems. Although these systems try to combat the impact of spam pages on their final results list, spammers increasingly use more sophisticated techniques to increase the number of views for their intended pages in order to have more commercial success. This paper employs the recently proposed Layered Multi-population Genetic Programming model for the Web spam detection task, as well as correlation coefficient analysis for feature space reduction. Based on our tentative results, the designed classifier, which is based on a combination of easy-to-compute features, has very reasonable performance in comparison with similar methods.

  8. Reliability-based approaches for safety margin assessment in the French nuclear industry

    International Nuclear Information System (INIS)

    Ardillon, E.; Barthelet, B.; Meister, E.; Cambefort, P.; Hornet, P.; Le Delliou, P.

    2003-01-01

    The prevention of the fast fracture damage of the mechanical equipment important for the safety of nuclear islands of the French PWR relies on deterministic rules. These rules include flaw acceptance criteria involving safety factors applied to characteristic values (implicit margins) of the physical variables. The sets of safety factors that are currently under application in the industrial analyses with the agreement of the Safety Authority, are distributed across the two main physical parameters and have partly been based on a semi-probabilistic approach. After presenting the generic probabilistic pro-codification approach this paper shows its application to the evaluation of the performances of the existing regulatory flaw acceptance criteria. This application can be carried out in a realistic manner or in a more simplified one. These two approaches are applied to representative mechanical components. Their results are consistent. (author)

  9. Using a Marginal Structural Model to Design a Theory-Based Mass Media Campaign.

    Directory of Open Access Journals (Sweden)

    Hiromu Nishiuchi

    Full Text Available The essential first step in the development of mass media health campaigns is to identify specific beliefs of the target audience. The challenge is to prioritize suitable beliefs derived from behavioral theory. The purpose of this study was to identify suitable beliefs to target in a mass media campaign to change behavior using a new method to estimate the possible effect size of a small set of beliefs. Data were drawn from the 2010 Japanese Young Female Smoker Survey (n = 500), conducted by the Japanese Ministry of Health, Labor and Welfare. Survey measures included intention to quit smoking, psychological beliefs (attitude, norms, and perceived control) based on the theory of planned behavior and socioeconomic status (age, education, household income, and marital status). To identify suitable candidate beliefs for a mass media health campaign, we estimated the possible effect size required to change the intention to quit smoking among the population of young Japanese women using the population attributable fraction from a marginal structural model. Thirteen percent of study participants intended to quit smoking. The marginal structural model estimated a population attributable fraction of 47 psychological beliefs (21 attitudes, 6 norms, and 19 perceived controls) after controlling for socioeconomic status. The belief, "I could quit smoking if my husband or significant other recommended it" suggested a promising target for a mass media campaign (population attributable fraction = 0.12, 95% CI = 0.02-0.23). Messages targeting this belief could possibly improve intention rates by up to 12% among this population. The analysis also suggested the potential for regulatory action. This study proposed a method by which campaign planners can develop theory-based mass communication strategies to change health behaviors at the population level. This method might contribute to improving the quality of future mass health communication strategies and further

  10. Using a Marginal Structural Model to Design a Theory-Based Mass Media Campaign.

    Science.gov (United States)

    Nishiuchi, Hiromu; Taguri, Masataka; Ishikawa, Yoshiki

    2016-01-01

    The essential first step in the development of mass media health campaigns is to identify specific beliefs of the target audience. The challenge is to prioritize suitable beliefs derived from behavioral theory. The purpose of this study was to identify suitable beliefs to target in a mass media campaign to change behavior using a new method to estimate the possible effect size of a small set of beliefs. Data were drawn from the 2010 Japanese Young Female Smoker Survey (n = 500), conducted by the Japanese Ministry of Health, Labor and Welfare. Survey measures included intention to quit smoking, psychological beliefs (attitude, norms, and perceived control) based on the theory of planned behavior and socioeconomic status (age, education, household income, and marital status). To identify suitable candidate beliefs for a mass media health campaign, we estimated the possible effect size required to change the intention to quit smoking among the population of young Japanese women using the population attributable fraction from a marginal structural model. Thirteen percent of study participants intended to quit smoking. The marginal structural model estimated a population attributable fraction of 47 psychological beliefs (21 attitudes, 6 norms, and 19 perceived controls) after controlling for socioeconomic status. The belief, "I could quit smoking if my husband or significant other recommended it" suggested a promising target for a mass media campaign (population attributable fraction = 0.12, 95% CI = 0.02-0.23). Messages targeting this belief could possibly improve intention rates by up to 12% among this population. The analysis also suggested the potential for regulatory action. This study proposed a method by which campaign planners can develop theory-based mass communication strategies to change health behaviors at the population level. This method might contribute to improving the quality of future mass health communication strategies and further research is needed.
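
    A toy numeric illustration of the population attributable fraction (PAF) reported in these records: the share of "no intention to quit" attributable to not holding a given belief. The probabilities below are invented; the study estimated them with a marginal structural model after confounder adjustment.

      # PAF = (P_observed - P_counterfactual) / P_observed, with assumed probabilities
      p_observed = 0.87         # observed probability of NOT intending to quit
      p_if_belief_held = 0.77   # counterfactual probability if everyone held the target belief

      paf = (p_observed - p_if_belief_held) / p_observed
      print(f"PAF = {paf:.2f}")  # roughly 0.11 under these assumed numbers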

  11. Histopathological Validation of the Surface-Intermediate-Base Margin Score for Standardized Reporting of Resection Technique during Nephron Sparing Surgery.

    Science.gov (United States)

    Minervini, Andrea; Campi, Riccardo; Kutikov, Alexander; Montagnani, Ilaria; Sessa, Francesco; Serni, Sergio; Raspollini, Maria Rosaria; Carini, Marco

    2015-10-01

    The surface-intermediate-base margin score is a novel standardized reporting system of resection techniques during nephron sparing surgery. We validated the surgeon assessed surface-intermediate-base score with microscopic histopathological assessment of partial nephrectomy specimens. Between June and August 2014 data were prospectively collected from 40 consecutive patients undergoing nephron sparing surgery. The surface-intermediate-base score was assigned to all cases. The score specific areas were color coded with tissue margin ink and sectioned for histological evaluation of healthy renal margin thickness. Maximum, minimum and mean thickness of healthy renal margin for each score specific area grade (surface [S] = 0, S = 1 ; intermediate [I] or base [B] = 0, I or B = 1, I or B = 2) was reported. The Mann-Whitney U and Kruskal-Wallis tests were used to compare the thickness of healthy renal margin in S = 0 vs 1 and I or B = 0 vs 1 vs 2 grades, respectively. Maximum, minimum and mean thickness of healthy renal margin was significantly different among score specific area grades S = 0 vs 1, and I or B = 0 vs 1, 0 vs 2 and 1 vs 2 (p <0.001). The main limitations of the study are the low number of the I or B = 1 and I or B = 2 samples and the assumption that each microscopic slide reflects the entire score specific area for histological analysis. The surface-intermediate-base scoring method can be readily harnessed in real-world clinical practice and accurately mirrors histopathological analysis for quantification and reporting of healthy renal margin thickness removed during tumor excision. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  12. A web-based neurological pain classifier tool utilizing Bayesian decision theory for pain classification in spinal cord injury patients

    Science.gov (United States)

    Verma, Sneha K.; Chun, Sophia; Liu, Brent J.

    2014-03-01

    Pain is a common complication after spinal cord injury, with prevalence estimates ranging from 77% to 81%, which highly affects a patient's lifestyle and well-being. In the current clinical setting, paper-based forms are used to classify pain correctly; however, the accuracy of diagnoses and optimal management of pain largely depend on the expert reviewer, which in many cases is not possible because of very few experts in this field. The need for a clinical decision support system that can be used by expert and non-expert clinicians has been cited in the literature, but such a system has not been developed. We have designed and developed a stand-alone tool for correctly classifying pain type in spinal cord injury (SCI) patients, using Bayesian decision theory. Various machine learning simulation methods are used to verify the algorithm using a pilot study data set consisting of data from 48 patients. The data set consists of the paper-based forms collected at the Long Beach VA clinic, with pain classification done by an expert in the field. Using WEKA as the machine learning tool, we have tested on the 48-patient dataset the hypothesis that the attributes collected on the forms and the pain location marked by patients have a very significant impact on the pain type classification. This tool will be integrated with an imaging informatics system to support a clinical study that will test the effectiveness of using Proton Beam radiotherapy for treating spinal cord injury (SCI) related neuropathic pain as an alternative to invasive surgical lesioning.
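
    A hedged sketch of a Bayesian-decision pain-type classifier in the spirit of this record: a naive Bayes model over binary form attributes. The attribute names, labels and data are synthetic; the study's actual attribute set, priors and class definitions are not reproduced here.

      import numpy as np
      from sklearn.naive_bayes import BernoulliNB

      rng = np.random.default_rng(0)
      # columns (assumed): burning quality, allodynia, below-injury location, movement-related
      X = rng.integers(0, 2, size=(48, 4))
      y = rng.integers(0, 3, size=48)    # toy labels: 0=nociceptive, 1=neuropathic, 2=other

      model = BernoulliNB().fit(X, y)
      new_patient = np.array([[1, 1, 1, 0]])
      print(model.predict(new_patient), model.predict_proba(new_patient).round(2))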

  13. Planning Target Margin Calculations for Prostate Radiotherapy Based on Intrafraction and Interfraction Motion Using Four Localization Methods

    International Nuclear Information System (INIS)

    Beltran, Chris; Herman, Michael G.; Davis, Brian J.

    2008-01-01

    Purpose: To determine planning target volume (PTV) margins for prostate radiotherapy based on the internal margin (IM) (intrafractional motion) and the setup margin (SM) (interfractional motion) for four daily localization methods: skin marks (tattoo), pelvic bony anatomy (bone), intraprostatic gold seeds using a 5-mm action threshold, and using no threshold. Methods and Materials: Forty prostate cancer patients were treated with external radiotherapy according to an online localization protocol using four intraprostatic gold seeds and electronic portal images (EPIs). Daily localization and treatment EPIs were obtained. These data allowed inter- and intrafractional analysis of prostate motion. The SM for the four daily localization methods and the IM were determined. Results: A total of 1532 fractions were analyzed. Tattoo localization requires a SM of 6.8 mm left-right (LR), 7.2 mm inferior-superior (IS), and 9.8 mm anterior-posterior (AP). Bone localization requires 3.1, 8.9, and 10.7 mm, respectively. The 5-mm threshold localization requires 4.0, 3.9, and 3.7 mm. No threshold localization requires 3.4, 3.2, and 3.2 mm. The intrafractional prostate motion requires an IM of 2.4 mm LR, 3.4 mm IS and AP. The PTV margin using the 5-mm threshold, including interobserver uncertainty, IM, and SM, is 4.8 mm LR, 5.4 mm IS, and 5.2 mm AP. Conclusions: Localization based on EPI with implanted gold seeds allows a large PTV margin reduction when compared with tattoo localization. Except for the LR direction, bony anatomy localization does not decrease the margins compared with tattoo localization. Intrafractional prostate motion is a limiting factor on margin reduction

  14. Comparison of the Classifier Oriented Gait Score and the Gait Profile Score based on imitated gait impairments.

    Science.gov (United States)

    Christian, Josef; Kröll, Josef; Schwameder, Hermann

    2017-06-01

    Common summary measures of gait quality such as the Gait Profile Score (GPS) are based on the principle of measuring a distance from the mean pattern of a healthy reference group in a gait pattern vector space. The recently introduced Classifier Oriented Gait Score (COGS) is a pathology specific score that measures this distance in a unique direction, which is indicated by a linear classifier. This approach has potentially improved the discriminatory power to detect subtle changes in gait patterns but does not incorporate a profile of interpretable sub-scores like the GPS. The main aims of this study were to extend the COGS by decomposing it into interpretable sub-scores as realized in the GPS and to compare the discriminative power of the GPS and COGS. Two types of gait impairments were imitated to enable a high level of control of the gait patterns. Imitated impairments were realized by restricting knee extension and inducing leg length discrepancy. The results showed increased discriminatory power of the COGS for differentiating diverse levels of impairment. Comparison of the GPS and COGS sub-scores and their ability to indicate changes in specific variables supports the validity of both scores. The COGS is an overall measure of gait quality with increased power to detect subtle changes in gait patterns and might be well suited for tracing the effect of a therapeutic treatment over time. The newly introduced sub-scores improved the interpretability of the COGS, which is helpful for practical applications. Copyright © 2017 Elsevier B.V. All rights reserved.
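
    The sketch below contrasts the two kinds of score discussed in this record: a GPS-like score as the RMS distance of a gait vector from the healthy reference mean, and a COGS-like score as the signed distance along a linear classifier's direction. The reference data and classifier weights are synthetic stand-ins.

      import numpy as np

      rng = np.random.default_rng(0)
      healthy = rng.normal(0, 1, size=(50, 9))     # 9 kinematic variables, healthy reference group
      mean_ref = healthy.mean(axis=0)

      w = rng.normal(size=9)
      w /= np.linalg.norm(w)                       # stand-in classifier direction (unit vector)

      subject = mean_ref + 0.8 * w + rng.normal(0, 0.1, 9)     # impairment mostly along w

      gps_like = np.sqrt(np.mean((subject - mean_ref) ** 2))   # direction-agnostic distance
      cogs_like = (subject - mean_ref) @ w                     # distance along the classifier
      print(f"GPS-like: {gps_like:.2f}  COGS-like: {cogs_like:.2f}")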

  15. Effect of gingival fluid on marginal adaptation of Class II resin-based composite restorations.

    Science.gov (United States)

    Spahr, A; Schön, F; Haller, B

    2000-10-01

    To evaluate in vitro the marginal quality of Class II composite restorations at the gingival enamel margins as affected by contamination of the cavities with gingival fluid (GF) during different steps of resin bonding procedures. 70 Class II cavities were prepared in extracted human molars and restored with composite using a multi-component bonding system (OptiBond FL/Herculite XRV; OPTI) or a single-bottle adhesive (Syntac Sprint/Tetric Ceram; SYN). The cavities were contaminated with human GF: C1 after acid etching, C2 after application of the primer (OPTI) or light-curing of the primer-adhesive (SYN), and C3 after light-curing of the resin adhesive (OPTI). Uncontaminated cavities were used as the control (C0). The restored teeth were subjected to thermocycling (TC) and replicated for SEM analysis of marginal gap formation. Microleakage at the gingival margins was determined by dye penetration with basic fuchsin. Statistical analysis was performed using non-parametric tests (Kruskal-Wallis test, Mann-Whitney test with Bonferroni correction). In both bonding systems, contamination with GF after acid etching (C1) did not impair the marginal quality; the mean percentages of continuous margin/mean depths of dye penetration were: OPTI: C0: 88.5%/0.10 mm, C1: 95.6%/0.04 mm; SYN: C0: 90.9%/0.08 mm, C1: 97.0%/0.05 mm. Marginal adaptation was adversely affected when GF contamination was performed after

  16. Decision-Based Marginal Total Variation Diffusion for Impulsive Noise Removal in Color Images

    Directory of Open Access Journals (Sweden)

    Hongyao Deng

    2017-01-01

    Full Text Available Impulsive noise removal for color images usually employs vector median filter, switching median filter, the total variation L1 method, and variants. These approaches, however, often introduce excessive smoothing and can result in extensive visual feature blurring and thus are suitable only for images with low density noise. A marginal method to reduce impulsive noise that overcomes this limitation is proposed in this paper, based on the following facts: (i) each channel in a color image is contaminated independently, and contaminative components are independent and identically distributed; (ii) in a natural image the gradients of different components of a pixel are similar to one another. This method divides components into different categories based on different noise characteristics. If an image is corrupted by salt-and-pepper noise, the components are divided into the corrupted and the noise-free components; if the image is corrupted by random-valued impulses, the components are divided into the corrupted, noise-free, and the possibly corrupted components. Components falling into different categories are processed differently. If a component is corrupted, modified total variation diffusion is applied; if it is possibly corrupted, scaled total variation diffusion is applied; otherwise, the component is left unchanged. Simulation results demonstrate its effectiveness.

  17. Gravity Matching Aided Inertial Navigation Technique Based on Marginal Robust Unscented Kalman Filter

    Directory of Open Access Journals (Sweden)

    Ming Liu

    2015-01-01

    Full Text Available This paper is concerned with the topic of gravity matching aided inertial navigation technology using a Kalman filter. The dynamic state space model for the Kalman filter is constructed as follows: the error equation of the inertial navigation system is employed as the process equation, while the local gravity model based on 9-point surface interpolation is employed as the observation equation. The unscented Kalman filter is employed to address the nonlinearity of the observation equation. The filter is refined in two ways. The marginalization technique is employed to exploit the conditionally linear substructure and reduce the computational load; specifically, the number of needed sigma points is reduced from 15 to 5 after this technique is used. A robust technique based on the Chi-square test is employed to make the filter insensitive to uncertainties in the above constructed observation model. Numerical simulation is carried out, and the efficacy of the proposed method is validated by the simulation results.
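
    A minimal sketch of the Chi-square robustness idea mentioned above: a gravity measurement whose normalized innovation squared exceeds a gate is rejected. The gate level and innovation covariance below are illustrative assumptions, not the paper's values.

      import numpy as np
      from scipy.stats import chi2

      def innovation_gate(z, z_pred, S, alpha=0.01):
          """Return True if the measurement passes the Chi-square innovation test."""
          nu = np.atleast_1d(z - z_pred)
          d2 = float(nu @ np.linalg.solve(np.atleast_2d(S), nu))   # normalized innovation squared
          return d2 <= chi2.ppf(1 - alpha, df=nu.size)

      S = np.array([[4.0]])    # assumed innovation covariance (mGal^2)
      print(innovation_gate(z=981000.5, z_pred=981001.0, S=S))   # small residual -> accept
      print(innovation_gate(z=981015.0, z_pred=981001.0, S=S))   # gross outlier -> reject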

  18. Influence of incorrect application of a water-based adhesive system on the marginal adaptation of Class V restorations.

    Science.gov (United States)

    Peschke, A; Blunck, U; Roulet, J F

    2000-10-01

    To determine the influence of incorrectly performed steps during the application of the water-based adhesive system OptiBond FL on the marginal adaptation of Class V composite restorations. In 96 extracted human teeth Class V cavities were prepared. Half of the margin length was situated in dentin. The teeth were randomly divided into 12 groups. The cavities were filled with Prodigy resin-based composite in combination with OptiBond FL, either according to the manufacturer's instructions (Group O) or including one of several incorrect application steps: Group A: prolonged etching (60 s); Group B: no etching of dentin; Group C: excessive drying after etching; Group D: short rewetting after excessive drying; Group E: air drying and rewetting; Group F: blot drying; Group G: saliva contamination; Group H: application of primer and immediate drying; Group I: application of only primer; Group J: application of only adhesive; Group K: no light curing of the adhesive before the application of composite. After thermocycling, replicas were taken and the margins were quantitatively analyzed in the SEM. Statistical analysis of the results was performed using non-parametric procedures. With the exception of the "rewetting groups" (D and E) and the group with saliva contamination (G), all other application procedures showed a significantly higher amount of marginal openings in dentin compared to the control group (O). Margin quality in enamel was only affected when the primer was not applied.

  19. A web-based non-intrusive ambient system to measure and classify activities of daily living.

    Science.gov (United States)

    Stucki, Reto A; Urwyler, Prabitha; Rampa, Luca; Müri, René; Mosimann, Urs P; Nef, Tobias

    2014-07-21

    The number of older adults in the global population is increasing. This demographic shift leads to an increasing prevalence of age-associated disorders, such as Alzheimer's disease and other types of dementia. With the progression of the disease, the risk for institutional care increases, which contrasts with the desire of most patients to stay in their home environment. Despite doctors' and caregivers' awareness of the patient's cognitive status, they are often uncertain about its consequences on activities of daily living (ADL). To provide effective care, they need to know how patients cope with ADL, in particular, the estimation of risks associated with the cognitive decline. The occurrence, performance, and duration of different ADL are important indicators of functional ability. The patient's ability to cope with these activities is traditionally assessed with questionnaires, which has disadvantages (e.g., lack of reliability and sensitivity). Several groups have proposed sensor-based systems to recognize and quantify these activities in the patient's home. Combined with Web technology, these systems can inform caregivers about their patients in real-time (e.g., via smartphone). We hypothesize that a non-intrusive system, which does not use body-mounted sensors, video-based imaging, or microphone recordings, would be better suited for use in dementia patients. Since it does not require the patient's attention and compliance, such a system might be well accepted by patients. We present a passive, Web-based, non-intrusive, assistive technology system that recognizes and classifies ADL. The components of this novel assistive technology system were wireless sensors distributed in every room of the participant's home and a central computer unit (CCU). The environmental data were acquired for 20 days (per participant) and then stored and processed on the CCU. In consultation with medical experts, eight ADL were classified. In this study, 10 healthy participants (6 women

  20. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data.

    Science.gov (United States)

    Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter

    2017-06-28

    High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Memory-Based Specification of Verbal Features for Classifying Animals into Super-Ordinate and Sub-Ordinate Categories

    Directory of Open Access Journals (Sweden)

    Takahiro Soshi

    2017-09-01

    Accumulating evidence suggests that category representations are based on features. Distinguishing features are considered to define categories because of their all-or-none responses for objects in different categories; however, it is unclear how distinguishing features actually classify objects at various category levels. The present study included 75 animals within three classes (mammal, bird, and fish), along with 195 verbal features. Healthy adults participated in memory-based feature-animal matching verification tests. Analyses included hierarchical clustering, a support vector machine, and independent component analysis to specify features effective for classification. Quantitative and qualitative comparisons of the significant features were conducted between the super-ordinate and sub-ordinate levels. The number of significant features was larger at the super-ordinate than at the sub-ordinate level. Qualitatively, the proportion of biological features was larger than that of cultural/affective features at both levels, while the proportion of affective features increased at the sub-ordinate level. In summary, the two types of features function differentially to establish category representations.

  2. Buying on margin, selling short in an agent-based market model

    Science.gov (United States)

    Zhang, Ting; Li, Honggang

    2013-09-01

    Credit trading, or leverage trading, which includes buying on margin and selling short, plays an important role in financial markets, where agents tend to increase their leverages for increased profits. This paper presents an agent-based asset market model to study the effect of the permissive leverage level on traders’ wealth and overall market indicators. In this model, heterogeneous agents can assume fundamental value-converging expectations or trend-persistence expectations, and their effective demands for assets depend on both demand willingness and wealth constraints, where leverage can relieve the wealth constraints to some extent. The asset market price is determined by a market maker, who watches the market excess demand, and is influenced by noise factors. Through simulations, we examine market results for different leverage ratios. At the individual level, we focus on how the leverage ratio influences agents’ wealth accumulation. At the market level, we focus on how the leverage ratio influences changes in the asset price, volatility, and trading volume. Qualitatively, our model provides some meaningful results supported by empirical facts. More importantly, we find a continuous phase transition as we increase the leverage threshold, which may provide a further perspective on credit trading.
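
    The mechanics described above (expectation-driven demand, a wealth constraint relaxed by leverage, and a market-maker price update driven by excess demand and noise) can be sketched in a few lines. The toy simulation below only illustrates that loop; the demand rules, parameter values and variable names are assumptions made for this sketch, not the calibration used in the paper.

```python
import numpy as np

# Toy leverage-constrained agent-based market (illustrative assumptions only).
rng = np.random.default_rng(0)

n_agents, n_steps = 100, 500
leverage = 2.0                       # permissive leverage ratio (assumption)
fundamental = 100.0
price = 100.0
wealth = np.full(n_agents, 1000.0)
is_fundamentalist = rng.random(n_agents) < 0.5   # value-converging vs. trend-persistence
prices = [price]

for _ in range(n_steps):
    trend = prices[-1] - prices[-2] if len(prices) > 1 else 0.0
    # Demand willingness from the two expectation types.
    willingness = np.where(is_fundamentalist,
                           0.05 * (fundamental - price),
                           0.05 * trend)
    # Wealth constraint, relaxed by the permitted leverage.
    max_units = leverage * wealth / price
    demand = np.clip(willingness * wealth / price, -max_units, max_units)
    excess = demand.sum() / n_agents
    # Market maker moves the price with excess demand plus a noise factor.
    price = max(1e-3, price * (1.0 + 0.01 * excess + 0.005 * rng.standard_normal()))
    wealth += demand * (price - prices[-1])          # mark-to-market gain/loss
    wealth = np.maximum(wealth, 1.0)
    prices.append(price)

returns = np.diff(np.log(prices))
print(f"final price {price:.2f}, return volatility {returns.std():.4f}")
```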

  3. Damage Identification of a Derrick Steel Structure Based on the HHT Marginal Spectrum Amplitude Curvature Difference

    Directory of Open Access Journals (Sweden)

    Dongying Han

    2017-01-01

    For the damage identification of derrick steel structures, traditional methods often require high-order vibration information to identify damage accurately. However, high-order vibration information of structures is difficult to acquire. Based on signal feature extraction and using only low-order vibration information, we took the right front leg as an example, analyzed the selection of the HHT marginal spectrum amplitude and the calculation of its curvature in practical application, designed damage conditions for a derrick steel structure, used the proposed index and the intrinsic mode function (IMF) instantaneous energy curvature method to perform damage simulation calculations and comparisons, and verified the ability of the index to identify the damage location in a noisy environment. The results show that the index can accurately determine the location of damaged and weakly damaged elements and can be used to qualitatively analyze the damage degree of an element; under impact load, noise hardly affects the identification of the damage location. Finally, this method was applied to the ZJ70 derrick steel structure laboratory model and compared with the IMF instantaneous energy curvature method, verifying its feasibility in a damage location simulation experiment.
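
    As a rough illustration of the index described above, the sketch below accumulates a marginal Hilbert spectrum amplitude from pre-computed intrinsic mode functions and takes a curvature difference across measurement points. It assumes the EMD decomposition has been done elsewhere, and all function names, toy signals and the damage scenario are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy.signal import hilbert

# Sketch of a marginal-spectrum-amplitude curvature-difference index.  It
# assumes the intrinsic mode functions (IMFs) of each measurement point were
# already extracted by EMD elsewhere; all names and signals are hypothetical.

def marginal_spectrum_amplitude(imfs, fs, n_bins=128):
    """Accumulate Hilbert amplitude over time into frequency bins."""
    edges = np.linspace(0.0, fs / 2.0, n_bins + 1)
    marginal = np.zeros(n_bins)
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)
        inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2.0 * np.pi)
        idx = np.clip(np.digitize(inst_freq, edges) - 1, 0, n_bins - 1)
        np.add.at(marginal, idx, amp[:-1])
    return marginal

def curvature(values):
    """Second-order central difference along the measurement points."""
    return values[:-2] - 2.0 * values[1:-1] + values[2:]

# Toy usage: 7 measurement points on the leg, intact vs. damaged state, where
# the "damage" simply amplifies the response at point 3.
fs = 200.0
t = np.arange(0.0, 2.0, 1.0 / fs)
intact = [[np.sin(2 * np.pi * 5 * t)] for _ in range(7)]
damaged = [[(1.3 if k == 3 else 1.0) * np.sin(2 * np.pi * 5 * t)] for k in range(7)]

amp_intact = np.array([marginal_spectrum_amplitude(p, fs).sum() for p in intact])
amp_damaged = np.array([marginal_spectrum_amplitude(p, fs).sum() for p in damaged])
index = np.abs(curvature(amp_damaged) - curvature(amp_intact))
print("suspected damaged element (interior point index):", int(np.argmax(index)) + 1)
```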

  4. Margins of freedom: a field-theoretic approach to class-based health dispositions and practices.

    Science.gov (United States)

    Burnett, Patrick John; Veenstra, Gerry

    2017-09-01

    Pierre Bourdieu's theory of practice situates social practices in the relational interplay between experiential mental phenomena (habitus), resources (capitals) and objective social structures (fields). When applied to class-based practices in particular, the overarching field of power within which social classes are potentially made manifest is the primary field of interest. Applying relational statistical techniques to original survey data from Toronto and Vancouver, Canada, we investigated whether smoking, engaging in physical activity and consuming fruit and vegetables are dispersed in a three-dimensional field of power shaped by economic and cultural capitals and cultural dispositions and practices. We find that aesthetic dispositions and flexibility of developing and established dispositions are associated with positioning in the Canadian field of power and embedded in the logics of the health practices dispersed in the field. From this field-theoretic perspective, behavioural change requires the disruption of existing relations of harmony between the habitus of agents, the fields within which the practices are enacted and the capitals that inform and enforce the mores and regularities of the fields. The three-dimensional model can be explored at: http://relational-health.ca/margins-freedom. © 2017 Foundation for the Sociology of Health & Illness.

  5. BitterSweetForest: A random forest based binary classifier to predict bitterness and sweetness of chemical compounds

    Science.gov (United States)

    Banerjee, Priyanka; Preissner, Robert

    2018-04-01

    The taste of chemical compounds present in food stimulates us to take in nutrients and avoid poisons. However, the perception of taste greatly depends on genetic as well as evolutionary perspectives. The aim of this work was the development and validation of a machine learning model based on molecular fingerprints to discriminate between the sweet and bitter taste of molecules. BitterSweetForest is the first open-access model based on a KNIME workflow that provides a platform for the prediction of bitter and sweet taste of chemical compounds using molecular fingerprints and a random forest classifier. The constructed model yielded an accuracy of 95% and an AUC of 0.98 in cross-validation. On an independent test set, BitterSweetForest achieved an accuracy of 96% and an AUC of 0.98 for bitter and sweet taste prediction. The constructed model was further applied to predict the bitter and sweet taste of natural compounds and approved drugs, as well as of an acute toxicity compound data set. BitterSweetForest classifies 70% of the natural product space as bitter and 10% of the natural product space as sweet with a confidence score of 0.60 and above. 77% of the approved drug set was predicted as bitter and 2% as sweet with confidence scores of 0.75 and above. Similarly, 75% of the compounds from the acute oral toxicity class were predicted as bitter with a minimum confidence score of 0.75, suggesting that toxic compounds are mostly bitter. Furthermore, we applied a Bayesian feature analysis method to discriminate the most frequently occurring chemical features between sweet and bitter compounds in the feature space of a circular fingerprint.
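
    A hedged sketch of the core modelling step, in the spirit of the workflow described above: a random forest trained on fingerprint bit vectors, evaluated by cross-validation, and applied with a minimum confidence score. The random bit vectors below stand in for real molecular fingerprints, and the hyperparameters and thresholds are assumptions rather than the published KNIME settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder fingerprints: random 1024-bit vectors standing in for circular
# fingerprints of real molecules; labels are toy labels (1 = bitter, 0 = sweet).
rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(300, 1024))
y = rng.integers(0, 2, size=300)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print("cross-validated AUC:",
      cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

# Confidence-thresholded prediction, mirroring the idea of only calling a
# compound bitter or sweet above a minimum confidence score.
clf.fit(X, y)
proba = clf.predict_proba(X[:5])[:, 1]
calls = np.where(proba >= 0.75, "bitter",
                 np.where(proba <= 0.25, "sweet", "uncertain"))
print(list(zip(np.round(proba, 2), calls)))
```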

  6. Marginalization of the Youth

    DEFF Research Database (Denmark)

    Jensen, Niels Rosendal

    2009-01-01

    The article is based on a keynote speech given in Bielefeld on the subject "welfare state and marginalized youth", focusing on the high ambition of expanding schooling in Denmark from 9 to 12 years. The unintended effect may be a new kind of marginalization.

  7. Effect of placement of droop based generators in distribution network on small signal stability margin and network loss

    DEFF Research Database (Denmark)

    Dheer, D.K.; Doolla, S.; Bandyopadhyay, S.

    2017-01-01

    For a utility-connected system, issues related to small signal stability with Distributed Generators (DGs) are insignificant due to the presence of a very strong grid. Optimally placed sources in a utility-connected microgrid system may not be optimal/stable in islanded condition. Among other issues, the small signal stability margin is at the fore. The present research studied the effect of the location of droop-controlled DGs on small signal stability margin and network loss on a modified IEEE 13-bus system, an IEEE 33-bus distribution system and a practical 22-bus radial distribution network. A complete … loss and stability margin is further investigated by identifying the Pareto fronts for the modified IEEE 13-bus, IEEE 33-bus and practical 22-bus radial distribution networks with application of the Reference point based Non-dominated Sorting Genetic Algorithm (R-NSGA). Results were validated by time domain …

  8. Effect of metal selection and porcelain firing on the marginal accuracy of titanium-based metal ceramic restorations.

    Science.gov (United States)

    Shokry, Tamer E; Attia, Mazen; Mosleh, Ihab; Elhosary, Mohamed; Hamza, Tamer; Shen, Chiayi

    2010-01-01

    Titanium is the most biocompatible metal used for dental casting; however, there is concern about its marginal accuracy after porcelain application since this aspect has direct influence on marginal fit. The purpose of this study was to determine the effect that metal selection and the porcelain firing procedure have on the marginal accuracy of metal ceramic prostheses. Cast CP Ti, milled CP Ti, cast Ti-6Al-7Nb, and cast Ni-Cr copings (n=5) were fired with compatible porcelains (Triceram for titanium-based metals and VITA VMK 95 for Ni-Cr alloy). The Ni-Cr alloy fired with its porcelain served as the control. Photographs of metal copings placed on a master die were made. Marginal discrepancy was determined on the photographs using an image processing program at 8 predetermined locations before airborne-particle abrasion for porcelain application, after firing of the opaque layer, and after firing of the dentin layer. Repeated-measures 2-way ANOVA was used to investigate the effect of metal selection and firing stage, and paired t tests were used to determine the effect of each firing stage within each material group (alpha=.05). ANOVA showed that both metal selection and firing stage significantly influenced the measured marginal discrepancy (P<.05) … cast Ti-6Al-7Nb alloy (P=.003). Titanium copings fabricated by CAD/CAM demonstrated the least marginal discrepancy among all groups, while the base metal (Ni-Cr) groups exhibited the most discrepancy of all groups tested. Copyright 2010 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  9. SNRFCB: sub-network based random forest classifier for predicting chemotherapy benefit on survival for cancer treatment.

    Science.gov (United States)

    Shi, Mingguang; He, Jianmin

    2016-04-01

    Adjuvant chemotherapy (CTX) should be individualized to provide potential survival benefit and avoid potential harm to cancer patients. Our goal was to establish a computational approach for making personalized estimates of the survival benefit from adjuvant CTX. We developed the Sub-Network based Random Forest classifier for predicting Chemotherapy Benefit (SNRFCB) based on gene expression datasets of lung cancer. The SNRFCB approach was then validated in independent test cohorts for identifying chemotherapy responder cohorts and chemotherapy non-responder cohorts. SNRFCB involved the pre-selection of gene sub-network signatures based on the mutations and on protein-protein interaction data as well as the application of the random forest algorithm to gene expression datasets. Adjuvant CTX was significantly associated with the prolonged overall survival of lung cancer patients in the chemotherapy responder group (P = 0.008), but it was not beneficial to patients in the chemotherapy non-responder group (P = 0.657). Adjuvant CTX was significantly associated with the prolonged overall survival of lung cancer squamous cell carcinoma (SQCC) subtype patients in the chemotherapy responder cohorts (P = 0.024), but it was not beneficial to patients in the chemotherapy non-responder cohorts (P = 0.383). SNRFCB improved prediction performance as compared to the machine learning method, support vector machine (SVM). To test the general applicability of the predictive model, we further applied the SNRFCB approach to human breast cancer datasets and also observed superior performance. SNRFCB could provide a recurrence probability for individual patients and identify which patients may benefit from adjuvant CTX in clinical trials.

  10. Single classifier, OvO, OvA and RCC multiclass classification method in handheld based smartphone gait identification

    Science.gov (United States)

    Raziff, Abdul Rafiez Abdul; Sulaiman, Md Nasir; Mustapha, Norwati; Perumal, Thinagaran

    2017-10-01

    Gait recognition is widely used in many applications. In gait-based person identification in particular, the number of classes (people) is large and may exceed 20. Due to the large number of classes, a single classification mapping (direct classification) may not be suitable, as most existing algorithms are designed for binary classification. Furthermore, having many classes in a dataset increases the likelihood of highly overlapped class boundaries. This paper discusses the application of multiclass classifier mappings such as one-vs-all (OvA), one-vs-one (OvO) and random correction code (RCC) to handheld smartphone gait signals for person identification. The results are then compared with a single J48 decision tree as a benchmark. From the results, using a multiclass classification mapping method partially improved the overall accuracy, especially for OvO and for RCC with a width factor greater than 4. For OvA, the accuracy was worse than that of a single J48 due to the high number of classes.
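
    The three mappings discussed above exist as generic wrappers in scikit-learn, where the error-correcting output-code wrapper's code_size plays the role of the width factor. The sketch below compares them against a single decision tree (standing in for J48) on synthetic data; the data and settings are placeholders, not the gait dataset of the study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import (OneVsOneClassifier, OneVsRestClassifier,
                                OutputCodeClassifier)
from sklearn.tree import DecisionTreeClassifier

# Synthetic 20-class problem standing in for handheld accelerometer gait
# features; J48 is approximated here by sklearn's DecisionTreeClassifier.
X, y = make_classification(n_samples=600, n_features=20, n_informative=12,
                           n_classes=20, n_clusters_per_class=1, random_state=0)

base = DecisionTreeClassifier(random_state=0)
models = {
    "single tree": base,
    "OvA": OneVsRestClassifier(base),
    "OvO": OneVsOneClassifier(base),
    "RCC (width 4)": OutputCodeClassifier(base, code_size=4, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:>14s}: {acc:.3f}")
```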

  11. The employment of Support Vector Machine to classify high and low performance archers based on bio-physiological variables

    Science.gov (United States)

    Taha, Zahari; Muazu Musa, Rabiu; Majeed, Anwar P. P. Abdul; Razali Abdullah, Mohamad; Amirul Abdullah, Muhammad; Hasnun Arif Hassan, Mohd; Khalil, Zubair

    2018-04-01

    The present study employs a machine learning algorithm, namely the support vector machine (SVM), to classify high and low potential archers from a collection of bio-physiological variables trained on different SVMs. Fifty youth archers with an average age and standard deviation of (17.0 ± .056), gathered from various archery programmes, completed a one-end shooting score test. The bio-physiological variables, namely resting heart rate, resting respiratory rate, resting diastolic blood pressure, resting systolic blood pressure, as well as calorie intake, were measured prior to the shooting tests. A k-means cluster analysis was applied to cluster the archers based on the variables assessed. SVM models, i.e. linear, quadratic and cubic kernel functions, were trained on the aforementioned variables. The k-means analysis clustered the archers into high potential archers (HPA) and low potential archers (LPA), respectively. The linear SVM exhibited good accuracy, with a classification accuracy of 94%, in comparison with the other tested models. The findings of this investigation can be valuable to coaches and sports managers for recognising high potential athletes from the selected bio-physiological variables examined.
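
    The pipeline described above (an unsupervised split into high/low potential groups, followed by SVMs with linear, quadratic and cubic kernels) can be sketched as follows. The random matrix merely stands in for the five measured bio-physiological variables; it is not the study's data, and the preprocessing choice is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 50 archers x 5 bio-physiological variables.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))

# Step 1: unsupervised split into high/low potential archers.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: train SVMs with linear, quadratic and cubic kernels on those labels.
kernels = {"linear": SVC(kernel="linear"),
           "quadratic": SVC(kernel="poly", degree=2),
           "cubic": SVC(kernel="poly", degree=3)}
for name, svm in kernels.items():
    model = make_pipeline(StandardScaler(), svm)
    acc = cross_val_score(model, X, labels, cv=5).mean()
    print(f"{name:>9s} SVM accuracy: {acc:.2f}")
```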

  12. Prototyping a GNSS-Based Passive Radar for UAVs: An Instrument to Classify the Water Content Feature of Lands

    Directory of Open Access Journals (Sweden)

    Micaela Troglia Gamba

    2015-11-01

    Global Navigation Satellite Systems (GNSS) broadcast signals for positioning and navigation, which can be also employed for remote sensing applications. Indeed, the satellites of any GNSS can be seen as synchronized sources of electromagnetic radiation, and specific processing of the signals reflected back from the ground can be used to estimate the geophysical properties of the Earth’s surface. Several experiments have successfully demonstrated GNSS-reflectometry (GNSS-R), whereas new applications are continuously emerging and are presently under development, either from static or dynamic platforms. GNSS-R can be implemented at a low cost, primarily if small devices are mounted on-board unmanned aerial vehicles (UAVs), which today can be equipped with several types of sensors for environmental monitoring. So far, many instruments for GNSS-R have followed the GNSS bistatic radar architecture and consisted of custom GNSS receivers, often requiring a personal computer and bulky systems to store large amounts of data. This paper presents the development of a GNSS-based sensor for UAVs and small manned aircraft, used to classify lands according to their soil water content. The paper provides details on the design of the major hardware and software components, as well as the description of the results obtained through field tests.

  13. Intelligent Garbage Classifier

    Directory of Open Access Journals (Sweden)

    Ignacio Rodríguez Novelle

    2008-12-01

    IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  14. Marginal Abatement Cost of CO2 in China Based on Directional Distance Function: An Industry Perspective

    Directory of Open Access Journals (Sweden)

    Bowen Xiao

    2017-01-01

    Full Text Available Industrial sectors account for around 70% of the total energy-related CO2 emissions in China. It is of great importance to measure the potential for CO2 emissions reduction and calculate the carbon price in industrial sectors covered in the Emissions Trading Scheme and carbon tax. This paper employs the directional distance function to calculate the marginal abatement costs of CO2 emissions during 2005–2011 and makes a comparative analysis between our study and the relevant literature. Our empirical results show that the marginal abatement costs vary greatly from industry to industry: high marginal abatement costs occur in industries with low carbon intensity, and vice versa. In the application of the marginal abatement cost, the abatement distribution scheme with minimum cost is established under different abatement targets. The conclusions of abatement distribution scheme indicate that those heavy industries with low MACs and high carbon intensity should take more responsibility for emissions reduction and vice versa. Finally, the policy implications for marginal abatement cost are provided.

  15. Marginal adaptation of a low-shrinkage silorane-based composite: A SEM-analysis

    DEFF Research Database (Denmark)

    Schmidt, Malene; Bindslev, Preben Hørsted; Poulsen, Sven

    2012-01-01

    shrinkage, has been marketed. Objective. To investigate whether reduced polymerization shrinkage improves the marginal adaptation of composite restorations. Material and methods. A total of 156 scanning electron microscopy (SEM) pictures (78 baseline, 78 follow-up) of the occlusal part of Class II......-casts of the restorations were used for SEM pictures at x 16 magnification. Pictures from baseline and follow-up (398 days, SD 29 days) were randomized and the examiner was blinded to the material and the age of the restoration. Stereologic measurements were used to calculate the length and the width of the marginal...

  16. A review and experimental study on the application of classifiers and evolutionary algorithms in EEG-based brain-machine interface systems

    Science.gov (United States)

    Tahernezhad-Javazm, Farajollah; Azimirad, Vahid; Shoaran, Maryam

    2018-04-01

    Objective. Considering the importance and the near-future development of noninvasive brain-machine interface (BMI) systems, this paper presents a comprehensive theoretical-experimental survey on the classification and evolutionary methods for BMI-based systems in which EEG signals are used. Approach. The paper is divided into two main parts. In the first part, a wide range of different types of the base and combinatorial classifiers including boosting and bagging classifiers and evolutionary algorithms are reviewed and investigated. In the second part, these classifiers and evolutionary algorithms are assessed and compared based on two types of relatively widely used BMI systems, sensory motor rhythm-BMI and event-related potentials-BMI. Moreover, in the second part, some of the improved evolutionary algorithms as well as bi-objective algorithms are experimentally assessed and compared. Main results. In this study two databases are used, and cross-validation accuracy (CVA) and stability to data volume (SDV) are considered as the evaluation criteria for the classifiers. According to the experimental results on both databases, regarding the base classifiers, linear discriminant analysis and support vector machines with respect to CVA evaluation metric, and naive Bayes with respect to SDV demonstrated the best performances. Among the combinatorial classifiers, four classifiers, Bagg-DT (bagging decision tree), LogitBoost, and GentleBoost with respect to CVA, and Bagging-LR (bagging logistic regression) and AdaBoost (adaptive boosting) with respect to SDV had the best performances. Finally, regarding the evolutionary algorithms, single-objective invasive weed optimization (IWO) and bi-objective nondominated sorting IWO algorithms demonstrated the best performances. Significance. We present a general survey on the base and the combinatorial classification methods for EEG signals (sensory motor rhythm and event-related potentials) as well as their optimization methods

  17. A Gaussian mixture model based adaptive classifier for fNIRS brain-computer interfaces and its testing via simulation

    Science.gov (United States)

    Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe

    2017-08-01

    Objective. Functional near infra-red spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation: 99% versus  brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.

  18. Unified framework for triaxial accelerometer-based fall event detection and classification using cumulants and hierarchical decision tree classifier.

    Science.gov (United States)

    Kambhampati, Satya Samyukta; Singh, Vishal; Manikandan, M Sabarimalai; Ramkumar, Barathram

    2015-08-01

    In this Letter, the authors present a unified framework for fall event detection and classification using the cumulants extracted from the acceleration (ACC) signals acquired using a single waist-mounted triaxial accelerometer. The main objective of this Letter is to find suitable representative cumulants and classifiers in effectively detecting and classifying different types of fall and non-fall events. It was discovered that the first level of the proposed hierarchical decision tree algorithm implements fall detection using fifth-order cumulants and support vector machine (SVM) classifier. In the second level, the fall event classification algorithm uses the fifth-order cumulants and SVM. Finally, human activity classification is performed using the second-order cumulants and SVM. The detection and classification results are compared with those of the decision tree, naive Bayes, multilayer perceptron and SVM classifiers with different types of time-domain features including the second-, third-, fourth- and fifth-order cumulants and the signal magnitude vector and signal magnitude area. The experimental results demonstrate that the second- and fifth-order cumulant features and SVM classifier can achieve optimal detection and classification rates of above 95%, as well as the lowest false alarm rate of 1.03%.
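
    The first level of the hierarchy described above (fall vs. non-fall from cumulant features of the acceleration signal and an SVM) can be sketched as below. The cumulants are computed from central moments using the standard relations up to fifth order; the synthetic windows and the SVM settings are assumptions, not the authors' protocol.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cumulant_features(signal):
    """Central-moment based cumulants kappa_2 .. kappa_5 of a 1-D signal."""
    x = signal - signal.mean()
    mu = {k: np.mean(x ** k) for k in range(2, 6)}
    k2 = mu[2]
    k3 = mu[3]
    k4 = mu[4] - 3.0 * mu[2] ** 2
    k5 = mu[5] - 10.0 * mu[3] * mu[2]
    return np.array([k2, k3, k4, k5])

# Synthetic acceleration-magnitude windows: ~1 g baseline, plus an impact
# spike for "fall" windows (illustrative stand-in for real sensor data).
rng = np.random.default_rng(0)
def make_window(fall):
    t = np.linspace(0, 2, 200)
    base = rng.normal(0, 0.05, t.size) + 1.0
    if fall:
        base += 2.5 * np.exp(-((t - 1.0) ** 2) / 0.002)
    return base

windows = [make_window(fall=i % 2 == 0) for i in range(100)]
X = np.vstack([cumulant_features(w) for w in windows])
y = np.array([i % 2 == 0 for i in range(100)], dtype=int)   # 1 = fall

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("training accuracy:", clf.score(X, y))
```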

  19. Classifying brain metastases by their primary site of origin using a radiomics approach based on texture analysis: a feasibility study.

    Science.gov (United States)

    Ortiz-Ramón, Rafael; Larroza, Andrés; Ruiz-España, Silvia; Arana, Estanislao; Moratal, David

    2018-05-14

    To examine the capability of MRI texture analysis to differentiate the primary site of origin of brain metastases following a radiomics approach. Sixty-seven untreated brain metastases (BM) were found in 3D T1-weighted MRI of 38 patients with cancer: 27 from lung cancer, 23 from melanoma and 17 from breast cancer. These lesions were segmented in 2D and 3D to compare the discriminative power of 2D and 3D texture features. The images were quantized using different number of gray-levels to test the influence of quantization. Forty-three rotation-invariant texture features were examined. Feature selection and random forest classification were implemented within a nested cross-validation structure. Classification was evaluated with the area under receiver operating characteristic curve (AUC) considering two strategies: multiclass and one-versus-one. In the multiclass approach, 3D texture features were more discriminative than 2D features. The best results were achieved for images quantized with 32 gray-levels (AUC = 0.873 ± 0.064) using the top four features provided by the feature selection method based on the p-value. In the one-versus-one approach, high accuracy was obtained when differentiating lung cancer BM from breast cancer BM (four features, AUC = 0.963 ± 0.054) and melanoma BM (eight features, AUC = 0.936 ± 0.070) using the optimal dataset (3D features, 32 gray-levels). Classification of breast cancer and melanoma BM was unsatisfactory (AUC = 0.607 ± 0.180). Volumetric MRI texture features can be useful to differentiate brain metastases from different primary cancers after quantizing the images with the proper number of gray-levels. • Texture analysis is a promising source of biomarkers for classifying brain neoplasms. • MRI texture features of brain metastases could help identifying the primary cancer. • Volumetric texture features are more discriminative than traditional 2D texture features.
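
    The nested cross-validation structure described above (feature selection tuned in an inner loop, performance estimated in an outer loop) is sketched below with scikit-learn. An ANOVA F-test stands in for the p-value-based selector, and the synthetic matrix replaces the 43 texture features of the 67 lesions; all settings are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in: 67 lesions x 43 rotation-invariant texture features.
X, y = make_classification(n_samples=67, n_features=43, n_informative=8,
                           random_state=0)

pipe = Pipeline([("select", SelectKBest(f_classif)),
                 ("rf", RandomForestClassifier(n_estimators=300, random_state=0))])
# Inner loop chooses how many features to keep; outer loop estimates the AUC.
inner = GridSearchCV(pipe, {"select__k": [2, 4, 8, 16]},
                     cv=StratifiedKFold(5), scoring="roc_auc")
outer_auc = cross_val_score(inner, X, y, cv=StratifiedKFold(5), scoring="roc_auc")
print(f"nested-CV AUC: {outer_auc.mean():.3f} +/- {outer_auc.std():.3f}")
```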

  20. Marginal adaptation, fracture load and macroscopic failure mode of adhesively luted PMMA-based CAD/CAM inlays.

    Science.gov (United States)

    Ender, Andreas; Bienz, Stefan; Mörmann, Werner; Mehl, Albert; Attin, Thomas; Stawarczyk, Bogna

    2016-02-01

    To evaluate marginal adaptation, fracture load and failure types of CAD/CAM polymeric inlays. Standardized prepared human molars (48) were divided into four groups (n=12): (A) PCG (positive control group); adhesively luted glass-ceramic inlays, (B) TRX; CAD/CAM polymeric inlays luted using a self-adhesive resin cement, (C) TAC; CAD/CAM polymeric inlays luted using a conventional resin cement, and (D) NCG (negative control group); direct-filled resin-based composite restorations. All specimens were subjected to a chewing simulator. Before and after chewing fatigue, marginal adaptation was assessed at two interfaces: (1) between dental hard tissues and luting cement and (2) between luting cement and restoration. Thereafter, the specimens were loaded and the fracture loads, as well as the failure types, were determined. The data were analysed using three- and one-way ANOVA with post hoc Scheffé test, two sample Student's t-test (p<.05). Marginal adaptation for interface 1 showed significantly better results for TRX and PCG than for TAC (p=0.001-0.02) and NCG (p=0.001-0.047). For interface 2, marginal adaptation for TAC was significantly inferior to TRX (p<.05) … marginal adaptation of TAC and NCG. No significant differences in fracture load were found between all tested groups. Self-adhesive luted polymeric CAD/CAM inlays showed similar marginal adaptation and fracture load values compared to adhesively luted glass-ceramic inlays. Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  1. The performance of an automatic acoustic-based program classifier compared to hearing aid users' manual selection of listening programs.

    Science.gov (United States)

    Searchfield, Grant D; Linford, Tania; Kobayashi, Kei; Crowhen, David; Latzel, Matthias

    2018-03-01

    To compare preference for and performance of manually selected programmes to an automatic sound classifier, the Phonak AutoSense OS. A single blind repeated measures study. Participants were fit with Phonak Virto V90 ITE aids; preferences for different listening programmes were compared across four different sound scenarios (speech in: quiet, noise, loud noise and a car). Following a 4-week trial preferences were reassessed and the users preferred programme was compared to the automatic classifier for sound quality and hearing in noise (HINT test) using a 12 loudspeaker array. Twenty-five participants with symmetrical moderate-severe sensorineural hearing loss. Participant preferences of manual programme for scenarios varied considerably between and within sessions. A HINT Speech Reception Threshold (SRT) advantage was observed for the automatic classifier over participant's manual selection for speech in quiet, loud noise and car noise. Sound quality ratings were similar for both manual and automatic selections. The use of a sound classifier is a viable alternative to manual programme selection.

  2. The effect of repeated preheating of dimethacrylate and silorane-based composite resins on marginal gap of class V restorations.

    Science.gov (United States)

    Alizadeh Oskoee, Parnian; Pournaghi Azar, Fatemeh; Jafari Navimipour, Elmira; Ebrahimi Chaharom, Mohammad Esmaeel; Naser Alavi, Fereshteh; Salari, Ashkan

    2017-01-01

    Background. One of the problems with composite resin restorations is gap formation at the resin–tooth interface. The present study evaluated the effect of preheating cycles of silorane- and dimethacrylate-based composite resins on gap formation at the gingival margins of Class V restorations. Methods. In this in vitro study, standard Class V cavities were prepared on the buccal surfaces of 48 bovine incisors. For the restorative procedure, the samples were randomly divided into 2 groups based on the type of composite resin (group 1: dimethacrylate composite [Filtek Z250]; group 2: silorane composite [Filtek P90]) and each group was randomly divided into 2 subgroups based on the composite temperature (A: room temperature; B: after 40 preheating cycles up to 55°C). Marginal gaps were measured using a stereomicroscope at ×40 and analyzed with two-way ANOVA. Inter- and intra-group comparisons were analyzed with post-hoc Tukey tests. Significance level was defined at P<0.05. The effects of composite resin type, preheating and the interaction of these variables on gap formation were significant (P<0.05) … composite resins (P …) … composite resins at room temperature compared to composite resins after 40 preheating cycles (P …) … composite resins. Preheating of silorane-based composites can result in the best marginal adaptation.

  3. Machined part sales price build-up based on the contribution margin concept

    OpenAIRE

    Lucato, Wagner Cesar; Baptista, Elesandro Antonio; Coppini, Nivaldo Lemos

    2009-01-01

    One of the main competitive moves observed in the last two decades was the change in product pricing, evolving from a cost plus margin paradigm to a market-driven one. In the present days, the customer defines how much he or she is willing to pay for a given product or service. As a result, traditional cost accounting procedures and their related pricing formulas cannot accommodate that kind of change without significant turnaround in practices and concepts. Taking that into consideration, th...
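
    In its simplest form, a contribution-margin price build-up fixes the sales price so that price minus variable cost meets a target contribution ratio. The figures in the sketch below are purely illustrative, not taken from the paper.

```python
# Tiny worked sketch of a contribution-margin price build-up.
variable_cost = 42.00        # machining + material cost per part (assumption)
target_cm_ratio = 0.35       # desired contribution margin as a share of price

price = variable_cost / (1.0 - target_cm_ratio)
contribution = price - variable_cost
print(f"sales price: {price:.2f}  contribution per part: {contribution:.2f} "
      f"({contribution / price:.0%} of price)")
```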

  4. New neural network classifier of fall-risk based on the Mahalanobis distance and kinematic parameters assessed by a wearable device

    International Nuclear Information System (INIS)

    Giansanti, Daniele; Macellari, Velio; Maccioni, Giovanni

    2008-01-01

    Fall prevention lacks easy, quantitative and wearable methods for the classification of fall-risk (FR). Efforts must be thus devoted to the choice of an ad hoc classifier both to reduce the size of the sample used to train the classifier and to improve performances. A new methodology that uses a neural network (NN) and a wearable device are hereby proposed for this purpose. The NN uses kinematic parameters assessed by a wearable device with accelerometers and rate gyroscopes during a posturography protocol. The training of the NN was based on the Mahalanobis distance and was carried out on two groups of 30 elderly subjects with varying fall-risk Tinetti scores. The validation was done on two groups of 100 subjects with different fall-risk Tinetti scores and showed that, both in terms of specificity and sensitivity, the NN performed better than other classifiers (naive Bayes, Bayes net, multilayer perceptron, support vector machines, statistical classifiers). In particular, (i) the proposed NN methodology improved the specificity and sensitivity by a mean of 3% when compared to the statistical classifier based on the Mahalanobis distance (SCMD) described in Giansanti (2006 Physiol. Meas. 27 1081–90); (ii) the assessed specificity was 97%, the assessed sensitivity was 98% and the area under receiver operator characteristics was 0.965. (note)
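
    The distance at the core of both classifiers mentioned above can be sketched directly: the kinematic parameters of a new subject are scored by their Mahalanobis distance to a low-risk reference group, and a threshold separates the fall-risk classes. The data and the threshold value below are illustrative assumptions.

```python
import numpy as np

# Reference distribution: kinematic parameters of a low-risk group (placeholder).
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(30, 6))
mean = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def mahalanobis(x):
    """Mahalanobis distance of one subject's parameter vector to the reference."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

subjects = rng.normal(0.5, 1.2, size=(5, 6))       # new subjects to classify
scores = np.array([mahalanobis(s) for s in subjects])
print(np.where(scores > 3.5, "high fall-risk", "low fall-risk"))
```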

  5. Study on seismic design margin based upon inelastic shaking test of the piping and support system

    International Nuclear Information System (INIS)

    Ishiguro, Takami; Eto, Kazutoshi; Ikeda, Kazutoyo; Yoshii, Toshiaki; Kondo, Masami; Tai, Koichi

    2009-01-01

    In Japan, according to the revised Regulatory Guide for Aseismic Design of Nuclear Power Reactor Facilities, September 2006, criteria of design basis earthquakes of Nuclear Power Reactor Facilities become more severe. Then, evaluating seismic design margin took on a great importance and it has been profoundly discussed. Since seismic safety is one of the major key issues of nuclear power plant safety, it has been demonstrated that nuclear piping system possesses large safety margins by various durability test reports for piping in ultimate conditions. Though the knowledge of safety margin has been accumulated from these reports, there still remain some technical uncertainties about the phenomenon when both piping and support structures show inelastic behavior in extremely high seismic excitation level. In order to obtain the influences of inelastic behavior of the support structures to the whole piping system response when both piping and support structures show inelastic behavior, we examined seismic proving tests and we conducted simulation analyses for the piping system which focused on the inelastic behavior of the support to the whole piping system response. This paper introduces major results of the seismic shaking tests of the piping and support system and the simulation analyses of these tests. (author)

  6. Parotid gland sparing effect by computed tomography-based modified lower field margin in whole brain radiotherapy

    International Nuclear Information System (INIS)

    Cho, Oyeon; Chun, Mi Son; Oh, Young Taek; Kim, Mi Hwa; Park, Hae Jin; Nam, Sang Soo; Heo, Jae Sung; Noh, O Kyu; Park, Sung Ho

    2013-01-01

    The parotid gland can be considered as a risk organ in whole brain radiotherapy (WBRT). The purpose of this study is to evaluate the parotid gland sparing effect of computed tomography (CT)-based WBRT compared to a 2-dimensional plan with a conventional field margin. From January 2008 to April 2011, 53 patients underwent WBRT using CT-based simulation. A bilateral two-field arrangement was used and the prescribed dose was 30 Gy in 10 fractions. We compared the parotid dose between 2 radiotherapy plans using different lower field margins: conventional field to the lower level of the atlas (CF) and modified field fitted to the brain tissue (MF). Averages of mean parotid dose of the 2 protocols with CF and MF were 17.4 Gy and 8.7 Gy, respectively (p …). The volumes receiving >98% of prescribed dose were 99.7% for CF and 99.5% for MF. Compared to WBRT with CF, CT-based lower field margin modification is a simple and effective technique for sparing the parotid gland, while providing similar dose coverage of the whole brain.

  7. A Fuzzy Logic-Based Personalized Method to Classify Perceived Exertion in Workplaces Using a Wearable Heart Rate Sensor

    OpenAIRE

    Pancardo, Pablo; Hernández-Nolasco, J. A.; Acosta-Escalante, Francisco

    2018-01-01

    Knowing the perceived exertion of workers during their physical activities facilitates the decision-making of supervisors regarding the worker allocation in the appropriate job, actions to prevent accidents, and reassignment of tasks, among others. However, although wearable heart rate sensors represent an effective way to capture perceived exertion, ergonomic methods are generic and they do not consider the diffuse nature of the ranges that classify the efforts. Personalized monitoring is ne...

  8. Optimal beam margins in linac-based VMAT stereotactic ablative body radiotherapy: a Pareto front analysis for liver metastases.

    Science.gov (United States)

    Cilla, Savino; Ianiro, Anna; Deodato, Francesco; Macchia, Gabriella; Digesù, Cinzia; Valentini, Vincenzo; Morganti, Alessio G

    2017-11-27

    We explored the Pareto front mathematical strategy to determine the optimal block margin and prescription isodose for stereotactic body radiotherapy (SBRT) treatments of liver metastases using the volumetric-modulated arc therapy (VMAT) technique. Three targets (planning target volumes [PTVs] = 20, 55, and 101 cc) were selected. A single fraction dose of 26 Gy was prescribed (prescription dose [PD]). VMAT plans were generated for 3 different beam energies. Pareto fronts based on (1) different multileaf collimator (MLC) block margins around the PTV and (2) different prescription isodose lines (IDL) were produced. For each block margin, the greatest IDL fulfilling the criteria (95% of PTV reached 100%) was considered as providing the optimal clinical plan for PTV coverage. Liver Dmean, V7Gy, and V12Gy were used against the PTV coverage to generate the fronts. Gradient indexes (GI and mGI), homogeneity index (HI), and healthy liver irradiation in terms of Dmean, V7Gy, and V12Gy were calculated to compare different plans. In addition, each target was also optimized with a full-inverse planning engine to obtain a direct comparison with anatomy-based treatment planning system (TPS) results. About 900 plans were calculated to generate the fronts. GI and mGI show a U-shaped behavior as a function of beam margin, with minimal values obtained with a +1 mm MLC margin. For these plans, the IDL ranges from 74% to 86%. GI and mGI also show a V-shaped behavior with respect to the HI index, with minimum values at 1 mm for all metrics, independent of tumor dimensions and beam energy. Fully inverse-optimized plans reported worse results with respect to the Pareto plans. In conclusion, Pareto fronts provide a rigorous strategy to choose clinically optimal plans in SBRT treatments. We show that a 1-mm MLC block margin provides the best results with regard to healthy liver tissue irradiation and steepness of dose fallout. Copyright © 2017 American Association of Medical Dosimetrists
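
    The Pareto-front bookkeeping behind this strategy reduces to keeping only non-dominated plans when trading off target coverage (to be maximized) against healthy-liver dose (to be minimized). The sketch below shows that filtering step on made-up plan tuples; the numbers are not the study's data.

```python
# Candidate plans: (MLC margin in mm, PTV coverage fraction, liver Dmean in Gy).
# Values are illustrative placeholders only.
plans = [
    (-1, 0.93, 3.9), (0, 0.95, 4.3), (1, 0.97, 4.6),
    (2, 0.97, 5.2), (3, 0.98, 6.0), (4, 0.98, 6.9),
]

def dominates(a, b):
    """Plan a dominates b if it is no worse on both criteria and better on one."""
    return (a[1] >= b[1] and a[2] <= b[2]) and (a[1] > b[1] or a[2] < b[2])

pareto = [p for p in plans if not any(dominates(q, p) for q in plans if q is not p)]
for margin, coverage, liver in pareto:
    print(f"margin {margin:+d} mm: coverage {coverage:.2f}, liver Dmean {liver:.1f} Gy")
```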

  9. Matthew and marginality

    Directory of Open Access Journals (Sweden)

    Denis C. Duling

    1995-12-01

    This article explores marginality theory as it was first proposed in the social sciences, where it is related to persons caught between two competing cultures (Park; Stonequist), and then as it was developed in sociology in relation to the poor (Germani) and in anthropology in relation to involuntary marginality and voluntary marginality (Victor Turner). It then examines a 'normative scheme' in antiquity that creates involuntary marginality at the macrosocial level, namely Lenski's social stratification model in an agrarian society, and indicates how Matthean language might fit with a sample inventory of socioreligious roles. Next, it examines some 'normative schemes' in antiquity for voluntary marginality at the microsocial level, namely groups, and examines how the Matthean gospel would fit based on indications of factions and leaders. The article shows that the author of the Gospel of Matthew has an ideology of 'voluntary marginality', but his gospel includes some hope for 'involuntary marginals' in the real world, though it is somewhat tempered. It also suggests that the writer of the Gospel is a 'marginal man', especially in the sense defined by the early theorists (Park; Stonequist).

  10. An Ensemble of Classifiers based Approach for Prediction of Alzheimer's Disease using fMRI Images based on Fusion of Volumetric, Textural and Hemodynamic Features

    Directory of Open Access Journals (Sweden)

    MALIK, F.

    2018-02-01

    Alzheimer's is a neurodegenerative disease caused by the destruction and death of brain neurons, resulting in memory loss, impaired thinking ability, and certain behavioral changes. Alzheimer's disease is a major cause of dementia and eventually death all around the world. Early diagnosis of the disease is crucial, as it can help the victims maintain their level of independence for a comparatively longer time and live the best life possible. For early detection of Alzheimer's disease, we propose a novel approach based on the fusion of multiple types of features, including hemodynamic, volumetric and textural features of the brain. Our approach uses non-invasive fMRI with an ensemble of classifiers for the classification of normal controls and Alzheimer's patients. For performance evaluation, ten-fold cross-validation is used. Individual feature sets and the fusion of features have been investigated with ensemble classifiers for successful classification of Alzheimer's patients from normal controls. It is observed that the fusion of features improved accuracy, specificity and sensitivity.
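
    The overall scheme (concatenating the volumetric, textural and hemodynamic feature blocks and feeding them to an ensemble of classifiers under ten-fold cross-validation) can be sketched as below. The random feature blocks and the choice of member classifiers are assumptions made for illustration, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature blocks standing in for volumetric, textural and
# hemodynamic fMRI features of patients and controls.
rng = np.random.default_rng(0)
n = 80
volumetric = rng.normal(size=(n, 10))
textural = rng.normal(size=(n, 20))
hemodynamic = rng.normal(size=(n, 15))
X = np.hstack([volumetric, textural, hemodynamic])       # feature fusion
y = rng.integers(0, 2, size=n)                            # 1 = Alzheimer, 0 = control

ensemble = VotingClassifier(
    estimators=[("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)))],
    voting="soft")
print("10-fold accuracy:", cross_val_score(ensemble, X, y, cv=10).mean())
```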

  11. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    Directory of Open Access Journals (Sweden)

    Cuihong Wen

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs).

  12. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    Science.gov (United States)

    Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong

    2016-01-01

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs).
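
    For reference, the binary LDM optimization that each node of the DAG reuses is usually stated as below, with the per-sample margins, their mean and their variance entering the objective. Treat the exact regularization form as a sketch of the standard LDM formulation rather than a quotation from this paper.

```latex
% Binary LDM objective (sketch): maximize the margin mean, minimize the
% margin variance, plus the usual hinge-slack term.
\[
\gamma_i = y_i\,\mathbf{w}^{\top}\phi(\mathbf{x}_i), \qquad
\bar{\gamma} = \frac{1}{m}\sum_{i=1}^{m}\gamma_i, \qquad
\hat{\gamma} = \frac{1}{m}\sum_{i=1}^{m}\bigl(\gamma_i-\bar{\gamma}\bigr)^{2}
\]
\[
\min_{\mathbf{w},\,\boldsymbol{\xi}\ge 0}\;
\frac{1}{2}\lVert\mathbf{w}\rVert^{2}
+\lambda_{1}\,\hat{\gamma}
-\lambda_{2}\,\bar{\gamma}
+C\sum_{i=1}^{m}\xi_{i}
\qquad\text{s.t.}\quad
y_i\,\mathbf{w}^{\top}\phi(\mathbf{x}_i)\ge 1-\xi_i .
\]
```

    The DAG arrangement then trains one such binary machine per class pair and routes a test sample through K-1 pairwise decisions of the graph to obtain a multi-class label.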

  13. Marginal Matter

    Science.gov (United States)

    van Hecke, Martin

    2013-03-01

    All around us, things are falling apart. The foam on our cappuccinos appears solid, but gentle stirring irreversibly changes its shape. Skin, a biological fiber network, is firm when you pinch it, but soft under light touch. Sand mimics a solid when we walk on the beach but a liquid when we pour it out of our shoes. Crucially, a marginal point separates the rigid or jammed state from the mechanical vacuum (freely flowing) state - at their marginal points, soft materials are neither solid nor liquid. Here I will show how the marginal point gives birth to a third sector of soft matter physics: intrinsically nonlinear mechanics. I will illustrate this with shock waves in weakly compressed granular media, the nonlinear rheology of foams, and the nonlinear mechanics of weakly connected elastic networks.

  14. The Effect of Water or Wax-based Binders on the Chemical and Morphological Characteristics of the Margin Ceramic-Framework Interface.

    Science.gov (United States)

    Güler, Umut; de Queiroz, José Renato Cavalcanti; de Oliveira, Luiz Fernando Cappa; Canay, Senay; Ozcan, Mutlu

    2015-09-01

    This study evaluated the effect of binder choice in mixing ceramic powder on the chemical and morphological features of the margin ceramic-framework interface. Titanium and zirconia frameworks (15 x 5 x 0.5 mm3) were veneered with margin ceramics prepared with two different binders, namely a) water/conventional or b) wax-based. For each zirconia framework material, four different margin ceramics were used: a- Creation Zi (Creation Willi Geller International); b- GC Initial Zr (GC America); c- Triceram (Dentaurum); and d- IPS emax (Ivoclar Vivadent). For the titanium framework, three different margin ceramics were used: a- Creation Ti (Creation Willi Geller International); b- Triceram (Dentaurum); and c- VITA Titaniumkeramik (Vita Zahnfabrik). The chemical composition of the framework-margin ceramic interface was analyzed using Energy Dispersive X-ray Spectroscopy (EDS), and the porosity level was quantified within the margin ceramic using an image program (ImageJ) from four random areas (100 x 100 pixels) on each SEM image. EDS analysis showed the presence of carbon at the margin ceramic-framework interface in the groups where the wax-based binder technique was used, with the concentration being highest for the IPS emax ZirCAD group. While the IPS system (IPS ZirCAD and IPS Emax) presented a higher porosity concentration with the wax binder, in the other groups the wax-based binder reduced the porosity of the margin ceramic, except for the Titanium - Triceram combination.

  15. Margination of Fluorescent Polylactic Acid–Polyaspartamide based Nanoparticles in Microcapillaries In Vitro: the Effect of Hematocrit and Pressure

    Directory of Open Access Journals (Sweden)

    Emanuela Fabiola Craparo

    2017-10-01

    The last decade has seen the emergence of vascular-targeted drug delivery systems as a promising approach for the treatment of many diseases, such as cardiovascular diseases and cancer. In this field, one of the major challenges is carrier margination propensity (i.e., particle migration from blood flow to vessel walls); indeed, binding of these particles to targeted cells and tissues is only possible if there is direct carrier–wall interaction. Here, a microfluidic system mimicking the hydrodynamic conditions of human microcirculation in vitro is used to investigate the effect of red blood cells (RBCs) on carrier margination in relation to RBC concentration (hematocrit) and pressure drop. As model drug carriers, fluorescent polymeric nanoparticles (FNPs) were chosen, which were obtained by using as starting material a pegylated polylactic acid–polyaspartamide copolymer. The latter was synthesized by derivatization of α,β-poly(N-2-hydroxyethyl-d,l-aspartamide) (PHEA) with Rhodamine (RhB), polylactic acid (PLA) and then poly(ethylene glycol) (PEG) chains. It was found that the carrier concentration near the wall increases with increasing pressure drop, independently of RBC concentration, and that the tendency for FNP margination decreases with increasing hematocrit. This work highlights the importance of taking into account RBC–drug carrier interactions and physiological conditions in microcirculation when planning a drug delivery strategy based on systemically administered carriers.

  16. Portable optical fiber probe-based spectroscopic scanner for rapid cancer diagnosis: a new tool for intraoperative margin assessment.

    Directory of Open Access Journals (Sweden)

    Niyom Lue

    There continues to be a significant clinical need for rapid and reliable intraoperative margin assessment during cancer surgery. Here we describe a portable, quantitative, optical fiber probe-based, spectroscopic tissue scanner designed for intraoperative diagnostic imaging of surgical margins, which we tested in a proof of concept study in human tissue for breast cancer diagnosis. The tissue scanner combines both diffuse reflectance spectroscopy (DRS) and intrinsic fluorescence spectroscopy (IFS), and has hyperspectral imaging capability, acquiring full DRS and IFS spectra for each scanned image pixel. Modeling of the DRS and IFS spectra yields quantitative parameters that reflect the metabolic, biochemical and morphological state of tissue, which are translated into disease diagnosis. The tissue scanner has high spatial resolution (0.25 mm) over a wide field of view (10 cm × 10 cm), and both high spectral resolution (2 nm) and high spectral contrast, readily distinguishing tissues with widely varying optical properties (bone, skeletal muscle, fat and connective tissue). Tissue-simulating phantom experiments confirm that the tissue scanner can quantitatively measure spectral parameters, such as hemoglobin concentration, in a physiologically relevant range with a high degree of accuracy (<5% error). Finally, studies using human breast tissues showed that the tissue scanner can detect small foci of breast cancer in a background of normal breast tissue. This tissue scanner is simpler in design, images a larger field of view at higher resolution and provides a more physically meaningful tissue diagnosis than other spectroscopic imaging systems currently reported in the literature. We believe this spectroscopic tissue scanner can provide real-time, comprehensive diagnostic imaging of surgical margins in excised tissues, overcoming the sampling limitation in current histopathology margin assessment. As such it is a significant step in the development of a

  17. Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier.

    Directory of Open Access Journals (Sweden)

    Nogol Memari

    The structure and appearance of the blood vessel network in retinal fundus images is an essential part of diagnosing various problems associated with the eyes, such as diabetes and hypertension. In this paper, an automatic retinal vessel segmentation method utilizing matched filter techniques coupled with an AdaBoost classifier is proposed. The fundus image is enhanced using morphological operations, the contrast is increased using the contrast limited adaptive histogram equalization (CLAHE) method and the inhomogeneity is corrected using the Retinex approach. Then, the blood vessels are enhanced using a combination of B-COSFIRE and Frangi matched filters. From this preprocessed image, different statistical features are computed on a pixel-wise basis and used in an AdaBoost classifier to extract the blood vessel network inside the image. Finally, the segmented images are postprocessed to remove the misclassified pixels and regions. The proposed method was validated using the publicly accessible Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE) and Child Heart and Health Study in England (CHASE_DB1) datasets commonly used for determining the accuracy of retinal vessel segmentation methods. The accuracy of the proposed segmentation method was comparable to other state of the art methods while being very close to the manual segmentation provided by the second human observer with an average accuracy of 0.972, 0.951 and 0.948 in DRIVE, STARE and CHASE_DB1 datasets, respectively.

  18. Detection of mitotic nuclei in breast histopathology images using localized ACM and Random Kitchen Sink based classifier.

    Science.gov (United States)

    Beevi, K Sabeena; Nair, Madhu S; Bindu, G R

    2016-08-01

    The exact measure of mitotic nuclei is a crucial parameter in breast cancer grading and prognosis. This can be achieved by improving the mitotic detection accuracy by careful design of segmentation and classification techniques. In this paper, segmentation of nuclei from breast histopathology images are carried out by Localized Active Contour Model (LACM) utilizing bio-inspired optimization techniques in the detection stage, in order to handle diffused intensities present along object boundaries. Further, the application of a new optimal machine learning algorithm capable of classifying strong non-linear data such as Random Kitchen Sink (RKS), shows improved classification performance. The proposed method has been tested on Mitosis detection in breast cancer histological images (MITOS) dataset provided for MITOS-ATYPIA CONTEST 2014. The proposed framework achieved 95% recall, 98% precision and 96% F-score.

  19. Customized Computed Tomography-Based Boost Volumes in Breast-Conserving Therapy: Use of Three-Dimensional Histologic Information for Clinical Target Volume Margins

    International Nuclear Information System (INIS)

    Hanbeukers, Bianca; Borger, Jacques; Ende, Piet van den; Ent, Fred van der; Houben, Ruud; Jager, Jos; Keymeulen, Kristien; Murrer, Lars; Sastrowijoto, Suprapto; Vijver, Koen van de; Boersma, Liesbeth

    2009-01-01

    Purpose: To determine the difference in size between computed tomography (CT)-based irradiated boost volumes and simulator-based irradiated volumes in patients treated with breast-conserving therapy and to analyze whether the use of anisotropic three-dimensional clinical target volume (CTV) margins using the histologically determined free resection margins allows for a significant reduction of the CT-based boost volumes. Patients and Methods: The CT data from 49 patients were used to delineate a planning target volume (PTV) with isotropic CTV margins and to delineate a PTVsim that mimicked the PTV as delineated in the era of conventional simulation. For 17 patients, a PTV with anisotropic CTV margins was defined by applying customized three-dimensional CTV margins, according to the free excision margins in six directions. Boost treatment plans consisted of conformal portals for the CT-based PTVs and rectangular fields for the PTVsim. Results: The irradiated volume (volume receiving ≥95% of the prescribed dose [V95]) for the PTV with isotropic CTV margins was 1.6 times greater than that for the PTVsim: 228 cm³ vs. 147 cm³. For the PTV with anisotropic CTV margins, the V95 was similar to the V95 for the PTVsim (190 cm³ vs. 162 cm³; p = NS). The main determinant for the irradiated volume was the size of the excision cavity (p < .001), which was mainly related to the interval between surgery and the planning CT scan (p = .029). Conclusion: CT-based PTVs with isotropic margins for the CTV yield much greater irradiated volumes than fluoroscopically based PTVs. Applying individualized anisotropic CTV margins allowed for a significant reduction of the irradiated boost volume.

  20. On-line detection of apnea/hypopnea events using SpO2 signal: a rule-based approach employing binary classifier models.

    Science.gov (United States)

    Koley, Bijoy Laxmi; Dey, Debangshu

    2014-01-01

    This paper presents an online method for automatic detection of apnea/hypopnea events with the help of the oxygen saturation (SpO2) signal, measured at the fingertip by a Bluetooth nocturnal pulse oximeter. Event detection is performed by identifying abnormal data segments from the recorded SpO2 signal, employing a binary classifier model based on a support vector machine (SVM). Thereafter the abnormal segment is further analyzed to detect different states within the segment, i.e., steady, desaturation, and resaturation, with the help of another SVM-based binary ensemble classifier model. Finally, a heuristically obtained rule-based system is used to identify the apnea/hypopnea events from the time-sequenced decisions of these classifier models. In the developmental phase, a set of 34 time-domain features was extracted from the segmented SpO2 signal using an overlapped windowing technique. Later, an optimal set of features was selected on the basis of a recursive feature elimination technique. A total of 34 subjects were included in the study. The results show average event detection accuracies of 96.7% and 93.8% for the offline and the online tests, respectively. The proposed system provides direct estimation of the apnea/hypopnea index with the help of a relatively inexpensive and widely available pulse oximeter. Moreover, the system can be monitored and accessed by physicians through LAN/WAN/Internet and can be extended to deploy in Bluetooth-enabled mobile phones.
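
    The segment-wise SVM stage above can be sketched as follows. This is a minimal illustration with scikit-learn: the window length, the five toy features and the random stand-in data are assumptions, not the paper's 34-feature set, and the rule-based post-processing is omitted.

```python
# Hedged sketch: overlapped-window features from an SpO2 trace, RFE feature selection, SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

def window_features(spo2, fs=1, win_s=30, step_s=10):
    """Slide an overlapped window over an SpO2 trace and compute a few toy features."""
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(spo2) - win + 1, step):
        seg = spo2[start:start + win]
        feats.append([seg.mean(), seg.std(), seg.min(),
                      np.percentile(seg, 10), np.diff(seg).min()])
    return np.asarray(feats)

rng = np.random.RandomState(1)
toy_trace = 96 + 0.5 * rng.randn(600)            # flat, noisy stand-in for an SpO2 recording (%)
print(window_features(toy_trace).shape)          # (n_segments, n_features)

# Random stand-ins for labelled segments: 1 = abnormal (apnea/hypopnea), 0 = normal.
X, y = rng.randn(300, 5), rng.randint(0, 2, 300)
selector = RFE(SVC(kernel="linear"), n_features_to_select=3)   # recursive feature elimination
clf = SVC(kernel="rbf").fit(selector.fit_transform(X, y), y)
```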

  1. The effect of gingival wall location on the marginal seal of class ii restorations prepared with a flowable bulk-fill resin-based composite.

    Science.gov (United States)

    Segal, P; Candotto, V; Ben-Amar, A; Eger, M; Matalon, S; Lauritano, D; Ormianer, Z

    2018-01-01

    SureFil SDR is a flowable resin-based composite that allows a single incremental bulk placement. The marginal seal of SureFil SDR at the gingival margins of class II restorations located apical to the cemento-enamel junction (CEJ) has not been adequately evaluated compared to those located occlusal to the CEJ. Forty class II cavities were prepared in human molars. The gingival margins of 20 preparations were located 0.5 mm occlusal to the CEJ, and the other 20 preparations were located 0.5 mm apical to the CEJ. The cavity surfaces were bonded with XenoV dental adhesive and filled with SDR in one bulk increment up to 4 mm, after which they were covered with CeramX. The teeth were subjected to thermo- and load-cycling, and their gingival margins were exposed to 0.5% basic fuchsin solution. The specimens were sectioned mesio-distally and scored for microleakage. A Wilcoxon test for pairwise comparison was performed to determine significance. Dye penetration was observed in 30% of the 20 restorations with cavo-surface margins located occlusal to the CEJ and in 55% of the 20 restorations with cavo-surface margins located apical to the CEJ. The bulk-fill flowable resin base SureFil SDR with XenoV dental adhesive provided a better marginal seal in class II restorations with gingival margins above the CEJ compared to restorations with gingival margins below the CEJ. SDR should not be recommended for class II cavity preparations with gingival margins located below the CEJ.

  2. Composite Classifiers for Automatic Target Recognition

    National Research Council Canada - National Science Library

    Wang, Lin-Cheng

    1998-01-01

    ...) using forward-looking infrared (FLIR) imagery. Two existing classifiers, one based on learning vector quantization and the other on modular neural networks, are used as the building blocks for our composite classifiers...

  3. Experimental validation of a structural damage detection method based on marginal Hilbert spectrum

    Science.gov (United States)

    Banerji, Srishti; Roy, Timir B.; Sabamehr, Ardalan; Bagchi, Ashutosh

    2017-04-01

    Structural Health Monitoring (SHM) using dynamic characteristics of structures is crucial for early damage detection. Damage detection can be performed by capturing and assessing structural responses. Instrumented structures are monitored by analyzing the responses recorded by deployed sensors in the form of signals. Signal processing is an important tool for the processing of the collected data to diagnose anomalies in structural behavior. The vibration signature of the structure varies with damage. In order to attain effective damage detection, preservation of non-linear and non-stationary features of real structural responses is important. Decomposition of the signals into Intrinsic Mode Functions (IMF) by Empirical Mode Decomposition (EMD) and application of Hilbert-Huang Transform (HHT) addresses the time-varying instantaneous properties of the structural response. The energy distribution among different vibration modes of the intact and damaged structure depicted by Marginal Hilbert Spectrum (MHS) detects location and severity of the damage. The present work investigates damage detection analytically and experimentally by employing MHS. The testing of this methodology for different damage scenarios of a frame structure resulted in its accurate damage identification. The sensitivity of Hilbert Spectral Analysis (HSA) is assessed with varying frequencies and damage locations by means of calculating Damage Indices (DI) from the Hilbert spectrum curves of the undamaged and damaged structures.

  4. A case study of a precision fertilizer application task generation for wheat based on classified hyperspectral data from UAV combined with farm history data

    Science.gov (United States)

    Kaivosoja, Jere; Pesonen, Liisa; Kleemola, Jouko; Pölönen, Ilkka; Salo, Heikki; Honkavaara, Eija; Saari, Heikki; Mäkynen, Jussi; Rajala, Ari

    2013-10-01

    Different remote sensing methods for detecting variations in agricultural fields have been studied over the last two decades. There are already existing systems for planning and applying e.g. nitrogen fertilizers to cereal crop fields. However, there are disadvantages such as high costs, adaptability, reliability, resolution aspects and final product dissemination. With unmanned aerial vehicle (UAV) based airborne methods, data collection can be performed cost-efficiently with the desired spatial and temporal resolutions, below clouds and under diverse weather conditions. A new Fabry-Perot interferometer based hyperspectral imaging technology implemented in a UAV has been introduced. In this research, we studied the possibilities of exploiting classified raster maps from hyperspectral data to produce a work task for a precision fertilizer application. The UAV flight campaign was performed in a wheat test field in Finland in the summer of 2012. Based on the campaign, we have classified raster maps estimating the biomass and nitrogen contents at approximately stage 34 on the Zadoks scale. We combined the classified maps with farm history data such as previous yield maps. Then we generalized the combined results and transformed them into a vectorized zonal task map suitable for farm machinery. We present the selected weights for each dataset in the processing chain and the resultant variable rate application (VRA) task. The additional fertilization according to the generated task was shown to be beneficial for the amount of yield. However, our study indicates that there are still many uncertainties within the process chain.

  5. A High-Resolution Tile-Based Approach for Classifying Biological Regions in Whole-Slide Histopathological Images.

    Science.gov (United States)

    Hoffman, R A; Kothari, S; Phan, J H; Wang, M D

    Computational analysis of histopathological whole slide images (WSIs) has emerged as a potential means for improving cancer diagnosis and prognosis. However, an open issue relating to the automated processing of WSIs is the identification of biological regions such as tumor, stroma, and necrotic tissue on the slide. We develop a method for classifying WSI portions (512×512-pixel tiles) into biological regions by (1) extracting a set of 461 image features from each WSI tile, (2) optimizing tile-level prediction models using nested cross-validation on a small (600-tile) manually annotated tile-level training set, and (3) validating the models against a much larger (1.7×10⁶-tile) data set for which ground truth was available on the whole-slide level. We calculated the predicted prevalence of each tissue region and compared this prevalence to the ground truth prevalence for each image in an independent validation set. Results show significant correlation between the predicted (using automated system) and reported biological region prevalences with p < 0.001 for eight of nine cases considered.

  6. Development of the system based code. v. 5. Method of margin exchange. pt. 2. Determination of quality assurance index based on a 'Vector Method'

    International Nuclear Information System (INIS)

    Asayama, Tai

    2003-03-01

    For the commercialization of fast breeder reactors, the 'System Based Code', a completely new scheme of a code on structural integrity, is being developed. One of the distinguishing features of the System Based Code is that it is able to determine a reasonable total margin on a structure or system by allowing the exchange of margins between various technical items. Detailed estimation of the failure probability of a given combination of technical items and its comparison with a target value is one way to achieve this. However, simpler and easier methods that allow margin exchange without detailed calculation of failure probability are desirable in design. The authors have developed simplified methods such as a 'design factor method' from this viewpoint. This report describes a 'Vector Method', which has been newly developed. The following points are reported: 1) The Vector Method allows margin exchange evaluation on an 'equi-quality assurance plane' using vector calculation. Evaluation is easy and sufficient accuracy is achieved. The equi-quality assurance plane is obtained by a projection of an 'equi-failure probability surface' in an n-dimensional space, which is calculated beforehand for typical combinations of design variables. 2) The Vector Method is considered to give the 'Quality Assurance Index Method' a probabilistic interpretation. 3) An algebraic method was proposed for the calculation of failure probabilities, which is necessary to obtain an equi-failure probability surface. This method calculates failure probabilities without using numerical methods such as Monte Carlo simulation or numerical integration. Under limited conditions, this method is quite effective compared to numerical methods. 4) An illustration of the procedure of margin exchange evaluation is given. It may be possible to use this method to optimize ISI plans, even if it is not fully implemented in the System Based Code. (author)

  7. On the decision threshold of eigenvalue ratio detector based on moments of joint and marginal distributions of extreme eigenvalues

    KAUST Repository

    Shakir, Muhammad Zeeshan

    2013-03-01

    The Eigenvalue Ratio (ER) detector based on the two extreme eigenvalues of the received signal covariance matrix is currently one of the most effective solutions for spectrum sensing. However, the analytical results of such a scheme often depend on asymptotic assumptions, since the distribution of the ratio of the two extreme eigenvalues is exceptionally complex to compute. In this paper, a non-asymptotic spectrum sensing approach for the ER detector is introduced to approximate the marginal and joint distributions of the two extreme eigenvalues. The two extreme eigenvalues are considered as dependent Gaussian random variables such that their joint probability density function (PDF) is approximated by a bivariate Gaussian distribution function for any number of cooperating secondary users and received samples. The PDF approximation approach is based on the moment matching method, where we calculate the exact analytical moments of the joint and marginal distributions of the two extreme eigenvalues. The decision threshold is calculated by exploiting the statistical mean and the variance of each of the two extreme eigenvalues and the correlation coefficient between them. The performance analysis of our newly proposed approximation approach is compared with the already published asymptotic Tracy-Widom approximation approach. It has been shown that our results are in perfect agreement with the simulation results for any number of secondary users and received samples. © 2002-2012 IEEE.
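
    For readers unfamiliar with the detector itself, the decision statistic is simply the ratio of the largest to the smallest eigenvalue of the sample covariance matrix, compared against a threshold. The sketch below shows that mechanic only; the moment-matched, bivariate-Gaussian threshold derived in the paper is replaced by a plain user-supplied value, and the signals are synthetic.

```python
# Hedged sketch of an eigenvalue-ratio (ER) spectrum-sensing decision.
import numpy as np

def er_detect(received, threshold):
    """received: (K, N) matrix of samples from K cooperating secondary users."""
    K, N = received.shape
    R = received @ received.conj().T / N          # sample covariance matrix
    eigvals = np.linalg.eigvalsh(R)               # eigenvalues in ascending order
    ratio = eigvals[-1] / eigvals[0]              # largest / smallest eigenvalue
    return ratio > threshold, ratio

rng = np.random.RandomState(0)
noise_only = rng.randn(4, 500)
occupied = noise_only + 0.5 * np.outer(np.ones(4), np.sin(2 * np.pi * 0.05 * np.arange(500)))
print(er_detect(noise_only, threshold=2.0))   # expected: below threshold
print(er_detect(occupied, threshold=2.0))     # expected: above threshold
```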

  8. Clustering based gene expression feature selection method: A computational approach to enrich the classifier efficiency of differentially expressed genes

    KAUST Repository

    Abusamra, Heba

    2016-07-20

    The high-dimension, low-sample-size nature of gene expression data makes the classification task challenging, so feature (gene) selection becomes an apparent need. Selecting meaningful and relevant genes for a classifier not only decreases the computational time and cost, but also improves the classification performance. However, most existing feature selection approaches suffer from several problems, such as lack of robustness and validation issues. Here, we present a new feature selection technique that takes advantage of clustering both samples and genes. Materials and methods: We used a leukemia gene expression dataset [1]. The effectiveness of the selected features was evaluated by four different classification methods: support vector machines, k-nearest neighbor, random forest, and linear discriminant analysis. The method evaluates the importance and relevance of each gene cluster by summing the expression levels of the genes belonging to that cluster. A gene cluster is considered important if it satisfies conditions depending on thresholds and percentages; otherwise it is eliminated. Results: Initial analysis identified 7120 differentially expressed genes of leukemia (Fig. 15a); after applying our feature selection methodology we ended up with 1117 specific genes discriminating the two classes of leukemia (Fig. 15b). Further applying the same method with a more stringent higher positive and lower negative threshold condition, the number was reduced to 58 genes, which were tested to evaluate the effectiveness of the method (Fig. 15c). The results of the four classification methods are summarized in Table 11. Conclusions: The feature selection method gave good results with minimum classification error. Our heat-map result shows a distinct pattern of refined genes discriminating between the two classes of leukemia.
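
    A rough sketch of the clustering-driven selection idea follows: genes are clustered, each cluster is scored by its summed expression, low-scoring clusters are dropped, and a downstream classifier is trained on the surviving genes. The clustering algorithm, cluster count, keep-fraction and the random stand-in expression matrix are all assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch: cluster genes, keep the highest-scoring clusters, train a classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def select_genes_by_cluster(expr, n_clusters=20, keep_fraction=0.25):
    """expr: (n_samples, n_genes) expression matrix. Returns indices of kept genes."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(expr.T)
    cluster_score = np.array([expr[:, labels == c].sum() for c in range(n_clusters)])
    keep = np.argsort(cluster_score)[-int(np.ceil(keep_fraction * n_clusters)):]
    return np.where(np.isin(labels, keep))[0]

rng = np.random.RandomState(0)
expr = rng.lognormal(size=(72, 2000))            # toy stand-in for a leukemia expression matrix
classes = rng.randint(0, 2, 72)                  # toy class labels (e.g. ALL vs. AML)
kept = select_genes_by_cluster(expr)
clf = SVC().fit(expr[:, kept], classes)          # one of several possible downstream classifiers
print("genes kept:", kept.size)
```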

  9. Classification and localization of early-stage Alzheimer's disease in magnetic resonance images using a patch-based classifier ensemble

    International Nuclear Information System (INIS)

    Simoes, Rita; Slump, Cornelis H.; Cappellen van Walsum, Anne-Marie van

    2014-01-01

    Classification methods have been proposed to detect Alzheimer's disease (AD) using magnetic resonance images. Most rely on features such as the shape/volume of brain structures that need to be defined a priori. In this work, we propose a method that does not require either the segmentation of specific brain regions or the nonlinear alignment to a template. Besides classification, we also analyze which brain regions are discriminative between a group of normal controls and a group of AD patients. We perform 3D texture analysis using Local Binary Patterns computed at local image patches in the whole brain, combined in a classifier ensemble. We evaluate our method in a publicly available database including very mild-to-mild AD subjects and healthy elderly controls. For the subject cohort including only mild AD subjects, the best results are obtained using a combination of large (30 × 30 × 30 and 40 × 40 × 40 voxels) patches. A spatial analysis of the best performing patches shows that these are located in the medial-temporal lobe and in the periventricular regions. When very mild AD subjects are included in the dataset, the small (10 × 10 × 10 voxels) patches perform best, with the most discriminative ones being located near the left hippocampus. We show that our method is able not only to perform accurate classification, but also to localize discriminative brain regions, which are in accordance with the medical literature. This is achieved without the need to segment specific brain structures and without performing nonlinear registration to a template, indicating that the method may be suitable for a clinical implementation that can help to diagnose AD at an earlier stage.
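
    The patch-based ensemble idea can be sketched in a simplified 2D form: an LBP histogram is computed per patch location, one classifier is trained per location, and subject-level decisions are taken by majority vote. The paper's 3D texture descriptors, patch sizes and real MR volumes are replaced here by 2D toy data, so this only illustrates the structure, not the published pipeline.

```python
# Hedged 2D sketch of a patch-wise LBP + classifier-ensemble scheme.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(patch, P=8, R=1):
    """Histogram of uniform LBP codes inside one patch."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def patch_grid(image, size=30):
    """Yield (top-left position, patch) pairs on a regular grid."""
    for r in range(0, image.shape[0] - size + 1, size):
        for c in range(0, image.shape[1] - size + 1, size):
            yield (r, c), image[r:r + size, c:c + size]

rng = np.random.RandomState(0)
images = (255 * rng.rand(30, 120, 120)).astype(np.uint8)   # toy stand-ins for brain image slices
labels = rng.randint(0, 2, 30)                             # 1 = AD, 0 = control (toy labels)

patch_maps = [dict(patch_grid(img)) for img in images]
positions = list(patch_maps[0].keys())
ensemble = {pos: SVC().fit(np.array([lbp_histogram(pm[pos]) for pm in patch_maps]), labels)
            for pos in positions}

# Subject-level decision: majority vote over the per-patch classifiers.
subject_patches = dict(patch_grid(images[0]))
votes = [ensemble[pos].predict(lbp_histogram(subject_patches[pos])[None, :])[0] for pos in positions]
print("predicted class:", int(np.mean(votes) > 0.5))
```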

  10. A dynamic water accounting framework based on marginal resource opportunity cost

    Science.gov (United States)

    Tilmant, A.; Marques, G.; Mohamed, Y.

    2015-03-01

    Many river basins throughout the world are increasingly under pressure as water demands keep rising due to population growth, industrialization, urbanization and rising living standards. In the past, the typical answer to meet those demands focused on the supply side and involved the construction of hydraulic infrastructures to capture more water from surface water bodies and from aquifers. As river basins have become more and more developed, downstream water users and ecosystems have become increasingly dependent on the management actions taken by upstream users. The increased interconnectedness between water users, aquatic ecosystems and the built environment is further compounded by climate change and its impact on the water cycle. Those pressures mean that it has become increasingly important to measure and account for changes in water fluxes and their corresponding economic value as they progress throughout the river system. Such basin water accounting should provide policy makers with important information regarding the relative contribution of each water user, infrastructure and management decision to the overall economic value of the river basin. This paper presents a dynamic water accounting approach whereby the entire river basin is considered as a value chain with multiple services including production and storage. Water users and reservoir operators are considered as economic agents who can exchange water with their hydraulic neighbors at a price corresponding to the marginal value of water. Effective water accounting is made possible by keeping track of all water fluxes and their corresponding hypothetical transactions using the results of a hydro-economic model. The proposed approach is illustrated with the Eastern Nile River basin in Africa.

  11. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which could provide the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.
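
    The decision rule stated above (assign the test set to the class offering the largest margin) can be written down in toy form. In the sketch below the affine-hull modeling and the EM/APG representation learning are omitted; the set-to-set distance is crudely taken between sample means, purely to make the margin definition concrete.

```python
# Hedged sketch: classify an image set by the class giving the largest margin.
import numpy as np

def set_distance(A, B):
    """Distance between two image sets, here simply between their sample means."""
    return np.linalg.norm(A.mean(axis=0) - B.mean(axis=0))

def classify_image_set(test_set, gallery_sets, gallery_labels):
    best_label, best_margin = None, -np.inf
    for label in set(gallery_labels):
        same = [set_distance(test_set, g) for g, l in zip(gallery_sets, gallery_labels) if l == label]
        diff = [set_distance(test_set, g) for g, l in zip(gallery_sets, gallery_labels) if l != label]
        margin = min(diff) - min(same)    # nearest different-class distance minus nearest same-class distance
        if margin > best_margin:
            best_label, best_margin = label, margin
    return best_label

rng = np.random.RandomState(0)
gallery = [rng.randn(20, 64) + c for c in (0, 0, 3, 3)]   # toy "video" image sets
labels = ["A", "A", "B", "B"]
probe = rng.randn(15, 64) + 3
print(classify_image_set(probe, gallery, labels))          # expected: "B"
```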

  12. Large margin image set representation and classification

    KAUST Repository

    Wang, Jim Jing-Yan; Alzahrani, Majed A.; Gao, Xin

    2014-01-01

    In this paper, we propose a novel image set representation and classification method by maximizing the margin of image sets. The margin of an image set is defined as the difference between the distance to its nearest image set from different classes and the distance to its nearest image set of the same class. By modeling the image sets using both their image samples and their affine hull models, and maximizing the margins of the image sets, the image set representation parameter learning problem is formulated as a minimization problem, which is further optimized by an expectation-maximization (EM) strategy with accelerated proximal gradient (APG) optimization in an iterative algorithm. To classify a given test image set, we assign it to the class which could provide the largest margin. Experiments on two applications of video-sequence-based face recognition demonstrate that the proposed method significantly outperforms state-of-the-art image set classification methods in terms of both effectiveness and efficiency.

  13. IAEA safeguards and classified materials

    International Nuclear Information System (INIS)

    Pilat, J.F.; Eccleston, G.W.; Fearey, B.L.; Nicholas, N.J.; Tape, J.W.; Kratzer, M.

    1997-01-01

    The international community in the post-Cold War period has suggested that the International Atomic Energy Agency (IAEA) utilize its expertise in support of the arms control and disarmament process in unprecedented ways. The pledges of the US and Russian presidents to place excess defense materials, some of which are classified, under some type of international inspections raises the prospect of using IAEA safeguards approaches for monitoring classified materials. A traditional safeguards approach, based on nuclear material accountancy, would seem unavoidably to reveal classified information. However, further analysis of the IAEA's safeguards approaches is warranted in order to understand fully the scope and nature of any problems. The issues are complex and difficult, and it is expected that common technical understandings will be essential for their resolution. Accordingly, this paper examines and compares traditional safeguards item accounting of fuel at a nuclear power station (especially spent fuel) with the challenges presented by inspections of classified materials. This analysis is intended to delineate more clearly the problems as well as reveal possible approaches, techniques, and technologies that could allow the adaptation of safeguards to the unprecedented task of inspecting classified materials. It is also hoped that a discussion of these issues can advance ongoing political-technical debates on international inspections of excess classified materials

  14. Quantification and Assessment of Interfraction Setup Errors Based on Cone Beam CT and Determination of Safety Margins for Radiotherapy.

    Directory of Open Access Journals (Sweden)

    Macarena Cubillos Mesías

    Full Text Available To quantify interfraction patient setup errors for radiotherapy based on cone-beam computed tomography and to suggest safety margins accordingly. Positioning vectors of pre-treatment cone-beam computed tomography for different treatment sites were collected (n = 9504). For each patient group the total average and standard deviation were calculated, and the overall mean, systematic and random errors as well as safety margins were determined. The systematic (and random) errors in the superior-inferior, left-right and anterior-posterior directions were: for prostate, 2.5(3.0), 2.6(3.9) and 2.9(3.9) mm; for prostate bed, 1.7(2.0), 2.2(3.6) and 2.6(3.1) mm; for cervix, 2.8(3.4), 2.3(4.6) and 3.2(3.9) mm; for rectum, 1.6(3.1), 2.1(2.9) and 2.5(3.8) mm; for anal, 1.7(3.7), 2.1(5.1) and 2.5(4.8) mm; for head and neck, 1.9(2.3), 1.4(2.0) and 1.7(2.2) mm; for brain, 1.0(1.5), 1.1(1.4) and 1.0(1.1) mm; and for mediastinum, 3.3(4.6), 2.6(3.7) and 3.5(4.0) mm. The CTV-to-PTV margins had the smallest values for brain (3.6, 3.7 and 3.3 mm) and the largest for mediastinum (11.5, 9.1 and 11.6 mm). For pelvic treatments the means (and standard deviations) were 7.3 (1.6), 8.5 (0.8) and 9.6 (0.8) mm. Systematic and random setup errors were smaller than 5 mm. The largest errors were found for organs with higher motion probability. The suggested safety margins were comparable to published values in previous, but often smaller, studies.
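
    For readers who want to reproduce this kind of analysis, the sketch below shows how per-patient shift data are typically reduced to group systematic (Σ) and random (σ) errors and then to a CTV-to-PTV margin. The margin recipe used (2.5Σ + 0.7σ, the van Herk recipe) is a common convention assumed here purely for illustration; the abstract does not state which recipe the authors applied, and the toy cohort is randomly generated.

```python
# Hedged sketch: group systematic/random setup errors and an assumed van Herk margin.
import numpy as np

def setup_error_summary(shifts_per_patient):
    """shifts_per_patient: list of 1D arrays, each holding one patient's daily shifts (mm)."""
    patient_means = np.array([s.mean() for s in shifts_per_patient])
    patient_sds = np.array([s.std(ddof=1) for s in shifts_per_patient])
    overall_mean = patient_means.mean()
    Sigma = patient_means.std(ddof=1)            # group systematic error
    sigma = np.sqrt(np.mean(patient_sds ** 2))   # group random error (RMS of per-patient SDs)
    margin = 2.5 * Sigma + 0.7 * sigma           # assumed van Herk margin recipe
    return overall_mean, Sigma, sigma, margin

rng = np.random.RandomState(0)
cohort = [rng.normal(loc=rng.normal(0, 2.5), scale=3.0, size=25) for _ in range(40)]
print(setup_error_summary(cohort))
```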

  15. Potential improvement of CANDU NPP safety margins by shortening the response time of shutdown systems using FPGA based implementation

    Energy Technology Data Exchange (ETDEWEB)

    Jingke She, E-mail: jshe2@uwo.ca [Department of Electrical and Computer Engineering, University of Western Ontario, London, Ontario N6A 5B9 (Canada); Jin Jiang, E-mail: jjiang@eng.uwo.ca [Department of Electrical and Computer Engineering, University of Western Ontario, London, Ontario N6A 5B9 (Canada)

    2012-03-15

    Highlights: • Quantitative analysis of the safety margin improvement through thermalhydraulic simulation and analysis. • Hardware-in-the-loop simulation of realizing the improvement by an FPGA-based SDS1. • Verification of a potential operating power upgrade without endangering plant safety. - Abstract: The relationship between the peak values of critical reactor variables, such as neutronic power, inside a CANDU reactor and the speed of the response of its shutdown system has been analyzed in the event of a large loss of coolant accident (LOCA). The advantage of shortening the response time of the shutdown action has been demonstrated in terms of the improved safety margin. A field programmable gate array (FPGA) platform has been chosen to implement such a shutdown system. Hardware-in-the-loop (HIL) simulations have been performed to demonstrate the feasibility of this concept. Furthermore, connections between the speed of response of the shutdown system and the nominal operating power level of the reactor have been drawn to support a potential power upgrade for existing power plants.

  16. Identification of species based on DNA barcode using k-mer feature vector and Random forest classifier.

    Science.gov (United States)

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R

    2016-11-05

    DNA barcoding is a molecular diagnostic method that allows automated and accurate identification of species based on a short and standardized fragment of DNA. To this end, an attempt has been made in this study to develop a computational approach for identifying a species by comparing its barcode with the barcode sequences of known species present in the reference library. Each barcode sequence was first mapped onto a numeric feature vector based on k-mer frequencies, and then the Random forest methodology was employed on the transformed dataset for species identification. The proposed approach outperformed similarity-based, tree-based and diagnostic-based approaches and was found to be comparable with existing supervised-learning-based approaches in terms of species identification success rate when compared using real and simulated datasets. Based on the proposed approach, an online web interface, SPIDBAR, has also been developed and made freely available at http://cabgrid.res.in:8080/spidbar/ for species identification by taxonomists. Copyright © 2016 Elsevier B.V. All rights reserved.
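
    The core of this approach, mapping each barcode to a k-mer frequency vector and training a Random forest on a reference library, can be sketched compactly. The sequences, the value of k and the forest size below are illustrative placeholders, not the settings used by SPIDBAR.

```python
# Hedged sketch: k-mer frequency features + Random forest for barcode-based species assignment.
from itertools import product
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def kmer_vector(seq, k=3):
    """Map a DNA barcode to relative k-mer frequencies (4**k dimensions)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        sub = seq[i:i + k]
        if sub in index:
            v[index[sub]] += 1
    return v / max(v.sum(), 1)

# Hypothetical reference library: (barcode sequence, species label) pairs.
library = [("ATGCGTACGTTAG", "species_1"), ("ATGCGTACCTTAG", "species_1"),
           ("TTGACCACGGGTA", "species_2"), ("TTGACCACGAGTA", "species_2")]
X = np.array([kmer_vector(s) for s, _ in library])
y = [label for _, label in library]

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict([kmer_vector("ATGCGTACGTTAC")]))   # query barcode assigned to nearest species
```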

  17. Total and Marginal Cost Analysis for a High School Based Bystander Intervention

    Science.gov (United States)

    Bush, Joshua L.; Bush, Heather M.; Coker, Ann L.; Brancato, Candace J.; Clear, Emily R.; Recktenwald, Eileen A.

    2018-01-01

    Costs of providing the Green Dot bystander-based intervention, shown to be effective in the reduction of sexual violence among Kentucky high school students, were estimated based on data from a large cluster-randomized clinical trial. Rape Crisis Center Educators were trained to provide Green Dot curriculum to students. Implementing Green Dot in…

  18. Hybrid classifiers methods of data, knowledge, and classifier combination

    CERN Document Server

    Wozniak, Michal

    2014-01-01

    This book delivers definite and compact knowledge on how hybridization can help improve the quality of computer classification systems. To help readers clearly grasp hybridization, the book primarily focuses on introducing the different levels of hybridization and illuminating what problems arise when dealing with such projects. In the first instance the data and knowledge incorporated in hybridization are the action points, and then a still-growing area of classifier systems known as combined classifiers is considered. The book comprises the aforementioned state-of-the-art topics and the latest research results of the author and his team from the Department of Systems and Computer Networks, Wroclaw University of Technology, including classifiers based on feature space splitting, one-class classification, imbalanced data, and data stream classification.

  19. Reliability-based evaluation of bridge components for consistent safety margins.

    Science.gov (United States)

    2010-10-01

    The Load and Resistance Factor Design (LRFD) approach is based on the concept of structural reliability. The approach is more rational than former design approaches such as Load Factor Design or Allowable Stress Design. The LRFD Specification fo...

  20. Mobile Game Based Learning: Can it enhance learning of marginalized peer educators?

    OpenAIRE

    Roy, Anupama; Sharples, Mike

    2015-01-01

    This paper describes an investigatory project to pilot an SMS based game to enhance the training of peer educators of MSM (Males having Sex with Males) groups in India. The objective of this research was to increase the efficacy of the MSM peer educators by bridging the gap between the training needs and their real life experiences. An SMS based game was designed using participatory approaches as a learning support, upholding their real life experiences in game form. The game was designed on ...

  1. Computer-aided diagnosis for classifying benign versus malignant thyroid nodules based on ultrasound images: A comparison with radiologist-based assessments

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Yongjun [School of Electrical Engineering, Korea Advanced Institute of Science and Technology, 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Paul, Anjan Kumar [Funzin, Inc., 148 Ankuk-dong, Jongro-gu, Seoul 03060 (Korea, Republic of); Kim, Namkug, E-mail: namkugkim@gmail.com; Baek, Jung Hwan; Choi, Young Jun [Department of Radiology, University of Ulsan College of Medicine, 388-1 Pungnap2-dong, Songpa-gu, Seoul 05505 (Korea, Republic of); Ha, Eun Ju [Department of Radiology, Ajou University School of Medicine, Wonchon-Dong, Yeongtong-Gu, Suwon 16499 (Korea, Republic of); Lee, Kang Dae; Lee, Hyoung Shin [Department of Otolaryngology Head and Neck Surgery, Kosin University College of Medicine, 34 Amnamdong, Seu-Gu, Busan 49267 (Korea, Republic of); Shin, DaeSeock; Kim, Nakyoung [MIDAS Information Technology, Pangyo-ro 228, Bundang-gu, Seongnam-si, Gyeonggi 13487 (Korea, Republic of)

    2016-01-15

    Purpose: To develop a semiautomated computer-aided diagnosis (CAD) system for thyroid cancer using two-dimensional ultrasound images that can be used to yield a second opinion in the clinic to differentiate malignant and benign lesions. Methods: A total of 118 ultrasound images that included axial and longitudinal images from patients with biopsy-confirmed malignant (n = 30) and benign (n = 29) nodules were collected. Thyroid CAD software was developed to extract quantitative features from these images based on thyroid nodule segmentation in which adaptive diffusion flow for active contours was used. Various features, including histogram, intensity differences, elliptical fit, gray-level co-occurrence matrices, and gray-level run-length matrices, were evaluated for each region imaged. Based on these imaging features, a support vector machine (SVM) classifier was used to differentiate benign and malignant nodules. Leave-one-out cross-validation with sequential forward feature selection was performed to evaluate the overall accuracy of this method. Additionally, analyses with contingency tables and receiver operating characteristic (ROC) curves were performed to compare the performance of CAD with visual inspection by expert radiologists based on established gold standards. Results: Most univariate features for this proposed CAD system attained accuracies that ranged from 78.0% to 83.1%. When optimal SVM parameters that were established using a grid search method with features that radiologists use for visual inspection were employed, the authors could attain rates of accuracy that ranged from 72.9% to 84.7%. Using leave-one-out cross-validation results in a multivariate analysis of various features, the highest accuracy achieved using the proposed CAD system was 98.3%, whereas visual inspection by radiologists reached 94.9% accuracy. To obtain the highest accuracies, “axial ratio” and “max probability” in axial images were most frequently included in the
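
    The evaluation loop described above (sequential forward feature selection plus leave-one-out cross-validation around an SVM) can be sketched with scikit-learn as follows. The nodule features are replaced by random placeholders, and the selector and kernel settings are assumptions rather than the study's exact configuration.

```python
# Hedged sketch: forward feature selection + leave-one-out cross-validation for an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.RandomState(0)
X = rng.randn(59, 20)                 # one toy feature vector per nodule
y = rng.randint(0, 2, 59)             # 1 = malignant, 0 = benign (toy labels)

svm = SVC(kernel="rbf", C=1.0, gamma="scale")
sfs = SequentialFeatureSelector(svm, n_features_to_select=5, direction="forward", cv=5)
X_sel = sfs.fit_transform(X, y)
acc = cross_val_score(svm, X_sel, y, cv=LeaveOneOut()).mean()
print("LOO accuracy on selected features:", acc)
```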

  2. CAD system for quantifying emphysema severity based on multi-class classifier using CT image and spirometry information

    International Nuclear Information System (INIS)

    Nimura, Yukitaka; Mori, Kensaku; Kitasaka, Takayuki; Honma, Hirotoshi; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi

    2010-01-01

    Many diagnosis methods based on CT image processing have been proposed for quantifying emphysema. Most of these methods extract lesions as Low-Attenuation Areas (LAA) by simple threshold processing and evaluate their severity by calculating the LAA percentage (LAA%) in the lung. However, pulmonary emphysema is diagnosed not only by the LAA but also by changes in the pulmonary blood vessels and by spirometric measurements. This paper proposes a novel computer-aided detection (CAD) system for quantifying emphysema by combining spirometric measurements and the results of CT image processing. The experimental results revealed that the accuracy rate of the proposed method was 78.3%. This is a 13.1% improvement compared with the method based only on the LAA%. (author)
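
    As background for the LAA% measure that the proposed CAD system builds on, the snippet below computes it in its simplest threshold-based form. The -950 HU threshold and the toy volume are assumptions for illustration; the paper's multi-class classifier combining LAA% with spirometry is not reproduced here.

```python
# Hedged sketch: percentage of low-attenuation-area (LAA%) voxels inside a lung mask.
import numpy as np

def laa_percent(ct_hu, lung_mask, threshold_hu=-950):
    """Fraction (in %) of lung voxels below the low-attenuation threshold."""
    lung_voxels = ct_hu[lung_mask]
    return 100.0 * np.mean(lung_voxels < threshold_hu)

rng = np.random.RandomState(0)
ct = rng.normal(-850, 80, size=(64, 64, 64))      # toy CT volume in Hounsfield units
mask = np.ones_like(ct, dtype=bool)               # toy lung mask covering the whole volume
print(f"LAA%: {laa_percent(ct, mask):.1f}")
```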

  3. A user-friendly SSVEP-based brain-computer interface using a time-domain classifier.

    Science.gov (United States)

    Luo, An; Sullivan, Thomas J

    2010-04-01

    We introduce a user-friendly steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) system. Single-channel EEG is recorded using a low-noise dry electrode. Compared to traditional gel-based multi-sensor EEG systems, a dry sensor proves to be more convenient, comfortable and cost effective. A hardware system was built that displays four LED light panels flashing at different frequencies and synchronizes with EEG acquisition. The visual stimuli have been carefully designed such that potential risk to photosensitive people is minimized. We describe a novel stimulus-locked inter-trace correlation (SLIC) method for SSVEP classification using EEG time-locked to stimulus onsets. We studied how the performance of the algorithm is affected by different selection of parameters. Using the SLIC method, the average light detection rate is 75.8% with very low error rates (an 8.4% false positive rate and a 1.3% misclassification rate). Compared to a traditional frequency-domain-based method, the SLIC method is more robust (resulting in less annoyance to the users) and is also suitable for irregular stimulus patterns.
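
    A stimulus-locked inter-trace correlation style decision can be sketched generically: epochs time-locked to each panel's flash onsets are compared pairwise, and the panel whose epochs correlate most strongly with each other is taken as the attended target. The scoring, epoch length and synthetic data below are assumptions; the published SLIC algorithm's exact parameters and processing are not reproduced.

```python
# Hedged sketch of a stimulus-locked inter-trace correlation style SSVEP decision.
from itertools import combinations
import numpy as np

def slic_score(epochs):
    """epochs: (n_trials, n_samples) EEG segments time-locked to one stimulus."""
    corrs = [np.corrcoef(epochs[i], epochs[j])[0, 1]
             for i, j in combinations(range(len(epochs)), 2)]
    return float(np.mean(corrs))

def detect_target(epochs_per_panel):
    scores = {panel: slic_score(e) for panel, e in epochs_per_panel.items()}
    return max(scores, key=scores.get), scores

rng = np.random.RandomState(0)
template = np.sin(2 * np.pi * 12 * np.linspace(0, 0.5, 128))        # attended 12 Hz flicker response
epochs = {"12Hz": template + 0.5 * rng.randn(20, 128),              # consistent, stimulus-locked traces
          "15Hz": rng.randn(20, 128)}                               # unlocked background activity
print(detect_target(epochs)[0])                                      # expected: "12Hz"
```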

  4. mPLR-Loc: an adaptive decision multi-label classifier based on penalized logistic regression for protein subcellular localization prediction.

    Science.gov (United States)

    Wan, Shibiao; Mak, Man-Wai; Kung, Sun-Yuan

    2015-03-15

    Proteins located in appropriate cellular compartments are of paramount importance to exert their biological functions. Prediction of protein subcellular localization by computational methods is required in the post-genomic era. Recent studies have been focusing on predicting not only single-location proteins but also multi-location proteins. However, most of the existing predictors are far from effective for tackling the challenges of multi-label proteins. This article proposes an efficient multi-label predictor, namely mPLR-Loc, based on penalized logistic regression and adaptive decisions for predicting both single- and multi-location proteins. Specifically, for each query protein, mPLR-Loc exploits the information from the Gene Ontology (GO) database by using its accession number (AC) or the ACs of its homologs obtained via BLAST. The frequencies of GO occurrences are used to construct feature vectors, which are then classified by an adaptive decision-based multi-label penalized logistic regression classifier. Experimental results based on two recent stringent benchmark datasets (virus and plant) show that mPLR-Loc remarkably outperforms existing state-of-the-art multi-label predictors. In addition to being able to rapidly and accurately predict subcellular localization of single- and multi-label proteins, mPLR-Loc can also provide probabilistic confidence scores for the prediction decisions. For readers' convenience, the mPLR-Loc server is available online (http://bioinfo.eie.polyu.edu.hk/mPLRLocServer). Copyright © 2014 Elsevier Inc. All rights reserved.
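
    In the same spirit as mPLR-Loc (though not its actual implementation), a multi-label, penalized logistic-regression predictor with an adaptive decision can be sketched as below. GO-term frequency vectors are replaced by toy counts, the one-vs-rest L2-penalized model stands in for the paper's penalized regression, and the "keep labels close to the top score" rule is an assumed form of the adaptive decision.

```python
# Hedged sketch: multi-label penalized logistic regression with an adaptive label decision.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.RandomState(0)
X = rng.poisson(1.0, size=(120, 300)).astype(float)   # toy GO-term frequency vectors
Y = rng.randint(0, 2, size=(120, 4))                   # 4 subcellular locations, multi-label targets

model = OneVsRestClassifier(LogisticRegression(penalty="l2", C=1.0, max_iter=1000)).fit(X, Y)

def adaptive_decide(probabilities, ratio=0.7):
    """Keep every location whose probability is within `ratio` of the maximum score."""
    top = probabilities.max()
    return (probabilities >= ratio * top).astype(int)

probs = model.predict_proba(X[:1])[0]
print(probs, adaptive_decide(probs))
```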

  5. Mobile Game Based Learning: Can It Enhance Learning of Marginalized Peer Educators?

    Science.gov (United States)

    Roy, Anupama; Sharples, Mike

    2015-01-01

    This paper describes an investigatory project to pilot an SMS based game to enhance the training of peer educators of MSM (Males having Sex with Males) groups in India. The objective of this research was to increase the efficacy of the MSM peer educators by bridging the gap between the training needs and their real life experiences. An SMS based…

  6. Monitoring and classifying animal behavior using ZigBee-based mobile ad hoc wireless sensor networks and artificial neural networks

    DEFF Research Database (Denmark)

    S. Nadimi, Esmaeil; Nyholm Jørgensen, Rasmus; Blanes-Vidal, Victoria

    2012-01-01

    Animal welfare is an issue of great importance in modern food production systems. Because animal behavior provides reliable information about animal health and welfare, recent research has aimed at designing monitoring systems capable of measuring behavioral parameters and transforming them into their corresponding behavioral modes. However, network unreliability and high energy consumption have limited the applicability of those systems. In this study, a 2.4-GHz ZigBee-based mobile ad hoc wireless sensor network (MANET) that is able to overcome those problems is presented. The designed MANET showed high communication reliability, low energy consumption and a low packet loss rate (14.8%) due to the deployment of modern communication protocols (e.g. multi-hop communication and handshaking protocol). The measured behavioral parameters were transformed into the corresponding behavioral modes using a multilayer...

  7. Large-scale Reconstructions and Independent, Unbiased Clustering Based on Morphological Metrics to Classify Neurons in Selective Populations.

    Science.gov (United States)

    Bragg, Elise M; Briggs, Farran

    2017-02-15

    This protocol outlines large-scale reconstructions of neurons combined with the use of independent and unbiased clustering analyses to create a comprehensive survey of the morphological characteristics observed among a selective neuronal population. Combination of these techniques constitutes a novel approach for the collection and analysis of neuroanatomical data. Together, these techniques enable large-scale, and therefore more comprehensive, sampling of selective neuronal populations and establish unbiased quantitative methods for describing morphologically unique neuronal classes within a population. The protocol outlines the use of modified rabies virus to selectively label neurons. G-deleted rabies virus acts like a retrograde tracer following stereotaxic injection into a target brain structure of interest and serves as a vehicle for the delivery and expression of EGFP in neurons. Large numbers of neurons are infected using this technique and express GFP throughout their dendrites, producing "Golgi-like" complete fills of individual neurons. Accordingly, the virus-mediated retrograde tracing method improves upon traditional dye-based retrograde tracing techniques by producing complete intracellular fills. Individual well-isolated neurons spanning all regions of the brain area under study are selected for reconstruction in order to obtain a representative sample of neurons. The protocol outlines procedures to reconstruct cell bodies and complete dendritic arborization patterns of labeled neurons spanning multiple tissue sections. Morphological data, including positions of each neuron within the brain structure, are extracted for further analysis. Standard programming functions were utilized to perform independent cluster analyses and cluster evaluations based on morphological metrics. To verify the utility of these analyses, statistical evaluation of a cluster analysis performed on 160 neurons reconstructed in the thalamic reticular nucleus of the thalamus

  8. Faith-Based Diplomacy: A Pathway to Marginalizing Al-Qa’ida

    Science.gov (United States)

    2013-03-01

    contrast to the Islamist governments that have risen to power since early 2011. The Muslim Brotherhood in Egypt, for example, has advocated a more...Zarqawi, who had been misbehaving, and he says, brother, we notice from afar X, Y, and Z is happening. Based on our experience in Egypt and around the...theory: “Al Qaeda is not a traditional hierarchical organization, with a pyramid-style organizational structure, and it does not exercise full command

  9. Difference in the Set-up Margin between 2D Conventional and 3D CT Based Planning in Patients with Early Breast Cancer

    International Nuclear Information System (INIS)

    Jo, Sun Mi; Chun, Mi Sun; Kim, Mi Hwa; Oh, Young Taek; Noh, O Kyu; Kang, Seung Hee

    2010-01-01

    Simulation using computed tomography (CT) is now widely available for radiation treatment planning for breast cancer. It is an important tool to help define the tumor target and normal tissue based on the anatomical features of an individual patient. In Korea, most patients have small sized breasts and the purpose of this study was to review the margin of the treatment field between conventional two-dimensional (2D) planning and CT based three-dimensional (3D) planning in patients with small breasts. Twenty-five consecutive patients with early breast cancer undergoing breast conservation therapy were selected. All patients underwent 3D CT based planning with a conventional breast tangential field design. In 2D planning, the treatment field margins were determined by palpation of the breast parenchyma (in general, superior: base of the clavicle, medial: midline, lateral: mid-axillary line, and inferior margin: 2 cm below the inframammary fold). In 3D planning, the clinical target volume (CTV) ought to comprise all glandular breast tissue, and the PTV was obtained by adding a 3D margin of 1 cm around the CTV except in the skin direction. The difference in the treatment field margin and equivalent field size between 2D and 3D planning were evaluated. The association between radiation field margins and factors such as body mass index, menopause status, and bra size was determined. Lung volume and heart volume were examined on the basis of the prescribed breast radiation dose and 3D dose distribution. The margins of the treatment field were smaller in the 3D planning except for two patients. The superior margin was especially variable (average, 2.5 cm; range, -2.5 to 4.5 cm; SD, 1.85). The margin of these targets did not vary equally across BMI class, menopause status, or bra size. The average irradiated lung volume was significantly lower for 3D planning. The average irradiated heart volume did not decrease significantly. The use of 3D CT based planning reduced the

  10. Difference in the Set-up Margin between 2D Conventional and 3D CT Based Planning in Patients with Early Breast Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Sun Mi; Chun, Mi Sun; Kim, Mi Hwa; Oh, Young Taek; Noh, O Kyu [Ajou University School of Medicine, Seoul (Korea, Republic of); Kang, Seung Hee [Inje University, Ilsan Paik Hospital, Ilsan (Korea, Republic of)

    2010-11-15

    Simulation using computed tomography (CT) is now widely available for radiation treatment planning for breast cancer. It is an important tool to help define the tumor target and normal tissue based on the anatomical features of an individual patient. In Korea, most patients have small sized breasts and the purpose of this study was to review the margin of the treatment field between conventional two-dimensional (2D) planning and CT based three-dimensional (3D) planning in patients with small breasts. Twenty-five consecutive patients with early breast cancer undergoing breast conservation therapy were selected. All patients underwent 3D CT based planning with a conventional breast tangential field design. In 2D planning, the treatment field margins were determined by palpation of the breast parenchyma (in general, superior: base of the clavicle, medial: midline, lateral: mid-axillary line, and inferior margin: 2 cm below the inframammary fold). In 3D planning, the clinical target volume (CTV) ought to comprise all glandular breast tissue, and the PTV was obtained by adding a 3D margin of 1 cm around the CTV except in the skin direction. The difference in the treatment field margin and equivalent field size between 2D and 3D planning were evaluated. The association between radiation field margins and factors such as body mass index, menopause status, and bra size was determined. Lung volume and heart volume were examined on the basis of the prescribed breast radiation dose and 3D dose distribution. The margins of the treatment field were smaller in the 3D planning except for two patients. The superior margin was especially variable (average, 2.5 cm; range, -2.5 to 4.5 cm; SD, 1.85). The margin of these targets did not vary equally across BMI class, menopause status, or bra size. The average irradiated lung volume was significantly lower for 3D planning. The average irradiated heart volume did not decrease significantly. The use of 3D CT based planning reduced the

  11. Are marginalized women being left behind? A population-based study of institutional deliveries in Karnataka, India

    Directory of Open Access Journals (Sweden)

    Adamson Paul C

    2012-01-01

    Full Text Available Abstract Background: While India has made significant progress in reducing maternal mortality, attaining further declines will require increased skilled birth attendance and institutional delivery among marginalized and difficult to reach populations. Methods: A population-based survey was carried out among 16 randomly selected rural villages in rural Mysore District in Karnataka, India between August and September 2008. All households in selected villages were enumerated and women with children 6 years of age or younger underwent an interviewer-administered questionnaire on antenatal care and institutional delivery. Results: Institutional deliveries in rural areas of Mysore District increased from 51% to 70% between 2002 and 2008. While increasing numbers of women were accessing antenatal care and delivering in hospitals, large disparities were found in uptake of these services among different castes. Mothers belonging to general castes were almost twice as likely to have an institutional birth as compared to scheduled castes and tribes. Mothers belonging to other backward caste or general castes had 1.8 times higher odds (95% CI: 1.21, 2.89) of having an institutional delivery as compared to scheduled castes and tribes. In multivariable analysis, which adjusted for inter- and intra-village variance, Below Poverty Line status, caste, and receiving antenatal care were all associated with institutional delivery. Conclusion: The results of the study suggest that while the Indian Government has made significant progress in increasing antenatal care and institutional deliveries among rural populations, further success in lowering maternal mortality will likely hinge on the success of NRHM programs focused on serving marginalized groups. Health interventions which target SC/ST may also have to address both perceived and actual stigma and discrimination, in addition to providing needed services. Strategies for overcoming these barriers may include

  12. Classifying wine according to geographical origin via quadrupole-based ICP-mass spectrometry measurements of boron isotope ratios

    Energy Technology Data Exchange (ETDEWEB)

    Coetzee, Paul P. [University of Johannesburg, Department of Chemistry, Johannesburg (South Africa); Vanhaecke, Frank [Institute for Nuclear Sciences, Laboratory of Analytical Chemistry Ghent University, Ghent (Belgium)

    2005-11-01

    The potential of quadrupole-based ICP-MS as a tool for B-isotopic analysis of wines and its usefulness in provenance determinations were assessed. A precision of 0.1-0.25% RSD (corresponding to a relative standard deviation of the mean of three replicate measurements of 0.06-0.12%) was sufficient to establish small differences in the B isotope ratios in wines from different geographical origins. Each sample measurement was bracketed by measurements of a standard and mass bias drift correction made by interpolation. Sample preparation was kept to a minimum to avoid possible fractionation. Dilution of the wine samples by a factor of 100 with 0.65% HNO₃ was found to reduce matrix-induced mass discrimination substantially. Wines from three wine-producing regions, Stellenbosch, Robertson, and Swartland, in the Western Cape Province of South Africa, and wines from specific regions in France (Bergerac) and Italy (Valpolicella) were analyzed by ICP-QMS for their B-isotopic compositions. It was concluded that the ¹¹B/¹⁰B ratios can be used to characterize wines from different geographical origins. Average ¹¹B/¹⁰B ratios in red wines from South Africa (Stellenbosch), France (Bergerac), and Italy (Valpolicella) were found to differ by between 0.5 and 1.5%. (orig.)

  13. PG-Metrics: A chemometric-based approach for classifying bacterial peptidoglycan data sets and uncovering their subjacent chemical variability.

    Directory of Open Access Journals (Sweden)

    Keshav Kumar

    Full Text Available Bacterial cells are protected from osmotic and environmental stresses by an exoskeleton-like polymeric structure called peptidoglycan (PG) or murein sacculus. This structure is fundamental for a bacterium's viability, and thus the mechanisms underlying cell wall assembly and how it is modulated serve as targets for many of our most successful antibiotics. Therefore, it is now more important than ever to understand the genetics and structural chemistry of the bacterial cell wall in order to find new and effective methods of blocking it for the treatment of disease. In the last decades, liquid chromatography and mass spectrometry have been demonstrated to provide the required resolution and sensitivity to characterize the fine chemical structure of PG. However, the large volume of data sets that can be produced by these instruments today is difficult to handle without a proper data analysis workflow. Here, we present PG-metrics, a chemometrics-based pipeline that allows fast and easy classification of bacteria according to their muropeptide chromatographic profiles and identification of the subjacent PG chemical variability between e.g. bacterial species, growth conditions and mutant libraries. The pipeline is successfully validated here using PG samples from different bacterial species and mutants in cell wall proteins. The obtained results clearly demonstrate that the PG-metrics pipeline is a valuable bioanalytical tool that can lead us to cell wall classification and biomarker discovery.

  14. Classifying wine according to geographical origin via quadrupole-based ICP-mass spectrometry measurements of boron isotope ratios

    International Nuclear Information System (INIS)

    Coetzee, Paul P.; Vanhaecke, Frank

    2005-01-01

    The potential of quadrupole-based ICP-MS as a tool for B-isotopic analysis of wines and its usefulness in provenance determinations were assessed. A precision of 0.1-0.25% RSD (corresponding to a relative standard deviation of the mean of three replicate measurements of 0.06-0.12%) was sufficient to establish small differences in the B isotope ratios in wines from different geographical origins. Each sample measurement was bracketed by measurements of a standard and mass bias drift correction made by interpolation. Sample preparation was kept to a minimum to avoid possible fractionation. Dilution of the wine samples by a factor of 100 with 0.65% HNO₃ was found to reduce matrix-induced mass discrimination substantially. Wines from three wine-producing regions, Stellenbosch, Robertson, and Swartland, in the Western Cape Province of South Africa, and wines from specific regions in France (Bergerac) and Italy (Valpolicella) were analyzed by ICP-QMS for their B-isotopic compositions. It was concluded that the ¹¹B/¹⁰B ratios can be used to characterize wines from different geographical origins. Average ¹¹B/¹⁰B ratios in red wines from South Africa (Stellenbosch), France (Bergerac), and Italy (Valpolicella) were found to differ by between 0.5 and 1.5%. (orig.)

  15. A margin-based analysis of the dosimetric impact of motion on step-and-shoot IMRT lung plans

    International Nuclear Information System (INIS)

    Waghorn, Benjamin J; Shah, Amish P; Rineer, Justin M; Langen, Katja M; Meeks, Sanford L

    2014-01-01

    Intrafraction motion during step-and-shoot (SNS) IMRT is known to affect the target dosimetry by a combination of dose blurring and interplay effects. These effects are typically managed by adding a margin around the target. A quantitative analysis was performed, assessing the relationship between target motion, margin size, and target dosimetry with the goal of introducing new margin recipes. A computational algorithm was used to calculate 1,174 motion-encoded dose distributions and DVHs within the patient's CT dataset. Sinusoidal motion tracks were used simulating intrafraction motion for nine lung tumor patients, each with multiple margin sizes. D95% decreased by less than 3% when the maximum target displacement beyond the margin was less than 5 mm in the superior-inferior direction and 15 mm in the anterior-posterior direction. For target displacements greater than this, D95% decreased rapidly. Targets moving in excess of 5 mm outside the margin can cause significant changes to the target. D95% decreased by up to 20% with target motion 10 mm outside the margin, with underdosing primarily limited to the target periphery. Multi-fractionated treatments were found to exacerbate target under-coverage. Margins several millimeters smaller than the maximum target displacement provided acceptable motion protection, while also allowing for reduced normal tissue morbidity

  16. Trends in size classified particle number concentration in subtropical Brisbane, Australia, based on a 5 year study

    Science.gov (United States)

    Mejía, J. F.; Wraith, D.; Mengersen, K.; Morawska, L.

    Particle number size distribution data in the range from 0.015 to 0.630 μm were collected over a 5-year period in the central business district (CBD) of Brisbane, Australia. Particle size distribution was summarised by total number concentration and number median diameter (NMD) as well as the number concentration of the 0.015-0.030 (N15-30), 0.030-0.050 (N30-50), 0.050-0.100 (N50-100), 0.100-0.300 (N100-300) and 0.300-0.630 (N300-630) μm size classes. Weekday morning (6:00-10:00) and afternoon (16:00-19:00) measurements, the former representing fresh traffic emissions (based on the local meteorological conditions) and the latter well-mixed emissions from the CBD, were extracted and the respective monthly mean values were estimated for time series analysis. For all size fractions, average morning concentrations were about 1.5 times higher than in the afternoon, whereas NMD did not vary between the morning and afternoon. The trend and seasonal components were extracted through weighted linear regression models, using the monthly variance as weights. Only the morning measurements exhibited significant trends. During this time of the day, total particle number increased by 105.7% and the increase was greater for larger particles, resulting in a shift in NMD by 7.9%. Although no seasonal component was detected, the evidence against it remained weak owing to the limitations of the database.
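
    The trend extraction described above, a weighted linear regression of the monthly means with the monthly variances as weights, can be sketched as follows; the synthetic series and variable names are assumptions, not the Brisbane data.

        # Hedged sketch of a variance-weighted linear trend fit for monthly mean
        # particle number concentrations. The data below are synthetic.
        import numpy as np

        months = np.arange(60)                                  # 5 years of months
        rng = np.random.default_rng(1)
        monthly_mean = 1.0e4 * (1.0 + 0.01 * months) + rng.normal(0, 500, 60)
        monthly_var = rng.uniform(2e5, 8e5, 60)                 # within-month variance

        # np.polyfit weights multiply the residuals, so 1/sigma weights give
        # an inverse-variance weighted least-squares fit.
        slope, intercept = np.polyfit(months, monthly_mean, deg=1,
                                      w=1.0 / np.sqrt(monthly_var))
        print(f"trend: {slope:.1f} per month, "
              f"{100.0 * slope * 60 / intercept:.1f}% change over 5 years")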

  17. Final Validation of the ProMisE Molecular Classifier for Endometrial Carcinoma in a Large Population-based Case Series.

    Science.gov (United States)

    Kommoss, S; McConechy, M K; Kommoss, F; Leung, S; Bunz, A; Magrill, J; Britton, H; Kommoss, F; Grevenkamp, F; Karnezis, A; Yang, W; Lum, A; Krämer, B; Taran, F; Staebler, A; Lax, S; Brucker, S Y; Huntsman, D G; Gilks, C B; McAlpine, J N; Talhouk, A

    2018-02-07

    Based on The Cancer Genome Atlas, we previously developed and confirmed a pragmatic molecular classifier for endometrial cancers: ProMisE (Proactive Molecular Risk Classifier for Endometrial Cancer). ProMisE identifies four prognostically distinct molecular subtypes and can be applied to diagnostic specimens (biopsy/curettings), enabling earlier informed decision-making. We have strictly adhered to the Institute of Medicine (IOM) guidelines for the development of genomic biomarkers, and herein present the final validation step of a locked-down classifier prior to clinical application. We assessed a retrospective cohort of women treated for endometrial carcinoma at the Tübingen University Women's Hospital between 2003 and 2013. Primary outcomes of overall, disease-specific, and progression-free survival were evaluated for clinical, pathological, and molecular features. Complete clinical and molecular data were evaluable for 452 women. Patient age ranged from 29 to 93 (median 65) years, and 87.8% of cases were of endometrioid histotype. Grade distribution included 282 (62.4%) G1, 75 (16.6%) G2, and 95 (21.0%) G3 tumors. 276 (61.1%) patients had stage IA disease, with the remainder stage IB (89; 19.7%), stage II (26; 5.8%), or stage III/IV (61; 13.5%). ProMisE molecular classification yielded 127 (28.1%) MMR-D, 42 (9.3%) POLE, 55 (12.2%) p53abn, and 228 (50.4%) p53wt. ProMisE was a prognostic marker for progression-free (P=0.001) and disease-specific (P=0.03) survival even after adjusting for known risk factors. Concordance between diagnostic and surgical specimens was highly favorable (accuracy 0.91, kappa 0.88). We have developed, confirmed, and now validated a pragmatic molecular classification tool (ProMisE) that provides consistent categorization of tumors and identifies four distinct prognostic molecular subtypes. ProMisE can be applied to diagnostic samples and thus could be used to inform surgical procedure(s) and/or the need for adjuvant therapy. Based on the IOM
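
    ProMisE assigns each tumor to one of the four subtypes through a short sequence of tests. The sketch below encodes the commonly published order (MMR IHC first, then POLE mutation status, then p53 IHC); treat this order and the boolean inputs as assumptions for illustration, not as a restatement of the locked-down classifier validated in the study.

        # Hedged sketch of a ProMisE-style sequential assignment to one of four
        # molecular subtypes. The test order (MMR -> POLE -> p53) follows commonly
        # published descriptions and is assumed here.

        def promise_subtype(mmr_deficient: bool, pole_mutated: bool, p53_abnormal: bool) -> str:
            if mmr_deficient:      # loss of mismatch-repair protein expression on IHC
                return "MMR-D"
            if pole_mutated:       # pathogenic POLE exonuclease-domain mutation
                return "POLE"
            if p53_abnormal:       # aberrant p53 staining pattern on IHC
                return "p53abn"
            return "p53wt"         # none of the above

        print(promise_subtype(mmr_deficient=False, pole_mutated=False, p53_abnormal=True))
        # -> "p53abn"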

  18. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods (Chapter 10)

    Science.gov (United States)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps for analyzing the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high-spatial-resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They are used for everything from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post

  19. Power Allocation based on Certain Equivalent Margin Applicable to Wireless Communication Systems Limited by Interference

    Directory of Open Access Journals (Sweden)

    Tadeu Junior Gross

    2015-10-01

    Full Text Available This paper analyzes a power allocation method for wireless communication networks limited by multiple-access interference under Rayleigh fading channels. The method takes into account the statistical variation of the desired and interfering signal powers and allocates the power resources in the system optimally and dynamically. The allocation respects the restrictions imposed by the outage probability for each receiver/transmitter pair. Based on Perron-Frobenius eigenvalue and geometric programming (GP) theories, some interesting results were found for the optimization problem analyzed. A GP is a special type of optimization problem that can be transformed into a non-linear convex optimization problem by a simple change of variables and then solved globally. An iterative method with fast convergence was also shown to find the optimal power allocation that minimizes the outage probability in the communication.

  20. Opening up the blackbox: an interpretable deep neural network-based classifier for cell-type specific enhancer predictions.

    Science.gov (United States)

    Kim, Seong Gon; Theera-Ampornpunt, Nawanol; Fang, Chih-Hao; Harwani, Mrudul; Grama, Ananth; Chaterji, Somali

    2016-08-01

    Gene expression is mediated by specialized cis-regulatory modules (CRMs), the most prominent of which are called enhancers. Early experiments indicated that enhancers located far from the gene promoters are often responsible for mediating gene transcription. Knowing their properties, regulatory activity, and genomic targets is crucial to the functional understanding of cellular events, ranging from cellular homeostasis to differentiation. Recent genome-wide investigation of epigenomic marks has indicated that enhancer elements could be enriched for certain epigenomic marks, such as combinatorial patterns of histone modifications. Our efforts in this paper are motivated by these recent advances in epigenomic profiling methods, which have uncovered enhancer-associated chromatin features in different cell types and organisms. Specifically, in this paper, we use recent state-of-the-art Deep Learning methods and develop a deep neural network (DNN)-based architecture, called EP-DNN, to predict the presence and types of enhancers in the human genome. It uses as features the expression levels of the histone modifications at the peaks of the functional sites as well as in their adjacent regions. We apply EP-DNN to four different cell types: H1, IMR90, HepG2, and HeLa S3. We train EP-DNN using p300 binding sites as enhancers, and TSS and random non-DHS sites as non-enhancers. We perform EP-DNN predictions to quantify the validation rate for different levels of confidence in the predictions and also perform comparisons against two state-of-the-art computational models for enhancer predictions, DEEP-ENCODE and RFECS. We find that EP-DNN has superior accuracy and takes less time to make predictions. Next, we develop methods to make EP-DNN interpretable by computing the importance of each input feature in the classification task. This analysis indicates that the important histone modifications were distinct for different cell types, with some overlaps, e.g., H3K27ac was
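
    As a hedged illustration of this kind of classifier (not the EP-DNN architecture or its training protocol), the sketch below trains a small multilayer perceptron on histone-modification signal vectors to separate enhancers from non-enhancers; the feature matrix and labels are synthetic.

        # Hedged sketch: a small neural-network classifier on histone-modification
        # features, standing in for the EP-DNN idea. Data are synthetic.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(42)
        n_sites, n_marks = 2000, 24            # e.g. 24 histone-mark signal bins per site
        X = rng.normal(size=(n_sites, n_marks))
        w = rng.normal(size=n_marks)
        y = (X @ w + rng.normal(scale=0.5, size=n_sites) > 0).astype(int)  # enhancer or not

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
        clf.fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))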

  1. Overcoming the drawback of lower sense margin in tunnel FET based dynamic memory along with enhanced charge retention and scalability

    Science.gov (United States)

    Navlakha, Nupur; Kranti, Abhinav

    2017-11-01

    The work reports on the use of a planar tri-gate tunnel field effect transistor (TFET) to operate as dynamic memory at 85 °C with an enhanced sense margin (SM). Two symmetric gates (G1) aligned to the source at a partial region of intrinsic film result in better electrostatic control that regulates the read mechanism based on band-to-band tunneling, while the other gate (G2), positioned adjacent to the first front gate, is responsible for charge storage and sustenance. The proposed architecture results in an enhanced SM of ~1.2 μA/μm along with a longer retention time (RT) of ~1.8 s at 85 °C, for a total length of 600 nm. The double gate architecture towards the source increases the tunneling current and also reduces short channel effects, enhancing SM and scalability, thereby overcoming the critical bottleneck faced by TFET-based dynamic memories. The work also discusses the impact of overlap/underlap and interface charges on the performance of TFET-based dynamic memory. Insights into device operation demonstrate that the choice of appropriate architecture and biases not only limits the trade-off between SM and RT, but also results in improved scalability, with drain voltage and total length being scaled down to 0.8 V and 115 nm, respectively.

  2. Community-Based Research among Marginalized HIV Populations: Issues of Support, Resources, and Empowerment

    Directory of Open Access Journals (Sweden)

    Mario Brondani

    2012-01-01

    Full Text Available A research question was posed to us by a local HIV-resource organization interested in exploring the educational and service needs of those it had not reached. To address this inquiry properly, we developed a community-based participatory research project, training peer volunteers to facilitate focus-group discussions with Aboriginal and refugee participants following an interview guide. We organized Aboriginal people and refugees into three focus groups each, enrolling a total of 41 self-identified HIV-positive participants, 38 of them male. The discussions were tape-recorded upon consent and lasted between 59 and 118 minutes. We analyzed the thematic information collected interactively through constant comparison. The qualitative data, leading to categories, codes, and themes, formed the basis for the spatial representation of a conceptual map. Both groups shared similar struggles in living with HIV and in properly accessing local nonmedical HIV resources, and discussed their concerns about the need for empowerment and support to take control of their health.

  3. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Science.gov (United States)

    Tokuda, Tomoki; Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.
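
    A full multiple co-clustering model is beyond a short sketch, but the nonparametric-mixture backbone it builds on can be illustrated with a truncated Dirichlet-process Gaussian mixture that infers the effective number of clusters from the data. This is a deliberate simplification (single view, single Gaussian family), not the authors' model.

        # Hedged sketch: truncated Dirichlet-process Gaussian mixture as a stand-in
        # for the nonparametric mixture backbone (no views, no co-clustering).
        # Data are synthetic.
        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(7)
        X = np.vstack([rng.normal(loc=m, scale=0.5, size=(150, 2))
                       for m in ((0, 0), (4, 0), (0, 4))])

        dpgmm = BayesianGaussianMixture(
            n_components=10,                               # truncation level
            weight_concentration_prior_type="dirichlet_process",
            random_state=0,
        ).fit(X)

        print("effective clusters found:", len(np.unique(dpgmm.predict(X))))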

  4. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions.

    Directory of Open Access Journals (Sweden)

    Tomoki Tokuda

    Full Text Available We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data.

  5. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions

    Science.gov (United States)

    Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392

  6. Providing choices for a marginalized community. A community-based project with Malaysian aborigines.

    Science.gov (United States)

    Kaur, P

    1994-01-01

    In 1991, the Family Planning Association (FPA) of the Malaysian state of Perak initiated a community-based development project in the remote Aborigine village of Kampung Tisong. The community consists of approximately 34 households who survive on an average income of about US $37. Malnutrition is pervasive, even minor ailments cause death, more serious afflictions are prevalent, and the closest government clinic is 20 kilometers away and seldom used by the Aborigines. 70% of the children have access to education, but parental illiteracy is a serious educational obstacle. The goals of the FPA program are to 1) promote maternal and child health and responsible parenthood, 2) provide health education, 3) encourage women to seek self-determination, and 4) encourage the development of self-reliance in the community as a whole. The first step was to survey the community's culture, beliefs, and health status with the help of the Aborigines Department and the village headman. After a series of preliminary meetings with other agencies, the FPA began to provide activities including health talks, health courses and demonstrations, medical examinations and check-ups, and first aid training. Environmental protection and sanitation measures were included in the educational activities, and following the traditional "mutual aid system," a small plot of land was cleared for vegetable production. Vegetable gardens and needlecraft will become income-producing activities for the women. Attempts to motivate the women to use family planning have been hindered by the fact that the health of 2 women deteriorated after they began using oral contraceptives. Positive changes are occurring slowly and steadily, however, and the FPA has been instrumental in having the settlement included in a program for the hardcore poor which will provide new housing and farming projects.

  7. Classifying Returns as Extreme

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2014-01-01

    I consider extreme returns for the stock and bond markets of 14 EU countries using two classification schemes: One, the univariate classification scheme from the previous literature that classifies extreme returns for each market separately, and two, a novel multivariate classification scheme tha...

  8. Classifying individuals based on a densely captured sequence of vital signs: An example using repeated blood pressure measurements during hemodialysis treatment.

    Science.gov (United States)

    Goldstein, Benjamin A; Chang, Tara I; Winkelmayer, Wolfgang C

    2015-10-01

    Electronic Health Records (EHRs) present the opportunity to observe serial measurements on patients. While potentially informative, analyzing these data can be challenging. In this work we present a means to classify individuals based on a series of measurements collected by an EHR. Using patients undergoing hemodialysis, we categorized people based on their intradialytic blood pressure. Our primary criteria were that the classifications were time dependent and independent of other subjects. We fit a curve of intradialytic blood pressure using regression splines and then calculated first and second derivatives to come up with four mutually exclusive classifications at different time points. We show that these classifications relate to near term risk of cardiac events and are moderately stable over a succeeding two-week period. This work has general application for analyzing dense EHR data. Copyright © 2015 Elsevier Inc. All rights reserved.
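
    The classification logic described above (fit a smooth curve to each patient's serial intradialytic blood pressure, then use the first and second derivatives to assign one of four mutually exclusive classes at a time point) can be sketched as follows. The spline settings, readings, and class labels are illustrative assumptions, not the authors' definitions.

        # Hedged sketch: fit a regression spline to serial blood-pressure readings
        # and classify a time point by the signs of its first and second derivatives.
        import numpy as np
        from scipy.interpolate import UnivariateSpline

        minutes = np.array([0, 30, 60, 90, 120, 150, 180, 210, 240], dtype=float)
        systolic = np.array([148, 142, 138, 130, 126, 124, 120, 121, 123], dtype=float)

        spline = UnivariateSpline(minutes, systolic, k=3, s=len(minutes))
        d1, d2 = spline.derivative(1), spline.derivative(2)

        def classify(t):
            slope, curvature = d1(t), d2(t)
            if slope < 0:
                return "decreasing, accelerating" if curvature < 0 else "decreasing, decelerating"
            return "increasing, accelerating" if curvature > 0 else "increasing, decelerating"

        print(classify(120.0))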

  9. LCC: Light Curves Classifier

    Science.gov (United States)

    Vo, Martin

    2017-08-01

    Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering for visualizing the natural separation of the sample. The package can also be used for automatically tuning the parameters of the methods used (for example, the number of hidden neurons or the binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information about queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifiers, and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

  10. Sparse representation of multi parametric DCE-MRI features using K-SVD for classifying gene expression based breast cancer recurrence risk

    Science.gov (United States)

    Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina

    2014-03-01

    We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm on multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information into a few coefficients but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUC of the K-SVD-based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
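
    scikit-learn does not ship K-SVD itself, but the related dictionary-learning plus orthogonal matching pursuit pipeline below gives a hedged illustration of representing each lesion's feature vector with K=4 dictionary atoms and at most L=2 non-zero coefficients before classification; the feature matrix and labels are synthetic.

        # Hedged sketch: sparse representation of multiparametric features using
        # dictionary learning + OMP as a stand-in for K-SVD (K=4 atoms, L=2 non-zero
        # coefficients), followed by logistic regression. Data are synthetic.
        import numpy as np
        from sklearn.decomposition import DictionaryLearning
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        n_lesions, n_features = 60, 50              # kinetic + texture + morphology
        X = rng.normal(size=(n_lesions, n_features))
        y = rng.integers(0, 2, size=n_lesions)      # high vs low recurrence risk

        dico = DictionaryLearning(n_components=4,               # K = 4 dictionary atoms
                                  transform_algorithm="omp",
                                  transform_n_nonzero_coefs=2,  # L = 2
                                  random_state=0)
        codes = dico.fit_transform(X)               # sparse codes, shape (60, 4)

        print("CV accuracy:", cross_val_score(LogisticRegression(), codes, y, cv=5).mean())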

  11. Northeastern Brazilian margin: Regional tectonic evolution based on integrated analysis of seismic reflection and potential field data and modelling

    Science.gov (United States)

    Blaich, Olav A.; Tsikalas, Filippos; Faleide, Jan Inge

    2008-10-01

    Integration of regional seismic reflection and potential field data along the northeastern Brazilian margin, complemented by crustal-scale gravity modelling, is used to reveal and illustrate onshore-offshore crustal structure correlation, the character of the continent-ocean boundary, and the relationship of crustal structure to regional variation of potential field anomalies. The study reveals distinct along-margin structural and magmatic changes that are spatially related to a number of conjugate Brazil-West Africa transfer systems, governing the margin segmentation and evolution. Several conceptual tectonic models are invoked to explain the structural evolution of the different margin segments in a conjugate margin context. Furthermore, the constructed transects, the observed and modelled Moho relief, and the potential field anomalies indicate that the Recôncavo, Tucano and Jatobá rift system may reflect a polyphase deformation rifting-mode associated with a complex time-dependent thermal structure of the lithosphere. The constructed transects and available seismic reflection profiles, indicate that the northern part of the study area lacks major breakup-related magmatic activity, suggesting a rifted non-volcanic margin affinity. In contrast, the southern part of the study area is characterized by abrupt crustal thinning and evidence for breakup magmatic activity, suggesting that this region evolved, partially, with a rifted volcanic margin affinity and character.

  12. Level of Alkenylbenzenes in Parsley and Dill Based Teas and Associated Risk Assessment Using the Margin of Exposure Approach.

    Science.gov (United States)

    Alajlouni, Abdalmajeed M; Al-Malahmeh, Amer J; Isnaeni, Farida Nur; Wesseling, Sebastiaan; Vervoort, Jacques; Rietjens, Ivonne M C M

    2016-11-16

    Risk assessment of parsley and dill based teas that contain alkenylbenzenes was performed. To this end the estimated daily intake (EDI) of alkenylbenzenes resulting from use of the teas was quantified. Since most teas appeared to contain more than one alkenylbenzene, a combined risk assessment was performed based on equal potency of all alkenylbenzenes or using a so-called toxic equivalency (TEQ) approach through defining toxic equivalency factors (TEFs) for the different alkenylbenzenes. The EDI values resulting from consuming one cup of tea a day were 0.2-10.1 μg/kg bw for the individual alkenylbenzenes, 0.6-13.1 μg/kg bw for the sum of the alkenylbenzenes, and 0.3-10.7 μg safrole equiv/kg bw for the sum of alkenylbenzenes when expressed in safrole equivalents. The margin of exposure (MOE) values obtained were generally <10000, indicating a concern if the teas would be consumed on a daily basis over longer periods of time.
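
    The arithmetic behind the intake and MOE figures above is straightforward: the estimated daily intake is the alkenylbenzene dose per cup scaled by consumption and body weight, and the MOE is the animal BMDL10 divided by that intake. All numbers in the sketch are illustrative placeholders, not the paper's measurements.

        # Hedged sketch of the EDI / MOE arithmetic for an alkenylbenzene-containing
        # herbal tea. All numbers are illustrative placeholders.

        def estimated_daily_intake(ug_per_cup, cups_per_day=1, body_weight_kg=70.0):
            """EDI in micrograms per kg body weight per day."""
            return ug_per_cup * cups_per_day / body_weight_kg

        def margin_of_exposure(bmdl10_mg_per_kg, edi_ug_per_kg):
            """MOE = BMDL10 (mg/kg bw/day) / EDI (converted to mg/kg bw/day)."""
            return bmdl10_mg_per_kg / (edi_ug_per_kg / 1000.0)

        edi = estimated_daily_intake(ug_per_cup=350.0)            # assumed content per cup
        moe = margin_of_exposure(bmdl10_mg_per_kg=1.9, edi_ug_per_kg=edi)  # assumed BMDL10
        print(f"EDI = {edi:.1f} ug/kg bw/day, MOE = {moe:.0f}")
        print("potential concern" if moe < 10_000 else "low priority")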

  13. Breast conservation therapy based on liberal selection criteria and less extensive surgery. Analysis of cases with positive margins

    International Nuclear Information System (INIS)

    Amemiya, Atsushi; Kondo, Makoto

    1999-01-01

    The relationship between margin status and the risk of in-breast recurrence (IBR) is an important consideration in patients treated with breast conservation therapy but has not been defined adequately. To address this issue, 1533 clinical stage I and II patients who completed irradiation therapy between 1983 and 1998 were evaluated. The only selection criterion was whether the patient could be satisfied with cosmesis after lumpectomy. Size and location of the tumor, nodal status, histology, and age were not primary considerations. The tumor was excised in such a way as to obtain macroscopically clear margins. The breast was treated with 50 Gy of external irradiation but without a boost. Margins were evaluated by serial sectioning of the specimen, and a margin was judged positive only when cancer cells were present on the inked surface. Margins were also evaluated by scratch cytology. Seventy-two IBRs were observed within 5 years. Only age and margin status were found to be independent risk factors. The five-year IBR rate with negative and positive margins was 3.7% and 10.0%, respectively. In patients with positive margins, the number of positive sites and positive cytology were independent risk factors for IBR. The IBR rate among patients with margins focally involved by non-comedo DCIS, comedo DCIS, and invasive carcinoma was 0.0%, 3.5%, and 8.7%, respectively. The IBR rate with more than focal involvement by non-comedo DCIS, comedo DCIS, and invasive carcinoma was 4.0%, 33.0%, and 30.0%, respectively. If a histologically positive margin was also cytologically positive, the IBR rate was 14.8%, whereas it was only 3.6% if cytologically negative. Even with liberal patient selection and less extensive local treatment, adequate local control can be obtained, provided that margins are histologically and/or cytologically negative. Focal margin involvement by DCIS or more than focal involvement by non-comedo type DCIS does not jeopardize local control. More than focal involvement by comedo DCIS or involvement by invasive carcinoma results in a high IBR rate.

  14. Margin Evaluation in the Presence of Deformation, Rotation, and Translation in Prostate and Entire Seminal Vesicle Irradiation With Daily Marker-Based Setup Corrections

    International Nuclear Information System (INIS)

    Mutanga, Theodore F.; Boer, Hans C.J. de; Wielen, Gerard J. van der; Hoogeman, Mischa S.; Incrocci, Luca; Heijmen, Ben J.M.

    2011-01-01

    Purpose: To develop a method for margin evaluation accounting for all measured displacements during treatment of prostate cancer. Methods and Materials: For 21 patients treated with stereographic targeting marker-based online translation corrections, dose distributions with varying margins and gradients were created. Sets of possible cumulative delivered dose distributions were simulated by moving voxels and accumulating dose per voxel. Voxel motion was simulated consistent with measured distributions of systematic and random displacements due to stereographic targeting inaccuracies, deformation, rotation, and intrafraction motion. The method of simulation maintained measured correlation of voxel motions due to organ deformation. Results: For the clinical target volume including prostate and seminal vesicles (SV), the probability that some part receives <95% of the prescribed dose, the changes in minimum dose, and volume receiving 95% of prescription dose compared with planning were 80.5% ± 19.2%, 9.0 ± 6.8 Gy, and 3.0% ± 3.7%, respectively, for the smallest studied margins (3 mm prostate, 5 mm SV) and steepest dose gradients. Corresponding values for largest margins (5 mm prostate, 8 mm SV) with a clinical intensity-modulated radiotherapy dose distribution were 46.5% ± 34.7%, 6.7 ± 5.8 Gy, and 1.6% ± 2.3%. For prostate-only clinical target volume, the values were 51.8% ± 17.7%, 3.3 ± 1.6 Gy, and 0.6% ± 0.5% with the smallest margins and 5.2% ± 7.4%, 1.8 ± 0.9 Gy, and 0.1% ± 0.1% for the largest margins. Addition of three-dimensional rotation corrections only improved these values slightly. All rectal planning constraints were met in the actual reconstructed doses for all studied margins. Conclusion: We developed a system for margin validation in the presence of deformations. In our population, a 5-mm margin provided sufficient dosimetric coverage for the prostate. In contrast, an 8-mm SV margin was still insufficient owing to deformations. Addition of

  15. A margin-of-exposure approach to assessment of noncancer risks of dioxins based on human exposure and response data.

    Science.gov (United States)

    Aylward, Lesa L; Goodman, Julie E; Charnley, Gail; Rhomberg, Lorenz R

    2008-10-01

    Risk assessment of human environmental exposure to polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/PCDFs) and other dioxin-like compounds is complicated by several factors, including limitations in measuring intakes because of the low concentrations of these compounds in foods and the environment and interspecies differences in pharmacokinetics and responses. We examined the feasibility of relying directly on human studies of exposure and potential responses to PCDD/PCDFs and related compounds in terms of measured lipid-adjusted concentrations to assess margin of exposure (MOE) in a quantitative, benchmark dose (BMD)-based framework using representative exposure and selected response data sets. We characterize estimated central tendency and upper-bound general U.S. population lipid-adjusted concentrations of PCDD/PCDFs from the 1970s and early 2000s based on available data sets. Estimates of benchmark concentrations for three example responses of interest (induction of cytochrome P4501A2 activity, dental anomalies, and neonatal thyroid hormone alterations) were derived based on selected human studies. The exposure data sets indicate that current serum lipid concentrations in young adults are approximately 6- to 7-fold lower than 1970s-era concentrations. Estimated MOEs for each end point based on current serum lipid concentrations range from 100 for dental anomalies, approximately 6-fold greater than would have existed during the 1970s. Human studies of dioxin exposure and outcomes can be used in a BMD framework for quantitative assessments of MOE. Incomplete exposure characterization can complicate the use of such studies in a BMD framework.

  16. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    Science.gov (United States)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance is heavily dependent on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson's disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while affecting the quality of life dramatically. In this paper, we use data acquired from multi-modal neuroimaging to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.
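
    As a hedged point of contrast (not the joint kernel-space method proposed above), the sketch below wires the conventional, decoupled pipeline that the paper argues against: univariate feature selection in the original input space followed by an RBF-kernel max-margin classifier. Data are synthetic.

        # Hedged sketch of the conventional, decoupled baseline: feature selection in
        # the input space, then an SVM. NOT the proposed joint kernel-space method.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(11)
        n_subjects, n_rois = 200, 90                  # e.g. regional MRI/SPECT features
        X = np.abs(rng.normal(size=(n_subjects, n_rois)))   # non-negative feature types
        y = rng.integers(0, 2, size=n_subjects)             # PD vs normal control

        pipe = make_pipeline(SelectKBest(f_classif, k=20),   # selection in input space
                             SVC(kernel="rbf", C=1.0, gamma="scale"))
        print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())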

  17. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    Science.gov (United States)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance is heavily dependent on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson's disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while affecting the quality of life dramatically. In this paper, we use data acquired from multi-modal neuroimaging to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods. PMID:28120883

  18. Classifying Linear Canonical Relations

    OpenAIRE

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  19. Setup uncertainties in linear accelerator based stereotactic radiosurgery and a derivation of the corresponding setup margin for treatment planning.

    Science.gov (United States)

    Zhang, Mutian; Zhang, Qinghui; Gan, Hua; Li, Sicong; Zhou, Su-min

    2016-02-01

    In the present study, clinical stereotactic radiosurgery (SRS) setup uncertainties from image-guidance data are analyzed, and the corresponding setup margin is estimated for treatment planning purposes. Patients undergoing single-fraction SRS at our institution were localized using invasive head ring or non-invasive thermoplastic masks. Setup discrepancies were obtained from an in-room x-ray patient position monitoring system. Post treatment re-planning using the measured setup errors was performed in order to estimate the individual target margins sufficient to compensate for the actual setup errors. The formula of setup margin for a general SRS patient population was derived by proposing a correlation between the three-dimensional setup error and the required minimal margin. Setup errors of 104 brain lesions were analyzed, in which 81 lesions were treated using an invasive head ring, and 23 were treated using non-invasive masks. In the mask cases with image guidance, the translational setup uncertainties achieved the same level as those in the head ring cases. Re-planning results showed that the margins for individual patients could be smaller than the clinical three-dimensional setup errors. The derivation of setup margin adequate to address the patient setup errors was demonstrated by using the arbitrary planning goal of treating 95% of the lesions with sufficient doses. With image guidance, the patient setup accuracy of mask cases can be comparable to that of invasive head rings. The SRS setup margin can be derived for a patient population with the proposed margin formula to compensate for the institution-specific setup errors. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  20. Geological Constraints on the Evolution of the Angolan Margin Based on Reflection and Refraction Seismic Data (ZaïAngo project)

    Science.gov (United States)

    Moulin, M.; Aslanian, D.; Olivet, J.; Contrucci, I.; Matias, L.; Geli, L.; Klingelhoefer, F.; Nouze, H.; Rabineau, M.; Labails, C.; Rehault, J.; Unternehr, P.

    2005-05-01

    Deep penetration multi-channel reflection and OBS wide-angle seismic data from the Congo-Angola margin were collected in 2000 during the ZaiAngo cruise (Ifremer and Total). These data help constrain the deep structure of the non-volcanic continental margin, the geometry of the pre-salt sediment layers and the geometry of the Aptian salt layer. Dating the deposition of the salt relative to the chronology of the margin formation is an issue of fundamental importance for reconstructing the evolution of the margin and for the understanding of the crustal thinning processes. The data show that the crust thins abruptly, from a 30 - 40km thickness to less than 10km, over a lateral distance of less than 50km. The transitional domain is a 180km wide basin with a thickness lower than 7 km. The pre-salt sediment layering within this basin is parallel to the base of the salt and hardly affected by tectonic deformation. In addition, the presence of a continuous salt cover, from the continental platform down to the presumed oceanic boundary, provides indications on the conditions of salt deposition that constrain the geometry of the margin at that time. These crucial observations imply shallow deposition environments during the rifting and suggest that vertical motions prevailed - compared to horizontal motions - during the formation of the basin.

  1. Geological constraints on the evolution of the Angolan margin based on reflection and refraction seismic data (ZaïAngo project)

    Science.gov (United States)

    Moulin, Maryline; Aslanian, Daniel; Olivet, Jean-Louis; Contrucci, Isabelle; Matias, Luis; Géli, Louis; Klingelhoefer, Frauke; Nouzé, Hervé; Réhault, Jean-Pierre; Unternehr, Patrick

    2005-09-01

    Deep penetration multichannel reflection and Ocean Bottom Seismometer wide-angle seismic data from the Congo-Angola margin were collected in 2000 during the ZaïAngo cruise. These data help constrain the deep structure of the continental margin, the geometry of the pre-salt sediment layers and the geometry of the Aptian salt layer. Dating the deposition of the salt relative to the chronology of the margin formation is an issue of fundamental importance for reconstructing the evolution of the margin and for the understanding of the crustal thinning processes. The data show that the crust thins abruptly, from a 30-40 km thickness to less than 10 km, over a lateral distance of less than 50 km. The transitional domain is a 180-km-wide basin. The pre-salt sediment layering within this basin is parallel to the base of the salt and hardly affected by tectonic deformation. In addition, the presence of a continuous salt cover, from the continental platform down to the presumed oceanic boundary, provides indications on the conditions of salt deposition that constrain the geometry of the margin at that time. These crucial observations imply shallow deposition environments during the rifting and suggest that vertical motions prevailed, compared to horizontal motions, during the formation of the basin.

  2. GIS-based approach for defining bioenergy facilities location: A case study in Northern Spain based on marginal delivery costs and resources competition between facilities

    Energy Technology Data Exchange (ETDEWEB)

    Panichelli, Luis; Gnansounou, Edgard [Laboratory of Energy Systems, Swiss Federal Institute of Technology, LASEN-ICARE-ENAC, Station 18, EPFL, CH-1015 Lausanne (Switzerland)]

    2008-04-15

    This paper presents a GIS-based decision support system for selecting least-cost bioenergy locations when there is significant variability in biomass farmgate price and when more than one bioenergy plant with a fixed capacity has to be placed in the region. The methodology tackles the resource competition problem between energy facilities through a location-allocation model based on least-cost biomass quantities. The whole-system least delivery cost, including intermediate bioenergy products, is estimated. The methodology is based on a case study where forest wood residues (FWR) from final cuttings (FCs) are used to produce torrefied wood (TW) in two torrefaction plants (TUs) that supply a gasification unit (GU) in order to produce electricity. The provinces of Navarra, Bizkaia, Gipuzkoa, Alava, La Rioja, Cantabria and Burgos are assessed in order to find the best locations for siting the TUs and the GU according to biomass availability and FWR and TW marginal delivery costs. (author)

  3. Predicting membrane protein types using various decision tree classifiers based on various modes of general PseAAC for imbalanced datasets.

    Science.gov (United States)

    Sankari, E Siva; Manimegalai, D

    2017-12-21

    Predicting membrane protein types is an important and challenging research area in bioinformatics and proteomics. Traditional biophysical methods are used to classify membrane protein types. Owing to the large numbers of uncharacterized protein sequences in databases, these traditional methods are very time-consuming, expensive, and susceptible to errors. Hence, it is highly desirable to develop a robust, reliable, and efficient method to predict membrane protein types. Imbalanced and large datasets are often handled well by decision tree classifiers. Because the datasets used here are imbalanced, the performance of various decision tree classifiers, such as Decision Tree (DT), Classification And Regression Tree (CART), C4.5, Random tree, and REP (Reduced Error Pruning) tree, and of ensemble methods, such as AdaBoost, RUSBoost (Random Under-Sampling boost), Rotation forest, and Random forest, is analysed. Among these classifiers, Random forest performs well in less time, with a good accuracy of 96.35%. Another finding is that the RUSBoost decision tree classifier is able to classify one or two samples in classes with very few samples, while the other classifiers, such as DT, AdaBoost, Rotation forest, and Random forest, are not sensitive to the classes with fewer samples. The performance of the decision tree classifiers is also compared with SVM (Support Vector Machine) and Naive Bayes classifiers. Copyright © 2017 Elsevier Ltd. All rights reserved.
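
    A hedged sketch of the winning configuration reported above, a random forest on an imbalanced multi-class problem, is given below; the features are synthetic stand-ins for PseAAC descriptors, and class weighting is shown as one common remedy for imbalance rather than the paper's exact setup.

        # Hedged sketch: random forest on an imbalanced multi-class problem, standing
        # in for membrane-protein-type prediction from PseAAC features. Data synthetic.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        sizes = {0: 900, 1: 300, 2: 60, 3: 15}        # deliberately imbalanced classes
        X = np.vstack([rng.normal(loc=c, size=(n, 40)) for c, n in sizes.items()])
        y = np.concatenate([np.full(n, c) for c, n in sizes.items()])

        forest = RandomForestClassifier(n_estimators=200,
                                        class_weight="balanced",
                                        random_state=0)
        print("balanced CV accuracy:",
              cross_val_score(forest, X, y, cv=5, scoring="balanced_accuracy").mean())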

  4. Comparison of prostate set-up accuracy and margins with off-line bony anatomy corrections and online implanted fiducial-based corrections.

    Science.gov (United States)

    Greer, P B; Dahl, K; Ebert, M A; Wratten, C; White, M; Denham, J W

    2008-10-01

    The aim of the study was to determine prostate set-up accuracy and set-up margins with off-line bony anatomy-based imaging protocols, compared with online implanted fiducial marker-based imaging with daily corrections. Eleven patients were treated with implanted prostate fiducial markers and online set-up corrections. Pretreatment orthogonal electronic portal images were acquired to determine couch shifts and verification images were acquired during treatment to measure residual set-up error. The prostate set-up errors that would result from skin marker set-up, off-line bony anatomy-based protocols and online fiducial marker-based corrections were determined. Set-up margins were calculated for each set-up technique using the percentage of encompassed isocentres and a margin recipe. The prostate systematic set-up errors in the medial-lateral, superior-inferior and anterior-posterior directions for skin marker set-up were 2.2, 3.6 and 4.5 mm (1 standard deviation). For our bony anatomy-based off-line protocol the prostate systematic set-up errors were 1.6, 2.5 and 4.4 mm. For the online fiducial based set-up the results were 0.5, 1.4 and 1.4 mm. A prostate systematic error of 10.2 mm was uncorrected by the off-line bone protocol in one patient. Set-up margins calculated to encompass 98% of prostate set-up shifts were 11-14 mm with bone off-line set-up and 4-7 mm with online fiducial markers. Margins from the van Herk margin recipe were generally 1-2 mm smaller. Bony anatomy-based set-up protocols improve the group prostate set-up error compared with skin marks; however, large prostate systematic errors can remain undetected or systematic errors increased for individual patients. The margin required for set-up errors was found to be 10-15 mm unless implanted fiducial markers are available for treatment guidance.
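
    The "van Herk margin recipe" mentioned above combines the systematic (Sigma) and random (sigma) components of the set-up error; its widely quoted form is margin ≈ 2.5·Sigma + 0.7·sigma. The sketch below applies it per axis, using the skin-mark systematic errors quoted in the abstract and assumed random components, so the printed margins are illustrative only.

        # Hedged sketch of the van Herk population margin recipe:
        #   margin ~= 2.5 * Sigma + 0.7 * sigma
        # Sigma = systematic set-up error (1 SD), sigma = random set-up error (1 SD).
        # Sigma values below are the skin-mark numbers quoted in the abstract; the
        # random components are assumed for illustration only.

        def van_herk_margin(systematic_sd_mm, random_sd_mm):
            return 2.5 * systematic_sd_mm + 0.7 * random_sd_mm

        axes = {"ML": (2.2, 2.0), "SI": (3.6, 2.5), "AP": (4.5, 3.0)}  # (Sigma, sigma)
        for axis, (sigma_sys, sigma_rand) in axes.items():
            print(f"{axis}: margin = {van_herk_margin(sigma_sys, sigma_rand):.1f} mm")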

  5. Comparison of prostate set-up accuracy and margins with off-line bony anatomy corrections and online implanted fiducial-based corrections

    International Nuclear Information System (INIS)

    Greer, P. B.; Dahl, K.; Ebert, M. A.; Wratten, C.; White, M.; Denham, K. W.

    2008-01-01

    Full text: The aim of the study was to determine prostate set-up accuracy and set-up margins with off-line bony anatomy-based imaging protocols, compared with online implanted fiducial marker-based imaging with daily corrections. Eleven patients were treated with implanted prostate fiducial markers and online set-up corrections. Pretreatment orthogonal electronic portal images were acquired to determine couch shifts and verification images were acquired during treatment to measure residual set-up error. The prostate set-up errors that would result from skin marker set-up, off-line bony anatomy-based protocols and online fiducial marker-based corrections were determined. Set-up margins were calculated for each set-up technique using the percentage of encompassed isocentres and a margin recipe. The prostate systematic set-up errors in the medial-lateral, superior-inferior and anterior-posterior directions for skin marker set-up were 2.2, 3.6 and 4.5 mm (1 standard deviation). For our bony anatomy-based off-line protocol the prostate systematic set-up errors were 1.6, 2.5 and 4.4 mm. For the online fiducial based set-up the results were 0.5, 1.4 and 1.4 mm. A prostate systematic error of 10.2 mm was uncorrected by the off-line bone protocol in one patient. Set-up margins calculated to encompass 98% of prostate set-up shifts were 11-14 mm with bone off-line set-up and 4-7 mm with online fiducial markers. Margins from the van Herk margin recipe were generally 1-2 mm smaller. Bony anatomy-based set-up protocols improve the group prostate set-up error compared with skin marks; however, large prostate systematic errors can remain undetected or systematic errors increased for individual patients. The margin required for set-up errors was found to be 10-15 mm unless implanted fiducial markers are available for treatment guidance.

  6. Error minimizing algorithms for nearest neighbor classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory]; Hush, Don [Los Alamos National Laboratory]; Zimmer, G. Beate [Texas A&M]

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor-type classifiers by minimizing empirical-error-based loss functions. We use the framework to investigate a new cost-sensitive loss function that allows us to train a Nearest Neighbor-type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.

  7. A DNA-based pattern classifier with in vitro learning and associative recall for genomic characterization and biosensing without explicit sequence knowledge.

    Science.gov (United States)

    Lee, Ju Seok; Chen, Junghuei; Deaton, Russell; Kim, Jin-Woo

    2014-01-01

    Genetic material extracted from in situ microbial communities has high promise as an indicator of biological system status. However, the challenge is to access genomic information from all organisms at the population or community scale to monitor the biosystem's state. Hence, there is a need for a better diagnostic tool that provides a holistic view of a biosystem's genomic status. Here, we introduce an in vitro methodology for genomic pattern classification of biological samples that taps large amounts of genetic information from all genes present and uses that information to detect changes in genomic patterns and classify them. We developed a biosensing protocol, termed Biological Memory, that has in vitro computational capabilities to "learn" and "store" genomic sequence information directly from genomic samples without knowledge of their explicit sequences, and that discovers differences in vitro between previously unknown inputs and learned memory molecules. The Memory protocol was designed and optimized based upon (1) common in vitro recombinant DNA operations using 20-base random probes, including polymerization, nuclease digestion, and magnetic bead separation, to capture a snapshot of the genomic state of a biological sample as a DNA memory and (2) the thermal stability of DNA duplexes between new input and the memory to detect similarities and differences. For efficient read out, a microarray was used as an output method. When the microarray-based Memory protocol was implemented to test its capability and sensitivity using genomic DNA from two model bacterial strains, i.e., Escherichia coli K12 and Bacillus subtilis, results indicate that the Memory protocol can "learn" input DNA, "recall" similar DNA, differentiate between dissimilar DNA, and detect relatively small concentration differences in samples. This study demonstrated not only the in vitro information processing capabilities of DNA, but also its promise as a genomic pattern classifier that could

  8. Feature extraction for dynamic integration of classifiers

    NARCIS (Netherlands)

    Pechenizkiy, M.; Tsymbal, A.; Puuronen, S.; Patterson, D.W.

    2007-01-01

    Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. In this paper, we present an algorithm for the dynamic integration of classifiers in the space of extracted features (FEDIC). It is based on the technique

  9. MARGINALIZATION OF SOUTH ASIANS BASED ON THE RACE AND SKIN COLOR IN BHARATI MUKHERJEE’S "JASMINE" AND CHITRA B. DIVAKARUNI’S "THE MISTRESS OF SPICES"

    Directory of Open Access Journals (Sweden)

    Iwona Filipczak

    2016-04-01

    Full Text Available The purpose of this article is to focus on the issue of marginalization of South Asians in the United States as portrayed in two novels written by writers of Indian origin: Bharati Mukherjee’s "Jasmine" and Chitra Bannerjee Divakaruni’s "The Mistress of Spices". It is investigated how race or skin color are the reasons for the marginalization of Indian immigrants in the United States. While "Jasmine" shows white Americans’ inability to embrace the racial difference of an Indian immigrant, which may be read as a reflection of the relative newness of this ethnic group in the United States and its shifting racial classification, "The Mistress of Spices" shows that the patterns of marginalization based on skin color may be developed already in the homeland, India, and then transferred to the US and confronted with the country’s racial diversity. Divakaruni’s novel raises a discussion of how the appreciation of whiteness developed in the country of birth leads to the hierarchical relations between the members of the Indian diaspora, and how it affects their relations with other American minorities. In this way, it shows that marginalization based on skin color is not only the outcome of inter-ethnic encounters but it can be an internal problem of this ethnic group as well.

  10. Machine Learning-based Individual Assessment of Cortical Atrophy Pattern in Alzheimer's Disease Spectrum: Development of the Classifier and Longitudinal Evaluation.

    Science.gov (United States)

    Lee, Jin San; Kim, Changsoo; Shin, Jeong-Hyeon; Cho, Hanna; Shin, Dae-Seock; Kim, Nakyoung; Kim, Hee Jin; Kim, Yeshin; Lockhart, Samuel N; Na, Duk L; Seo, Sang Won; Seong, Joon-Kyung

    2018-03-07

    To develop a new method for measuring Alzheimer's disease (AD)-specific similarity of cortical atrophy patterns at the individual level, we employed an individual-level machine learning algorithm. A total of 869 cognitively normal (CN) individuals and 473 patients with probable AD dementia who underwent high-resolution 3T brain MRI were included. We propose a machine learning-based method for measuring the similarity of an individual subject's cortical atrophy pattern with that of a representative AD patient cohort. In addition, we validated this similarity measure in two longitudinal cohorts consisting of 79 patients with amnestic mild cognitive impairment (aMCI) and 27 patients with probable AD dementia. The surface-based morphometry classifier for discriminating AD from CN showed sensitivity and specificity values of 87.1% and 93.3%, respectively. In the longitudinal validation study, aMCI converters had higher atrophy similarity at both baseline (p < 0.001) and first year visits (p < 0.001) relative to non-converters. Similarly, AD patients with faster decline had higher atrophy similarity than slower decliners at baseline (p = 0.042), first year (p = 0.028), and third year visits (p = 0.027). The AD-specific atrophy similarity measure is a novel approach for the prediction of dementia risk and for the evaluation of AD trajectories on an individual subject level.

  11. Energy-Efficient Neuromorphic Classifiers.

    Science.gov (United States)

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2016-10-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumptions promised by neuromorphic engineering are extremely low, comparable to those of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.

  12. Stack filter classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Hush, Don [Los Alamos National Laboratory

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification, their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.
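
    The analogy drawn in this record rests on weighted order statistics. As a small illustration (not the Stack Filter Classifier itself), the Python sketch below computes a weighted median and shows how it reduces to the ordinary sample median under uniform weights; the data and weights are arbitrary examples.

      import numpy as np

      def weighted_median(values, weights):
          """Smallest value v such that the weights of samples <= v reach half the total weight."""
          order = np.argsort(values)
          v, w = np.asarray(values, float)[order], np.asarray(weights, float)[order]
          cum = np.cumsum(w)
          return v[np.searchsorted(cum, 0.5 * cum[-1])]

      x = [3.0, 7.0, 1.0, 4.0, 9.0]
      print(weighted_median(x, [1, 1, 1, 1, 1]))   # uniform weights: the plain median, 4.0
      print(weighted_median(x, [1, 5, 1, 1, 1]))   # a heavy weight pulls the output to 7.0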

  13. The ACTTION-American Pain Society Pain Taxonomy (AAPT): an evidence-based and multidimensional approach to classifying chronic pain conditions.

    Science.gov (United States)

    Fillingim, Roger B; Bruehl, Stephen; Dworkin, Robert H; Dworkin, Samuel F; Loeser, John D; Turk, Dennis C; Widerstrom-Noga, Eva; Arnold, Lesley; Bennett, Robert; Edwards, Robert R; Freeman, Roy; Gewandter, Jennifer; Hertz, Sharon; Hochberg, Marc; Krane, Elliot; Mantyh, Patrick W; Markman, John; Neogi, Tuhina; Ohrbach, Richard; Paice, Judith A; Porreca, Frank; Rappaport, Bob A; Smith, Shannon M; Smith, Thomas J; Sullivan, Mark D; Verne, G Nicholas; Wasan, Ajay D; Wesselmann, Ursula

    2014-03-01

    Current approaches to classification of chronic pain conditions suffer from the absence of a systematically implemented and evidence-based taxonomy. Moreover, existing diagnostic approaches typically fail to incorporate available knowledge regarding the biopsychosocial mechanisms contributing to pain conditions. To address these gaps, the Analgesic, Anesthetic, and Addiction Clinical Trial Translations Innovations Opportunities and Networks (ACTTION) public-private partnership with the U.S. Food and Drug Administration and the American Pain Society (APS) have joined together to develop an evidence-based chronic pain classification system called the ACTTION-APS Pain Taxonomy. This paper describes the outcome of an ACTTION-APS consensus meeting, at which experts agreed on a structure for this new taxonomy of chronic pain conditions. Several major issues around which discussion revolved are presented and summarized, and the structure of the taxonomy is presented. ACTTION-APS Pain Taxonomy will include the following dimensions: 1) core diagnostic criteria; 2) common features; 3) common medical comorbidities; 4) neurobiological, psychosocial, and functional consequences; and 5) putative neurobiological and psychosocial mechanisms, risk factors, and protective factors. In coming months, expert working groups will apply this taxonomy to clusters of chronic pain conditions, thereby developing a set of diagnostic criteria that have been consistently and systematically implemented across nearly all common chronic pain conditions. It is anticipated that the availability of this evidence-based and mechanistic approach to pain classification will be of substantial benefit to chronic pain research and treatment. The ACTTION-APS Pain Taxonomy is an evidence-based chronic pain classification system designed to classify chronic pain along the following dimensions: 1) core diagnostic criteria; 2) common features; 3) common medical comorbidities; 4) neurobiological, psychosocial

  14. Structural interpretation of the Konkan basin, southwestern continental margin of India, based on magnetic and bathymetric data

    Digital Repository Service at National Institute of Oceanography (India)

    Subrahmanyam, V.; Krishna, K.S.; Murty, G.P.S.; Rao, D.G.; Ramana, M.V.; Rao, M.G.

    Magnetic and bathymetric studies on the Konkan basin of the southwestern continental margin of India reveal prominent NNW-SSE, NW-SE, ENE-WSW, and WNW-ESE structural trends. The crystalline basement occurs at about 5-6 km below the mean sea level. A...

  15. Prevalence, risk factors and outcomes of velamentous and marginal cord insertions: a population-based study of 634,741 pregnancies.

    Directory of Open Access Journals (Sweden)

    Cathrine Ebbing

    Full Text Available OBJECTIVES: To determine the prevalence of, and risk factors for anomalous insertions of the umbilical cord, and the risk for adverse outcomes of these pregnancies. DESIGN: Population-based registry study. SETTING: Medical Birth Registry of Norway 1999-2009. POPULATION: All births (gestational age >16 weeks to <45 weeks) in Norway (623,478 singletons and 11,263 pairs of twins). METHODS: Descriptive statistics and odds ratios (ORs) for risk factors and adverse outcomes based on logistic regressions adjusted for confounders. MAIN OUTCOME MEASURES: Velamentous or marginal cord insertion. Abruption of the placenta, placenta praevia, pre-eclampsia, preterm birth, operative delivery, low Apgar score, transferral to neonatal intensive care unit (NICU), malformations, birthweight, and perinatal death. RESULTS: The prevalence of abnormal cord insertion was 7.8% (1.5% velamentous, 6.3% marginal) in singleton pregnancies and 16.9% (6% velamentous, 10.9% marginal) in twins. The two conditions shared risk factors; twin gestation and pregnancies conceived with the aid of assisted reproductive technology were the most important, while bleeding in pregnancy, advanced maternal age, maternal chronic disease, female foetus and previous pregnancy with anomalous cord insertion were other risk factors. Velamentous and marginal insertion was associated with an increased risk of adverse outcomes such as placenta praevia (OR = 3.7, 95% CI = 3.1-4.6) and placental abruption (OR = 2.6, 95% CI = 2.1-3.2). The risk of pre-eclampsia, preterm birth and delivery by acute caesarean was doubled, as was the risk of low Apgar score, transferral to NICU, low birthweight and malformations. For velamentous insertion the risk of perinatal death at term was tripled, OR = 3.3 (95% CI = 2.5-4.3). CONCLUSION: The prevalence of velamentous and marginal insertions of the umbilical cord was 7.8% in singletons and 16.9% in twin gestations, with marginal insertion being more

  16. SU-E-J-30: Benchmark Image-Based TCP Calculation for Evaluation of PTV Margins for Lung SBRT Patients

    Energy Technology Data Exchange (ETDEWEB)

    Li, M [Wayne State Univeristy, Detroit, MI (United States); Chetty, I [Henry Ford Health System, Detroit, MI (United States); Zhong, H [Henry Ford Hospital System, Detroit, MI (United States)

    2014-06-01

    Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans with 3 and 5 mm margins were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in PTV were between 0.28% and 6.8% for 3mm margin plans, and between 0.29% and 6.3% for 5mm-margin plans. As the PTV margin reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP errors decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients.

  17. SU-E-J-30: Benchmark Image-Based TCP Calculation for Evaluation of PTV Margins for Lung SBRT Patients

    International Nuclear Information System (INIS)

    Li, M; Chetty, I; Zhong, H

    2014-01-01

    Purpose: Tumor control probability (TCP) calculated with accumulated radiation doses may help design appropriate treatment margins. Image registration errors, however, may compromise the calculated TCP. The purpose of this study is to develop benchmark CT images to quantify registration-induced errors in the accumulated doses and their corresponding TCP. Methods: 4DCT images were registered from end-inhale (EI) to end-exhale (EE) using a “demons” algorithm. The demons DVFs were corrected by an FEM model to get realistic deformation fields. The FEM DVFs were used to warp the EI images to create the FEM-simulated images. The two images combined with the FEM DVF formed a benchmark model. Maximum intensity projection (MIP) images, created from the EI and simulated images, were used to develop IMRT plans. Two plans with 3 and 5 mm margins were developed for each patient. With these plans, radiation doses were recalculated on the simulated images and warped back to the EI images using the FEM DVFs to get the accumulated doses. The Elastix software was used to register the FEM-simulated images to the EI images. TCPs calculated with the Elastix-accumulated doses were compared with those generated by the FEM to get the TCP error of the Elastix registrations. Results: For six lung patients, the mean Elastix registration error ranged from 0.93 to 1.98 mm. Their relative dose errors in PTV were between 0.28% and 6.8% for 3mm margin plans, and between 0.29% and 6.3% for 5mm-margin plans. As the PTV margin reduced from 5 to 3 mm, the mean TCP error of the Elastix-reconstructed doses increased from 2.0% to 2.9%, and the mean NTCP errors decreased from 1.2% to 1.1%. Conclusion: Patient-specific benchmark images can be used to evaluate the impact of registration errors on the computed TCPs, and may help select appropriate PTV margins for lung SBRT patients

  18. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier

    Science.gov (United States)

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research intends to propose a fully automatic algorithm for the classification of three-dimensional (3-D) optical coherence tomography (OCT) scans of patients suffering from abnormal macula from normal candidates. The method proposed does not require any denoising, segmentation, retinal alignment processes to assess the intraretinal layers, as well as abnormalities or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, which consists of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets are used for evaluation of the algorithm based on the unbiased fivefold cross-validation (CV) approach. The first set constitutes 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured from the Topcon device. The second publicly available set consists of 45 subjects with a distribution of 15 patients in age-related macular degeneration, DME, and normal classes from the Heidelberg device. With the application of the algorithm on overall OCT volumes and 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset1 as a two-class classification problem and 98.67% on dataset2 as a three-class classification task.
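
    A compact Python sketch of the general two-stage idea in this record, wavelet-domain features followed by an ensemble classifier, is given below. It substitutes hand-crafted wavelet-energy features for the learned CNN codes of the paper, and the OCT volumes, labels and parameter choices are simulated placeholders.

      import numpy as np
      import pywt
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      def wavelet_energy_features(bscan, wavelet="db2", level=2):
          """Energy of each wavelet subband of a single B-scan (a crude stand-in for CNN codes)."""
          coeffs = pywt.wavedec2(bscan, wavelet, level=level)
          feats = [np.mean(coeffs[0] ** 2)]
          for detail in coeffs[1:]:
              feats.extend(np.mean(band ** 2) for band in detail)
          return np.array(feats)

      rng = np.random.default_rng(1)
      # Placeholder "volumes": 60 subjects x 16 B-scans of 64x64 pixels each.
      volumes = rng.normal(size=(60, 16, 64, 64))
      labels = np.r_[np.zeros(30), np.ones(30)]            # 0 = normal, 1 = abnormal macula
      volumes[30:] += 0.5 * np.sin(np.linspace(0, 8 * np.pi, 64))   # give the abnormal class extra structure

      # Cumulative volume feature = mean of the per-B-scan features.
      X = np.array([np.mean([wavelet_energy_features(b) for b in vol], axis=0) for vol in volumes])

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      print(cross_val_score(clf, X, labels, cv=5).mean())   # five-fold CV accuracy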

  19. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier.

    Science.gov (United States)

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research intends to propose a fully automatic algorithm for the classification of three-dimensional (3-D) optical coherence tomography (OCT) scans of patients suffering from abnormal macula from normal candidates. The method proposed does not require any denoising, segmentation, retinal alignment processes to assess the intraretinal layers, as well as abnormalities or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, which consists of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets are used for evaluation of the algorithm based on the unbiased fivefold cross-validation (CV) approach. The first set constitutes 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured from the Topcon device. The second publicly available set consists of 45 subjects with a distribution of 15 patients in age-related macular degeneration, DME, and normal classes from the Heidelberg device. With the application of the algorithm on overall OCT volumes and 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset1 as a two-class classification problem and 98.67% on dataset2 as a three-class classification task. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  20. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.
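
    The recommendation in this record, invert the blurring only partially, with the amount of regularization chosen by cross-validation on classifier performance rather than by trying to recover the original signal, can be sketched in Python as follows. The blur kernel, ridge grid, signals and classifier are all assumptions of the illustration, not the authors' procedure.

      import numpy as np
      from numpy.fft import rfft, irfft
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      n, d = 200, 64
      t = np.linspace(0, 1, d)
      labels = np.r_[np.zeros(n, int), np.ones(n, int)]
      # Two classes of underlying 1-D signals with different dominant frequencies.
      signals = np.vstack([np.sin(2 * np.pi * (3 + k) * t) for k in labels])

      kernel = np.exp(-0.5 * (np.arange(d) - d // 2) ** 2 / 4.0)
      K = rfft(np.roll(kernel / kernel.sum(), -d // 2))          # transfer function of the blur
      observed = irfft(rfft(signals) * K, d) + 0.05 * rng.normal(size=signals.shape)

      def partial_deconvolution(x, ridge):
          """Damped inverse filter; ridge > 0 means we deliberately do not invert exactly."""
          return irfft(rfft(x) * np.conj(K) / (np.abs(K) ** 2 + ridge), d)

      clf = LogisticRegression(max_iter=2000)
      best = max((cross_val_score(clf, partial_deconvolution(observed, r), labels, cv=5).mean(), r)
                 for r in (1e-3, 1e-2, 1e-1, 1.0))
      print("best CV accuracy %.2f at ridge %.3f" % best)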

  1. Deconvolution When Classifying Noisy Data Involving Transformations.

    Science.gov (United States)

    Carroll, Raymond; Delaigle, Aurore; Hall, Peter

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  2. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond; Delaigle, Aurore; Hall, Peter

    2012-01-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  3. Defining and Classifying Interest Groups

    DEFF Research Database (Denmark)

    Baroni, Laura; Carroll, Brendan; Chalmers, Adam

    2014-01-01

    The interest group concept is defined in many different ways in the existing literature and a range of different classification schemes are employed. This complicates comparisons between different studies and their findings. One of the important tasks faced by interest group scholars engaged...... in large-N studies is therefore to define the concept of an interest group and to determine which classification scheme to use for different group types. After reviewing the existing literature, this article sets out to compare different approaches to defining and classifying interest groups with a sample...... in the organizational attributes of specific interest group types. As expected, our comparison of coding schemes reveals a closer link between group attributes and group type in narrower classification schemes based on group organizational characteristics than those based on a behavioral definition of lobbying....

  4. An aCGH classifier derived from BRCA1-mutated breast cancer and benefit of high-dose platinum-based chemotherapy in HER2-negative breast cancer patients

    NARCIS (Netherlands)

    Vollebergh, M. A.; Lips, E. H.; Nederlof, P. M.; Wessels, L. F. A.; Schmidt, M. K.; van Beers, E. H.; Cornelissen, S.; Holtkamp, M.; Froklage, F. E.; de Vries, E. G. E.; Schrama, J. G.; Wesseling, J.; van de Vijver, M. J.; van Tinteren, H.; de Bruin, M.; Hauptmann, M.; Rodenhuis, S.; Linn, S. C.

    Patients and methods: We evaluated this classifier in stage III breast cancer patients, who had been randomly assigned between adjuvant high-dose platinum-based (HD-PB) chemotherapy, a DSB-inducing regimen, and conventional anthracycline-based chemotherapy. Additionally, we assessed BRCA1 loss

  5. Human factors quantification via boundary identification of flight performance margin

    Directory of Open Access Journals (Sweden)

    Yang Changpeng

    2014-08-01

    Full Text Available A systematic methodology including a computational pilot model and a pattern recognition method is presented to identify the boundary of the flight performance margin for quantifying the human factors. The pilot model is proposed to correlate a set of quantitative human factors which represent the attributes and characteristics of a group of pilots. Three information processing components which are influenced by human factors are modeled: information perception, decision making, and action execution. By treating the human factors as stochastic variables that follow appropriate probability density functions, the effects of human factors on flight performance can be investigated through Monte Carlo (MC) simulation. A kernel density estimation algorithm is selected to find and rank the influential human factors. Subsequently, human factors are quantified through identifying the boundary of the flight performance margin by the k-nearest neighbor (k-NN) classifier. Simulation-based analysis shows that flight performance can be dramatically improved with the quantitative human factors.
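
    The boundary-identification step described in this record can be illustrated with a small Python sketch: sample stochastic human-factor variables, run a Monte Carlo pass through a toy performance model, and fit a k-NN classifier whose decision boundary approximates the flight performance margin. The two factors, the performance rule and the threshold are invented for the example.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(3)
      n = 5000
      # Two illustrative human factors: reaction delay (s) and control-error gain.
      delay = rng.lognormal(mean=-0.5, sigma=0.3, size=n)
      gain = rng.normal(loc=1.0, scale=0.2, size=n)
      X = np.column_stack([delay, gain])

      # Toy performance model: the MC "simulation" flags a margin violation when the
      # combined effect of slow reaction and poor gain exceeds a fixed threshold.
      violation = (1.5 * delay + 3.0 * np.abs(gain - 1.0) > 2.0).astype(int)

      knn = KNeighborsClassifier(n_neighbors=15).fit(X, violation)

      # Query the boundary: does a pilot with 0.7 s delay and gain 1.1 stay within the margin?
      print(knn.predict([[0.7, 1.1]]))          # 0 = within margin, 1 = margin exceeded
      print(knn.predict_proba([[0.7, 1.1]]))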

  6. Marginal integrity of low-shrinkage and methacrylate-based composite resins: Effect of three different hemostatic agents

    Science.gov (United States)

    Khoroushi, Maryam; Sahraneshin-Samani, Mahsa

    2016-01-01

    Background Moisture control is very important in restorative procedures in dentistry. Use of hemostatic agents helps control moisture; however, they might result in changes on enamel and dentin surfaces, affecting composite resin bond quality. The aim of this in vitro study was to evaluate the marginal microleakage of two different composite resins with the use of three different hemostatic agents. Material and Methods Standardized Class V cavities were prepared on the buccal and lingual surfaces of 48 premolars with cervical margins 1 mm apical to the cementoenamel junction (CEJ). The samples were randomly divided into 8 groups. In groups 1 to 4, an etch-and-rinse adhesive (Adper Single Bond) was applied as the bonding system, followed by exposure to different hemostatic agents: group 1: no hemostatic agent (control); group 2: ViscoStat; group 3: ViscoStat Clear; and group 4: trichloracetic acid. The cavities were restored with Z-250 composite resin. In groups 5 to 8, Silorane System Adhesive (Filtek P90 Adhesive) was applied as the bonding agent, followed by exposure to hemostatic agents in a manner similar to that in groups 1 to 4. The cavities were restored with Filtek P90, a low-shrinkage composite resin. The samples in each group were evaluated for dye penetration under a stereomicroscope at ×36 after 24 hours and a 500-round thermocycling procedure at enamel and dentin margins. Statistical analysis was carried out using Kruskal-Wallis and Mann-Whitney tests (α=0.05). Results Z-250 composite resin exhibited significantly higher dentin microleakage scores compared to Filtek P90 (P = 0.004). Trichloracetic acid increased dentin microleakage with Filtek P90 (P=0.033). Conclusions Under the limitations of this in vitro study, application of hemostatic agents did not affect microleakage of the two tested composite resins, except for trichloracetic acid, which increased marginal microleakage when used with Filtek P90.

  7. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

    We consider the binary classification problem in the imbalanced case where the number of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads...... to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...

  8. Convexity and Marginal Vectors

    NARCIS (Netherlands)

    van Velzen, S.; Hamers, H.J.M.; Norde, H.W.

    2002-01-01

    In this paper we construct sets of marginal vectors of a TU game with the property that if the marginal vectors from these sets are core elements, then the game is convex. This approach leads to new upper bounds on the number of marginal vectors needed to characterize convexity. Another result is that
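
    For readers unfamiliar with the objects involved, the Python sketch below computes all marginal vectors of a small three-player TU game and checks whether each one is a core element, the property that the result above relates to convexity. The characteristic function used here is an arbitrary (convex) example.

      from itertools import permutations

      # Characteristic function of a 3-player TU game: value of each coalition (as a frozenset).
      v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
           frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 4,
           frozenset({1, 2, 3}): 9}
      players = [1, 2, 3]

      def marginal_vector(order):
          """Pay each player his marginal contribution when entering in the given order."""
          pay, seen = {}, frozenset()
          for p in order:
              pay[p] = v[seen | {p}] - v[seen]
              seen = seen | {p}
          return pay

      def in_core(pay):
          """Core test: every coalition receives at least its stand-alone value."""
          return all(sum(pay[p] for p in coalition) >= v[coalition] for coalition in v)

      for order in permutations(players):
          m = marginal_vector(order)
          print(order, m, "core" if in_core(m) else "not in core")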

  9. "We call ourselves marginalized"

    DEFF Research Database (Denmark)

    Jørgensen, Nanna Jordt

    2014-01-01

    of the people we refer to as marginalized. In this paper, I discuss how young secondary school graduates from a pastoralist community in Kenya use and negotiate indigeneity, marginal identity, and experiences of marginalization in social navigations aimed at broadening their current and future opportunities. I...

  10. Marginal and Internal Discrepancies of Posterior Zirconia-Based Crowns Fabricated with Three Different CAD/CAM Systems Versus Metal-Ceramic.

    Science.gov (United States)

    Ortega, Rocio; Gonzalo, Esther; Gomez-Polo, Miguel; Suárez, María J

    2015-01-01

    The aim of this study was to analyze the marginal and internal fit of metal-ceramic and zirconia-based crowns. Forty standardized steel specimens were prepared to receive posterior crowns and randomly divided into four groups (n = 10): (1) metal-ceramic, (2) NobelProcera Zirconia, (3) Lava Zirconia, and (4) VITA In-Ceram YZ. All crowns were cemented with a glass-ionomer agent and sectioned buccolingually. A scanning electron microscope was used for measurements. Kruskal-Wallis and Wilcoxon signed rank tests (α = .05) were conducted. Significant differences (P < .0001) in marginal discrepancies were observed between the metal-ceramic and zirconia groups. No differences were found for the axial wall fit (P = .057). Significant differences were shown among the groups in discrepancies at the occlusal cusp (P = .0012) and at the fossa (P = .0062). No differences were observed between surfaces. All zirconia groups showed better values of marginal discrepancies than the metal-ceramic group. Procera Zirconia showed the lowest gaps.

  11. Consistency Analysis of Nearest Subspace Classifier

    OpenAIRE

    Wang, Yi

    2015-01-01

    The Nearest subspace classifier (NSS) finds an estimation of the underlying subspace within each class and assigns data points to the class that corresponds to its nearest subspace. This paper mainly studies how well NSS can be generalized to new samples. It is proved that NSS is strongly consistent under certain assumptions. For completeness, NSS is evaluated through experiments on various simulated and real data sets, in comparison with some other linear model based classifiers. It is also ...
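
    A minimal Python sketch of the nearest subspace rule, fit a low-dimensional subspace per class via SVD and assign a point to the class whose subspace leaves the smallest residual, is given below; the data, class count and subspace dimension are assumed for illustration.

      import numpy as np

      class NearestSubspaceClassifier:
          def __init__(self, dim=2):
              self.dim = dim                      # dimension of each class subspace

          def fit(self, X, y):
              self.classes_ = np.unique(y)
              self.means_, self.bases_ = {}, {}
              for c in self.classes_:
                  Xc = X[y == c]
                  mu = Xc.mean(axis=0)
                  # Leading right singular vectors span the class subspace.
                  _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
                  self.means_[c], self.bases_[c] = mu, Vt[: self.dim]
              return self

          def predict(self, X):
              residuals = []
              for c in self.classes_:
                  Z = X - self.means_[c]
                  proj = Z @ self.bases_[c].T @ self.bases_[c]
                  residuals.append(np.linalg.norm(Z - proj, axis=1))
              return self.classes_[np.argmin(np.array(residuals), axis=0)]

      rng = np.random.default_rng(4)
      A = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 10))          # class 0: points on a 2-D subspace
      B = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 10)) + 3.0    # class 1: shifted subspace
      X, y = np.vstack([A, B]), np.r_[np.zeros(100), np.ones(100)]
      print((NearestSubspaceClassifier(dim=2).fit(X, y).predict(X) == y).mean())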

  12. Molecular markers in the surgical margin of oral carcinomas

    DEFF Research Database (Denmark)

    Bilde, A.; Buchwald, C. von; Dabelsteen, E.

    2009-01-01

    epithelium in the surgical resection margin may explain the local recurrence rate. The purpose of this study is to investigate the presence of senescence markers, which may represent early malignant changes in the margin that in routine pathological evaluations are classified as histologically normal...

  13. A novel classifier based on three preoperative tumor markers predicting the cancer-specific survival of gastric cancer (CEA, CA19-9 and CA72-4).

    Science.gov (United States)

    Guo, Jing; Chen, Shangxiang; Li, Shun; Sun, Xiaowei; Li, Wei; Zhou, Zhiwei; Chen, Yingbo; Xu, Dazhi

    2018-01-12

    Several studies have highlighted the prognostic value of individual tumor markers and of their various combinations for gastric cancer (GC). Our study was designed to establish a novel model incorporating carcino-embryonic antigen (CEA), carbohydrate antigen 19-9 (CA19-9) and carbohydrate antigen 72-4 (CA72-4). A total of 1,566 GC patients (Primary cohort) between Jan 2000 and July 2013 were analyzed. The Primary cohort was randomly divided into a Training set (n=783) and a Validation set (n=783). A three-tumor marker classifier was developed in the Training set and validated in the Validation set by multivariate regression and risk-score analysis. We identified a three-tumor marker classifier (including CEA, CA19-9 and CA72-4) for the cancer-specific survival (CSS) of GC. The three-tumor marker classifier is closely associated with the CSS of GC and may serve as a novel model for future decisions concerning treatments.

  14. Marginal gap, cement thickness, and microleakage of 2 zirconia crown systems luted with glass ionomer and MDP-based cements.

    Science.gov (United States)

    Sener, Isil; Turker, Begum; Valandro, Luiz Felipe; Ozcan, Mutlu

    2014-01-01

    This in vitro study evaluated the marginal gap, cement thickness, and microleakage of glass-ionomer cement (GIC) and phosphate monomer-containing resin cement (MDP-RC) under 2 zirconia crown systems (Cercon and DC-Zirkon). Forty human premolars were prepared for all-ceramic zirconia crowns with a 1 mm circumferential finish line and a 1.5 mm occlusal reduction. The crowns (n = 10 per group) from each zirconia system were randomly divided into 2 groups and cemented either with GIC (Vivaglass CEM) or MDP-RC (Panavia F 2.0) cement. The cemented crowns were thermocycled 5000 times (5°-55°C). The crowns were immersed in 0.5% basic fuchsine dye solution for 24 hours and sectioned buccolingually and mesiodistally. Specimens were examined under optical microscope (100X). Data were analyzed using Student t-test and chi-square tests (α = 0.05). Mean marginal gap values for Cercon (85 ± 11.4 μm) were significantly higher than for DC-Zircon (75.3 ± 13.2 μm) (P = 0.018). The mean cement thickness values of GIC (81.7 ± 13.9 μm) and MDP-RC (78.5 ± 12.5 μm) were not significantly different (P = 0.447). Microleakage scores did not demonstrate significant difference between GIC (P = 0.385) and MDP-RC (P = 0.631) under Cercon or DC-Zircon. Considering the cement thickness values and microleakage scores obtained, both zirconia crown systems could be cemented in combination with either GIC or MDP-RC.

  15. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function, through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors while overcoming the complexity involved in solving a constrained optimization problem, as usually occurs in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by
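
    The maximal-margin idea can be illustrated, in a much simplified form that is not the authors' MMGDX algorithm, by training a one-hidden-layer MLP with a hinge-type margin loss by plain gradient descent. The architecture, learning rate and toy data below are assumptions of the sketch.

      import numpy as np

      rng = np.random.default_rng(5)
      X = rng.normal(size=(400, 2))
      y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)        # XOR-like labels in {-1, +1}

      h_dim, lr = 8, 0.05
      W1, b1 = 0.5 * rng.normal(size=(2, h_dim)), np.zeros(h_dim)
      w2, b2 = 0.5 * rng.normal(size=h_dim), 0.0

      for _ in range(2000):
          H = np.tanh(X @ W1 + b1)                          # hidden layer
          s = H @ w2 + b2                                   # output score
          viol = (y * s < 1.0).astype(float)                # samples still inside the margin
          # Gradient of the hinge loss mean(max(0, 1 - y*s)) w.r.t. the score.
          ds = -(y * viol) / len(X)
          dw2, db2 = H.T @ ds, ds.sum()
          dH = np.outer(ds, w2)
          dZ = dH * (1.0 - H ** 2)                          # tanh derivative
          dW1, db1 = X.T @ dZ, dZ.sum(axis=0)
          W1 -= lr * dW1; b1 -= lr * db1; w2 -= lr * dw2; b2 -= lr * db2

      scores = np.tanh(X @ W1 + b1) @ w2 + b2
      print(f"training accuracy {np.mean(np.sign(scores) == y):.2f}, "
            f"smallest functional margin {np.min(y * scores):.2f}")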

  16. Local curvature analysis for classifying breast tumors: Preliminary analysis in dedicated breast CT

    International Nuclear Information System (INIS)

    Lee, Juhun; Nishikawa, Robert M.; Reiser, Ingrid; Boone, John M.; Lindfors, Karen K.

    2015-01-01

    Purpose: The purpose of this study is to measure the effectiveness of local curvature measures as novel image features for classifying breast tumors. Methods: A total of 119 breast lesions from 104 noncontrast dedicated breast computed tomography images of women were used in this study. Volumetric segmentation was done using a seed-based segmentation algorithm and then a triangulated surface was extracted from the resulting segmentation. Total, mean, and Gaussian curvatures were then computed. Normalized curvatures were used as classification features. In addition, traditional image features were also extracted and a forward feature selection scheme was used to select the optimal feature set. Logistic regression was used as a classifier and leave-one-out cross-validation was utilized to evaluate the classification performances of the features. The area under the receiver operating characteristic curve (AUC, area under curve) was used as a figure of merit. Results: Among curvature measures, the normalized total curvature (C_T) showed the best classification performance (AUC of 0.74), while the others showed no classification power individually. Five traditional image features (two shape, two margin, and one texture descriptors) were selected via the feature selection scheme and its resulting classifier achieved an AUC of 0.83. Among those five features, the radial gradient index (RGI), which is a margin descriptor, showed the best classification performance (AUC of 0.73). A classifier combining RGI and C_T yielded an AUC of 0.81, which showed similar performance (i.e., no statistically significant difference) to the classifier with the above five traditional image features. Additional comparisons in AUC values between classifiers using different combinations of traditional image features and C_T were conducted. The results showed that C_T was able to replace the other four image features for the classification task. Conclusions: The normalized curvature measure
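
    The evaluation protocol described in this record, logistic regression with leave-one-out cross-validation summarized by AUC, is outlined in the Python sketch below; the two features standing in for C_T and RGI are random placeholders, so the printed AUC has no clinical meaning.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(6)
      n = 119                                            # lesions, as in the study
      y = np.r_[np.zeros(60), np.ones(59)]               # 0 = benign, 1 = malignant (placeholder split)
      # Placeholder features standing in for normalized total curvature C_T and radial gradient index RGI.
      X = np.column_stack([rng.normal(loc=y, scale=1.2), rng.normal(loc=0.5 * y, scale=1.0)])

      clf = LogisticRegression()
      scores = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
      print("LOOCV AUC = %.2f" % roc_auc_score(y, scores))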

  17. Neural Network Classifiers for Local Wind Prediction.

    Science.gov (United States)

    Kretzschmar, Ralf; Eckert, Pierre; Cattani, Daniel; Eggimann, Fritz

    2004-05-01

    This paper evaluates the quality of neural network classifiers for wind speed and wind gust prediction with prediction lead times between +1 and +24 h. The predictions were realized based on local time series and model data. The selection of appropriate input features was initiated by time series analysis and completed by empirical comparison of neural network classifiers trained on several choices of input features. The selected input features involved day time, yearday, features from a single wind observation device at the site of interest, and features derived from model data. The quality of the resulting classifiers was benchmarked against persistence for two different sites in Switzerland. The neural network classifiers exhibited superior quality when compared with persistence judged on a specific performance measure, hit and false-alarm rates.

  18. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle component-based factor analysis

    Directory of Open Access Journals (Sweden)

    C. A. Stroud

    2012-09-01

    Full Text Available Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two rural sites (Harrow and Bear Creek, ON) to derive hydrocarbon-like organic aerosol (HOA) factors. A novel diagnostic model evaluation was performed by investigating model POA bias as a function of HOA mass concentration and indicator ratios (e.g. BC/HOA). Eight case studies were selected based on factor analysis and back trajectories to help classify model bias for certain POA source types. By considering model POA bias in relation to co-located BC and CO biases, a plausible story is developed that explains the model biases for all three species.

    At the rural sites, daytime mean PM1 POA mass concentrations were under-predicted compared to observed HOA concentrations. POA under-predictions were accentuated when the transport arriving at the rural sites was from the Detroit/Windsor urban complex and for short-term periods of biomass burning influence. Interestingly, the daytime CO concentrations were only slightly under-predicted at both rural sites, whereas CO was over-predicted at the urban Windsor site with a normalized mean bias of 134%, while good agreement was observed at Windsor for the comparison of daytime PM1 POA and HOA mean values, 1.1 μg m−3 and 1.2 μg m−3, respectively. Biases in model POA predictions also trended from positive to negative with increasing HOA values. Periods of POA over-prediction were most evident at the urban site on calm nights due to an overly-stable model surface layer

  19. An aCGH classifier derived from BRCA1-mutated breast cancer and benefit of high-dose platinum-based chemotherapy in HER2-negative breast cancer patients

    NARCIS (Netherlands)

    Vollebergh, M. A.; Lips, E. H.; Nederlof, P. M.; Wessels, L. F. A.; Schmidt, M. K.; van Beers, E. H.; Cornelissen, S.; Holtkamp, M.; Froklage, F. E.; de Vries, E. G. E.; Schrama, J. G.; Wesseling, J.; van de Vijver, M. J.; van Tinteren, H.; de Bruin, Michiel; Hauptmann, M.; Rodenhuis, S.; Linn, S. C.

    2011-01-01

    Breast cancer cells deficient for BRCA1 are hypersensitive to agents inducing DNA double-strand breaks (DSB), such as bifunctional alkylators and platinum agents. Earlier, we had developed a comparative genomic hybridisation (CGH) classifier based on BRCA1-mutated breast cancers. We hypothesised

  20. Improving predictions of protein-protein interfaces by combining amino acid-specific classifiers based on structural and physicochemical descriptors with their weighted neighbor averages.

    Directory of Open Access Journals (Sweden)

    Fábio R de Moraes

    Full Text Available Protein-protein interactions are involved in nearly all regulatory processes in the cell and are considered one of the most important issues in molecular biology and pharmaceutical sciences but are still not fully understood. Structural and computational biology contributed greatly to the elucidation of the mechanism of protein interactions. In this paper, we present a collection of the physicochemical and structural characteristics that distinguish interface-forming residues (IFR) from free surface residues (FSR). We formulated a linear discriminative analysis (LDA) classifier to assess whether chosen descriptors from the BlueStar STING database (http://www.cbi.cnptia.embrapa.br/SMS/) are suitable for such a task. Receiver operating characteristic (ROC) analysis indicates that the particular physicochemical and structural descriptors used for building the linear classifier perform much better than a random classifier and in fact, successfully outperform some of the previously published procedures, whose performance indicators were recently compared by other research groups. The results presented here show that the selected set of descriptors can be utilized to predict IFRs, even when homologue proteins are missing (particularly important for orphan proteins where no homologue is available for comparative analysis/indication) or, when certain conformational changes accompany interface formation. The development of amino acid type specific classifiers is shown to increase IFR classification performance. Also, we found that the addition of an amino acid conservation attribute did not improve the classification prediction. This result indicates that the increase in predictive power associated with amino acid conservation is exhausted by adequate use of an extensive list of independent physicochemical and structural parameters that, by themselves, fully describe the nano-environment at protein-protein interfaces. The IFR classifier developed in this study
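
    The classification step described in this record, an LDA model over per-residue physicochemical and structural descriptors evaluated by ROC analysis, can be outlined in Python as follows; the descriptor values are synthetic placeholders rather than BlueStar STING data, and the class sizes, descriptor count and split are assumptions.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(7)
      n_ifr, n_fsr, n_desc = 800, 2400, 12               # interface-forming vs free-surface residues
      X = np.vstack([rng.normal(0.4, 1.0, (n_ifr, n_desc)),    # IFR descriptors (placeholder)
                     rng.normal(0.0, 1.0, (n_fsr, n_desc))])   # FSR descriptors (placeholder)
      y = np.r_[np.ones(n_ifr), np.zeros(n_fsr)]

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
      lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
      auc = roc_auc_score(y_te, lda.decision_function(X_te))
      print("ROC AUC of the LDA interface-residue classifier: %.2f" % auc)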

  1. Improving predictions of protein-protein interfaces by combining amino acid-specific classifiers based on structural and physicochemical descriptors with their weighted neighbor averages.

    Science.gov (United States)

    de Moraes, Fábio R; Neshich, Izabella A P; Mazoni, Ivan; Yano, Inácio H; Pereira, José G C; Salim, José A; Jardine, José G; Neshich, Goran

    2014-01-01

    Protein-protein interactions are involved in nearly all regulatory processes in the cell and are considered one of the most important issues in molecular biology and pharmaceutical sciences but are still not fully understood. Structural and computational biology contributed greatly to the elucidation of the mechanism of protein interactions. In this paper, we present a collection of the physicochemical and structural characteristics that distinguish interface-forming residues (IFR) from free surface residues (FSR). We formulated a linear discriminative analysis (LDA) classifier to assess whether chosen descriptors from the BlueStar STING database (http://www.cbi.cnptia.embrapa.br/SMS/) are suitable for such a task. Receiver operating characteristic (ROC) analysis indicates that the particular physicochemical and structural descriptors used for building the linear classifier perform much better than a random classifier and in fact, successfully outperform some of the previously published procedures, whose performance indicators were recently compared by other research groups. The results presented here show that the selected set of descriptors can be utilized to predict IFRs, even when homologue proteins are missing (particularly important for orphan proteins where no homologue is available for comparative analysis/indication) or, when certain conformational changes accompany interface formation. The development of amino acid type specific classifiers is shown to increase IFR classification performance. Also, we found that the addition of an amino acid conservation attribute did not improve the classification prediction. This result indicates that the increase in predictive power associated with amino acid conservation is exhausted by adequate use of an extensive list of independent physicochemical and structural parameters that, by themselves, fully describe the nano-environment at protein-protein interfaces. The IFR classifier developed in this study is now

  2. Improving Predictions of Protein-Protein Interfaces by Combining Amino Acid-Specific Classifiers Based on Structural and Physicochemical Descriptors with Their Weighted Neighbor Averages

    Science.gov (United States)

    de Moraes, Fábio R.; Neshich, Izabella A. P.; Mazoni, Ivan; Yano, Inácio H.; Pereira, José G. C.; Salim, José A.; Jardine, José G.; Neshich, Goran

    2014-01-01

    Protein-protein interactions are involved in nearly all regulatory processes in the cell and are considered one of the most important issues in molecular biology and pharmaceutical sciences but are still not fully understood. Structural and computational biology contributed greatly to the elucidation of the mechanism of protein interactions. In this paper, we present a collection of the physicochemical and structural characteristics that distinguish interface-forming residues (IFR) from free surface residues (FSR). We formulated a linear discriminative analysis (LDA) classifier to assess whether chosen descriptors from the BlueStar STING database (http://www.cbi.cnptia.embrapa.br/SMS/) are suitable for such a task. Receiver operating characteristic (ROC) analysis indicates that the particular physicochemical and structural descriptors used for building the linear classifier perform much better than a random classifier and in fact, successfully outperform some of the previously published procedures, whose performance indicators were recently compared by other research groups. The results presented here show that the selected set of descriptors can be utilized to predict IFRs, even when homologue proteins are missing (particularly important for orphan proteins where no homologue is available for comparative analysis/indication) or, when certain conformational changes accompany interface formation. The development of amino acid type specific classifiers is shown to increase IFR classification performance. Also, we found that the addition of an amino acid conservation attribute did not improve the classification prediction. This result indicates that the increase in predictive power associated with amino acid conservation is exhausted by adequate use of an extensive list of independent physicochemical and structural parameters that, by themselves, fully describe the nano-environment at protein-protein interfaces. The IFR classifier developed in this study is now

  3. Recognize and classify pneumoconiosis

    International Nuclear Information System (INIS)

    Hering, K.G.; Hofmann-Preiss, K.

    2014-01-01

    In 2012, 6 of the 10 most frequently recognized occupational diseases were forms of pneumoconiosis. With respect to healthcare and economic aspects, silicosis and asbestos-associated diseases are of foremost importance. The latter are found everywhere and are not restricted to large industrial areas. Radiology has a central role in the diagnosis and evaluation of occupational lung disorders. In cases of known exposure, mainly to asbestos and quartz, the diagnosis of pneumoconiosis will, with few exceptions, be established primarily by the radiological findings. As these disorders are asymptomatic for a long time, they are quite often detected as incidental findings in examinations performed for other reasons. Therefore, radiologists have to be familiar with the patterns of findings of the most frequent forms of pneumoconiosis and with the differential diagnoses. To ensure equal treatment of the insured, quality-based, standardized performance, documentation and evaluation of radiological examinations are required in preventive procedures and assessments. Above all, a standardized low-dose protocol, individualized with respect to dose, has to be used in computed tomography (CT) examinations in order to keep radiation exposure as low as possible for the patient. The International Labour Office (ILO) classification for the coding of chest X-rays and the International Classification of Occupational and Environmental Respiratory Diseases (ICOERD), used since 2004 for CT examinations, meet the requirements of the insured and the occupational insurance associations by providing reproducible and comparable data for decision-making. (orig.) [de

  4. Fingerprint prediction using classifier ensembles

    CSIR Research Space (South Africa)

    Molale, P

    2011-11-01

    Full Text Available Logistic discrimination (LgD), k-nearest neighbour (k-NN), artificial neural network (ANN), association rules (AR), decision tree (DT), naive Bayes classifier (NBC) and the support vector machine (SVM). The performance of several multiple classifier systems...
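
    A generic Python sketch of the multiple-classifier-system idea referred to in this record, majority voting over several of the listed base learners, is shown below with synthetic data; it is not the authors' fingerprint pipeline, and the chosen estimators and parameters are illustrative.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import VotingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=600, n_features=20, random_state=0)
      ensemble = VotingClassifier(estimators=[
          ("lgd", LogisticRegression(max_iter=1000)),   # logistic discrimination
          ("knn", KNeighborsClassifier()),
          ("dt", DecisionTreeClassifier(random_state=0)),
          ("nbc", GaussianNB()),
      ], voting="hard")
      print(cross_val_score(ensemble, X, y, cv=5).mean())   # mean 5-fold CV accuracy of the ensemble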

  5. Sustainability of Hydrogen Supply Chain. Part II: Prioritizing and Classifying the Sustainability of Hydrogen Supply Chains based on the Combination of Extension Theory and AHP

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Manzardo, Alessandro; Toniolo, Sara

    2013-01-01

    The purpose of this study is to develop a method for prioritizing and classifying the sustainability of hydrogen supply chains and assist decision-making for the stakeholders/decision-makers. Multiple criteria for sustainability assessment of hydrogen supply chains are considered and multiple...... decision-makers are allowed to participate in the decision-making using linguistic terms. In this study, extension theory and analytic hierarchy process are combined to rate the sustainability of hydrogen supply chains. The sustainability of hydrogen supply chains could be identified according...

  6. Submarine geo-hazards on the eastern Sardinia-Corsica continental margin based on preliminary pipeline route investigation

    Science.gov (United States)

    Cecchini, S.; Taliana, D.; Giacomini, L.; Herisson, C.; Bonnemaire, B.

    2011-03-01

    The understanding of the morphology and the shallow geo-hazards of the seafloor is a major focus for both academic and private industry research. In November and December 2009, a geophysical pipeline survey was carried out by Fugro Oceansismica S.p.A. (FOSPA) and FUGRO France (FFSA) for DORIS Engineering on behalf of GRTgaz (Engineering centre, Transmission Pipe Department; http://www.grtgaz.com), which is currently investigating the possibility of laying a pipeline between Sardinia and Corsica as a spur line from the planned GALSI Project. The Project, "Alimentation de la Corse en gaz naturel", consists of a corridor 100 km long and 1.0 km wide along the Corsica-Sardinia shelf. The integration of the multibeam, sidescan sonar and sparker data provided high-resolution seafloor mapping for geo-hazard assessment. In this article, the data acquired along a break-of-slope section (approximately 20 km × 1.5 km) in the eastern sector of the Strait of Bonifacio are described. The area was abandoned during the survey because of its unsuitability. Indeed, in this area the continental shelf, approximately 100 m deep and deepening gently eastward, is characterized by an uneven morphology, with different seabed features such as beachrocks, mainly NNW-SSE oriented. Also, the continuity of the continental margin, identified at around -110/-115 m, is interrupted by four canyon heads which incise the slope and are associated with glide deposits.

  7. Refining margins and prospects

    International Nuclear Information System (INIS)

    Baudouin, C.; Favennec, J.P.

    1997-01-01

    Refining margins throughout the world remained low in 1996. In Europe, in spite of an improvement, particularly during the last few weeks, they are still not high enough to finance new investments. Although the demand for petroleum products is increasing, experts remain sceptical about any rapid recovery because of prevailing overcapacity and continuing capacity growth. After a historical review of margins and an analysis of margins by region, we analyse refining over-capacities in Europe and the imbalances between production and demand. We then discuss the current situation concerning barriers to rationalization, agreements between oil companies, and the consequences for the future of refining capacities and margins. (author)

  8. Classified

    CERN Multimedia

    Computer Security Team

    2011-01-01

    In the last issue of the Bulletin, we have discussed recent implications for privacy on the Internet. But privacy of personal data is just one facet of data protection. Confidentiality is another one. However, confidentiality and data protection are often perceived as not relevant in the academic environment of CERN.   But think twice! At CERN, your personal data, e-mails, medical records, financial and contractual documents, MARS forms, group meeting minutes (and of course your password!) are all considered to be sensitive, restricted or even confidential. And this is not all. Physics results, in particular when being preliminary and pending scrutiny, are sensitive, too. Just recently, an ATLAS collaborator copy/pasted the abstract of an ATLAS note onto an external public blog, despite the fact that this document was clearly marked as an "Internal Note". Such an act was not only embarrassing to the ATLAS collaboration, and had negative impact on CERN’s reputation --- i...

  9. Assessment of the Agronomic Feasibility of Bioenergy Crop Cultivation on Marginal and Polluted Land: A GIS-Based Suitability Study from the Sulcis Area, Italy

    Directory of Open Access Journals (Sweden)

    Giuseppe Pulighe

    2016-10-01

    Full Text Available In the context of environmental sustainability, there has been increasing interest in bioenergy production from renewable resources, and it is expected that European biofuel production from energy crops will increase as a consequence of the achievement of policy targets. The aim of this paper is to assess the agronomic feasibility of biomass crop cultivation to provide profitable renewable feedstocks in a marginal and heavy-metal-polluted area located in the Sulcis district, Sardinia (Italy). Results from a literature review and unpublished data from field trials carried out in Sardinia were analysed to establish the main agronomic traits of the crops (e.g., yield potential and input requirements). A Geographical Information System (GIS)-based procedure with remotely sensed data is also used to evaluate land suitability and actual land use/cover, considering a future scenario of expansion of energy crops on these marginal areas while avoiding potential conflicts with food production. The results of the review suggest that giant reed, native perennial grasses and milk thistle are the most suitable energy crops for this area. The land suitability analysis shows that about 5700 ha and 1000 ha could be available for feedstock cultivation in the study area and in the most polluted area, respectively. The results obtained from the land suitability process and agronomic evaluation will serve as a basis to support technical and economic feasibility studies, as well as the evaluation of the environmental sustainability of cultivation in the study area.

  10. Three data partitioning strategies for building local classifiers (Chapter 14)

    NARCIS (Netherlands)

    Zliobaite, I.; Okun, O.; Valentini, G.; Re, M.

    2011-01-01

    Divide-and-conquer approach has been recognized in multiple classifier systems aiming to utilize local expertise of individual classifiers. In this study we experimentally investigate three strategies for building local classifiers that are based on different routines of sampling data for training.

  11. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
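
    The fusion step described above combines the image classifier's and the discrete-text classifier's outputs with Bayes' theory. The minimal sketch below illustrates one common way such a fusion can be written, assuming the two modalities are conditionally independent given the class; the probabilities used are hypothetical placeholders, not values from the paper.

```python
# Hedged sketch: fusing two classifiers' posterior estimates with Bayes' rule,
# assuming text and image evidence are conditionally independent given the class.
# All probabilities below are hypothetical placeholders, not values from the paper.

def bayes_fusion(p_text: float, p_image: float, prior: float = 0.5) -> float:
    """Combine P(class | text) and P(class | image) into a single posterior.

    Each single-modality posterior is converted back to a likelihood ratio
    (dividing out the shared prior); the ratios are then multiplied and
    renormalised, in the usual naive-Bayes style.
    """
    lr_text = (p_text / (1 - p_text)) / (prior / (1 - prior))
    lr_image = (p_image / (1 - p_image)) / (prior / (1 - prior))
    # Fused odds = prior odds * product of likelihood ratios.
    odds = (prior / (1 - prior)) * lr_text * lr_image
    return odds / (1 + odds)

if __name__ == "__main__":
    # Example: text classifier fairly confident (0.8), image classifier less so (0.6).
    print(round(bayes_fusion(0.8, 0.6), 3))   # -> 0.857
```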

  12. Classifying Sluice Occurrences in Dialogue

    DEFF Research Database (Denmark)

    Baird, Austin; Hamza, Anissa; Hardt, Daniel

    2018-01-01

    perform manual annotation with acceptable inter-coder agreement. We build classifier models with Decision Trees and Naive Bayes, with an accuracy of 67%. We deploy a classifier to automatically classify sluice occurrences in OpenSubtitles, resulting in a corpus with 1.7 million occurrences. This will support.... Despite this, the corpus can be of great use in research on sluicing and development of systems, and we are making the corpus freely available on request. Furthermore, we are in the process of improving the accuracy of sluice identification and annotation for the purpose of creating a subsequent version...

  13. Robust Template Decomposition without Weight Restriction for Cellular Neural Networks Implementing Arbitrary Boolean Functions Using Support Vector Classifiers

    Directory of Open Access Journals (Sweden)

    Yih-Lon Lin

    2013-01-01

    Full Text Available If the given Boolean function is linearly separable, a robust uncoupled cellular neural network can be designed as a maximal margin classifier. On the other hand, if the given Boolean function is linearly separable but has a small geometric margin or it is not linearly separable, a popular approach is to find a sequence of robust uncoupled cellular neural networks implementing the given Boolean function. In the past research works using this approach, the control template parameters and thresholds are restricted to assume only a given finite set of integers, and this is certainly unnecessary for the template design. In this study, we try to remove this restriction. Minterm- and maxterm-based decomposition algorithms utilizing the soft margin and maximal margin support vector classifiers are proposed to design a sequence of robust templates implementing an arbitrary Boolean function. Several illustrative examples are simulated to demonstrate the efficiency of the proposed method by comparing our results with those produced by other decomposition methods with restricted weights.
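
    For the linearly separable case mentioned above, a maximal-margin classifier on the Boolean function's truth table yields the template weights and threshold directly. The sketch below is only an illustration of that idea: it approximates a hard (maximal) margin with a large C in scikit-learn's linear SVC for a 3-input AND function coded with -1/+1 inputs, which is an assumption of this example rather than the paper's exact procedure.

```python
# Hedged sketch: deriving an uncoupled-CNN-style template for a linearly
# separable Boolean function (3-input AND) as a maximal-margin classifier.
# A hard margin is approximated with a large C in scikit-learn's linear SVC.
import numpy as np
from itertools import product
from sklearn.svm import SVC

# Truth table of AND(u1, u2, u3) with inputs coded as -1/+1.
X = np.array(list(product([-1, 1], repeat=3)), dtype=float)
y = np.where(X.min(axis=1) > 0, 1, -1)        # +1 only for (1, 1, 1)

svm = SVC(kernel="linear", C=1e6)             # large C ~ hard (maximal) margin
svm.fit(X, y)

w, b = svm.coef_[0], svm.intercept_[0]        # play the role of template weights and threshold
margin = 2.0 / np.linalg.norm(w)              # geometric margin width
print("weights:", np.round(w, 3), "bias:", round(b, 3), "margin:", round(margin, 3))
print("reproduces AND:", np.array_equal(np.sign(X @ w + b), y))
```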

  14. Craniocaudal Safety Margin Calculation Based on Interfractional Changes in Tumor Motion in Lung SBRT Assessed With an EPID in Cine Mode

    International Nuclear Information System (INIS)

    Ueda, Yoshihiro; Miyazaki, Masayoshi; Nishiyama, Kinji; Suzuki, Osamu; Tsujii, Katsutomo; Miyagi, Ken

    2012-01-01

    Purpose: To evaluate setup error and interfractional changes in tumor motion magnitude using an electronic portal imaging device in cine mode (EPID cine) during the course of stereotactic body radiation therapy (SBRT) for non–small-cell lung cancer (NSCLC) and to calculate margins to compensate for these variations. Materials and Methods: Subjects were 28 patients with Stage I NSCLC who underwent SBRT. Respiratory-correlated four-dimensional computed tomography (4D-CT) at simulation was binned into 10 respiratory phases, which provided average intensity projection CT data sets (AIP). On 4D-CT, peak-to-peak motion of the tumor (M-4DCT) in the craniocaudal direction was assessed and the tumor center (mean tumor position [MTP]) of the AIP (MTP-4DCT) was determined. At treatment, the tumor on cone beam CT was registered to that on AIP for patient setup. During three sessions of irradiation, peak-to-peak motion of the tumor (M-cine) and the mean tumor position (MTP-cine) were obtained using EPID cine and in-house software. Based on changes in tumor motion magnitude (∆M) and patient setup error (∆MTP), defined as differences between M-4DCT and M-cine and between MTP-4DCT and MTP-cine, a margin to compensate for these variations was calculated with Stroom’s formula. Results: The means (±standard deviation: SD) of M-4DCT and M-cine were 3.1 (±3.4) and 4.0 (±3.6) mm, respectively. The means (±SD) of ∆M and ∆MTP were 0.9 (±1.3) and 0.2 (±2.4) mm, respectively. Internal target volume-planning target volume (ITV-PTV) margins to compensate for ∆M, ∆MTP, and both combined were 3.7, 5.2, and 6.4 mm, respectively. Conclusion: EPID cine is a useful modality for assessing interfractional variations of tumor motion. The ITV-PTV margins to compensate for these variations can be calculated.
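
    The margin recipe named above, Stroom's formula, is commonly quoted as margin = 2Σ + 0.7σ, with Σ the systematic-error SD and σ the random-error SD. The minimal sketch below only illustrates that quoted form with hypothetical input values; it does not reproduce the paper's numbers.

```python
# Hedged sketch: the margin recipe attributed to Stroom et al., usually quoted as
#   margin = 2 * Sigma + 0.7 * sigma
# where Sigma is the standard deviation of the systematic error and sigma that of
# the random error. The inputs below are hypothetical, not taken from the paper.

def stroom_margin(systematic_sd_mm: float, random_sd_mm: float) -> float:
    return 2.0 * systematic_sd_mm + 0.7 * random_sd_mm

if __name__ == "__main__":
    # e.g. 1.5 mm systematic and 2.0 mm random error (illustrative only)
    print(f"ITV-PTV margin ~ {stroom_margin(1.5, 2.0):.1f} mm")   # -> 4.4 mm
```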

  15. Multi-label classifier based on histogram of gradients for predicting the anatomical therapeutic chemical class/classes of a given compound.

    Science.gov (United States)

    Nanni, Loris; Brahnam, Sheryl

    2017-09-15

    Given an unknown compound, is it possible to predict its Anatomical Therapeutic Chemical class/classes? This is a challenging yet important problem since such a prediction could be used to deduce not only a compound's possible active ingredients but also its therapeutic, pharmacological and chemical properties, thereby substantially expediting the pace of drug development. The problem is challenging because some drugs and compounds belong to two or more ATC classes, making machine learning extremely difficult. In this article a multi-label classifier system is proposed that incorporates information about a compound's chemical-chemical interaction and its structural and fingerprint similarities to other compounds belonging to the different ATC classes. The proposed system reshapes a 1D feature vector to obtain a 2D matrix representation of the compound. This matrix is then described by a histogram of gradients that is fed into a Multi-Label Learning with Label-Specific Features classifier. Rigorous cross-validations demonstrate the superior prediction quality of this method compared with other state-of-the-art approaches developed for this problem, a superiority that is reflected particularly in the absolute true rate, the most important and harshest metric for assessing multi-label systems. The MATLAB code for replicating the experiments presented in this article is available at https://www.dropbox.com/s/7v1mey48tl9bfgz/ToolPaperATC.rar?dl=0. Contact: loris.nanni@unipd.it. Supplementary data are available at Bioinformatics online.
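
    The reshape-then-describe step above can be illustrated with a short sketch: a 1-D descriptor is folded into a 2-D matrix and summarized with a histogram of oriented gradients. The vector length, matrix size, and HOG parameters below are arbitrary assumptions for the toy example, not the paper's settings.

```python
# Hedged sketch: reshaping a 1-D compound descriptor into a 2-D matrix and
# describing it with a histogram of oriented gradients (HOG), mirroring the
# pipeline step named in the abstract. Sizes and parameters are arbitrary.
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(0)
feature_1d = rng.random(256)             # toy 1-D descriptor (stand-in for fingerprints etc.)
feature_2d = feature_1d.reshape(16, 16)  # reshape to a matrix "image"

hog_descriptor = hog(
    feature_2d,
    orientations=9,
    pixels_per_cell=(4, 4),
    cells_per_block=(2, 2),
    feature_vector=True,
)
print(hog_descriptor.shape)              # this descriptor would feed the multi-label learner
```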

  16. On marginal regeneration

    NARCIS (Netherlands)

    Stein, H.N.

    1991-01-01

    On applying the marginal regeneration concept to the drainage of free liquid films, problems are encountered: the films do not show a "neck" of minimum thickness at the film/border transition; and the causes of the direction dependence of the marginal regeneration are unclear. Both problems can be

  17. Indian Ocean margins

    Digital Repository Service at National Institute of Oceanography (India)

    Naqvi, S.W.A

    in the latter two areas. Some of these fluxes are expected to be substantial in the case of Indonesian continental margins and probably also across the eastern coasts of Africa not covered in this chapter. However, a dearth of information makes these margins...

  18. Fixing soft margins

    NARCIS (Netherlands)

    P. Kofman (Paul); A. Vaal, de (Albert); C.G. de Vries (Casper)

    1993-01-01

    Non-parametric tolerance limits are employed to calculate soft margins such as advocated in Williamson's target zone proposal. In particular, the tradeoff between softness and zone width is quantified. This may be helpful in choosing appropriate margins. Furthermore, it offers

  19. Quantum ensembles of quantum classifiers.

    Science.gov (United States)

    Schuld, Maria; Petruccione, Francesco

    2018-02-09

    Quantum machine learning has witnessed an increasing number of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which - similar to Bayesian learning - the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighted according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.

  20. 4D-CT-based target volume definition in stereotactic radiotherapy of lung tumours: Comparison with a conventional technique using individual margins

    International Nuclear Information System (INIS)

    Hof, Holger; Rhein, Bernhard; Haering, Peter; Kopp-Schneider, Annette; Debus, Juergen; Herfarth, Klaus

    2009-01-01

    Purpose: To investigate the dosimetric benefit of integrating 4D-CT into the planning target volume (PTV) definition process compared to conventional PTV definition using individual margins in stereotactic body radiotherapy (SBRT) of lung tumours. Material and methods: Two different PTVs were defined: PTVconv, consisting of the helical-CT-based clinical target volume (CTV) enlarged isotropically in each spatial direction by the individually measured amount of motion in the 4D-CT, and PTV4D, encompassing the CTVs defined in the 4D-CT phases displaying the extremes of the tumour position. Tumour motion as well as volumetric and dosimetric differences and relations between both PTVs were evaluated. Results: Volumetric examination revealed a significant reduction of the mean PTV by 4D-CT from 57.7 to 40.7 cm³ (31%) (p < 0.001), and the reduction correlated with the proportion of PTV4D in PTVconv (r = -0.69, 90% confidence limits: -0.87 and -0.34, p = 0.007). Mean lung dose (MLD) was decreased significantly by 17% (p < 0.001). Conclusions: In SBRT of lung tumours the mere use of individual margins for target volume definition cannot compensate for the additional effects that the implementation of 4D-CT phases can offer.

  1. Self-reported pain severity, quality of life, disability, anxiety and depression in patients classified with 'nociceptive', 'peripheral neuropathic' and 'central sensitisation' pain. The discriminant validity of mechanisms-based classifications of low back (±leg) pain.

    LENUS (Irish Health Repository)

    Smart, Keith M

    2012-04-01

    Evidence of validity is required to support the use of mechanisms-based classifications of pain clinically. The purpose of this study was to evaluate the discriminant validity of 'nociceptive' (NP), 'peripheral neuropathic' (PNP) and 'central sensitisation' (CSP) as mechanisms-based classifications of pain in patients with low back (±leg) pain by evaluating the extent to which patients classified in this way differ from one another according to health measures associated with various dimensions of pain. This study employed a cross-sectional, between-subjects design. Four hundred and sixty-four patients with low back (±leg) pain were assessed using a standardised assessment protocol. Clinicians classified each patient's pain using a mechanisms-based classification approach. Patients completed a number of self-report measures associated with pain severity, health-related quality of life, functional disability, anxiety and depression. Discriminant validity was evaluated using a multivariate analysis of variance. There was a statistically significant difference between pain classifications on the combined self-report measures (p = .001; Pillai's Trace = .33; partial eta squared = .16). Patients classified with CSP (n = 106) reported significantly more severe pain, poorer general health-related quality of life, and greater levels of back pain-related disability, depression and anxiety compared to those classified with PNP (n = 102) and NP (n = 256). A similar pattern was found in patients with PNP compared to NP. Mechanisms-based pain classifications may reflect meaningful differences in attributes underlying the multidimensionality of pain. Further studies are required to evaluate the construct and criterion validity of mechanisms-based classifications of musculoskeletal pain.

  2. A Customizable Text Classifier for Text Mining

    Directory of Open Access Journals (Sweden)

    Yun-liang Zhang

    2007-12-01

    Full Text Available Text mining deals with complex and unstructured texts. Usually a particular collection of texts specific to one or more domains is necessary. We have developed a customizable text classifier that allows users to mine such a collection automatically. It derives from the sentence category of the HNC theory and corresponding techniques. It can start with a few texts, and it can adjust automatically or be adjusted by the user. The user can also control the number of domains chosen and set the criteria for selecting texts, depending on demand and the abundance of material. The performance of the classifier varies with the user's choices.

  3. Machine-Learning Classifier for Patients with Major Depressive Disorder: Multifeature Approach Based on a High-Order Minimum Spanning Tree Functional Brain Network.

    Science.gov (United States)

    Guo, Hao; Qin, Mengna; Chen, Junjie; Xu, Yong; Xiang, Jie

    2017-01-01

    High-order functional connectivity networks are rich in time information that can reflect dynamic changes in functional connectivity between brain regions. Accordingly, such networks are widely used to classify brain diseases. However, traditional methods for processing high-order functional connectivity networks generally include the clustering method, which reduces data dimensionality. As a result, such networks cannot be effectively interpreted in the context of neurology. Additionally, due to the large scale of high-order functional connectivity networks, it can be computationally very expensive to use complex network or graph theory to calculate certain topological properties. Here, we propose a novel method of generating a high-order minimum spanning tree functional connectivity network. This method increases the neurological significance of the high-order functional connectivity network, reduces network computing consumption, and produces a network scale that is conducive to subsequent network analysis. To ensure the quality of the topological information in the network structure, we used frequent subgraph mining technology to capture the discriminative subnetworks as features and combined this with quantifiable local network features. Then we applied a multikernel learning technique to the corresponding selected features to obtain the final classification results. We evaluated our proposed method using a data set containing 38 patients with major depressive disorder and 28 healthy controls. The experimental results showed a classification accuracy of up to 97.54%.
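
    The core construction described above, extracting a minimum spanning tree from a functional connectivity matrix, can be sketched briefly. The sketch below assumes correlations are converted to distances as 1 - |r| so that the strongest connections are retained; the signals are random toy data, not the paper's fMRI-derived networks.

```python
# Hedged sketch: turning a functional-connectivity (correlation) matrix into a
# minimum-spanning-tree network. Correlations are mapped to distances (1 - |r|)
# so that the strongest connections are kept by the MST. Toy data only.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(42)
signals = rng.standard_normal((120, 10))        # 120 time points, 10 regions
corr = np.corrcoef(signals, rowvar=False)       # 10 x 10 connectivity matrix

dist = 1.0 - np.abs(corr)                       # strong |r| -> small distance
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist).toarray()     # sparse upper-triangular result
edges = np.transpose(np.nonzero(mst))
print(f"{len(edges)} edges in the MST (expected n-1 = 9)")
```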

  4. Classifier Fusion With Contextual Reliability Evaluation.

    Science.gov (United States)

    Liu, Zhunga; Pan, Quan; Dezert, Jean; Han, Jun-Wei; He, You

    2018-05-01

    Classifier fusion is an efficient strategy to improve the classification performance for the complex pattern recognition problem. In practice, the multiple classifiers to combine can have different reliabilities and the proper reliability evaluation plays an important role in the fusion process for getting the best classification performance. We propose a new method for classifier fusion with contextual reliability evaluation (CF-CRE) based on inner reliability and relative reliability concepts. The inner reliability, represented by a matrix, characterizes the probability of the object belonging to one class when it is classified to another class. The elements of this matrix are estimated from the K-nearest neighbors of the object. A cautious discounting rule is developed under belief functions framework to revise the classification result according to the inner reliability. The relative reliability is evaluated based on a new incompatibility measure which allows to reduce the level of conflict between the classifiers by applying the classical evidence discounting rule to each classifier before their combination. The inner reliability and relative reliability capture different aspects of the classification reliability. The discounted classification results are combined with Dempster-Shafer's rule for the final class decision making support. The performance of CF-CRE has been evaluated and compared with those of main classical fusion methods using real data sets. The experimental results show that CF-CRE can produce substantially higher accuracy than other fusion methods in general. Moreover, CF-CRE is robust to the changes of the number of nearest neighbors chosen for estimating the reliability matrix, which is appealing for the applications.
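
    The fusion described above rests on two standard belief-function operations, evidence discounting and Dempster's rule of combination. The sketch below illustrates only those generic operations on a two-class frame with hypothetical mass and reliability values; it is not the CF-CRE method itself.

```python
# Hedged sketch: classical evidence discounting followed by Dempster's rule of
# combination for two classifiers over a two-class frame {A, B}. This shows the
# generic belief-function machinery mentioned in the abstract, not CF-CRE itself.
from itertools import product

FRAME = frozenset({"A", "B"})

def discount(mass: dict, reliability: float) -> dict:
    """Shafer discounting: scale focal masses by the reliability and move the
    remainder onto total ignorance (the whole frame)."""
    out = {focal: reliability * m for focal, m in mass.items()}
    out[FRAME] = out.get(FRAME, 0.0) + (1.0 - reliability)
    return out

def dempster(m1: dict, m2: dict) -> dict:
    """Dempster's rule: conjunctive combination with conflict renormalisation."""
    combined, conflict = {}, 0.0
    for (f1, v1), (f2, v2) in product(m1.items(), m2.items()):
        inter = f1 & f2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    return {focal: v / (1.0 - conflict) for focal, v in combined.items()}

# Two classifiers' (hypothetical) outputs expressed as mass functions.
m1 = {frozenset({"A"}): 0.8, frozenset({"B"}): 0.2}
m2 = {frozenset({"B"}): 0.7, frozenset({"A"}): 0.3}

# Classifier 2 is judged less reliable, so it is discounted more heavily.
fused = dempster(discount(m1, 0.9), discount(m2, 0.6))
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```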

  5. Re-appraisal of the Magma-rich versus Magma-poor Paradigm at Rifted Margins: consequences for breakup processes

    Science.gov (United States)

    Tugend, J.; Gillard, M.; Manatschal, G.; Nirrengarten, M.; Harkin, C. J.; Epin, M. E.; Sauter, D.; Autin, J.; Kusznir, N. J.; McDermott, K.

    2017-12-01

    Rifted margins are often classified based on their magmatic budget only. Magma-rich margins are commonly considered to have excess decompression melting at lithospheric breakup compared with steady state seafloor spreading while magma-poor margins have suppressed melting. New observations derived from high quality geophysical data sets and drill-hole data have revealed the diversity of rifted margin architecture and variable distribution of magmatism. Recent studies suggest, however, that rifted margins have more complex and polyphase tectono-magmatic evolutions than previously assumed and cannot be characterized based on the observed volume of magma alone. We compare the magmatic budget related to lithospheric breakup along two high-resolution long-offset deep reflection seismic profiles across the SE-Indian (magma-poor) and Uruguayan (magma-rich) rifted margins. Resolving the volume of magmatic additions is difficult. Interpretations are non-unique and several of them appear plausible for each case involving variable magmatic volumes and mechanisms to achieve lithospheric breakup. A supposedly 'magma-poor' rifted margin (SE-India) may show a 'magma-rich' lithospheric breakup whereas a 'magma-rich' rifted margin (Uruguay) does not necessarily show excess magmatism at lithospheric breakup compared with steady-state seafloor spreading. This questions the paradigm that rifted margins can be subdivided in either magma-poor or magma-rich margins. The Uruguayan and other magma-rich rifted margins appear characterized by an early onset of decompression melting relative to crustal breakup. For the converse, where the onset of decompression melting is late compared with the timing of crustal breakup, mantle exhumation can occur (e.g. SE-India). Our work highlights the difficulty in determining a magmatic budget at rifted margins based on seismic reflection data alone, showing the limitations of margin classification based solely on magmatic volumes. The timing of

  6. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  7. Refining margins: recent trends

    International Nuclear Information System (INIS)

    Baudoin, C.; Favennec, J.P.

    1999-01-01

    Despite a business environment that was globally mediocre, due primarily to the Asian crisis and to a mild winter in the northern hemisphere, the signs of improvement noted in the refining activity in 1996 were borne out in 1997. But the situation is not yet satisfactory in this sector: the low return on invested capital and the financing of environmental protection expenditure are giving cause for concern. In 1998, the drop in crude oil prices and the concomitant fall in petroleum product prices were ultimately rather favorable to margins. Two elements tended to put a damper on this relative optimism. First of all, margins continue to be extremely volatile and, secondly, the worsening of the economic and financial crisis observed during the summer made for a sharp decline in margins in all geographic regions, especially Asia. Since the beginning of 1999, refining margins have been weak and utilization rates of refining capacities have decreased. (authors)

  8. Level of Alkenylbenzenes in Parsley and Dill Based Teas and Associated Risk Assessment Using the Margin of Exposure Approach

    NARCIS (Netherlands)

    Alajlouni, Abdul; Al-Malahmeh, Amer J.; Isnaeni, Farida Nur; Wesseling, Sebas; Vervoort, Jacques; Rietjens, Ivonne M.C.M.

    2016-01-01

    Risk assessment of parsley and dill based teas that contain alkenylbenzenes was performed. To this end the estimated daily intake (EDI) of alkenylbenzenes resulting from use of the teas was quantified. Since most teas appeared to contain more than one alkenylbenzene, a combined risk assessment

  9. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    Directory of Open Access Journals (Sweden)

    Shehzad Khalid

    2014-01-01

    Full Text Available We present a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real life datasets. It is also compared with existing standard ensemble techniques such as Adaboost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method as compared to its competitors, especially in the presence of class label noise and imbalanced classes.
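
    The weighting scheme described above, one weight per classifier per class applied to an ensemble of heterogeneous members, can be sketched as follows. A plain random search stands in here for the genetic algorithm, and the dataset and member classifiers are arbitrary choices for the illustration.

```python
# Hedged sketch: a class-level weighted ensemble of heterogeneous classifiers.
# The paper learns the weights with a genetic algorithm; a random search stands
# in for it here, purely to illustrate the per-class weighting scheme.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.4, random_state=0)

members = [GaussianNB(), KNeighborsClassifier(n_neighbors=5), SVC(probability=True, random_state=0)]
probas = []
for clf in members:
    clf.fit(X_tr, y_tr)
    probas.append(clf.predict_proba(X_val))          # (n_samples, n_classes) each

probas = np.stack(probas)                            # (n_clf, n_samples, n_classes)
n_clf, _, n_classes = probas.shape

def ensemble_accuracy(weights):
    # weights: (n_clf, n_classes), one weight per classifier per class
    fused = (probas * weights[:, None, :]).sum(axis=0)
    return (fused.argmax(axis=1) == y_val).mean()

rng = np.random.default_rng(0)
best_w = np.ones((n_clf, n_classes))
best_acc = ensemble_accuracy(best_w)
for _ in range(500):                                 # random search in place of the GA
    w = rng.random((n_clf, n_classes))
    acc = ensemble_accuracy(w)
    if acc > best_acc:
        best_w, best_acc = w, acc
print(f"validation accuracy with learned class-level weights: {best_acc:.3f}")
```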

  10. SOCIAL MARGINALIZATION AND HEALTH

    Directory of Open Access Journals (Sweden)

    Marjana Bogdanović

    2007-04-01

    Full Text Available The 20th century was characterized by marked improvements in health. The aim of the WHO policy EQUITY IN HEALTH is to enable equal accessibility and equally high quality of health care for all citizens. Yet some social groups have remained more or less outside many social systems, including the health care system, in a condition of social marginalization. The phenomenon of social marginalization is dynamic. Marginalized persons lack control over their lives and the available resources. Social marginalization is a blow to health and worsens health status. A low socio-economic level dramatically influences people's health status; therefore, poverty and illness work together. Characteristic marginalized groups are: Roma people, people with AIDS, prisoners, persons with developmental disorders, persons with mental health disorders, refugees, homosexual people, delinquents, prostitutes, drug consumers, the homeless… There is a mutual responsibility of the community and marginalized individuals in trying to resolve the problem. Health and other problems can be solved only by a multisector approach with well-designed programs.

  11. Pickering seismic safety margin

    International Nuclear Information System (INIS)

    Ghobarah, A.; Heidebrecht, A.C.; Tso, W.K.

    1992-06-01

    A study was conducted to recommend a methodology for the seismic safety margin review of existing Canadian CANDU nuclear generating stations such as Pickering A. The purpose of the seismic safety margin review is to determine whether the nuclear plant has sufficient seismic safety margin over its design basis to assure plant safety. In this review process, it is possible to identify the weak links which might limit the seismic performance of critical structures, systems and components. The proposed methodology is a modification of the EPRI (Electric Power Research Institute) approach. The methodology includes: the characterization of the site margin earthquake, the definition of the performance criteria for the elements of a success path, and the determination of the seismic withstand capacity. It is proposed that the margin earthquake be established on the basis of historical records and regional seismo-tectonic and site-specific evaluations. The ability of the components and systems to withstand the margin earthquake is determined by database comparisons, inspection, analysis or testing. An implementation plan for the application of the methodology to the Pickering A NGS is prepared.

  12. A measurement-based method for predicting margins and uncertainties for unprotected accidents in the Integral Fast Reactor concept

    International Nuclear Information System (INIS)

    Vilim, R.B.

    1990-01-01

    A measurement-based method for predicting the response of an LMR core to unprotected accidents has been developed. The method processes plant measurements taken at normal operation to generate a stochastic model for the core dynamics. This model can be used to predict three sigma confidence intervals for the core temperature and power response. Preliminary numerical simulations performed for EBR-2 appear promising. 6 refs., 2 figs

  13. Knowledge Uncertainty and Composed Classifier

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana; Ocelíková, E.

    2007-01-01

    Roč. 1, č. 2 (2007), s. 101-105 ISSN 1998-0140 Institutional research plan: CEZ:AV0Z10750506 Keywords: Boosting architecture * contextual modelling * composed classifier * knowledge management * knowledge * uncertainty Subject RIV: IN - Informatics, Computer Science

  14. Content-based retrieval of brain tumor in contrast-enhanced MRI images using tumor margin information and learned distance metric.

    Science.gov (United States)

    Yang, Wei; Feng, Qianjin; Yu, Mei; Lu, Zhentai; Gao, Yang; Xu, Yikai; Chen, Wufan

    2012-11-01

    A content-based image retrieval (CBIR) method for T1-weighted contrast-enhanced MRI (CE-MRI) images of brain tumors is presented for diagnosis aid. The method is thoroughly evaluated on a large image dataset. Using the tumor region as a query, the authors' CBIR system attempts to retrieve tumors of the same pathological category. Aside from commonly used features such as intensity, texture, and shape features, the authors use a margin information descriptor (MID), which is capable of describing the characteristics of tissue surrounding a tumor, for representing image contents. In addition, the authors designed a distance metric learning algorithm called Maximum mean average Precision Projection (MPP) to maximize the smooth approximated mean average precision (mAP) to optimize retrieval performance. The effectiveness of MID and MPP algorithms was evaluated using a brain CE-MRI dataset consisting of 3108 2D scans acquired from 235 patients with three categories of brain tumors (meningioma, glioma, and pituitary tumor). By combining MID and other features, the mAP of retrieval increased by more than 6% with the learned distance metrics. The distance metric learned by MPP significantly outperformed the other two existing distance metric learning methods in terms of mAP. The CBIR system using the proposed strategies achieved a mAP of 87.3% and a precision of 89.3% when top 10 images were returned by the system. Compared with scale-invariant feature transform, the MID, which uses the intensity profile as descriptor, achieves better retrieval performance. Incorporating tumor margin information represented by MID with the distance metric learned by the MPP algorithm can substantially improve the retrieval performance for brain tumors in CE-MRI.
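
    The retrieval metric used throughout the abstract, mean average precision (mAP), is easy to state concretely. The sketch below is the plain evaluation version of mAP on hypothetical relevance lists; the MPP algorithm in the paper optimizes a smooth approximation of this quantity, which is not reproduced here.

```python
# Hedged sketch: mean average precision (mAP), the retrieval metric that the
# paper's MPP distance-metric learning optimises a smooth approximation of.
# This is the plain (non-smoothed) evaluation version, on toy data.
import numpy as np

def average_precision(relevant: np.ndarray) -> float:
    """relevant: 1/0 array giving relevance of the ranked retrieval results."""
    if relevant.sum() == 0:
        return 0.0
    hits = np.cumsum(relevant)
    precision_at_k = hits / (np.arange(len(relevant)) + 1)
    return float((precision_at_k * relevant).sum() / relevant.sum())

def mean_average_precision(ranked_relevance_lists) -> float:
    return float(np.mean([average_precision(np.asarray(r)) for r in ranked_relevance_lists]))

# Two hypothetical queries: relevance of the top-5 images each returns.
queries = [[1, 0, 1, 1, 0], [0, 1, 0, 0, 1]]
print(round(mean_average_precision(queries), 3))
```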

  15. A Topic Model Approach to Representing and Classifying Football Plays

    KAUST Repository

    Varadarajan, Jagannadan

    2013-09-09

    We address the problem of modeling and classifying American Football offense teams’ plays in video, a challenging example of group activity analysis. Automatic play classification will allow coaches to infer patterns and tendencies of opponents more efficiently, resulting in better strategy planning in a game. We define a football play as a unique combination of player trajectories. To this end, we develop a framework that uses player trajectories as inputs to MedLDA, a supervised topic model. The joint maximization of both likelihood and inter-class margins of MedLDA in learning the topics allows us to learn semantically meaningful play type templates, as well as classify different play types with 70% average accuracy. Furthermore, this method is extended to analyze individual player roles in classifying each play type. We validate our method on a large dataset comprising 271 play clips from real-world football games, which will be made publicly available for future comparisons.

  16. The method and efficacy of support vector machine classifiers based on texture features and multi-resolution histogram from 18F-FDG PET-CT images for the evaluation of mediastinal lymph nodes in patients with lung cancer

    International Nuclear Information System (INIS)

    Gao, Xuan; Chu, Chunyu; Li, Yingci; Lu, Peiou; Wang, Wenzhi; Liu, Wanyu; Yu, Lijuan

    2015-01-01

    Highlights: • Three support vector machine classifiers were constructed from PET-CT images. • The areas under the ROC curve for SVM1, SVM2, and SVM3 were 0.689, 0.579, and 0.685, respectively. • The areas under the curves for maximum short diameter and SUVmax were 0.684 and 0.652, respectively. • The algorithm based on SVM showed potential in the diagnosis of mediastinal lymph nodes. - Abstract: Objectives: In clinical practice, image analysis depends on simple visual perception, and the diagnostic efficacy of this analysis pattern is limited for mediastinal lymph nodes in patients with lung cancer. In order to improve diagnostic efficacy, we developed a new computer-based algorithm and tested its diagnostic efficacy. Methods: 132 consecutive patients with lung cancer underwent 18F-FDG PET/CT examination before treatment. After all data were imported into the database of an on-line medical image analysis platform, the diagnostic efficacy of visual analysis was first evaluated without knowledge of the pathological results, and the maximum short diameter and maximum standardized uptake value (SUVmax) were measured. Then lymph nodes were segmented manually. Three classifiers based on support vector machines (SVM) were constructed from CT, PET, and combined PET-CT images, respectively. The diagnostic efficacy of the SVM classifiers was obtained and evaluated. Results: According to the ROC curves, the areas under the curves for maximum short diameter and SUVmax were 0.684 and 0.652, respectively. The areas under the ROC curve for SVM1, SVM2, and SVM3 were 0.689, 0.579, and 0.685, respectively. Conclusion: The algorithm based on SVM showed potential in the diagnosis of mediastinal lymph nodes.
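
    The evaluation pattern above, training an SVM on image-derived features and comparing classifiers by the area under the ROC curve, can be sketched generically. Synthetic features stand in for the real PET/CT texture features; dataset sizes and parameters are arbitrary.

```python
# Hedged sketch: fit an SVM on image-derived features and compare classifiers
# by the area under the ROC curve. Synthetic features stand in for the real
# PET/CT texture features used in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=1))
svm.fit(X_tr, y_tr)

scores = svm.predict_proba(X_te)[:, 1]        # malignancy score per lymph node (toy)
print(f"area under the ROC curve: {roc_auc_score(y_te, scores):.3f}")
```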

  17. MARGINS: Toward a novel science plan

    Science.gov (United States)

    Mutter, John C.

    A science plan to study continental margins has been in the works for the past 3 years, with almost 200 Earth scientists from a wide variety of disciplines gathering at meetings and workshops. Most geological hazards and resources are found at continental margins, yet our understanding of the processes that shape the margins is meager. In formulating this MARGINS research initiative, fundamental issues concerning our understanding of basic Earth-forming processes have arisen. It is clear that a business-as-usual approach will not solve the class of problems defined by the MARGINS program; the solutions demand approaches different from those used in the past. In many cases, a different class of experiment will be required, one that is well beyond the capability of individual principal investigators to undertake on their own. In most cases, broadly based interdisciplinary studies will be needed.

  18. TRABAJO, REMUNERACIÓN Y PRODUCTIVIDAD (Cómo establecer la cuota de remuneración justa al trabajo en base a su productividad marginal)

    Directory of Open Access Journals (Sweden)

    Jorge Rionda Ramírez

    2012-02-01

    Full Text Available This work makes a methodological proposal on how to measure a just remuneration rate for work on the basis of its marginal contribution to product value. It is applied to the production of tangible goods and starts from a linear programming model whose dual function stipulates the shadow prices that indicate the marginal contribution to the value produced of each input involved in production. It is a novel approach, given that in marginalist work no approach exists ex profeso on the subjects of the exploitation rate and the remuneration rate for work. It also presents an interesting approach to opportunity prices in the substitution of productive factors, based on the degree of intensity with which each factor is used as well as its contribution to the value produced. It is shown that even within the utilitarian thesis the subject of exploitation has a place; the theorists of this view simply avoid the subject by trying to formulate the economic problem in a scientistic, positive way, without political-normative implications.

  19. Classified facilities for environmental protection

    International Nuclear Information System (INIS)

    Anon.

    1993-02-01

    The legislation of the classified facilities governs most of the dangerous or polluting industries or fixed activities. It rests on the law of 9 July 1976 concerning facilities classified for environmental protection and its application decree of 21 September 1977. This legislation, the general texts of which appear in this volume 1, aims to prevent all the risks and the harmful effects coming from an installation (air, water or soil pollutions, wastes, even aesthetic breaches). The polluting or dangerous activities are defined in a list called nomenclature which subjects the facilities to a declaration or an authorization procedure. The authorization is delivered by the prefect at the end of an open and contradictory procedure after a public survey. In addition, the facilities can be subjected to technical regulations fixed by the Environment Minister (volume 2) or by the prefect for facilities subjected to declaration (volume 3). (A.B.)

  20. Univariate decision tree induction using maximum margin classification

    OpenAIRE

    Yıldız, Olcay Taner

    2012-01-01

    In many pattern recognition applications, decision trees are used first due to their simplicity and easily interpretable nature. In this paper, we propose a new decision tree learning algorithm called the univariate margin tree where, for each continuous attribute, the best split is found using convex optimization. Our simulation results on 47 data sets show that the novel margin tree classifier performs at least as well as C4.5 and the linear discriminant tree (LDT) with a similar time complexity. F...
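
    The core idea above, choosing the threshold on a single continuous attribute that maximizes the margin between the two classes, can be illustrated with a short sketch. A direct search over midpoints is used here instead of the convex optimization in the paper, and the data are a toy example.

```python
# Hedged sketch: a univariate "margin" split -- for one continuous attribute,
# place the threshold so the gap (margin) between the nearest training points
# of the two classes is maximised. A direct search replaces the paper's
# convex optimisation; the data below are toy values.
import numpy as np

def best_margin_split(x: np.ndarray, y: np.ndarray):
    """Return (threshold, margin) for the widest gap between points of
    different classes, with the threshold placed at the gap's midpoint."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_thr, best_margin = None, -np.inf
    for i in range(len(xs) - 1):
        if ys[i] == ys[i + 1]:
            continue                       # only boundaries between classes matter
        gap = xs[i + 1] - xs[i]
        if gap > best_margin:
            best_margin = gap
            best_thr = 0.5 * (xs[i] + xs[i + 1])
    return best_thr, best_margin

x = np.array([0.2, 0.5, 0.9, 2.4, 2.9, 3.1])
y = np.array([0,   0,   0,   1,   1,   1])
print(best_margin_split(x, y))             # -> threshold 1.65 with margin 1.5
```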

  1. Evaluation of chemical transport model predictions of primary organic aerosol for air masses classified by particle-component-based factor analysis

    OpenAIRE

    C. A. Stroud; M. D. Moran; P. A. Makar; S. Gong; W. Gong; J. Zhang; J. G. Slowik; J. P. D. Abbatt; G. Lu; J. R. Brook; C. Mihele; Q. Li; D. Sills; K. B. Strawbridge; M. L. McGuire

    2012-01-01

    Observations from the 2007 Border Air Quality and Meteorology Study (BAQS-Met 2007) in Southern Ontario, Canada, were used to evaluate predictions of primary organic aerosol (POA) and two other carbonaceous species, black carbon (BC) and carbon monoxide (CO), made for this summertime period by Environment Canada's AURAMS regional chemical transport model. Particle component-based factor analysis was applied to aerosol mass spectrometer measurements made at one urban site (Windsor, ON) and two...

  2. Stochastic LMP (Locational marginal price) calculation method in distribution systems to minimize loss and emission based on Shapley value and two-point estimate method

    International Nuclear Information System (INIS)

    Azad-Farsani, Ehsan; Agah, S.M.M.; Askarian-Abyaneh, Hossein; Abedi, Mehrdad; Hosseinian, S.H.

    2016-01-01

    LMP (Locational marginal price) calculation is a serious impediment in distribution operation when private DG (distributed generation) units are connected to the network. A novel policy is developed in this study to guide the distribution company (DISCO) in exerting its control over the private units so that power loss and greenhouse gas emissions are minimized. The LMP at each DG bus is calculated according to the contribution of the DG to the reduction of loss and emissions. An iterative algorithm based on the Shapley value method is proposed to allocate the loss and emission reduction. The proposed algorithm will provide a robust state estimation tool for DISCOs in the next step of operation. The state estimation tool provides the decision maker with the ability to exert control over private DG units when loss and emissions are minimized. Also, a stochastic approach based on the PEM (point estimate method) is employed to capture uncertainty in the market price and load demand. The proposed methodology is applied to a realistic distribution network, and the efficiency and accuracy of the method are verified. - Highlights: • Reduction of loss and emissions at the same time. • Fair allocation of the loss and emission reduction. • Estimation of the system state using an iterative algorithm. • Ability of DISCOs to control DG units via the proposed policy. • Modeling the uncertainties to calculate the stochastic LMP.
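
    The allocation step above relies on the Shapley value, which distributes a jointly achieved reduction among the contributing DG units. The sketch below shows the bare Shapley computation for a hypothetical characteristic function (the loss reduction achieved by each coalition); the paper's full iterative LMP algorithm is not reproduced.

```python
# Hedged sketch: Shapley-value allocation of a jointly achieved loss reduction
# among DG units. The coalition values v(S) below are hypothetical; the paper
# embeds this allocation inside a larger iterative LMP calculation.
from itertools import combinations
from math import factorial

players = ("DG1", "DG2", "DG3")
v = {                                  # loss reduction achieved by each coalition (kW, toy)
    (): 0.0,
    ("DG1",): 10.0, ("DG2",): 8.0, ("DG3",): 5.0,
    ("DG1", "DG2"): 16.0, ("DG1", "DG3"): 13.0, ("DG2", "DG3"): 11.0,
    ("DG1", "DG2", "DG3"): 20.0,
}

def coalition(*members):
    return tuple(sorted(members))

def shapley(player):
    n = len(players)
    others = [p for p in players if p != player]
    value = 0.0
    for r in range(len(others) + 1):
        for subset in combinations(others, r):
            s = coalition(*subset)
            weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
            value += weight * (v[coalition(*s, player)] - v[s])
    return value

allocation = {p: round(shapley(p), 3) for p in players}
print(allocation, "sum =", round(sum(allocation.values()), 3))  # sum equals v(grand coalition)
```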

  3. Marginal kidney donor

    Directory of Open Access Journals (Sweden)

    Ganesh Gopalakrishnan

    2007-01-01

    Full Text Available Renal transplantation is the treatment of choice for a medically eligible patient with end stage renal disease. The number of renal transplants has increased rapidly over the last two decades. However, the demand for organs has increased even more. This disparity between the availability of organs and waitlisted patients for transplants has forced many transplant centers across the world to use marginal kidneys and donors. We performed a Medline search to establish the current status of marginal kidney donors in the world. Transplant programs using marginal deceased renal grafts are well established. The focus is now on efforts to improve their results. Utilization of non-heart-beating donors is still in a plateau phase and comprises a minor percentage of deceased donations. The main concern is primary non-function of the renal graft, apart from legal and ethical issues. Transplants with living donors outnumbered cadaveric transplants at many centers in the last decade. There has been an increased use of marginal living kidney donors with some acceptable medical risks. Our primary concern is the safety of the living donor. There is not enough scientific data available to quantify the risks involved in such donation. The definition of a marginal living donor is still not clear and there are no uniform recommendations. The decision must be tailored to each donor, who in turn should be actively involved at all levels of the decision-making process. In the current circumstances, our responsibility is very crucial in making decisions for either accepting or rejecting a marginal living donor.

  4. 76 FR 34761 - Classified National Security Information

    Science.gov (United States)

    2011-06-14

    ... MARINE MAMMAL COMMISSION Classified National Security Information [Directive 11-01] AGENCY: Marine... Commission's (MMC) policy on classified information, as directed by Information Security Oversight Office... of Executive Order 13526, ``Classified National Security Information,'' and 32 CFR part 2001...

  5. From Borders to Margins

    DEFF Research Database (Denmark)

    Parker, Noel

    2009-01-01

    of entities that are ever open to identity shifts.  The concept of the margin possesses a much wider reach than borders, and focuses continual attention on the meetings and interactions between a range of indeterminate entities whose interactions may determine both themselves and the types of entity...... upon Deleuze's philosophy to set out an ontology in which the continual reformulation of entities in play in ‘post-international' society can be grasped.  This entails a strategic shift from speaking about the ‘borders' between sovereign states to referring instead to the ‘margins' between a plethora...

  6. Validation of simple quantification methods for 18F FP CIT PET Using Automatic Delineation of volumes of interest based on statistical probabilistic anatomical mapping and isocontour margin setting

    International Nuclear Information System (INIS)

    Kim, Yong Il; Im, Hyung Jun; Paeng, Jin Chul; Lee, Jae Sung; Eo, Jae Seon; Kim, Dong Hyun; Kim, Euishin E.; Kang, Keon Wook; Chung, June Key; Lee Dong Soo

    2012-01-01

    18F FP CIT positron emission tomography (PET) is an effective imaging method for dopamine transporters. In usual clinical practice, 18F FP CIT PET is analyzed visually or quantified using manual delineation of a volume of interest (VOI) for the striatum. In this study, we suggested and validated two simple quantitative methods based on automatic VOI delineation using statistical probabilistic anatomical mapping (SPAM) and isocontour margin setting. Seventy-five 18F FP CIT images acquired in routine clinical practice were used for this study. A study-specific image template was made and the subject images were normalized to the template. Afterwards, uptakes in the striatal regions and cerebellum were quantified using probabilistic VOIs based on SPAM. A quantitative parameter, QSPAM, was calculated to simulate binding potential. Additionally, the functional volume of each striatal region and its uptake were measured in automatically delineated VOIs using isocontour margin setting. The uptake-volume product (QUVP) was calculated for each striatal region. QSPAM and QUVP were calculated for each visual grading and the influence of cerebral atrophy on the measurements was tested. Image analyses were successful in all cases. Both QSPAM and QUVP were significantly different according to visual grading (p < 0.001). The agreements of QUVP and QSPAM with visual grading were slight to fair for the caudate nucleus (kappa = 0.421 and 0.291, respectively) and good to perfect for the putamen (kappa = 0.663 and 0.607, respectively). Also, QSPAM and QUVP had a significant correlation with each other (p < 0.001). Cerebral atrophy made a significant difference in QSPAM and QUVP of the caudate nucleus regions with decreased 18F FP CIT uptake. Simple quantitative measurements of QSPAM and QUVP showed acceptable agreement with visual grading. Although QSPAM in some groups may be influenced by cerebral atrophy, these simple methods are expected to be effective in the quantitative analysis of 18F FP CIT PET.

  7. New Approach of Feature Extraction Method Based on the Raw Form and his Skeleton for Gujarati Handwritten Digits using Neural Networks Classifier

    Directory of Open Access Journals (Sweden)

    K. Moro

    2014-12-01

    Full Text Available This paper presents an optical character recognition (OCR) system for Gujarati handwritten digits. A great deal of work can be found for Latin, Arabic, Chinese and other scripts, but Gujarati is a language for which hardly any work is traceable, especially for handwritten characters. In this work we propose a method of feature extraction based on the raw form of the character and its skeleton, and we show the advantage of using this method over other approaches mentioned in this article.

  8. Application of multivariate statistical methods to classify archaeological pottery from Tel-Alramad site, Syria, based on x-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Bakraji, E. H.

    2007-01-01

    Radioisotopic x-ray fluorescence (XRF) analysis has been utilized to determine the elemental composition of 55 archaeological pottery samples by the determination of 17 chemical elements. Fifty-four of them came from the Tel-Alramad Site in Katana town, near Damascus city, Syria, and one sample came from Brazil. The XRF results have been processed using two multivariate statistical methods, cluster and factor analysis, in order to determine similarities and correlation between the selected samples based on their elemental composition. The methodology successfully separates the samples where four distinct chemical groups were identified. (author)

  9. An ICA-EBM-Based sEMG Classifier for Recognizing Lower Limb Movements in Individuals With and Without Knee Pathology.

    Science.gov (United States)

    Naik, Ganesh R; Selvan, S Easter; Arjunan, Sridhar P; Acharyya, Amit; Kumar, Dinesh K; Ramanujam, Arvind; Nguyen, Hung T

    2018-03-01

    Surface electromyography (sEMG) data acquired during lower limb movements has the potential for investigating knee pathology. Nevertheless, a major challenge encountered with sEMG signals generated by lower limb movements is the intersubject variability, because the signals recorded from the leg or thigh muscles are contingent on the characteristics of a subject such as gait activity and muscle structure. In order to cope with this difficulty, we have designed a three-step classification scheme. First, the multichannel sEMG is decomposed into activities of the underlying sources by means of independent component analysis via entropy bound minimization. Next, a set of time-domain features, which would best discriminate various movements, are extracted from the source estimates. Finally, the feature selection is performed with the help of the Fisher score and a scree-plot-based statistical technique, prior to feeding the dimension-reduced features to the linear discriminant analysis. The investigation involves 11 healthy subjects and 11 individuals with knee pathology performing three different lower limb movements, namely, walking, sitting, and standing, which yielded an average classification accuracy of 96.1% and 86.2%, respectively. While the outcome of this study per se is very encouraging, with suitable improvement, the clinical application of such an sEMG-based pattern recognition system that distinguishes healthy and knee pathological subjects would be an attractive consequence.
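
    The three-step pipeline described above (source separation, time-domain feature extraction, LDA) can be sketched in miniature. In the sketch, scikit-learn's FastICA stands in for ICA-EBM and randomly mixed non-Gaussian signals stand in for real multichannel sEMG, so only the plumbing is illustrated, not the reported accuracies.

```python
# Hedged sketch of the pipeline in the abstract: unmix multichannel sEMG into
# sources, extract time-domain features per source, feed them to LDA. FastICA
# stands in for ICA-EBM and synthetic signals stand in for real recordings, so
# the printed output only demonstrates the plumbing.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def time_domain_features(sources: np.ndarray) -> np.ndarray:
    """Mean absolute value, waveform length, and zero crossings per source."""
    mav = np.mean(np.abs(sources), axis=0)
    wl = np.sum(np.abs(np.diff(sources, axis=0)), axis=0)
    zc = np.sum(np.diff(np.sign(sources), axis=0) != 0, axis=0)
    return np.concatenate([mav, wl, zc])

ica = FastICA(n_components=4, random_state=0)
X, y = [], rng.integers(0, 3, size=60)         # 60 toy trials, 3 movement labels
for _ in range(60):
    S = rng.laplace(size=(400, 4))             # non-Gaussian "muscle" sources
    A = rng.standard_normal((4, 8))            # mixing into 8 channels
    trial = S @ A                              # one toy sEMG window
    sources = ica.fit_transform(trial)         # step 1: source separation
    X.append(time_domain_features(sources))    # step 2: time-domain features
X = np.array(X)

clf = LinearDiscriminantAnalysis().fit(X, y)   # step 3: LDA classifier
print("feature matrix:", X.shape, "- LDA classes:", clf.classes_)
```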

  10. National Geo-Database for Biofuel Simulations and Regional Analysis of Biorefinery Siting Based on Cellulosic Feedstock Grown on Marginal Lands

    Energy Technology Data Exchange (ETDEWEB)

    Izaurralde, Roberto C.; Zhang, Xuesong; Sahajpal, Ritvik; Manowitz, David H.

    2012-04-01

    and PostgreSQL database hosting. The second resource was the DOE-JGCRI 'Evergreen' cluster, capable of executing millions of simulations in relatively short periods. ARRA funding also supported a PhD student from UMD who worked on creating the geodatabases and executing some of the simulations in this study. Using a physically based classification of marginal lands, we simulated production of cellulosic feedstocks from perennial mixtures grown on these lands in the US Midwest. Marginal lands in the western states of the US Midwest appear to have significant potential to supply feedstocks to a cellulosic biofuel industry. Similar results were obtained with simulations of N-fertilized perennial mixtures. A detailed spatial analysis allowed for the identification of possible locations for the establishment of 34 cellulosic ethanol biorefineries with an annual production capacity of 5.6 billion gallons. In summary, we have reported on the development of a spatially explicit national geodatabase to conduct biofuel simulation studies and provided simulation results on the potential of perennial cropping systems to serve as feedstocks for the production of cellulosic ethanol. To accomplish this, we have employed sophisticated spatial analysis methods in combination with the process-based biogeochemical model EPIC. The results of this study will be submitted to the USDOE Bioenergy Knowledge Discovery Framework as a way to contribute to the development of a sustainable bioenergy industry. This work provided the opportunity to test the hypothesis that marginal lands can serve as sources of cellulosic feedstocks and thus contribute to avoid potential conflicts between bioenergy and food production systems. This work, we believe, opens the door for further analysis on the characteristics of cellulosic feedstocks as major contributors to the development of a sustainable bioenergy economy.

  11. Prediction of Mortality in Patients with Isolated Traumatic Subarachnoid Hemorrhage Using a Decision Tree Classifier: A Retrospective Analysis Based on a Trauma Registry System.

    Science.gov (United States)

    Rau, Cheng-Shyuan; Wu, Shao-Chun; Chien, Peng-Chen; Kuo, Pao-Jen; Chen, Yi-Chun; Hsieh, Hsiao-Yun; Hsieh, Ching-Hua

    2017-11-22

    Background: In contrast to patients with traumatic subarachnoid hemorrhage (tSAH) in the presence of other types of intracranial hemorrhage, the prognosis of patients with isolated tSAH is good. The incidence of mortality in these patients ranges from 0-2.5%. However, few data or predictive models are available for the identification of patients with a high mortality risk. In this study, we aimed to construct a model for mortality prediction using a decision tree (DT) algorithm, along with data obtained from a population-based trauma registry, in a Level 1 trauma center. Methods: Five hundred and forty-five patients with isolated tSAH, including 533 patients who survived and 12 who died, between January 2009 and December 2016, were allocated to training (n = 377) or test (n = 168) sets. Using the data on demographics and injury characteristics, as well as laboratory data of the patients, classification and regression tree (CART) analysis was performed based on the Gini impurity index, using the rpart function in the rpart package in R. Results: In this established DT model, three nodes (head Abbreviated Injury Scale (AIS) score ≤4, creatinine (Cr) <1.4, and age <76 years) were identified. Patients with a head AIS score >4 died, as did 57% of those with an AIS score ≤4 but Cr ≥1.4 and age ≥76 years. All patients who did not meet the above-mentioned criteria survived. With all the variables in the model, the DT achieved an accuracy of 97.9% (sensitivity of 90.9% and specificity of 98.1%) and 97.7% (sensitivity of 100% and specificity of 97.7%) for the training set and test set, respectively. Conclusions: The study established a DT model with three nodes (head AIS score ≤4, Cr <1.4, and age <76 years); this decision-making algorithm may help identify patients with a high risk of mortality.
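
    The modeling step above is a CART analysis with the Gini impurity criterion, done in R with rpart. The sketch below illustrates the same kind of model using scikit-learn's DecisionTreeClassifier as a rough equivalent; the features and the outcome rule are synthetic stand-ins, not the registry data.

```python
# Hedged sketch: a CART-style mortality model with the Gini criterion. The paper
# used R's rpart; scikit-learn's DecisionTreeClassifier is used here as a rough
# equivalent, on synthetic stand-ins for head AIS, creatinine and age.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 545
X = np.column_stack([
    rng.integers(1, 6, size=n),          # head AIS score (toy)
    rng.gamma(2.0, 0.6, size=n),         # creatinine, mg/dL (toy)
    rng.integers(18, 95, size=n),        # age, years (toy)
])
# Synthetic outcome loosely echoing the abstract's rule structure (illustrative only).
y = ((X[:, 0] > 4) | ((X[:, 1] >= 1.4) & (X[:, 2] >= 76))).astype(int)

tree = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=["head_AIS", "creatinine", "age"]))
```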

  12. The use of Web-based GIS data technologies in the construction of geoscience instructional materials: examples from the MARGINS Data in the Classroom project

    Science.gov (United States)

    Ryan, J. G.; McIlrath, J. A.

    2008-12-01

    Web-accessible geospatial information system (GIS) technologies have advanced in concert with an expansion of data resources that can be accessed and used by researchers, educators and students. These resources facilitate the development of data-rich instructional resources and activities that can be used to transition seamlessly into undergraduate research projects. MARGINS Data in the Classroom (http://serc.carleton.edu/margins/index.html) seeks to engage MARGINS researchers and educators in using the images, datasets, and visualizations produced by NSF-MARGINS Program-funded research and related efforts to create Web-deliverable instructional materials for use in undergraduate-level geoscience courses (MARGINS Mini-Lessons). MARGINS science data is managed by the Marine Geosciences Data System (MGDS), and these and all other MGDS-hosted data can be accessed, manipulated and visualized using GeoMapApp (www.geomapapp.org; Carbotte et al., 2004), a freely available geographic information system focused on the marine environment. Both "packaged" MGDS datasets (i.e., global earthquake foci, volcanoes, bathymetry) and "raw" data (seismic surveys, magnetics, gravity) are accessible via GeoMapApp, with WFS linkages to other resources (geodesy from UNAVCO; seismic profiles from IRIS; geochemical and drillsite data from EarthChem, IODP, and others), permitting the comprehensive characterization of many regions of the ocean basins. Geospatially controlled datasets can be imported into GeoMapApp visualizations, and these visualizations can be exported into Google Earth as .kmz image files. Many of the MARGINS Mini-Lessons thus far produced use (or have students use) the varied capabilities of GeoMapApp (e.g., constructing topographic profiles, overlaying varied geophysical and bathymetric datasets, characterizing geochemical data). These materials are available for use and testing from the project webpage (http://serc.carleton.edu/margins/). Classroom testing and assessment

  13. Splenic marginal zone lymphoma.

    Science.gov (United States)

    Piris, Miguel A; Onaindía, Arantza; Mollejo, Manuela

    Splenic marginal zone lymphoma (SMZL) is an indolent small B-cell lymphoma involving the spleen and bone marrow characterized by a micronodular tumoral infiltration that replaces the preexisting lymphoid follicles and shows marginal zone differentiation as a distinctive finding. SMZL cases are characterized by prominent splenomegaly and bone marrow and peripheral blood infiltration. Cells in peripheral blood show a villous cytology. Bone marrow and peripheral blood characteristic features usually allow a diagnosis of SMZL to be performed. Mutational spectrum of SMZL identifies specific findings, such as 7q loss and NOTCH2 and KLF2 mutations, both genes related with marginal zone differentiation. There is a striking clinical variability in SMZL cases, dependent of the tumoral load and performance status. Specific molecular markers such as 7q loss, p53 loss/mutation, NOTCH2 and KLF2 mutations have been found to be associated with the clinical variability. Distinction from Monoclonal B-cell lymphocytosis with marginal zone phenotype is still an open issue that requires identification of precise and specific thresholds with clinical meaning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Comorbidity in marginal periodontitis [Komorbiditet ved marginal parodontitis]

    DEFF Research Database (Denmark)

    Holmstrup, Palle; Damgaard, Christian; Olsen, Ingar

    2017-01-01

    This article presents an overview of the most important current knowledge about the association between marginal periodontitis and a range of medical conditions, including cardiovascular disease, diabetes mellitus, rheumatoid arthritis, osteoporosis, Parkinson's disease, Alzheimer's disease, psoriasis and...

  15. Marginally Deformed Starobinsky Gravity

    DEFF Research Database (Denmark)

    Codello, A.; Joergensen, J.; Sannino, Francesco

    2015-01-01

    We show that quantum-induced marginal deformations of the Starobinsky gravitational action of the form $R^{2(1-\alpha)}$, with $R$ the Ricci scalar and $\alpha$ a positive parameter, smaller than one half, can account for the recent experimental observations by BICEP2 of primordial tensor modes...

  16. Deep continental margin reflectors

    Science.gov (United States)

    Ewing, J.; Heirtzler, J.; Purdy, M.; Klitgord, Kim D.

    1985-01-01

    In contrast to the rarity of such observations a decade ago, seismic reflecting and refracting horizons are now being observed to Moho depths under continental shelves in a number of places. These observations provide knowledge of the entire crustal thickness from the shoreline to the oceanic crust on passive margins and supplement Consortium for Continental Reflection Profiling (COCORP)-type measurements on land.

  17. Marginalization and School Nursing

    Science.gov (United States)

    Smith, Julia Ann

    2004-01-01

    The concept of marginalization was first analyzed by nursing researchers Hall, Stevens, and Meleis. Although nursing literature frequently refers to this concept when addressing "at risk" groups such as the homeless, gays and lesbians, and those infected with HIV/AIDS, the concept can also be applied to nursing. Analysis of current school nursing…

  18. Identifying influenza-like illness presentation from unstructured general practice clinical narrative using a text classifier rule-based expert system versus a clinical expert.

    Science.gov (United States)

    MacRae, Jayden; Love, Tom; Baker, Michael G; Dowell, Anthony; Carnachan, Matthew; Stubbe, Maria; McBain, Lynn

    2015-10-06

    We designed and validated a rule-based expert system to identify influenza like illness (ILI) from routinely recorded general practice clinical narrative to aid a larger retrospective research study into the impact of the 2009 influenza pandemic in New Zealand. Rules were assessed using pattern matching heuristics on routine clinical narrative. The system was trained using data from 623 clinical encounters and validated using a clinical expert as a gold standard against a mutually exclusive set of 901 records. We calculated a 98.2 % specificity and 90.2 % sensitivity across an ILI incidence of 12.4 % measured against clinical expert classification. Peak problem list identification of ILI by clinical coding in any month was 9.2 % of all detected ILI presentations. Our system addressed an unusual problem domain for clinical narrative classification; using notational, unstructured, clinician entered information in a community care setting. It performed well compared with other approaches and domains. It has potential applications in real-time surveillance of disease, and in assisted problem list coding for clinicians. Our system identified ILI presentation with sufficient accuracy for use at a population level in the wider research study. The peak coding of 9.2 % illustrated the need for automated coding of unstructured narrative in our study.
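
    A toy sketch of the general idea of a pattern-matching rule classifier scored against an expert gold standard is shown below; the keyword rules, notes and labels are invented for illustration and are not the validated rule set from the study.

        import re

        # Illustrative ILI keyword/pattern rules (not the study's rule set).
        ILI_RULES = [re.compile(p, re.IGNORECASE)
                     for p in [r"\bili\b", r"influenza", r"\bflu\b", r"fever.{0,20}cough"]]

        def classify_note(note):
            """Flag a free-text clinical note as ILI if any rule matches."""
            return any(rule.search(note) for rule in ILI_RULES)

        def sensitivity_specificity(predictions, gold):
            tp = sum(p and g for p, g in zip(predictions, gold))
            tn = sum(not p and not g for p, g in zip(predictions, gold))
            fp = sum(p and not g for p, g in zip(predictions, gold))
            fn = sum(not p and g for p, g in zip(predictions, gold))
            return tp / (tp + fn), tn / (tn + fp)

        notes = ["pt with fever and dry cough, likely flu", "ankle sprain after football"]
        gold = [True, False]   # hypothetical clinical-expert labels
        preds = [classify_note(n) for n in notes]
        print(sensitivity_specificity(preds, gold))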

  19. Novel high-resolution computed tomography-based radiomic classifier for screen-identified pulmonary nodules in the National Lung Screening Trial.

    Science.gov (United States)

    Peikert, Tobias; Duan, Fenghai; Rajagopalan, Srinivasan; Karwoski, Ronald A; Clay, Ryan; Robb, Richard A; Qin, Ziling; Sicks, JoRean; Bartholmai, Brian J; Maldonado, Fabien

    2018-01-01

    Optimization of the clinical management of screen-detected lung nodules is needed to avoid unnecessary diagnostic interventions. Herein we demonstrate the potential value of a novel radiomics-based approach for the classification of screen-detected indeterminate nodules. Independent quantitative variables assessing various radiologic nodule features such as sphericity, flatness, elongation, spiculation, lobulation and curvature were developed from the NLST dataset using 726 indeterminate nodules (all ≥ 7 mm, benign, n = 318 and malignant, n = 408). Multivariate analysis was performed using the least absolute shrinkage and selection operator (LASSO) method for variable selection and regularization in order to enhance the prediction accuracy and interpretability of the multivariate model. The bootstrapping method was then applied for the internal validation and the optimism-corrected AUC was reported for the final model. Eight of the originally considered 57 quantitative radiologic features were selected by LASSO multivariate modeling. These 8 features include variables capturing Location: vertical location (Offset carina centroid z), Size: volume estimate (Minimum enclosing brick), Shape: flatness, Density: texture analysis (Score Indicative of Lesion/Lung Aggression/Abnormality (SILA) texture), and surface characteristics: surface complexity (Maximum shape index and Average shape index), and estimates of surface curvature (Average positive mean curvature and Minimum mean curvature), all statistically significant. This radiomics-based approach to the characterization of screen-detected nodules appears extremely promising; however, independent external validation is needed.
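
    A compact sketch of LASSO-style variable selection of the kind described, implemented here as an L1-penalized logistic regression in scikit-learn on synthetic data; the feature matrix, labels and penalty strength are placeholders, not the NLST radiomic variables.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_nodules, n_features = 726, 57
        X = rng.normal(size=(n_nodules, n_features))      # synthetic radiomic features
        y = rng.integers(0, 2, n_nodules)                 # 0 = benign, 1 = malignant (placeholder)

        # L1 penalty shrinks most coefficients to exactly zero (LASSO-type selection).
        model = make_pipeline(StandardScaler(),
                              LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
        model.fit(X, y)
        coef = model.named_steps["logisticregression"].coef_.ravel()
        selected = np.flatnonzero(coef != 0)
        print(f"{selected.size} of {n_features} features retained:", selected)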

  20. Comparing cosmic web classifiers using information theory

    International Nuclear Information System (INIS)

    Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin; Jasche, Jens

    2016-01-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  1. Comparing cosmic web classifiers using information theory

    Energy Technology Data Exchange (ETDEWEB)

    Leclercq, Florent [Institute of Cosmology and Gravitation (ICG), University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth PO1 3FX (United Kingdom); Lavaux, Guilhem; Wandelt, Benjamin [Institut d' Astrophysique de Paris (IAP), UMR 7095, CNRS – UPMC Université Paris 6, Sorbonne Universités, 98bis boulevard Arago, F-75014 Paris (France); Jasche, Jens, E-mail: florent.leclercq@polytechnique.org, E-mail: lavaux@iap.fr, E-mail: j.jasche@tum.de, E-mail: wandelt@iap.fr [Excellence Cluster Universe, Technische Universität München, Boltzmannstrasse 2, D-85748 Garching (Germany)

    2016-08-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  2. Adjuvant Radiation Therapy for Margin-Positive Vulvar Squamous Cell Carcinoma: Defining the Ideal Dose-Response Using the National Cancer Data Base

    International Nuclear Information System (INIS)

    Chapman, Bhavana V.; Gill, Beant S.; Viswanathan, Akila N.; Balasubramani, Goundappa K.; Sukumvanich, Paniti; Beriwal, Sushil

    2017-01-01

    Purpose: Positive surgical margins after radical vulvectomy for vulvar cancer portend a high risk for local relapse, which may be challenging to salvage. We assessed the impact of adjuvant radiation therapy (aRT) on overall survival (OS) and the dose-response relationship using the National Cancer Data Base. Methods and Materials: Patients with vulvar squamous cell carcinoma who underwent initial extirpative surgery with positive margins from 1998 to 2012 were included. Factors associated with aRT and specific dose levels were analyzed using logistic regression. Log-rank and multivariable Cox proportional hazards modeling were used for OS analysis. Results: We identified 3075 patients with a median age of 66 years (range, 22-90 years); the median follow-up time was 36.4 months (interquartile range [IQR] 15.4-71.0 months). Stage IA/B disease represented 41.2% of the cohort. Sixty-three percent underwent lymph node assessment, with a 45% positivity rate. In total, 1035 patients (35.3%) received aRT, with a median dose of 54.0 Gy (IQR 48.6-60.0 Gy). The 3-year OS improved from 58.5% to 67.4% with aRT (P<.001). On multivariable analysis, age, Charlson-Deyo score ≥1, stage ≥II, tumors ≥4 cm, no aRT, and adverse nodal characteristics led to inferior survival. Dose of aRT was positively associated with OS as a continuous variable on univariate analysis (P<.001). The unadjusted 3-year OS for dose subsets 30.0 to 45.0 Gy, 45.1 to 53.9 Gy, 54.0 to 59.9 Gy, and ≥60 Gy was 54.3%, 55.7%, 70.1%, and 65.3%, respectively (P<.001). Multivariable analysis using a 4-month conditional landmark revealed that the greatest mortality reduction occurred in cumulative doses ≥54 Gy: 45.1 to 53.9 Gy (hazard ratio [HR] 0.94, P=.373), 54.0 to 59.9 Gy (HR 0.75, P=.024), ≥60 Gy (HR 0.71, P=.015). No survival benefit was seen with ≥60 Gy compared with 54.0 to 59.9 Gy (HR 0.95, P=.779). Conclusions: Patients with vulvar squamous cell carcinoma and positive surgical

  3. Coupling 3D groundwater modeling with CFC-based age dating to classify local groundwater circulation in an unconfined crystalline aquifer

    Science.gov (United States)

    Kolbe, Tamara; Marçais, Jean; Thomas, Zahra; Abbott, Benjamin W.; de Dreuzy, Jean-Raynald; Rousseau-Gueutin, Pauline; Aquilina, Luc; Labasque, Thierry; Pinay, Gilles

    2016-12-01

    Nitrogen pollution of freshwater and estuarine environments is one of the most urgent environmental crises. Shallow aquifers with predominantly local flow circulation are particularly vulnerable to agricultural contaminants. Water transit time and flow path are key controls on catchment nitrogen retention and removal capacity, but the relative importance of hydrogeological and topographical factors in determining these parameters is still uncertain. We used groundwater dating and numerical modeling techniques to assess transit time and flow path in an unconfined aquifer in Brittany, France. The 35.5 km2 study catchment has a crystalline basement underneath a ∼60 m thick weathered and fractured layer, and is separated into a distinct upland and lowland area by an 80 m-high butte. We used groundwater discharge and groundwater ages derived from chlorofluorocarbon (CFC) concentration to calibrate a free-surface flow model simulating groundwater flow circulation. We found that groundwater flow was highly local (mean travel distance = 350 m), substantially smaller than the typical distance between neighboring streams (∼1 km), while CFC-based ages were quite old (mean = 40 years). Sensitivity analysis revealed that groundwater travel distances were not sensitive to geological parameters (i.e. arrangement of geological layers and permeability profile) within the constraints of the CFC age data. However, circulation was sensitive to topography in the lowland area where the water table was near the land surface, and to recharge rate in the upland area where water input modulated the free surface of the aquifer. We quantified these differences with a local groundwater ratio (rGW-LOCAL), defined as the mean groundwater travel distance divided by the mean of the reference surface distances (the distance water would have to travel across the surface of the digital elevation model). Lowland, rGW-LOCAL was near 1, indicating primarily topographical controls. Upland, r

  4. The margin of internal exposure (MOIE) concept for dermal risk assessment based on oral toxicity data - A case study with caffeine.

    Science.gov (United States)

    Bessems, Jos G M; Paini, Alicia; Gajewska, Monika; Worth, Andrew

    2017-12-01

    Route-to-route extrapolation is a common part of human risk assessment. Data from oral animal toxicity studies are commonly used to assess the safety of various but specific human dermal exposure scenarios. Using theoretical examples of various user scenarios, it was concluded that delineation of a generally applicable human dermal limit value is not a practicable approach, due to the wide variety of possible human exposure scenarios, including its consequences for internal exposure. This paper uses physiologically based kinetic (PBK) modelling approaches to predict animal as well as human internal exposure dose metrics and for the first time, introduces the concept of Margin of Internal Exposure (MOIE) based on these internal dose metrics. Caffeine was chosen to illustrate this approach. It is a substance that is often found in cosmetics and for which oral repeated dose toxicity data were available. A rat PBK model was constructed in order to convert the oral NOAEL to rat internal exposure dose metrics, i.e. the area under the curve (AUC) and the maximum concentration (C max ), both in plasma. A human oral PBK model was constructed and calibrated using human volunteer data and adapted to accommodate dermal absorption following human dermal exposure. Use of the MOIE approach based on internal dose metrics predictions provides excellent opportunities to investigate the consequences of variations in human dermal exposure scenarios. It can accommodate within-day variation in plasma concentrations and is scientifically more robust than assuming just an exposure in mg/kg bw/day. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Dynamic integration of classifiers in the space of principal components

    NARCIS (Netherlands)

    Tsymbal, A.; Pechenizkiy, M.; Puuronen, S.; Patterson, D.W.; Kalinichenko, L.A.; Manthey, R.; Thalheim, B.; Wloka, U.

    2003-01-01

    Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. It was shown that, for an ensemble to be successful, it should consist of accurate and diverse base classifiers. However, it is also important that the

  6. Just-in-time adaptive classifiers-part II: designing the classifier.

    Science.gov (United States)

    Alippi, Cesare; Roveri, Manuel

    2008-12-01

    Aging effects, environmental changes, thermal drifts, and soft and hard faults affect physical systems by changing their nature and behavior over time. To cope with a process evolution, adaptive solutions must be envisaged to track its dynamics; in this direction, adaptive classifiers are generally designed by assuming the stationary hypothesis for the process generating the data, with very few results addressing nonstationary environments. This paper proposes a methodology based on k-nearest neighbor (NN) classifiers for designing adaptive classification systems able to react to changing conditions just-in-time (JIT), i.e., exactly when it is needed. k-NN classifiers have been selected for their computation-free training phase, the possibility to easily estimate the model complexity k, and the ability to keep the computational complexity of the classifier under control through suitable data reduction mechanisms. A JIT classifier requires a temporal detection of a (possible) process deviation (an aspect tackled in a companion paper) followed by an adaptive management of the knowledge base (KB) of the classifier to cope with the process change. The novelty of the proposed approach resides in the general framework supporting the real-time update of the KB of the classification system in response to novel information coming from the process, both in stationary conditions (accuracy improvement) and in nonstationary ones (process tracking), and in providing a suitable estimate of k. It is shown that the classification system grants consistency once the change targets the process generating the data in a new stationary state, as is the case in many real applications.
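
    A bare-bones sketch of the knowledge-base management idea: a k-NN classifier whose knowledge base is extended and refit just-in-time when newly labeled samples arrive (for example after a change detector fires); the trigger, data and value of k are illustrative assumptions, not the paper's design.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        class JITKNN:
            """k-NN classifier with a knowledge base (KB) updated just-in-time."""

            def __init__(self, k=5):
                self.k = k
                self.X_kb = None
                self.y_kb = None
                self.model = None

            def update_kb(self, X_new, y_new):
                # Append incoming supervised samples to the KB and refit.
                if self.X_kb is None:
                    self.X_kb, self.y_kb = X_new, y_new
                else:
                    self.X_kb = np.vstack([self.X_kb, X_new])
                    self.y_kb = np.concatenate([self.y_kb, y_new])
                self.model = KNeighborsClassifier(n_neighbors=self.k).fit(self.X_kb, self.y_kb)

            def predict(self, X):
                return self.model.predict(X)

        rng = np.random.default_rng(0)
        clf = JITKNN(k=3)
        clf.update_kb(rng.normal(size=(50, 2)), rng.integers(0, 2, 50))            # initial KB
        clf.update_kb(rng.normal(loc=2.0, size=(20, 2)), np.ones(20, dtype=int))   # after a detected change
        print(clf.predict(rng.normal(size=(3, 2))))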

  7. Time Safety Margin: Theory and Practice

    Science.gov (United States)

    2016-09-01

    412TW-TIH-16-01, Time Safety Margin: Theory and Practice, by William R. Gray, III, Chief Test Pilot, USAF Test Pilot School, September 2016; submitted by the Commander, 412th Test Wing, Edwards AFB, California 93524-6843. References include Air Education and Training Command Handbook 99-107, T-38 Road to Wings, Randolph Air Force Base, Texas, July 2013.

  8. Estimating Margin of Exposure to Thyroid Peroxidase Inhibitors Using High-Throughput in vitro Data, High-Throughput Exposure Modeling, and Physiologically Based Pharmacokinetic/Pharmacodynamic Modeling

    Science.gov (United States)

    Leonard, Jeremy A.; Tan, Yu-Mei; Gilbert, Mary; Isaacs, Kristin; El-Masri, Hisham

    2016-01-01

    Some pharmaceuticals and environmental chemicals bind the thyroid peroxidase (TPO) enzyme and disrupt thyroid hormone production. The potential for TPO inhibition is a function of both the binding affinity and concentration of the chemical within the thyroid gland. The former can be determined through in vitro assays, and the latter is influenced by pharmacokinetic properties, along with environmental exposure levels. In this study, a physiologically based pharmacokinetic (PBPK) model was integrated with a pharmacodynamic (PD) model to establish internal doses capable of inhibiting TPO in relation to external exposure levels predicted through exposure modeling. The PBPK/PD model was evaluated using published serum or thyroid gland chemical concentrations or circulating thyroxine (T4) and triiodothyronine (T3) hormone levels measured in rats and humans. After evaluation, the model was used to estimate human equivalent intake doses resulting in reduction of T4 and T3 levels by 10% (ED10) for 6 chemicals of varying TPO-inhibiting potencies. These chemicals were methimazole, 6-propylthiouracil, resorcinol, benzophenone-2, 2-mercaptobenzothiazole, and triclosan. Margin of exposure values were estimated for these chemicals using the ED10 and predicted population exposure levels for females of child-bearing age. The modeling approach presented here revealed that examining hazard or exposure alone when prioritizing chemicals for risk assessment may be insufficient, and that consideration of pharmacokinetic properties is warranted. This approach also provides a mechanism for integrating in vitro data, pharmacokinetic properties, and exposure levels predicted through high-throughput means when interpreting adverse outcome pathways based on biological responses. PMID:26865668
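
    The margin-of-exposure arithmetic described above reduces to a ratio of an effect dose to a predicted exposure; the sketch below shows that calculation with placeholder numbers, which are not the modeled ED10 values or exposure estimates from the study.

        # Margin of exposure (MOE) = effect dose (here an ED10) / predicted exposure.
        # All values below are hypothetical placeholders, not PBPK/PD model outputs.
        ed10_mg_per_kg_day = {"triclosan": 1.0, "resorcinol": 5.0}
        exposure_mg_per_kg_day = {"triclosan": 0.002, "resorcinol": 0.01}

        for chemical, ed10 in ed10_mg_per_kg_day.items():
            moe = ed10 / exposure_mg_per_kg_day[chemical]
            print(f"{chemical}: MOE = {moe:.0f}")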

  9. Optimization of short amino acid sequences classifier

    Science.gov (United States)

    Barcz, Aleksy; Szymański, Zbigniew

    This article describes processing methods used for short amino acid sequences classification. The data processed are 9-symbols string representations of amino acid sequences, divided into 49 data sets - each one containing samples labeled as reacting or not with given enzyme. The goal of the classification is to determine for a single enzyme, whether an amino acid sequence would react with it or not. Each data set is processed separately. Feature selection is performed to reduce the number of dimensions for each data set. The method used for feature selection consists of two phases. During the first phase, significant positions are selected using Classification and Regression Trees. Afterwards, symbols appearing at the selected positions are substituted with numeric values of amino acid properties taken from the AAindex database. In the second phase the new set of features is reduced using a correlation-based ranking formula and Gram-Schmidt orthogonalization. Finally, the preprocessed data is used for training LS-SVM classifiers. SPDE, an evolutionary algorithm, is used to obtain optimal hyperparameters for the LS-SVM classifier, such as error penalty parameter C and kernel-specific hyperparameters. A simple score penalty is used to adapt the SPDE algorithm to the task of selecting classifiers with best performance measures values.

  10. SVM classifier on chip for melanoma detection.

    Science.gov (United States)

    Afifi, Shereen; GholamHosseini, Hamid; Sinha, Roopak

    2017-07-01

    Support Vector Machine (SVM) is a common classifier used for efficient classification with high accuracy. SVM shows high accuracy for classifying melanoma (skin cancer) clinical images within computer-aided diagnosis systems used by skin cancer specialists to detect melanoma early and save lives. We aim to develop a medical low-cost handheld device that runs a real-time embedded SVM-based diagnosis system for use in primary care for early detection of melanoma. In this paper, an optimized SVM classifier is implemented onto a recent FPGA platform using the latest design methodology to be embedded into the proposed device for realizing online efficient melanoma detection on a single system on chip/device. The hardware implementation results demonstrate a high classification accuracy of 97.9% and a significant acceleration factor of 26 from equivalent software implementation on an embedded processor, with 34% of resources utilization and 2 watts for power consumption. Consequently, the implemented system meets crucial embedded systems constraints of high performance and low cost, resources utilization and power consumption, while achieving high classification accuracy.

  11. Bayes classifiers for imbalanced traffic accidents datasets.

    Science.gov (United States)

    Mujalli, Randa Oqab; López, Griselda; Garach, Laura

    2016-03-01

    Traffic accident data sets are usually imbalanced, where the number of instances classified under the killed or severe injuries class (minority) is much lower than those classified under the slight injuries class (majority). This, however, poses a challenging problem for classification algorithms and may result in a model that covers the slight injuries instances well whereas the killed or severe injuries instances are frequently misclassified. Based on traffic accident data collected on urban and suburban roads in Jordan for three years (2009-2011), three different data balancing techniques were used: under-sampling, which removes some instances of the majority class; oversampling, which creates new instances of the minority class; and a mixed technique that combines both. In addition, different Bayes classifiers were compared for the different imbalanced and balanced data sets: Averaged One-Dependence Estimators, Weightily Averaged One-Dependence Estimators, and Bayesian networks, in order to identify factors that affect the severity of an accident. The results indicated that using the balanced data sets, especially those created using oversampling techniques, with Bayesian networks improved the classification of a traffic accident according to its severity and reduced the misclassification of killed and severe injuries instances. On the other hand, the following variables were found to contribute to the occurrence of a killed casualty or a severe injury in a traffic accident: number of vehicles involved, accident pattern, number of directions, accident type, lighting, surface condition, and speed limit. This work, to the knowledge of the authors, is the first that aims at analyzing historical data records for traffic accidents occurring in Jordan and the first to apply balancing techniques to analyze injury severity of traffic accidents. Copyright © 2015 Elsevier Ltd. All rights reserved.
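
    A small sketch of the oversampling idea on a synthetic imbalanced set is given below: the minority (killed or severe injuries) class is randomly duplicated before fitting a naive Bayes classifier; the data, class ratio and choice of Gaussian naive Bayes (rather than the AODE/WAODE and Bayesian-network classifiers used in the study) are assumptions for illustration.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)
        X_major = rng.normal(0.0, 1.0, size=(950, 4))   # slight-injury accidents (majority)
        X_minor = rng.normal(1.5, 1.0, size=(50, 4))    # killed/severe-injury accidents (minority)
        X = np.vstack([X_major, X_minor])
        y = np.array([0] * 950 + [1] * 50)

        # Random oversampling: duplicate minority-class rows until classes are balanced.
        idx_minor = np.flatnonzero(y == 1)
        boost = rng.choice(idx_minor, size=900, replace=True)
        X_bal = np.vstack([X, X[boost]])
        y_bal = np.concatenate([y, y[boost]])

        clf = GaussianNB().fit(X_bal, y_bal)
        print("share predicted as severe:", clf.predict(X).mean())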

  12. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    ved for the isolated English text, but for the handwritten Devanagari script it is not ... characters, lack of standard benchmarking and ground truth dataset, lack of ..... theory, proposed by Glenn Shafer as a way to represent cognitive knowledge.

  13. Classifying lipoproteins based on their polar profiles.

    Science.gov (United States)

    Polanco, Carlos; Castañón-González, Jorge Alberto; Buhse, Thomas; Uversky, Vladimir N; Amkie, Rafael Zonana

    2016-01-01

    The lipoproteins are an important group of cargo proteins known for their unique capability to transport lipids. By applying the Polarity index algorithm, which has a metric that only considers the polar profile of the linear sequences of the lipoprotein group, we obtained an analytical and structural differentiation of all the lipoproteins found in UniProt Database. Also, the functional groups of lipoproteins, and particularly of the set of lipoproteins relevant to atherosclerosis, were analyzed with the same method to reveal their structural preference, and the results of Polarity index analysis were verified by an alternate test, the Cumulative Distribution Function algorithm, applied to the same groups of lipoproteins.

  14. The value of breast lumpectomy margin assessment as a predictor of residual tumor burden

    International Nuclear Information System (INIS)

    Wazer, David E.; Schmidt-Ullrich, Rupert K.; Schmid, Christopher H.; Ruthazer, Robin; Kramer, Bradley; Safaii, Homa; Graham, Roger

    1997-01-01

    Purpose: Margin assessment is commonly used as a guide to the relative aggressiveness of therapy for breast conserving treatment (BCT), though its value as a predictor of the presence, type, or extent of residual tumor has not been conclusively studied. Controversy continues to exist as to what constitutes a margin that is 'positive', 'close', or 'negative'. We attempt to address these issues through an analysis of re-excision specimens. Patients and Methods: As part of an institutional prospective practice approach for BCT, 265 cases with AJCC Stage I/II carcinoma with an initial excision margin that was ≤2 mm or indeterminate were subjected to re-excision. The probability of residual tumor (+RE) was evaluated with respect to tumor size, histopathologic subtype, relative closeness of the measured margin, the extent of margin positivity graded as focal, minimal, moderate, or extensive, and the extent of specimen processing as reflected in the number of cut sections per specimen volume (S:V ratio). The amount of residual tumor was graded as microscopic, small, medium, or large. The histopathologic subtype of tumor in the re-excision specimen was classified as having an invasive component (ICa) or pure DCIS (DCIS). Results: The primary excision margin was positive, >0≤1 mm, 1.1-2 mm, and indeterminate in 60%, 18%, 5%, and 17%, respectively. The predominant histopathologies in the initial excision specimens were invasive ductal (IDC) (50%) and tumors with an extensive intraductal component (EIC) (43%). The histopathology of the initial excision specimen was highly predictive of the histopathology of tumor found on re-excision, as residual DCIS was found in 60% of +RE specimens with initial histopathology of EIC compared to 26% for IDC (p 0.001). Neither the extent of margin positivity nor the extent of tumor in the re-excision were significantly related to the initial histopathologic subtype; however, a +RE was seen in 59% of EIC, 43% of IDC, and 32% of invasive

  15. Waste classifying and separation device

    International Nuclear Information System (INIS)

    Kakiuchi, Hiroki.

    1997-01-01

    A flexible plastic bag containing solid wastes of indefinite shape is broken and the wastes are classified. The bag-cutting portion of the device has an ultrasonic-type or a heater-type cutting means, and the cutting means moves in parallel with the transferring direction of the plastic bags. A classification portion separates and discriminates the plastic bag from the contents and conducts classification while rotating a classification table. Accordingly, the plastic bag containing solids of indefinite shape can be broken and classification can be conducted efficiently and reliably. The device of the present invention has a simple structure which requires a small installation space and enables easy maintenance. (T.M.)

  16. Construction and Deployment of Tilt Sensors along the Lateral Margins of Jarvis Glacier, Alaska to improve understanding of the Deformation Regime of Wet-Based Polythermal Glaciers

    Science.gov (United States)

    Lee, I. R.; Hawley, R. L.; Clemens-Sewall, D.; Campbell, S. W.; Waszkiewicz, M.; Bernsen, S.; Gerbi, C. C.; Kreutz, K. J.; Koons, P. O.

    2017-12-01

    Most studies of natural ice have been on bodies of ice with frozen beds which experience minimal lateral shear strain, to the exclusion of polythermal ice sheets & glaciers which due to their mixed basal thermal regime have wet-based beds. The deficiency in knowledge and understanding of the operative deformation mechanisms of wet-based bodies of ice results in uncertainty in the constitutive flow law of ice. Given that the flow law was derived experimentally under assumptions more conducive to bodies of ice with frozen-based beds, it is necessary to calibrate the flow law when applied to different bodies of ice such as wet-based polythermal glaciers. To this end, Dartmouth and the University of Maine have collaborated to carry out research on Jarvis Glacier in Alaska, a geometrically simple, wet-based glacier. Here, we constructed and deployed an array of 25 tilt sensors into 3 boreholes drilled along the glacier's shear margin. Our goal is to obtain 3D strain measurements to calculate the full velocity field & create deformation regime maps in the vicinity of the boreholes, as well as to support numerical modeling. The tilt sensors were developed in-lab: Each tilt sensor comes equipped with an LSM303C chip (embedded with a 3-axis accelerometer and magnetometer) and Arduino Pro-Mini mounted on a custom-made printed circuit board encased within a watertight aluminum tube. The design concept was to produce a sensor string, consisting of tilt sensors spaced apart at pre-calculated intervals, to be lowered into a borehole and frozen-in over months to collect strain data through a Campbell Scientific CR1000 datalogger. Three surface-to-bed boreholes were successfully installed with tilt sensor strings. Given the lack of prior in-situ borehole geophysics studies on polythermal glaciers, deliberate consideration on factors such as strain relief and waterproofing electrical components was necessary in the development of the sensor system. On-site challenges also arose due

  17. Middlemen Margins and Globalization

    OpenAIRE

    Pranab Bardhan; Dilip Mookherjee; Masatoshi Tsumagari

    2013-01-01

    We develop a theory of trading middlemen or entrepreneurs who perform financing, quality supervision and marketing roles for goods produced by suppliers or workers. Brand-name reputations are necessary to overcome product quality moral hazard problems; middlemen margins represent reputational incentive rents. We develop a two sector North-South model of competitive equilibrium, with endogenous sorting of agents with heterogenous entrepreneurial abilities into sectors and occupations. The Sout...

  18. Containment safety margins

    International Nuclear Information System (INIS)

    Von Riesemann, W.A.

    1980-01-01

    Objective of the Containment Safety Margins program is the development and verification of methodologies which are capable of reliably predicting the ultimate load-carrying capability of light water reactor containment structures under accident and severe environments. The program was initiated in June 1980 at Sandia and this paper addresses the first phase of the program which is essentially a planning effort. Brief comments are made about the second phase, which will involve testing of containment models

  19. Marginalized Youth. An Introduction.

    OpenAIRE

    Kessl, Fabian; Otto, Hans-Uwe

    2009-01-01

    The life conduct of marginalized groups has become subject to increasing levels of risk in advanced capitalist societies. In particular, children and young people are confronted with the harsh consequences of a “new poverty” in the contemporary era. The demographic complexion of today’s poverty is youthful, as a number of government reports have once again documented in recent years in Australia, Germany, France, Great Britain, the US or Scandinavian countries. Key youth studies have shown a ...

  20. New interpretations based on seismic and modelled well data and their implications for the tectonic evolution of the west Greenland continental margin

    DEFF Research Database (Denmark)

    Mcgregor, E.D.; Nielsen, S.B.; Stephenson, R.A.

    Davis Strait is situated between Baffin Island and Greenland and forms part of a sedimentary basin system, linking Labrador Sea and Baffin Bay, developed during Cretaceous and Palaeocene rifting that culminated in a brief period of sea-floor spreading in the late Palaeocene and Eocene. Seismic...... reflection profiles and exploration wells along the Greenland margin of Davis Strait have been analysed in order to elucidate uplift events affecting sedimentary basin development during the Cenozoic with a focus on postulated Neogene (tectonic) uplift affecting the west Greenland continental margin...... tectonic event. An interpretation in which the inferred onshore cooling is related to erosion of pre-existing topography is more consistent with our new results from the offshore region. These results will have important implications for other continental margins developed throughout the Atlantic...

  1. Maximum margin semi-supervised learning with irrelevant data.

    Science.gov (United States)

    Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R

    2015-10-01

    Semi-supervised learning (SSL) is a typical learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume unlabeled data are relevant to the labeled data, i.e., that they follow the same distributions as the targeted labeled data. In this paper, we address a different, yet formidable, scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named tri-class support vector machine (3C-SVM), to utilize the available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption on the data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and maximum entropy principle. That is, 3C-SVM approaches an ideal classifier. This classifier relies heavily on labeled data and is confident on the relevant data lying far away from the decision hyperplane, while maximally ignoring the irrelevant data, which are hardly distinguished. Second, theoretical analysis is provided to prove under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, Semi-supervised SVMs (S(3)VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer program to a semi-definite programming relaxation, and finally to a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as that of S(3)VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons.

  2. Limitations of ''margin'' in qualification tests

    International Nuclear Information System (INIS)

    Clough, R.L.; Gillen, K.T.

    1984-01-01

    We have carried out investigations of polymer radiation degradation behaviors which have brought to light a number of reasons why this concept of margin can break down. First of all, we have found that dose-rate effects vary greatly in magnitude. Thus, based on high dose-rate testing, poor materials with large dose-rate effects may be selected over better materials with small effects. Also, in certain cases, material properties have been found to level out (as with PVC) or reverse trend (as with buna-n) at high doses, so that ''margin'' may be ineffective, misleading, or counterproductive. For Viton, the material properties were found to change in opposite directions at high and low dose rates, making ''margin'' inappropriate. The underlying problem with the concept of ''margin'' is that differences in aging conditions can lead to fundamental differences in degradation mechanisms

  3. Gearbox Condition Monitoring Using Advanced Classifiers

    Directory of Open Access Journals (Sweden)

    P. Večeř

    2010-01-01

    New efficient and reliable methods for gearbox diagnostics are needed in the automotive industry because of the growing demand for production quality. This paper presents the application of two different classifiers for gearbox diagnostics – Kohonen Neural Networks and the Adaptive-Network-based Fuzzy Inference System (ANFIS). Two different practical applications are presented. In the first application, the tested gearboxes are separated into two classes according to their condition indicators. In the second example, ANFIS is applied to label the tested gearboxes with a Quality Index according to the condition indicators. In both applications, the condition indicators were computed from the vibration of the gearbox housing.

  4. Using Neural Networks to Classify Digitized Images of Galaxies

    Science.gov (United States)

    Goderya, S. N.; McGuire, P. C.

    2000-12-01

    Automated classification of Galaxies into Hubble types is of paramount importance to study the large scale structure of the Universe, particularly as survey projects like the Sloan Digital Sky Survey complete their data acquisition of one million galaxies. At present it is not possible to find robust and efficient artificial intelligence based galaxy classifiers. In this study we will summarize progress made in the development of automated galaxy classifiers using neural networks as machine learning tools. We explore the Bayesian linear algorithm, the higher order probabilistic network, the multilayer perceptron neural network and the Support Vector Machine Classifier. The performance of any machine classifier is dependent on the quality of the parameters that characterize the different groups of galaxies. Our effort is to develop geometric and invariant moment based parameters as input to the machine classifiers instead of the raw pixel data. Such an approach reduces the dimensionality of the classifier considerably, removes the effects of scaling and rotation, and makes it easier to solve for the unknown parameters in the galaxy classifier. To judge the quality of training and classification we develop the concept of Matthews coefficients for the galaxy classification community. Matthews coefficients are single numbers that quantify classifier performance even with unequal prior probabilities of the classes.
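
    The Matthews coefficient referred to above is, in its binary form, computed from the confusion-matrix counts; a one-function sketch follows, with illustrative counts rather than values from the galaxy classifiers.

        from math import sqrt

        def matthews_coefficient(tp, tn, fp, fn):
            """Matthews correlation coefficient from binary confusion-matrix counts."""
            denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

        # Illustrative counts for one Hubble type vs. the rest.
        print(matthews_coefficient(tp=80, tn=90, fp=10, fn=20))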

  5. The new tariff model based on marginal costs developing concept for Brazilian electric sector. A case study for Power and Light Company of Sao Paulo State (Brazil)

    International Nuclear Information System (INIS)

    Correia, S.P.S.

    1991-01-01

    A new methodology for power generation cost accounting in the Brazilian electric sector is described, with the application of marginal cost theory and its deviation in developing economies. A case study of a Brazilian power and light company is presented, focusing on seasonality, planning, the tariff model, and power generation, transmission and distribution. (M.V.M.). 19 refs, 28 figs, 1 tab

  6. Marginal Models for Categorial Data

    NARCIS (Netherlands)

    Bergsma, W.P.; Rudas, T.

    2002-01-01

    Statistical models defined by imposing restrictions on marginal distributions of contingency tables have received considerable attention recently. This paper introduces a general definition of marginal log-linear parameters and describes conditions for a marginal log-linear parameter to be a smooth

  7. Class-specific Error Bounds for Ensemble Classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Prenger, R; Lemmond, T; Varshney, K; Chen, B; Hanley, W

    2009-10-06

    The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.

  8. Masculinity at the margins

    DEFF Research Database (Denmark)

    Jensen, Sune Qvotrup

    2010-01-01

    and other types of material. Taking the concepts of othering, intersectionality and marginality as point of departure the article analyses how these young men experience othering and how they react to it. One type of reaction, described as stylization, relies on accentuating the latently positive symbolic...... of critique although in a masculinist way. These reactions to othering represent a challenge to researchers interested in intersectionality and gender, because gender is reproduced as a hierarchical form of social differentiation at the same time as racism is both reproduced and resisted....

  9. Robust Combining of Disparate Classifiers Through Order Statistics

    Science.gov (United States)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum, and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.
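
    A short sketch of order-statistic combining follows: per-class scores from several classifiers are merged with the median, the maximum, or a trimmed mean before taking the arg max; the score matrix is invented for illustration.

        import numpy as np

        # scores[i, j] = output of classifier i for class j (illustrative values).
        scores = np.array([
            [0.70, 0.20, 0.10],
            [0.10, 0.60, 0.30],
            [0.65, 0.25, 0.10],
            [0.65, 0.15, 0.20],
        ])

        combiners = {
            "median": np.median(scores, axis=0),
            "max": np.max(scores, axis=0),
            "trimmed mean": np.mean(np.sort(scores, axis=0)[1:-1, :], axis=0),  # drop min and max per class
        }
        for name, combined in combiners.items():
            print(name, combined, "-> predicted class", int(np.argmax(combined)))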

  10. Classifying Drivers' Cognitive Load Using EEG Signals.

    Science.gov (United States)

    Barua, Shaibal; Ahmed, Mobyen Uddin; Begum, Shahina

    2017-01-01

    A growing traffic safety issue is the effect of cognitively loading activities on traffic safety and driving performance. Understanding cognitive load is important for monitoring drivers' mental state, since performing cognitively loading secondary tasks while driving, for example talking on the phone, can affect performance in the primary task, i.e. driving. Electroencephalography (EEG) is one of the reliable measures of cognitive load that can detect changes in instantaneous load and the effect of a cognitively loading secondary task. In this driving simulator study, a 1-back task is carried out while the driver performs three different simulated driving scenarios. This paper presents an EEG-based approach to classify a driver's level of cognitive load using Case-Based Reasoning (CBR). The results show that, for each individual scenario as well as for data combined from the different scenarios, the CBR-based system achieved over 70% classification accuracy.

  11. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels

    2009-10-01

    In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by Codon Structure Factor (CSF) and by a method that we called Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
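
    A simplified sketch of two of the ingredients named above, purine frequency by codon position and in-frame stop-codon frequency, is given below; the example sequence is invented, and the statistics are not combined into the published UFM score.

        STOP_CODONS = {"TAA", "TAG", "TGA"}
        PURINES = {"A", "G"}

        def codon_stats(seq):
            """Purine frequency at each codon position and in-frame stop-codon frequency."""
            seq = seq.upper()
            codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
            purine_freq = [sum(c[pos] in PURINES for c in codons) / len(codons)
                           for pos in range(3)]
            stop_freq = sum(c in STOP_CODONS for c in codons) / len(codons)
            return purine_freq, stop_freq

        purines, stops = codon_stats("ATGGCTAGAGGAAAGGCGGATCGAGGATAG")  # toy ORF
        print("purine frequency by codon position:", purines)
        print("in-frame stop-codon frequency:", stops)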

  12. Mercury⊕: An evidential reasoning image classifier

    Science.gov (United States)

    Peddle, Derek R.

    1995-12-01

    MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating system. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
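
    A compact sketch of Dempster's rule of combination, the core operation behind evidential reasoning classifiers of this kind, is shown below; the frame of discernment and the two mass functions are invented land-cover examples, not MERCURY⊕ internals.

        from itertools import product

        def combine(m1, m2):
            """Dempster's rule: combine two basic probability assignments over frozensets."""
            conflict = 0.0
            combined = {}
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        FRAME = frozenset({"forest", "tundra", "water"})
        m_spectral = {frozenset({"forest"}): 0.6, frozenset({"forest", "tundra"}): 0.3, FRAME: 0.1}
        m_terrain = {frozenset({"tundra"}): 0.5, frozenset({"forest", "tundra"}): 0.4, FRAME: 0.1}
        print(combine(m_spectral, m_terrain))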

  13. Classifying smoking urges via machine learning.

    Science.gov (United States)

    Dumortier, Antoine; Beckjord, Ellen; Shiffman, Saul; Sejdić, Ervin

    2016-12-01

    Smoking is the largest preventable cause of death and diseases in the developed world, and advances in modern electronics and machine learning can help us deliver real-time intervention to smokers in novel ways. In this paper, we examine different machine learning approaches to use situational features associated with having or not having urges to smoke during a quit attempt in order to accurately classify high-urge states. To test our machine learning approaches, specifically, Bayes, discriminant analysis and decision tree learning methods, we used a dataset collected from over 300 participants who had initiated a quit attempt. The three classification approaches are evaluated observing sensitivity, specificity, accuracy and precision. The outcome of the analysis showed that algorithms based on feature selection make it possible to obtain high classification rates with only a few features selected from the entire dataset. The classification tree method outperformed the naive Bayes and discriminant analysis methods, with an accuracy of the classifications up to 86%. These numbers suggest that machine learning may be a suitable approach to deal with smoking cessation matters, and to predict smoking urges, outlining a potential use for mobile health applications. In conclusion, machine learning classifiers can help identify smoking situations, and the search for the best features and classifier parameters significantly improves the algorithms' performance. In addition, this study also supports the usefulness of new technologies in improving the effect of smoking cessation interventions, the management of time and patients by therapists, and thus the optimization of available health care resources. Future studies should focus on providing more adaptive and personalized support to people who really need it, in a minimum amount of time by developing novel expert systems capable of delivering real-time interventions. Copyright © 2016 Elsevier Ireland Ltd. All rights

  14. 15 CFR 4.8 - Classified Information.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily the...

  15. A definition model of electric power tariff based on marginal cost: case study at CERON - the electric company of Rondonia, Brazil; Um modelo de definicao de tarifa de energia eletrica baseada no custo marginal: estudo de caso na CERON - Centrais Eletricas de Rondonia, Brasil

    Energy Technology Data Exchange (ETDEWEB)

    Domiciano, Jose Antonio

    2002-07-01

    The present competition circumstances (ambient) require enterprises (companies or undertaking) like CERON to have understanding of all conditions to propose tariffs which give correct signal the consumers. Objective. Thus, search for valuations of model to define the tariff of electric energy based on marginal cost though a study in case of CERON. Method. Develop an investigation of a model of definition of tariffs of electric energy based on marginal costs to start the study in case of CERON and followed by analysis of its tariff structure. Results. With application of the signal (sign or indication) of tariffs, can measure the degree of separation of tariffs and to propose new modalities of alternate tariffs which offer conditions to reflect the real form of costs imposed by clients who form subgroups of tariffs of CERON. With final results, it offers parameters to trace (seek) important strategy for the company. Conclusion: The model gives condition's to identify and quantify of subsidies inside the tariff structure. It is a base which permits to create alternatives to resolve tariff distortions. It permits to have a better understanding which category (class) of consumers who are free will try to seek companies with tariffs which reflect really its costs. (author)

  17. Effect of electric arc, gas oxygen torch and induction melting techniques on the marginal accuracy of cast base-metal and noble metal-ceramic crowns.

    Science.gov (United States)

    Gómez-Cogolludo, Pablo; Castillo-Oyagüe, Raquel; Lynch, Christopher D; Suárez-García, María-Jesús

    2013-09-01

    The aim of this study was to identify the most appropriate alloy composition and melting technique by evaluating the marginal accuracy of cast metal-ceramic crowns. Seventy standardised stainless-steel abutments were prepared to receive metal-ceramic crowns and were randomly divided into four alloy groups: Group 1: palladium-gold (Pd-Au), Group 2: nickel-chromium-titanium (Ni-Cr-Ti), Group 3: nickel-chromium (Ni-Cr) and Group 4: titanium (Ti). Groups 1, 2 and 3 were in turn subdivided to be melted and cast using: (a) a gas oxygen torch and centrifugal casting machine (TC) or (b) induction and a centrifugal casting machine (IC). Group 4 was melted and cast using an electric arc and a vacuum/pressure machine (EV). All of the metal-ceramic crowns were luted with glass-ionomer cement. The marginal fit was measured under an optical microscope before and after cementation using image analysis software. All data were subjected to two-way analysis of variance (ANOVA). Duncan's multiple range test was run for post-hoc comparisons. The Student's t-test was used to investigate the influence of cementation (α=0.05). Uncemented Pd-Au/TC samples achieved the best marginal adaptation, while the worst fit corresponded to the luted Ti/EV crowns. Pd-Au/TC, Ni-Cr and Ti restorations demonstrated significantly increased misfit after cementation. The Ni-Cr-Ti alloy was the most predictable in terms of differences in misfit when either torch or induction was applied, before or after cementation. Cemented titanium crowns exceeded the clinically acceptable limit of 120 μm. The combination of alloy composition, melting technique, casting method and luting process influences the vertical seal of cast metal-ceramic crowns. Careful use of the gas oxygen torch may improve on the results attained with the induction system concerning the marginal adaptation of fixed dental prostheses. Copyright © 2013 Elsevier Ltd. All rights reserved.
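    The statistical workflow reported here (a two-way ANOVA of marginal fit by alloy and melting technique, plus a paired test for the effect of cementation) can be sketched with standard Python statistics libraries. The data frame below is a toy stand-in; the column names, group sizes, and gap values are assumptions, not the study's measurements.

    # Sketch: two-way ANOVA of marginal gap by alloy and melting technique,
    # plus a paired t-test for cementation. Toy data, not the study's dataset.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    from scipy import stats

    rng = np.random.default_rng(0)
    rows = []
    for alloy in ["Pd-Au", "Ni-Cr-Ti", "Ni-Cr"]:
        for tech in ["torch", "induction"]:
            for _ in range(10):  # 10 crowns per alloy/technique cell (hypothetical)
                rows.append({"alloy": alloy, "technique": tech,
                             "gap_um": rng.normal(70, 15)})  # vertical marginal gap in microns
    df = pd.DataFrame(rows)

    # Two-way ANOVA with interaction between alloy and melting technique
    model = ols("gap_um ~ C(alloy) * C(technique)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # Paired t-test: marginal gap before vs. after cementation (toy paired samples)
    before = rng.normal(65, 12, size=20)
    after = before + rng.normal(15, 8, size=20)   # cementation tends to increase the gap
    print(stats.ttest_rel(before, after))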

  18. Frog sound identification using extended k-nearest neighbor classifier

    Science.gov (United States)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

    Frog sound identification based on vocalization is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest-neighbor and mutual neighborhood-sharing concepts, with the aim of improving classification performance. It makes a prediction based on which training samples are the nearest neighbors of the testing sample and which training samples count the testing sample among their own nearest neighbors. To evaluate classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN), on recorded sounds of 15 frog species obtained in Malaysian forest. The recorded sounds were segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features were extracted as Mel Frequency Cepstrum Coefficients (MFCC). The experimental results show that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers, KNN, FKNN, KGNN and MKNN, for all cases.
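    A minimal sketch of the mutual-neighborhood idea described above follows: a test sample is voted on both by its own k nearest training samples and by the training samples that would count it among their own k nearest neighbors. This is an illustrative reading of the EKNN concept, not the authors' implementation, and the random 13-dimensional vectors merely stand in for MFCC features.

    # Sketch of an extended k-NN decision rule: a test point is voted on by
    # (a) its own k nearest training samples and (b) training samples whose
    # k-neighborhood would include the test point ("mutual" neighbors).
    # Illustrative only; not the authors' implementation.
    import numpy as np
    from collections import Counter

    def eknn_predict(X_train, y_train, x_test, k=5):
        d = np.linalg.norm(X_train - x_test, axis=1)        # distances to all training samples
        forward = set(np.argsort(d)[:k])                     # k nearest neighbors of the test sample

        # Training samples that would include x_test among their own k nearest points,
        # judged against their distances to the rest of the training set.
        reverse = set()
        for i in range(len(X_train)):
            d_i = np.linalg.norm(X_train - X_train[i], axis=1)
            d_i[i] = np.inf                                  # ignore self-distance
            kth = np.partition(d_i, k - 1)[k - 1]            # distance to i's k-th nearest training sample
            if d[i] <= kth:
                reverse.add(i)

        voters = forward | reverse
        return Counter(y_train[i] for i in voters).most_common(1)[0][0]

    # Toy usage with random 13-dimensional "MFCC-like" vectors for 3 classes
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(c, 1.0, size=(20, 13)) for c in range(3)])
    y = np.repeat([0, 1, 2], 20)
    print(eknn_predict(X, y, rng.normal(1, 1.0, size=13), k=5))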

  19. An SVM-Based Classifier for Estimating the State of Various Rotating Components in Agro-Industrial Machinery with a Vibration Signal Acquired from a Single Point on the Machine Chassis

    Directory of Open Access Journals (Sweden)

    Ruben Ruiz-Gonzalez

    2014-11-01

    Full Text Available The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels.
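    The evaluation pipeline described above (an SVM classifier scored by leave-one-out cross-validation on a handful of selected features) can be sketched with scikit-learn. The synthetic feature matrix and three hypothetical machine-state classes below are placeholders; the study's seven selected vibration features are not reproduced.

    # Sketch: leave-one-out cross-validation of an SVM classifier on a small
    # feature matrix, standing in for the selected vibration features.
    # Synthetic data; not the harvester measurements from the study.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    n_per_class, n_features = 30, 7          # up to seven selected features, as in the abstract
    X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features)) for c in range(3)])
    y = np.repeat([0, 1, 2], n_per_class)    # hypothetical machine-state labels

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    print(f"LOO accuracy: {scores.mean():.3f}")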

  20. A Politics of Marginability

    DEFF Research Database (Denmark)

    Pallesen, Cecil Marie

    2015-01-01

    In the end of the 19th century, Indians began settling in East Africa. Most of them left Gujarat because of drought and famine, and they were in search for business opportunities and a more comfortable life. Within the following decades, many of them went from being small-scale entrepreneurs to big... ...always been contested and to some extent vulnerable. However, the Indian communities are strong socially and economically, and the vast majority of their people have great international networks and several potential plans or strategies for the future, should the political climate in Tanzania become hostile towards them. I argue that this migrant group is unique in being marginalized and strong at the same time, and I explain this uniqueness by several features in the Indian migrants' cultural and religious background, in colonial and post-colonial Tanzania, and in the Indians' role as middlemen between...

  1. Ship localization in Santa Barbara Channel using machine learning classifiers.

    Science.gov (United States)

    Niu, Haiqiang; Ozanich, Emma; Gerstoft, Peter

    2017-11-01

    Machine learning classifiers are shown to outperform conventional matched field processing for a deep water (600 m depth) ocean acoustic-based ship range estimation problem in the Santa Barbara Channel Experiment when limited environmental information is known. Recordings of three different ships of opportunity on a vertical array were used as training and test data for the feed-forward neural network and support vector machine classifiers, demonstrating the feasibility of machine learning methods to locate unseen sources. The classifiers perform well up to 10 km range whereas the conventional matched field processing fails at about 4 km range without accurate environmental information.
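    Framing range estimation as classification, as this abstract does, amounts to discretizing source range into bins and training a classifier on acoustic features. The sketch below uses a small feed-forward network on random placeholder features with an artificial range dependence; the bin width and network size are assumptions, not the experiment's settings.

    # Sketch: treating ship range estimation as classification by discretizing
    # range into bins and training a feed-forward network on acoustic features.
    # Random placeholder features; not the Santa Barbara Channel recordings.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_features = 600, 32                  # e.g. normalized spectra from a vertical array
    ranges_km = rng.uniform(0.5, 10.0, size=n_samples)
    X = rng.normal(size=(n_samples, n_features)) + ranges_km[:, None] * 0.1  # toy range dependence

    bins = np.arange(0.0, 10.5, 0.5)                 # 0.5 km range classes
    y = np.digitize(ranges_km, bins)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")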

  2. The role of deep-water sedimentary processes in shaping a continental margin: The Northwest Atlantic

    Science.gov (United States)

    Mosher, David C.; Campbell, D.C.; Gardner, J.V.; Piper, D.J.W.; Chaytor, Jason; Rebesco, M.

    2017-01-01

    The tectonic history of a margin dictates its general shape; however, its geomorphology is generally transformed by deep-sea sedimentary processes. The objective of this study is to show the influences of turbidity currents, contour currents and sediment mass failures on the geomorphology of the deep-water northwestern Atlantic margin (NWAM) between Blake Ridge and Hudson Trough, spanning about 32° of latitude and the shelf edge to the abyssal plain. This assessment is based on new multibeam echosounder data, global bathymetric models and sub-surface geophysical information. The deep-water NWAM is divided into four broad geomorphologic classifications based on their bathymetric shape: graded, above-grade, stepped and out-of-grade. These shapes were created as a function of the balance between sediment accumulation and removal, which in turn was related to sedimentary processes and slope accommodation. This descriptive method of classifying continental margins, while being non-interpretative, is more informative than the conventional continental shelf, slope and rise classification, and better facilitates interpretation concerning dominant sedimentary processes. Areas of the margin dominated by turbidity currents and slope by-pass developed graded slopes. If sediments did not by-pass the slope due to accommodation, then an above-grade or stepped slope resulted. Geostrophic currents created sedimentary bodies of a variety of forms and positions along the NWAM. Detached drifts form linear, above-grade slopes along their crests from the shelf edge to the deep basin. Plastered drifts formed stepped slope profiles. Sediment mass failure has had a variety of consequences on the margin morphology; large mass failures created out-of-grade profiles, whereas smaller mass failures tended to remain on the slope and formed above-grade profiles at trough-mouth fans, or nearly graded profiles, such as offshore Cape Fear.

  3. Classifying Transition Behaviour in Postural Activity Monitoring

    Directory of Open Access Journals (Sweden)

    James BRUSEY

    2009-10-01

    Full Text Available A few accelerometers positioned on different parts of the body can be used to accurately classify steady-state behaviour, such as walking, running, or sitting. Such systems are usually built using supervised learning approaches. Transitions between postures are, however, difficult to deal with using the posture classification systems proposed to date, since there is no label set for intermediary postures and the exact point at which the transition occurs can sometimes be hard to pinpoint. The usual workaround when using supervised learning to train such systems is to discard a section of the dataset around each transition. This leads to poorer classification performance when the systems are deployed out of the laboratory and used on-line, particularly if the regimes monitored involve fast-paced activity changes. Time-based filtering that takes advantage of sequential patterns is a potential mechanism to improve posture classification accuracy in such real-life applications. Also, such filtering should reduce the number of event messages needed to be sent across a wireless network to track posture remotely, hence extending the system's life. To support time-based filtering, understanding transitions, which are the major event generators in a classification system, is key. This work examines three approaches to post-process the output of a posture classifier using time-based filtering: a naïve voting scheme, an exponentially weighted voting scheme, and a Bayes filter. Best performance is obtained from the exponentially weighted voting scheme, although it is suspected that a more sophisticated treatment of the Bayes filter might yield better results.
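    A minimal sketch of the exponentially weighted voting scheme mentioned above: every class accumulates a decaying score, each new classifier output adds a vote, and the emitted label is the running argmax, which suppresses isolated misclassifications around transitions. The decay factor and the toy label stream are assumptions for illustration.

    # Sketch: exponentially weighted voting over a stream of per-frame posture labels.
    # Each class accumulates a decaying score; the emitted label is the current argmax.
    # Decay factor and label stream are illustrative, not the paper's parameters.
    from collections import defaultdict

    def exp_weighted_filter(labels, decay=0.7):
        scores = defaultdict(float)
        filtered = []
        for lab in labels:
            for k in scores:            # decay all accumulated votes
                scores[k] *= decay
            scores[lab] += 1.0          # add the newest classifier output
            filtered.append(max(scores, key=scores.get))
        return filtered

    raw = ["sit"] * 5 + ["walk"] + ["sit"] * 3 + ["walk"] * 6   # one spurious "walk", then a real transition
    print(exp_weighted_filter(raw))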

  4. Reconstructing Rodinia by Fitting Neoproterozoic Continental Margins

    Science.gov (United States)

    Stewart, John H.

    2009-01-01

    Reconstructions of Phanerozoic tectonic plates can be closely constrained by lithologic correlations across conjugate margins, by paleontologic information, by correlation of orogenic belts, by paleomagnetic location of continents, and by ocean-floor magnetic stripes. In contrast, Proterozoic reconstructions are hindered by the lack of some of these tools or by the lack of their precision. To overcome some of these difficulties, this report focuses on a different method of reconstruction, namely the use of the shape of continents to assemble the supercontinent of Rodinia, much like a jigsaw puzzle. Compared to the vast amount of information available for Phanerozoic systems, such a limited approach for Proterozoic rocks may seem suspect. However, using the assembly of the southern continents (South America, Africa, India, Arabia, Antarctica, and Australia) as an example, a very tight fit of the continents is apparent and illustrates the power of the jigsaw-puzzle method. This report focuses on Neoproterozoic rocks, which are shown on two new detailed geologic maps that constitute the backbone of the study. The report also describes the Neoproterozoic; younger or older rocks are either not discussed or are not discussed in detail. The Neoproterozoic continents and continental margins are identified based on the distribution of continental-margin sedimentary and magmatic rocks that define the break-up margins of Rodinia. These Neoproterozoic continental exposures, as well as critical Neo- and Mesoproterozoic tectonic features shown on the two new map compilations, are used to reconstruct the Mesoproterozoic supercontinent of Rodinia. This approach differs from the common approach of using fold belts to define structural features deemed important in the Rodinian reconstruction. Fold belts are difficult to date, and many are significantly younger than the time frame considered here (1,200 to 850 Ma). Identifying Neoproterozoic continental margins, which are primarily

  5. Relationship of Imaging Frequency and Planning Margin to Account for Intrafraction Prostate Motion: Analysis Based on Real-Time Monitoring Data

    International Nuclear Information System (INIS)

    Curtis, William; Khan, Mohammad; Magnelli, Anthony; Stephans, Kevin; Tendulkar, Rahul; Xia, Ping

    2013-01-01

    Purpose: Correction for intrafraction prostate motion becomes important for hypofractionated treatment of prostate cancer. The purpose of this study was to estimate an ideal planning margin to account for intrafraction prostate motion as a function of imaging and repositioning frequency in the absence of continuous prostate motion monitoring. Methods and Materials: For 31 patients receiving intensity modulated radiation therapy, prostate positions sampled at 10 Hz during treatment using the Calypso system were analyzed. Using these data, we simulated multiple, less frequent imaging protocols, including intervals of every 10, 15, 20, 30, 45, 60, 90, 120, 180, and 240 seconds. For each imaging protocol, the prostate displacement at the imaging time was corrected by subtracting prostate shifts from the subsequent displacements in that fraction. Furthermore, we conducted a principal component analysis to quantify the direction of prostate motion. Results: Averaging histograms of every 240 and 60 seconds for all patients, vector displacements of the prostate were, respectively, within 3 and 2 mm for 95% of the treatment time. A vector margin of 1 mm achieved 91.2% coverage of the prostate with 30-second imaging. The principal component analysis for all fractions showed the largest variance in prostate position in the midsagittal plane at 54° from the anterior direction, indicating that anterosuperior to inferoposterior is the direction of greatest motion. The smallest prostate motion is in the left-right direction. Conclusions: The magnitudes of intrafraction prostate motion along the superior-inferior and anterior-posterior directions are comparable, and the smallest motion is in the left-right direction. In the absence of continuous prostate motion monitoring, and under ideal circumstances, 1-, 2-, and 3-mm vector planning margins require a respective imaging frequency of every 15, 60, and 240 seconds to account for intrafraction prostate motion while achieving
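    The simulation step described above can be sketched as follows: take a continuously sampled displacement trace, re-zero it at every imaging instant to mimic correction, and report the residual displacement that bounds 95% of the treatment time. The synthetic random-walk motion and the chosen imaging intervals are assumptions, not the Calypso recordings.

    # Sketch: simulate a periodic-imaging protocol on a continuously sampled
    # displacement trace (10 Hz), subtract the position at each imaging instant
    # from the samples until the next image, and report the 95th-percentile residual.
    # Synthetic random-walk motion; not the Calypso data.
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 10                                        # samples per second
    duration_s = 600                               # one 10-minute fraction
    pos = np.cumsum(rng.normal(0, 0.02, size=(duration_s * fs, 3)), axis=0)  # mm, slow drift

    def residual_after_imaging(pos, interval_s, fs=10):
        corrected = np.empty_like(pos)
        step = interval_s * fs
        for start in range(0, len(pos), step):
            # re-zero the displacement at each imaging/repositioning instant
            corrected[start:start + step] = pos[start:start + step] - pos[start]
        return np.linalg.norm(corrected, axis=1)   # residual vector displacement

    for interval in (30, 60, 240):                 # seconds between images
        r = residual_after_imaging(pos, interval, fs)
        print(f"imaging every {interval:3d} s -> 95% of time within {np.percentile(r, 95):.2f} mm")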

  6. Fungal and Prokaryotic Activities in the Marine Subsurface Biosphere at Peru Margin and Canterbury Basin Inferred from RNA-Based Analyses and Microscopy.

    Science.gov (United States)

    Pachiadaki, Maria G; Rédou, Vanessa; Beaudoin, David J; Burgaud, Gaëtan; Edgcomb, Virginia P

    2016-01-01

    The deep sedimentary biosphere, extending hundreds of meters below the seafloor, harbors unexpected diversity of Bacteria, Archaea, and microbial eukaryotes. Far less is known about microbial eukaryotes in subsurface habitats, although several studies have indicated that fungi dominate microbial eukaryotic communities, and fungal molecular signatures (of both yeasts and filamentous forms) have been detected in samples as deep as 1740 mbsf. Here, we compare and contrast fungal ribosomal RNA gene signatures and whole-community metatranscriptomes present in sediment core samples from 6 and 95 mbsf from Peru Margin site 1229A and from samples from 12 and 345 mbsf from Canterbury Basin site U1352. The metatranscriptome analyses reveal higher relative expression of amino acid and peptide transporters in the less nutrient-rich Canterbury Basin sediments compared to the nutrient-rich Peru Margin, and higher expression of motility genes in the Peru Margin samples. Higher expression of genes associated with metal transporters and antibiotic resistance and production was detected in Canterbury Basin sediments. A poly-A-focused metatranscriptome produced for the Canterbury Basin sample from 345 mbsf provides further evidence for active fungal communities in the subsurface in the form of fungal-associated transcripts for metabolic and cellular processes, cell and membrane functions, and catalytic activities. Fungal communities at comparable depths at the two geographically separated locations appear dominated by distinct taxa. Differences in taxonomic composition and expression of genes associated with particular metabolic activities may be a function of sediment organic content as well as oceanic province. Microscopic analysis of Canterbury Basin sediment samples from 4 and 403 mbsf produced visualizations of septate fungal filaments, branching fungi, conidiogenesis, and spores. These images provide another important line of evidence supporting the occurrence and activity of fungi in

  7. Deregulated model and locational marginal pricing

    International Nuclear Information System (INIS)

    Sood, Yog Raj; Padhy, N.P.; Gupta, H.O.

    2007-01-01

    This paper presents a generalized optimal model that dispatches the pool in combination with privately negotiated bilateral and multilateral contracts while maximizing social benefit. The model determines locational marginal pricing (LMP) based on marginal cost theory. It also determines the size of non-firm transactions as well as pool demand and generation. Both firm and non-firm transactions are considered in this model. The proposed model has been applied to the IEEE 30-bus test system, in which different types of transactions are added for analysis of the proposed model. (author)
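    Under marginal cost theory, the locational marginal price at a bus is the cost of serving one additional unit of demand there; for a single uncongested bus with no losses this reduces to the marginal cost of the last generator dispatched in merit order. The sketch below illustrates that simplified case with hypothetical generator data; it is not the paper's full pool-plus-contracts optimization.

    # Sketch: single-bus merit-order dispatch. With no congestion or losses,
    # the locational marginal price equals the marginal cost of the generator
    # that serves the next increment of demand. Generator data are hypothetical.
    def dispatch(generators, demand_mw):
        """generators: list of (name, capacity_mw, marginal_cost_per_mwh)."""
        remaining, schedule, lmp = demand_mw, {}, None
        for name, cap, mc in sorted(generators, key=lambda g: g[2]):  # cheapest first
            take = min(cap, remaining)
            if take > 0:
                schedule[name] = take
                remaining -= take
                lmp = mc                     # last (most expensive) unit dispatched sets the price
        if remaining > 1e-9:
            raise ValueError("demand exceeds total capacity")
        return schedule, lmp

    gens = [("hydro", 300, 12.0), ("coal", 400, 28.0), ("gas", 250, 55.0)]  # $/MWh
    schedule, lmp = dispatch(gens, demand_mw=550)
    print(schedule, "LMP =", lmp, "$/MWh")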

  8. Determining optimal clinical target volume margins in head-and-neck cancer based on microscopic extracapsular extension of metastatic neck nodes

    International Nuclear Information System (INIS)

    Apisarnthanarax, Smith; Elliott, Danielle D.; El-Naggar, Adel K.; Asper, Joshua A. P.A.; Blanco, Angel; Ang, K. Kian; Garden, Adam S.; Morrison, William H.; Rosenthal, David; Weber, Randal S.; Chao, K.S. Clifford

    2006-01-01

    Purpose: To determine the optimal clinical target volume margins around the gross nodal tumor volume in head-and-neck cancer by assessing microscopic tumor extension beyond cervical lymph node capsules. Methods and Materials: Histologic sections of 96 dissected cervical lymph nodes with extracapsular extension (ECE) from 48 patients with head-and-neck squamous cell carcinoma were examined. The maximum linear distance from the external capsule border to the farthest extent of the tumor or tumoral reaction was measured. The trends of ECE as a function of the distance from the capsule and lymph node size were analyzed. Results: The median diameter of all lymph nodes was 11.0 mm (range: 3.0-30.0 mm). The mean and median ECE extent were 2.2 mm and 1.6 mm, respectively (range: 0.4-9.0 mm). The ECE was <5 mm from the capsule in 96% of the nodes. As the distance from the capsule increased, the probability of tumor extension declined. No significant relationship between the extent of ECE and lymph node size was observed. Conclusion: For N1 nodes that are at high risk for ECE but not grossly infiltrating musculature, 1-cm clinical target volume margins around the nodal gross tumor volume are recommended to cover microscopic nodal extension in head-and-neck cancer.
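    The margin recommendation follows from the empirical distribution of ECE distances: the clinical target volume margin should cover a chosen fraction of the measured extensions. A short sketch of that percentile calculation on hypothetical distances (drawn to roughly match the reported median) is given below.

    # Sketch: choose a CTV margin as the distance covering a target fraction of
    # measured extracapsular extension (ECE) distances. Values are hypothetical,
    # not the 96 measured nodes from the study.
    import numpy as np

    rng = np.random.default_rng(0)
    ece_mm = rng.lognormal(mean=0.5, sigma=0.6, size=96)      # toy ECE distances (mm), median ~1.6 mm

    for coverage in (90, 95, 99):
        margin = np.percentile(ece_mm, coverage)
        print(f"margin covering {coverage}% of ECE: {margin:.1f} mm")

    # Fraction of nodes whose ECE would be covered by a 5 mm margin
    print(f"5 mm margin covers {np.mean(ece_mm <= 5.0):.0%} of nodes")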

  9. Decoding the Margins: What Can the Fractal Geometry of Basaltic Flow Margins Tell Us?

    Science.gov (United States)

    Schaefer, E. I.; Hamilton, C.; Neish, C.; Beard, S. P.; Bramson, A. M.; Sori, M.; Rader, E. L.

    2016-12-01

    Studying lava flows on other planetary bodies is essential to characterizing eruption styles and constraining the bodies' thermal evolution. Although planetary basaltic flows are common, many key features are not resolvable in orbital imagery. We are thus developing a technique to characterize basaltic flow type, sub-meter roughness, and sediment mantling from these data. We will present the results from upcoming fieldwork at Craters of the Moon National Monument and Preserve with FINESSE (August) and at Hawai'i Volcanoes National Park (September). We build on earlier work that showed that basaltic flow margins are approximately fractal [Bruno et al., 1992; Gaonac'h et al., 1992] and that their fractal dimensions (D) have distinct 'a'ā and pāhoehoe ranges under simple conditions [Bruno et al., 1994]. Using a differential GPS rover, we have recently shown that the margin of Iceland's 2014 Holuhraun flow exhibits near-perfect (R2=0.9998) fractality for ≥24 km across dm to km scales [Schaefer et al., 2016]. This finding suggests that a fractal-based technique has significant potential to characterize flows at sub-resolution scales. We are simultaneously seeking to understand how margin fractality can be modified. A preliminary result for an 'a'ā flow in Hawaii's Ka'ū Desert suggests that although aeolian mantling obscures the original flow margin, the apparent margin (i.e., sediment-lava interface) remains fractal [Schaefer et al., 2015]. Further, the apparent margin's D is likely significantly modified from that of the original margin. Other factors that we are exploring include erosion, transitional flow types, and topographic confinement. We will also rigorously test the intriguing possibility that margin D correlates with the sub-meter Hurst exponent H of the flow surface, a common metric of roughness scaling [e.g., Shepard et al., 2001]. This hypothesis is based on geometric arguments [Turcotte, 1997] and is qualitatively consistent with all results so far.
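    Margin fractal dimension of the kind discussed here is commonly estimated with a divider (ruler) method: measure the apparent length of the margin at several ruler sizes and fit log(length) against log(ruler), giving D = 1 - slope. The sketch below applies that procedure to a synthetic wiggly margin; the trace, ruler sizes, and fit details are illustrative assumptions rather than the authors' workflow.

    # Sketch: divider (ruler) estimate of a flow-margin fractal dimension D.
    # Walk the margin with a fixed ruler length, sum the steps, repeat for several
    # ruler sizes, and fit log(total length) vs log(ruler): D = 1 - slope.
    # The margin trace here is synthetic, not a mapped flow.
    import numpy as np

    def ruler_length(points, ruler):
        """Approximate length of the polyline measured with a fixed ruler size."""
        total, i = 0.0, 0
        while i < len(points) - 1:
            j = i + 1
            while j < len(points) and np.linalg.norm(points[j] - points[i]) < ruler:
                j += 1
            if j == len(points):
                break
            total += ruler
            i = j
        return total

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1000, 5000)                       # metres along the margin
    y = np.cumsum(rng.normal(0, 2.0, size=x.size))       # synthetic wiggly margin
    margin = np.column_stack([x, y])

    rulers = np.array([5, 10, 20, 40, 80, 160], dtype=float)
    lengths = np.array([ruler_length(margin, r) for r in rulers])
    slope, _ = np.polyfit(np.log(rulers), np.log(lengths), 1)
    print(f"estimated fractal dimension D = {1 - slope:.3f}")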