WorldWideScience

Sample records for network classifiers combined

  1. Non-Mutually Exclusive Deep Neural Network Classifier for Combined Modes of Bearing Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Bach Phi Duong

    2018-04-01

    Full Text Available The simultaneous occurrence of various types of defects in bearings makes their diagnosis more challenging owing to the resultant complexity of the constituent parts of the acoustic emission (AE) signals. To address this issue, a new approach is proposed in this paper for the detection of multiple combined faults in bearings. The proposed methodology uses a deep neural network (DNN) architecture to effectively diagnose the combined defects. The DNN structure is based on the stacked denoising autoencoder non-mutually exclusive classifier (NMEC) method for combined modes. The NMEC-DNN is trained using data for a single fault and it classifies both single faults and multiple combined faults. The results of experiments conducted on AE data collected through an experimental test-bed demonstrate that the DNN achieves good classification performance with a maximum accuracy of 95%. The proposed method is compared with a multi-class classifier based on support vector machines (SVMs). The NMEC-DNN yields better diagnostic performance in comparison to the multi-class classifier based on SVM. The NMEC-DNN reduces the number of necessary data collections and improves the bearing fault diagnosis performance.
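
    The defining idea of a non-mutually exclusive classifier, training on single-fault data while allowing several fault outputs to fire at once, amounts to multi-label classification with independent sigmoid outputs rather than a softmax. The sketch below shows that output layer and loss in Keras; the layer sizes, the 0.5 threshold, and the toy data are illustrative assumptions, and the stacked denoising autoencoder pretraining described in the paper is not reproduced.

```python
import numpy as np
import tensorflow as tf

# Toy AE-derived feature matrix: 200 samples x 64 features, 3 single-fault
# labels (e.g. outer race, inner race, roller). Labels are NOT mutually exclusive.
X = np.random.rand(200, 64).astype("float32")
y = np.random.randint(0, 2, size=(200, 3)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(64,)),  # stands in for the SDAE encoder
    tf.keras.layers.Dense(64, activation="relu"),
    # Sigmoid (not softmax): each fault type is scored independently,
    # so a combined fault can activate several outputs at once.
    tf.keras.layers.Dense(3, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# A combined fault is reported whenever more than one output exceeds the threshold.
probs = model.predict(X[:5], verbose=0)
print((probs > 0.5).astype(int))
```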

  2. Non-Mutually Exclusive Deep Neural Network Classifier for Combined Modes of Bearing Fault Diagnosis.

    Science.gov (United States)

    Duong, Bach Phi; Kim, Jong-Myon

    2018-04-07

    The simultaneous occurrence of various types of defects in bearings makes their diagnosis more challenging owing to the resultant complexity of the constituent parts of the acoustic emission (AE) signals. To address this issue, a new approach is proposed in this paper for the detection of multiple combined faults in bearings. The proposed methodology uses a deep neural network (DNN) architecture to effectively diagnose the combined defects. The DNN structure is based on the stacked denoising autoencoder non-mutually exclusive classifier (NMEC) method for combined modes. The NMEC-DNN is trained using data for a single fault and it classifies both single faults and multiple combined faults. The results of experiments conducted on AE data collected through an experimental test-bed demonstrate that the DNN achieves good classification performance with a maximum accuracy of 95%. The proposed method is compared with a multi-class classifier based on support vector machines (SVMs). The NMEC-DNN yields better diagnostic performance in comparison to the multi-class classifier based on SVM. The NMEC-DNN reduces the number of necessary data collections and improves the bearing fault diagnosis performance.

  3. Non-Mutually Exclusive Deep Neural Network Classifier for Combined Modes of Bearing Fault Diagnosis

    Science.gov (United States)

    Kim, Jong-Myon

    2018-01-01

    The simultaneous occurrence of various types of defects in bearings makes their diagnosis more challenging owing to the resultant complexity of the constituent parts of the acoustic emission (AE) signals. To address this issue, a new approach is proposed in this paper for the detection of multiple combined faults in bearings. The proposed methodology uses a deep neural network (DNN) architecture to effectively diagnose the combined defects. The DNN structure is based on the stacked denoising autoencoder non-mutually exclusive classifier (NMEC) method for combined modes. The NMEC-DNN is trained using data for a single fault and it classifies both single faults and multiple combined faults. The results of experiments conducted on AE data collected through an experimental test-bed demonstrate that the DNN achieves good classification performance with a maximum accuracy of 95%. The proposed method is compared with a multi-class classifier based on support vector machines (SVMs). The NMEC-DNN yields better diagnostic performance in comparison to the multi-class classifier based on SVM. The NMEC-DNN reduces the number of necessary data collections and improves the bearing fault diagnosis performance. PMID:29642466

  4. Hybrid classifiers methods of data, knowledge, and classifier combination

    CERN Document Server

    Wozniak, Michal

    2014-01-01

    This book delivers concise, focused knowledge on how hybridization can help improve the quality of computer classification systems. To give readers a clear picture of hybridization, it introduces the different levels at which hybridization can occur and highlights the problems encountered in such projects. Data and knowledge incorporation are covered first, followed by the still-growing area of classifier systems known as combined classifiers. The book covers these state-of-the-art topics together with the latest research results of the author and his team from the Department of Systems and Computer Networks, Wroclaw University of Technology, including classifiers based on feature space splitting, one-class classification, imbalanced data, and data stream classification.

  5. Combining deep residual neural network features with supervised machine learning algorithms to classify diverse food image datasets.

    Science.gov (United States)

    McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne

    2018-04-01

    Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method to maintain a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work we applied pretrained ResNet-152 and GoogleNet convolutional neural networks (CNNs), initially trained using ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with MatConvNet package, to extract features from food image datasets; Food 5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from CNNs and used to train machine learning classifiers including artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that using ResNet-152 deep features with SVM with RBF kernel can accurately detect food items with 99.4% accuracy using Food-5K validation food image dataset and 98.8% with Food-5K evaluation dataset using ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, ANN can achieve 91.34%, 99.28% when applied to Food-11 and RawFooT-DB food image datasets respectively and SVM with RBF kernel can achieve 64.98% with Food-101 image dataset. From this research it is clear that using deep CNN features can be used efficiently for diverse food item image classification. The work presented in this research shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
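
    The pipeline described here, freezing a pretrained CNN, taking its penultimate-layer activations as deep features, and training a conventional classifier on top, can be sketched as follows. The paper used MatConvNet, so the torchvision/scikit-learn framework, the preprocessing constants, and the `train_paths`/`train_labels` placeholders below are assumptions for illustration only.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from PIL import Image

# Load a pretrained ResNet and drop its final classification layer,
# so the forward pass returns the 2048-d deep feature vector.
resnet = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(paths):
    """Extract one deep feature vector per image path."""
    feats = []
    with torch.no_grad():
        for p in paths:
            x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
            feats.append(resnet(x).squeeze(0).numpy())
    return feats

# train_paths / train_labels are hypothetical placeholders for a food dataset:
# X_train = deep_features(train_paths)
# clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, train_labels)
```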

  6. Combining multiple classifiers for age classification

    CSIR Research Space (South Africa)

    Van Heerden, C

    2009-11-01

    Full Text Available The authors compare several different classifier combination methods on a single task, namely speaker age classification. This task is well suited to combination strategies, since significantly different feature classes are employed. Support vector...

  7. A Comparison of Spectral Angle Mapper and Artificial Neural Network Classifiers Combined with Landsat TM Imagery Analysis for Obtaining Burnt Area Mapping

    Directory of Open Access Journals (Sweden)

    Marko Scholze

    2010-03-01

    Full Text Available Satellite remote sensing, with its unique synoptic coverage capabilities, can provide accurate and immediately valuable information on fire analysis and post-fire assessment, including estimation of burnt areas. In this study, the potential of the combined use of Artificial Neural Network (ANN) and Spectral Angle Mapper (SAM) classifiers with Landsat TM satellite imagery for burnt area mapping was evaluated in a Mediterranean setting. As a case study one of the most catastrophic forest fires, which occurred near the capital of Greece during the summer of 2007, was used. The accuracy of the two algorithms in delineating the burnt area from the Landsat TM imagery, acquired shortly after the fire suppression, was determined by the classification accuracy results of the produced thematic maps. In addition, the derived burnt area estimates from the two classifiers were compared with independent estimates available for the study region, obtained from the analysis of higher spatial resolution satellite data. In terms of the overall classification accuracy, ANN outperformed (overall accuracy 90.29%, Kappa coefficient 0.878) the SAM classifier (overall accuracy 83.82%, Kappa coefficient 0.795). Total burnt area estimates from the two classifiers were also found to be in close agreement with the other available estimates for the study region, with a mean absolute percentage difference of ~1% for ANN and ~6.5% for SAM. The study demonstrates the potential of the algorithms examined here for detecting burnt areas in a typical Mediterranean setting.
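
    Of the two classifiers compared, the Spectral Angle Mapper is simple enough to state directly: each pixel is assigned to the reference spectrum with which its spectral angle is smallest. A minimal NumPy sketch is given below; the reference spectra and band values are invented for illustration and are not the study's actual endmembers.

```python
import numpy as np

def spectral_angle_mapper(pixels, references):
    """pixels: (n_pixels, n_bands); references: (n_classes, n_bands).
    Returns, per pixel, the index of the reference spectrum with the smallest angle."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    cos_angle = np.clip(p @ r.T, -1.0, 1.0)   # cosine of the spectral angle
    angles = np.arccos(cos_angle)             # radians, shape (n_pixels, n_classes)
    return np.argmin(angles, axis=1)

# Two toy reference spectra over 6 Landsat TM bands: "burnt" and "unburnt".
refs = np.array([[0.10, 0.12, 0.15, 0.20, 0.35, 0.40],
                 [0.05, 0.08, 0.10, 0.45, 0.30, 0.15]])
pix = np.array([[0.09, 0.11, 0.16, 0.22, 0.33, 0.38]])
print(spectral_angle_mapper(pix, refs))   # -> [0], closest to the "burnt" spectrum
```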

  8. Arabic Handwriting Recognition Using Neural Network Classifier

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... an OCR using a Neural Network classifier preceded by a set of preprocessing ... Artificial Neural Networks (ANNs), which we adopt in this research, consist of ... advantages and disadvantages of each technique. In [9], Khemiri ...

  9. Optical Neural Network Classifier Architectures

    National Research Council Canada - National Science Library

    Getbehead, Mark

    1998-01-01

    We present an adaptive opto-electronic neural network hardware architecture capable of exploiting parallel optics to realize real-time processing and classification of high-dimensional data for Air...

  10. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network is an efficient classifier, but unless the initial smoothing parameter is close to its optimal value it suffers from convergence problems and requires a long time to converge. To overcome this problem, this work proposes a logarithmic learning approach that replaces the squared-error cost function with a logarithmic one; minimizing this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic-learning generalized classifier neural network is compared with that of the standard one. Because the radial basis functions in the generalized classifier neural network keep the outputs within the operating range of the logarithm, the proposed logarithmic cost and its derivative remain continuous, so the fast convergence of logarithmic learning can be exploited. Owing to this fast convergence, training time is reduced by up to 99.2%, and classification performance may also be improved by up to 60%. According to the test results, the proposed method not only addresses the time requirement of the generalized classifier neural network but may also improve classification accuracy, making it an efficient way to reduce the training-time problem of this classifier. Copyright © 2014 Elsevier Ltd. All rights reserved.
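
    The claimed speed-up comes from replacing the squared-error cost with a logarithmic (cross-entropy-type) one, whose gradient does not vanish when the output unit saturates. The toy single-neuron comparison below illustrates that effect; it is a generic demonstration of the two cost functions on a sigmoid unit, not the paper's generalized classifier network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(loss, w, target=1.0, x=1.0, lr=0.5, steps=30):
    """Single sigmoid neuron, started badly saturated at the wrong answer."""
    for _ in range(steps):
        y = sigmoid(w * x)
        if loss == "squared":
            grad = (y - target) * y * (1 - y) * x   # d/dw of 0.5*(y - t)^2
        else:                                        # logarithmic (cross-entropy) cost
            grad = (y - target) * x                  # d/dw of -[t*log y + (1-t)*log(1-y)]
        w -= lr * grad
    return sigmoid(w * x)

# The squared-error neuron barely moves; the logarithmic one converges quickly.
print("squared-error output :", train("squared", w=-6.0))
print("logarithmic output   :", train("log", w=-6.0))
```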

  11. Neural Network Classifiers for Local Wind Prediction.

    Science.gov (United States)

    Kretzschmar, Ralf; Eckert, Pierre; Cattani, Daniel; Eggimann, Fritz

    2004-05-01

    This paper evaluates the quality of neural network classifiers for wind speed and wind gust prediction with prediction lead times between +1 and +24 h. The predictions were realized based on local time series and model data. The selection of appropriate input features was initiated by time series analysis and completed by empirical comparison of neural network classifiers trained on several choices of input features. The selected input features involved day time, yearday, features from a single wind observation device at the site of interest, and features derived from model data. The quality of the resulting classifiers was benchmarked against persistence for two different sites in Switzerland. The neural network classifiers exhibited superior quality when compared with persistence judged on a specific performance measure, hit and false-alarm rates.
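
    Both the neural classifiers and the persistence baseline are judged on hit and false-alarm rates, which is a short computation worth making explicit. The sketch below scores a forecast of gust events against observations and against persistence, i.e. forecasting that the last observed state continues. The data are synthetic and the false-alarm ratio definition (false alarms over all alarms) is one common convention, assumed here rather than taken from the paper.

```python
import numpy as np

def hit_false_alarm(forecast_event, observed_event):
    """Boolean arrays of equal length -> (hit rate, false-alarm ratio)."""
    hits = np.sum(forecast_event & observed_event)
    misses = np.sum(~forecast_event & observed_event)
    false_alarms = np.sum(forecast_event & ~observed_event)
    hit_rate = hits / max(hits + misses, 1)
    false_alarm_ratio = false_alarms / max(false_alarms + hits, 1)
    return hit_rate, false_alarm_ratio

rng = np.random.default_rng(0)
gust = rng.random(1000) > 0.8                       # observed gust events
model_forecast = gust ^ (rng.random(1000) > 0.9)    # imperfect classifier output
persistence = np.roll(gust, 1)                      # persistence: repeat the last observation

print("model      :", hit_false_alarm(model_forecast, gust))
print("persistence:", hit_false_alarm(persistence, gust))
```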

  12. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Roč. 10, č. 3 (2000), s. 417-428 ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant - others: MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords: neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  13. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as the outlier probability and regularization parameters. We suggest adapting the outlier probability and regularisation parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...

  14. Ensemble of classifiers based network intrusion detection system performance bound

    CSIR Research Space (South Africa)

    Mkuzangwe, Nenekazi NP

    2017-11-01

    Full Text Available This paper provides a performance bound of a network intrusion detection system (NIDS) that uses an ensemble of classifiers. Currently researchers rely on implementing the ensemble of classifiers based NIDS before they can determine the performance...

  15. Robust Combining of Disparate Classifiers Through Order Statistics

    Science.gov (United States)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and in general, the ith order statistic, are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.
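
    The combiners analyzed here, the median, the maximum, a general ith order statistic, and the trimmed variants built from linear combinations of the ordered outputs, all operate on the sorted vector of classifier scores for each class. A compact NumPy sketch follows; the three toy classifiers and their posterior values are invented, and only a simple trimmed mean is shown rather than the paper's full trim/spread analysis.

```python
import numpy as np

# Posterior estimates for 4 classes from 3 classifiers (one test sample).
outputs = np.array([[0.60, 0.20, 0.15, 0.05],    # classifier 1
                    [0.10, 0.55, 0.25, 0.10],    # classifier 2 (disagrees badly)
                    [0.50, 0.25, 0.15, 0.10]])   # classifier 3

ordered = np.sort(outputs, axis=0)        # order statistics per class, ascending

median_comb = np.median(outputs, axis=0)  # robust to the one outlying classifier
max_comb = ordered[-1]                    # ith order statistic with i = N (the maximum)
trim_comb = ordered[1:-1].mean(axis=0)    # trimmed mean: drop the lowest and highest score

for name, scores in [("median", median_comb), ("max", max_comb), ("trim", trim_comb)]:
    print(name, "-> class", int(np.argmax(scores)), scores.round(3))
```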

  16. Revealing effective classifiers through network comparison

    Science.gov (United States)

    Gallos, Lazaros K.; Fefferman, Nina H.

    2014-11-01

    The ability to compare complex systems can provide new insight into the fundamental nature of the processes captured, in ways that are otherwise inaccessible to observation. Here, we introduce the n-tangle method to directly compare two networks for structural similarity, based on the distribution of edge density in network subgraphs. We demonstrate that this method can efficiently introduce comparative analysis into network science and opens the road for many new applications. For example, we show how the construction of a “phylogenetic tree” across animal taxa according to their social structure can reveal commonalities in the behavioral ecology of the populations, or how students create similar networks according to the University size. Our method can be expanded to study many additional properties, such as network classification, changes during time evolution, convergence of growth models, and detection of structural changes during damage.

  17. Neural network classifier of attacks in IP telephony

    Science.gov (United States)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and for hardening the network. This analysis is typically based on statistical methods, and this article presents a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms, one of which is a multilayer perceptron neural network. The article describes the inner structure of the neural network used and its implementation. The learning set for this neural network is based on real attack data collected from an IP telephony honeypot called Dionaea; the data were collected, cleaned and aggregated before being used for training. After proper training, the neural network is capable of classifying 6 of the most commonly used types of VoIP attacks. Using a neural network classifier yields more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of networks that are logically or geographically divided, and to use information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precautionary steps against attacks.

  18. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    Directory of Open Access Journals (Sweden)

    Shehzad Khalid

    2014-01-01

    Full Text Available We present a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods-based modeling is presented that generates models of the various classes whilst identifying and filtering noisy training data. This noise-free data is then used to learn models for other classifiers such as GMM and SVM. A weight learning method is introduced to learn per-class weights for the different classifiers and construct an ensemble; for this purpose, a genetic algorithm is applied to search for the weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets and compared with standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method over its competitors, especially in the presence of class label noise and imbalanced classes.
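
    The core of the framework is a per-class weight matrix: each classifier's posterior for a class is scaled by a weight learned for that classifier/class pair, and the weighted scores are summed before the argmax. The sketch below shows that combination rule, with a naive random search standing in for the genetic algorithm used in the paper; the dimensions, simulated posteriors, and search budget are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_classifiers, n_classes, n_samples = 3, 4, 200

# Simulated per-classifier class posteriors and ground-truth labels.
probs = rng.dirichlet(np.ones(n_classes), size=(n_classifiers, n_samples))
labels = rng.integers(0, n_classes, size=n_samples)

def ensemble_accuracy(weights):
    # weights: (n_classifiers, n_classes); class-level weighted sum of posteriors.
    combined = np.einsum("kc,knc->nc", weights, probs)
    return np.mean(np.argmax(combined, axis=1) == labels)

# Crude random search over weight matrices; a genetic algorithm would evolve these instead.
best_w, best_acc = None, -1.0
for _ in range(500):
    w = rng.random((n_classifiers, n_classes))
    acc = ensemble_accuracy(w)
    if acc > best_acc:
        best_w, best_acc = w, acc

print("best ensemble accuracy on the training data:", round(best_acc, 3))
```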

  19. Classifying Radio Galaxies with the Convolutional Neural Network

    International Nuclear Information System (INIS)

    Aniyan, A. K.; Thorat, K.

    2017-01-01

    We present the application of a deep machine learning technique to classify radio images of extended sources on a morphological basis using convolutional neural networks (CNN). In this study, we have taken the case of the Fanaroff–Riley (FR) class of radio galaxies as well as radio galaxies with bent-tailed morphology. We have used archival data from the Very Large Array (VLA)—Faint Images of the Radio Sky at Twenty Centimeters survey and existing visually classified samples available in the literature to train a neural network for morphological classification of these categories of radio sources. Our training sample size for each of these categories is ∼200 sources, which has been augmented by rotated versions of the same. Our study shows that CNNs can classify images of the FRI and FRII and bent-tailed radio galaxies with high accuracy (maximum precision at 95%) using well-defined samples and a “fusion classifier,” which combines the results of binary classifications, while allowing for a mechanism to find sources with unusual morphologies. The individual precision is highest for bent-tailed radio galaxies at 95% and is 91% and 75% for the FRI and FRII classes, respectively, whereas the recall is highest for FRI and FRIIs at 91% each, while the bent-tailed class has a recall of 79%. These results show that our results are comparable to that of manual classification, while being much faster. Finally, we discuss the computational and data-related challenges associated with the morphological classification of radio galaxies with CNNs.

  20. Classifying Radio Galaxies with the Convolutional Neural Network

    Energy Technology Data Exchange (ETDEWEB)

    Aniyan, A. K.; Thorat, K. [Department of Physics and Electronics, Rhodes University, Grahamstown (South Africa)

    2017-06-01

    We present the application of a deep machine learning technique to classify radio images of extended sources on a morphological basis using convolutional neural networks (CNN). In this study, we have taken the case of the Fanaroff–Riley (FR) class of radio galaxies as well as radio galaxies with bent-tailed morphology. We have used archival data from the Very Large Array (VLA)—Faint Images of the Radio Sky at Twenty Centimeters survey and existing visually classified samples available in the literature to train a neural network for morphological classification of these categories of radio sources. Our training sample size for each of these categories is ∼200 sources, which has been augmented by rotated versions of the same. Our study shows that CNNs can classify images of the FRI and FRII and bent-tailed radio galaxies with high accuracy (maximum precision at 95%) using well-defined samples and a “fusion classifier,” which combines the results of binary classifications, while allowing for a mechanism to find sources with unusual morphologies. The individual precision is highest for bent-tailed radio galaxies at 95% and is 91% and 75% for the FRI and FRII classes, respectively, whereas the recall is highest for FRI and FRIIs at 91% each, while the bent-tailed class has a recall of 79%. These results show that our results are comparable to that of manual classification, while being much faster. Finally, we discuss the computational and data-related challenges associated with the morphological classification of radio galaxies with CNNs.

  1. Classifying Radio Galaxies with the Convolutional Neural Network

    Science.gov (United States)

    Aniyan, A. K.; Thorat, K.

    2017-06-01

    We present the application of a deep machine learning technique to classify radio images of extended sources on a morphological basis using convolutional neural networks (CNN). In this study, we have taken the case of the Fanaroff-Riley (FR) class of radio galaxies as well as radio galaxies with bent-tailed morphology. We have used archival data from the Very Large Array (VLA)—Faint Images of the Radio Sky at Twenty Centimeters survey and existing visually classified samples available in the literature to train a neural network for morphological classification of these categories of radio sources. Our training sample size for each of these categories is ˜200 sources, which has been augmented by rotated versions of the same. Our study shows that CNNs can classify images of the FRI and FRII and bent-tailed radio galaxies with high accuracy (maximum precision at 95%) using well-defined samples and a “fusion classifier,” which combines the results of binary classifications, while allowing for a mechanism to find sources with unusual morphologies. The individual precision is highest for bent-tailed radio galaxies at 95% and is 91% and 75% for the FRI and FRII classes, respectively, whereas the recall is highest for FRI and FRIIs at 91% each, while the bent-tailed class has a recall of 79%. These results show that our results are comparable to that of manual classification, while being much faster. Finally, we discuss the computational and data-related challenges associated with the morphological classification of radio galaxies with CNNs.

  2. Classifying emotion in Twitter using Bayesian network

    Science.gov (United States)

    Surya Asriadie, Muhammad; Syahrul Mubarok, Mohamad; Adiwijaya

    2018-03-01

    Language is used to express not only facts but also emotions, and emotions are noticeable in everything from a person's behavior to the social media statuses they write. Analysis of emotions in text is carried out in a variety of media, such as Twitter. This paper studies classification of emotions on Twitter using Bayesian networks because of their ability to model uncertainty and relationships between features. The result is two Bayesian-network-based models: the Full Bayesian Network (FBN) and the Bayesian Network with Mood Indicator (BNM). FBN is a massive Bayesian network in which each word is treated as a node. The study shows that the method used to train FBN is not very effective at producing the best model and performs worse than Naive Bayes: the F1-score for FBN is 53.71%, versus 54.07% for Naive Bayes. BNM is proposed as an alternative; it is based on an improvement of Multinomial Naive Bayes and has much lower computational complexity than FBN. Even though it is not better than FBN, the resulting model successfully improves on the performance of Multinomial Naive Bayes: the F1-score for the Multinomial Naive Bayes model is 51.49%, versus 52.14% for BNM.

  3. A convolutional neural network neutrino event classifier

    International Nuclear Information System (INIS)

    Aurisano, A.; Sousa, A.; Radovic, A.; Vahle, P.; Rocco, D.; Pawloski, G.; Himmel, A.; Niner, E.; Messier, M.D.; Psihas, F.

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  4. Using Neural Networks to Classify Digitized Images of Galaxies

    Science.gov (United States)

    Goderya, S. N.; McGuire, P. C.

    2000-12-01

    Automated classification of Galaxies into Hubble types is of paramount importance to study the large scale structure of the Universe, particularly as survey projects like the Sloan Digital Sky Survey complete their data acquisition of one million galaxies. At present it is not possible to find robust and efficient artificial intelligence based galaxy classifiers. In this study we will summarize progress made in the development of automated galaxy classifiers using neural networks as machine learning tools. We explore the Bayesian linear algorithm, the higher order probabilistic network, the multilayer perceptron neural network and Support Vector Machine Classifier. The performance of any machine classifier is dependent on the quality of the parameters that characterize the different groups of galaxies. Our effort is to develop geometric and invariant moment based parameters as input to the machine classifiers instead of the raw pixel data. Such an approach reduces the dimensionality of the classifier considerably, removes the effects of scaling and rotation, and makes it easier to solve for the unknown parameters in the galaxy classifier. To judge the quality of training and classification we develop the concept of Mathews coefficients for the galaxy classification community. Mathews coefficients are single numbers that quantify classifier performance even with unequal prior probabilities of the classes.
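
    The "Mathews coefficient" referred to here is the Matthews correlation coefficient, a single number in [-1, 1] that summarizes a confusion matrix even when the class priors are unequal. For a binary split it is computed as below; the confusion-matrix counts are toy values, and the multi-class generalization is not shown.

```python
import math

def matthews_coefficient(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Toy galaxy-classifier counts: 90 spirals found, 10 missed, 5 false positives.
print(matthews_coefficient(tp=90, tn=95, fp=5, fn=10))   # ~0.85
```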

  5. Human Activity Recognition by Combining a Small Number of Classifiers.

    Science.gov (United States)

    Nazabal, Alfredo; Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Ghahramani, Zoubin

    2016-09-01

    We consider the problem of daily human activity recognition (HAR) using multiple wireless inertial sensors, and specifically, HAR systems with a very low number of sensors, each one providing an estimation of the performed activities. We propose new Bayesian models to combine the output of the sensors. The models are based on a soft-output combination of individual classifiers to deal with the small number of sensors. We also incorporate the dynamic nature of human activities as a first-order homogeneous Markov chain. We develop both inductive and transductive inference methods for each model to be employed in supervised and semisupervised situations, respectively. Using different real HAR databases, we compare our classifier-combination models against a single classifier that employs all the signals from the sensors. Our models exhibit consistently a reduction of the error rate and an increase of robustness against sensor failures. Our models also outperform other classifier-combination models that do not consider soft outputs and a Markovian structure of the human activities.
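
    Two ingredients of the model can be sketched compactly: combining the soft (probabilistic) outputs of the few per-sensor classifiers, and smoothing the resulting sequence with a first-order Markov chain over activities. The code below multiplies per-sensor posteriors and runs a simple forward pass; the activity set, transition matrix, and sensor outputs are made up, and the paper's full Bayesian inference is not reproduced.

```python
import numpy as np

activities = ["walk", "sit", "stand"]
# Soft outputs from 2 sensors over 4 time steps: shape (sensors, T, classes).
sensor_probs = np.array([
    [[0.7, 0.2, 0.1], [0.4, 0.4, 0.2], [0.2, 0.6, 0.2], [0.1, 0.7, 0.2]],
    [[0.6, 0.3, 0.1], [0.5, 0.3, 0.2], [0.3, 0.5, 0.2], [0.2, 0.6, 0.2]],
])
# First-order Markov chain: activities tend to persist between steps.
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

# Soft combination: product of per-sensor posteriors, renormalized per step.
emission = sensor_probs.prod(axis=0)
emission /= emission.sum(axis=1, keepdims=True)

# Forward pass: blend each step's evidence with the propagated previous belief.
belief = emission[0]
sequence = [activities[np.argmax(belief)]]
for t in range(1, emission.shape[0]):
    belief = (belief @ A) * emission[t]
    belief /= belief.sum()
    sequence.append(activities[np.argmax(belief)])
print(sequence)
```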

  6. Balanced sensitivity functions for tuning multi-dimensional Bayesian network classifiers

    NARCIS (Netherlands)

    Bolt, J.H.; van der Gaag, L.C.

    Multi-dimensional Bayesian network classifiers are Bayesian networks of restricted topological structure, which are tailored to classifying data instances into multiple dimensions. Like more traditional classifiers, multi-dimensional classifiers are typically learned from data and may include

  7. Transforming Musical Signals through a Genre Classifying Convolutional Neural Network

    Science.gov (United States)

    Geng, S.; Ren, G.; Ogihara, M.

    2017-05-01

    Convolutional neural networks (CNNs) have been successfully applied on both discriminative and generative modeling for music-related tasks. For a particular task, the trained CNN contains information representing the decision making or the abstracting process. One can hope to manipulate existing music based on this 'informed' network and create music with new features corresponding to the knowledge obtained by the network. In this paper, we propose a method to utilize the stored information from a CNN trained on musical genre classification task. The network was composed of three convolutional layers, and was trained to classify five-second song clips into five different genres. After training, randomly selected clips were modified by maximizing the sum of outputs from the network layers. In addition to the potential of such CNNs to produce interesting audio transformation, more information about the network and the original music could be obtained from the analysis of the generated features since these features indicate how the network 'understands' the music.

  8. A Critical Evaluation of Network and Pathway-Based Classifiers for Outcome Prediction in Breast Cancer

    NARCIS (Netherlands)

    C. Staiger (Christine); S. Cadot; R Kooter; M. Dittrich (Marcus); T. Müller (Tobias); G.W. Klau (Gunnar); L.F.A. Wessels (Lodewyk)

    2012-01-01

    Recently, several classifiers that combine primary tumor data, like gene expression data, and secondary data sources, such as protein-protein interaction networks, have been proposed for predicting outcome in breast cancer. In these approaches, new composite features are typically

  9. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

    Full Text Available We introduce a novel approach to detect salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses the logistic regression approach to map the regional feature vector to a saliency score. Four saliency cues are used in our approach, including color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which serves as an atomic feature of each segmented region in the image. By mapping a four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space, and we finally fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.

  10. Combining Biometric Fractal Pattern and Particle Swarm Optimization-Based Classifier for Fingerprint Recognition

    Directory of Open Access Journals (Sweden)

    Chia-Hung Lin

    2010-01-01

    Full Text Available This paper proposes combining the biometric fractal pattern and a particle swarm optimization (PSO)-based classifier for fingerprint recognition. Fingerprints have arch, loop, whorl, and accidental morphologies, and embed singular points, resulting in the establishment of fingerprint individuality. An automatic fingerprint identification system consists of two stages: digital image processing (DIP) and pattern recognition. DIP is used to convert the input to binary images, refine out noise, and locate the reference point. For binary images, Katz's algorithm is employed to estimate the fractal dimension (FD) from a two-dimensional (2D) image. Biometric features are extracted as fractal patterns using different FDs. A probabilistic neural network (PNN) classifier is used to compare the fractal patterns against a small-scale database. A PSO algorithm is used to tune the optimal parameters and heighten the accuracy. For 30 subjects in the laboratory, the proposed classifier demonstrates greater efficiency and higher accuracy in fingerprint recognition.
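
    The one fully specified ingredient here is Katz's estimator of the fractal dimension of a curve, FD = log10(n) / (log10(n) + log10(d/L)), where L is the total length of the curve, d the maximum distance from the first point, and n the number of steps. A direct NumPy implementation is below, applied to a toy 1-D profile rather than a real fingerprint image; how the paper samples 2D image curves is not reproduced.

```python
import numpy as np

def katz_fd(signal):
    """Katz fractal dimension of a 1-D curve sampled at unit intervals."""
    points = np.column_stack([np.arange(len(signal)), signal])
    steps = np.diff(points, axis=0)
    L = np.sum(np.sqrt((steps ** 2).sum(axis=1)))                   # total curve length
    d = np.max(np.sqrt(((points - points[0]) ** 2).sum(axis=1)))    # maximum extent
    n = len(signal) - 1                                              # number of steps
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

t = np.linspace(0, 4 * np.pi, 200)
print("straight line:", round(katz_fd(t), 3))                        # exactly 1.0
print("wiggly curve :", round(katz_fd(np.sin(8 * t) + 0.3 * np.random.randn(200)), 3))
```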

  11. Classifying images using restricted Boltzmann machines and convolutional neural networks

    Science.gov (United States)

    Zhao, Zhijun; Xu, Tongde; Dai, Chenyu

    2017-07-01

    To improve the feature recognition ability of deep model transfer learning, we propose a hybrid deep transfer learning method for image classification based on restricted Boltzmann machines (RBM) and convolutional neural networks (CNNs). It integrates the learning abilities of the two models, performing subject classification by extracting structural higher-order statistical features of images. When the method transfers the trained convolutional neural networks to the target datasets, the fully-connected layers can be replaced by restricted Boltzmann machine layers; the restricted Boltzmann machine layers and the Softmax classifier are then retrained, and a BP neural network can be used to fine-tune the hybrid model. The restricted Boltzmann machine layers not only fully integrate the whole feature maps, but also learn the statistical features of the target datasets in the sense of maximum log-likelihood, thus removing the effects caused by the content differences between datasets. The experimental results show that the proposed method improves the accuracy of image classification, outperforming other methods on the Pascal VOC2007 and Caltech101 datasets.

  12. Classifying magnetic resonance image modalities with convolutional neural networks

    Science.gov (United States)

    Remedios, Samuel; Pham, Dzung L.; Butman, John A.; Roy, Snehashis

    2018-02-01

    Magnetic Resonance (MR) imaging allows the acquisition of images with different contrast properties depending on the acquisition protocol and the magnetic properties of tissues. Many MR brain image processing techniques, such as tissue segmentation, require multiple MR contrasts as inputs, and each contrast is treated differently. Thus it is advantageous to automate the identification of image contrasts for various purposes, such as facilitating image processing pipelines, and managing and maintaining large databases via content-based image retrieval (CBIR). Most automated CBIR techniques focus on a two-step process: extracting features from data and classifying the image based on these features. We present a novel 3D deep convolutional neural network (CNN)-based method for MR image contrast classification. The proposed CNN automatically identifies the MR contrast of an input brain image volume. Specifically, we explored three classification problems: (1) identify T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) contrasts, (2) identify pre- vs post-contrast T1, (3) identify pre- vs post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites and multiple scanners were used. To evaluate each task, the proposed model was trained on 2137 images and tested on the remaining 1281 images. Results showed that image volumes were correctly classified with 97.57% accuracy.

  13. Learning Bayesian network classifiers for credit scoring using Markov Chain Monte Carlo search

    NARCIS (Netherlands)

    Baesens, B.; Egmont-Petersen, M.; Castelo, R.; Vanthienen, J.

    2001-01-01

    In this paper, we will evaluate the power and usefulness of Bayesian network classifiers for credit scoring. Various types of Bayesian network classifiers will be evaluated and contrasted including unrestricted Bayesian network classifiers learnt using Markov Chain Monte Carlo (MCMC) search.

  14. Generating prior probabilities for classifiers of brain tumours using belief networks

    Directory of Open Access Journals (Sweden)

    Arvanitis Theodoros N

    2007-09-01

    Full Text Available Background: Numerous methods for classifying brain tumours based on magnetic resonance spectra and imaging have been presented in the last 15 years. Generally, these methods use supervised machine learning to develop a classifier from a database of cases for which the diagnosis is already known. However, little has been published on developing classifiers based on mixed modalities, e.g. combining imaging information with spectroscopy. In this work a method of generating probabilities of tumour class from anatomical location is presented. Methods: The method of "belief networks" is introduced as a means of generating probabilities that a tumour is any given type. The belief networks are constructed using a database of paediatric tumour cases consisting of data collected over five decades; the problems associated with using this data are discussed. To verify the usefulness of the networks, an application of the method is presented in which prior probabilities were generated and combined with a classification of tumours based solely on MRS data. Results: Belief networks were constructed from a database of over 1300 cases. These can be used to generate a probability that a tumour is any given type. Networks are presented for astrocytoma grades I and II, astrocytoma grades III and IV, ependymoma, pineoblastoma, primitive neuroectodermal tumour (PNET), germinoma, medulloblastoma, craniopharyngioma and a group representing rare tumours, "other". Using the network to generate prior probabilities for classification improves the accuracy when compared with generating prior probabilities based on class prevalence. Conclusion: Bayesian belief networks are a simple way of using discrete clinical information to generate probabilities usable in classification. The belief network method can be robust to incomplete datasets. Inclusion of a priori knowledge is an effective way of improving classification of brain tumours by non-invasive methods.
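
    The way the belief-network priors are used downstream is ordinary Bayesian updating: the class probabilities derived from anatomical location multiply the soft scores from the MRS-only classifier and are renormalized. A tiny sketch of that combination step follows; the tumour classes and all numbers are invented, and the construction of the belief network itself is not shown.

```python
import numpy as np

classes = ["astrocytoma_I_II", "ependymoma", "medulloblastoma"]

# Prior P(class | anatomical location) from the belief network (assumed values).
prior = np.array([0.50, 0.15, 0.35])
# Soft output of the MRS-only classifier for one patient (assumed values).
mrs_score = np.array([0.30, 0.45, 0.25])

# Posterior is proportional to prior times classifier score.
posterior = prior * mrs_score
posterior /= posterior.sum()

for c, p in zip(classes, posterior):
    print(f"{c:18s} {p:.3f}")
# With a flat prior the MRS score alone would pick ependymoma; the location
# prior shifts the decision toward the classes that actually occur at that site.
```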

  15. MAMMOGRAMS ANALYSIS USING SVM CLASSIFIER IN COMBINED TRANSFORMS DOMAIN

    Directory of Open Access Journals (Sweden)

    B.N. Prathibha

    2011-02-01

    Full Text Available Breast cancer is a primary cause of mortality and morbidity in women. Reports reveal that the earlier abnormalities are detected, the better the improvement in survival. Digital mammograms are one of the most effective means for detecting possible breast anomalies at early stages. Digital mammograms supported with Computer Aided Diagnostic (CAD) systems help the radiologists in taking reliable decisions. The proposed CAD system extracts wavelet features and spectral features for the better classification of mammograms. The Support Vector Machines classifier is used to analyze 206 mammogram images from the MIAS database pertaining to the severity of abnormality, i.e., benign and malign. The proposed system gives 93.14% accuracy for discrimination between normal-malign samples, 87.25% accuracy for normal-benign samples and 89.22% accuracy for benign-malign samples. The study reveals that features extracted in the hybrid transform domain with an SVM classifier prove to be a promising tool for analysis of mammograms.

  16. Neural-network classifiers for automatic real-world aerial image recognition

    Science.gov (United States)

    Greenberg, Shlomo; Guterman, Hugo

    1996-08-01

    We describe the application of the multilayer perceptron (MLP) network and a version of the adaptive resonance theory version 2-A (ART 2-A) network to the problem of automatic aerial image recognition (AAIR). The classification of aerial images, independent of their positions and orientations, is required for automatic tracking and target recognition. Invariance is achieved by the use of different invariant feature spaces in combination with supervised and unsupervised neural networks. The performance of neural-network-based classifiers in conjunction with several types of invariant AAIR global features, such as the Fourier-transform space, Zernike moments, central moments, and polar transforms, is examined. The advantages of this approach are discussed. The performance of the MLP network is compared with that of a classical correlator. The MLP neural-network correlator outperformed the binary phase-only filter (BPOF) correlator. It was found that the ART 2-A distinguished itself with its speed and its low number of required training vectors. However, only the MLP classifier was able to deal with a combination of shift and rotation geometric distortions.

  17. Deep Convolutional Neural Networks for Classifying Body Constitution Based on Face Image.

    Science.gov (United States)

    Huan, Er-Yang; Wen, Gui-Hua; Zhang, Shi-Jun; Li, Dan-Yang; Hu, Yang; Chang, Tian-Yuan; Wang, Qing; Huang, Bing-Lin

    2017-01-01

    Body constitution classification is the basis and core content of traditional Chinese medicine constitution research. Its aim is to extract the relevant laws from the complex constitution phenomenon and ultimately build a constitution classification system. Traditional identification methods, such as questionnaires, have the disadvantages of inefficiency and low accuracy. This paper proposes a body constitution recognition algorithm based on a deep convolutional neural network, which can classify individual constitution types according to face images. The proposed model first uses the convolutional neural network to extract the features of the face image and then combines the extracted features with color features. Finally, the fusion features are input to a Softmax classifier to get the classification result. Different comparison experiments show that the algorithm proposed in this paper can achieve an accuracy of 65.29% for constitution classification, and its performance was accepted by Chinese medicine practitioners.

  18. Variants of the Borda count method for combining ranked classifier hypotheses

    NARCIS (Netherlands)

    van Erp, Merijn; Schomaker, Lambert; Schomaker, Lambert; Vuurpijl, Louis

    2000-01-01

    The Borda count is a simple yet effective method of combining rankings. In pattern recognition, classifiers are often able to return a ranked set of results. Several experiments have been conducted to test the ability of the Borda count and two variant methods to combine these ranked classifier
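
    The Borda count itself is a one-liner: each classifier ranks the classes, every class receives points inversely related to its rank position from each classifier, and the points are summed. A small sketch follows; the three classifiers and four class labels are hypothetical, and the variant methods studied in the paper are not reproduced.

```python
from collections import defaultdict

def borda_count(rankings):
    """rankings: list of class lists, best first. Returns classes sorted by total score."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, cls in enumerate(ranking):
            scores[cls] += n - 1 - position   # top rank gets n-1 points, last gets 0
    return sorted(scores, key=scores.get, reverse=True)

rankings = [["a", "b", "c", "d"],   # classifier 1
            ["b", "a", "d", "c"],   # classifier 2
            ["a", "c", "b", "d"]]   # classifier 3
print(borda_count(rankings))        # -> ['a', 'b', 'c', 'd']
```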

  19. Lung Nodule Image Classification Based on Local Difference Pattern and Combined Classifier.

    Science.gov (United States)

    Mao, Keming; Deng, Zhuofu

    2016-01-01

    This paper proposes a novel lung nodule classification method for low-dose CT images. The method includes two stages. First, Local Difference Pattern (LDP) is proposed to encode the feature representation, which is extracted by comparing intensity difference along circular regions centered at the lung nodule. Then, the single-center classifier is trained based on LDP. Due to the diversity of feature distribution for different class, the training images are further clustered into multiple cores and the multicenter classifier is constructed. The two classifiers are combined to make the final decision. Experimental results on public dataset show the superior performance of LDP and the combined classifier.

  20. Lung Nodule Image Classification Based on Local Difference Pattern and Combined Classifier

    Directory of Open Access Journals (Sweden)

    Keming Mao

    2016-01-01

    Full Text Available This paper proposes a novel lung nodule classification method for low-dose CT images. The method includes two stages. First, Local Difference Pattern (LDP) is proposed to encode the feature representation, which is extracted by comparing intensity difference along circular regions centered at the lung nodule. Then, the single-center classifier is trained based on LDP. Due to the diversity of feature distribution for different class, the training images are further clustered into multiple cores and the multicenter classifier is constructed. The two classifiers are combined to make the final decision. Experimental results on public dataset show the superior performance of LDP and the combined classifier.

  1. Diagnostic Classifiers: Revealing how Neural Networks Process Hierarchical Structure

    NARCIS (Netherlands)

    Veldhoen, S.; Hupkes, D.; Zuidema, W.

    2016-01-01

    We investigate how neural networks can be used for hierarchical, compositional semantics. To this end, we define the simple but nontrivial artificial task of processing nested arithmetic expressions and study whether different types of neural networks can learn to add and subtract. We find that

  2. A Bayesian method for comparing and combining binary classifiers in the absence of a gold standard

    Directory of Open Access Journals (Sweden)

    Keith Jonathan M

    2012-07-01

    Full Text Available Background: Many problems in bioinformatics involve classification based on features such as sequence, structure or morphology. Given multiple classifiers, two crucial questions arise: how does their performance compare, and how can they best be combined to produce a better classifier? A classifier can be evaluated in terms of sensitivity and specificity using benchmark, or gold standard, data, that is, data for which the true classification is known. However, a gold standard is not always available. Here we demonstrate that a Bayesian model for comparing medical diagnostics without a gold standard can be successfully applied in the bioinformatics domain, to genomic scale data sets. We present a new implementation, which unlike previous implementations is applicable to any number of classifiers. We apply this model, for the first time, to the problem of finding the globally optimal logical combination of classifiers. Results: We compared three classifiers of protein subcellular localisation, and evaluated our estimates of sensitivity and specificity against estimates obtained using a gold standard. The method overestimated sensitivity and specificity with only a small discrepancy, and correctly ranked the classifiers. Diagnostic tests for swine flu were then compared on a small data set. Lastly, classifiers for a genome-wide association study of macular degeneration with 541094 SNPs were analysed. In all cases, run times were feasible, and results precise. The optimal logical combination of classifiers was also determined for all three data sets. Code and data are available from http://bioinformatics.monash.edu.au/downloads/. Conclusions: The examples demonstrate the methods are suitable for both small and large data sets, applicable to the wide range of bioinformatics classification problems, and robust to dependence between classifiers. In all three test cases, the globally optimal logical combination of the classifiers was found to be

  3. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    Science.gov (United States)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because an algorithm may work very well on one set of images with, say, illumination changes but may not work properly on another set of image variations, such as expression variations. This study is motivated by the fact that no single classifier can claim to show generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also taking into account the question of the suitability of any given classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or another. These classifiers are then combined into an ensemble classifier by two different strategies, weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.

  4. Strategies for Transporting Data Between Classified and Unclassified Networks

    Science.gov (United States)

    2016-03-01

    datagram protocol (UDP) must be used. The UDP is typically used when speed is a higher priority than data integrity, such as in music or video streaming ...and the exit point of data are separate and can be tightly controlled. This does effectively prevent the comingling of data and is used in industry to...perform functions such as streaming video and audio from secure to insecure networks (ref. 1). A second disadvantage lies in the fact that the

  5. Use of artificial neural networks and geographic objects for classifying remote sensing imagery

    Directory of Open Access Journals (Sweden)

    Pedro Resende Silva

    2014-06-01

    Full Text Available The aim of this study was to develop a methodology for mapping land use and land cover in the northern region of Minas Gerais state, where, in addition to agricultural land, the landscape is dominated by native cerrado, deciduous forests, and extensive areas of vereda. Using forest inventory data, as well as RapidEye, Landsat TM and MODIS imagery, three specific objectives were defined: (1) to test use of image segmentation techniques for an object-based classification encompassing spectral, spatial and temporal information, (2) to test use of high spatial resolution RapidEye imagery combined with Landsat TM time series imagery for capturing the effects of seasonality, and (3) to classify data using Artificial Neural Networks. Using MODIS time series and forest inventory data, time signatures were extracted from the dominant vegetation formations, enabling selection of the best periods of the year to be represented in the classification process. Objects created with the segmentation of RapidEye images, along with the Landsat TM time series images, were classified by ten different Multilayer Perceptron network architectures. Results showed that the methodology in question meets both the purposes of this study and the characteristics of the local plant life. With excellent accuracy values for native classes, the study showed the importance of a well-structured database for classification and the importance of suitable image segmentation to meet specific purposes.

  6. Constructing and Classifying Email Networks from Raw Forensic Images

    Science.gov (United States)

    2016-09-01

    AUC value will be closer to 1. Figure 2.6 compares 3 different ROC curves. Figure 2.6. ROC curves compared. The dashed black curve at the top has the...main difference was the computation time. For example, on a 1275-node jazz musician network, the fast algorithm ran to completion in about one...königsberg bridges,” Scientific American , vol. 189, no. 1, pp. 66–70, 1953. [8] N. Biggs, E. K. Lloyd, and R. J. Wilson, Graph Theory, 1736-1936. Great

  7. Classifying medical relations in clinical text via convolutional neural networks.

    Science.gov (United States)

    He, Bin; Guan, Yi; Dai, Rui

    2018-05-16

    Deep learning research on relation classification has achieved solid performance in the general domain. This study proposes a convolutional neural network (CNN) architecture with a multi-pooling operation for medical relation classification on clinical records and explores a loss function with a category-level constraint matrix. Experiments using the 2010 i2b2/VA relation corpus demonstrate that these models, which do not depend on any external features, outperform previous single-model methods, and that our best model is competitive with the existing ensemble-based method. Copyright © 2018. Published by Elsevier B.V.

  8. Feature selection for Bayesian network classifiers using the MDL-FS score

    NARCIS (Netherlands)

    Drugan, Madalina M.; Wiering, Marco A.

    When constructing a Bayesian network classifier from data, the more or less redundant features included in a dataset may bias the classifier and as a consequence may result in a relatively poor classification accuracy. In this paper, we study the problem of selecting appropriate subsets of features

  9. A deep convolutional neural network model to classify heartbeats.

    Science.gov (United States)

    Acharya, U Rajendra; Oh, Shu Lih; Hagiwara, Yuki; Tan, Jen Hong; Adam, Muhammad; Gertych, Arkadiusz; Tan, Ru San

    2017-10-01

    The electrocardiogram (ECG) is a standard test used to monitor the activity of the heart. Many cardiac abnormalities are manifested in the ECG, including arrhythmia, which is a general term that refers to an abnormal heart rhythm. The basis of arrhythmia diagnosis is the identification of normal versus abnormal individual heart beats, and their correct classification into different diagnoses, based on ECG morphology. Heartbeats can be sub-divided into five categories, namely non-ectopic, supraventricular ectopic, ventricular ectopic, fusion, and unknown beats. It is challenging and time-consuming to distinguish these heartbeats on the ECG as these signals are typically corrupted by noise. We developed a 9-layer deep convolutional neural network (CNN) to automatically identify 5 different categories of heartbeats in ECG signals. Our experiment was conducted on original and noise-attenuated sets of ECG signals derived from a publicly available database. This set was artificially augmented to even out the number of instances across the 5 classes of heartbeats and filtered to remove high-frequency noise. The CNN was trained using the augmented data and achieved an accuracy of 94.03% and 93.47% in the diagnostic classification of heartbeats in original and noise-free ECGs, respectively. When the CNN was trained with highly imbalanced data (original dataset), the accuracy of the CNN reduced to 89.07% and 89.3% in noisy and noise-free ECGs. When properly trained, the proposed CNN model can serve as a tool for screening of ECG to quickly identify different types and frequency of arrhythmic heartbeats. Copyright © 2017 Elsevier Ltd. All rights reserved.
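
    The essential architecture described is a deep 1-D CNN over fixed-length ECG beat segments with a 5-way softmax head. The miniature Keras sketch below shows the shape of such a model; the layer sizes, the 260-sample beat length, and the random placeholder data are assumptions for illustration, not the authors' 9-layer configuration or the MIT-BIH-style data they used.

```python
import numpy as np
import tensorflow as tf

n_beats, beat_len, n_classes = 512, 260, 5
X = np.random.randn(n_beats, beat_len, 1).astype("float32")   # placeholder beat segments
y = np.random.randint(0, n_classes, size=n_beats)              # placeholder labels

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu", input_shape=(beat_len, 1)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),     # 5 heartbeat categories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```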

  10. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations.

    Science.gov (United States)

    Zhang, Yi; Ren, Jinchang; Jiang, Jianmin

    2015-01-01

    Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization-based nonparametric method in this context. Recently, it has been found that SVM in some cases is equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitate soft decision making. In total four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized in terms of Gaussian/non-Gaussian distributed and balanced/unbalanced samples, which are then further used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions.
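
    The paper does not publish code, so the following is only a hedged sketch of one plausible MLC+SVM fusion: a Gaussian maximum likelihood classifier (quadratic discriminant analysis) and a probability-calibrated SVM whose class posteriors are averaged for soft decision making. The averaging rule and the example dataset are assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mlc = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)        # Gaussian MLC
svm = SVC(probability=True, gamma="scale").fit(X_tr, y_tr)   # probabilistic SVM

# Simple fusion: average the two posterior estimates, then take the argmax.
proba = 0.5 * (mlc.predict_proba(X_te) + svm.predict_proba(X_te))
y_hat = proba.argmax(axis=1)
print("combined accuracy:", (y_hat == y_te).mean())
```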

  11. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations

    Directory of Open Access Journals (Sweden)

    Yi Zhang

    2015-01-01

    Full Text Available Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization-based nonparametric method in this context. Recently, it has been found that SVM in some cases is equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitate soft decision making. In total four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized in terms of Gaussian/non-Gaussian distributed and balanced/unbalanced samples, which are then further used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions.

  12. Using Conjugate Gradient Network to Classify Stress Level of Patients.

    Directory of Open Access Journals (Sweden)

    Er. S. Pawar

    2013-02-01

    Full Text Available Diagnosis of stress is important because it can cause many diseases, e.g., heart disease, headache, migraine, sleep problems, irritability, etc. Diagnosis of stress in patients often involves acquisition of biological signals, for example heart rate, electrocardiogram (ECG), electromyography (EMG) signals, etc. Stress diagnosis using biomedical signals is difficult, and since the biomedical signals are too complex to generate any rule, an experienced person or expert is needed to determine stress levels. Also, it is not feasible to use all the features that are available or possible to extract from the signal; relevant features should be chosen from the extracted features that are capable of diagnosing stress. Electronic devices are increasingly being seen in the field of medicine for diagnosis, therapy, checking of stress levels, etc. The research and development work of medical electronics engineers leads to the manufacturing of sophisticated diagnostic medical equipment needed to ensure good health care. Biomedical engineering combines the design and problem-solving skills of engineering with medical and biological sciences to improve health care diagnosis and treatment.

  13. Combined Approach of PNN and Time-Frequency as the Classifier for Power System Transient Problems

    Directory of Open Access Journals (Sweden)

    Aslam Pervez Memon

    2013-04-01

    Full Text Available The transients in the power system cause serious disturbances in the reliability, safety and economy of the system. Transient signals possess nonstationary characteristics in which frequency as well as time-varying information is required for the analysis. Hence, it is vital first to detect and classify the type of transient fault and then to mitigate it. This article proposes a time-frequency and FFNN (Feedforward Neural Network) approach for the classification of power system transient problems. In this work all the major categories of transients are simulated, de-noised, and decomposed with DWT (Discrete Wavelet Transform) and MRA (Multiresolution Analysis) algorithms, and then distinctive features are extracted to obtain an optimal vector as input for training the PNN (Probabilistic Neural Network) classifier. The simulation results of the proposed approach prove its simplicity, accuracy and effectiveness for the automatic detection and classification of PST (Power System Transient) types.
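
    A sketch of the feature-extraction idea only: DWT/MRA energy features per sub-band feeding a tiny Gaussian-kernel PNN (essentially a Parzen-window classifier). The wavelet choice ('db4'), decomposition level, smoothing parameter and toy signals are illustrative assumptions, not values from the paper.

```python
import numpy as np
import pywt

def dwt_energy_features(signal, wavelet="db4", level=4):
    """Relative energy of each DWT sub-band -> compact feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

class SimplePNN:
    """Minimal probabilistic neural network: Gaussian kernel density per class."""
    def __init__(self, sigma=0.05):
        self.sigma = sigma
    def fit(self, X, y):
        self.X, self.y, self.classes = np.asarray(X), np.asarray(y), np.unique(y)
        return self
    def predict(self, X):
        preds = []
        for x in np.asarray(X):
            d2 = np.sum((self.X - x) ** 2, axis=1)
            k = np.exp(-d2 / (2 * self.sigma ** 2))          # kernel activations
            scores = [k[self.y == c].mean() for c in self.classes]
            preds.append(self.classes[int(np.argmax(scores))])
        return np.array(preds)

# Toy usage: two synthetic "transient" classes differing in noise level.
rng = np.random.default_rng(0)
sigs = [np.sin(0.1 * np.arange(512)) + rng.normal(0, s, 512)
        for s in (0.1,) * 20 + (1.0,) * 20]
X = np.array([dwt_energy_features(s) for s in sigs])
y = np.array([0] * 20 + [1] * 20)
print(SimplePNN().fit(X, y).predict(X[:5]))
```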

  14. Emotion recognition from speech by combining databases and fusion of classifiers

    NARCIS (Netherlands)

    Lefter, I.; Rothkrantz, L.J.M.; Wiggers, P.; Leeuwen, D.A. van

    2010-01-01

    We explore possibilities for enhancing the generality, portability and robustness of emotion recognition systems by combining databases and by fusion of classifiers. In a first experiment, we investigate the performance of an emotion detection system tested on a certain database given that it is

  15. A Constrained Multi-Objective Learning Algorithm for Feed-Forward Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    M. Njah

    2017-06-01

    Full Text Available This paper proposes a new approach to address the optimal design of a Feed-forward Neural Network (FNN) based classifier. The originality of the proposed methodology, called CMOA, lies in the use of a new constraint-handling technique based on a self-adaptive penalty procedure in order to direct the entire search effort towards finding only Pareto optimal solutions that are acceptable. Neurons and connections of the FNN classifier are dynamically built during the learning process. The approach includes differential evolution to create new individuals and then keeps only the non-dominated ones as the basis for the next generation. The designed FNN classifier is applied to six binary classification benchmark problems, obtained from the UCI repository, and the results indicate the advantages of the proposed approach over other existing multi-objective evolutionary neural network classifiers reported recently in the literature.

  16. An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data

    Directory of Open Access Journals (Sweden)

    Evangelos Stromatias

    2017-06-01

    Full Text Available This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.

  17. An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data.

    Science.gov (United States)

    Stromatias, Evangelos; Soto, Miguel; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabé

    2017-01-01

    This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.
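
    A rough sketch of the histogram idea described above: spike events from the layer feeding the classifier are accumulated into per-neuron count vectors (one frame per sample) and a linear classifier is trained on those frames with stochastic gradient descent. The event format, neuron count and toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

N_NEURONS = 100   # neurons in the layer feeding the classifier (assumed)

def spike_histogram(events, n_neurons=N_NEURONS):
    """events: iterable of (timestamp, neuron_id) spikes for one sample."""
    hist = np.zeros(n_neurons)
    for _, neuron_id in events:
        hist[neuron_id] += 1
    return hist

# Toy data: class 0 spikes mostly in the first half of the layer, class 1 in the second.
rng = np.random.default_rng(1)
def fake_events(cls, n=300):
    lo, hi = (0, 50) if cls == 0 else (50, 100)
    return [(t, int(rng.integers(lo, hi))) for t in range(n)]

samples = [fake_events(c) for c in [0, 1] * 100]
X = np.array([spike_histogram(ev) for ev in samples])
y = np.array([0, 1] * 100)

clf = SGDClassifier(max_iter=1000).fit(X, y)   # linear classifier trained with SGD
print("training accuracy:", clf.score(X, y))
```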

  18. The use of hyperspectral data for tree species discrimination: Combining binary classifiers

    CSIR Research Space (South Africa)

    Dastile, X

    2010-11-01

    Full Text Available Snippet from the full text: in the illustrated classification system, a new sample is assigned to class 1 under 5-nearest-neighbour classification. Given a learning task {(x1, t1), (x2, t2), …, (xp, tp)}, with feature vectors xi ∈ Rn and labels ti drawn from c classes, binary classifiers are combined to address the multiclass problem. Cited references include a review on the combination of binary classifiers in multiclass problems (Springer Science and Business Media B.V.) and Dietterich, T.G. and Bakiri, G. (1995), Solving Multiclass Learning Problems via Error-Correcting Output Codes, AI Access Foundation.

  19. Automatic construction of a recurrent neural network based classifier for vehicle passage detection

    Science.gov (United States)

    Burnaev, Evgeny; Koptelov, Ivan; Novikov, German; Khanipov, Timur

    2017-03-01

    Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for detection of a vehicle passage through a checkpoint. As an input to the classifier we use multidimensional signals of various sensors that are installed on the checkpoint. The obtained results demonstrate that the previous approach of handcrafting a classifier, consisting of a set of deterministic rules, can be successfully replaced by automatic RNN training on appropriately labelled data.
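
    A minimal sketch of an LSTM-based binary classifier over multichannel checkpoint sensor signals; the sequence length, channel count and hidden size are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class PassageLSTM(nn.Module):
    def __init__(self, n_channels=6, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden) -> final hidden state
        return self.head(h_n[-1])      # one logit per sequence: passage or not

model = PassageLSTM()
logits = model(torch.randn(4, 200, 6))   # 4 sensor sequences of 200 time steps
probs = torch.sigmoid(logits)
print(probs.shape)                       # torch.Size([4, 1])
```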

  20. Zooniverse: Combining Human and Machine Classifiers for the Big Survey Era

    Science.gov (United States)

    Fortson, Lucy; Wright, Darryl; Beck, Melanie; Lintott, Chris; Scarlata, Claudia; Dickinson, Hugh; Trouille, Laura; Willi, Marco; Laraia, Michael; Boyer, Amy; Veldhuis, Marten; Zooniverse

    2018-01-01

    Many analyses of astronomical data sets, ranging from morphological classification of galaxies to identification of supernova candidates, have relied on humans to classify data into distinct categories. Crowdsourced galaxy classifications via the Galaxy Zoo project provided a solution that scaled visual classification for extant surveys by harnessing the combined power of thousands of volunteers. However, the much larger data sets anticipated from upcoming surveys will require a different approach. Automated classifiers using supervised machine learning have improved considerably over the past decade but their increasing sophistication comes at the expense of needing ever more training data. Crowdsourced classification by human volunteers is a critical technique for obtaining these training data. But several improvements can be made on this zeroth-order solution. Efficiency gains can be achieved by implementing a “cascade filtering” approach whereby the task structure is reduced to a set of binary questions that are more suited to simpler machines while demanding lower cognitive loads for humans. Intelligent subject retirement based on quantitative metrics of volunteer skill and subject label reliability also leads to dramatic improvements in efficiency. We note that human and machine classifiers may retire subjects differently leading to trade-offs in performance space. Drawing on work with several Zooniverse projects including Galaxy Zoo and Supernova Hunter, we will present recent findings from experiments that combine cohorts of human and machine classifiers. We show that the most efficient system results when appropriate subsets of the data are intelligently assigned to each group according to their particular capabilities. With sufficient online training, simple machines can quickly classify “easy” subjects, leaving more difficult (and discovery-oriented) tasks for volunteers. We also find humans achieve higher classification purity while samples

  1. FERAL : Network-based classifier with application to breast cancer outcome prediction

    NARCIS (Netherlands)

    Allahyar, A.; De Ridder, J.

    2015-01-01

    Motivation: Breast cancer outcome prediction based on gene expression profiles is an important strategy for personalized patient care. To improve the performance and consistency of the discovered markers of the initial molecular classifiers, network-based outcome prediction methods (NOPs) have been proposed.

  2. Classifying the molecular functions of Rab GTPases in membrane trafficking using deep convolutional neural networks.

    Science.gov (United States)

    Le, Nguyen-Quoc-Khanh; Ho, Quang-Thai; Ou, Yu-Yen

    2018-06-13

    Deep learning has been increasingly used to solve a number of problems with state-of-the-art performance in a wide variety of fields. In biology, deep learning can be applied to reduce feature extraction time and achieve high levels of performance. In our present work, we apply deep learning via two-dimensional convolutional neural networks and position-specific scoring matrices to classify Rab protein molecules, which are main regulators in membrane trafficking for transferring proteins and other macromolecules throughout the cell. The functional loss of specific Rab molecular functions has been implicated in a variety of human diseases, e.g., choroideremia, intellectual disabilities, cancer. Therefore, creating a precise model for classifying Rabs is crucial in helping biologists understand the molecular functions of Rabs and design drug targets according to such specific human disease information. We constructed a robust deep neural network for classifying Rabs that achieved an accuracy of 99%, 99.5%, 96.3%, and 97.6% for each of four specific molecular functions. Our approach demonstrates superior performance to traditional artificial neural networks. Therefore, from our proposed study, we provide both an effective tool for classifying Rab proteins and a basis for further research that can improve the performance of biological modeling using deep neural networks. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Feature extraction using convolutional neural network for classifying breast density in mammographic images

    Science.gov (United States)

    Thomaz, Ricardo L.; Carneiro, Pedro C.; Patrocinio, Ana C.

    2017-03-01

    Breast cancer is the leading cause of death for women in most countries. The high levels of mortality relate mostly to late diagnosis and to the directly proportional relationship between breast density and breast cancer development. Therefore, the correct assessment of breast density is important to provide better screening for higher-risk patients. However, in modern digital mammography the discrimination among breast densities is highly complex due to increased contrast and visual information for all densities. Thus, a computational system for classifying breast density might be a useful tool for aiding medical staff. Several machine-learning algorithms are already capable of classifying a small number of classes with good accuracy. However, the main constraint of machine-learning algorithms relates to the set of features extracted and used for classification. Although well-known feature extraction techniques might provide a good set of features, it is a complex task to select an initial set during the design of a classifier. Thus, we propose feature extraction using a Convolutional Neural Network (CNN) for classifying breast density with a usual machine-learning classifier. We used 307 mammographic images downsampled to 260x200 pixels to train a CNN and extract features from a deep layer. After training, the activations of 8 neurons from a deep fully connected layer are extracted and used as features. Then, these features are fed forward to a single-hidden-layer neural network that is cross-validated using 10 folds to classify among four classes of breast density. The global accuracy of this method is 98.4%, with only 1.6% misclassification. However, the small set of samples and memory constraints required the reuse of data in both the CNN and the MLP-NN, therefore overfitting might have influenced the results even though we cross-validated the network. Thus, although we presented a promising method for extracting features and classifying breast density, a greater database is

  4. Ant colony optimization algorithm for interpretable Bayesian classifiers combination: application to medical predictions.

    Directory of Open Access Journals (Sweden)

    Salah Bouktif

    Full Text Available Prediction and classification techniques have been well studied by machine learning researchers and developed for several real-world problems. However, the level of acceptance and success of prediction models is still below expectation due to some difficulties, such as the low performance of prediction models when they are applied in different environments. Such a problem has been addressed by many researchers, mainly from the machine learning community. A second problem, principally raised by model users in different communities, such as managers, economists, engineers, biologists, and medical practitioners, is the prediction models' interpretability. The latter is the ability of a model to explain its predictions and exhibit the causality relationships between the inputs and the outputs. In the case of classification, a successful way to alleviate the low performance is to use ensemble classifiers. It is an intuitive strategy to activate collaboration between different classifiers towards a better performance than that of an individual classifier. Unfortunately, ensemble classifier methods do not take into account the interpretability of the final classification outcome. They even worsen the original interpretability of the individual classifiers. In this paper we propose a novel implementation of the classifier combination approach that not only promotes the overall performance but also preserves the interpretability of the resulting model. We propose a solution based on Ant Colony Optimization and tailored for the case of Bayesian classifiers. We validate our proposed solution with case studies from the medical domain, namely heart disease and Cardiotocography-based predictions, problems where interpretability is critical to making appropriate clinical decisions. The datasets, prediction models and software tool, together with supplementary materials, are available at http://faculty.uaeu.ac.ae/salahb/ACO4BC.htm.

  5. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition

    Science.gov (United States)

    Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan

    2017-01-01

    Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. The optimal combination of feature variables is decided automatically using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping in this real-world optimization problem, an effective minimum enclosing ball (MEB) algorithm plus support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition, where the data exhibit inherently complex distributions. The proposed method is examined on UCI data sets and the caving dataset, and compared with several recent SVM classifiers. We conduct experiments with accuracy and the Friedman test to compare multiple classifiers over multiple UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The results of experiments on the caving dataset show better performance, which points to promising feature selection and multi-class recognition in coal-rock recognition. PMID:28937987

  6. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition.

    Science.gov (United States)

    Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan

    2017-01-01

    Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. The optimal combination of feature variables is decided automatically using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping in this real-world optimization problem, an effective minimum enclosing ball (MEB) algorithm plus support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition, where the data exhibit inherently complex distributions. The proposed method is examined on UCI data sets and the caving dataset, and compared with several recent SVM classifiers. We conduct experiments with accuracy and the Friedman test to compare multiple classifiers over multiple UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The results of experiments on the caving dataset show better performance, which points to promising feature selection and multi-class recognition in coal-rock recognition.
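
    An illustrative sketch only: the MEB+SVM combination is approximated here by a crude enclosing ball per class (centroid plus maximum radius) that gates unambiguous samples, with an SVM deciding the rest. The real MEB algorithm and the paper's fusion rule are more involved; the dataset and thresholds here are assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Approximate one enclosing ball per class from the training data.
balls = {}
for c in np.unique(y_tr):
    pts = X_tr[y_tr == c]
    center = pts.mean(axis=0)
    radius = np.linalg.norm(pts - center, axis=1).max()
    balls[c] = (center, radius)

svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

def predict(x):
    inside = [c for c, (ctr, r) in balls.items() if np.linalg.norm(x - ctr) <= r]
    if len(inside) == 1:                        # unambiguous: only one ball contains x
        return inside[0]
    return svm.predict(x.reshape(1, -1))[0]     # otherwise fall back to the SVM

y_hat = np.array([predict(x) for x in X_te])
print("accuracy:", (y_hat == y_te).mean())
```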

  7. Algorithms for Image Analysis and Combination of Pattern Classifiers with Application to Medical Diagnosis

    Science.gov (United States)

    Georgiou, Harris

    2009-10-01

    Medical Informatics and the application of modern signal processing in the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. At a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended in multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.

  8. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition.

    Directory of Open Access Journals (Sweden)

    QingJun Song

    Full Text Available Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. The optimal combination of feature variables is decided automatically using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping in this real-world optimization problem, an effective minimum enclosing ball (MEB) algorithm plus support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition, where the data exhibit inherently complex distributions. The proposed method is examined on UCI data sets and the caving dataset, and compared with several recent SVM classifiers. We conduct experiments with accuracy and the Friedman test to compare multiple classifiers over multiple UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The results of experiments on the caving dataset show better performance, which points to promising feature selection and multi-class recognition in coal-rock recognition.

  9. Autoregressive Integrated Adaptive Neural Networks Classifier for EEG-P300 Classification

    Directory of Open Access Journals (Sweden)

    Demi Soetraprawata

    2013-06-01

    Full Text Available Brain Computer Interfaces have the potential to be applied in mechatronic apparatus and vehicles in the future. Compared to other techniques, EEG is the most preferred for BCI designs. In this paper, a new adaptive neural network classifier of different mental activities from EEG-based P300 signals is proposed. To overcome the over-training that is caused by noisy and non-stationary data, the EEG signals are filtered and features are extracted using autoregressive models before being passed to the adaptive neural network classifier. To test the improvement in EEG classification performance with the proposed method, comparative experiments were conducted using Bayesian Linear Discriminant Analysis. The experimental results show that all subjects achieve a classification accuracy of 100%.

  10. ELHnet: a convolutional neural network for classifying cochlear endolymphatic hydrops imaged with optical coherence tomography.

    Science.gov (United States)

    Liu, George S; Zhu, Michael H; Kim, Jinkyung; Raphael, Patrick; Applegate, Brian E; Oghalai, John S

    2017-10-01

    Detection of endolymphatic hydrops is important for diagnosing Meniere's disease, and can be performed non-invasively using optical coherence tomography (OCT) in animal models as well as potentially in the clinic. Here, we developed ELHnet, a convolutional neural network to classify endolymphatic hydrops in a mouse model using learned features from OCT images of mice cochleae. We trained ELHnet on 2159 training and validation images from 17 mice, using only the image pixels and observer-determined labels of endolymphatic hydrops as the inputs. We tested ELHnet on 37 images from 37 mice that were previously not used, and found that the neural network correctly classified 34 of the 37 mice. This demonstrates an improvement in performance from previous work on computer-aided classification of endolymphatic hydrops. To the best of our knowledge, this is the first deep CNN designed for endolymphatic hydrops classification.

  11. Protein Secondary Structure Prediction Using AutoEncoder Network and Bayes Classifier

    Science.gov (United States)

    Wang, Leilei; Cheng, Jinyong

    2018-03-01

    Protein secondary structure prediction belongs to bioinformatics and is an important research area. In this paper, we propose a new method for protein secondary structure prediction using a Bayes classifier and an autoencoder network. Our experiments cover several algorithmic aspects, including the construction of the model and the selection of its parameters. The data set is the standard CB513 protein data set. Accuracy is evaluated with 3-fold cross-validation to obtain the Q3 accuracy. The results illustrate that the autoencoder network improved the prediction accuracy of protein secondary structure.

  12. Machine learning classifier using abnormal brain network topological metrics in major depressive disorder.

    Science.gov (United States)

    Guo, Hao; Cao, Xiaohua; Liu, Zhifen; Li, Haifang; Chen, Junjie; Zhang, Kerang

    2012-12-05

    Resting state functional brain networks have been widely studied in brain disease research. However, it is currently unclear whether abnormal resting state functional brain network metrics can be used with machine learning for the classification of brain diseases. Resting state functional brain networks were constructed for 28 healthy controls and 38 major depressive disorder patients by thresholding partial correlation matrices of 90 regions. Three nodal metrics were calculated using graph theory-based approaches. Nonparametric permutation tests were then used for group comparisons of topological metrics, which were used as classification features in six different algorithms. We used statistical significance as the threshold for selecting features and measured the accuracies of the six classifiers with different numbers of features. A sensitivity analysis method was used to evaluate the importance of different features. The results indicated that some of the regions exhibited significantly abnormal nodal centralities, including the limbic system, basal ganglia, medial temporal, and prefrontal regions. The support vector machine with radial basis kernel function and the neural network algorithm exhibited the highest average accuracy (79.27% and 78.22%, respectively) with 28 features. Major depressive disorder is associated with abnormal functional brain network topological metrics, and statistically significant nodal metrics can be successfully used for feature selection in classification algorithms.
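
    A sketch of the pipeline described above with assumed details: a correlation matrix is thresholded into a binary graph, three nodal metrics are computed per region, and the resulting features are fed to a classifier. The region count, threshold, metric choices and toy data are illustrative.

```python
import numpy as np
import networkx as nx
from sklearn.svm import SVC

N_REGIONS, THRESH = 90, 0.3

def nodal_features(corr, thresh=THRESH):
    adj = (np.abs(corr) > thresh).astype(int)
    np.fill_diagonal(adj, 0)
    g = nx.from_numpy_array(adj)
    deg = nx.degree_centrality(g)
    btw = nx.betweenness_centrality(g)
    clu = nx.clustering(g)
    # One feature vector per subject: 3 nodal metrics x 90 regions.
    return np.concatenate([[deg[i], btw[i], clu[i]] for i in range(corr.shape[0])])

# Toy subjects: random symmetric "connectivity" matrices.
rng = np.random.default_rng(0)
def fake_corr():
    a = rng.uniform(-1, 1, (N_REGIONS, N_REGIONS))
    return (a + a.T) / 2

X = np.array([nodal_features(fake_corr()) for _ in range(20)])
y = np.array([0] * 10 + [1] * 10)          # e.g. controls vs patients
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.score(X, y))
```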

  13. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    Science.gov (United States)

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Infrared dim and small target tracking is a highly challenging task. The main challenge for target tracking is to account for appearance changes of an object that is submerged in cluttered background. An efficient appearance model that exploits both the global template and local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse representation-based algorithm is adopted to calculate the confidence value that assigns more weight to target templates than to negative background templates. In the CNGM model, simple cell feature maps are obtained by calculating the convolution between target templates and fixed filters, which are extracted from the target region in the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, therefore encoding its local structural information. Then, all the maps form a representation, preserving the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model. The same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. Finally, collaborative confidence values of particles are utilized to generate the particles' importance weights. Experiments on various infrared sequences have validated the tracking capability of the presented algorithm. Experimental results show that this algorithm runs in real time and provides higher accuracy than state-of-the-art algorithms.

  14. Fault Diagnosis for Distribution Networks Using Enhanced Support Vector Machine Classifier with Classical Multidimensional Scaling

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Cho

    2017-09-01

    Full Text Available In this paper, a new fault diagnosis technique based on the time domain reflectometry (TDR) method with a pseudo-random binary sequence (PRBS) stimulus and a support vector machine (SVM) classifier has been investigated to recognize the different types of fault in radial distribution feeders. This novel technique considers the amplitude of the reflected signals and the peaks of the cross-correlation (CCR) between the reflected and incident waves to generate the fault current dataset for the SVM. Furthermore, this multi-layer enhanced SVM classifier is combined with a classical multidimensional scaling (CMDS) feature extraction algorithm and kernel parameter optimization to increase training speed and improve overall classification accuracy. The proposed technique has been tested on a radial distribution feeder to identify ten different types of fault considering 12 input features generated using Simulink software and the MATLAB Toolbox. The success rate of the SVM classifier is over 95%, which demonstrates the effectiveness and high accuracy of the proposed method.

  15. ELM BASED CAD SYSTEM TO CLASSIFY MAMMOGRAMS BY THE COMBINATION OF CLBP AND CONTOURLET

    Directory of Open Access Journals (Sweden)

    S Venkatalakshmi

    2017-05-01

    Full Text Available Breast cancer is a serious threat to women's lives worldwide. Mammography is a promising screening tool that can reveal abnormalities. However, physicians find it difficult to detect the affected regions, as the size of microcalcifications is very small. Hence it would be better if a CAD system could accompany the physician in detecting these suspicious regions. Taking this as a challenge, this paper presents a CAD system for mammogram classification which is shown to be accurate and reliable. The entire work is decomposed into four different stages and the outcome of each phase is passed as the input of the following phase. Initially, the mammogram is pre-processed by an adaptive median filter and segmentation is done by GHFCM. The features are extracted by combining the texture feature descriptors Completed Local Binary Pattern (CLBP) and contourlet to frame the feature sets. In the training phase, an Extreme Learning Machine (ELM) is trained with the feature sets. During the testing phase, the ELM can classify between normal, malignant and benign types of cancer. The performance of the proposed approach is analysed by varying the classifier, the feature extractors and the parameters of the feature extractor. From the experimental analysis, it is evident that the proposed work outperforms analogous techniques in terms of accuracy, sensitivity and specificity.

  16. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.

    Science.gov (United States)

    Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A

    2017-03-01

    Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods combined with hand-crafted image feature descriptors and various classifiers are not able to effectively improve the accuracy rate and meet the high requirements of classification of biomedical images. The same also holds true for artificial neural network models directly trained with limited biomedical images used as training data or directly used as a black box to extract the deep features based on another distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture based on the raw pixels of the original biomedical images using supervised training. In our model, we do not need to manually design the feature space, seek an effective feature-vector classifier, or segment specific detection objects and image patches, which are the main technological difficulties in the adoption of traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long waits to train a perfect deep model, which are the main problems in training deep neural networks for biomedical image classification as observed in recent works. With the utilization of a simple data augmentation method and fast convergence speed, our algorithm can achieve the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. We propose a robust
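
    A hedged sketch of the domain-transfer idea: the paper's exact backbone is not reproduced here; a torchvision ResNet-18 pretrained on ImageNet is used as an assumed stand-in, its final layer is replaced, and the whole network is fine-tuned end-to-end on the biomedical classes. The class count and batch are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

n_classes = 3                                              # assumed number of biomedical categories
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)   # downloads pretrained weights
model.fc = nn.Linear(model.fc.in_features, n_classes)      # swap the ImageNet head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, n_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```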

  17. Classifying dysmorphic syndromes by using artificial neural network based hierarchical decision tree.

    Science.gov (United States)

    Özdemir, Merve Erkınay; Telatar, Ziya; Eroğul, Osman; Tunca, Yusuf

    2018-05-01

    Dysmorphic syndromes present with different facial malformations. These malformations are significant for an early diagnosis of dysmorphic syndromes and contain distinctive information for face recognition. In this study we define the characteristic features of each syndrome by considering facial malformations and classify Fragile X, Hurler, Prader Willi, Down, and Wolf Hirschhorn syndromes and healthy groups automatically. Reference points are marked on the face images and ratios between the points' distances are taken into consideration as features. We suggest a neural network based hierarchical decision tree structure in order to classify the syndrome types. We also implement k-nearest neighbor (k-NN) and artificial neural network (ANN) classifiers to compare classification accuracy with our hierarchical decision tree. The classification accuracy is 50%, 73% and 86.7% with the k-NN, ANN and hierarchical decision tree methods, respectively. Then, the same images are shown to a clinical expert, who achieves a recognition rate of 46.7%. We develop an efficient system to recognize different syndrome types automatically from simple, non-invasive imaging data, independent of the patient's age, sex and race, at high accuracy. The promising results indicate that our method can be used for pre-diagnosis of the dysmorphic syndromes by clinical experts.

  18. Intelligent Recognition of Lung Nodule Combining Rule-based and C-SVM Classifiers

    Directory of Open Access Journals (Sweden)

    Bin Li

    2011-10-01

    Full Text Available A computer-aided detection (CAD) system for lung nodules plays an important role in the diagnosis of lung cancer. In this paper, an improved intelligent recognition method for lung nodules in HRCT, combining rule-based and cost-sensitive support vector machine (C-SVM) classifiers, is proposed for detecting both solid nodules and ground-glass opacity (GGO) nodules (part-solid and nonsolid). This method consists of several steps. Firstly, segmentation of regions of interest (ROIs), including pulmonary parenchyma and lung nodule candidates, is a difficult task. On one side, the presence of noise lowers the visibility of low-contrast objects. On the other side, different types of nodules, including small nodules, nodules connecting to vasculature or other structures, and part-solid or nonsolid nodules, are complex, noisy, have weak edges or have boundaries that are difficult to define. In order to overcome the difficulties of obvious boundary leaks and slow evolution speed in the segmentation of weak edges, an overall segmentation method is proposed: the lung parenchyma is extracted based on threshold and morphologic segmentation; image denoising and enhancement are realized by the nonlinear anisotropic diffusion filtering (NADF) method; candidate pulmonary nodules are segmented by an improved C-V level set method, in which the segmentation result of an EM-based fuzzy threshold method is used as the initial contour of the active contour model and a constrained energy term is added into the PDE of the level set function. Then, lung nodules are classified by using the intelligent classifiers combining rules and C-SVM. Rule-based classification is first used to remove easily dismissible non-nodule objects, then C-SVM classification is used to further classify nodule candidates and reduce the number of false positive (FP) objects. In order to increase the efficiency of the SVM, an improved training method is used to train the SVM, which uses the grid search method to search the optimal parameters

  19. Intelligent Recognition of Lung Nodule Combining Rule-based and C-SVM Classifiers

    Directory of Open Access Journals (Sweden)

    Bin Li

    2012-02-01

    Full Text Available A computer-aided detection (CAD) system for lung nodules plays an important role in the diagnosis of lung cancer. In this paper, an improved intelligent recognition method for lung nodules in HRCT, combining rule-based and cost-sensitive support vector machine (C-SVM) classifiers, is proposed for detecting both solid nodules and ground-glass opacity (GGO) nodules (part-solid and nonsolid). This method consists of several steps. Firstly, segmentation of regions of interest (ROIs), including pulmonary parenchyma and lung nodule candidates, is a difficult task. On one side, the presence of noise lowers the visibility of low-contrast objects. On the other side, different types of nodules, including small nodules, nodules connecting to vasculature or other structures, and part-solid or nonsolid nodules, are complex, noisy, have weak edges or have boundaries that are difficult to define. In order to overcome the difficulties of obvious boundary leaks and slow evolution speed in the segmentation of weak edges, an overall segmentation method is proposed: the lung parenchyma is extracted based on threshold and morphologic segmentation; image denoising and enhancement are realized by the nonlinear anisotropic diffusion filtering (NADF) method; candidate pulmonary nodules are segmented by an improved C-V level set method, in which the segmentation result of an EM-based fuzzy threshold method is used as the initial contour of the active contour model and a constrained energy term is added into the PDE of the level set function. Then, lung nodules are classified by using the intelligent classifiers combining rules and C-SVM. Rule-based classification is first used to remove easily dismissible non-nodule objects, then C-SVM classification is used to further classify nodule candidates and reduce the number of false positive (FP) objects. In order to increase the efficiency of the SVM, an improved training method is used to train the SVM, which uses the grid search method to search the optimal
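
    An illustrative two-stage sketch of the rule-then-SVM idea: simple hand-written rules discard obvious non-nodule candidates, and a cost-sensitive SVM (class weights penalising missed nodules more) classifies the remainder. The rule thresholds, feature names and toy labels are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def rule_filter(candidate):
    """Reject candidates that are clearly not nodules (assumed thresholds)."""
    diameter_mm, mean_intensity = candidate[0], candidate[1]
    return 2.0 <= diameter_mm <= 30.0 and mean_intensity > 0.2

# Toy candidate features: [diameter_mm, mean_intensity, sphericity]
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0.5, 40, 200),
                     rng.uniform(0, 1, 200),
                     rng.uniform(0, 1, 200)])
y = (X[:, 2] > 0.6).astype(int)            # fake labels: "round" candidates are nodules

# Cost-sensitive SVM: missing a true nodule costs more than a false positive.
svm = SVC(kernel="rbf", gamma="scale", class_weight={0: 1.0, 1: 5.0}).fit(X, y)

def classify(candidate):
    if not rule_filter(candidate):
        return 0                            # dismissed by the rules
    return int(svm.predict(candidate.reshape(1, -1))[0])

print(sum(classify(x) for x in X), "candidates kept as nodules")
```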

  20. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Directory of Open Access Journals (Sweden)

    Mark D McDonnell

    Full Text Available Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
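
    A compact sketch of the ELM idea with random 'receptive field' sparsity as described above: each hidden unit sees only a random square patch of the image, the hidden layer is random and fixed, and only the output weights are solved in closed form by regularised least squares. Patch-size range, regulariser and the random stand-in data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def receptive_field_weights(n_hidden, side=28):
    W = np.zeros((n_hidden, side * side))
    for j in range(n_hidden):
        s = rng.integers(7, 20)                          # random patch size
        r, c = rng.integers(0, side - s, size=2)         # random patch position
        mask = np.zeros((side, side), bool)
        mask[r:r + s, c:c + s] = True
        W[j, mask.ravel()] = rng.normal(size=mask.sum()) # weights only inside the patch
    return W

def elm_train(X, y_onehot, n_hidden=1000, reg=1e-3):
    W = receptive_field_weights(n_hidden)
    H = np.tanh(X @ W.T)                                 # fixed random hidden layer
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y_onehot)
    return W, beta

def elm_predict(X, W, beta):
    return np.argmax(np.tanh(X @ W.T) @ beta, axis=1)

# Toy usage with random "images"; on MNIST, X would be the flattened 28x28 pixels.
X = rng.normal(size=(500, 784))
y = rng.integers(0, 10, 500)
W, beta = elm_train(X, np.eye(10)[y])
print("training accuracy:", (elm_predict(X, W, beta) == y).mean())
```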

  1. Segment convolutional neural networks (Seg-CNNs) for classifying relations in clinical notes.

    Science.gov (United States)

    Luo, Yuan; Cheng, Yu; Uzuner, Özlem; Szolovits, Peter; Starren, Justin

    2018-01-01

    We propose Segment Convolutional Neural Networks (Seg-CNNs) for classifying relations from clinical notes. Seg-CNNs use only word-embedding features without manual feature engineering. Unlike typical CNN models, relations between 2 concepts are identified by simultaneously learning separate representations for text segments in a sentence: preceding, concept1, middle, concept2, and succeeding. We evaluate Seg-CNN on the i2b2/VA relation classification challenge dataset. We show that Seg-CNN achieves a state-of-the-art micro-average F-measure of 0.742 for overall evaluation, 0.686 for classifying medical problem-treatment relations, 0.820 for medical problem-test relations, and 0.702 for medical problem-medical problem relations. We demonstrate the benefits of learning segment-level representations. We show that medical domain word embeddings help improve relation classification. Seg-CNNs can be trained quickly for the i2b2/VA dataset on a graphics processing unit (GPU) platform. These results support the use of CNNs computed over segments of text for classifying medical relations, as they show state-of-the-art performance while requiring no manual feature engineering. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Performance Analysis and Optimization for Cognitive Radio Networks with Classified Secondary Users and Impatient Packets

    Directory of Open Access Journals (Sweden)

    Yuan Zhao

    2017-01-01

    Full Text Available A cognitive radio network with classified Secondary Users (SUs) is considered. There are two types of SU packets, namely, SU1 packets and SU2 packets, in the system. The SU1 packets have higher priority than the SU2 packets. Considering the diversity of the SU packets and the real-time needs of the interrupted SU packets, a novel spectrum allocation strategy with classified SUs and impatient packets is proposed. Based on the number of PU packets, SU1 packets, and SU2 packets in the system, the queue dynamics of the network users are modeled as a three-dimensional discrete-time Markov chain and the transition probability matrix of the Markov chain is given. Then, with the steady-state analysis, some important performance measures of the SU2 packets are derived to show the system performance with numerical results. Specifically, in order to optimize the system actions of the SU2 packets, the individually optimal strategy and the socially optimal strategy for the SU2 packets are demonstrated. Finally, a pricing mechanism is provided to oblige the SU2 packets to follow the socially optimal strategy.
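
    A small sketch of the steady-state analysis step: for a discrete-time Markov chain with transition matrix P, the stationary distribution pi satisfies pi P = pi and sum(pi) = 1. The 3x3 matrix below is a toy stand-in for the paper's much larger three-dimensional chain over the (PU, SU1, SU2) packet counts.

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Solve (P^T - I) pi = 0 together with the normalisation constraint sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("stationary distribution:", pi)      # performance measures then follow from pi
```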

  3. Classifying Sources Influencing Indoor Air Quality (IAQ) Using Artificial Neural Network (ANN)

    Directory of Open Access Journals (Sweden)

    Shaharil Mad Saad

    2015-05-01

    Full Text Available Monitoring indoor air quality (IAQ) is deemed important nowadays. A sophisticated IAQ monitoring system which could classify the source influencing the IAQ would be very helpful to users. Therefore, in this paper, an IAQ monitoring system is proposed with a newly added feature which enables the system to identify the sources influencing the level of IAQ. In order to achieve this, the collected data have been trained with an artificial neural network (ANN), a proven method for pattern recognition. Basically, the proposed system consists of a sensor module cloud (SMC), a base station and a service-oriented client. The SMC contains collections of sensor modules that measure the air quality data and transmit the captured data to the base station through a wireless network. The IAQ monitoring system is also equipped with an IAQ index and a thermal comfort index which can tell users about the room's conditions. The results showed that the system is able to measure the level of air quality and successfully classify the sources influencing IAQ in various environments such as ambient air, chemical presence, fragrance presence, foods and beverages, and human activity.

  4. A decision support system using combined-classifier for high-speed data stream in smart grid

    Science.gov (United States)

    Yang, Hang; Li, Peng; He, Zhian; Guo, Xiaobin; Fong, Simon; Chen, Huajun

    2016-11-01

    Large volumes of high-speed streaming data are generated continuously by big power grids. In order to detect and avoid power grid failures, decision support systems (DSSs) are commonly adopted in power grid enterprises. Among all the decision-making algorithms, the incremental decision tree is the most widely used one. In this paper, we propose a combined classifier that is a composite of a cache-based classifier (CBC) and a main tree classifier (MTC). We integrate this classifier into a stream processing engine on top of the DSS such that high-speed streaming data can be transformed into operational intelligence efficiently. Experimental results show that our proposed classifier can return more accurate answers than other existing ones.

  5. Combined Heuristic Attack Strategy on Complex Networks

    Directory of Open Access Journals (Sweden)

    Marek Šimon

    2017-01-01

    Full Text Available Usually, the existence of a complex network is considered an advantageous feature and efforts are made to increase its robustness against attack. However, there also exist harmful and/or malicious networks, from social ones spreading hoaxes, corruption, phishing, extremist ideology, and terrorist support, up to computer networks spreading computer viruses or DDoS attack software, or even biological networks of carriers or transport centers spreading disease among the population. A new attack strategy can therefore be used against malicious networks, as well as in a worst-case scenario test for the robustness of a useful network. A common measure of the robustness of networks is their disintegration level after removal of a fraction of nodes. This robustness can be calculated as the ratio of the number of nodes of the greatest remaining network component to the number of nodes in the original network. Our paper presents a combination of heuristics optimized for an attack on a complex network to achieve its greatest disintegration. Nodes are deleted sequentially based on a heuristic criterion. The efficiency of classical attack approaches is compared to the proposed approach on Barabási-Albert, scale-free with tunable power-law exponent, and Erdős-Rényi models of complex networks and on real-world networks. Our attack strategy results in a faster disintegration, which is counterbalanced by its slightly increased computational demands.
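
    A sketch of the robustness measure described above: nodes are deleted one at a time by a heuristic criterion (here, the recomputed highest degree, one of the classical baselines rather than the paper's combined heuristic) and the relative size of the largest remaining component is tracked.

```python
import networkx as nx

def attack_by_degree(g, fraction=0.2):
    g = g.copy()
    n0 = g.number_of_nodes()
    sizes = []
    for _ in range(int(fraction * n0)):
        target = max(g.degree, key=lambda kv: kv[1])[0]   # current highest-degree node
        g.remove_node(target)
        giant = max(nx.connected_components(g), key=len)
        sizes.append(len(giant) / n0)                     # disintegration level
    return sizes

g = nx.barabasi_albert_graph(1000, 3, seed=42)
curve = attack_by_degree(g)
print("giant component after removing 20% of nodes:", round(curve[-1], 3))
```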

  6. Machine-Learning Classifier for Patients with Major Depressive Disorder: Multifeature Approach Based on a High-Order Minimum Spanning Tree Functional Brain Network.

    Science.gov (United States)

    Guo, Hao; Qin, Mengna; Chen, Junjie; Xu, Yong; Xiang, Jie

    2017-01-01

    High-order functional connectivity networks are rich in time information that can reflect dynamic changes in functional connectivity between brain regions. Accordingly, such networks are widely used to classify brain diseases. However, traditional methods for processing high-order functional connectivity networks generally include the clustering method, which reduces data dimensionality. As a result, such networks cannot be effectively interpreted in the context of neurology. Additionally, due to the large scale of high-order functional connectivity networks, it can be computationally very expensive to use complex network or graph theory to calculate certain topological properties. Here, we propose a novel method of generating a high-order minimum spanning tree functional connectivity network. This method increases the neurological significance of the high-order functional connectivity network, reduces network computing consumption, and produces a network scale that is conducive to subsequent network analysis. To ensure the quality of the topological information in the network structure, we used frequent subgraph mining technology to capture the discriminative subnetworks as features and combined this with quantifiable local network features. Then we applied a multikernel learning technique to the corresponding selected features to obtain the final classification results. We evaluated our proposed method using a data set containing 38 patients with major depressive disorder and 28 healthy controls. The experimental results showed a classification accuracy of up to 97.54%.
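
    A sketch of one common way to build a minimum spanning tree connectivity network from a correlation matrix: correlations are converted to distances (1 - |r|) and only the MST is kept, leaving exactly N-1 edges for subsequent graph analysis. This reflects the general MST construction, not the paper's full high-order pipeline; the toy time series are assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
ts = rng.normal(size=(90, 200))            # toy time series: 90 regions x 200 volumes
corr = np.corrcoef(ts)

dist = 1.0 - np.abs(corr)                  # strong correlation -> short distance
np.fill_diagonal(dist, 0.0)
g = nx.from_numpy_array(dist)
mst = nx.minimum_spanning_tree(g, weight="weight")
print(mst.number_of_nodes(), "nodes,", mst.number_of_edges(), "edges")   # 90 nodes, 89 edges
```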

  7. Discrimination of p anti-p → t anti-t events by a neural network classifier

    International Nuclear Information System (INIS)

    Cherubini, A.; Odorico, R.

    1992-01-01

    Neural network and conventional statistical techniques are compared in the problem of discriminating p anti-p → t anti-t events, with top quarks decaying into anything, from the associated hadronic background at the energy of the Fermilab collider. The NN we develop for this purpose is an improved version of Kohonen's learning vector quantization scheme. The performance of the NN as a t anti-t event classifier is found to be less satisfactory than that achievable by statistical methods. We conclude that the probable reasons are: i) the NN approach presents advantages only when dealing with event distributions in the feature space which substantially differ from Gaussians; ii) NNs require much larger training sets of events than statistical discrimination in order to give comparable results. (orig.)
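    For reference, the basic LVQ1 update rule underlying Kohonen's learning vector quantization scheme (the paper uses an improved variant) moves the winning prototype toward a correctly classified example and away otherwise; a toy numpy sketch:

      import numpy as np

      def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
          """Basic LVQ1: attract the winning prototype for the correct class, repel it otherwise."""
          P = prototypes.copy()
          for _ in range(epochs):
              for xi, yi in zip(X, y):
                  winner = np.argmin(np.linalg.norm(P - xi, axis=1))   # nearest prototype
                  sign = 1.0 if proto_labels[winner] == yi else -1.0
                  P[winner] += sign * lr * (xi - P[winner])
          return P

      def lvq_predict(X, prototypes, proto_labels):
          idx = np.argmin(np.linalg.norm(prototypes[:, None, :] - X[None, :, :], axis=2), axis=0)
          return proto_labels[idx]

      # toy two-class problem standing in for signal/background event features
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(1.5, 1, (200, 4))])
      y = np.array([0] * 200 + [1] * 200)
      protos = np.vstack([X[y == 0][:3], X[y == 1][:3]]).astype(float)   # three prototypes per class
      plabels = np.array([0, 0, 0, 1, 1, 1])
      protos = lvq1_train(X, y, protos, plabels)
      print((lvq_predict(X, protos, plabels) == y).mean())   # training accuracy of the toy model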

  8. Distributed Classification of Localization Attacks in Sensor Networks Using Exchange-Based Feature Extraction and Classifier

    Directory of Open Access Journals (Sweden)

    Su-Zhe Wang

    2016-01-01

    Full Text Available Secure localization under different forms of attack has become an essential task in wireless sensor networks. Despite the significant research effort devoted to detecting malicious nodes, the problem of recognizing the type of localization attack has not yet been well addressed. Motivated by this concern, we propose a novel exchange-based attack classification algorithm. This is achieved by a distributed expectation maximization extractor integrated with the PECPR-MKSVM classifier. First, mixed distribution features based on probabilistic modeling are extracted using a distributed expectation maximization algorithm. After feature extraction, by introducing theory from the support vector machine, an extensive contractive Peaceman-Rachford splitting method is derived to build the distributed classifier that diffuses the iterative calculation among neighbor sensors. To verify the efficiency of the distributed recognition scheme, four groups of experiments were carried out under various conditions. The average success rate of the proposed classification algorithm for external attacks is excellent, reaching about 93.9% in some cases. These testing results demonstrate that the proposed algorithm produces a much higher recognition rate and is more robust and efficient, even in excessively malicious scenarios.

  9. Deep convolutional neural network for classifying Fusarium wilt of radish from unmanned aerial vehicles

    Science.gov (United States)

    Ha, Jin Gwan; Moon, Hyeonjoon; Kwak, Jin Tae; Hassan, Syed Ibrahim; Dang, Minh; Lee, O. New; Park, Han Yong

    2017-10-01

    Recently, unmanned aerial vehicles (UAVs) have gained much attention. In particular, there is a growing interest in utilizing UAVs for agricultural applications such as crop monitoring and management. We propose a computerized system that is capable of detecting Fusarium wilt of radish with high accuracy. The system adopts computer vision and machine learning techniques, including deep learning, to process the images captured by UAVs at low altitudes and to identify the infected radish. The whole radish field is first segmented into three distinctive regions (radish, bare ground, and mulching film) via a softmax classifier and K-means clustering. Then, the identified radish regions are further classified into healthy radish and Fusarium wilt of radish using a deep convolutional neural network (CNN). In identifying radish, bare ground, and mulching film from a radish field, we achieved an accuracy of ≥97.4%. In detecting Fusarium wilt of radish, the CNN obtained an accuracy of 93.3%, outperforming a standard machine learning algorithm, which obtained 82.9% accuracy. Therefore, UAVs equipped with computational techniques are promising tools for improving the quality and efficiency of agriculture today.
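    A minimal sketch of the first segmentation stage, assuming an RGB tile as input: pixel colours are clustered with K-means into three regions (the published pipeline also uses a softmax classifier, which is omitted here):

      import numpy as np
      from sklearn.cluster import KMeans

      def segment_field_image(image, n_regions=3):
          """Cluster pixels by colour into e.g. radish / bare ground / mulching film regions.

          `image` is an H x W x 3 RGB array; returns an H x W label map. Illustrative only.
          """
          h, w, c = image.shape
          pixels = image.reshape(-1, c).astype(float) / 255.0
          labels = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit_predict(pixels)
          return labels.reshape(h, w)

      fake_uav_tile = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
      label_map = segment_field_image(fake_uav_tile)
      print(np.unique(label_map))   # region indices 0..2; radish regions would then go to the CNN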

  10. Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas.

    Science.gov (United States)

    Chang, P; Grinband, J; Weinberg, B D; Bardis, M; Khy, M; Cadena, G; Su, M-Y; Cha, S; Filippi, C G; Bota, D; Baldi, P; Poisson, L M; Jain, R; Chow, D

    2018-05-10

    The World Health Organization has recently placed new emphasis on the integration of genetic information for gliomas. While tissue sampling remains the criterion standard, noninvasive imaging techniques may provide complementary insight into clinically relevant genetic mutations. Our aim was to train a convolutional neural network to independently predict underlying molecular genetic mutation status in gliomas with high accuracy and identify the most predictive imaging features for each mutation. MR imaging data and molecular information were retrospectively obtained from The Cancer Imaging Archives for 259 patients with either low- or high-grade gliomas. A convolutional neural network was trained to classify isocitrate dehydrogenase 1 (IDH1) mutation status, 1p/19q codeletion, and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. Principal component analysis of the final convolutional neural network layer was used to extract the key imaging features critical for successful classification. Classification had high accuracy: IDH1 mutation status, 94%; 1p/19q codeletion, 92%; and MGMT promoter methylation status, 83%. Each genetic category was also associated with distinctive imaging features such as definition of tumor margins, T1 and FLAIR suppression, extent of edema, extent of necrosis, and textural features. Our results indicate that for The Cancer Imaging Archives dataset, machine-learning approaches allow classification of individual genetic mutations of both low- and high-grade gliomas. We show that relevant MR imaging features acquired from an added dimensionality-reduction technique demonstrate that neural networks are capable of learning key imaging components without prior feature selection or human-directed training. © 2018 by American Journal of Neuroradiology.

  11. Gas chimney detection based on improving the performance of combined multilayer perceptron and support vector classifier

    NARCIS (Netherlands)

    Hashemi, H.; Tax, D.M.J.; Duin, R.P.W.; Javaherian, A.; De Groot, P.

    2008-01-01

    Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a

  12. WAVELET ANALYSIS AND NEURAL NETWORK CLASSIFIERS TO DETECT MID-SAGITTAL SECTIONS FOR NUCHAL TRANSLUCENCY MEASUREMENT

    Directory of Open Access Journals (Sweden)

    Giuseppa Sciortino

    2016-04-01

    Full Text Available We propose a methodology to support the physician in the automatic identification of mid-sagittal sections of the fetus in ultrasound videos acquired during the first trimester of pregnancy. A good mid-sagittal section is a key requirement for correct measurement of nuchal translucency, which is one of the main markers for screening of chromosomal defects such as trisomy 13, 18, and 21. NT measurement is beyond the scope of this article. The proposed methodology is mainly based on wavelet analysis and neural network classifiers to detect the jawbone, and on radial symmetry analysis to detect the choroid plexus. These steps identify the frames that represent correct mid-sagittal sections to be processed. The performance of the proposed methodology was analyzed on 3000 random frames uniformly extracted from 10 real clinical ultrasound videos. With respect to a ground truth provided by an expert physician, we obtained a true positive rate, true negative rate, and balanced accuracy of 87.26%, 94.98%, and 91.12%, respectively.

  13. An Unobtrusive Fall Detection and Alerting System Based on Kalman Filter and Bayes Network Classifier.

    Science.gov (United States)

    He, Jian; Bai, Shuang; Wang, Xiaoyi

    2017-06-16

    Falls are one of the main health risks among the elderly. A fall detection system based on inertial sensors can automatically detect fall events and alert a caregiver for immediate assistance, so as to reduce injuries caused by falls. Nevertheless, most inertial sensor-based fall detection technologies have focused on the accuracy of detection while neglecting the quantization noise caused by the inertial sensors. In this paper, an activity model based on tri-axial acceleration and gyroscope data is proposed, and the difference between activities of daily living (ADLs) and falls is analyzed. Meanwhile, a Kalman filter is used to preprocess the raw data so as to reduce noise. A sliding window and a Bayes network classifier are introduced to develop a wearable fall detection system, which is composed of a wearable motion sensor and a smart phone. The experiments show that the proposed system distinguishes simulated falls from ADLs with a high accuracy of 95.67%, while sensitivity and specificity are 99.0% and 95.0%, respectively. Furthermore, the smart phone can issue an alarm to caregivers so as to provide timely and accurate help for the elderly as soon as the system detects a fall.
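    The preprocessing step can be sketched with a scalar Kalman filter smoothing a noisy acceleration-magnitude signal before windowing and classification; the process and measurement variances below are assumed tuning values, not those of the paper:

      import numpy as np

      def kalman_smooth(z, process_var=1e-3, meas_var=0.05):
          """Scalar Kalman filter (random-walk state model) for a noisy acceleration magnitude signal."""
          x_est = z[0]          # state estimate
          p_est = 1.0           # estimate covariance
          out = np.empty_like(z, dtype=float)
          for k, zk in enumerate(z):
              # predict: the state is expected to stay the same, uncertainty grows
              p_pred = p_est + process_var
              # update with the new measurement
              gain = p_pred / (p_pred + meas_var)
              x_est = x_est + gain * (zk - x_est)
              p_est = (1.0 - gain) * p_pred
              out[k] = x_est
          return out

      t = np.linspace(0, 4, 200)
      truth = 1.0 + (t > 2.0) * 1.5                      # a step, e.g. an impact during a fall
      noisy = truth + np.random.normal(0, 0.3, t.shape)  # raw sensor reading with noise
      smoothed = kalman_smooth(noisy)
      print(np.abs(smoothed - truth).mean() < np.abs(noisy - truth).mean())   # smoothing reduces error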

  14. Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction.

    Science.gov (United States)

    Kumudha, P; Venkatesan, R

    Effective prediction of defect-prone software modules will enable software developers to achieve efficient allocation of resources and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Generally, software testing is a critical task in the software development process, as it saves time and budget by detecting defects as early as possible and delivering a defect-free product to the customers. This testing phase should be operated carefully and effectively in order to release a defect-free (bug-free) software product. To improve the software testing process, fault prediction methods identify the software parts that are most likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with the early predictors available in the literature for the same datasets.

  15. Automatic Assessing of Tremor Severity Using Nonlinear Dynamics, Artificial Neural Networks and Neuro-Fuzzy Classifier

    Directory of Open Access Journals (Sweden)

    GEMAN, O.

    2014-02-01

    Full Text Available Neurological diseases like Alzheimer's disease, epilepsy, Parkinson's disease, multiple sclerosis, and other dementias influence the lives of patients, their families, and society. Parkinson's disease (PD) is a neurodegenerative disease that occurs due to the loss of dopamine, a neurotransmitter, and the slow destruction of neurons. The brain area affected by the progressive destruction of neurons is responsible for controlling movements, and patients with PD reveal rigid and uncontrollable gestures, postural instability, small handwriting, and tremor. Commercial activity-promoting gaming systems such as the Nintendo Wii and Xbox Kinect can be used as tools for acquiring tremor, gait, or other biomedical signals. They can also aid rehabilitation in clinical settings. This paper emphasizes the use of intelligent optical sensors or accelerometers in biomedical signal acquisition, and of specific nonlinear dynamics parameters or fuzzy logic in Parkinson's disease tremor analysis. Nowadays, there is no screening test for early detection of PD. Therefore, we investigated a method to predict PD based on image processing of the handwriting of a candidate PD patient. For classification and discrimination between healthy people and people with PD, we used Artificial Neural Networks (Radial Basis Function (RBF) and Multilayer Perceptron (MLP)) and an Adaptive Neuro-Fuzzy Classifier (ANFC). In general, the results may be expressed as a prognostic (risk) degree of developing PD.

  16. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers.

    Science.gov (United States)

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-04-15

    Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique for identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject "at rest"). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing "signal" (brain activity) can be distinguished from the "noise" components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX ("FMRIB's ICA-based X-noiseifier"), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the original
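    The final cleanup step is analogous to the following sketch: decompose the data with FastICA and reconstruct it with the components judged to be noise zeroed out (here the noise indices are assumed, whereas FIX assigns them with its trained classifier):

      import numpy as np
      from sklearn.decomposition import FastICA

      def remove_noise_components(X, noise_idx, n_components=10, random_state=0):
          """Reconstruct the data with the listed ICA components removed.

          X: (time points x voxels/channels) data matrix.
          noise_idx: indices of components judged to be noise (FIX would supply these automatically).
          """
          ica = FastICA(n_components=n_components, random_state=random_state)
          sources = ica.fit_transform(X)               # (time points x components)
          sources[:, noise_idx] = 0.0                  # drop the noise components
          return ica.inverse_transform(sources)        # cleaned data

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 50))               # toy stand-in for an fMRI time-series matrix
      cleaned = remove_noise_components(X, noise_idx=[0, 3])
      print(cleaned.shape)                             # (200, 50)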

  17. Combining binary classifiers to improve tree species discrimination at leaf level

    CSIR Research Space (South Africa)

    Dastile, X

    2012-11-01

    Full Text Available The neural networks toolbox (version 5.0.2, R2007a) of MATLAB was used for training. The training parameter "goal" was set to 0.03; training is stopped if the error function falls below this value. The training data was presented to the neural networks...

  18. Convolutional Neural Networks with Batch Normalization for Classifying Hi-hat, Snare, and Bass Percussion Sound Samples

    DEFF Research Database (Denmark)

    Gajhede, Nicolai; Beck, Oliver; Purwins, Hendrik

    2016-01-01

    After having revolutionized image and speech processing, convolutional neural networks (CNN) are now starting to become more and more successful in music information retrieval as well. We compare four CNN types for classifying a dataset of more than 3000 acoustic and synthesized samples...

  19. A Novel HMM Distributed Classifier for the Detection of Gait Phases by Means of a Wearable Inertial Sensor Network

    Directory of Open Access Journals (Sweden)

    Juri Taborri

    2014-09-01

    Full Text Available In this work, we decided to apply a hierarchical weighted decision, proposed and used in other research fields, to the recognition of gait phases. The developed and validated novel distributed classifier is based on hierarchical weighted decision from the outputs of scalar Hidden Markov Models (HMM) applied to the angular velocities of the foot, shank, and thigh. The angular velocities of ten healthy subjects were acquired via three uni-axial gyroscopes embedded in inertial measurement units (IMUs) during one walking task, repeated three times, on a treadmill. After validating the novel distributed classifier and the scalar and vectorial classifiers already proposed in the literature with a cross-validation, the classifiers were compared for sensitivity, specificity, and computational load for all combinations of the three targeted anatomical segments. Moreover, the performance of the novel distributed classifier in the estimation of gait variability, in terms of mean time and coefficient of variation, was evaluated. The highest values of specificity and sensitivity (>0.98) for the three classifiers examined here were obtained when the angular velocity of the foot was processed. The distributed and vectorial classifiers reached acceptable values (>0.95) when the angular velocities of the shank and thigh were analyzed. The distributed and scalar classifiers showed computational loads about 100 times lower than that of the vectorial classifier. In addition, the distributed classifiers showed an excellent reliability for the evaluation of mean time and a good/excellent reliability for the coefficient of variation. In conclusion, due to its better performance and small computational load, the proposed novel distributed classifier can be implemented in real-time applications of gait phase recognition, such as evaluating gait variability in patients or controlling active orthoses for the recovery of mobility of lower limb joints.

  20. Classifying chemical mode of action using gene networks and machine learning: a case study with the herbicide linuron.

    Science.gov (United States)

    Ornostay, Anna; Cowie, Andrew M; Hindle, Matthew; Baker, Christopher J O; Martyniuk, Christopher J

    2013-12-01

    The herbicide linuron (LIN) is an endocrine disruptor with an anti-androgenic mode of action. The objectives of this study were to (1) improve knowledge of androgen and anti-androgen signaling in the teleostean ovary and to (2) assess the ability of gene networks and machine learning to classify LIN as an anti-androgen using transcriptomic data. Ovarian explants from vitellogenic fathead minnows (FHMs) were exposed to three concentrations of either 5α-dihydrotestosterone (DHT), flutamide (FLUT), or LIN for 12h. Ovaries exposed to DHT showed a significant increase in 17β-estradiol (E2) production while FLUT and LIN had no effect on E2. To improve understanding of androgen receptor signaling in the ovary, a reciprocal gene expression network was constructed for DHT and FLUT using pathway analysis and these data suggested that steroid metabolism, translation, and DNA replication are processes regulated through AR signaling in the ovary. Sub-network enrichment analysis revealed that FLUT and LIN shared more regulated gene networks in common compared to DHT. Using transcriptomic datasets from different fish species, machine learning algorithms classified LIN successfully with other anti-androgens. This study advances knowledge regarding molecular signaling cascades in the ovary that are responsive to androgens and anti-androgens and provides proof of concept that gene network analysis and machine learning can classify priority chemicals using experimental transcriptomic data collected from different fish species. © 2013.

  1. Using SAR images to delineate ocean oil slicks with a texture-classifying neural network algorithm (TCNNA)

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Pineda, O.; MacDonald, I.R. [Florida State Univ., Tallahassee, FL (United States). Dept. of Oceanography; Zimmer, B. [Texas A and M Univ., Corpus Christi, TX (United States). Dept. of Mathematics and Statistics; Howard, M. [Texas A and M Univ., College Station, TX (United States). Dept. of Oceanography; Pichel, W. [National Oceanic and Atmospheric Administration, Camp Springs, MD (United States). Center for Satellite Applications and Research, National Environmental Satellite, Data and Information Service; Li, X. [National Oceanic and Atmospheric Administration, Camp Springs, MD (United States). Systems Group, National Environmental Satellite, Data and Information

    2009-10-15

    Synthetic aperture radar (SAR) is used to detect surfactant layers produced by floating oil on the ocean surface. This study presented details of a texture-classifying neural network algorithm (TCNNA) designed to process SAR data from a wide selection of beam modes. Patterns from SAR imagery were extracted in a semi-supervised procedure using a combination of edge-detection filters; texture descriptors; collection information; and environmental data. Various natural oil seeps in the Gulf of Mexico were used as case studies. An analysis of the case studies demonstrated that the TCNNA was able to extract targets and rapidly interpret images collected under a range of environmental conditions. Results presented by the TCNNA were used to evaluate the effects of different environmental conditions on the expressions of oil slicks detected by the data. Optimal incidence angle ranges and wind speed ranges for surfactant film detection were also presented. Results obtained by the TCNNA can be stored and manipulated in geographic information system (GIS) data layers. 26 refs., 1 tab., 7 figs.

  2. Combined natural gas and electricity network pricing

    Energy Technology Data Exchange (ETDEWEB)

    Morais, M.S.; Marangon Lima, J.W. [Universidade Federal de Itajuba, Rua Dr. Daniel de Carvalho, no. 296, Passa Quatro, Minas Gerais, CEP 37460-000 (Brazil)

    2007-04-15

    The introduction of competition to electricity generation and commercialization has been the main focus of many restructuring experiences around the world. Open access to the transmission network and a fair regulated tariff have been the keystones for the development of the electricity market. Parallel to the electricity industry, the natural gas business has great interaction with the electricity market in terms of fuel consumption and energy conversion. Given that the monopolistic transmission and distribution activities are very similar to natural gas transportation through pipelines, economic regulation of the natural gas network should be coherent with its transmission counterpart. This paper shows the application of the main wheeling charge methods, such as MW/gas-mile, invested related asset cost (IRAC) and Aumann-Shapley allocation, to both the transmission and gas networks. Steady-state equations are developed to accommodate the various pricing methods. Some examples clarify the results, in terms of investments for thermal generation plants and end consumers, when combined pricing methods are used for the transmission and gas networks. The paper also shows that the synergies between the gas and electricity industries should be adequately considered; otherwise, wrong economic signals are sent to the market players. (author)

  3. Label-Driven Learning Framework: Towards More Accurate Bayesian Network Classifiers through Discrimination of High-Confidence Labels

    Directory of Open Access Journals (Sweden)

    Yi Sun

    2017-12-01

    Full Text Available Bayesian network classifiers (BNCs have demonstrated competitive classification accuracy in a variety of real-world applications. However, it is error-prone for BNCs to discriminate among high-confidence labels. To address this issue, we propose the label-driven learning framework, which incorporates instance-based learning and ensemble learning. For each testing instance, high-confidence labels are first selected by a generalist classifier, e.g., the tree-augmented naive Bayes (TAN classifier. Then, by focusing on these labels, conditional mutual information is redefined to more precisely measure mutual dependence between attributes, thus leading to a refined generalist with a more reasonable network structure. To enable finer discrimination, an expert classifier is tailored for each high-confidence label. Finally, the predictions of the refined generalist and the experts are aggregated. We extend TAN to LTAN (Label-driven TAN by applying the proposed framework. Extensive experimental results demonstrate that LTAN delivers superior classification accuracy to not only several state-of-the-art single-structure BNCs but also some established ensemble BNCs at the expense of reasonable computation overhead.

  4. Using Unsupervised Learning to Improve the Naive Bayes Classifier for Wireless Sensor Networks

    NARCIS (Netherlands)

    Zwartjes, G.J.; Havinga, Paul J.M.; Smit, Gerardus Johannes Maria; Hurink, Johann L.

    2012-01-01

    Online processing is essential for many sensor network applications. Sensor nodes can sample far more data than what can practically be transmitted using state of the art sensor network radios. Online processing, however, is complicated due to limited resources of individual nodes. The naive Bayes

  5. Classifying Sensors Depending on their IDs to Reduce Power Consumption in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Ayman Mohammd Brisha

    2010-05-01

    Full Text Available Wireless sensor networks produce a large amount of data that needs to be processed, delivered, and assessed according to the application objectives. A cluster-based architecture is effective for data-gathering in wireless sensor networks. Clustering provides an effective way of prolonging the lifetime of a wireless sensor network. Current clustering algorithms usually utilize two techniques, selecting cluster heads with more residual energy and rotating cluster heads periodically, in order to distribute the energy consumption among the nodes in each cluster and extend the network lifetime. In clustering, sensors are divided into groups, so that sensors communicate information only to cluster heads, and the cluster heads then communicate the aggregated information to the processing center, which may save energy. In this paper we present the Two Relay Sensor Algorithm (TRSA), which divides a wireless sensor network (WSN) into unequal clusters, showing that it can effectively save power and maximize the lifetime of the network. Simulation results show that the proposed unequal clustering mechanism (TRSA) balances the energy consumption among all sensor nodes and achieves an obvious improvement in network lifetime.

  6. Graphic Symbol Recognition using Graph Based Signature and Bayesian Network Classifier

    OpenAIRE

    Luqman, Muhammad Muzzamil; Brouard, Thierry; Ramel, Jean-Yves

    2010-01-01

    We present a new approach for recognition of complex graphic symbols in technical documents. Graphic symbol recognition is a well known challenge in the field of document image analysis and is at heart of most graphic recognition systems. Our method uses structural approach for symbol representation and statistical classifier for symbol recognition. In our system we represent symbols by their graph based signatures: a graphic symbol is vectorized and is converted to an attributed relational g...

  7. Carbon classified?

    DEFF Research Database (Denmark)

    Lippert, Ingmar

    2012-01-01

    Using an actor-network theory (ANT) framework, the aim is to investigate the actors who bring together the elements needed to classify their carbon emission sources and unpack the heterogeneous relations drawn on. Based on an ethnographic study of corporate agents of ecological modernisation over a period of 13 months, this paper provides an exploration of three cases of enacting classification. Drawing on ANT, we problematise the silencing of a range of possible modalities of consumption facts and point to the ontological ethics involved in such performances. In a context of global warming...

  8. Detecting Cyber-Attacks on Wireless Mobile Networks Using Multicriterion Fuzzy Classifier with Genetic Attribute Selection

    Directory of Open Access Journals (Sweden)

    El-Sayed M. El-Alfy

    2015-01-01

    Full Text Available With the proliferation of wireless and mobile network infrastructures and capabilities, a wide range of exploitable vulnerabilities emerges due to the use of multivendor and multidomain cross-network services for signaling and transport of Internet- and wireless-based data. Consequently, the rates and types of cyber-attacks have grown considerably and current security countermeasures for protecting information and communication may be no longer sufficient. In this paper, we investigate a novel methodology based on multicriterion decision making and fuzzy classification that can provide a viable second-line of defense for mitigating cyber-attacks. The proposed approach has the advantage of dealing with various types and sizes of attributes related to network traffic such as basic packet headers, content, and time. To increase the effectiveness and construct optimal models, we augmented the proposed approach with a genetic attribute selection strategy. This allows efficient and simpler models which can be replicated at various network components to cooperatively detect and report malicious behaviors. Using three datasets covering a variety of network attacks, the performance enhancements due to the proposed approach are manifested in terms of detection errors and model construction times.

  9. Model of hierarchical self-organizing neural networks for detecting and classifying diabetic retinopathy

    Directory of Open Access Journals (Sweden)

    Hossein Ghayoumi Zadeh

    2018-04-01

    Conclusion: These days, cases of diabetes with hypertension are constantly increasing, and one of the main adverse effects of this disease is related to the eyes. In this respect, the diagnosis of retinopathy, which amounts to the identification of exudates, microaneurysms, and bleeding, is of particular importance. The results show that the proposed model is able to detect lesions in diabetic retinopathy images and classify them with an acceptable accuracy. In addition, the results suggest that this method has an acceptable performance compared to other methods.

  10. Network Intrusion Detection System (NIDS) in Cloud Environment based on Hidden Naïve Bayes Multiclass Classifier

    Directory of Open Access Journals (Sweden)

    Hafza A. Mahmood

    2018-04-01

    Full Text Available The cloud environment is a next-generation Internet-based computing system that supplies customizable services to end users for working with or accessing various cloud applications. In order to provide security and decrease damage to information systems, networks, and computer systems, it is important to provide an intrusion detection system (IDS). Cloud environments are now under threat from network intrusions, among which Denial of Service (DoS) attacks are one of the most prevalent and offensive means, causing dangerous impacts on cloud computing systems. This paper proposes a Hidden Naïve Bayes (HNB) classifier to handle DoS attacks; HNB is a data mining (DM) model that relaxes the conditional independence assumption of the Naïve Bayes (NB) classifier. The proposed system uses the HNB classifier supported with discretization and feature selection, where selecting the best features enhances the performance of the system and reduces computation time. To evaluate the performance of the proposed system, the KDD CUP 99 and NSL-KDD datasets have been used. The experimental results show that the HNB classifier improves the performance of the NIDS in terms of accuracy and DoS detection: the accuracy of detecting DoS is 100% on three KDD CUP 99 test sets using only 12 features selected by gain ratio, while on the NSL-KDD dataset the accuracy of detecting DoS attacks is 90% on three experimental NSL-KDD sets using only 10 selected features.
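    The gain-ratio ranking used for feature selection can be sketched as follows on discretized features (toy data only; dataset loading and the HNB classifier itself are omitted):

      import numpy as np

      def entropy(labels):
          _, counts = np.unique(labels, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log2(p))

      def gain_ratio(feature, labels):
          """Information gain of a discretized feature divided by its split information."""
          base = entropy(labels)
          values, counts = np.unique(feature, return_counts=True)
          weights = counts / counts.sum()
          cond = sum(w * entropy(labels[feature == v]) for v, w in zip(values, weights))
          info_gain = base - cond
          split_info = -np.sum(weights * np.log2(weights))
          return info_gain / split_info if split_info > 0 else 0.0

      # toy discretized data: rank features and keep the top ones before training the classifier
      rng = np.random.default_rng(0)
      y = rng.integers(0, 2, 500)                        # attack / normal labels
      X = np.column_stack([y ^ (rng.random(500) < 0.1),  # informative feature (mostly equals y)
                           rng.integers(0, 3, 500)])     # irrelevant feature
      ranks = [gain_ratio(X[:, j], y) for j in range(X.shape[1])]
      print(np.argsort(ranks)[::-1])                     # informative feature ranked first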

  11. A Machine Learning Approach for Identifying and Classifying Faults in Wireless Sensor Networks

    NARCIS (Netherlands)

    Warriach, Ehsan Ullah; Aiello, Marco; Tei, Kenji

    2012-01-01

    Wireless Sensor Network (WSN) deployment experiences show that collected data is prone to be faulty. Faults are due to internal and external influences, such as calibration, low battery, environmental interference and sensor aging. However, only few solutions exist to deal with faulty sensory data

  12. On the complexity of neural network classifiers: a comparison between shallow and deep architectures.

    Science.gov (United States)

    Bianchini, Monica; Scarselli, Franco

    2014-08-01

    Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed of several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications, facing very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to study how the depth of feedforward neural networks impacts their ability to implement high complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared, by deriving upper and lower bounds on their complexity, and studying how the complexity depends on the number of hidden units and the activation function used. The obtained results seem to support the idea that deep networks actually implement functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.

  13. Multi-categorical deep learning neural network to classify retinal images: A pilot study employing small database.

    Science.gov (United States)

    Choi, Joon Yul; Yoo, Tae Keun; Seo, Jeong Gi; Kwak, Jiyong; Um, Terry Taewoong; Rim, Tyler Hyungtaek

    2017-01-01

    Deep learning is emerging as a powerful tool for analyzing medical images. Retinal disease detection using computer-aided diagnosis from fundus images has emerged as a new approach. We applied a deep learning convolutional neural network, using MatConvNet, for automated detection of multiple retinal diseases with fundus photographs from the STructured Analysis of the REtina (STARE) database. The dataset was built by expanding data on 10 categories, including normal retina and nine retinal diseases. The optimal outcomes were acquired by using random forest transfer learning based on the VGG-19 architecture. The classification results depended greatly on the number of categories. As the number of categories increased, the performance of the deep learning models diminished. When all 10 categories were included, we obtained results with an accuracy of 30.5%, relative classifier information (RCI) of 0.052, and Cohen's kappa of 0.224. Considering the three categories of normal, background diabetic retinopathy, and dry age-related macular degeneration, the multi-categorical classifier showed an accuracy of 72.8%, 0.283 RCI, and 0.577 kappa. In addition, several ensemble classifiers enhanced the multi-categorical classification performance. The transfer learning incorporated with an ensemble classifier using a clustering and voting approach presented the best performance, with an accuracy of 36.7%, 0.053 RCI, and 0.225 kappa in the 10 retinal diseases classification problem. First, due to the small size of the datasets, the deep learning techniques in this study were not effective enough to be applied in clinics, where numerous patients suffering from various types of retinal disorders visit for diagnosis and treatment. Second, we found that transfer learning incorporated with ensemble classifiers can improve the classification performance for detecting multi-categorical retinal diseases. Further studies should confirm the effectiveness of the algorithms with large datasets obtained from hospitals.
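    A sketch of the transfer-learning setup described above, assuming TensorFlow/Keras and scikit-learn are available and that the fundus images have already been resized to 224x224: a pretrained VGG-19 acts as a fixed feature extractor feeding a random forest (variable names are illustrative, not from the paper):

      import numpy as np
      from tensorflow.keras.applications import VGG19
      from tensorflow.keras.applications.vgg19 import preprocess_input
      from sklearn.ensemble import RandomForestClassifier

      # Pretrained VGG-19 without its classification head; global average pooling yields one
      # 512-dimensional feature vector per image.
      extractor = VGG19(weights="imagenet", include_top=False, pooling="avg")

      def vgg19_features(images):
          """images: (n, 224, 224, 3) uint8 fundus photographs -> (n, 512) feature matrix."""
          return extractor.predict(preprocess_input(images.astype("float32")), verbose=0)

      # assumed variables (train_images, train_labels, test_images) stand in for the STARE data:
      # feats_train = vgg19_features(train_images)
      # clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(feats_train, train_labels)
      # predictions = clf.predict(vgg19_features(test_images))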

  14. Crystal surface analysis using matrix textural features classified by a Probabilistic Neural Network

    International Nuclear Information System (INIS)

    Sawyer, C.R.; Quach, V.T.; Nason, D.; van den Berg, L.

    1991-01-01

    A system is under development in which the surface quality of a growing bulk mercuric iodide crystal is monitored by video camera at regular intervals for early detection of growth irregularities. Mercuric iodide single crystals are employed in radiation detectors. A microcomputer system is used for image capture and processing. The digitized image is divided into multiple overlapping subimages, and features are extracted from each subimage based on statistical measures of the gray tone distribution, according to the method of Haralick [1]. Twenty parameters are derived from each subimage and presented to a Probabilistic Neural Network (PNN) [2] for classification. This number of parameters was found to be optimal for the system. The PNN is a hierarchical, feed-forward network that can be rapidly reconfigured as additional training data become available. Training data is gathered by reviewing digital images of many crystals during their growth cycle and compiling two sets of images, those with and without irregularities. 6 refs., 4 figs
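    A few Haralick-style gray tone co-occurrence statistics can be computed per subimage as in the sketch below (scikit-image spells the functions graycomatrix/graycoprops in recent releases and greycomatrix/greycoprops in older ones; only four of the twenty parameters mentioned are shown):

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops   # "greycomatrix" in older scikit-image

      def haralick_style_features(subimage, levels=32):
          """A few co-occurrence statistics for one grayscale subimage (pixel values 0..255)."""
          quantized = (subimage // (256 // levels)).astype(np.uint8)    # reduce gray levels
          glcm = graycomatrix(quantized,
                              distances=[1],
                              angles=[0, np.pi / 2],
                              levels=levels,
                              symmetric=True,
                              normed=True)
          props = ["contrast", "homogeneity", "energy", "correlation"]
          return np.array([graycoprops(glcm, p).mean() for p in props])

      subimage = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
      print(haralick_style_features(subimage))    # feature vector that would be fed to the PNN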

  15. Picasso: A Modular Framework for Visualizing the Learning Process of Neural Network Image Classifiers

    Directory of Open Access Journals (Sweden)

    Ryan Henderson

    2017-09-01

    Full Text Available Picasso is a free open-source (Eclipse Public License) web application written in Python for rendering standard visualizations useful for analyzing convolutional neural networks. Picasso ships with occlusion maps and saliency maps, two visualizations which help reveal issues that evaluation metrics like loss and accuracy might hide: for example, learning a proxy classification task. Picasso works with the Tensorflow deep learning framework, and Keras (when the model can be loaded into the Tensorflow backend). Picasso can be used with minimal configuration by deep learning researchers and engineers alike across various neural network architectures. Adding new visualizations is simple: the user can specify their visualization code and HTML template separately from the application code.

  16. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    Science.gov (United States)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  17. Robust Template Decomposition without Weight Restriction for Cellular Neural Networks Implementing Arbitrary Boolean Functions Using Support Vector Classifiers

    Directory of Open Access Journals (Sweden)

    Yih-Lon Lin

    2013-01-01

    Full Text Available If the given Boolean function is linearly separable, a robust uncoupled cellular neural network can be designed as a maximal margin classifier. On the other hand, if the given Boolean function is linearly separable but has a small geometric margin or it is not linearly separable, a popular approach is to find a sequence of robust uncoupled cellular neural networks implementing the given Boolean function. In the past research works using this approach, the control template parameters and thresholds are restricted to assume only a given finite set of integers, and this is certainly unnecessary for the template design. In this study, we try to remove this restriction. Minterm- and maxterm-based decomposition algorithms utilizing the soft margin and maximal margin support vector classifiers are proposed to design a sequence of robust templates implementing an arbitrary Boolean function. Several illustrative examples are simulated to demonstrate the efficiency of the proposed method by comparing our results with those produced by other decomposition methods with restricted weights.

  18. Neural network and SVM classifiers accurately predict lipid binding proteins, irrespective of sequence homology.

    Science.gov (United States)

    Bakhtiarizadeh, Mohammad Reza; Moradi-Shahrbabak, Mohammad; Ebrahimi, Mansour; Ebrahimie, Esmaeil

    2014-09-07

    Due to the central roles of lipid binding proteins (LBPs) in many biological processes, sequence based identification of LBPs is of great interest. The major challenge is that LBPs are diverse in sequence, structure, and function which results in low accuracy of sequence homology based methods. Therefore, there is a need for developing alternative functional prediction methods irrespective of sequence similarity. To identify LBPs from non-LBPs, the performances of support vector machine (SVM) and neural network were compared in this study. Comprehensive protein features and various techniques were employed to create datasets. Five-fold cross-validation (CV) and independent evaluation (IE) tests were used to assess the validity of the two methods. The results indicated that SVM outperforms neural network. SVM achieved 89.28% (CV) and 89.55% (IE) overall accuracy in identification of LBPs from non-LBPs and 92.06% (CV) and 92.90% (IE) (in average) for classification of different LBPs classes. Increasing the number and the range of extracted protein features as well as optimization of the SVM parameters significantly increased the efficiency of LBPs class prediction in comparison to the only previous report in this field. Altogether, the results showed that the SVM algorithm can be run on broad, computationally calculated protein features and offers a promising tool in detection of LBPs classes. The proposed approach has the potential to integrate and improve the common sequence alignment based methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. A Neural Network Classifier Model for Forecasting Safety Behavior at Workplaces

    Directory of Open Access Journals (Sweden)

    Fakhradin Ghasemi

    2017-07-01

    Full Text Available The construction industry is notorious for having an unacceptable rate of fatal accidents. Unsafe behavior has been recognized as the main cause of most accidents occurring at workplaces, particularly construction sites. Having a predictive model of safety behavior can be helpful in preventing construction accidents. The aim of the present study was to build a predictive model of unsafe behavior using the Artificial Neural Network approach. A brief literature review was conducted on factors affecting safe behavior at workplaces, and nine factors were selected to be included in the study. Data were gathered using a validated questionnaire from several construction sites. A multilayer perceptron approach was utilized for constructing the desired neural network. Several models with various architectures were tested to find the best one. Sensitivity analysis was conducted to find the most influential factors. The model with one hidden layer containing fourteen hidden neurons demonstrated the best performance (Sum of Squared Errors = 6.73). The error rate of the model was approximately 21 percent. The results of sensitivity analysis showed that safety attitude, safety knowledge, supportive environment, and management commitment had the highest effects on safety behavior, while the effects of resource allocation and perceived work pressure were lower than those of the other factors. The complex nature of human behavior at workplaces and the presence of many influential factors make it difficult to achieve a model with perfect performance.
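    The reported best architecture, a single hidden layer of fourteen neurons over nine input factors, can be sketched with scikit-learn; the data below is synthetic because the survey data is not available:

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.random((400, 9))                      # nine safety-climate factors per worker (synthetic)
      y = (X[:, 0] + X[:, 1] + 0.3 * rng.standard_normal(400) > 1.0).astype(int)  # safe / unsafe behavior

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      model = MLPClassifier(hidden_layer_sizes=(14,),   # one hidden layer with fourteen neurons
                            activation="relu",
                            max_iter=2000,
                            random_state=0)
      model.fit(X_tr, y_tr)
      print(model.score(X_te, y_te))                # held-out accuracy on the synthetic data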

  20. Generalized Network Psychometrics : Combining Network and Latent Variable Models

    NARCIS (Netherlands)

    Epskamp, S.; Rhemtulla, M.; Borsboom, D.

    2017-01-01

    We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between

  1. Muon Neutrino Disappearance in NOvA with a Deep Convolutional Neural Network Classifier

    Energy Technology Data Exchange (ETDEWEB)

    Rocco, Dominick Rosario [Minnesota U.

    2016-03-01

    The NuMI Off-axis Neutrino Appearance Experiment (NOvA) is designed to study neutrino oscillation in the NuMI (Neutrinos at the Main Injector) beam. NOvA observes neutrino oscillation using two detectors separated by a baseline of 810 km; a 14 kt Far Detector in Ash River, MN and a functionally identical 0.3 kt Near Detector at Fermilab. The experiment aims to provide new measurements of Δm² and θ23 and has the potential to determine the neutrino mass hierarchy as well as observe CP violation in the neutrino sector. Essential to these analyses is the classification of neutrino interaction events in the NOvA detectors. Raw detector output from NOvA is interpretable as a pair of images which provide orthogonal views of particle interactions. A recent advance in the field of computer vision is the advent of convolutional neural networks, which have delivered top results in the latest image recognition contests. This work presents an approach novel to particle physics analysis in which a convolutional neural network is used for classification of particle interactions. The approach has been demonstrated to improve the signal efficiency and purity of the event selection, and thus the physics sensitivity. Early NOvA data (2.74×10²⁰ POT, 14 kt equivalent) has been analyzed to provide new best-fit measurements of sin²(θ23) = 0.43 (with a statistically-degenerate complement near 0.60) and |Δm²| = 2.48×10⁻³ eV².

  2. Classifying and profiling Social Networking Site users: a latent segmentation approach.

    Science.gov (United States)

    Alarcón-del-Amo, María-del-Carmen; Lorenzo-Romero, Carlota; Gómez-Borja, Miguel-Ángel

    2011-09-01

    Social Networking Sites (SNSs) have shown exponential growth in recent years. The first step towards an efficient use of SNSs stems from an understanding of individuals' behaviors within these sites. In this research, we have obtained a typology of SNS users through a latent segmentation approach, based on the frequency with which users perform different activities within the SNSs, sociodemographic variables, experience in SNSs, and dimensions related to their interaction patterns. Four different segments have been obtained. The "introvert" and "novel" users are the more occasional ones. They utilize SNSs mainly to communicate with friends, although "introverts" are more passive users. The "versatile" user performs different activities, although occasionally. Finally, the "expert-communicator" performs a greater variety of activities with a higher frequency. They tend to perform some marketing-related activities such as commenting on ads or gathering information about products and brands. Companies can take advantage of these segmentation schemes in different ways: first, by tracking and monitoring information interchange between users regarding their products and brands. Second, they should match the SNS users' profiles with their market targets to use SNSs as marketing tools. Finally, for most businesses, the expert users could be interesting opinion leaders and potential brand influencers.

  3. Monitoring and classifying animal behavior using ZigBee-based mobile ad hoc wireless sensor networks and artificial neural networks

    DEFF Research Database (Denmark)

    S. Nadimi, Esmaeil; Nyholm Jørgensen, Rasmus; Blanes-Vidal, Victoria

    2012-01-01

    Animal welfare is an issue of great importance in modern food production systems. Because animal behavior provides reliable information about animal health and welfare, recent research has aimed at designing monitoring systems capable of measuring behavioral parameters and transforming them into their corresponding behavioral modes. However, network unreliability and high-energy consumption have limited the applicability of those systems. In this study, a 2.4-GHz ZigBee-based mobile ad hoc wireless sensor network (MANET) that is able to overcome those problems is presented. The designed MANET showed high communication reliability, low energy consumption and low packet loss rate (14.8%) due to the deployment of modern communication protocols (e.g. multi-hop communication and handshaking protocol). The measured behavioral parameters were transformed into the corresponding behavioral modes using a multilayer...

  4. COMBINED AND STORM SEWER NETWORK MONITORING

    OpenAIRE

    Justyna Synowiecka; Ewa Burszta-Adamiak; Tomasz Konieczny; Paweł Malinowski

    2014-01-01

    Monitoring of drainage networks is an extremely important tool used to understand the phenomena occurring in them. In an era of urbanization and increased run-off, at the expense of natural retention in the catchment, it helps to minimize the risk of local flooding and pollution. Its scope includes measurement of the amount of rainfall, with the use of rain gauges, and measurements in the sewer network, of flows and channel filling, with the help of flow meters. An indispens...

  5. COMBINED AND STORM SEWER NETWORK MONITORING

    Directory of Open Access Journals (Sweden)

    Justyna Synowiecka

    2014-10-01

    Full Text Available Monitoring of drainage networks is an extremely important tool used to understand the phenomena occurring in them. In an era of urbanization and increased run-off, at the expense of natural retention in the catchment, it helps to minimize the risk of local flooding and pollution. Its scope includes measurement of the amount of rainfall, with the use of rain gauges, and measurements in the sewer network, of flows and channel filling, with the help of flow meters. An indispensable part of this step is their proper calibration. In addition to ongoing monitoring of the sewer system, periodic inspections by qualified employees of the water and sewage company should be carried out. The following article reviews measurement devices, their calibration methods, as well as the phenomena that occur during operation of the sewer network. It provides a solution for monitoring and control based on the experience of the Municipal Water and Sewage Company in Wroclaw, describing common operational problems, their causes, prevention methods, and a network operation walkthrough aimed at improving key performance indicators (KPIs) according to the ECB (European Benchmarking Co-operation).

  6. Combining neural networks for protein secondary structure prediction

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1995-01-01

    In this paper structured neural networks are applied to the problem of predicting the secondary structure of proteins. A hierarchical approach is used where specialized neural networks are designed for each structural class and then combined using another neural network. The submodels are designed by using a priori knowledge of the mapping between protein building blocks and the secondary structure and by using weight sharing. Since none of the individual networks have more than 600 adjustable weights, over-fitting is avoided. When ensembles of specialized experts are combined the performance...

  7. Novel method to classify hemodynamic response obtained using multi-channel fNIRS measurements into two groups: Exploring the combinations of channels

    Directory of Open Access Journals (Sweden)

    Hiroko eIchikawa

    2014-07-01

    Full Text Available Near-infrared spectroscopy (NIRS) in psychiatric studies has widely demonstrated that cerebral hemodynamics differs among psychiatric patients. Recently we found that children with attention-deficit/hyperactivity disorder (ADHD) and children with autism spectrum disorders (ASD) showed different hemodynamic responses to their own mother's face. Based on this finding, we may be able to classify their hemodynamic data into those two groups and predict which diagnostic group an unknown participant belongs to. In the present study, we propose a novel statistical method for classifying the hemodynamic data of these two groups. By applying a support vector machine (SVM), we searched for the combination of measurement channels at which the hemodynamic response differed between the two groups, ADHD and ASD. The SVM found the optimal subset of channels in each data set and successfully classified the ADHD data from the ASD data. For the 24-dimensional hemodynamic data, two optimal subsets classified the hemodynamic data with 84% classification accuracy, while the subset containing all 24 channels classified with 62% accuracy. These results indicate the potential application of our novel method for classifying the hemodynamic data into two groups and revealing the combinations of channels that efficiently differentiate the two groups.
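    The channel-subset search idea can be sketched as a greedy forward selection scored by cross-validated SVM accuracy (synthetic 24-channel data; this is not the exact search procedure used in the study):

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def greedy_channel_selection(X, y, max_channels=6, cv=5):
          """Greedy forward search over channels, scored by cross-validated SVM accuracy."""
          remaining = list(range(X.shape[1]))
          selected, best_score = [], 0.0
          for _ in range(max_channels):
              scored = []
              for ch in remaining:
                  cols = selected + [ch]
                  acc = cross_val_score(SVC(kernel="linear"), X[:, cols], y, cv=cv).mean()
                  scored.append((acc, ch))
              acc, ch = max(scored)
              if acc <= best_score:          # stop when adding a channel no longer helps
                  break
              best_score, selected = acc, selected + [ch]
              remaining.remove(ch)
          return selected, best_score

      rng = np.random.default_rng(0)
      y = np.repeat([0, 1], 30)                         # e.g. ADHD vs. ASD group labels
      X = rng.standard_normal((60, 24))                 # 24-channel hemodynamic features (synthetic)
      X[:, 3] += y * 1.5                                # make channel 3 informative
      print(greedy_channel_selection(X, y))             # channel 3 should be selected early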

  8. Classification of epileptic seizures using wavelet packet log energy and norm entropies with recurrent Elman neural network classifier.

    Science.gov (United States)

    Raghu, S; Sriraam, N; Kumar, G Pradeep

    2017-02-01

    The electroencephalogram, commonly abbreviated as EEG, is considered fundamental for the assessment of neural activity in the brain. In the cognitive neuroscience domain, EEG-based assessment is found to be superior due to its non-invasive ability to detect deep brain structure while exhibiting superior spatial resolution. Especially for studying the neurodynamic behavior of epileptic seizures, EEG recordings reflect the neuronal activity of the brain and thus provide the clinical diagnostic information required by the neurologist. The proposed study makes use of wavelet packet based log and norm entropies with a recurrent Elman neural network (REN) for the automated detection of epileptic seizures. Three conditions, normal, pre-ictal, and epileptic EEG recordings, were considered for the study. An adaptive Wiener filter was initially applied to remove the 50 Hz power line noise from the raw EEG recordings. The raw EEGs were segmented into 1 s patterns to ensure stationarity of the signal. Then a wavelet packet decomposition using the Haar wavelet with five levels was introduced, two entropies, log energy and norm, were estimated, and these were applied to the REN classifier to perform binary classification. The non-linear Wilcoxon statistical test was applied to observe the variation in the features under these conditions. The effect of log energy entropy (without wavelets) was also studied. It was found from the simulation results that the wavelet packet log entropy with the REN classifier yielded a classification accuracy of 99.70% for normal-pre-ictal, 99.70% for normal-epileptic, and 99.85% for pre-ictal-epileptic.
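    The feature extraction step can be sketched with PyWavelets: a five-level Haar wavelet packet decomposition of a 1 s segment, with log energy and norm entropies computed per terminal node (the REN classifier itself is not shown, and the sampling rate is an assumption):

      import numpy as np
      import pywt

      def wavelet_packet_entropies(segment, wavelet="haar", level=5, p=1.5):
          """Log energy entropy and norm entropy of each terminal wavelet packet node."""
          wp = pywt.WaveletPacket(data=segment, wavelet=wavelet, mode="symmetric", maxlevel=level)
          log_e, norm_e = [], []
          for node in wp.get_level(level, order="natural"):
              coeffs = np.asarray(node.data, dtype=float)
              log_e.append(np.sum(np.log(coeffs ** 2 + 1e-12)))   # log energy entropy
              norm_e.append(np.sum(np.abs(coeffs) ** p))          # norm entropy with exponent p
          return np.array(log_e), np.array(norm_e)

      fs = 256                                        # assumed sampling rate
      segment = np.random.standard_normal(fs)         # one 1 s EEG pattern (synthetic)
      log_e, norm_e = wavelet_packet_entropies(segment)
      print(log_e.shape, norm_e.shape)                # 2**5 = 32 terminal nodes each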

  9. Combined principal component preprocessing and n-tuple neural networks for improved classification

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Linneberg, Christian

    2000-01-01

    We present a combined principal component analysis/neural network scheme for classification. The data used to illustrate the method consist of spectral fluorescence recordings from seven different production facilities, and the task is to relate an unknown sample to one of these seven factories....... The data are first preprocessed by performing an individual principal component analysis on each of the seven groups of data. The components found are then used for classifying the data, but instead of making a single multiclass classifier, we follow the ideas of turning a multiclass problem into a number...... of two-class problems. For each possible pair of classes we further apply a transformation to the calculated principal components in order to increase the separation between the classes. Finally we apply the so-called n-tuple neural network to the transformed data in order to give the classification...

  10. SNRFCB: sub-network based random forest classifier for predicting chemotherapy benefit on survival for cancer treatment.

    Science.gov (United States)

    Shi, Mingguang; He, Jianmin

    2016-04-01

    Adjuvant chemotherapy (CTX) should be individualized to provide potential survival benefit and avoid potential harm to cancer patients. Our goal was to establish a computational approach for making personalized estimates of the survival benefit from adjuvant CTX. We developed a Sub-Network based Random Forest classifier for predicting Chemotherapy Benefit (SNRFCB) based on gene expression datasets of lung cancer. The SNRFCB approach was then validated in independent test cohorts for identifying chemotherapy responder cohorts and chemotherapy non-responder cohorts. SNRFCB involved the pre-selection of gene sub-network signatures based on the mutations and on protein-protein interaction data as well as the application of the random forest algorithm to gene expression datasets. Adjuvant CTX was significantly associated with the prolonged overall survival of lung cancer patients in the chemotherapy responder group (P = 0.008), but it was not beneficial to patients in the chemotherapy non-responder group (P = 0.657). Adjuvant CTX was significantly associated with the prolonged overall survival of lung cancer squamous cell carcinoma (SQCC) subtype patients in the chemotherapy responder cohorts (P = 0.024), but it was not beneficial to patients in the chemotherapy non-responder cohorts (P = 0.383). SNRFCB improved prediction performance as compared to the machine learning method, support vector machine (SVM). To test the general applicability of the predictive model, we further applied the SNRFCB approach to human breast cancer datasets and also observed superior performance. SNRFCB could provide recurrence probabilities for individual patients and identify which patients may benefit from adjuvant CTX in clinical trials.

  11. A cross-sectional evaluation of meditation experience on electroencephalography data by artificial neural network and support vector machine classifiers.

    Science.gov (United States)

    Lee, Yu-Hao; Hsieh, Ya-Ju; Shiah, Yung-Jong; Lin, Yu-Huei; Chen, Chiao-Yun; Tyan, Yu-Chang; GengQiu, JiaCheng; Hsu, Chung-Yao; Chen, Sharon Chia-Ju

    2017-04-01

    Quantifying the meditation experience is a subjective and complex issue because it is confounded by many factors, such as emotional state, method of meditation, and personal physical condition. In this study, we propose a strategy with a cross-sectional analysis to evaluate the meditation experience with 2 artificial intelligence techniques: artificial neural network and support vector machine. Within this analysis system, 3 features of the electroencephalography alpha spectrum and variant normalizing scaling are used as the evaluation variables for detection accuracy. Thereafter, by modulating the sliding window (the period of the analyzed data) and the shifting interval of the window (the time interval by which the analyzed data are shifted), the effect of immediate analysis for the 2 methods is compared. This analysis system is applied to 3 meditation groups, categorizing their meditation experiences in 10-year intervals from novice to junior and to senior. After exhaustive calculation and cross-validation across all variables, a high accuracy rate of >98% is achievable under the criterion of a 0.5-minute sliding window and a 2-second shifting interval for both methods. In short, the minimum analyzable data length is 0.5 minute and the minimum recognizable temporal resolution is 2 seconds in the decision of meditative classification. Our proposed classifier of the meditation experience promotes a rapid evaluation system to distinguish meditation experience and a beneficial utilization of artificial intelligence techniques for big-data analysis.

  12. Optimum Combining for Rapidly Fading Channels in Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Sonia Furman

    2003-10-01

    Full Text Available Research and technology in wireless communication systems such as radar and cellular networks have successfully implemented alternative design approaches that utilize antenna array techniques, such as optimum combining, to mitigate the degradation effects of multipath in rapid fading channels. In ad hoc networks, these methods have not yet been exploited, primarily due to the complexity inherent in the network's architecture. With the high demand for improved signal link quality, devices configured with omnidirectional antennas can no longer meet the growing need for link quality and spectrum efficiency. This study takes an empirical approach to determine an optimum combining antenna array based on 3 variants of interelement spacing. For rapid fading channels, the simulation results show that the performance in the network of devices retrofitted with our antenna arrays consistently exceeded that of devices with an omnidirectional antenna. Further, with the optimum combiner, the performance increased by over 60% compared to that of an omnidirectional antenna in a rapid fading channel.

  13. Classification of mass and normal breast tissue: A convolution neural network classifier with spatial domain and texture images

    International Nuclear Information System (INIS)

    Sahiner, B.; Chan, H.P.; Petrick, N.; Helvie, M.A.; Adler, D.D.; Goodsitt, M.M.; Wei, D.

    1996-01-01

    The authors investigated the classification of regions of interest (ROI's) on mammograms as either mass or normal tissue using a convolution neural network (CNN). A CNN is a back-propagation neural network with two-dimensional (2-D) weight kernels that operate on images. A generalized, fast and stable implementation of the CNN was developed. The input images to the CNN were obtained from the ROI's using two techniques. The first technique employed averaging and subsampling. The second technique employed texture feature extraction methods applied to small subregions inside the ROI. Features computed over different subregions were arranged as texture images, which were subsequently used as CNN inputs. The effects of CNN architecture and texture feature parameters on classification accuracy were studied. Receiver operating characteristic (ROC) methodology was used to evaluate the classification accuracy. A data set consisting of 168 ROI's containing biopsy-proven masses and 504 ROI's containing normal breast tissue was extracted from 168 mammograms by radiologists experienced in mammography. This data set was used for training and testing the CNN. With the best combination of CNN architecture and texture feature parameters, the area under the test ROC curve reached 0.87, which corresponded to a true-positive fraction of 90% at a false positive fraction of 31%. The results demonstrate the feasibility of using a CNN for classification of masses and normal tissue on mammograms
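
    The paper's exact architecture is not reproduced here; the following is a hedged Keras sketch of a small two-dimensional convolution network of the kind described, taking an averaged and subsampled ROI image as input (the 64 x 64 input size, filter counts and activations are assumptions, not the authors' configuration).

```python
import tensorflow as tf
from tensorflow.keras import layers

PATCH = 64  # hypothetical ROI size after averaging and subsampling

model = tf.keras.Sequential([
    tf.keras.Input(shape=(PATCH, PATCH, 1)),
    layers.Conv2D(8, kernel_size=5, activation="sigmoid", padding="same"),
    layers.AveragePooling2D(pool_size=2),
    layers.Conv2D(16, kernel_size=5, activation="sigmoid", padding="same"),
    layers.AveragePooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),  # mass vs. normal tissue
])
model.compile(optimizer="sgd", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```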

  14. Classifying Microorganisms

    DEFF Research Database (Denmark)

    Sommerlund, Julie

    2006-01-01

    This paper describes the coexistence of two systems for classifying organisms and species: a dominant genetic system and an older naturalist system. The former classifies species and traces their evolution on the basis of genetic characteristics, while the latter employs physiological characteristics. The coexistence of the classification systems does not lead to a conflict between them. Rather, the systems seem to co-exist in different configurations, through which they are complementary, contradictory and inclusive in different situations, sometimes simultaneously. The systems come...

  15. Classifying injury narratives of large administrative databases for surveillance-A practical approach combining machine learning ensembles and human review.

    Science.gov (United States)

    Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R

    2017-01-01

    Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes single-word and bi-gram models, Support Vector Machine and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event leading to injury classifications for a large workers compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) of the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB-SW = NB-BIGRAM = SVM (filtering on agreement among the single-word Naïve Bayes, bi-gram Naïve Bayes and SVM predictions) had very high performance (0.93 overall sensitivity/positive predictive value) and high accuracy (i.e. high sensitivity and positive predictive values) across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporation of methods based on human-machine pairings such as
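
    A minimal sketch of the prediction-strength filtering idea described above, using a TF-IDF representation and regularized logistic regression from scikit-learn; the narratives, event codes and the 30% review fraction are illustrative placeholders rather than the study's data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical tiny corpus of injury narratives with event codes.
narratives = ["worker fell from ladder", "hand caught in press",
              "slipped on wet floor", "struck by falling box"]
codes = ["fall", "caught", "fall", "struck"]

vec = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(narratives), codes)

# Keep machine-assigned codes only for the most confident predictions and
# route the bottom 30% (weakest prediction strength) to manual review.
new = ["employee fell off scaffold", "crushed between rollers"]
proba = clf.predict_proba(vec.transform(new))
strength = proba.max(axis=1)
cutoff = np.quantile(strength, 0.30)
for text, pred, s in zip(new, clf.predict(vec.transform(new)), strength):
    decision = "machine code" if s >= cutoff else "manual review"
    print(f"{text!r}: {pred} ({s:.2f}) -> {decision}")
```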

  16. Using the TensorFlow Deep Neural Network to Classify Mainland China Visitor Behaviours in Hong Kong from Check-in Data

    Directory of Open Access Journals (Sweden)

    Shanshan Han

    2018-04-01

    Full Text Available Over the past decade, big data, including Global Positioning System (GPS) data, mobile phone tracking data and social media check-in data, have been widely used to analyse human movements and behaviours. Tourism management researchers have noted the potential of applying these data to study tourist behaviours, and many studies have shown that social media check-in data can provide new opportunities for extracting tourism activities and tourist behaviours. However, traditional methods may not be suitable for extracting comprehensive tourist behaviours due to the complexity and diversity of human behaviours. Studies have shown that deep neural networks have outpaced the abilities of human beings in many fields and that deep neural networks can be explained in a psychological manner. Thus, deep neural network methods can potentially be used to understand human behaviours. In this paper, a deep learning neural network constructed in TensorFlow is applied to classify Mainland China visitor behaviours in Hong Kong, and the characteristics of these visitors are analysed to verify the classification results. For the social science classification problem investigated in this study, the deep neural network classifier in TensorFlow provides better accuracy and more lucid visualisation than do traditional neural network methods, even for erratic classification rules. Furthermore, the results of this study reveal that TensorFlow has considerable potential for application in the human geography field.

  17. Propagation of New Innovations: An Approach to Classify Human Behavior and Movement from Available Social Network Data

    Science.gov (United States)

    Mahmud, Faisal; Samiul, Hasan

    2010-01-01

    It is interesting to observe new innovations, products, or ideas propagating into society. One important factor in this propagation is the role of an individual's social network, while another factor is the individual's activities. In this paper, an approach will be made to analyze the propagation of different ideas in a popular social network. Individuals' responses to different activities in the network will be analyzed. The properties of the network will also be investigated with respect to the successful propagation of innovations.

  18. Combine harvester monitor system based on wireless sensor network

    Science.gov (United States)

    A measurement method based on Wireless Sensor Network (WSN) was developed to monitor the working condition of combine harvester for remote application. Three JN5139 modules were chosen for sensor data acquisition and another two as a router and a coordinator, which could create a tree topology netwo...

  19. A simple network agreement-based approach for combining evidences in a heterogeneous sensor network

    Directory of Open Access Journals (Sweden)

    Raúl Eusebio-Grande

    2015-12-01

    Full Text Available In this research we investigate how the evidence provided by both static and mobile nodes that are part of a heterogeneous sensor network can be combined to obtain trustworthy results. A solution relying on a network agreement-based approach was implemented and tested.

  20. A case study of a precision fertilizer application task generation for wheat based on classified hyperspectral data from UAV combined with farm history data

    Science.gov (United States)

    Kaivosoja, Jere; Pesonen, Liisa; Kleemola, Jouko; Pölönen, Ilkka; Salo, Heikki; Honkavaara, Eija; Saari, Heikki; Mäkynen, Jussi; Rajala, Ari

    2013-10-01

    Different remote sensing methods for detecting variations in agricultural fields have been studied in the last two decades. There are already existing systems for planning and applying, e.g., nitrogen fertilizers to cereal crop fields. However, there are disadvantages such as high costs, adaptability, reliability, resolution aspects and final product dissemination. With unmanned aerial vehicle (UAV) based airborne methods, data collection can be performed cost-efficiently with the desired spatial and temporal resolutions, below clouds and under diverse weather conditions. A new Fabry-Perot interferometer based hyperspectral imaging technology implemented in a UAV has been introduced. In this research, we studied the possibilities of exploiting classified raster maps from hyperspectral data to produce a work task for a precision fertilizer application. The UAV flight campaign was performed in a wheat test field in Finland in the summer of 2012. Based on the campaign, we have classified raster maps estimating the biomass and nitrogen contents at approximately stage 34 on the Zadoks scale. We combined the classified maps with farm history data such as previous yield maps. Then we generalized the combined results and transformed them into a vectorized zonal task map suitable for farm machinery. We present the selected weights for each dataset in the processing chain and the resultant variable rate application (VRA) task. The additional fertilization according to the generated task was shown to be beneficial for the amount of yield. However, our study indicates that there are still many uncertainties within the process chain.

  1. A combined video and synchronous VSAT data network

    Science.gov (United States)

    Rowse, William

    Private Satellite Network currently operates Business Television networks for Fortune 500 companies. Several of these satellite-based networks, using VSAT technology, are combining the transmission of video with the broadcast of one-way data. This is made possible by use of the PSN Business Television Terminal which incorporates Scientific Atlanta's B-MAC system. In addition to providing high quality video, B-MAC can provide six channels of 204.5 kbs audio. Four of the six channels may be used to directly carry up to 19.2 kbs of asynchronous data or up to 56 kbs of synchronous data using circuitry jointly developed by PSN and Scientific Atlanta. The approach PSN has taken to provide one network customer in the financial industry with both video and broadcast data is described herein.

  2. Combining inferences from models of capture efficiency, detectability, and suitable habitat to classify landscapes for conservation of threatened bull trout

    Science.gov (United States)

    Peterson, J.; Dunham, J.B.

    2003-01-01

    Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult-to-sample species, and models of species presence may produce biased predictions. We present a Bayesian approach that combines sampling and model-based inferences for estimating species presence. The accuracy and cost-effectiveness of this approach were compared to those of sampling surveys and predictive models for estimating the presence of the threatened bull trout (Salvelinus confluentus) via simulation with existing models and empirical sampling data. Simulations indicated that a sampling-only approach would be the most effective and would result in the lowest presence and absence misclassification error rates for three thresholds of detection probability. When sampling effort was considered, however, the combined approach resulted in the lowest error rates per unit of sampling effort. Hence, lower probability-of-detection thresholds can be specified with the combined approach, resulting in lower misclassification error rates and improved cost-effectiveness.
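
    The abstract describes combining a model-based prediction with field sampling; a minimal sketch of that Bayesian update, assuming independent sampling passes with a constant per-pass detection probability (the function name and the numbers are hypothetical), is shown below.

```python
def posterior_presence(prior, detection_prob, n_passes, detected):
    """Combine a model-based prior probability of presence with sampling
    results, assuming independent passes with a constant per-pass detection
    probability (a sketch of the Bayesian updating idea, not the paper's code)."""
    if detected:
        return 1.0  # any detection confirms presence
    # P(no detection in n passes | present) = (1 - p)^n
    miss = (1.0 - detection_prob) ** n_passes
    return prior * miss / (prior * miss + (1.0 - prior))

# Hypothetical numbers: model-based prior of 0.6, three passes, 0.5 detection per pass.
print(round(posterior_presence(0.6, 0.5, 3, detected=False), 3))  # ~0.158
```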

  3. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
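
    A hedged sketch of the two-phase idea described above: a simple competitive learning pass places the RBF centres, and the RBF expansion is then fitted with ridge regression as a stand-in for the output-layer training (the data, learning rate, kernel width and regularization are assumed values, not those of the original study).

```python
import numpy as np
from sklearn.linear_model import Ridge

def scl_centres(X, n_centres=8, lr=0.05, epochs=50, seed=0):
    """Simple competitive learning: move only the winning centre towards each sample."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), n_centres, replace=False)].copy()
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(np.linalg.norm(centres - x, axis=1))
            centres[winner] += lr * (x - centres[winner])
    return centres

def rbf_features(X, centres, width=1.0):
    """Gaussian RBF activations of every sample with respect to every centre."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2 * width ** 2))

# Hypothetical descriptor matrix X and activity vector y.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=60)

centres = scl_centres(X)
model = Ridge(alpha=1e-3).fit(rbf_features(X, centres), y)
print("train R^2:", round(model.score(rbf_features(X, centres), y), 3))
```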

  4. New neural network classifier of fall-risk based on the Mahalanobis distance and kinematic parameters assessed by a wearable device

    International Nuclear Information System (INIS)

    Giansanti, Daniele; Macellari, Velio; Maccioni, Giovanni

    2008-01-01

    Fall prevention lacks easy, quantitative and wearable methods for the classification of fall-risk (FR). Efforts must be thus devoted to the choice of an ad hoc classifier both to reduce the size of the sample used to train the classifier and to improve performances. A new methodology that uses a neural network (NN) and a wearable device are hereby proposed for this purpose. The NN uses kinematic parameters assessed by a wearable device with accelerometers and rate gyroscopes during a posturography protocol. The training of the NN was based on the Mahalanobis distance and was carried out on two groups of 30 elderly subjects with varying fall-risk Tinetti scores. The validation was done on two groups of 100 subjects with different fall-risk Tinetti scores and showed that, both in terms of specificity and sensitivity, the NN performed better than other classifiers (naive Bayes, Bayes net, multilayer perceptron, support vector machines, statistical classifiers). In particular, (i) the proposed NN methodology improved the specificity and sensitivity by a mean of 3% when compared to the statistical classifier based on the Mahalanobis distance (SCMD) described in Giansanti (2006 Physiol. Meas. 27 1081–90); (ii) the assessed specificity was 97%, the assessed sensitivity was 98% and the area under receiver operator characteristics was 0.965. (note)
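
    For illustration, a minimal sketch of a Mahalanobis-distance classifier of the kind used as the statistical baseline in this comparison, applied to hypothetical kinematic features; the class structure and the synthetic data are assumptions, not the authors' implementation.

```python
import numpy as np

class MahalanobisClassifier:
    """Assign each sample to the class whose mean is closest in Mahalanobis distance."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        self.inv_cov_ = {c: np.linalg.pinv(np.cov(X[y == c], rowvar=False))
                         for c in self.classes_}
        return self

    def predict(self, X):
        def dist(x, c):
            d = x - self.means_[c]
            return float(d @ self.inv_cov_[c] @ d)
        return np.array([min(self.classes_, key=lambda c: dist(x, c)) for x in X])

# Hypothetical kinematic features (e.g., sway parameters) for low/high fall-risk groups.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (30, 5)), rng.normal(1.5, 1.0, (30, 5))])
y = np.array([0] * 30 + [1] * 30)
print("training accuracy:", (MahalanobisClassifier().fit(X, y).predict(X) == y).mean())
```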

  5. COMBINING PCA ANALYSIS AND ARTIFICIAL NEURAL NETWORKS IN MODELLING ENTREPRENEURIAL INTENTIONS OF STUDENTS

    Directory of Open Access Journals (Sweden)

    Marijana Zekić-Sušac

    2013-02-01

    Full Text Available Despite increased interest in the entrepreneurial intentions and career choices of young adults, reliable prediction models are yet to be developed. Two nonparametric methods were used in this paper to model entrepreneurial intentions: principal component analysis (PCA) and artificial neural networks (ANNs). PCA was used to perform feature extraction in the first stage of modelling, while artificial neural networks were used to classify students according to their entrepreneurial intentions in the second stage. Four modelling strategies were tested in order to find the most efficient model. The dataset was collected in an international survey on entrepreneurship self-efficacy and identity. Variables describe students’ demographics, education, attitudes, social and cultural norms, self-efficacy and other characteristics. The research reveals benefits from the combination of PCA and ANNs in modelling entrepreneurial intentions, and provides some ideas for further research.

  6. 76 FR 63811 - Structural Reforms To Improve the Security of Classified Networks and the Responsible Sharing and...

    Science.gov (United States)

    2011-10-13

    ... implementation of policies and minimum standards regarding information security, personnel security, and systems security; address both internal and external security threats and vulnerabilities; and provide policies and... policies and minimum standards will address all agencies that operate or access classified computer...

  7. Sustainability of Hydrogen Supply Chain. Part II: Prioritizing and Classifying the Sustainability of Hydrogen Supply Chains based on the Combination of Extension Theory and AHP

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Manzardo, Alessandro; Toniolo, Sara

    2013-01-01

    The purpose of this study is to develop a method for prioritizing and classifying the sustainability of hydrogen supply chains and assist decision-making for the stakeholders/decision-makers. Multiple criteria for sustainability assessment of hydrogen supply chains are considered and multiple...... decision-makers are allowed to participate in the decision-making using linguistic terms. In this study, extension theory and analytic hierarchy process are combined to rate the sustainability of hydrogen supply chains. The sustainability of hydrogen supply chains could be identified according...

  8. Selection of discriminant mid-infrared wavenumbers by combining a naïve Bayesian classifier and a genetic algorithm: Application to the evaluation of lignocellulosic biomass biodegradation.

    Science.gov (United States)

    Rammal, Abbas; Perrin, Eric; Vrabie, Valeriu; Assaf, Rabih; Fenniri, Hassan

    2017-07-01

    Infrared spectroscopy provides useful information on the molecular compositions of biological systems related to molecular vibrations, overtones, and combinations of fundamental vibrations. Mid-infrared (MIR) spectroscopy is sensitive to organic and mineral components and has attracted growing interest in the development of biomarkers related to intrinsic characteristics of lignocellulose biomass. However, not all spectral information is valuable for biomarker construction or for applying analysis methods such as classification. Better processing and interpretation can be achieved by identifying discriminating wavenumbers. The selection of wavenumbers has been addressed through several variable- or feature-selection methods. Some of them have not been adapted for use in large data sets or are difficult to tune, and others require additional information, such as concentrations. This paper proposes a new approach by combining a naïve Bayesian classifier with a genetic algorithm to identify discriminating spectral wavenumbers. The genetic algorithm uses a linear combination of an a posteriori probability and the Bayes error rate as the fitness function for optimization. Such a function allows the improvement of both the compactness and the separation of classes. This approach was tested to classify a small set of maize roots in soil according to their biodegradation process based on their MIR spectra. The results show that this optimization method allows better discrimination of the biodegradation process, compared with using the information of the entire MIR spectrum, the use of the spectral information at wavenumbers selected by a genetic algorithm based on a classical validity index or the use of the spectral information selected by combining a genetic algorithm with other methods, such as Linear Discriminant Analysis. The proposed method selects wavenumbers that correspond to principal vibrations of chemical functional groups of compounds that undergo degradation

  9. Accurate Natural Trail Detection Using a Combination of a Deep Neural Network and Dynamic Programming.

    Science.gov (United States)

    Adhikari, Shyam Prasad; Yang, Changju; Slot, Krzysztof; Kim, Hyongsuk

    2018-01-10

    This paper presents a vision sensor-based solution to the challenging problem of detecting and following trails in highly unstructured natural environments like forests, rural areas and mountains, using a combination of a deep neural network and dynamic programming. The deep neural network (DNN) concept has recently emerged as a very effective tool for processing vision sensor signals. A patch-based DNN is trained with supervised data to classify fixed-size image patches into "trail" and "non-trail" categories, and reshaped to a fully convolutional architecture to produce trail segmentation map for arbitrary-sized input images. As trail and non-trail patches do not exhibit clearly defined shapes or forms, the patch-based classifier is prone to misclassification, and produces sub-optimal trail segmentation maps. Dynamic programming is introduced to find an optimal trail on the sub-optimal DNN output map. Experimental results showing accurate trail detection for real-world trail datasets captured with a head mounted vision system are presented.
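
    The dynamic-programming step can be sketched as follows: given a per-pixel trail-probability map produced by the DNN, choose one column per row so that the summed probability is maximal while limiting the lateral shift between consecutive rows (the map size and shift bound below are illustrative assumptions, not the paper's parameters).

```python
import numpy as np

def optimal_trail_path(prob_map, max_shift=2):
    """Dynamic programming over a trail-probability map: pick one column per row
    so the summed probability is maximal while consecutive rows differ by at
    most `max_shift` columns (a sketch of the smoothing step described above)."""
    rows, cols = prob_map.shape
    score = np.full((rows, cols), -np.inf)
    back = np.zeros((rows, cols), dtype=int)
    score[0] = prob_map[0]
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - max_shift), min(cols, c + max_shift + 1)
            prev = np.argmax(score[r - 1, lo:hi]) + lo
            score[r, c] = prob_map[r, c] + score[r - 1, prev]
            back[r, c] = prev
    # Trace the best path back from the last row.
    path = [int(np.argmax(score[-1]))]
    for r in range(rows - 1, 0, -1):
        path.append(back[r, path[-1]])
    return path[::-1]

prob_map = np.random.rand(6, 8)  # hypothetical 6 x 8 DNN segmentation output
print(optimal_trail_path(prob_map))
```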

  10. Combined techniques for network measurements at accelerator facilities

    International Nuclear Information System (INIS)

    Pschorn, I.

    1999-01-01

    Usually network measurements at GSI (Gesellschaft für Schwerionenforschung) are carried out by employing the Leica tachymeter TC2002K etc. Due to time constraints and the fact that GSI possesses only one of these selected, high-precision total stations, it was suddenly necessary to think about employing a laser tracker as the major instrument for a reference network measurement. The idea was to compare the different instruments and to prove whether it is possible at all to carry out a precise network measurement using a laser tracker. In the end the SMX Tracker4500 combined with the Leica NA3000 was applied for network measurements at GSI, Darmstadt and at BESSY II, Berlin (both located in Germany). A few results are shown in the following chapters. A new technology in 3D metrology came up. Some ideas for applying these new tools in the field of accelerator measurements are given. Finally, aspects of calibration and checking the performance of the employed high precision instrument are pointed out in this paper. (author)

  11. Comparison of two neural network classifiers in the differential diagnosis of essential tremor and Parkinson's disease by {sup 123}I-FP-CIT brain SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Palumbo, Barbara [University of Perugia, Nuclear Medicine Section, Department of Surgical, Radiological and Odontostomatological Sciences, Ospedale S. Maria della Misericordia, Perugia (Italy); Fravolini, Mario Luca [University of Perugia, Department of Electronic and Information Engineering, Perugia (Italy); Nuvoli, Susanna; Spanu, Angela; Madeddu, Giuseppe [University of Sassari, Department of Nuclear Medicine, Sassari (Italy); Paulus, Kai Stephan [University of Sassari, Department of Neurology, Sassari (Italy); Schillaci, Orazio [University Tor Vergata, Department of Biopathology and Diagnostic Imaging, Rome (Italy); IRCSS Neuromed, Pozzilli (Italy)

    2010-11-15

    To contribute to the differentiation of Parkinson's disease (PD) and essential tremor (ET), we compared two different artificial neural network classifiers using 123I-FP-CIT SPECT data, a probabilistic neural network (PNN) and a classification tree (ClT). 123I-FP-CIT brain SPECT with semiquantitative analysis was performed in 216 patients: 89 with ET, 64 with PD with a Hoehn and Yahr (H and Y) score of ≤2 (early PD), and 63 with PD with a H and Y score of ≥2.5 (advanced PD). For each of the 1,000 experiments carried out, 108 patients were randomly selected as the PNN training set, while the remaining 108 validated the trained PNN, and the percentage of the validation data correctly classified in the three groups of patients was computed. The expected performance of an "average performance PNN" was evaluated. In analogy, for ClT 1,000 classification trees with similar structures were generated. For PNN, the probability of correct classification in patients with early PD was 81.9±8.1% (mean±SD), in patients with advanced PD 78.9±8.1%, and in ET patients 96.6±2.6%. For ClT, the first decision rule gave a mean value for the putamen of 5.99, which resulted in a probability of correct classification of 93.5±3.4%. This means that patients with putamen values >5.99 were classified as having ET, while patients with putamen values <5.99 were classified as having PD. Furthermore, if the caudate nucleus value was higher than 6.97 patients were classified as having early PD (probability 69.8±5.3%), and if the value was <6.97 patients were classified as having advanced PD (probability 88.1±8.8%). These results confirm that PNN achieved valid classification results. Furthermore, ClT provided reliable cut-off values able to differentiate ET and PD of different severities. (orig.)

  12. Comparison of two neural network classifiers in the differential diagnosis of essential tremor and Parkinson's disease by 123I-FP-CIT brain SPECT

    International Nuclear Information System (INIS)

    Palumbo, Barbara; Fravolini, Mario Luca; Nuvoli, Susanna; Spanu, Angela; Madeddu, Giuseppe; Paulus, Kai Stephan; Schillaci, Orazio

    2010-01-01

    To contribute to the differentiation of Parkinson's disease (PD) and essential tremor (ET), we compared two different artificial neural network classifiers using 123I-FP-CIT SPECT data, a probabilistic neural network (PNN) and a classification tree (ClT). 123I-FP-CIT brain SPECT with semiquantitative analysis was performed in 216 patients: 89 with ET, 64 with PD with a Hoehn and Yahr (H and Y) score of ≤2 (early PD), and 63 with PD with a H and Y score of ≥2.5 (advanced PD). For each of the 1,000 experiments carried out, 108 patients were randomly selected as the PNN training set, while the remaining 108 validated the trained PNN, and the percentage of the validation data correctly classified in the three groups of patients was computed. The expected performance of an "average performance PNN" was evaluated. In analogy, for ClT 1,000 classification trees with similar structures were generated. For PNN, the probability of correct classification in patients with early PD was 81.9±8.1% (mean±SD), in patients with advanced PD 78.9±8.1%, and in ET patients 96.6±2.6%. For ClT, the first decision rule gave a mean value for the putamen of 5.99, which resulted in a probability of correct classification of 93.5±3.4%. This means that patients with putamen values >5.99 were classified as having ET, while patients with putamen values <5.99 were classified as having PD. Furthermore, if the caudate nucleus value was higher than 6.97 patients were classified as having early PD (probability 69.8±5.3%), and if the value was <6.97 patients were classified as having advanced PD (probability 88.1±8.8%). These results confirm that PNN achieved valid classification results. Furthermore, ClT provided reliable cut-off values able to differentiate ET and PD of different severities. (orig.)

  13. Combining morphometric features and convolutional networks fusion for glaucoma diagnosis

    Science.gov (United States)

    Perdomo, Oscar; Arevalo, John; González, Fabio A.

    2017-11-01

    Glaucoma is an eye condition that leads to loss of vision and blindness. The ophthalmoscopy exam evaluates the shape, color and proportion between the optic disc and physiologic cup, but the lack of agreement among experts is still the main diagnosis problem. The application of deep convolutional neural networks combined with automatic extraction of features such as the cup-to-disc distance in the four quadrants, the perimeter, area, eccentricity, the major radius and the minor radius of the optic disc and cup, in addition to all the ratios among the previous parameters, may help with better automatic grading of glaucoma. This paper presents a strategy to merge morphological features and deep convolutional neural networks as a novel methodology to support glaucoma diagnosis in eye fundus images.

  14. Accurate Traffic Flow Prediction in Heterogeneous Vehicular Networks in an Intelligent Transport System Using a Supervised Non-Parametric Classifier

    Directory of Open Access Journals (Sweden)

    Hesham El-Sayed

    2018-05-01

    Full Text Available Heterogeneous vehicular networks (HETVNETs) evolve from vehicular ad hoc networks (VANETs), which allow vehicles to always be connected so as to obtain safety services within intelligent transportation systems (ITSs). The services and data provided by HETVNETs should be neither interrupted nor delayed. Therefore, Quality of Service (QoS) improvement of HETVNETs is one of the topics attracting the attention of researchers and the manufacturing community. Several methodologies and frameworks have been devised by researchers to address QoS-prediction service issues. In this paper, to improve QoS, we evaluate various traffic characteristics of HETVNETs and propose a new supervised learning model to capture knowledge on all possible traffic patterns. This model is a refinement of support vector machine (SVM) kernels with a radial basis function (RBF). The proposed model produces better results than SVMs, and outperforms other prediction methods used in a traffic context, as it has lower computational complexity and higher prediction accuracy.

  15. Promotion of active ageing combining sensor and social network data.

    Science.gov (United States)

    Bilbao, Aritz; Almeida, Aitor; López-de-Ipiña, Diego

    2016-12-01

    The increase of life expectancy in modern society has caused an increase in elderly population. Elderly people want to live independently in their home environment for as long as possible. However, as we age, our physical skills tend to worsen and our social circle tends to become smaller, something that often leads to a considerable decrease of both our physical and social activities. In this paper, we present an AAL framework developed within the SONOPA project, whose objective is to promote active ageing by combining a social network with information inferred using in-home sensors. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Combination of Bayesian Network and Overlay Model in User Modeling

    Directory of Open Access Journals (Sweden)

    Loc Nguyen

    2009-12-01

    Full Text Available The core of an adaptive system is the user model, containing personal information such as knowledge, learning styles, goals, etc., which is requisite for the personalized learning process. There are many modeling approaches, for example stereotype, overlay and plan recognition, but they do not provide a solid method for reasoning from the user model. This paper introduces a statistical method that combines Bayesian network and overlay modeling so that it is able to infer the user’s knowledge from evidence collected during the user’s learning process.

  17. Discrimination of soft tissues using laser-induced breakdown spectroscopy in combination with k nearest neighbors (kNN) and support vector machine (SVM) classifiers

    Science.gov (United States)

    Li, Xiaohui; Yang, Sibo; Fan, Rongwei; Yu, Xin; Chen, Deying

    2018-06-01

    In this paper, discrimination of soft tissues using laser-induced breakdown spectroscopy (LIBS) in combination with multivariate statistical methods is presented. Fresh pork fat, skin, ham, loin and tenderloin muscle tissues are manually cut into slices and ablated using a 1064 nm pulsed Nd:YAG laser. Discrimination analyses between fat, skin and muscle tissues, and further between highly similar ham, loin and tenderloin muscle tissues, are performed based on the LIBS spectra in combination with multivariate statistical methods, including principal component analysis (PCA), k nearest neighbors (kNN) classification, and support vector machine (SVM) classification. Performances of the discrimination models, including accuracy, sensitivity and specificity, are evaluated using 10-fold cross validation. The classification models are optimized to achieve best discrimination performances. The fat, skin and muscle tissues can be definitely discriminated using both kNN and SVM classifiers, with accuracy of over 99.83%, sensitivity of over 0.995 and specificity of over 0.998. The highly similar ham, loin and tenderloin muscle tissues can also be discriminated with acceptable performances. The best performances are achieved with SVM classifier using Gaussian kernel function, with accuracy of 76.84%, sensitivity of over 0.742 and specificity of over 0.869. The results show that the LIBS technique assisted with multivariate statistical methods could be a powerful tool for online discrimination of soft tissues, even for tissues of high similarity, such as muscles from different parts of the animal body. This technique could be used for discrimination of tissues suffering minor clinical changes, thus may advance the diagnosis of early lesions and abnormalities.
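
    A hedged sketch of the evaluation pipeline described above, assuming the LIBS spectra are rows of a matrix X: PCA-reduced spectra are classified with kNN and a Gaussian-kernel SVM and scored with 10-fold cross-validation (the synthetic data, component count and neighbour count are placeholders, not the paper's settings).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical LIBS spectra: 90 samples x 500 spectral channels, three tissue classes.
rng = np.random.default_rng(3)
X = rng.normal(size=(90, 500))
y = np.repeat(["fat", "skin", "muscle"], 30)

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM (Gaussian kernel)", SVC(kernel="rbf", gamma="scale"))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    scores = cross_val_score(pipe, X, y, cv=10)
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```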

  18. A Bayesian classifier for symbol recognition

    OpenAIRE

    Barrat , Sabine; Tabbone , Salvatore; Nourrissier , Patrick

    2007-01-01

    We present in this paper an original adaptation of Bayesian networks to the symbol recognition problem. More precisely, a descriptor combination method is presented which significantly improves the recognition rate compared to the recognition rates obtained by each descriptor alone. In this perspective, we use a simple Bayesian classifier, called naive Bayes. In fact, probabilistic graphical models, more spec...

  19. Synergy Maps: exploring compound combinations using network-based visualization.

    Science.gov (United States)

    Lewis, Richard; Guha, Rajarshi; Korcsmaros, Tamás; Bender, Andreas

    2015-01-01

    The phenomenon of super-additivity of biological response to compounds applied jointly, termed synergy, has the potential to provide many therapeutic benefits. Therefore, high throughput screening of compound combinations has recently received a great deal of attention. Large compound libraries and the feasibility of all-pairs screening can easily generate large, information-rich datasets. Previously, these datasets have been visualized using either a heat-map or a network approach; however, these visualizations only partially represent the information encoded in the dataset. A new visualization technique for pairwise combination screening data, termed "Synergy Maps", is presented. In a Synergy Map, information about the synergistic interactions of compounds is integrated with information about their properties (chemical structure, physicochemical properties, bioactivity profiles) to produce a single visualization. As a result the relationships between compound and combination properties may be investigated simultaneously, and thus may afford insight into the synergy observed in the screen. An interactive web app implementation, available at http://richlewis42.github.io/synergy-maps, has been developed for public use, which may find use in navigating and filtering larger scale combination datasets. This tool is applied to a recent all-pairs dataset of anti-malarials, tested against Plasmodium falciparum, and a preliminary analysis is given as an example, illustrating the disproportionate synergism of histone deacetylase inhibitors previously described in literature, as well as suggesting new hypotheses for future investigation. Synergy Maps improve the state of the art in compound combination visualization, by simultaneously representing individual compound properties and their interactions. The web-based tool allows straightforward exploration of combination data, and easier identification of correlations between compound properties and interactions.

  20. A Combined Approach to Classifying Land Surface Cover of Urban Domestic Gardens Using Citizen Science Data and High Resolution Image Analysis

    Directory of Open Access Journals (Sweden)

    Fraser Baker

    2018-03-01

    Full Text Available Domestic gardens are an important component of cities, contributing significantly to urban green infrastructure (GI) and its associated ecosystem services. However, domestic gardens are incredibly heterogeneous, which presents challenges for quantifying their GI contribution and associated benefits for sustainable urban development. This study applies an innovative methodology that combines citizen science data with high resolution image analysis to create a garden dataset in the case study city of Manchester, UK. An online Citizen Science Survey (CSS) collected estimates of proportional coverage for 10 garden land surface types from 1031 city residents. High resolution image analysis was conducted to validate the CSS estimates, and to classify 7 land surface cover categories for all garden parcels in the city. Validation of the CSS land surface estimations revealed a mean accuracy of 76.63% (s = 15.24%), demonstrating that citizens are able to provide valid estimates of garden surface coverage proportions. An Object Based Image Analysis (OBIA) classification achieved an estimated overall accuracy of 82%, with further processing required to classify shadow objects. CSS land surface estimations were then extrapolated across the entire classification through calculation of within-image class proportions, to provide the proportional coverage of 10 garden land surface types (buildings, hard impervious surfaces, hard pervious surfaces, bare soil, trees, shrubs, mown grass, rough grass, cultivated land, water) within every garden parcel in the city. The final dataset provides a better understanding of the composition of GI in domestic gardens and how this varies across the city. An average garden in Manchester has 50.23% GI, including trees (16.54%), mown grass (14.46%), shrubs (9.19%), cultivated land (7.62%), rough grass (1.97%) and water (0.45%). At the city scale, Manchester has 49.0% GI, and around one fifth (20.94%) of this GI is contained within domestic

  1. Fingerprint prediction using classifier ensembles

    CSIR Research Space (South Africa)

    Molale, P

    2011-11-01

    Full Text Available ); logistic discrimination (LgD), k-nearest neighbour (k-NN), artificial neural network (ANN), association rules (AR) decision tree (DT), naive Bayes classifier (NBC) and the support vector machine (SVM). The performance of several multiple classifier systems...

  2. Selection combining for noncoherent decode-and-forward relay networks

    Directory of Open Access Journals (Sweden)

    Nguyen Ha

    2011-01-01

    Full Text Available This paper studies a new decode-and-forward relaying scheme for a cooperative wireless network composed of one source, K relays, and one destination and with binary frequency-shift keying modulation. A single threshold is employed to select retransmitting relays as follows: a relay retransmits to the destination if its decision variable is larger than the threshold; otherwise, it remains silent. The destination then performs selection combining for the detection of transmitted information. The average end-to-end bit-error-rate (BER) is analytically determined in a closed-form expression. Based on the derived BER, the problem of choosing an optimal threshold or jointly optimal threshold and power allocation to minimize the end-to-end BER is also investigated. Both analytical and simulation results reveal that the obtained optimal threshold scheme or jointly optimal threshold and power-allocation scheme can significantly improve the BER performance compared to a previously proposed scheme.

  3. Comparison of Two Classifiers; K-Nearest Neighbor and Artificial Neural Network, for Fault Diagnosis on a Main Engine Journal-Bearing

    Directory of Open Access Journals (Sweden)

    A. Moosavian

    2013-01-01

    Full Text Available Vibration analysis is an accepted method in condition monitoring of machines, since it can provide useful and reliable information about machine working condition. This paper surveys a new scheme for fault diagnosis of main journal-bearings of an internal combustion (IC) engine based on the power spectral density (PSD) technique and two classifiers, namely, K-nearest neighbor (KNN) and artificial neural network (ANN). Vibration signals for three different conditions of the journal-bearing (normal, oil starvation and extreme wear fault) were acquired from an IC engine. PSD was applied to process the vibration signals. Thirty features were extracted from the PSD values of the signals as a feature source for fault diagnosis. KNN and ANN were trained with the training data set and then used as diagnostic classifiers. Variable K value and hidden neuron count (N) were used in the range of 1 to 20, with a step size of 1, for KNN and ANN to gain the best classification results. The roles of the PSD, KNN and ANN techniques were studied. From the results, it is shown that the performance of ANN is better than KNN. The experimental results demonstrate that the proposed diagnostic method can reliably separate different fault conditions in the main journal-bearings of an IC engine.
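
    A minimal sketch of the feature-extraction and comparison step described above: Welch power spectral densities of vibration segments are averaged into band-power features and fed to KNN and a small MLP standing in for the ANN (the sampling rate, segment length, band count and network size are assumed values).

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

FS = 10_000  # hypothetical sampling rate of the vibration sensor (Hz)

def psd_features(segment, n_bands=30):
    """Welch PSD of one vibration segment, averaged into equal-width bands."""
    _, pxx = welch(segment, fs=FS, nperseg=1024)
    return np.array([band.mean() for band in np.array_split(pxx, n_bands)])

# Hypothetical segments for three bearing conditions (normal, oil starvation, wear).
rng = np.random.default_rng(4)
segments = rng.normal(size=(90, 4096))
labels = np.repeat([0, 1, 2], 30)
X = np.array([psd_features(s) for s in segments])

knn = KNeighborsClassifier(n_neighbors=5)
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
print("KNN accuracy:", cross_val_score(knn, X, labels, cv=5).mean())
print("ANN accuracy:", cross_val_score(ann, X, labels, cv=5).mean())
```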

  4. Opening up the blackbox: an interpretable deep neural network-based classifier for cell-type specific enhancer predictions.

    Science.gov (United States)

    Kim, Seong Gon; Theera-Ampornpunt, Nawanol; Fang, Chih-Hao; Harwani, Mrudul; Grama, Ananth; Chaterji, Somali

    2016-08-01

    Gene expression is mediated by specialized cis-regulatory modules (CRMs), the most prominent of which are called enhancers. Early experiments indicated that enhancers located far from the gene promoters are often responsible for mediating gene transcription. Knowing their properties, regulatory activity, and genomic targets is crucial to the functional understanding of cellular events, ranging from cellular homeostasis to differentiation. Recent genome-wide investigation of epigenomic marks has indicated that enhancer elements could be enriched for certain epigenomic marks, such as, combinatorial patterns of histone modifications. Our efforts in this paper are motivated by these recent advances in epigenomic profiling methods, which have uncovered enhancer-associated chromatin features in different cell types and organisms. Specifically, in this paper, we use recent state-of-the-art Deep Learning methods and develop a deep neural network (DNN)-based architecture, called EP-DNN, to predict the presence and types of enhancers in the human genome. It uses as features, the expression levels of the histone modifications at the peaks of the functional sites as well as in its adjacent regions. We apply EP-DNN to four different cell types: H1, IMR90, HepG2, and HeLa S3. We train EP-DNN using p300 binding sites as enhancers, and TSS and random non-DHS sites as non-enhancers. We perform EP-DNN predictions to quantify the validation rate for different levels of confidence in the predictions and also perform comparisons against two state-of-the-art computational models for enhancer predictions, DEEP-ENCODE and RFECS. We find that EP-DNN has superior accuracy and takes less time to make predictions. Next, we develop methods to make EP-DNN interpretable by computing the importance of each input feature in the classification task. This analysis indicates that the important histone modifications were distinct for different cell types, with some overlaps, e.g., H3K27ac was

  5. Combining morphological analysis and Bayesian networks for strategic decision support

    Directory of Open Access Journals (Sweden)

    A de Waal

    2007-12-01

    Full Text Available Morphological analysis (MA) and Bayesian networks (BN) are two closely related modelling methods, each of which has its advantages and disadvantages for strategic decision support modelling. MA is a method for defining, linking and evaluating problem spaces. BNs are graphical models which consist of a qualitative and a quantitative part. The qualitative part is a cause-and-effect, or causal, graph. The quantitative part depicts the strength of the causal relationships between variables. Combining MA and BN, as two phases in a modelling process, allows us to gain the benefits of both of these methods. The strength of MA lies in defining, linking and internally evaluating the parameters of problem spaces, while BN modelling allows for the definition and quantification of causal relationships between variables. Short summaries of MA and BN are provided in this paper, followed by discussions of how these two computer-aided methods may be combined to better facilitate modelling procedures. A simple example is presented, concerning a recent application in the field of environmental decision support.

  6. Mapping Robinia Pseudoacacia Forest Health Conditions by Using Combined Spectral, Spatial, and Textural Information Extracted from IKONOS Imagery and Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2015-07-01

    Full Text Available The textural and spatial information extracted from very high resolution (VHR) remote sensing imagery provides complementary information for applications in which the spectral information is not sufficient for identification of spectrally similar landscape features. In this study grey-level co-occurrence matrix (GLCM) textures and a local statistical analysis Getis statistic (Gi), computed from IKONOS multispectral (MS) imagery acquired from the Yellow River Delta in China, along with a random forest (RF) classifier, were used to discriminate Robinia pseudoacacia tree health levels. Specifically, eight GLCM texture features (mean, variance, homogeneity, dissimilarity, contrast, entropy, angular second moment, and correlation) were first calculated from the IKONOS NIR band (Band 4) to determine an optimal window size (13 × 13) and an optimal direction (45°). Then, the optimal window size and direction were applied to the three other IKONOS MS bands (blue, green, and red) for calculating the eight GLCM textures. Next, an optimal distance value (5) and an optimal neighborhood rule (Queen’s case) were determined for calculating the four Gi features from the four IKONOS MS bands. Finally, different RF classification results of the three forest health conditions were created: (1) an overall accuracy (OA) of 79.5% produced using the four MS band reflectances only; (2) an OA of 97.1% created with the eight GLCM features calculated from IKONOS Band 4 with the optimal window size of 13 × 13 and direction 45°; (3) an OA of 93.3% created with all 32 GLCM features calculated from the four IKONOS MS bands with a window size of 13 × 13 and direction of 45°; (4) an OA of 94.0% created using the four Gi features calculated from the four IKONOS MS bands with the optimal distance value of 5 and Queen’s neighborhood rule; and (5) an OA of 96.9% created with the combined 16 spectral (four), spatial (four), and textural (eight) features. The most important feature ranked by RF
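
    A hedged sketch of the per-window GLCM feature extraction with scikit-image and a random forest, using the reported 13 x 13 window and 45-degree direction; the grey-level quantisation, the entropy computed directly from the normalised matrix, and the synthetic windows are assumptions for illustration only.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(window, levels=32):
    """GLCM texture features for one image window at the 45-degree direction."""
    # Quantise the window to a small number of grey levels.
    edges = np.linspace(window.min(), window.max() + 1e-9, levels)
    q = (np.digitize(window, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
    feats = [graycoprops(glcm, prop)[0, 0] for prop in props]
    p = glcm[:, :, 0, 0]
    feats.append(-(p[p > 0] * np.log2(p[p > 0])).sum())  # GLCM entropy
    return np.array(feats)

# Hypothetical 13 x 13 NIR windows for three health levels, plus a random forest.
rng = np.random.default_rng(5)
windows = rng.integers(0, 255, size=(60, 13, 13)).astype(float)
labels = np.repeat([0, 1, 2], 20)
X = np.array([glcm_features(w) for w in windows])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("training accuracy:", rf.score(X, labels))
```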

  7. Natural and Unnatural Oil Layers on the Surface of the Gulf of Mexico Detected and Quantified in Synthetic Aperture RADAR Images with Texture Classifying Neural Network Algorithms

    Science.gov (United States)

    MacDonald, I. R.; Garcia-Pineda, O. G.; Morey, S. L.; Huffer, F.

    2011-12-01

    Effervescent hydrocarbons rise naturally from hydrocarbon seeps in the Gulf of Mexico and reach the ocean surface. This oil forms thin (~0.1 μm) layers that enhance specular reflectivity and have been widely used to quantify the abundance and distribution of natural seeps using synthetic aperture radar (SAR). An analogous process occurred at a vastly greater scale for oil and gas discharged from BP's Macondo well blowout. SAR data allow direct comparison of the areas of the ocean surface covered by oil from natural sources and the discharge. We used a texture classifying neural network algorithm to quantify the areas of naturally occurring oil-covered water in 176 SAR image collections from the Gulf of Mexico obtained between May 1997 and November 2007, prior to the blowout. Separately we also analyzed 36 SAR image collections obtained between 26 April and 30 July, 2010 while the discharged oil was visible in the Gulf of Mexico. For the naturally occurring oil, we removed pollution events and transient oceanographic effects by including only the reflectance anomalies that recurred in the same locality over multiple images. We measured the area of oil layers in a grid of 10x10 km cells covering the entire Gulf of Mexico. Floating oil layers were observed in only a fraction of the total Gulf area amounting to 1.22x10^5 km^2. In a bootstrap sample of 2000 replications, the combined average area of these layers was 7.80x10^2 km^2 (sd 86.03). For a regional comparison, we divided the Gulf of Mexico into four quadrates along 90° W longitude, and 25° N latitude. The NE quadrate, where the BP discharge occurred, received on average 7.0% of the total natural seepage in the Gulf of Mexico (5.24 x10^2 km^2, sd 21.99); the NW quadrate received on average 68.0% of this total (5.30 x10^2 km^2, sd 69.67). The BP blowout occurred in the NE quadrate of the Gulf of Mexico; discharged oil that reached the surface drifted over a large area north of 25° N. Performing a

  8. Composite Classifiers for Automatic Target Recognition

    National Research Council Canada - National Science Library

    Wang, Lin-Cheng

    1998-01-01

    ...) using forward-looking infrared (FLIR) imagery. Two existing classifiers, one based on learning vector quantization and the other on modular neural networks, are used as the building blocks for our composite classifiers...

  9. Combination of support vector machine, artificial neural network and random forest for improving the classification of convective and stratiform rain using spectral features of SEVIRI data

    Science.gov (United States)

    Lazri, Mourad; Ameur, Soltane

    2018-05-01

    A model combining three classifiers, namely support vector machine, artificial neural network and random forest (SAR), is designed for improving the classification of convective and stratiform rain. This model (SAR model) has been trained and then tested on datasets derived from MSG-SEVIRI (Meteosat Second Generation-Spinning Enhanced Visible and Infrared Imager). Well-classified, mid-classified and misclassified pixels are determined from the combination of the three classifiers. Mid-classified and misclassified pixels, which are considered unreliable pixels, are reclassified by using a novel training of the developed scheme. In this novel training, only the input data corresponding to the pixels in question are used. This whole process is repeated a second time and applied to mid-classified and misclassified pixels separately. Learning and validation of the developed scheme are realized against co-located data observed by ground radar. The developed scheme outperformed the different classifiers used separately and reached an overall classification accuracy of 97.40%.
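
    A minimal sketch of the agreement-based idea described above is given below, assuming placeholder arrays `X_train`, `y_train` and `X_test` for SEVIRI spectral features and radar-derived rain classes; it illustrates the approach and is not the authors' implementation.

```python
# Sketch: combine SVM, ANN and RF, flag pixels where the three disagree, and re-train on those cases.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

def fit_ensemble(X, y):
    return [SVC(kernel="rbf").fit(X, y),
            MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000).fit(X, y),
            RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)]

def votes_of(models, X):
    return np.stack([m.predict(X) for m in models])        # shape (3, n_pixels)

models = fit_ensemble(X_train, y_train)

# Split the training pixels by agreement: unanimous and correct = well-classified, otherwise uncertain.
train_votes = votes_of(models, X_train)
uncertain = ~(train_votes == train_votes[0]).all(axis=0) | (train_votes[0] != y_train)

# Second pass: re-train only on the pixels in question, echoing the SAR scheme's novel training step.
models_2 = fit_ensemble(X_train[uncertain], y_train[uncertain])

# At prediction time, keep unanimous votes and defer the remaining pixels to the second-pass ensemble.
test_votes = votes_of(models, X_test)
unanimous = (test_votes == test_votes[0]).all(axis=0)
y_pred = np.where(unanimous, test_votes[0], votes_of(models_2, X_test)[2])
```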

  10. Inferring the lithology of borehole rocks by applying neural network classifiers to downhole logs: an example from the Ocean Drilling Program

    Science.gov (United States)

    Benaouda, D.; Wadge, G.; Whitmarsh, R. B.; Rothwell, R. G.; MacLeod, C.

    1999-02-01

    In boreholes with partial or no core recovery, interpretations of lithology in the remainder of the hole are routinely attempted using data from downhole geophysical sensors. We present a practical neural net-based technique that greatly enhances lithological interpretation in holes with partial core recovery by using downhole data to train classifiers to give a global classification scheme for those parts of the borehole for which no core was retrieved. We describe the system and its underlying methods of data exploration, selection and classification, and present a typical example of the system in use. Although the technique is equally applicable to oil industry boreholes, we apply it here to an Ocean Drilling Program (ODP) borehole (Hole 792E, Izu-Bonin forearc, a mixture of volcaniclastic sandstones, conglomerates and claystones). The quantitative benefits of quality-control measures and different subsampling strategies are shown. Direct comparisons between a number of discriminant analysis methods and the use of neural networks with back-propagation of error are presented. The neural networks perform better than the discriminant analysis techniques both in terms of performance rates with test data sets (2-3 per cent better) and in qualitative correlation with non-depth-matched core. We illustrate with the Hole 792E data how vital it is to have a system that permits the number and membership of training classes to be changed as analysis proceeds. The initial classification for Hole 792E evolved from a five-class to a three-class and then to a four-class scheme with resultant classification performance rates for the back-propagation neural network method of 83, 84 and 93 per cent respectively.
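
    The comparison reported above (discriminant analysis versus a back-propagation network on log measurements) can be sketched with scikit-learn as follows; `logs` and `litho` are hypothetical arrays standing in for depth-matched sensor readings and core-derived lithology classes.

```python
# Sketch: compare discriminant analysis with a back-propagation network on downhole-log features.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(logs, litho, test_size=0.3, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X_train, y_train)

print("discriminant analysis accuracy:", lda.score(X_test, y_test))
print("back-propagation MLP accuracy :", mlp.score(X_test, y_test))
```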

  11. LCC: Light Curves Classifier

    Science.gov (United States)

    Vo, Martin

    2017-08-01

    Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering for visualizing the natural separation of the sample. The package can also be used for automatic tuning of the parameters of the methods used (for example, the number of hidden neurons or the binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information of queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and the command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifiers and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

  12. Combining morphological analysis and Bayesian Networks for strategic decision support

    CSIR Research Space (South Africa)

    De Waal, AJ

    2007-12-01

    Full Text Available Morphological analysis (MA) and Bayesian networks (BN) are two closely related modelling methods, each of which has its advantages and disadvantages for strategic decision support modelling. MA is a method for defining, linking and evaluating...

  13. Output-feedback control of combined sewer networks through receding horizon control with moving horizon estimation

    OpenAIRE

    Joseph-Duran, Bernat; Ocampo-Martinez, Carlos; Cembrano, Gabriela

    2015-01-01

    An output-feedback control strategy for pollution mitigation in combined sewer networks is presented. The proposed strategy provides means to apply model-based predictive control to large-scale sewer networks, in spite of the lack of measurements at most of the network sewers. In previous works, the authors presented a hybrid linear control-oriented model for sewer networks together with the formulation of Optimal Control Problems (OCP) and State Estimation Problems (SEP). By iteratively solv...

  14. [Rapid Identification of Epicarpium Citri Grandis via Infrared Spectroscopy and Fluorescence Spectrum Imaging Technology Combined with Neural Network].

    Science.gov (United States)

    Pan, Sha-sha; Huang, Fu-rong; Xiao, Chi; Xian, Rui-yi; Ma, Zhi-guo

    2015-10-01

    To explore rapid and reliable methods for the detection of Epicarpium citri grandis (ECG), an experiment was carried out using Fourier transform attenuated total reflection infrared spectroscopy (FTIR/ATR) and fluorescence spectrum imaging technology, each combined with multilayer perceptron (MLP) neural network pattern recognition, for the identification of ECG, and the two methods were compared. Infrared spectra and fluorescence spectral images of 118 samples, 81 ECG samples and 37 samples of other kinds, were collected. According to the differences in the spectra, the spectral data in the 550-1 800 cm(-1) wavenumber range and the 400-720 nm wavelength range were taken as the objects of the discriminant analysis. Principal component analysis (PCA) was then applied to reduce the dimensionality of the ECG spectroscopic data, and an MLP neural network was used in combination to classify them. The effects of different data preprocessing methods on the model were compared: multiplicative scatter correction (MSC), standard normal variate correction (SNV), first-order derivative (FD), second-order derivative (SD) and Savitzky-Golay (SG) smoothing. The results showed that, after Savitzky-Golay (SG) pretreatment of the infrared spectral data, the MLP neural network with a sigmoid hidden-layer function gave the best discrimination of ECG, with correct classification rates of 100% for both the training and testing sets. For the fluorescence spectral imaging technology, multiplicative scatter correction (MSC) gave the most suitable pretreatment results. After data preprocessing, a three-layer MLP neural network with a sigmoid hidden-layer function achieved 100% correct classification for the training set and 96.7% for the testing set. It was shown that FTIR/ATR and fluorescence spectral imaging technology combined with an MLP neural network can be used for the identification of ECG, and that the approach is rapid and reliable.
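
    A minimal sketch of the reported pipeline (Savitzky-Golay pretreatment, PCA dimension reduction, then an MLP with a sigmoid hidden layer) might look as follows; `spectra` and `is_ecg` are placeholder arrays, and the window length, number of components and layer size are illustrative choices rather than the study's settings.

```python
# Sketch: Savitzky-Golay pretreatment -> PCA -> MLP with a sigmoid (logistic) hidden layer.
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

X = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)   # smooth each spectrum

model = make_pipeline(PCA(n_components=10),
                      MLPClassifier(hidden_layer_sizes=(10,), activation="logistic", max_iter=2000))

X_tr, X_te, y_tr, y_te = train_test_split(X, is_ecg, test_size=0.3, random_state=0)
model.fit(X_tr, y_tr)
print("training accuracy:", model.score(X_tr, y_tr))
print("testing accuracy :", model.score(X_te, y_te))
```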

  15. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier

    Science.gov (United States)

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research intends to propose a fully automatic algorithm for the classification of three-dimensional (3-D) optical coherence tomography (OCT) scans, distinguishing patients suffering from abnormal macula from normal candidates. The proposed method does not require any denoising, segmentation, or retinal alignment processes to assess the intraretinal layers, abnormalities, or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, which consists of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets are used for evaluation of the algorithm based on the unbiased fivefold cross-validation (CV) approach. The first set constitutes 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured from the Topcon device. The second publicly available set consists of 45 subjects with a distribution of 15 patients in age-related macular degeneration, DME, and normal classes from the Heidelberg device. With the application of the algorithm on overall OCT volumes and 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset 1 as a two-class classification problem and 98.67% on dataset 2 as a three-class classification task.

  16. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier.

    Science.gov (United States)

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research intends to propose a fully automatic algorithm for the classification of three-dimensional (3-D) optical coherence tomography (OCT) scans, distinguishing patients suffering from abnormal macula from normal candidates. The proposed method does not require any denoising, segmentation, or retinal alignment processes to assess the intraretinal layers, abnormalities, or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, which consists of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets are used for evaluation of the algorithm based on the unbiased fivefold cross-validation (CV) approach. The first set constitutes 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured from the Topcon device. The second publicly available set consists of 45 subjects with a distribution of 15 patients in age-related macular degeneration, DME, and normal classes from the Heidelberg device. With the application of the algorithm on overall OCT volumes and 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset 1 as a two-class classification problem and 98.67% on dataset 2 as a three-class classification task. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  17. Constraint satisfaction adaptive neural network and heuristics combined approaches for generalized job-shop scheduling.

    Science.gov (United States)

    Yang, S; Wang, D

    2000-01-01

    This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, one of the NP-complete constraint satisfaction problems. The proposed neural network can be easily constructed and can adaptively adjust its connection weights and unit biases based on the sequence and resource constraints of the job-shop scheduling problem during its processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, while the heuristic algorithms are used to improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to the quality of solutions and the solving speed.

  18. Improving predictions of protein-protein interfaces by combining amino acid-specific classifiers based on structural and physicochemical descriptors with their weighted neighbor averages.

    Directory of Open Access Journals (Sweden)

    Fábio R de Moraes

    Full Text Available Protein-protein interactions are involved in nearly all regulatory processes in the cell and are considered one of the most important issues in molecular biology and pharmaceutical sciences but are still not fully understood. Structural and computational biology contributed greatly to the elucidation of the mechanism of protein interactions. In this paper, we present a collection of the physicochemical and structural characteristics that distinguish interface-forming residues (IFR) from free surface residues (FSR). We formulated a linear discriminative analysis (LDA) classifier to assess whether chosen descriptors from the BlueStar STING database (http://www.cbi.cnptia.embrapa.br/SMS/) are suitable for such a task. Receiver operating characteristic (ROC) analysis indicates that the particular physicochemical and structural descriptors used for building the linear classifier perform much better than a random classifier and in fact, successfully outperform some of the previously published procedures, whose performance indicators were recently compared by other research groups. The results presented here show that the selected set of descriptors can be utilized to predict IFRs, even when homologue proteins are missing (particularly important for orphan proteins where no homologue is available for comparative analysis/indication) or, when certain conformational changes accompany interface formation. The development of amino acid type specific classifiers is shown to increase IFR classification performance. Also, we found that the addition of an amino acid conservation attribute did not improve the classification prediction. This result indicates that the increase in predictive power associated with amino acid conservation is exhausted by adequate use of an extensive list of independent physicochemical and structural parameters that, by themselves, fully describe the nano-environment at protein-protein interfaces. The IFR classifier developed in this study

  19. Improving predictions of protein-protein interfaces by combining amino acid-specific classifiers based on structural and physicochemical descriptors with their weighted neighbor averages.

    Science.gov (United States)

    de Moraes, Fábio R; Neshich, Izabella A P; Mazoni, Ivan; Yano, Inácio H; Pereira, José G C; Salim, José A; Jardine, José G; Neshich, Goran

    2014-01-01

    Protein-protein interactions are involved in nearly all regulatory processes in the cell and are considered one of the most important issues in molecular biology and pharmaceutical sciences but are still not fully understood. Structural and computational biology contributed greatly to the elucidation of the mechanism of protein interactions. In this paper, we present a collection of the physicochemical and structural characteristics that distinguish interface-forming residues (IFR) from free surface residues (FSR). We formulated a linear discriminative analysis (LDA) classifier to assess whether chosen descriptors from the BlueStar STING database (http://www.cbi.cnptia.embrapa.br/SMS/) are suitable for such a task. Receiver operating characteristic (ROC) analysis indicates that the particular physicochemical and structural descriptors used for building the linear classifier perform much better than a random classifier and in fact, successfully outperform some of the previously published procedures, whose performance indicators were recently compared by other research groups. The results presented here show that the selected set of descriptors can be utilized to predict IFRs, even when homologue proteins are missing (particularly important for orphan proteins where no homologue is available for comparative analysis/indication) or, when certain conformational changes accompany interface formation. The development of amino acid type specific classifiers is shown to increase IFR classification performance. Also, we found that the addition of an amino acid conservation attribute did not improve the classification prediction. This result indicates that the increase in predictive power associated with amino acid conservation is exhausted by adequate use of an extensive list of independent physicochemical and structural parameters that, by themselves, fully describe the nano-environment at protein-protein interfaces. The IFR classifier developed in this study is now

  20. Improving Predictions of Protein-Protein Interfaces by Combining Amino Acid-Specific Classifiers Based on Structural and Physicochemical Descriptors with Their Weighted Neighbor Averages

    Science.gov (United States)

    de Moraes, Fábio R.; Neshich, Izabella A. P.; Mazoni, Ivan; Yano, Inácio H.; Pereira, José G. C.; Salim, José A.; Jardine, José G.; Neshich, Goran

    2014-01-01

    Protein-protein interactions are involved in nearly all regulatory processes in the cell and are considered one of the most important issues in molecular biology and pharmaceutical sciences but are still not fully understood. Structural and computational biology contributed greatly to the elucidation of the mechanism of protein interactions. In this paper, we present a collection of the physicochemical and structural characteristics that distinguish interface-forming residues (IFR) from free surface residues (FSR). We formulated a linear discriminative analysis (LDA) classifier to assess whether chosen descriptors from the BlueStar STING database (http://www.cbi.cnptia.embrapa.br/SMS/) are suitable for such a task. Receiver operating characteristic (ROC) analysis indicates that the particular physicochemical and structural descriptors used for building the linear classifier perform much better than a random classifier and in fact, successfully outperform some of the previously published procedures, whose performance indicators were recently compared by other research groups. The results presented here show that the selected set of descriptors can be utilized to predict IFRs, even when homologue proteins are missing (particularly important for orphan proteins where no homologue is available for comparative analysis/indication) or, when certain conformational changes accompany interface formation. The development of amino acid type specific classifiers is shown to increase IFR classification performance. Also, we found that the addition of an amino acid conservation attribute did not improve the classification prediction. This result indicates that the increase in predictive power associated with amino acid conservation is exhausted by adequate use of an extensive list of independent physicochemical and structural parameters that, by themselves, fully describe the nano-environment at protein-protein interfaces. The IFR classifier developed in this study is now

  1. Combining complex networks and data mining: Why and how

    Science.gov (United States)

    Zanin, M.; Papo, D.; Sousa, P. A.; Menasalvas, E.; Nicchi, A.; Kubik, E.; Boccaletti, S.

    2016-05-01

    The increasing power of computer technology does not dispense with the need to extract meaningful information out of data sets of ever growing size, and indeed typically exacerbates the complexity of this task. To tackle this general problem, two methods have emerged, at chronologically different times, that are now commonly used in the scientific community: data mining and complex network theory. Not only do complex network analysis and data mining share the same general goal, that of extracting information from complex systems to ultimately create a new compact quantifiable representation, but they also often address similar problems. In the face of that, a surprisingly low number of researchers turn out to resort to both methodologies. One may then be tempted to conclude that these two fields are either largely redundant or totally antithetic. The starting point of this review is that this state of affairs should be put down to contingent rather than conceptual differences, and that these two fields can in fact advantageously be used in a synergistic manner. An overview of both fields is first provided, some fundamental concepts of which are illustrated. A variety of contexts in which complex network theory and data mining have been used in a synergistic manner are then presented. Contexts in which the appropriate integration of complex network metrics can lead to improved classification rates with respect to classical data mining algorithms and, conversely, contexts in which data mining can be used to tackle important issues in complex network theory applications are illustrated. Finally, ways to achieve a tighter integration between complex networks and data mining, and open lines of research are discussed.

  2. Prototype real-time baseband signal combiner. [deep space network

    Science.gov (United States)

    Howard, L. D.

    1980-01-01

    The design and performance of a prototype real-time baseband signal combiner, used to enhance the received Voyager 2 spacecraft signals during the Jupiter flyby, is described. Hardware delay paths, operating programs, and firmware are discussed.

  3. A systems biology-based classifier for hepatocellular carcinoma diagnosis.

    Directory of Open Access Journals (Sweden)

    Yanqiong Zhang

    Full Text Available AIM: The diagnosis of hepatocellular carcinoma (HCC) in the early stage is crucial to the application of curative treatments which are the only hope for increasing the life expectancy of patients. Recently, several large-scale studies have shed light on this problem through analysis of gene expression profiles to identify markers correlated with HCC progression. However, those marker sets shared few genes in common and were poorly validated using independent data. Therefore, we developed a systems biology based classifier by combining the differential gene expression with topological features of human protein interaction networks to enhance the ability of HCC diagnosis. METHODS AND RESULTS: In the Oncomine platform, genes differentially expressed in HCC tissues relative to their corresponding normal tissues were filtered by a corrected Q value cut-off and Concept filters. The identified genes that are common to different microarray datasets were chosen as the candidate markers. Then, their networks were analyzed by GeneGO Meta-Core software and the hub genes were chosen. After that, an HCC diagnostic classifier was constructed by Partial Least Squares modeling based on the microarray gene expression data of the hub genes. Validations of diagnostic performance showed that this classifier had high predictive accuracy (85.88∼92.71%) and area under the ROC curve (approximating 1.0), and that the network topological features integrated into this classifier contribute greatly to improving the predictive performance. Furthermore, it has been demonstrated that this modeling strategy is not only applicable to HCC, but also to other cancers. CONCLUSION: Our analysis suggests that the systems biology-based classifier that combines the differential gene expression and topological features of human protein interaction network may enhance the diagnostic performance of the HCC classifier.
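
    In outline, a Partial Least Squares classifier over hub-gene expression can be sketched as below; `expr` and `is_tumour` are placeholder arrays, scikit-learn's PLSRegression is used as a stand-in for the original modeling software, and the 0.5 threshold is an illustrative choice.

```python
# Sketch: a PLS-based two-class classifier over hub-gene expression, in the spirit of the HCC model.
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

X_tr, X_te, y_tr, y_te = train_test_split(expr, is_tumour, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
scores = pls.predict(X_te).ravel()            # continuous PLS response per sample
y_pred = (scores >= 0.5).astype(int)          # threshold the response to obtain class labels
print("accuracy:", (y_pred == y_te).mean())
```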

  4. Assessment of the predictive accuracy of five in silico prediction tools, alone or in combination, and two metaservers to classify long QT syndrome gene mutations.

    Science.gov (United States)

    Leong, Ivone U S; Stuckey, Alexander; Lai, Daniel; Skinner, Jonathan R; Love, Donald R

    2015-05-13

    Long QT syndrome (LQTS) is an autosomal dominant condition predisposing to sudden death from malignant arrhythmia. Genetic testing identifies many missense single nucleotide variants of uncertain pathogenicity. Establishing genetic pathogenicity is an essential prerequisite to family cascade screening. Many laboratories use in silico prediction tools, either alone or in combination, or metaservers, in order to predict pathogenicity; however, their accuracy in the context of LQTS is unknown. We evaluated the accuracy of five in silico programs and two metaservers in the analysis of LQTS 1-3 gene variants. The in silico tools SIFT, PolyPhen-2, PROVEAN, SNPs&GO and SNAP, either alone or in all possible combinations, and the metaservers Meta-SNP and PredictSNP, were tested on 312 KCNQ1, KCNH2 and SCN5A gene variants that have previously been characterised by either in vitro or co-segregation studies as either "pathogenic" (283) or "benign" (29). The accuracy, sensitivity, specificity and Matthews Correlation Coefficient (MCC) were calculated to determine the best combination of in silico tools for each LQTS gene, and when all genes are combined. The best combination of in silico tools for KCNQ1 is PROVEAN, SNPs&GO and SIFT (accuracy 92.7%, sensitivity 93.1%, specificity 100% and MCC 0.70). The best combination of in silico tools for KCNH2 is SIFT and PROVEAN or PROVEAN, SNPs&GO and SIFT. Both combinations have the same scores for accuracy (91.1%), sensitivity (91.5%), specificity (87.5%) and MCC (0.62). In the case of SCN5A, SNAP and PROVEAN provided the best combination (accuracy 81.4%, sensitivity 86.9%, specificity 50.0%, and MCC 0.32). When all three LQT genes are combined, SIFT, PROVEAN and SNAP is the combination with the best performance (accuracy 82.7%, sensitivity 83.0%, specificity 80.0%, and MCC 0.44). Both metaservers performed better than the single in silico tools; however, they did not perform better than the best performing combination of in silico
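
    For reference, the four performance measures quoted above follow directly from the confusion-matrix counts; the sketch below uses illustrative counts, not the study's actual tallies.

```python
# Sketch: accuracy, sensitivity, specificity and MCC from illustrative confusion-matrix counts.
import math

tp, fn, tn, fp = 263, 20, 29, 0           # illustrative counts, not the study's actual tallies

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
mcc = (tp * tn - fp * fn) / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
print(f"acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f} MCC={mcc:.2f}")
```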

  5. Bayes classifiers for imbalanced traffic accidents datasets.

    Science.gov (United States)

    Mujalli, Randa Oqab; López, Griselda; Garach, Laura

    2016-03-01

    Traffic accident data sets are usually imbalanced, where the number of instances classified under the killed or severe injuries class (minority) is much lower than the number classified under the slight injuries class (majority). This, however, poses a challenging problem for classification algorithms and may yield a model that covers the slight injuries instances well while frequently misclassifying the killed or severe injuries instances. Based on traffic accident data collected on urban and suburban roads in Jordan over three years (2009-2011), three different data balancing techniques were used: under-sampling, which removes some instances of the majority class; oversampling, which creates new instances of the minority class; and a mixed technique that combines both. In addition, different Bayes classifiers were compared for the different imbalanced and balanced data sets: Averaged One-Dependence Estimators, Weightily Averaged One-Dependence Estimators, and Bayesian networks, in order to identify factors that affect the severity of an accident. The results indicated that using the balanced data sets, especially those created using oversampling techniques, with Bayesian networks improved the classification of a traffic accident according to its severity and reduced the misclassification of killed and severe injuries instances. On the other hand, the following variables were found to contribute to the occurrence of a killed casualty or a severe injury in a traffic accident: number of vehicles involved, accident pattern, number of directions, accident type, lighting, surface condition, and speed limit. This work, to the knowledge of the authors, is the first that aims at analyzing historical data records for traffic accidents occurring in Jordan and the first to apply balancing techniques to analyze the injury severity of traffic accidents. Copyright © 2015 Elsevier Ltd. All rights reserved.
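
    A minimal sketch of the oversampling step described above, followed by a Bayes classifier, is shown below; `X` (integer-encoded accident attributes) and `y` (severity labels) are placeholders, and scikit-learn's CategoricalNB merely stands in for the AODE and Bayesian-network models used in the study.

```python
# Sketch: random oversampling of the minority (killed/severe) class before fitting a Bayes classifier.
import numpy as np
from sklearn.naive_bayes import CategoricalNB

rng = np.random.default_rng(0)
minority = y == "killed_or_severe"
n_extra = (~minority).sum() - minority.sum()             # copies needed to balance the two classes
extra_idx = rng.choice(np.flatnonzero(minority), size=n_extra, replace=True)

X_bal = np.vstack([X, X[extra_idx]])                     # X holds integer-encoded categorical attributes
y_bal = np.concatenate([y, y[extra_idx]])

clf = CategoricalNB().fit(X_bal, y_bal)
```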

  6. Combining neural networks and genetic algorithms for hydrological flow forecasting

    Science.gov (United States)

    Neruda, Roman; Srejber, Jan; Neruda, Martin; Pascenko, Petr

    2010-05-01

    We present a neural network approach to rainfall-runoff modeling for small size river basins based on several time series of hourly measured data. Different neural networks are considered for short time runoff predictions (from one to six hours lead time) based on runoff and rainfall data observed in previous time steps. Correlation analysis shows that runoff data, short time rainfall history, and aggregated API values are the most significant data for the prediction. Neural models of multilayer perceptron and radial basis function networks with different numbers of units are used and compared with more traditional linear time series predictors. Out of a possible 48 hours of relevant history of all the input variables, the most important ones are selected by means of input filters created by a genetic algorithm. The genetic algorithm works with a population of binary encoded vectors defining input selection patterns. Standard genetic operators of two-point crossover, random bit-flipping mutation, and tournament selection were used. The evaluation of the objective function of each individual consists of several rounds of building and testing a particular neural network model. The whole procedure is rather computationally exacting (taking hours to days on a desktop PC); thus, a high-performance mainframe computer has been used for our experiments. Results based on two years' worth of data from the Ploucnice river in Northern Bohemia suggest that the main problems connected with this approach to modeling are overtraining, which can lead to poor generalization, and the relatively small number of extreme events, which makes it difficult for a model to predict the amplitude of an event. Thus, experiments with both absolute and relative runoff predictions were carried out. In general, it can be concluded that the neural models show about a 5 per cent improvement in terms of the efficiency coefficient over linear models. Multilayer perceptrons with one hidden layer trained by back propagation algorithm and
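
    The genetic input-selection loop described above can be sketched as follows; `X_lags` (48 lagged input variables) and `runoff` are placeholder arrays, and the population size, number of generations and mutation rate are illustrative rather than the values used in the study.

```python
# Sketch: a genetic algorithm that selects which lagged inputs feed the runoff network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
N_BITS, POP, GENS = 48, 20, 10

def fitness(mask):
    """Cross-validated skill of a small MLP using only the lags switched on in the binary mask."""
    if mask.sum() == 0:
        return -np.inf
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=500)
    return cross_val_score(model, X_lags[:, mask.astype(bool)], runoff, cv=3, scoring="r2").mean()

def tournament(pop, fit):
    i, j = rng.integers(len(pop), size=2)
    return pop[i] if fit[i] >= fit[j] else pop[j]

pop = rng.integers(0, 2, size=(POP, N_BITS))
for _ in range(GENS):
    fit = np.array([fitness(m) for m in pop])
    children = []
    while len(children) < POP:
        a, b = tournament(pop, fit), tournament(pop, fit)
        c1, c2 = sorted(rng.integers(N_BITS, size=2))       # two-point crossover
        child = np.concatenate([a[:c1], b[c1:c2], a[c2:]])
        flip = rng.random(N_BITS) < 0.02                    # random bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected input lags:", np.flatnonzero(best))
```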

  7. Single and combined fault diagnosis of reciprocating compressor valves using a hybrid deep belief network

    NARCIS (Netherlands)

    Tran, Van Tung; Thobiani, Faisal Al; Tinga, Tiedo; Ball, Andrew David; Niu, Gang

    2017-01-01

    In this paper, a hybrid deep belief network is proposed to diagnose single and combined faults of suction and discharge valves in a reciprocating compressor. This hybrid integrates the deep belief network structured by multiple stacked restricted Boltzmann machines for pre-training and simplified

  8. Combining epidemiological and genetic networks signifies the importance of early treatment in HIV-1 transmission.

    Science.gov (United States)

    Zarrabi, Narges; Prosperi, Mattia; Belleman, Robert G; Colafigli, Manuela; De Luca, Andrea; Sloot, Peter M A

    2012-01-01

    Inferring disease transmission networks is important in epidemiology in order to understand and prevent the spread of infectious diseases. Reconstruction of the infection transmission networks requires insight into viral genome data as well as social interactions. For the HIV-1 epidemic, current research either uses genetic information of patients' virus to infer the past infection events or uses statistics of sexual interactions to model the network structure of viral spreading. Methods for a reliable reconstruction of HIV-1 transmission dynamics, taking into account both molecular and societal data, are still lacking. The aim of this study is to combine information from both genetic and epidemiological scales to characterize and analyse a transmission network of the HIV-1 epidemic in central Italy. We introduce a novel filter-reduction method to build a network of HIV-infected patients based on their social and treatment information. The network is then combined with a genetic network, to infer a hypothetical infection transmission network. We apply this method to a cohort study of HIV-1-infected patients in central Italy and find that patients who are highly connected in the network have longer untreated infection periods. We also find that the network structures for homosexual males and heterosexual populations are heterogeneous, consisting of a majority of 'peripheral nodes' that have only a few sexual interactions and a minority of 'hub nodes' that have many sexual interactions. Inferring HIV-1 transmission networks using this novel combined approach reveals remarkable correlations between high out-degree individuals and longer untreated infection periods. These findings signify the importance of early treatment and support the potential benefit of wide population screening, management of early diagnoses and anticipated antiretroviral treatment to prevent viral transmission and spread. The approach presented here for reconstructing HIV-1 transmission networks

  9. Wavelet classifier used for diagnosing shock absorbers in cars

    Directory of Open Access Journals (Sweden)

    Janusz GARDULSKI

    2007-01-01

    Full Text Available The paper discusses some commonly used methods of hydraulic absorber testing. Disadvantages of the methods are described. A vibro-acoustic method is presented and recommended for practical use on existing test rigs. The method is based on continuous wavelet analysis combined with a neural classifier, a 25-neuron, one-way, three-layer back-propagation network. The analysis satisfies the intended aim.

  10. Principal Component Analysis Coupled with Artificial Neural Networks—A Combined Technique Classifying Small Molecular Structures Using a Concatenated Spectral Database

    Directory of Open Access Journals (Sweden)

    Mihail Lucian Birsa

    2011-10-01

    Full Text Available In this paper we present several expert systems that predict the class identity of the modeled compounds, based on a preprocessed spectral database. The expert systems were built using Artificial Neural Networks (ANN) and are designed to predict if an unknown compound has the toxicological activity of amphetamines (stimulant and hallucinogen), or whether it is a nonamphetamine. In attempts to circumvent the laws controlling drugs of abuse, new chemical structures are very frequently introduced on the black market. They are obtained by slightly modifying the controlled molecular structures by adding or changing substituents at various positions on the banned molecules. As a result, no substance similar to those forming a prohibited class may be used nowadays, even if it has not been specifically listed. Therefore, reliable, fast and accessible systems capable of modeling and then identifying similarities at molecular level are highly needed for epidemiological, clinical, and forensic purposes. In order to obtain the expert systems, we have preprocessed a concatenated spectral database, representing the GC-FTIR (gas chromatography-Fourier transform infrared spectrometry) and GC-MS (gas chromatography-mass spectrometry) spectra of 103 forensic compounds. The database was used as input for a Principal Component Analysis (PCA). The scores of the forensic compounds on the main principal components (PCs) were then used as inputs for the ANN systems. We have built eight PC-ANN systems (principal component analysis coupled with artificial neural network) with a different number of input variables: 15 PCs, 16 PCs, 17 PCs, 18 PCs, 19 PCs, 20 PCs, 21 PCs and 22 PCs. The best expert system was found to be the ANN network built with 18 PCs, which accounts for an explained variance of 77%. This expert system has the best sensitivity (a rate of classification C = 100% and a rate of true positives TP = 100%), as well as a good selectivity (a rate of true negatives TN
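
    The model-selection step described above (trying ANN inputs of 15 to 22 principal components from the concatenated spectra) can be sketched as follows; `ftir`, `ms` and `cls` are placeholder arrays and the cross-validation setup is an illustrative simplification.

```python
# Sketch: concatenate GC-FTIR and GC-MS spectra, then try ANN inputs of 15-22 principal components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X = np.hstack([ftir, ms])                      # concatenated spectral database
for n_pc in range(15, 23):
    pca = PCA(n_components=n_pc).fit(X)
    scores = pca.transform(X)                  # PC scores become the ANN inputs
    acc = cross_val_score(MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000),
                          scores, cls, cv=5).mean()
    print(f"{n_pc} PCs: explained variance {pca.explained_variance_ratio_.sum():.0%}, "
          f"CV accuracy {acc:.2f}")
```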

  11. Problem-Solving Skills among Precollege Students in Clinical Immunology and Microbiology: Classifying Strategies with a Rubric and Artificial Neural Network Technology.

    Science.gov (United States)

    Kanowith-Klein, Susan; Stave, Mel; Stevens, Ron; Casillas, Adrian M.

    2001-01-01

    Investigates methods for classifying problem solving strategies of high school students who studied infectious and non-infectious diseases by using a software system that can generate a picture of students' strategies in solving problems. (Contains 24 references.) (Author/YDS)

  12. Combining PubMed knowledge and EHR data to develop a weighted bayesian network for pancreatic cancer prediction.

    Science.gov (United States)

    Zhao, Di; Weng, Chunhua

    2011-10-01

    In this paper, we propose a novel method that combines PubMed knowledge and Electronic Health Records to develop a weighted Bayesian Network Inference (BNI) model for pancreatic cancer prediction. We selected 20 common risk factors associated with pancreatic cancer and used PubMed knowledge to weigh the risk factors. A keyword-based algorithm was developed to extract and classify PubMed abstracts into three categories that represented positive, negative, or neutral associations between each risk factor and pancreatic cancer. Then we designed a weighted BNI model by adding the normalized weights into a conventional BNI model. We used this model to extract the EHR values for patients with or without pancreatic cancer, which then enabled us to calculate the prior probabilities for the 20 risk factors in the BNI. The software iDiagnosis was designed to use this weighted BNI model for predicting pancreatic cancer. In an evaluation using a case-control dataset, the weighted BNI model significantly outperformed the conventional BNI and two other classifiers (k-Nearest Neighbor and Support Vector Machine). We conclude that the weighted BNI using PubMed knowledge and EHR data shows remarkable accuracy improvement over existing representative methods for pancreatic cancer prediction. Copyright © 2011 Elsevier Inc. All rights reserved.
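
    A heavily simplified sketch of the weighting idea is given below: per-factor PubMed abstract counts are turned into normalised weights that scale each factor's contribution to a patient-level score; all names and numbers are hypothetical and the score is only a stand-in for the full weighted Bayesian network inference.

```python
# Sketch: literature-derived weights applied to per-factor evidence; a simplified stand-in for the
# weighted Bayesian Network Inference described above.
import numpy as np

# Hypothetical counts of abstracts classified positive / negative for two of the 20 risk factors.
abstract_counts = {"smoking": (120, 15), "diabetes": (80, 30)}
raw = {f: (pos - neg) / (pos + neg) for f, (pos, neg) in abstract_counts.items()}
total = sum(abs(v) for v in raw.values())
weights = {f: v / total for f, v in raw.items()}          # normalised literature weights

# Hypothetical EHR-derived conditional probabilities of observing each factor with / without cancer.
p_given_cancer = {"smoking": 0.60, "diabetes": 0.40}
p_given_healthy = {"smoking": 0.30, "diabetes": 0.20}

def weighted_score(patient_factors):
    """Sum of literature-weighted log-likelihood ratios over the factors present in the record."""
    return sum(weights[f] * np.log(p_given_cancer[f] / p_given_healthy[f]) for f in patient_factors)

print(weighted_score({"smoking", "diabetes"}))
```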

  13. Quantum ensembles of quantum classifiers.

    Science.gov (United States)

    Schuld, Maria; Petruccione, Francesco

    2018-02-09

    Quantum machine learning witnesses an increasing amount of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which - similar to Bayesian learning - the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighed according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.

  14. Combining region- and network-level brain-behavior relationships in a structural equation model.

    Science.gov (United States)

    Bolt, Taylor; Prince, Emily B; Nomi, Jason S; Messinger, Daniel; Llabre, Maria M; Uddin, Lucina Q

    2018-01-15

    Brain-behavior associations in fMRI studies are typically restricted to a single level of analysis: either a circumscribed brain region-of-interest (ROI) or a larger network of brain regions. However, this common practice may not always account for the interdependencies among ROIs of the same network or potentially unique information at the ROI-level, respectively. To account for both sources of information, we combined measurement and structural components of structural equation modeling (SEM) approaches to empirically derive networks from ROI activity, and to assess the association of both individual ROIs and their respective whole-brain activation networks with task performance using three large task-fMRI datasets and two separate brain parcellation schemes. The results for working memory and relational tasks revealed that well-known ROI-performance associations are either non-significant or reversed when accounting for the ROI's common association with its corresponding network, and that the network as a whole is instead robustly associated with task performance. The results for the arithmetic task revealed that in certain cases, an ROI can be robustly associated with task performance, even when accounting for its associated network. The SEM framework described in this study provides researchers additional flexibility in testing brain-behavior relationships, as well as a principled way to combine ROI- and network-levels of analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Classifier fusion for VoIP attacks classification

    Science.gov (United States)

    Safarik, Jakub; Rezac, Filip

    2017-05-01

    SIP is one of the most successful protocols in the field of IP telephony communication; it establishes and manages VoIP calls. As the number of SIP implementations rises, we can expect a higher number of attacks on the communication system in the near future. This work aims at malicious SIP traffic classification. A number of machine learning algorithms have been developed for attack classification. The paper presents a comparison of current research and the use of a classifier fusion method leading to a potential decrease in the classification error rate. Combining classifiers yields a more robust solution that avoids the difficulties that may affect single algorithms. Different voting schemes, combination rules, and classifiers are discussed to improve the overall performance. All classifiers have been trained on real malicious traffic. The traffic monitoring concept depends on a network of honeypot nodes. These honeypots run in several networks spread across different locations. The separation of the honeypots allows us to gain independent and trustworthy attack information.
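
    A minimal fusion sketch with a hard-voting combination of three classifiers is shown below; `X_train`, `y_train`, `X_test` and `y_test` are placeholders for honeypot-derived SIP traffic features and attack labels, and the chosen base classifiers are illustrative.

```python
# Sketch: majority-vote fusion of several classifiers trained on SIP attack traffic features.
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

fusion = VotingClassifier(
    estimators=[("tree", DecisionTreeClassifier()),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("rf", RandomForestClassifier(n_estimators=100))],
    voting="hard")                     # simple majority vote; "soft" would average class probabilities

fusion.fit(X_train, y_train)
print("fused accuracy:", fusion.score(X_test, y_test))
```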

  16. A sequential Monte Carlo model of the combined GB gas and electricity network

    International Nuclear Information System (INIS)

    Chaudry, Modassar; Wu, Jianzhong; Jenkins, Nick

    2013-01-01

    A Monte Carlo model of the combined GB gas and electricity network was developed to determine the reliability of the energy infrastructure. The model integrates the gas and electricity network into a single sequential Monte Carlo simulation. The model minimises the combined costs of the gas and electricity network, these include gas supplies, gas storage operation and electricity generation. The Monte Carlo model calculates reliability indices such as loss of load probability and expected energy unserved for the combined gas and electricity network. The intention of this tool is to facilitate reliability analysis of integrated energy systems. Applications of this tool are demonstrated through a case study that quantifies the impact on the reliability of the GB gas and electricity network given uncertainties such as wind variability, gas supply availability and outages to energy infrastructure assets. Analysis is performed over a typical midwinter week on a hypothesised GB gas and electricity network in 2020 that meets European renewable energy targets. The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. The results highlight the value of greater gas storage facilities in enhancing the reliability of the GB energy system given various energy uncertainties. -- Highlights: •A Monte Carlo model of the combined GB gas and electricity network was developed. •Reliability indices are calculated for the combined GB gas and electricity system. •The efficacy of doubling GB gas storage capacity on reliability of the energy system is assessed. •Integrated reliability indices could be used to assess the impact of investment in energy assets
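
    The reliability indices mentioned above can be illustrated with a toy sequential Monte Carlo loop over hourly supply and demand; all distributions and capacities below are invented placeholders and bear no relation to the study's GB network model.

```python
# Sketch: loss-of-load probability (LOLP) and expected energy unserved (EENS) from a toy
# sequential Monte Carlo run over hourly demand and available supply; all numbers illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_RUNS, HOURS = 500, 168                               # 500 simulated midwinter weeks

lolp_hours, eens = 0, 0.0
for _ in range(N_RUNS):
    demand = rng.normal(55.0, 5.0, HOURS)              # GW, hypothetical electricity demand
    wind = rng.weibull(2.0, HOURS) * 6.0               # GW, variable wind output
    thermal = 50.0 * (rng.random(HOURS) > 0.05)        # GW, with random generator outages
    shortfall = np.maximum(demand - (wind + thermal), 0.0)
    lolp_hours += (shortfall > 0).sum()
    eens += shortfall.sum()

print(f"LOLP = {lolp_hours / (N_RUNS * HOURS):.4f}, EENS = {eens / N_RUNS:.1f} GWh per week")
```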

  17. Research on Large-Scale Road Network Partition and Route Search Method Combined with Traveler Preferences

    Directory of Open Access Journals (Sweden)

    De-Xin Yu

    2013-01-01

    Full Text Available Combined with an improved Pallottino parallel algorithm, this paper proposes a large-scale route search method that considers travelers’ route choice preferences and decomposes the urban road network effectively into multiple layers. Utilizing generalized travel time as the road impedance function, the method builds a new multilayer, multitasking road network data storage structure with an object-oriented class definition. The proposed path search algorithm is then verified using the real road network of Guangzhou city as an example. Through sensitivity experiments, we compare the proposed path search method with current advanced optimal path algorithms. The results demonstrate that the proposed method can increase road network search efficiency by more than 16% under different search proportion requests, node numbers, and computing process numbers. Therefore, this method is a great breakthrough in the field of urban road network guidance.

  18. A proposal to define when combined cycle can be classified as a cogeneration plants; Proposta di definizione di impianto di cogenerazione a ciclo combinato

    Energy Technology Data Exchange (ETDEWEB)

    Macchi, E. [Milan Politecnico, Milan (Italy)

    1999-09-01

    The recent decree on liberalization of the Italian electric market assigns to the authority for electric energy and natural gas the task of defining under which conditions a combined heat and power plant (CHP) obtains a significant primary energy saving when compared to separate productions. The present paper outlines and discusses the proposal made by a working group of CTI (Italian thermo-technical committee). The most significant features of the proposal are the following: i) the use of IRE (energy saving index), based upon net annual energy production certified by independent institutions; ii) the adoption of an automatic procedure for yearly updating of the reference performance of conventional power generation, accounting for technology evolution; iii) the assumption of a lower limit for the thermal/fuel energy ratio; and iv) correction procedures in case of use of non-conventional fuels (municipal wastes, process gases, etc.) [Italian] The recent decree on the liberalization of the electricity market requires the Authority for Electricity and Gas to define the conditions under which a plant for the combined production of electricity and heat guarantees a significant energy saving with respect to separate production. This note describes and comments on an operational proposal put forward by a working group of the Comitato Termotecnico Italiano (CTI). The characterizing elements of the proposal are: i) reference to the IRE index (primary energy saving index), evaluated on net annual performance, on an actual basis and certified by independent bodies; ii) the introduction of an automatic mechanism for the annual revision of the comparison parameters for separate generation that takes account of technological evolution; iii) the introduction of a lower limit on the ratio between useful thermal energy generation and the energy input with the fuel; and iv) the inclusion of

  19. A combined Bodian-Nissl stain for improved network analysis in neuronal cell culture.

    Science.gov (United States)

    Hightower, M; Gross, G W

    1985-11-01

    Bodian and Nissl procedures were combined to stain dissociated mouse spinal cord cells cultured on coverslips. The Bodian technique stains fine neuronal processes in great detail as well as an intracellular fibrillar network concentrated around the nucleus and in proximal neurites. The Nissl stain clearly delimits neuronal cytoplasm in somata and in large dendrites. A combination of these techniques allows the simultaneous depiction of neuronal perikarya and all afferent and efferent processes. Costaining with little background staining by either procedure suggests high specificity for neurons. This procedure could be exploited for routine network analysis of cultured neurons.

  20. Classification of protein fold classes by knot theory and prediction of folds by neural networks: A combined theoretical and experimental approach

    DEFF Research Database (Denmark)

    Ramnarayan, K.; Bohr, Henrik; Jalkanen, Karl J.

    2008-01-01

    We present different means of classifying protein structure. One is made rigorous by mathematical knot invariants that coincide reasonably well with ordinary graphical fold classification and another classification is by packing analysis. Furthermore, when constructing our mathematical fold classifications, we utilize standard neural network methods for predicting protein fold classes from amino acid sequences. We also make an analysis of the redundancy of the structural classifications in relation to function and ligand binding. Finally, we advocate the use of combining the measurement of the VA...

  1. Modeling the future evolution of the virtual water trade network: A combination of network and gravity models

    Science.gov (United States)

    Sartori, Martina; Schiavo, Stefano; Fracasso, Andrea; Riccaboni, Massimo

    2017-12-01

    The paper investigates how the topological features of the virtual water (VW) network and the size of the associated VW flows are likely to change over time, under different socio-economic and climate scenarios. We combine two alternative models of network formation (a stochastic and a fitness model, used to describe the structure of VW flows) with a gravity model of trade to predict the intensity of each bilateral flow. This combined approach is superior to existing methodologies in its ability to replicate the observed features of VW trade. The insights from the models are used to forecast future VW flows in 2020 and 2050, under different climatic scenarios, and compare them with future water availability. Results suggest that the current trend of VW exports is not sustainable for all countries. Moreover, our approach highlights that some VW importers might be exposed to "imported water stress" as they rely heavily on imports from countries whose water use is unsustainable.
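
    The gravity component can be sketched as a simple log-linear flow equation; the coefficients and country figures below are purely illustrative, not the estimates from the paper.

```python
# Sketch: a gravity-style prediction of a bilateral virtual-water flow once the network model has
# decided that a link exists; coefficients and inputs are purely illustrative.
import numpy as np

def gravity_flow(gdp_i, gdp_j, distance_ij, beta=(1.0, 0.8, 0.7, -1.1)):
    """log F_ij = b0 + b1*log GDP_i + b2*log GDP_j + b3*log d_ij (plus an error term in estimation)."""
    b0, b1, b2, b3 = beta
    return np.exp(b0 + b1 * np.log(gdp_i) + b2 * np.log(gdp_j) + b3 * np.log(distance_ij))

print(gravity_flow(gdp_i=2.0e12, gdp_j=5.0e11, distance_ij=7000.0))
```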

  2. Classifying Wheat Hyperspectral Pixels of Healthy Heads and Fusarium Head Blight Disease Using a Deep Neural Network in the Wild Field

    Directory of Open Access Journals (Sweden)

    Xiu Jin

    2018-03-01

    Full Text Available Rapid, non-destructive classification of healthy and diseased wheat heads, as required for early diagnosis of Fusarium head blight disease, is difficult. Our work applies a deep neural network classification algorithm to the pixels of hyperspectral images to accurately discern the disease area. The spectra of hyperspectral image pixels in a manually selected region of interest are preprocessed via mean removal to eliminate interference due to the time interval and the environment. The generalization of the classification model is considered, and two improvements are made to the model framework. First, the pixel spectra data are reshaped into a two-dimensional data structure for the input layer of a Convolutional Neural Network (CNN). After training two types of CNNs, the assessment shows that a two-dimensional CNN model is more efficient than a one-dimensional CNN. Second, a hybrid neural network with a convolutional layer and bidirectional recurrent layer is reconstructed to improve the generalization of the model. When considering the characteristics of the dataset and models, the confusion matrices that are based on the testing dataset indicate that the classification model is effective for background and disease classification of hyperspectral image pixels. The results of the model show that the two-dimensional convolutional bidirectional gated recurrent unit neural network (2D-CNN-BidGRU) has an F1 score and accuracy of 0.75 and 0.743, respectively, for the total testing dataset. A comparison of all the models shows that the hybrid neural network of 2D-CNN-BidGRU is the best at preventing over-fitting and optimizing generalization. Our results illustrate that the hybrid structure deep neural network is an excellent algorithm for classifying healthy and Fusarium head blight diseased wheat heads in hyperspectral imagery.
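
    A compact PyTorch sketch of a convolutional plus bidirectional-GRU classifier over reshaped pixel spectra is given below; the layer sizes, the 16 × 16 reshaping and the class count are illustrative assumptions, not the architecture reported in the paper.

```python
# Sketch: reshape each pixel spectrum into a 2-D patch and push it through a small
# convolutional + bidirectional-GRU classifier, loosely following the 2D-CNN-BidGRU idea.
import torch
import torch.nn as nn

class CNNBidGRU(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))
        self.gru = nn.GRU(input_size=8 * 8, hidden_size=32, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 32, n_classes)

    def forward(self, x):                       # x: (batch, 1, 16, 16) reshaped pixel spectra
        h = self.conv(x)                        # (batch, 8, 8, 8) after pooling
        seq = h.permute(0, 2, 1, 3).flatten(2)  # treat the 8 rows as a sequence of 64-dim vectors
        out, _ = self.gru(seq)                  # bidirectional hidden states, (batch, 8, 64)
        return self.fc(out[:, -1])              # classify from the last step of the sequence

# A 256-band pixel spectrum would be reshaped into a 16 x 16 "image" before entering the network.
model = CNNBidGRU()
dummy = torch.randn(4, 1, 16, 16)
print(model(dummy).shape)                       # torch.Size([4, 2])
```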

  3. Disrupting Cocaine Trafficking Networks: Interdicting a Combined Social-Functional Network Model

    Science.gov (United States)

    2016-03-01

    ...(data to maintain appropriate classification levels) of cocaine produced each month by the Colombian sources to the U.S. homeland... Tactical interdiction-centric operational approaches have improved over the years due to previous studies and research, but these approaches rely upon one

  4. Material basis of Chinese herbal formulas explored by combining pharmacokinetics with network pharmacology.

    Directory of Open Access Journals (Sweden)

    Lixia Pei

    Full Text Available The clinical application of Traditional Chinese medicine (TCM), using several herbs in combination (called formulas), has a history of more than one thousand years. However, the bioactive compounds that account for their therapeutic effects remain unclear. We hypothesized that the material basis of a formula comprises those compounds with a high content in the decoction that are maintained at a certain level in the systemic circulation. Network pharmacology provides new methodological insights for complicated system studies. In this study, we propose combining pharmacokinetic (PK) analysis with network pharmacology to explore the material basis of TCM formulas, as exemplified by the Bushen Zhuanggu formula (BZ), composed of Psoralea corylifolia L., Aconitum carmichaeli Debx., and Cnidium monnieri (L.) Cuss. A sensitive and credible liquid chromatography tandem mass spectrometry (LC-MS/MS) method was established for the simultaneous determination of 15 compounds present in the three herbs. The concentrations of these compounds in the BZ decoction and in rat plasma after oral BZ administration were determined. Up to 12 compounds were detected in the BZ decoction, but only 5 could be analyzed using PK parameters. Combined with the PK results, network pharmacology analysis revealed that 4 compounds might serve as the material basis for BZ. We concluded that a sensitive, reliable, and suitable LC-MS/MS method for both the composition and pharmacokinetic study of BZ has been established. The combination of PK with network pharmacology might be a potent method for exploring the material basis of TCM formulas.

  5. Promotion of cooperation in the form C0C1D classified by 'degree grads' in a scale-free network

    International Nuclear Information System (INIS)

    Zhao, Li; Ye, Xiang-Jun; Huang, Zi-Gang; Sun, Jin-Tu; Yang, Lei; Wang, Ying-Hai; Do, Younghae

    2010-01-01

    In this paper, we revisit the issue of the public goods game (PGG) on a heterogeneous graph. By introducing a new effective topology parameter, 'degree grads' ψ, we clearly classify the agents into three kinds, namely, C0, C1, and D. The mechanism for the heterogeneous topology promoting cooperation is discussed in detail from the perspective of C0C1D, which reflects the fact that the unreasoning imitation behaviour of C1 agents, who are 'cheated' by the well-paid C0 agents inhabiting special positions, stabilizes the formation of the cooperation community. The analytical and simulation results for certain parameters are found to coincide well with each other. The C0C1D case provides a picture of the actual behaviours in real society and thus is potentially of interest

  6. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    ved for the isolated English text, but for the handwritten Devanagari script it is not ... characters, lack of standard benchmarking and ground truth dataset, lack of ..... theory, proposed by Glen Shafer as a way to represent cognitive knowledge.

  7. Entropy Based Classifier Combination for Sentence Segmentation

    Science.gov (United States)

    2007-01-01

    speaker diarization system to divide the audio data into hypothetical speakers [17...the prosodic feature also includes turn-based features which describe the position of a word in relation to diarization seg- mentation. The speaker ...ro- bust speaker segmentation: the ICSI-SRI fall 2004 diarization system,” in Proc. RT-04F Workshop, 2004. [18] “The rich transcription fall 2003,” http://nist.gov/speech/tests/rt/rt2003/fall/docs/rt03-fall-eval- plan-v9.pdf.

  8. Synergistic target combination prediction from curated signaling networks: Machine learning meets systems biology and pharmacology.

    Science.gov (United States)

    Chua, Huey Eng; Bhowmick, Sourav S; Tucker-Kellogg, Lisa

    2017-10-01

    Given a signaling network, the target combination prediction problem aims to predict efficacious and safe target combinations for combination therapy. State-of-the-art in silico methods use Monte Carlo simulated annealing (mcsa) to modify a candidate solution stochastically, and use the Metropolis criterion to accept or reject the proposed modifications. However, such stochastic modifications ignore the impact of the choice of targets and their activities on the combination's therapeutic effect and off-target effects, which directly affect the solution quality. In this paper, we present mascot, a method that addresses this limitation by leveraging two additional heuristic criteria to minimize off-target effects and achieve synergy for candidate modification. Specifically, off-target effects measure the unintended response of a signaling network to the target combination and are often associated with toxicity. Synergy occurs when a pair of targets exerts effects that are greater than the sum of their individual effects, and is generally a beneficial strategy for maximizing effect while minimizing toxicity. mascot leverages a machine-learning-based target prioritization method, which prioritizes potential targets in a given disease-associated network to select more effective targets (better therapeutic effect and/or lower off-target effects), and Loewe additivity theory from pharmacology, which assesses the non-additive effects in a combination drug treatment to select synergistic target activities. Our experimental study on two disease-related signaling networks demonstrates the superiority of mascot in comparison to existing approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
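    As a point of reference for the mcsa baseline that mascot improves upon, the snippet below is a generic, hypothetical sketch of simulated annealing with Metropolis acceptance over candidate target combinations. The scoring function, per-target values and neighbourhood move are placeholders, not the actual mascot heuristics.

```python
import math
import random

def anneal(initial, score, neighbour, t0=1.0, cooling=0.95, steps=500):
    """Generic simulated annealing with Metropolis acceptance (maximisation)."""
    current, current_score = initial, score(initial)
    best, best_score = current, current_score
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = score(candidate) - current_score
        # Metropolis criterion: always accept improvements, sometimes accept worse moves
        if delta >= 0 or random.random() < math.exp(delta / t):
            current, current_score = candidate, current_score + delta
        if current_score > best_score:
            best, best_score = current, current_score
        t *= cooling
    return best, best_score

# toy problem: choose 3 of 10 targets maximising a made-up per-target score
targets = list(range(10))
value = {t: random.random() for t in targets}
def score(combo): return sum(value[t] for t in combo)
def neighbour(combo):
    combo = set(combo)
    combo.remove(random.choice(sorted(combo)))
    combo.add(random.choice([t for t in targets if t not in combo]))
    return frozenset(combo)

print(anneal(frozenset(random.sample(targets, 3)), score, neighbour))
```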

  9. Combined effect of storm movement and drainage network configuration on flood peaks

    Science.gov (United States)

    Seo, Yongwon; Son, Kwang Ik; Choi, Hyun Il

    2016-04-01

    This presentation reports the combined effect of storm movement and drainage network layout on the resulting hydrographs and its implications for flood processes and flood mitigation. First, we investigate, in general terms, the effects of storm movement on the resulting flood peaks and the underlying process controls. For this purpose, we utilize a broad theoretical framework that uses characteristic time and space scales associated with stationary as well as moving rainstorms. For a stationary rainstorm, the characteristic timescales that govern the peak response include two intrinsic timescales of a catchment and one extrinsic timescale of a rainstorm. For a moving rainstorm, two additional extrinsic scales are required: the storm travel time and the storm size. We show that the relationship between the peak response and the timescales appropriate for a stationary rainstorm can be extended in a straightforward manner to describe the peak response for a moving rainstorm. For moving rainstorms, we show that the augmentation of the peak response arises both from the effect of overlaying the responses from subcatchments (the resonance condition) and from the effect of increased responses from subcatchments due to increased duration (interdependence), which results in the maximum peak response when the moving rainstorm is slower than the channel flow velocity. Second, we show the relation between channel network configurations and hydrograph sensitivity to storm kinematics. For this purpose, Gibbs' model is used to evaluate the network characteristics. The results show that the storm kinematics that produces the maximum peak discharge depends on the network configuration because the resonance condition changes with the network configuration. We show that an "efficient" network layout is more sensitive and results in a higher increase in peak response compared to an "inefficient" one. These results imply different flood potential risks for river networks depending on network

  10. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    Science.gov (United States)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method, a combined simulated annealing (SA) and genetic algorithm (GA) approach, is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process to search for a better solution to minimize the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different sizes and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
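    A minimal, hypothetical sketch of how GA operators can generate new candidate route subsets inside the SA search described above; the bit-vector encoding (one inclusion bit per candidate route), the pool of mates and the rates are illustrative assumptions rather than the authors' exact operators.

```python
import random

def propose(route_bits, pool, crossover_rate=0.5, mutation_rate=0.02):
    """GA-style proposal used as the SA 'new solution' step: uniform crossover
    with a random mate from the pool, then per-route mutation (add/drop)."""
    mate = random.choice(pool)
    child = [a if random.random() < crossover_rate else b
             for a, b in zip(route_bits, mate)]
    return [bit ^ 1 if random.random() < mutation_rate else bit for bit in child]

# toy usage: 12 candidate routes encoded as inclusion bits (1 = route kept)
pool = [[random.randint(0, 1) for _ in range(12)] for _ in range(20)]
print(propose(pool[0], pool))
```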

  11. Hierarchical mixtures of naive Bayes classifiers

    NARCIS (Netherlands)

    Wiering, M.A.

    2002-01-01

    Naive Bayes classifiers tend to perform very well on a large number of problem domains, although their representation power is quite limited compared to more sophisticated machine learning algorithms. In this paper we study combining multiple naive Bayes classifiers by using the hierarchical
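    As a rough illustration of combining several naive Bayes classifiers (here by simple probability averaging over bootstrap-trained members, not the hierarchical mixture the abstract refers to), one might do something like the following with scikit-learn; the synthetic dataset and the number of component classifiers are arbitrary choices.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# train several naive Bayes classifiers on bootstrap resamples of the data
members = []
for _ in range(5):
    idx = rng.integers(0, len(X), len(X))
    members.append(GaussianNB().fit(X[idx], y[idx]))

# combine by averaging predicted class probabilities (soft voting)
proba = np.mean([m.predict_proba(X) for m in members], axis=0)
print("ensemble training accuracy:", np.mean(proba.argmax(axis=1) == y))
```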

  12. Dynamic Response Genes in CD4+ T Cells Reveal a Network of Interactive Proteins that Classifies Disease Activity in Multiple Sclerosis

    Directory of Open Access Journals (Sweden)

    Sandra Hellberg

    2016-09-01

    Full Text Available Multiple sclerosis (MS) is a chronic inflammatory disease of the CNS and has a varying disease course as well as variable response to treatment. Biomarkers may therefore aid personalized treatment. We tested whether in vitro activation of MS patient-derived CD4+ T cells could reveal potential biomarkers. The dynamic gene expression response to activation was dysregulated in patient-derived CD4+ T cells. By integrating our findings with genome-wide association studies, we constructed a highly connected MS gene module, disclosing cell activation and chemotaxis as central components. Changes in several module genes were associated with differences in protein levels, which were measurable in cerebrospinal fluid and were used to classify patients from control individuals. In addition, these measurements could predict disease activity after 2 years and distinguish low and high responders to treatment in two additional, independent cohorts. While further validation is needed in larger cohorts prior to clinical implementation, we have uncovered a set of potentially promising biomarkers.

  13. Combining many interaction networks to predict gene function and analyze gene lists.

    Science.gov (United States)

    Mostafavi, Sara; Morris, Quaid

    2012-05-01

    In this article, we review how interaction networks can be used alone or in combination in an automated fashion to provide insight into gene and protein function. We describe the concept of a "gene-recommender system" that can be applied to any large collection of interaction networks to make predictions about gene or protein function based on a query list of proteins that share a function of interest. We discuss these systems in general and focus on one specific system, GeneMANIA, that has unique features and uses different algorithms from the majority of other systems. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Hybrid Neuro-Fuzzy Classifier Based On Nefclass Model

    Directory of Open Access Journals (Sweden)

    Bogdan Gliwa

    2011-01-01

    Full Text Available The paper presents a hybrid neuro-fuzzy classifier based on a modified NEFCLASS model. The presented classifier was compared to popular classifiers – neural networks and k-nearest neighbours. The efficiency of the modifications in the classifier was compared with the methods used in the original NEFCLASS model (learning methods). The accuracy of the classifier was tested using 3 datasets from the UCI Machine Learning Repository: iris, wine and breast cancer Wisconsin. Moreover, the influence of ensemble classification methods on classification accuracy was presented.

  15. Boolean network identification from perturbation time series data combining dynamics abstraction and logic programming.

    Science.gov (United States)

    Ostrowski, M; Paulevé, L; Schaub, T; Siegel, A; Guziolowski, C

    2016-11-01

    Boolean networks (and more general logic models) are useful frameworks to study signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time-points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data and provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7 min of computation. We quantified the gain in our method's prediction precision compared to learning approaches based on static data. Finally, as an application, our method proposes erroneous time-points in the time series data with respect to the optimal learned logic models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
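    To make the consistency idea concrete, here is a small hypothetical sketch that checks whether a synchronous Boolean network reproduces a discretized time-series trace. The tooling described in the paper uses Answer Set Programming and richer update semantics, so this toy network and check are only an illustration of the underlying condition.

```python
def simulate(update_funcs, state, steps):
    """Synchronous Boolean network simulation; each node's function sees the full state."""
    trace = [dict(state)]
    for _ in range(steps):
        state = {node: f(state) for node, f in update_funcs.items()}
        trace.append(dict(state))
    return trace

def consistent(update_funcs, observed):
    """Sketch of a necessary condition: every observed transition must equal the
    synchronous successor predicted by the candidate network."""
    for before, after in zip(observed, observed[1:]):
        predicted = {node: f(before) for node, f in update_funcs.items()}
        if any(predicted[n] != after[n] for n in after):
            return False
    return True

# toy 3-node network: C inhibits A, A activates B, B activates C
net = {"A": lambda s: not s["C"], "B": lambda s: s["A"], "C": lambda s: s["B"]}
obs = simulate(net, {"A": True, "B": False, "C": False}, steps=3)
print(consistent(net, obs))   # True by construction
```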

  16. Classification of wheat varieties: Use of two-dimensional gel electrophoresis for varieties that can not be classified by matrix assisted laser desorption/ionization-time of flight-mass spectrometry and an artificial neural network

    DEFF Research Database (Denmark)

    Jacobsen, Susanne; Nesic, Ljiljana; Petersen, Marianne Kjerstine

    2001-01-01

    Analyzing a gliadin extract by matrix assisted laser desorption/ionization-time of flight-mass spectrometry (MALDI-TOF-MS) combined with an artificial neural network (ANN) is a suitable method for identification of wheat varieties. However, the ANN can not distinguish between all different wheat...

  17. Classifying features in CT imagery: accuracy for some single- and multiple-species classifiers

    Science.gov (United States)

    Daniel L. Schmoldt; Jing He; A. Lynn Abbott

    1998-01-01

    Our current approach to automatically label features in CT images of hardwood logs classifies each pixel of an image individually. These feature classifiers use a back-propagation artificial neural network (ANN) and feature vectors that include a small, local neighborhood of pixels and the distance of the target pixel to the center of the log. Initially, this type of...

  18. High-risk plaque features can be detected in non-stenotic carotid plaques of patients with ischaemic stroke classified as cryptogenic using combined 18F-FDG PET/MR imaging

    International Nuclear Information System (INIS)

    Hyafil, Fabien; Schindler, Andreas; Obenhuber, Tilman; Saam, Tobias; Sepp, Dominik; Hoehn, Sabine; Poppert, Holger; Bayer-Karpinska, Anna; Boeckh-Behrens, Tobias; Hacker, Marcus; Nekolla, Stephan G.; Rominger, Axel; Dichgans, Martin; Schwaiger, Markus

    2016-01-01

    The aim of this study was to investigate in 18 patients with ischaemic stroke classified as cryptogenic and presenting non-stenotic carotid atherosclerotic plaques the morphological and biological aspects of these plaques with magnetic resonance imaging (MRI) and 18F-fluoro-deoxyglucose positron emission tomography (18F-FDG PET) imaging. Carotid arteries were imaged 150 min after injection of 18F-FDG with a combined PET/MRI system. American Heart Association (AHA) lesion type and plaque composition were determined on consecutive MRI axial sections (n = 460) in both carotid arteries. 18F-FDG uptake in carotid arteries was quantified using tissue to background ratio (TBR) on corresponding PET sections. The prevalence of complicated atherosclerotic plaques (AHA lesion type VI) detected with high-resolution MRI was significantly higher in the carotid artery ipsilateral to the ischaemic stroke as compared to the contralateral side (39 vs 0 %; p = 0.001). For all other AHA lesion types, no significant differences were found between ipsilateral and contralateral sides. In addition, atherosclerotic plaques classified as high-risk lesions with MRI (AHA lesion type VI) were associated with higher 18F-FDG uptake in comparison with other AHA lesions (TBR = 3.43 ± 1.13 vs 2.41 ± 0.84, respectively; p < 0.001). Furthermore, patients presenting at least one complicated lesion (AHA lesion type VI) with MRI showed significantly higher 18F-FDG uptake in both carotid arteries (ipsilateral and contralateral to the stroke) in comparison with carotid arteries of patients showing no complicated lesion with MRI (mean TBR = 3.18 ± 1.26 and 2.80 ± 0.94 vs 2.19 ± 0.57, respectively; p < 0.05) in favour of a diffuse inflammatory process along both carotid arteries associated with complicated plaques. Morphological and biological features of high-risk plaques can be detected with 18F-FDG PET/MRI in non-stenotic atherosclerotic plaques ipsilateral to the stroke, suggesting a causal

  19. High-risk plaque features can be detected in non-stenotic carotid plaques of patients with ischaemic stroke classified as cryptogenic using combined 18F-FDG PET/MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Hyafil, Fabien [Technische Universitaet Muenchen, Department of Nuclear Medicine, Klinikum rechts der Isar, Munich (Germany); Bichat University Hospital, Department of Nuclear Medicine, Paris (France); Schindler, Andreas; Obenhuber, Tilman; Saam, Tobias [Ludwig Maximilians University Hospital Munich, Institute for Clinical Radiology, Munich (Germany); Sepp, Dominik; Hoehn, Sabine; Poppert, Holger [Technische Universitaet Muenchen, Department of Neurology, Klinikum rechts der Isar, Munich (Germany); Bayer-Karpinska, Anna [Ludwig Maximilians University Hospital Munich, Institute for Stroke and Dementia Research, Munich (Germany); Boeckh-Behrens, Tobias [Technische Universitaet Muenchen, Department of Neuroradiology, Klinikum Rechts der Isar, Munich (Germany); Hacker, Marcus [Medical University of Vienna, Division of Nuclear Medicine, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria); Nekolla, Stephan G. [Technische Universitaet Muenchen, Department of Nuclear Medicine, Klinikum rechts der Isar, Munich (Germany); Partner Site Munich Heart Alliance, German Centre for Cardiovascular Research (DZHK), Munich (Germany); Rominger, Axel [Ludwig Maximilians University Hospital Munich, Department of Nuclear Medicine, Munich (Germany); Dichgans, Martin [Technische Universitaet Muenchen, Department of Neurology, Klinikum rechts der Isar, Munich (Germany); Munich Cluster of Systems Neurology (SyNergy), Munich (Germany); Schwaiger, Markus [Technische Universitaet Muenchen, Department of Nuclear Medicine, Klinikum rechts der Isar, Munich (Germany)

    2016-02-15

    The aim of this study was to investigate in 18 patients with ischaemic stroke classified as cryptogenic and presenting non-stenotic carotid atherosclerotic plaques the morphological and biological aspects of these plaques with magnetic resonance imaging (MRI) and 18F-fluoro-deoxyglucose positron emission tomography (18F-FDG PET) imaging. Carotid arteries were imaged 150 min after injection of 18F-FDG with a combined PET/MRI system. American Heart Association (AHA) lesion type and plaque composition were determined on consecutive MRI axial sections (n = 460) in both carotid arteries. 18F-FDG uptake in carotid arteries was quantified using tissue to background ratio (TBR) on corresponding PET sections. The prevalence of complicated atherosclerotic plaques (AHA lesion type VI) detected with high-resolution MRI was significantly higher in the carotid artery ipsilateral to the ischaemic stroke as compared to the contralateral side (39 vs 0 %; p = 0.001). For all other AHA lesion types, no significant differences were found between ipsilateral and contralateral sides. In addition, atherosclerotic plaques classified as high-risk lesions with MRI (AHA lesion type VI) were associated with higher 18F-FDG uptake in comparison with other AHA lesions (TBR = 3.43 ± 1.13 vs 2.41 ± 0.84, respectively; p < 0.001). Furthermore, patients presenting at least one complicated lesion (AHA lesion type VI) with MRI showed significantly higher 18F-FDG uptake in both carotid arteries (ipsilateral and contralateral to the stroke) in comparison with carotid arteries of patients showing no complicated lesion with MRI (mean TBR = 3.18 ± 1.26 and 2.80 ± 0.94 vs 2.19 ± 0.57, respectively; p < 0.05) in favour of a diffuse inflammatory process along both carotid arteries associated with complicated plaques. Morphological and biological features of high-risk plaques can be detected with 18F-FDG PET/MRI in non-stenotic atherosclerotic plaques ipsilateral

  20. Combined expert system/neural networks method for process fault diagnosis

    Science.gov (United States)

    Reifman, Jaques; Wei, Thomas Y. C.

    1995-01-01

    A two-level hierarchical approach for process fault diagnosis of an operating system employs a function-oriented approach at a first level and a component characteristic-oriented approach at a second level, where the decision-making procedure is structured in order of decreasing intelligence with increasing precision. At the first level, the diagnostic method is general and has knowledge of the overall process including a wide variety of plant transients and the functional behavior of the process components. An expert system classifies malfunctions by function to narrow the diagnostic focus to a particular set of possible faulty components that could be responsible for the detected functional misbehavior of the operating system. At the second level, the diagnostic method limits its scope to component malfunctions, using more detailed knowledge of component characteristics. Trained artificial neural networks are used to further narrow the diagnosis and to uniquely identify the faulty component by classifying the abnormal condition data as a failure of one of the hypothesized components through component characteristics. Once an anomaly is detected, the hierarchical structure is used to successively narrow the diagnostic focus from a function misbehavior, i.e., a function oriented approach, until the fault can be determined, i.e., a component characteristic-oriented approach.

  1. Combined expert system/neural networks method for process fault diagnosis

    Science.gov (United States)

    Reifman, J.; Wei, T.Y.C.

    1995-08-15

    A two-level hierarchical approach for process fault diagnosis of an operating system employs a function-oriented approach at a first level and a component characteristic-oriented approach at a second level, where the decision-making procedure is structured in order of decreasing intelligence with increasing precision. At the first level, the diagnostic method is general and has knowledge of the overall process including a wide variety of plant transients and the functional behavior of the process components. An expert system classifies malfunctions by function to narrow the diagnostic focus to a particular set of possible faulty components that could be responsible for the detected functional misbehavior of the operating system. At the second level, the diagnostic method limits its scope to component malfunctions, using more detailed knowledge of component characteristics. Trained artificial neural networks are used to further narrow the diagnosis and to uniquely identify the faulty component by classifying the abnormal condition data as a failure of one of the hypothesized components through component characteristics. Once an anomaly is detected, the hierarchical structure is used to successively narrow the diagnostic focus from a function misbehavior, i.e., a function oriented approach, until the fault can be determined, i.e., a component characteristic-oriented approach. 9 figs.

  2. THERMODYNAMIC ANALYSIS AND SIMULATION OF A NEW COMBINED POWER AND REFRIGERATION CYCLE USING ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Hossein Rezvantalab

    2011-01-01

    Full Text Available In this study, a new combined power and refrigeration cycle is proposed, which combines the Rankine and absorption refrigeration cycles. Using a binary ammonia-water mixture as the working fluid, this combined cycle produces both power and refrigeration output simultaneously by employing only one external heat source. In order to achieve the highest possible exergy efficiency, a secondary turbine is inserted to expand the hot weak solution leaving the boiler. Moreover, an artificial neural network (ANN) is used to simulate the thermodynamic properties and the relationship between the input thermodynamic variables and the cycle performance. It is shown that turbine inlet pressure, as well as heat source and refrigeration temperatures, have significant effects on the net power output, refrigeration output and exergy efficiency of the combined cycle. In addition, the results of the ANN are in excellent agreement with the mathematical simulation and cover a wider range for evaluation of cycle performance.

  3. Classifying Returns as Extreme

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2014-01-01

    I consider extreme returns for the stock and bond markets of 14 EU countries using two classification schemes: One, the univariate classification scheme from the previous literature that classifies extreme returns for each market separately, and two, a novel multivariate classification scheme tha...

  4. Towards Aiding Decision-Making in Social Networks by Using Sentiment and Stress Combined Analysis

    OpenAIRE

    Guillem Aguado; Vicente Julian; Ana Garcia-Fornes

    2018-01-01

    The present work is a study of the detection of negative emotional states that people have using social network sites (SNSs), and the effect that this negative state has on the repercussions of posted messages. We aim to discover to what degree a user whose affective state is considered negative by an Analyzer can affect other users and generate bad repercussions. Those Analyzers that we propose are a Sentiment Analyzer, a Stress Analyzer and a novel combined Analyzer. We also want to discov...

  5. Output-feedback control of combined sewer networks through receding horizon control with moving horizon estimation

    Science.gov (United States)

    Joseph-Duran, Bernat; Ocampo-Martinez, Carlos; Cembrano, Gabriela

    2015-10-01

    An output-feedback control strategy for pollution mitigation in combined sewer networks is presented. The proposed strategy provides means to apply model-based predictive control to large-scale sewer networks, in spite of the lack of measurements at most of the network sewers. In previous works, the authors presented a hybrid linear control-oriented model for sewer networks together with the formulation of Optimal Control Problems (OCP) and State Estimation Problems (SEP). By iteratively solving these problems, preliminary Receding Horizon Control with Moving Horizon Estimation (RHC/MHE) results, based on flow measurements, were also obtained. In this work, the RHC/MHE algorithm has been extended to take into account both flow and water level measurements, and the resulting control loop has been extensively simulated to assess the system performance according to different measurement availability scenarios and rain events. All simulations have been carried out using a detailed physically based model of a real case-study network as virtual reality.

  6. Optimal Operation of Network-Connected Combined Heat and Powers for Customer Profit Maximization

    Directory of Open Access Journals (Sweden)

    Da Xie

    2016-06-01

    Full Text Available Network-connected combined heat and powers (CHPs), owned by a community, can export surplus heat and electricity to corresponding heat and electric networks after community loads are satisfied. This paper proposes a new optimization model for network-connected CHP operation. Both CHPs’ overall efficiency and heat to electricity ratio (HTER) are assumed to vary with loading levels. Based on different energy flow scenarios where heat and electricity are exported to the network from the community or imported, four profit models are established accordingly. They reflect the different relationships between CHP energy supply and community load demand across time. A discrete optimization model is then developed to maximize the profit for the community. The models are derived from the intervals determined by the daily operation modes of CHP and real-time buying and selling prices of heat, electricity and natural gas. By demonstrating the proposed models on a 1 MW network-connected CHP, results show that the community profits are maximized in energy markets. Thus, the proposed optimization approach can help customers to devise optimal CHP operating strategies for maximizing benefits.

  7. Prediction of Increasing Production Activities using Combination of Query Aggregation on Complex Events Processing and Neural Network

    Directory of Open Access Journals (Sweden)

    Achmad Arwan

    2016-07-01

    Full Text Available Production, orders, sales, and shipments are series of interrelated events within the manufacturing industry, and the results of these events are recorded in the event log. Complex event processing is a method used to analyse whether patterns of certain event combinations (opportunities/threats) occur in a system, so that they can be addressed quickly and appropriately. An artificial neural network is the method used to classify production-increase activities. The recorded series of events that cause an increase in production is used as training data to obtain the activation function of the neural network. The aggregated counts from the event log are fed into the neural network input to compute the activation value. When the activation value exceeds the specified threshold, the system emits a signal to increase production; otherwise, the system continues to monitor events. Experimental results show that the accuracy of this method is 77% over 39 event stream sequences. Keywords: complex event processing, event, artificial neural network, production increase prediction, process.
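    A hypothetical toy version of the pipeline described above: counts of each event type over a window are aggregated (the CEP-style query) and fed to a single trained neuron; if the activation exceeds a threshold, a "scale up production" signal is emitted. The weights, bias, threshold and event-type names are illustrative assumptions, not the paper's learned model.

```python
import math
from collections import Counter

# assumed learned weights for aggregated event counts (one input per event type)
WEIGHTS = {"order": 0.8, "sale": 0.6, "shipment": 0.4, "return": -0.9}
BIAS, THRESHOLD = -3.0, 0.7

def activation(event_log):
    """Aggregate the event log (a CEP-style count query) and apply a sigmoid unit."""
    counts = Counter(e["type"] for e in event_log)
    z = BIAS + sum(WEIGHTS.get(t, 0.0) * n for t, n in counts.items())
    return 1.0 / (1.0 + math.exp(-z))

def monitor(event_log):
    return "increase production" if activation(event_log) > THRESHOLD else "keep monitoring"

# toy window of events aggregated from the log
window = [{"type": "order"}] * 4 + [{"type": "sale"}] * 3 + [{"type": "return"}]
print(monitor(window))
```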

  8. Use of Bayesian networks classifiers for long-term mean wind turbine energy output estimation at a potential wind energy conversion site

    Energy Technology Data Exchange (ETDEWEB)

    Carta, Jose A. [Department of Mechanical Engineering, University of Las Palmas de Gran Canaria, Campus de Tafira s/n, 35017 Las Palmas de Gran Canaria, Canary Islands (Spain); Velazquez, Sergio [Department of Electronics and Automatics Engineering, University of Las Palmas de Gran Canaria, Campus de Tafira s/n, 35017 Las Palmas de Gran Canaria, Canary Islands (Spain); Matias, J.M. [Department of Statistics, University of Vigo, Lagoas Marcosende, 36200 Vigo (Spain)

    2011-02-15

    Due to the interannual variability of wind speed a feasibility analysis for the installation of a Wind Energy Conversion System at a particular site requires estimation of the long-term mean wind turbine energy output. A method is proposed in this paper which, based on probabilistic Bayesian networks (BNs), enables estimation of the long-term mean wind speed histogram for a site where few measurements of the wind resource are available. For this purpose, the proposed method allows the use of multiple reference stations with a long history of wind speed and wind direction measurements. That is to say, the model that is proposed in this paper is able to involve and make use of regional information about the wind resource. With the estimated long-term wind speed histogram and the power curve of a wind turbine it is possible to use the method of bins to determine the long-term mean energy output for that wind turbine. The intelligent system employed, the knowledgebase of which is a joint probability function of all the model variables, uses efficient calculation techniques for conditional probabilities to perform the reasoning. This enables automatic model learning and inference to be performed efficiently based on the available evidence. The proposed model is applied in this paper to wind speeds and wind directions recorded at four weather stations located in the Canary Islands (Spain). Ten years of mean hourly wind speed and direction data are available for these stations. One of the conclusions reached is that the BN with three reference stations gave fewer errors between the real and estimated long-term mean wind turbine energy output than when using two measure-correlate-predict algorithms which were evaluated and which use a linear regression between the candidate station and one reference station. (author)
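    The "method of bins" step mentioned above is easy to sketch: multiply the estimated long-term wind-speed histogram by the turbine power curve, bin by bin, to get the expected power and hence the mean energy output. The histogram and power-curve values below are made-up illustrative numbers, not data from the Canary Islands stations.

```python
import numpy as np

# assumed long-term wind-speed histogram: probabilities for 1 m/s bins from 4 to 12 m/s
probability = np.array([0.10, 0.14, 0.16, 0.15, 0.13, 0.11, 0.08, 0.05, 0.08])
probability = probability / probability.sum()          # normalise to 1

# assumed turbine power curve sampled at the same bin centres (kW)
power_curve = np.array([50, 120, 210, 330, 470, 620, 760, 870, 950])

# method of bins: expected power, then annual energy output
mean_power_kw = float(np.sum(probability * power_curve))
annual_energy_mwh = mean_power_kw * 8760 / 1000
print(f"long-term mean output: {mean_power_kw:.0f} kW, about {annual_energy_mwh:.0f} MWh/yr")
```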

  9. Use of Bayesian networks classifiers for long-term mean wind turbine energy output estimation at a potential wind energy conversion site

    International Nuclear Information System (INIS)

    Carta, Jose A.; Velazquez, Sergio; Matias, J.M.

    2011-01-01

    Due to the interannual variability of wind speed a feasibility analysis for the installation of a Wind Energy Conversion System at a particular site requires estimation of the long-term mean wind turbine energy output. A method is proposed in this paper which, based on probabilistic Bayesian networks (BNs), enables estimation of the long-term mean wind speed histogram for a site where few measurements of the wind resource are available. For this purpose, the proposed method allows the use of multiple reference stations with a long history of wind speed and wind direction measurements. That is to say, the model that is proposed in this paper is able to involve and make use of regional information about the wind resource. With the estimated long-term wind speed histogram and the power curve of a wind turbine it is possible to use the method of bins to determine the long-term mean energy output for that wind turbine. The intelligent system employed, the knowledgebase of which is a joint probability function of all the model variables, uses efficient calculation techniques for conditional probabilities to perform the reasoning. This enables automatic model learning and inference to be performed efficiently based on the available evidence. The proposed model is applied in this paper to wind speeds and wind directions recorded at four weather stations located in the Canary Islands (Spain). Ten years of mean hourly wind speed and direction data are available for these stations. One of the conclusions reached is that the BN with three reference stations gave fewer errors between the real and estimated long-term mean wind turbine energy output than when using two measure-correlate-predict algorithms which were evaluated and which use a linear regression between the candidate station and one reference station.

  10. Energy-Efficient Neuromorphic Classifiers.

    Science.gov (United States)

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2016-10-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumption promised by neuromorphic engineering is extremely low, comparable to that of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that of conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.

  11. Intelligent Garbage Classifier

    Directory of Open Access Journals (Sweden)

    Ignacio Rodríguez Novelle

    2008-12-01

    Full Text Available IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  12. Classifying Linear Canonical Relations

    OpenAIRE

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  13. What Combinations of Contents is Driving Popularity in IPTV-based Social Networks?

    Science.gov (United States)

    Bhatt, Rajen

    IPTV-based Social Networks are gaining popularity, with TV programs coming over an IP connection and internet-like applications available on the home TV. One such application is rating TV programs over some predefined genres. In this paper, we suggest an approach for building a recommender system to be used by content distributors, publishers, and motion picture producers and directors to decide which combinations of contents may drive popularity or unpopularity. This may then be used for creating a proper mixture of media contents which can drive high popularity. It may also be used for catering customized contents to groups of users whose tastes are similar and for whom the combinations of contents driving popularity are therefore also similar. We use a novel approach for this formulation utilizing fuzzy decision trees. Computational experiments performed over a real-world program review database show that the proposed approach is very effective for understanding which content combinations drive popularity.

  14. Application of artificial neural network model combined with four biomarkers in auxiliary diagnosis of lung cancer.

    Science.gov (United States)

    Duan, Xiaoran; Yang, Yongli; Tan, Shanjuan; Wang, Sihua; Feng, Xiaolei; Cui, Liuxin; Feng, Feifei; Yu, Songcheng; Wang, Wei; Wu, Yongjun

    2017-08-01

    The purpose of the study was to explore the application of artificial neural network model in the auxiliary diagnosis of lung cancer and compare the effects of back-propagation (BP) neural network with Fisher discrimination model for lung cancer screening by the combined detections of four biomarkers of p16, RASSF1A and FHIT gene promoter methylation levels and the relative telomere length. Real-time quantitative methylation-specific PCR was used to detect the levels of three-gene promoter methylation, and real-time PCR method was applied to determine the relative telomere length. BP neural network and Fisher discrimination analysis were used to establish the discrimination diagnosis model. The levels of three-gene promoter methylation in patients with lung cancer were significantly higher than those of the normal controls. The values of Z(P) in two groups were 2.641 (0.008), 2.075 (0.038) and 3.044 (0.002), respectively. The relative telomere lengths of patients with lung cancer (0.93 ± 0.32) were significantly lower than those of the normal controls (1.16 ± 0.57), t = 4.072, P < 0.001. The areas under the ROC curve (AUC) and 95 % CI of prediction set from Fisher discrimination analysis and BP neural network were 0.670 (0.569-0.761) and 0.760 (0.664-0.840). The AUC of BP neural network was higher than that of Fisher discrimination analysis, and Z(P) was 0.76. Four biomarkers are associated with lung cancer. BP neural network model for the prediction of lung cancer is better than Fisher discrimination analysis, and it can provide an excellent and intelligent diagnosis tool for lung cancer.

  15. ComboCoding: Combined intra-/inter-flow network coding for TCP over disruptive MANETs

    Directory of Open Access Journals (Sweden)

    Chien-Chia Chen

    2011-07-01

    Full Text Available TCP over wireless networks is challenging due to random losses and ACK interference. Although network coding schemes have been proposed to improve TCP robustness against extreme random losses, a critical problem of DATA–ACK interference still remains. To address this issue, we use inter-flow coding between DATA and ACK to reduce the number of transmissions among nodes. In addition, we also utilize a “pipeline” random linear coding scheme with adaptive redundancy to overcome high packet loss over unreliable links. The resulting coding scheme, ComboCoding, combines intra-flow and inter-flow coding to provide robust TCP transmission in disruptive wireless networks. The main contributions of our scheme are twofold: the efficient combination of random linear coding and XOR coding on bi-directional streams (DATA and ACK), and the novel redundancy control scheme that adapts to time-varying and space-varying link loss. The adaptive ComboCoding was tested on a variable hop string topology with unstable links and on a multipath MANET with dynamic topology. Simulation results show that TCP with ComboCoding delivers higher throughput than with other coding options in high loss and mobile scenarios, while introducing minimal overhead in normal operation.
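    The inter-flow part of the idea (XOR-coding a DATA packet with an ACK travelling in the opposite direction so that one relay broadcast serves both flows) can be illustrated with a few lines of byte-level XOR. The packet contents below are arbitrary placeholders, and the sketch ignores headers, scheduling and the intra-flow random linear coding.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two packets, zero-padding the shorter one."""
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

data_pkt = b"DATA: segment 42 payload"
ack_pkt = b"ACK: 41"

coded = xor_bytes(data_pkt, ack_pkt)      # the relay broadcasts one coded packet

# each endpoint already holds the packet it sent, so it can recover the other one
assert xor_bytes(coded, ack_pkt)[:len(data_pkt)] == data_pkt
assert xor_bytes(coded, data_pkt)[:len(ack_pkt)] == ack_pkt
print(len(coded), "bytes broadcast once instead of two separate transmissions")
```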

  16. Combining Topological Hardware and Topological Software: Color-Code Quantum Computing with Topological Superconductor Networks

    Science.gov (United States)

    Litinski, Daniel; Kesselring, Markus S.; Eisert, Jens; von Oppen, Felix

    2017-07-01

    We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall-superconductor hybrids.

  17. Exploring the Combination of Dempster-Shafer Theory and Neural Network for Predicting Trust and Distrust

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2016-01-01

    Full Text Available In social media, trust and distrust among users are important factors in helping users make decisions, dissect information, and receive recommendations. However, the sparsity and imbalance of social relations bring great difficulties and challenges in predicting trust and distrust. Meanwhile, there are numerous inducing factors to determine trust and distrust relations. The relationships among inducing factors may be dependence, independence, or conflict. Dempster-Shafer theory and neural networks are effective and efficient strategies to deal with these difficulties and challenges. In this paper, we study trust and distrust prediction based on the combination of Dempster-Shafer theory and a neural network. We first analyze the inducing factors of trust and distrust, namely, homophily, status theory, and emotion tendency. Then, we quantify the inducing factors of trust and distrust, take these features as evidence, and construct evidence prototypes as input nodes of a multilayer neural network. Finally, we propose a framework for predicting trust and distrust that uses a multilayer neural network to model the implementing process of Dempster-Shafer theory in different hidden layers, aiming to overcome the disadvantage that Dempster-Shafer theory lacks an optimization method. Experimental results on a real-world dataset demonstrate the effectiveness of the proposed framework.
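    For readers unfamiliar with the Dempster-Shafer side of the framework, the snippet below combines two mass functions over the frame {trust, distrust} with Dempster's rule of combination. The example masses and the two evidence sources are made up for illustration, and the neural-network layering described in the paper is not reproduced here.

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's rule of combination for mass functions keyed by frozensets."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict, masses cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

T, D = frozenset({"trust"}), frozenset({"distrust"})
THETA = T | D   # the whole frame (ignorance)

# illustrative evidence from two sources, e.g. homophily and emotion tendency
m_homophily = {T: 0.6, D: 0.1, THETA: 0.3}
m_emotion = {T: 0.5, D: 0.3, THETA: 0.2}
print(dempster(m_homophily, m_emotion))
```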

  18. Combining Topological Hardware and Topological Software: Color-Code Quantum Computing with Topological Superconductor Networks

    Directory of Open Access Journals (Sweden)

    Daniel Litinski

    2017-09-01

    Full Text Available We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall–superconductor hybrids.

  19. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    Science.gov (United States)

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  20. Combining network analysis with Cognitive Work Analysis: insights into social organisational and cooperation analysis.

    Science.gov (United States)

    Houghton, Robert J; Baber, Chris; Stanton, Neville A; Jenkins, Daniel P; Revell, Kirsten

    2015-01-01

    Cognitive Work Analysis (CWA) allows complex, sociotechnical systems to be explored in terms of their potential configurations. However, CWA does not explicitly analyse the manner in which person-to-person communication is performed in these configurations. Consequently, the combination of CWA with Social Network Analysis provides a means by which CWA output can be analysed to consider communication structure. The approach is illustrated through a case study of a military planning team. The case study shows how actor-to-actor and actor-to-function mapping can be analysed, in terms of centrality, to produce metrics of system structure under different operating conditions. In this paper, a technique for building social network diagrams from CWA is demonstrated. The approach allows analysts to appreciate the potential impact of organisational structure on a command system.

  1. Analysing collaboration among HIV agencies through combining network theory and relational coordination.

    Science.gov (United States)

    Khosla, Nidhi; Marsteller, Jill Ann; Hsu, Yea Jen; Elliott, David L

    2016-02-01

    Agencies with different foci (e.g. nutrition, social, medical, housing) serve people living with HIV (PLHIV). Serving needs of PLHIV comprehensively requires a high degree of coordination among agencies, which often benefits from more frequent communication. We combined Social Network theory and Relational Coordination theory to study coordination among HIV agencies in Baltimore. Social Network theory implies that actors (e.g., HIV agencies) establish linkages amongst themselves in order to access resources (e.g., information). Relational Coordination theory suggests that high quality coordination among agencies or teams relies on the seven dimensions of frequency, timeliness and accuracy of communication, problem-solving communication, knowledge of agencies' work, mutual respect and shared goals. We collected data on frequency of contact from 57 agencies using a roster method. Response options were ordinal, ranging from 'not at all' to 'daily'. We analyzed data using social network measures. Next, we selected agencies with which at least one-third of the sample reported monthly or more frequent interaction. This yielded 11 agencies, which we surveyed on seven relational coordination dimensions with questions scored on a Likert scale of 1-5. Network density, defined as the proportion of existing connections to all possible connections, was 20% when considering monthly or higher interaction. Relational coordination scores from individual agencies to others ranged between 1.17 and 5.00 (maximum possible score 5). The average scores for different dimensions across all agencies ranged between 3.30 and 4.00. Shared goals (4.00) and mutual respect (3.91) scores were highest, while scores such as knowledge of each other's work and problem-solving communication were relatively lower. Combining theoretically driven analyses in this manner offers an innovative way to provide a comprehensive picture of inter-agency coordination and the quality of exchange that underlies
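    The density figure quoted above follows directly from its definition. The tiny example below uses hypothetical agency names and ties (chosen so the toy density also comes out at 20%); it is not the study's actual network.

```python
from itertools import combinations

# hypothetical monthly-or-more-frequent ties among five agencies (undirected)
agencies = ["nutrition", "housing", "medical", "social", "legal"]
ties = {("nutrition", "medical"), ("medical", "social")}

possible = len(list(combinations(agencies, 2)))   # n*(n-1)/2 = 10 possible pairs
density = len(ties) / possible
print(f"network density: {density:.0%}")          # 2/10 = 20% in this toy example

# degree per agency: number of monthly-or-more partners
degree = {a: sum(a in pair for pair in ties) for a in agencies}
print(degree)
```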

  2. Right putamen and age are the most discriminant features to diagnose Parkinson's disease by using 123I-FP-CIT brain SPET data by using an artificial neural network classifier, a classification tree (ClT).

    Science.gov (United States)

    Cascianelli, S; Tranfaglia, C; Fravolini, M L; Bianconi, F; Minestrini, M; Nuvoli, S; Tambasco, N; Dottorini, M E; Palumbo, B

    2017-01-01

    The differential diagnosis of Parkinson's disease (PD) and other conditions, such as essential tremor and drug-induced parkinsonian syndrome or normal aging brain, represents a diagnostic challenge. 123I-FP-CIT brain SPET is able to contribute to the differential diagnosis. Semiquantitative analysis of radiopharmaceutical uptake in the basal ganglia (caudate nuclei and putamina) is very useful to support the diagnostic process. An artificial neural network classifier using 123I-FP-CIT brain SPET data, a classification tree (CIT), was applied. CIT is an automatic classifier composed of a set of logical rules, organized as a decision tree, to produce an optimised threshold-based classification of data and provide discriminative cut-off values. We applied a CIT to 123I-FP-CIT brain SPET semiquantitative data to obtain cut-off values of radiopharmaceutical uptake ratios in caudate nuclei and putamina, with the aim of diagnosing PD versus other conditions. We retrospectively investigated 187 patients undergoing 123I-FP-CIT brain SPET (Millennium VG, G.E.M.S.) with semiquantitative analysis performed with Basal Ganglia (BasGan) V2 software according to EANM guidelines; among them, 113 were affected by PD (PD group) and 74 (N group) by other non-parkinsonian conditions, such as essential tremor and drug-induced PD. The PD group included 113 subjects (60 M and 53 F, age 60-81 yrs) with Hoehn and Yahr score (HY) 0.5-1.5 and Unified Parkinson Disease Rating Scale (UPDRS) score 6-38; the N group included 74 subjects (36 M and 38 F, age range 60-80 yrs). All subjects were clinically followed for at least 6-18 months to confirm the diagnosis. To examine the data obtained by using the CIT, for each of the 1,000 experiments carried out, 10% of patients were randomly selected as the CIT training set, while the remaining 90% validated the trained CIT, and the percentage of the validation data correctly classified in the two groups of patients was computed. The expected performance of an "average

  3. Enhanced three-dimensional stochastic adjustment for combined volcano geodetic networks

    Science.gov (United States)

    Del Potro, R.; Muller, C.

    2009-12-01

    Volcano geodesy is unquestionably a necessary technique in studies of physical volcanology and for eruption early warning systems. However, as every volcano geodesist knows, obtaining measurements of the required resolution using traditional campaigns and techniques is time consuming and requires a large amount of manpower. Moreover, most volcano geodetic networks worldwide use a combination of data from traditional techniques: levelling, electronic distance measurements (EDM), triangulation and Global Navigation Satellite Systems (GNSS); but, in most cases, these data are surveyed, analysed and adjusted independently. This then leaves it to the authors’ criteria to decide which technique renders the most realistic results in each case. Herein we present a way of solving the problem of inter-methodology data integration in a cost-effective manner, following a methodology where all the geodetic data of a redundant, combined network (e.g. surveyed by GNSS, levelling, distance, angular data, InSAR, extensometers, etc.) are adjusted stochastically within a single three-dimensional reference frame. The adjustment methodology is based on the least mean square method and links the data with its geometrical component, providing combined, precise, three-dimensional displacement vectors, relative to external reference points, as well as stochastically quantified, benchmark-specific uncertainty ellipsoids. Three steps in the adjustment allow identifying, and hence dismissing, flagrant measurement errors (antenna height, atmospheric effects, etc.), checking the consistency of external reference points, and performing a final adjustment of the data. Moreover, since the statistical indicators can be obtained from expected uncertainties in the measurements of the different geodetic techniques used (i.e. independent of the measured data), it is possible to run a priori simulations of a geodetic network in order to constrain its resolution, and reduce logistics, before the network is even built. In this

  4. Choice of implant combinations in total hip replacement: systematic review and network meta-analysis.

    Science.gov (United States)

    López-López, José A; Humphriss, Rachel L; Beswick, Andrew D; Thom, Howard H Z; Hunt, Linda P; Burston, Amanda; Fawsitt, Christopher G; Hollingworth, William; Higgins, Julian P T; Welton, Nicky J; Blom, Ashley W; Marques, Elsa M R

    2017-11-02

    Objective  To compare the survival of different implant combinations for primary total hip replacement (THR). Design  Systematic review and network meta-analysis. Data sources  Medline, Embase, The Cochrane Library, ClinicalTrials.gov, WHO International Clinical Trials Registry Platform, and the EU Clinical Trials Register. Review methods  Published randomised controlled trials comparing different implant combinations. Implant combinations were defined by bearing surface materials (metal-on-polyethylene, ceramic-on-polyethylene, ceramic-on-ceramic, or metal-on-metal), head size (large ≥36 mm or small meta-analysis for revision. There was no evidence that the risk of revision surgery was reduced by other implant combinations compared with the reference implant combination. Although estimates are imprecise, metal-on-metal, small head, cemented implants (hazard ratio 4.4, 95% credible interval 1.6 to 16.6) and resurfacing (12.1, 2.1 to 120.3) increase the risk of revision at 0-2 years after primary THR compared with the reference implant combination. Similar results were observed for the 2-10 years period. 31 studies (2888 patients) were included in the analysis of Harris hip score. No implant combination had a better score than the reference implant combination. Conclusions  Newer implant combinations were not found to be better than the reference implant combination (metal-on-polyethylene (not highly cross linked), small head, cemented) in terms of risk of revision surgery or Harris hip score. Metal-on-metal, small head, cemented implants and resurfacing increased the risk of revision surgery compared with the reference implant combination. The results were consistent with observational evidence and were replicated in sensitivity analysis but were limited by poor reporting across studies. Systematic review registration  PROSPERO CRD42015019435. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence

  5. Adaptive Linear and Normalized Combination of Radial Basis Function Networks for Function Approximation and Regression

    Directory of Open Access Journals (Sweden)

    Yunfeng Wu

    2014-01-01

    Full Text Available This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine the component radial basis function networks (RBFNs) to implement better function approximation and regression tasks. The optimization of the fusion weights is obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform the selective ensemble of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of the experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve a higher accuracy (measured in terms of mean-squared error) and better fidelity (characterized by the normalized correlation coefficient) of approximation, in relation to the popular simple average, weighted average, and Bagging methods.
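
    The fusion-weight optimisation can be illustrated as a small constrained quadratic programme: minimise the squared ensemble error subject to the weights summing to one (non-negative weights for the normalised variant). This is a hedged sketch of the general idea using SciPy, not the authors' exact formulation; the two component predictors below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def fusion_weights(predictions, target, nonnegative=True):
    """Solve a small constrained QP for ensemble fusion weights.

    predictions : (n_samples, n_models) outputs of the component learners.
    target      : (n_samples,) desired output.
    """
    m = predictions.shape[1]
    objective = lambda w: np.sum((predictions @ w - target) ** 2)
    constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    bounds = [(0.0, None)] * m if nonnegative else None
    res = minimize(objective, np.full(m, 1.0 / m),
                   bounds=bounds, constraints=constraints)
    return res.x

# Toy usage: combine two noisy approximators of a sine function
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
target = np.sin(x)
predictions = np.column_stack([target + 0.1 * rng.normal(size=x.size),
                               target + 0.3 * rng.normal(size=x.size)])
print(fusion_weights(predictions, target))   # more weight on the less noisy model
```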

  6. Analysis and forecast of railway coal transportation volume based on BP neural network combined forecasting model

    Science.gov (United States)

    Xu, Yongbin; Xie, Haihong; Wu, Liuyi

    2018-05-01

    The share of coal transportation in the total railway freight volume is about 50%. As is widely acknowledged, the coal industry is vulnerable to the economic situation and national policies, and coal transportation volume fluctuates significantly under the new economic normal. Grasping the overall development trend of the railway coal transportation market therefore has important reference and guidance value for railway and coal industry decision-making. By analyzing economic indicators and policy implications, this paper expounds the trend of coal transportation volume, and further combines the economic indicators that are highly correlated with coal transportation volume with a traditional traffic prediction model to establish a combined forecasting model based on a back-propagation neural network. The error of the prediction results is tested, which shows that the method has higher accuracy and practical applicability.

  7. Classifying network attack scenarios using an ontology

    CSIR Research Space (South Africa)

    Van Heerden, RP

    2012-03-01

    Full Text Available ) or to the target's reputation. The Residue sub-phase refers to damage or artefacts of the attack that occur after the attack goal has been achieved, and occurs because the attacker loses control of some systems. For example after the launch of a DDoS...

  8. A Simple Neural Network Contextual Classifier

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Tidemann, J.

    1997-01-01

    I. Kanellopoulos, G.G. Wilkinson, F. Roli and J. Austin (Eds.) Proceedings of European Union Environment and Climate Programme Concerted Action COMPARES (COnnectionist Methods in Pre-processing and Analysis of REmote Sensing data)....

  9. Stack filter classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Hush, Don [Los Alamos National Laboratory

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification, their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.
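
    To make the analogy concrete, the sketch below shows a weighted median (the order-statistic counterpart of a weighted average) and a toy classifier that thresholds it, in the same way a linear classifier thresholds a weighted sum. This only illustrates the model class; it is not the learning algorithm developed in the paper, and all numbers are invented.

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted order statistic: the 50% point of the weight distribution."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def wos_classify(features, weights, threshold):
    """Threshold a weighted median, mirroring how a linear model thresholds a weighted sum."""
    return int(weighted_median(features, weights) >= threshold)

print(weighted_median([1, 2, 3, 100], [1, 1, 1, 1]))   # 2: robust to the outlier
print(wos_classify([0.2, 0.8, 0.9], [1, 2, 1], 0.5))   # 1
```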

  10. Combined metabolomic and correlation networks analyses reveal fumarase insufficiency altered amino acid metabolism.

    Science.gov (United States)

    Hou, Entai; Li, Xian; Liu, Zerong; Zhang, Fuchang; Tian, Zhongmin

    2018-04-01

    Fumarase catalyzes the interconversion of fumarate and l-malate in the tricarboxylic acid cycle. Fumarase insufficiencies were associated with increased levels of fumarate, decreased levels of malate and exacerbated salt-induced hypertension. To gain insights into the metabolic profiles induced by fumarase insufficiency and identify key regulatory metabolites, we applied a GC-MS based metabolomics platform coupled with a network approach to analyze fumarase-insufficient human umbilical vein endothelial cells (HUVECs) and negative controls. A total of 24 metabolites involved in seven metabolic pathways were identified as significantly altered, and enriched for the biological module of amino acid metabolism. In addition, Pearson correlation network analysis revealed that fumaric acid, l-malic acid, l-aspartic acid, glycine and l-glutamic acid were hub metabolites, according to PageRank and their three centrality indices. Alanine aminotransferase and glutamate dehydrogenase activities increased significantly in fumarase-insufficient HUVECs. These results confirmed that fumarase insufficiency altered amino acid metabolism. The combination of metabolomics and network methods would provide another perspective for expounding the molecular mechanism at the metabolomics level. Copyright © 2017 John Wiley & Sons, Ltd.
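
    A hedged sketch of the correlation-network step: build a Pearson correlation graph over metabolites and rank hubs with PageRank (the study also used three centrality indices). The threshold, metabolite names and data below are illustrative, and the networkx package is assumed to be available.

```python
import numpy as np
import networkx as nx

def correlation_network(data, names, r_threshold=0.7):
    """Pearson correlation network over metabolites (columns of `data`)."""
    corr = np.corrcoef(data, rowvar=False)
    g = nx.Graph()
    g.add_nodes_from(names)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) >= r_threshold:
                g.add_edge(names[i], names[j], weight=abs(corr[i, j]))
    return g

rng = np.random.default_rng(1)
names = ["fumarate", "malate", "aspartate", "glycine", "glutamate"]
data = rng.normal(size=(30, len(names)))               # 30 samples x 5 metabolites (synthetic)
data[:, 1] = data[:, 0] + 0.1 * rng.normal(size=30)    # force one correlated pair
g = correlation_network(data, names)
hubs = sorted(nx.pagerank(g).items(), key=lambda kv: -kv[1])
print(hubs)
```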

  11. A probabilistic approach to combining smart meter and electric vehicle charging data to investigate distribution network impacts

    International Nuclear Information System (INIS)

    Neaimeh, Myriam; Wardle, Robin; Jenkins, Andrew M.; Yi, Jialiang; Hill, Graeme; Lyons, Padraig F.; Hübner, Yvonne; Blythe, Phil T.; Taylor, Phil C.

    2015-01-01

    Highlights: • Working with unique datasets of EV charging and smart meter load demand. • Distribution networks are not a homogeneous group and have more capability to accommodate EVs than previously suggested. • Spatial and temporal diversity of EV charging demand alleviates the impacts on networks. • An extensive recharging infrastructure could enable connection of additional EVs on constrained distribution networks. • Electric utilities could increase the network capability to accommodate EVs by investing in recharging infrastructure. - Abstract: This work uses a probabilistic method to combine two unique datasets of real world electric vehicle charging profiles and residential smart meter load demand. The data were used to study the impact of the uptake of Electric Vehicles (EVs) on electricity distribution networks. Two real networks representing an urban and a rural area, and a generic network representative of a heavily loaded UK distribution network were used. The findings show that distribution networks are not a homogeneous group: their capability to accommodate EVs varies and is greater than previous studies have suggested. Consideration of the spatial and temporal diversity of EV charging demand has been demonstrated to reduce the estimated impacts on the distribution networks. It is suggested that distribution network operators could collaborate with new market players, such as charging infrastructure operators, to support the roll out of an extensive charging infrastructure in a way that makes the network more robust; create more opportunities for demand side management; and reduce planning uncertainties associated with the stochastic nature of EV charging demand.

  12. A semi-supervised learning approach to predict synthetic genetic interactions by combining functional and topological properties of functional gene network

    Directory of Open Access Journals (Sweden)

    Han Kyungsook

    2010-06-01

    Full Text Available Abstract Background Genetic interaction profiles are highly informative and helpful for understanding the functional linkages between genes, and therefore have been extensively exploited for annotating gene functions and dissecting specific pathway structures. However, our understanding of the relationship between double concurrent perturbations and various higher-level phenotypic changes, e.g. those in cells, tissues or organs, remains rather limited. Modifier screens, such as synthetic genetic arrays (SGA), can help us to understand the phenotype caused by combined gene mutations. Unfortunately, exhaustive tests on all possible combined mutations in any genome are vulnerable to combinatorial explosion and are infeasible either technically or financially. Therefore, an accurate computational approach to predict genetic interaction is highly desirable, and such methods have the potential of alleviating the bottleneck on experiment design. Results In this work, we introduce a computational systems biology approach for the accurate prediction of pairwise synthetic genetic interactions (SGI). First, a high-coverage and high-precision functional gene network (FGN) is constructed by integrating protein-protein interaction (PPI), protein complex and gene expression data; then, a graph-based semi-supervised learning (SSL) classifier is utilized to identify SGI, where the topological properties of protein pairs in the weighted FGN are used as input features of the classifier. We compare the proposed SSL method with the state-of-the-art supervised classifier, the support vector machine (SVM), on a benchmark dataset in S. cerevisiae to validate our method's ability to distinguish synthetic genetic interactions from non-interacting gene pairs. Experimental results show that the proposed method can accurately predict genetic interactions in S. cerevisiae (with a sensitivity of 92% and specificity of 91%). Noticeably, the SSL method is more efficient than SVM, especially for

  13. [Effect of microneedle combined with Lauromacrogol on skin capillary network: experimental study].

    Science.gov (United States)

    Xu, Sida; Wei, Qiang; Fan, Youfen; Chen, Shihai; Liu, Qingfeng; Yin, Guoqiang; Liao, Mingde; Sun, Yu

    2014-11-01

    To explore the effect of microneedle combined with Lauromacrogol on the skin capillary network, 24 male Leghorns (1.5-2.0 kg in weight) were randomly divided into three groups: group A (microneedle combined with Lauromacrogol), B (microneedle combined with physiological saline), and C (control). The cockscombs were treated. Specimens were taken on the 7th, 14th, 21st, and 28th day postoperatively. HE staining, immunohistochemical staining and special staining were performed to study the number of capillaries, collagen I/III, and elastic fibers. The color of the cockscombs in group A became lighter after treatment. The number of capillaries decreased, as shown by HE staining. The collagen I and III in group B was significantly different from that in groups A and C (P < 0.05). Microneedle combined with Lauromacrogol could effectively reduce the capillaries in the cockscomb without any tissue fibrosis. Microneedle can stimulate the proliferation of elastic fibers, so as to improve the skin ageing process.

  14. Combined Ozone Retrieval From METOP Sensors Using META-Training Of Deep Neural Networks

    Science.gov (United States)

    Felder, Martin; Sehnke, Frank; Kaifel, Anton

    2013-12-01

    The newest installment of our well-proven Neural Network Ozone Retrieval System (NNORSY) combines the METOP sensors GOME-2 and IASI with cloud information from AVHRR. Through the use of advanced meta-learning techniques like automatic feature selection and automatic architecture search applied to a set of deep neural networks, having at least two or three hidden layers, we have been able to avoid many technical issues normally encountered during the construction of such a joint retrieval system. This has been made possible by harnessing the processing power of modern consumer graphics cards with high performance graphics processors (GPU), which decreases training times by about two orders of magnitude. The system was trained on data from 2009 and 2010, including target ozone profiles from ozone sondes, ACE-FTS and MLS-AURA. To make maximum use of tropospheric information in the spectra, the data were partitioned into several sets of different cloud fraction ranges within the GOME-2 FOV, on which specialized retrieval networks are being trained. For the final ozone retrieval processing, the different specialized networks are combined. The resulting retrieval system is very stable and does not show any systematic dependence on solar zenith angle, scan angle or sensor degradation. We present several sensitivity studies with regard to cloud fraction and target sensor type, as well as the performance in several latitude bands and with respect to independent validation stations. A visual cross-comparison against high-resolution ozone profiles from the KNMI EUMETSAT Ozone SAF product has also been performed and shows some distinctive features which we will briefly discuss. Overall, we demonstrate that a complex retrieval system can now be constructed with a minimum of machine learning knowledge, using automated algorithms for many design decisions previously requiring expert knowledge. Provided sufficient training data and computation power of GPUs is available, the

  15. Regional brain network organization distinguishes the combined and inattentive subtypes of Attention Deficit Hyperactivity Disorder.

    Science.gov (United States)

    Saad, Jacqueline F; Griffiths, Kristi R; Kohn, Michael R; Clarke, Simon; Williams, Leanne M; Korgaonkar, Mayuresh S

    2017-01-01

    Attention Deficit Hyperactivity Disorder (ADHD) is characterized clinically by hyperactive/impulsive and/or inattentive symptoms which determine diagnostic subtypes as Predominantly Hyperactive-Impulsive (ADHD-HI), Predominantly Inattentive (ADHD-I), and Combined (ADHD-C). Neuroanatomically, though, we do not yet know if these clinical subtypes reflect distinct aberrations in underlying brain organization. We imaged 34 ADHD participants defined using DSM-IV criteria as ADHD-I (n = 16) or as ADHD-C (n = 18) and 28 matched typically developing controls, aged 8-17 years, using high-resolution T1 MRI. To quantify neuroanatomical organization we used graph theoretical analysis to assess properties of structural covariance between ADHD subtypes and controls (global network measures: path length, clustering coefficient, and regional network measures: nodal degree). As a context for interpreting network organization differences, we also quantified gray matter volume using voxel-based morphometry. Each ADHD subtype was distinguished by a different organizational profile of the degree to which specific regions were anatomically connected with other regions (i.e., in "nodal degree"). For ADHD-I (compared to both ADHD-C and controls) the nodal degree was higher in the hippocampus. ADHD-I also had a higher nodal degree in the supramarginal gyrus, calcarine sulcus, and superior occipital cortex compared to ADHD-C and in the amygdala compared to controls. By contrast, the nodal degree was higher in the cerebellum for ADHD-C compared to ADHD-I and in the anterior cingulate, middle frontal gyrus and putamen compared to controls. ADHD-C also had reduced nodal degree in the rolandic operculum and middle temporal pole compared to controls. These regional profiles were observed in the context of no differences in gray matter volume or global network organization. Our results suggest that the clinical distinction between the Inattentive and Combined subtypes of ADHD may also be

  16. A signal combining technique based on channel shortening for cooperative sensor networks

    KAUST Repository

    Hussain, Syed Imtiaz; Alouini, Mohamed-Slim; Hasna, Mazen Omar

    2010-01-01

    The cooperative relaying process needs proper coordination among the communicating and the relaying nodes. This coordination and the required capabilities may not be available in some wireless systems, e.g. wireless sensor networks where the nodes are equipped with very basic communication hardware. In this paper, we consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination can capture the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than that of the relaying nodes L. In this case, the receiver can combine N strongest signals out of L. The remaining signals will be lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening. This technique proposes a processing block before the Rake reception which compresses the energy of L signal components over N branches while keeping the noise level at its minimum. The proposed scheme saves the system resources and makes the received signal compatible to the available hardware. Simulation results show that it outperforms the selection combining scheme. ©2010 IEEE.
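
    The reception problem described above can be sketched numerically: with only N Rake fingers, the N strongest of L relayed copies are combined and the remaining copies act as interference. The snippet below illustrates this baseline (selection) situation only, not the proposed channel-shortening block, and the branch powers are randomly generated.

```python
import numpy as np

def selection_sinr(branch_powers, n_fingers, noise_power=1.0):
    """Post-combining SINR when only the N strongest of L copies are collected.

    The uncollected copies are treated as interference, as described above.
    """
    p = np.sort(np.asarray(branch_powers))[::-1]   # strongest first
    captured = p[:n_fingers].sum()                 # energy collected by the Rake fingers
    lost = p[n_fingers:].sum()                     # remaining copies -> interference
    return captured / (noise_power + lost)

rng = np.random.default_rng(2)
powers = rng.exponential(scale=1.0, size=6)        # L = 6 Rayleigh-faded relay paths
print(selection_sinr(powers, n_fingers=3))         # N = 3 fingers
```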

  17. Predicting combined sewer overflows chamber depth using artificial neural networks with rainfall radar data.

    Science.gov (United States)

    Mounce, S R; Shepherd, W; Sailor, G; Shucksmith, J; Saul, A J

    2014-01-01

    Combined sewer overflows (CSOs) represent a common feature in combined urban drainage systems and are used to discharge excess water to the environment during heavy storms. To better understand the performance of CSOs, the UK water industry has installed a large number of monitoring systems that provide data for these assets. This paper presents research into the prediction of the hydraulic performance of CSOs using artificial neural networks (ANN) as an alternative to hydraulic models. Previous work has explored using an ANN model for the prediction of chamber depth using time series for depth and rain gauge data. Rainfall intensity data that can be provided by rainfall radar devices can be used to improve on this approach. Results are presented using real data from a CSO for a catchment in the North of England, UK. An ANN model trained with the pseudo-inverse rule was shown to be capable of predicting CSO depth with less than 5% error for predictions more than 1 hour ahead for unseen data. Such predictive approaches are important to the future management of combined sewer systems.
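
    As a rough illustration of the set-up, the sketch below trains a standard MLP regressor on lagged depth and rainfall-intensity features to predict chamber depth several steps ahead. The data are synthetic and the back-propagation MLP stands in for the pseudo-inverse-trained network used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 500
# Synthetic 15-minute rainfall intensity and a smoothed, noisy "chamber depth"
rain = np.clip(rng.normal(0.0, 1.0, n).cumsum() * 0.01, 0.0, None)
depth = np.convolve(rain, np.ones(6) / 6, mode="same") + 0.05 * rng.normal(size=n)

lags, horizon = 4, 4            # use 4 past samples, predict 4 steps (~1 hour) ahead
rows = n - lags - horizon
X = np.column_stack([depth[i:rows + i] for i in range(lags)] +
                    [rain[i:rows + i] for i in range(lags)])
y = depth[lags + horizon - 1:lags + horizon - 1 + rows]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("held-out R^2:", model.score(X[400:], y[400:]))
```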

  18. A signal combining technique based on channel shortening for cooperative sensor networks

    KAUST Repository

    Hussain, Syed Imtiaz

    2010-06-01

    The cooperative relaying process needs proper coordination among the communicating and the relaying nodes. This coordination and the required capabilities may not be available in some wireless systems, e.g. wireless sensor networks where the nodes are equipped with very basic communication hardware. In this paper, we consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination can capture the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than that of the relaying nodes L. In this case, the receiver can combine N strongest signals out of L. The remaining signals will be lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening. This technique proposes a processing block before the Rake reception which compresses the energy of L signal components over N branches while keeping the noise level at its minimum. The proposed scheme saves the system resources and makes the received signal compatible to the available hardware. Simulation results show that it outperforms the selection combining scheme. ©2010 IEEE.

  19. Equal gain combining for cooperative spectrum sensing in cognitive radio networks

    KAUST Repository

    Hamza, Doha R.

    2014-08-01

    Sensing with equal gain combining (SEGC), a novel cooperative spectrum sensing technique for cognitive radio networks, is proposed. Cognitive radios simultaneously transmit their sensing results to the fusion center (FC) over multipath fading reporting channels. The cognitive radios estimate the phases of the reporting channels and use those estimates for coherent combining of the sensing results at the FC. A global decision is made at the FC by comparing the received signal with a threshold. We obtain the global detection probabilities and secondary throughput exactly through a moment generating function approach. We verify our solution via system simulation and demonstrate that the Chernoff bound and central limit theory approximation are not tight. The cases of hard sensing and soft sensing are considered and we provide examples in which hard sensing is advantageous over soft sensing. We contrast the performance of SEGC with maximum ratio combining of the sensors' results and provide examples where the former is superior. Furthermore, we evaluate the performance of SEGC against existing orthogonal reporting techniques such as time division multiple access (TDMA). SEGC performance always dominates that of TDMA in terms of secondary throughput. We also study the impact of phase and synchronization errors and demonstrate the robustness of the SEGC technique against such imperfections. © 2002-2012 IEEE.
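
    A small Monte-Carlo sketch of the equal-gain combining idea at the fusion centre: each radio's sensing statistic arrives over a fading channel, the FC co-phases the received signals using the estimated channel phases, sums them and thresholds the result. All parameters (number of radios, noise level, threshold) are illustrative, not those analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_radios, n_trials, threshold = 5, 20_000, 3.0
detections = 0
for _ in range(n_trials):
    local_stat = 1.0 + 0.3 * rng.normal(size=n_radios)    # local sensing statistics (PU present)
    # Rayleigh-fading reporting channels and receiver noise at the fusion centre
    h = (rng.normal(size=n_radios) + 1j * rng.normal(size=n_radios)) / np.sqrt(2.0)
    noise = 0.2 * (rng.normal(size=n_radios) + 1j * rng.normal(size=n_radios))
    received = h * local_stat + noise
    # Equal-gain combining: remove the estimated channel phase, then sum and threshold
    combined = np.sum(np.exp(-1j * np.angle(h)) * received).real
    detections += combined > threshold
print("estimated detection probability:", detections / n_trials)
```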

  20. Optimal Seamline Detection for Orthoimage Mosaicking by Combining Deep Convolutional Neural Network and Graph Cuts

    Directory of Open Access Journals (Sweden)

    Li Li

    2017-07-01

    Full Text Available When mosaicking orthoimages, especially in urban areas with various obvious ground objects like buildings, roads, cars or trees, the detection of optimal seamlines is one of the key technologies for creating seamless and pleasant image mosaics. In this paper, we propose a new approach to detect optimal seamlines for orthoimage mosaicking with the use of a deep convolutional neural network (CNN) and graph cuts. Deep CNNs have been widely used in many fields of computer vision and photogrammetry in recent years, and graph cuts is one of the most widely used energy optimization frameworks. We first propose a deep CNN for land cover semantic segmentation in overlap regions between two adjacent images. Then, the energy cost of each pixel in the overlap regions is defined based on the classification probabilities of belonging to each of the specified classes. To find the optimal seamlines globally, we fuse the CNN-classified energy costs of all pixels into the graph cuts energy minimization framework. The main advantage of our proposed method is that the pixel similarity energy costs between two images are defined using the classification results of the CNN based semantic segmentation instead of using the image information of color, gradient or texture as traditional methods do. Another advantage of our proposed method is that the semantic information is fully used to guide the process of optimal seamline detection, which is more reasonable than only using hand-designed features defined to represent the image differences. Finally, the experimental results on several groups of challenging orthoimages show that the proposed method is capable of finding high-quality seamlines among urban and non-urban orthoimages, and outperforms the state-of-the-art algorithms and the commercial software based on visual comparison, statistical evaluation and quantitative evaluation based on the structural similarity (SSIM) index.

  1. Efficacy Comparison of Six Chemotherapeutic Combinations for Osteosarcoma and Ewing's Sarcoma Treatment: A Network Meta-Analysis.

    Science.gov (United States)

    Zhang, Tao; Zhang, Song; Yang, Feifei; Wang, Lili; Zhu, Sigang; Qiu, Bing; Li, Shunhua; Deng, Zhongliang

    2018-01-01

    This study aimed to address the insufficiency of traditional meta-analysis and provide improved guidelines for the clinical practice of osteosarcoma treatment. The heterogeneity of the fixed-effect model was calculated, and when necessary, a random-effect model was adopted. Furthermore, the direct and indirect evidence was pooled together and exhibited in the forest plot and slash table. The surface under the cumulative ranking curve (SUCRA) value was also measured to rank each intervention. Finally, a heat plot was introduced to demonstrate the contribution of each intervention and the inconsistency between direct and indirect comparisons. This network meta-analysis included 32 trials, involving a total of 5,626 subjects reported by 28 articles. All the treatments were classified into six chemotherapeutic combinations: dual agent with or without ifosfamide (IFO), multi-agent with or without IFO, and dual agent or multi-agent with IFO and etoposide. For the primary outcomes, both overall survival (OS) and event-free survival (EFS) rates were considered. The multi-agent regimen integrated with IFO and etoposide showed optimal performance for 5-year OS, 10-year OS, 3-year EFS, 5-year EFS, and 10-year EFS when compared with placebo. The SUCRA value of this treatment was also the highest of the six interventions. However, multi-drug with IFO alone had the highest SUCRA values of 0.652 and 0.516 for relapse and lung metastasis, respectively. It was efficient to some extent, but no significant difference was observed in either outcome. Chemotherapy, applied as induction or adjuvant treatment with radiation therapy or surgery, is able to increase the survival rate of patients, especially by combining multi-drug with IFO and etoposide, which demonstrated the best performance in both OS and EFS. As for relapse and lung metastasis, multiple agents with IFO alone seemed to have the optimal efficiency, although no significant difference was observed here. J. Cell. Biochem. 119: 250

  2. Combined Model of Intrinsic and Extrinsic Variability for Computational Network Design with Application to Synthetic Biology

    Science.gov (United States)

    Toni, Tina; Tidor, Bruce

    2013-01-01

    Biological systems are inherently variable, with their dynamics influenced by intrinsic and extrinsic sources. These systems are often only partially characterized, with large uncertainties about specific sources of extrinsic variability and biochemical properties. Moreover, it is not yet well understood how different sources of variability combine and affect biological systems in concert. To successfully design biomedical therapies or synthetic circuits with robust performance, it is crucial to account for uncertainty and effects of variability. Here we introduce an efficient modeling and simulation framework to study systems that are simultaneously subject to multiple sources of variability, and apply it to make design decisions on small genetic networks that play a role of basic design elements of synthetic circuits. Specifically, the framework was used to explore the effect of transcriptional and post-transcriptional autoregulation on fluctuations in protein expression in simple genetic networks. We found that autoregulation could either suppress or increase the output variability, depending on specific noise sources and network parameters. We showed that transcriptional autoregulation was more successful than post-transcriptional in suppressing variability across a wide range of intrinsic and extrinsic magnitudes and sources. We derived the following design principles to guide the design of circuits that best suppress variability: (i) high protein cooperativity and low miRNA cooperativity, (ii) imperfect complementarity between miRNA and mRNA was preferred to perfect complementarity, and (iii) correlated expression of mRNA and miRNA – for example, on the same transcript – was best for suppression of protein variability. Results further showed that correlations in kinetic parameters between cells affected the ability to suppress variability, and that variability in transient states did not necessarily follow the same principles as variability in the steady

  3. Combining neural networks and signed particles to simulate quantum systems more efficiently

    Science.gov (United States)

    Sellier, Jean Michel

    2018-04-01

    Recently a new formulation of quantum mechanics has been suggested which describes systems by means of ensembles of classical particles provided with a sign. This novel approach mainly consists of two steps: the computation of the Wigner kernel, a multi-dimensional function describing the effects of the potential over the system, and the field-less evolution of the particles which eventually create new signed particles in the process. Although this method has proved to be extremely advantageous in terms of computational resources - as a matter of fact it is able to simulate in a time-dependent fashion many-body systems on relatively small machines - the Wigner kernel can represent the bottleneck of simulations of certain systems. Moreover, storing the kernel can be another issue as the amount of memory needed is cursed by the dimensionality of the system. In this work, we introduce a new technique which drastically reduces the computation time and memory requirement to simulate time-dependent quantum systems, based on the use of an appropriately tailored neural network combined with the signed particle formalism. In particular, the suggested neural network is able to compute efficiently and reliably the Wigner kernel without any training, as its entire set of weights and biases is specified by analytical formulas. As a consequence, the amount of memory for quantum simulations radically drops since the kernel does not need to be stored anymore, as it is now computed by the neural network itself, only on the cells of the (discretized) phase-space which are occupied by particles. As is clearly shown in the final part of this paper, not only does this novel approach drastically reduce the computational time, it also remains accurate. The author believes this work opens the way towards effective design of quantum devices, with incredible practical implications.

  4. A Novel approach for predicting monthly water demand by combining singular spectrum analysis with neural networks

    Science.gov (United States)

    Zubaidi, Salah L.; Dooley, Jayne; Alkhaddar, Rafid M.; Abdellatif, Mawada; Al-Bugharbee, Hussein; Ortega-Martorell, Sandra

    2018-06-01

    Valid and dependable water demand prediction is a major element of the effective and sustainable expansion of municipal water infrastructures. This study provides a novel approach to quantifying water demand through the assessment of climatic factors, using a combination of a signal pre-treatment technique, a hybrid particle swarm optimisation algorithm and an artificial neural network (PSO-ANN). The Singular Spectrum Analysis (SSA) technique was adopted to decompose and reconstruct water consumption in relation to six weather variables, to create a seasonal and stochastic time series. The results revealed that SSA is a powerful technique, capable of decomposing the original time series into many independent components including trend, oscillatory behaviours and noise. In addition, the PSO-ANN algorithm was shown to be a reliable prediction model, outperforming the hybrid Backtracking Search Algorithm (BSA-ANN) in terms of fitness function (RMSE). The findings of this study also support the view that water demand is driven by climatological variables.
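
    A basic SSA decomposition can be sketched in a few lines: embed the series in a trajectory matrix, take its SVD, and reconstruct a smoothed component by diagonal (Hankel) averaging of the leading elementary matrices. The window length, number of components and synthetic demand series below are illustrative; the PSO-ANN stage is not shown.

```python
import numpy as np

def ssa_reconstruct(series, window, n_components):
    """Reconstruct the leading SSA components of a 1-D series."""
    x = np.asarray(series, dtype=float)
    n = x.size
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])     # embedding
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    recon, counts = np.zeros(n), np.zeros(n)
    for i in range(window):                                         # diagonal (Hankel) averaging
        for j in range(k):
            recon[i + j] += approx[i, j]
            counts[i + j] += 1
    return recon / counts

t = np.arange(120)
rng = np.random.default_rng(4)
demand = 10.0 + 2.0 * np.sin(2.0 * np.pi * t / 12.0) + rng.normal(0.0, 0.5, t.size)
smooth = ssa_reconstruct(demand, window=24, n_components=3)          # trend + seasonal part
print(smooth[:5])
```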

  5. A Neural Network Combined Inverse Controller for a Two-Rear-Wheel Independently Driven Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Duo Zhang

    2014-07-01

    Full Text Available Vehicle active safety control is attracting ever-increasing attention in the attempt to improve the stability and the maneuverability of electric vehicles. In this paper, a neural network combined inverse (NNCI) controller is proposed, incorporating the merits of left-inversion and right-inversion. The left-inversion soft-sensor can estimate the sideslip angle, while the right-inversion is utilized to decouple control. Then, the proposed NNCI controller not only linearizes and decouples the original nonlinear system, but also directly obtains immeasurable state feedback in constructing the right-inversion. Hence, the proposed controller is very practical in engineering applications. The proposed system is co-simulated based on the vehicle simulation package CarSim in connection with Matlab/Simulink. The results verify the effectiveness of the proposed control strategy.

  6. Combining Quality of Service and Topology Control in Directional Hybrid Wireless Networks

    National Research Council Canada - National Science Library

    Erwin, Michael C

    2006-01-01

    .... This thesis establishes a foundation for the definition and consideration of the unique network characteristics and requirements introduced by this novel instance of the Network Design Problem (NDP...

  7. Combined Sector and Channel Hopping Schemes for Efficient Rendezvous in Directional Antenna Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    AbdulMajid M. Al-Mqdashi

    2017-01-01

    Full Text Available Rendezvous is a prerequisite and important process for secondary users (SUs) to establish data communications in cognitive radio networks (CRNs). Recently, there has been a proliferation of different channel hopping (CH)-based schemes that can provide rendezvous without relying on any predetermined common control channel. However, the existing CH schemes were designed with omnidirectional antennas, which can degrade their rendezvous performance when applied in CRNs that are highly crowded with primary users (PUs). In such networks, the large number of PUs may lead to the inexistence of any common available channel between neighboring SUs, which results in a failure of their rendezvous process. In this paper, we consider the utilization of directional antennas in CRNs for tackling the issue. Firstly, we propose two coprimality-based sector hopping (SH) schemes that can provide efficient pairwise sector rendezvous in directional antenna CRNs (DIR-CRNs). Then, we propose an efficient CH scheme that can be combined with the SH schemes for providing simultaneous sector and channel rendezvous. The guaranteed rendezvous of our schemes is proven by deriving the theoretical upper bounds of their rendezvous delay metrics. Furthermore, extensive simulation comparisons with other related rendezvous schemes are conducted to illustrate the significant outperformance of our schemes.

  8. Uncertainty assessment in geodetic network adjustment by combining GUM and Monte-Carlo-simulations

    Science.gov (United States)

    Niemeier, Wolfgang; Tengen, Dieter

    2017-06-01

    In this article, first ideas are presented to extend the classical concept of geodetic network adjustment by introducing a new method for uncertainty assessment as a two-step analysis. In the first step, the raw data and possible influencing factors are analyzed using uncertainty modeling according to GUM (Guidelines to the Expression of Uncertainty in Measurements). This approach is well established in metrology, but rarely adapted within geodesy. The second step consists of Monte-Carlo simulations (MC simulations) for the complete processing chain from raw input data and pre-processing to adjustment computations and quality assessment. To perform these simulations, possible realizations of raw data and the influencing factors are generated, using probability distributions for all variables and the established concept of pseudo-random number generators. The final result is a point cloud which represents the uncertainty of the estimated coordinates; a confidence region can be assigned to these point clouds as well. This concept may replace the common concept of variance propagation and the quality assessment of adjustment parameters by using their covariance matrix. It allows a new way of uncertainty assessment in accordance with the GUM concept for uncertainty modelling and propagation. As a practical example, the local tie network in the "Metsähovi Fundamental Station", Finland is used, where classical geodetic observations are combined with GNSS data.
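
    A hedged sketch of the two-step idea: per-observation standard uncertainties (the GUM budget) drive a Monte-Carlo loop in which perturbed observations are re-adjusted by least squares, yielding a point cloud of coordinate estimates from which an empirical covariance or confidence region can be derived. The tiny two-unknown network below is invented, not the Metsähovi local tie network.

```python
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 1.0]])   # toy network with two unknowns
l = np.array([10.02, 25.01, 14.97])                   # observed values
u = np.array([0.01, 0.01, 0.02])                      # GUM-style standard uncertainties

def adjust(obs):
    """Ordinary weighted least-squares adjustment for one set of observations."""
    W = np.diag(1.0 / u**2)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ obs)

# Monte-Carlo step: perturb the observations according to their uncertainty
# budget, re-adjust, and collect the resulting coordinate point cloud.
cloud = np.array([adjust(l + rng.normal(0.0, u)) for _ in range(5000)])
print("mean coordinates:", cloud.mean(axis=0))
print("empirical covariance:\n", np.cov(cloud, rowvar=False))
```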

  9. Conflict detection and resolution rely on a combination of common and distinct cognitive control networks.

    Science.gov (United States)

    Li, Qi; Yang, Guochun; Li, Zhenghan; Qi, Yanyan; Cole, Michael W; Liu, Xun

    2017-12-01

    Cognitive control can be activated by stimulus-stimulus (S-S) and stimulus-response (S-R) conflicts. However, whether cognitive control is domain-general or domain-specific remains unclear. To deepen the understanding of the functional organization of cognitive control networks, we conducted activation likelihood estimation (ALE) from 111 neuroimaging studies to examine brain activation in conflict-related tasks. We observed that fronto-parietal and cingulo-opercular networks were commonly engaged by S-S and S-R conflicts, showing a domain-general pattern. In addition, S-S conflicts specifically activated distinct brain regions to a greater degree. These regions were implicated in the processing of the semantic-relevant attribute, including the inferior frontal cortex (IFC), superior parietal cortex (SPC), superior occipital cortex (SOC), and right anterior cingulate cortex (ACC). By contrast, S-R conflicts specifically activated the left thalamus, middle frontal cortex (MFC), and right SPC, which were associated with detecting response conflict and orienting spatial attention. These findings suggest that conflict detection and resolution involve a combination of domain-general and domain-specific cognitive control mechanisms. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Who runs public health? A mixed-methods study combining qualitative and network analyses.

    Science.gov (United States)

    Oliver, Kathryn; de Vocht, Frank; Money, Annemarie; Everett, Martin

    2013-09-01

    Persistent health inequalities encourage researchers to identify new ways of understanding the policy process. Informal relationships are implicated in finding evidence and making decisions for public health policy (PHP), but few studies use specialized methods to identify key actors in the policy process. We combined network and qualitative data to identify the most influential individuals in PHP in a UK conurbation and describe their strategies to influence policy. Network data were collected by asking for nominations of powerful and influential people in PHP (n = 152, response rate 80%), and 23 semi-structured interviews were analysed using a framework approach. The most influential PHP makers in this conurbation were mid-level managers in the National Health Service and local government, characterized by managerial skills: controlling policy processes through gate keeping key organizations, providing policy content and managing selected experts and executives to lead on policies. Public health professionals and academics are indirectly connected to policy via managers. The most powerful individuals in public health are managers, not usually considered targets for research. As we show, they are highly influential through all stages of the policy process. This study shows the importance of understanding the daily activities of influential policy individuals.

  11. Enhanced activation of motor execution networks using action observation combined with imagination of lower limb movements.

    Directory of Open Access Journals (Sweden)

    Michael Villiger

    Full Text Available The combination of first-person observation and motor imagery, i.e. first-person observation of limbs with online motor imagination, is commonly used in interactive 3D computer gaming and in some movie scenes. These scenarios are designed to induce a cognitive process in which a subject imagines himself/herself acting as the agent in the displayed movement situation. Despite the ubiquity of this type of interaction and its therapeutic potential, its relationship to passive observation and imitation during observation has not been directly studied using an interactive paradigm. In the present study we show activation resulting from observation, coupled with online imagination and with online imitation of a goal-directed lower limb movement, using functional MRI (fMRI) in a mixed block/event-related design. Healthy volunteers viewed a video (first-person perspective) of a foot kicking a ball. They were instructed to observe-only the action (O), observe and simultaneously imagine performing the action (O-MI), or imitate the action (O-IMIT). We found that when O-MI was compared to O, activation was enhanced in the ventral premotor cortex bilaterally, left inferior parietal lobule and left insula. The O-MI and O-IMIT conditions shared many activation foci in motor relevant areas as confirmed by conjunction analysis. These results show that (i) combining observation with motor imagery (O-MI) enhances activation compared to observation-only (O) in the relevant foot motor network and in regions responsible for attention, for control of goal-directed movements and for the awareness of causing an action, and (ii) it is possible to extensively activate the motor execution network using O-MI, even in the absence of overt movement. Our results may have implications for the development of novel virtual reality interactions for neurorehabilitation interventions and other applications involving training of motor tasks.

  12. Entropy based classifier for cross-domain opinion mining

    Directory of Open Access Journals (Sweden)

    Jyoti S. Deshmukh

    2018-01-01

    Full Text Available In recent years, the growth of social networks has increased people's interest in analyzing reviews and opinions about products before they buy them. Consequently, this has given rise to domain adaptation as a prominent area of research in sentiment analysis. A classifier trained on one domain often gives poor results on data from another domain, because the expression of sentiment is different in every domain. Labeling each domain separately is very expensive as well as time consuming. Therefore, this study proposes an approach that extracts and classifies opinion words from one domain, called the source domain, and predicts opinion words of another domain, called the target domain, using a semi-supervised approach which combines modified maximum entropy and bipartite graph clustering. A comparison of opinion classification on reviews from four different product domains is presented. The results demonstrate that the proposed method performs relatively well in comparison to the other methods. Comparison of SentiWordNet classification of domain-specific and domain-independent words reveals that on average 72.6% and 88.4% of words, respectively, are correctly classified.
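
    A sketch of the maximum-entropy part only: a logistic-regression (maximum-entropy) text classifier trained on a labelled source domain and applied to an unlabelled target domain. The bipartite-graph clustering and SentiWordNet steps are omitted, and the reviews below are invented examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labelled source-domain reviews (invented examples; 1 = positive, 0 = negative)
source_reviews = ["battery life is great", "screen broke quickly",
                  "fast shipping, works well", "stopped working after a week"]
source_labels = [1, 0, 1, 0]

# Maximum-entropy classifier = logistic regression over word/bigram features
maxent = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                       LogisticRegression(max_iter=1000))
maxent.fit(source_reviews, source_labels)

# Apply the source-trained model to a different (target) domain
target_reviews = ["the plot was great", "boring book, stopped reading after a week"]
print(maxent.predict(target_reviews))
```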

  13. Combining Pathway Identification and Breast Cancer Survival Prediction via Screening-Network Methods

    Directory of Open Access Journals (Sweden)

    Antonella Iuliano

    2018-06-01

    Full Text Available Breast cancer is one of the most common invasive tumors causing high mortality among women. It is characterized by high heterogeneity regarding its biological and clinical characteristics. Several high-throughput assays have been used to collect genome-wide information for many patients in large collaborative studies. This knowledge has improved our understanding of its biology and led to new methods of diagnosing and treating the disease. In particular, systems biology has become a valid approach to obtain better insights into breast cancer biological mechanisms. A crucial component of current research lies in identifying novel biomarkers that can be predictive for breast cancer patient prognosis on the basis of the molecular signature of the tumor sample. However, the high dimension and low sample size of the data greatly increase the difficulty of cancer survival analysis, demanding the development of ad-hoc statistical methods. In this work, we propose novel screening-network methods that predict patient survival outcome by screening key survival-related genes, and we assess the capability of the proposed approaches using the METABRIC dataset. In particular, we first identify a subset of genes by using variable screening techniques on gene expression data. Then, we perform Cox regression analysis by incorporating network information associated with the selected subset of genes. The novelty of this work consists in the improved prediction of survival responses due to the different types of screenings (i.e., a biomedical-driven, a data-driven and a combination of the two) applied before building the network-penalized model. Indeed, the combination of the two screening approaches allows us to use the available biological knowledge on breast cancer and complement it with additional information emerging from the data used for the analysis. Moreover, we also illustrate how to extend the proposed approaches to integrate an additional omic layer, such as copy number

  14. Combining structure, governance and context : A configurational approach to network effectiveness

    NARCIS (Netherlands)

    Raab, J.; Mannak, R.S.; Cambré, B.

    2015-01-01

    This study explores the way in which network structure (network integration), network context (resource munificence and stability), and network governance mode relate to network effectiveness. The model by Provan and Milward (Provan, Keith G., and H. Brinton Milward. 1995. A preliminary theory of

  15. Towards Aiding Decision-Making in Social Networks by Using Sentiment and Stress Combined Analysis

    Directory of Open Access Journals (Sweden)

    Guillem Aguado

    2018-05-01

    Full Text Available The present work is a study of the detection of negative emotional states that people have when using social network sites (SNSs), and the effect that this negative state has on the repercussions of posted messages. We aim to discover to what degree a user whose affective state is considered negative by an Analyzer can affect other users and generate bad repercussions. The Analyzers that we propose are a Sentiment Analyzer, a Stress Analyzer and a novel combined Analyzer. We also want to discover which Analyzer is more suitable for predicting a bad future situation, and in what context. We designed a Multi-Agent System (MAS) that uses different Analyzers to protect or advise users. This MAS uses the trained and tested Analyzers to predict future bad situations in social media, which could be triggered by the actions of a user whose emotional state is considered negative. We conducted experiments with different datasets of text messages from Twitter.com to examine the ability of the system to predict bad repercussions, by comparing the polarity, stress-level or combined classification of reply messages with that of the messages that originated them.

  16. Combination Adaptive Traffic Algorithm and Coordinated Sleeping in Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    M. Udin Harun Al Rasyid

    2014-12-01

    Full Text Available Wireless sensor networks (WSNs) use a battery as their primary power source, so their operation is limited by the available battery power. A WSN should therefore save energy in order to operate for a long time. WSNs have the potential to be the future of wireless communication solutions: they are small but offer a variety of functions that can help human life, carry a wide variety of sensors and can communicate quickly, making it easier for people to obtain information accurately and quickly. In this study, we combine an adaptive traffic algorithm and coordinated sleeping as a power-efficient WSN solution. We compared the performance of the proposed combination of adaptive traffic and coordinated sleeping algorithms with a non-adaptive scheme. From the simulation results, the proposed idea achieves good-quality data transmission and is more efficient in energy consumption, but it has a higher delay than the non-adaptive scheme. Keywords: WSN, adaptive traffic, coordinated sleeping, beacon order, superframe order.

  17. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques.

    Directory of Open Access Journals (Sweden)

    Hazlee Azil Illias

    Full Text Available It is important to predict the incipient fault in transformer oil accurately so that the maintenance of transformer oil can be performed correctly, reducing the cost of maintenance and minimising the error. Dissolved gas analysis (DGA) has been widely used to predict the incipient fault in power transformers. However, sometimes the existing DGA methods yield inaccurate prediction of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict the transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method and ANN alone. Comparison of the results from the proposed methods with the previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields the highest percentage of correct identification of transformer fault type compared with the existing diagnosis method and previously reported works.
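
    The ANN-PSO idea can be sketched with a tiny one-hidden-layer network whose weights are optimised by a standard global-best PSO instead of back-propagation. The data are synthetic stand-ins for normalised dissolved-gas features, and this is plain PSO rather than the evolutionary variant the authors found best.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 3))                       # stand-in for normalised gas ratios
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(float)

dim = 3 * 5 + 5 + 5 + 1                            # weights + biases of a 3-5-1 network

def loss(p):
    """Mean-squared error of the tiny network with parameter vector p."""
    w1, b1 = p[:15].reshape(3, 5), p[15:20]
    w2, b2 = p[20:25].reshape(5, 1), p[25]
    hidden = np.tanh(X @ w1 + b1)
    out = 1.0 / (1.0 + np.exp(-((hidden @ w2).ravel() + b2)))
    return np.mean((out - y) ** 2)

n_particles, iters = 30, 200
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()
print("final training MSE:", loss(gbest))
```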

  18. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques.

    Science.gov (United States)

    Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie

    2015-01-01

    It is important to predict the incipient fault in transformer oil accurately so that the maintenance of transformer oil can be performed correctly, reducing the cost of maintenance and minimising the error. Dissolved gas analysis (DGA) has been widely used to predict the incipient fault in power transformers. However, sometimes the existing DGA methods yield inaccurate prediction of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict the transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method and ANN alone. Comparison of the results from the proposed methods with the previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields the highest percentage of correct identification of transformer fault type compared with the existing diagnosis method and previously reported works.

  19. Combining Bayesian Networks and Agent Based Modeling to develop a decision-support model in Vietnam

    Science.gov (United States)

    Nong, Bao Anh; Ertsen, Maurits; Schoups, Gerrit

    2016-04-01

    Complexity and uncertainty in natural resources management have been focus themes in recent years. Within these debates, with the aim of defining an approach feasible for water management practice, we are developing an integrated conceptual modeling framework for simulating the decision-making processes of citizens, in our case in the Day river area, Vietnam. The model combines Bayesian Networks (BNs) and Agent-Based Modeling (ABM). BNs are able to combine both qualitative data from consultants / experts / stakeholders, and quantitative data from observations of different phenomena or outcomes from other models. Further strengths of BNs are that the relationships between variables in the system are presented in a graphical interface, and that components of uncertainty are explicitly related to their probabilistic dependencies. A disadvantage is that BNs cannot easily identify the feedback of agents in the system once changes appear. Hence, ABM was adopted to represent the reactions among stakeholders under changes. The modeling framework is developed as an attempt to gain a better understanding of citizens' behavior and the factors influencing their decisions, in order to reduce uncertainty in the implementation of water management policy.

  20. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques

    Science.gov (United States)

    2015-01-01

    It is important to predict the incipient fault in transformer oil accurately so that the maintenance of transformer oil can be performed correctly, reducing the cost of maintenance and minimising the error. Dissolved gas analysis (DGA) has been widely used to predict the incipient fault in power transformers. However, sometimes the existing DGA methods yield inaccurate prediction of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict the transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method and ANN alone. Comparison of the results from the proposed methods with the previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields the highest percentage of correct identification of transformer fault type compared with the existing diagnosis method and previously reported works. PMID:26103634

  1. Combined application of mixture experimental design and artificial neural networks in the solid dispersion development.

    Science.gov (United States)

    Medarević, Djordje P; Kleinebudde, Peter; Djuriš, Jelena; Djurić, Zorica; Ibrić, Svetlana

    2016-01-01

    This study demonstrates, for the first time, the combined application of mixture experimental design and artificial neural networks (ANNs) in the development of solid dispersions (SDs). Ternary carbamazepine-Soluplus®-poloxamer 188 SDs were prepared by the solvent casting method to improve the carbamazepine dissolution rate. The influence of the composition of the prepared SDs on the carbamazepine dissolution rate was evaluated using d-optimal mixture experimental design and multilayer perceptron ANNs. Physicochemical characterization proved the presence of the most stable carbamazepine polymorph III within the SD matrix. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs significantly improved the carbamazepine dissolution rate compared to the pure drug. Models developed by ANNs and mixture experimental design described well the relationship between the proportions of SD components and the percentage of carbamazepine released after 10 (Q10) and 20 (Q20) min, wherein the ANN model exhibited better predictability on the test data set. The proportions of carbamazepine and poloxamer 188 exhibited the highest influence on the carbamazepine release rate. The highest carbamazepine release rate was observed for SDs with the lowest proportions of carbamazepine and the highest proportions of poloxamer 188. ANNs and mixture experimental design can be used as powerful data modeling tools in the systematic development of SDs. Taking into account the advantages and disadvantages of both techniques, their combined application should be encouraged.

  2. KeyPathwayMiner - De-novo network enrichment by combining multiple OMICS data and biological networks

    DEFF Research Database (Denmark)

    Baumbach, Jan; Alcaraz, Nicolas; Pauling, Josch K.

    We tackle the problem of de-novo pathway extraction. Given a biological network and a set of case-control studies, KeyPathwayMiner efficiently extracts and visualizes all maximal connected sub-networks that contain mainly genes that are dysregulated, e.g., differentially expressed, in most cases ...

  3. Communication Behaviour-Based Big Data Application to Classify and Detect HTTP Automated Software

    Directory of Open Access Journals (Sweden)

    Manh Cong Tran

    2016-01-01

    Full Text Available HTTP is recognized as the most widely used protocol on the Internet as more and more applications are moved by developers onto the web. Due to increasingly complex computer systems, diverse HTTP automated software (autoware) thrives. Unfortunately, besides normal autoware, HTTP malware and greyware are also spreading rapidly in the web environment. Consequently, network communication is no longer rigorously controlled by users' intentions. This raises the demand for analyzing HTTP autoware communication behaviour to detect and classify malicious and normal activities via HTTP traffic. Hence, in this paper, based on many studies and analysis of autoware communication behaviour through access graphs, a new method to detect and classify HTTP autoware communication at the network level is presented. The proposed system combines Hadoop MapReduce and a MarkLogic NoSQL database, along with XQuery, to deal with the huge HTTP traffic generated each day in a large network. The method is examined with real outbound HTTP traffic data collected through a proxy server of a private network. Experimental results for the proposed method show promising outcomes, with 95.1% of suspicious autoware classified and detected. This finding may assist network and system administrators in inspecting early the internal threats caused by HTTP autoware.

  4. Gearbox Condition Monitoring Using Advanced Classifiers

    Directory of Open Access Journals (Sweden)

    P. Večeř

    2010-01-01

    Full Text Available New efficient and reliable methods for gearbox diagnostics are needed in the automotive industry because of the growing demand for production quality. This paper presents the application of two different classifiers for gearbox diagnostics – Kohonen Neural Networks and the Adaptive-Network-based Fuzzy Inference System (ANFIS). Two practical applications are presented. In the first application, the tested gearboxes are separated into two classes according to their condition indicators. In the second example, ANFIS is applied to label the tested gearboxes with a Quality Index according to the condition indicators. In both applications, the condition indicators were computed from the vibration of the gearbox housing.

  5. Combining ground-based and airborne EM through Artificial Neural Networks for modelling glacial till under saline groundwater conditions

    DEFF Research Database (Denmark)

    Gunnink, J.L.; Bosch, A.; Siemon, B.

    2012-01-01

    Airborne electromagnetic (AEM) methods supply data over large areas in a cost-effective way. We used Artificial Neural Networks (ANN) to classify the geophysical signal into a meaningful geological parameter. By using examples of known relations between ground-based geophysical data (in this case...... electrical conductivity, EC, from electrical cone penetration tests) and geological parameters (presence of glacial till), we extracted learning rules that could be applied to map the presence of a glacial till using the EC profiles from the airborne EM data. The saline groundwater in the area was obscuring...

  6. Detecting and classifying faults on transmission systems using a backpropagation neural network; Deteccion y clasificacion de fallas en sistemas de transmision empleando una red neuronal con retropropagacion del error

    Energy Technology Data Exchange (ETDEWEB)

    Rosas Ortiz, German

    2000-01-01

    Fault detection and diagnosis on transmission systems is an interesting area of investigation for Artificial Intelligence (AI)-based systems. Neurocomputing is one of the fastest growing areas of research in the fields of AI and pattern recognition. This work explores the suitability of the pattern recognition approach of neural networks for fault detection and classification on power systems. The conventional detection techniques in modern relays are based on digital signal processing and need some time (around one cycle) to send a tripping signal; they are also likely to make incorrect decisions if the signals are noisy. It is desirable to develop a fast, accurate and robust approach that performs accurately under changing system conditions (such as load variations and fault resistance). The aim of this work is to develop a novel technique based on Artificial Neural Networks (ANN) that explores the suitability of a pattern classification approach for fault detection and diagnosis. The suggested approach is based on the fact that when a fault occurs, the system impedance changes and, as a consequence, the amplitude and phase of the line voltage and current signals change as well. The ANN-based fault discriminator is trained to detect these changes as indicators of the instant of fault inception. This detector uses instantaneous values of these signals to make decisions. The suitability of neural networks as pattern classifiers for transmission system fault diagnosis is described in detail, and a neural network design and simulation environment for real-time operation is presented. Results showing the performance of this approach indicate that it is fast, secure and accurate enough to be used in high-speed fault detection and classification schemes. [Spanish] Fault diagnosis and detection in transmission systems is an area of research interest for Artificial Intelligence (AI)-based systems. Neural computation

  7. The benefit of combining a deep neural network architecture with ideal ratio mask estimation in computational speech segregation to improve speech intelligibility.

    Science.gov (United States)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail A; Dau, Torsten

    2018-01-01

    Computational speech segregation attempts to automatically separate speech from noise. This is challenging in conditions with interfering talkers and low signal-to-noise ratios. Recent approaches have adopted deep neural networks and successfully demonstrated speech intelligibility improvements. A selection of components may be responsible for the success with these state-of-the-art approaches: the system architecture, a time frame concatenation technique and the learning objective. The aim of this study was to explore the roles and the relative contributions of these components by measuring speech intelligibility in normal-hearing listeners. A substantial improvement of 25.4 percentage points in speech intelligibility scores was found going from a subband-based architecture, in which a Gaussian Mixture Model-based classifier predicts the distributions of speech and noise for each frequency channel, to a state-of-the-art deep neural network-based architecture. Another improvement of 13.9 percentage points was obtained by changing the learning objective from the ideal binary mask, in which individual time-frequency units are labeled as either speech- or noise-dominated, to the ideal ratio mask, where the units are assigned a continuous value between zero and one. Therefore, both components play significant roles and by combining them, speech intelligibility improvements were obtained in a six-talker condition at a low signal-to-noise ratio.
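
    The two learning objectives contrasted in the abstract can be written down directly. The sketch below computes an ideal binary mask and a power-ratio form of the ideal ratio mask from synthetic speech and noise magnitudes; the 0 dB criterion and the spectrogram shapes are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy magnitude spectrograms (frequency bins x time frames) for speech and noise.
speech_mag = rng.rayleigh(1.0, size=(64, 100))
noise_mag = rng.rayleigh(1.0, size=(64, 100))

local_snr_db = 20 * np.log10(speech_mag / noise_mag)

# Ideal binary mask: each time-frequency unit labelled speech- (1) or
# noise-dominated (0) against a local SNR criterion (0 dB here).
ibm = (local_snr_db > 0.0).astype(float)

# Ideal ratio mask: a continuous value in [0, 1] per unit,
# here the power ratio speech^2 / (speech^2 + noise^2).
irm = speech_mag**2 / (speech_mag**2 + noise_mag**2)

print("IBM speech-dominated fraction:", round(float(ibm.mean()), 3))
print("IRM range:", float(irm.min()), float(irm.max()))
```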

  8. Classifier Fusion With Contextual Reliability Evaluation.

    Science.gov (United States)

    Liu, Zhunga; Pan, Quan; Dezert, Jean; Han, Jun-Wei; He, You

    2018-05-01

    Classifier fusion is an efficient strategy to improve the classification performance for complex pattern recognition problems. In practice, the multiple classifiers to combine can have different reliabilities, and proper reliability evaluation plays an important role in the fusion process for obtaining the best classification performance. We propose a new method for classifier fusion with contextual reliability evaluation (CF-CRE) based on inner reliability and relative reliability concepts. The inner reliability, represented by a matrix, characterizes the probability of the object belonging to one class when it is classified to another class. The elements of this matrix are estimated from the K-nearest neighbors of the object. A cautious discounting rule is developed under the belief functions framework to revise the classification result according to the inner reliability. The relative reliability is evaluated based on a new incompatibility measure, which allows the level of conflict between the classifiers to be reduced by applying the classical evidence discounting rule to each classifier before their combination. The inner reliability and relative reliability capture different aspects of the classification reliability. The discounted classification results are combined with Dempster-Shafer's rule for the final class decision-making support. The performance of CF-CRE has been evaluated and compared with those of the main classical fusion methods using real data sets. The experimental results show that CF-CRE can produce substantially higher accuracy than other fusion methods in general. Moreover, CF-CRE is robust to changes in the number of nearest neighbors chosen for estimating the reliability matrix, which is appealing for applications.
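
    The evidence-combination machinery referred to in the abstract (discounting followed by Dempster's rule) can be sketched compactly. The mass values and the reliability factors 0.9 and 0.6 below are illustrative stand-ins, not the paper's contextual reliability estimates, and the cautious discounting rule itself is not reproduced here.

```python
from itertools import product

def discount(m, alpha):
    """Classical discounting: scale masses by alpha, move the rest to the frame."""
    frame = frozenset().union(*m)
    out = {A: alpha * v for A, v in m.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination for two mass functions over subsets."""
    combined, conflict = {}, 0.0
    for (A, vA), (B, vB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + vA * vB
        else:
            conflict += vA * vB
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Two classifiers' outputs over classes {a, b}, expressed as mass functions
# and discounted by assumed reliabilities before combination.
a, b = frozenset("a"), frozenset("b")
m1 = discount({a: 0.8, b: 0.2}, alpha=0.9)
m2 = discount({a: 0.3, b: 0.7}, alpha=0.6)
print(dempster(m1, m2))
```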

  9. Physical risk factors identification based on body sensor network combined to videotaping.

    Science.gov (United States)

    Vignais, Nicolas; Bernard, Fabien; Touvenot, Gérard; Sagot, Jean-Claude

    2017-11-01

    The aim of this study was to perform an ergonomic analysis of a material handling task by combining a subtask video analysis with a RULA computation, implemented continuously through a motion capture system combining inertial sensors and electrogoniometers. Five workers participated in the experiment. Seven inertial measurement units, placed on the worker's upper body (pelvis, thorax, head, arms, forearms), were used with a biomechanical model of the upper body to continuously provide trunk, neck, shoulder and elbow joint angles. Wrist joint angles were derived from electrogoniometers synchronized with the inertial measurement system. The worker's activity was simultaneously recorded on video. During post-processing, the joint angles were used as inputs to a computationally implemented ergonomic evaluation based on the RULA method. Consequently, a RULA score was calculated at each time step to characterize the risk of exposure of the upper body (right and left sides). Local risk scores were also computed to identify the anatomical origin of the exposure. Moreover, the video-recorded work activity was time-studied in order to classify and quantify all subtasks involved in the task. Results showed that mean RULA scores were at high risk for all participants (6 and 6.2 for the right and left sides, respectively). A temporal analysis demonstrated that workers spent most of the work time at a RULA score of 7 (right: 49.19 ± 35.27%; left: 55.5 ± 29.69%). Mean local scores revealed that the joints most exposed during the task were the elbows, lower arms, wrists and hands. The elbows and lower arms were indeed at a high level of risk during the total time of a work cycle (100% for right and left sides). The wrists and hands were also exposed to a risky level for much of the work period (right: 82.13 ± 7.46%; left: 77.85 ± 12.46%). Concerning the subtask analysis, subtasks called 'snow thrower', 'opening the vacuum sealer', 'cleaning' and 'storing' have been identified as
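
    To make the per-time-step scoring concrete, the sketch below applies the base RULA upper-arm table to a stream of shoulder flexion angles. Only this one sub-score is shown; the full method adds adjustments (shoulder raised, abduction, arm supported) and combines several body segments, and the angle values used here are hypothetical.

```python
def rula_upper_arm_score(flexion_deg):
    """Base RULA upper-arm score from shoulder flexion/extension angle in
    degrees (negative = extension); posture adjustments are omitted here."""
    if -20.0 <= flexion_deg <= 20.0:
        return 1
    if flexion_deg < -20.0 or flexion_deg <= 45.0:
        return 2
    if flexion_deg <= 90.0:
        return 3
    return 4

# Hypothetical joint-angle samples from the inertial measurement units:
angles = [12.0, 35.0, 70.0, 110.0, -30.0]
print([rula_upper_arm_score(a) for a in angles])   # -> [1, 2, 3, 4, 2]
```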

  10. Exploring multiple feature combination strategies with a recurrent neural network architecture for off-line handwriting recognition

    Science.gov (United States)

    Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.

    2015-01-01

    The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition or handwriting recognition. It has the ability to process long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture. Not only do we explore low-level combination (feature space combination), but we also explore high-level combination (decoding combination) and mid-level combination (internal system representation combination). The results are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.

  11. Proposed hybrid-classifier ensemble algorithm to map snow cover area

    Science.gov (United States)

    Nijhawan, Rahul; Raman, Balasubramanian; Das, Josodhir

    2018-01-01

    A metaclassification ensemble approach is known to improve the prediction performance for snow-covered area. The methodology adopted in this case is based on a neural network along with four state-of-the-art machine learning algorithms: support vector machine, artificial neural networks, spectral angle mapper, K-means clustering, and a snow index, the normalized difference snow index. An AdaBoost ensemble algorithm based on decision trees for snow-cover mapping is also proposed. According to the available literature, these methods have rarely been used for snow-cover mapping. Employing the above techniques, a study was conducted for the Raktavarn and Chaturangi Bamak glaciers, Uttarakhand, Himalaya, using a multispectral Landsat 7 ETM+ (enhanced thematic mapper) image. The study also compares the results with those obtained from statistical combination methods (majority rule and belief functions) and with the accuracies of the individual classifiers. Accuracy assessment is performed by computing the quantity and allocation disagreement, analyzing statistical measures (accuracy, precision, specificity, AUC, and sensitivity) and receiver operating characteristic curves. A total of 225 combinations of parameters for the individual classifiers were trained and tested on the dataset and the results were compared with the proposed approach. It was observed that the proposed methodology produced the highest classification accuracy (95.21%), close to the 94.01% produced by the proposed AdaBoost ensemble algorithm. From these observations, it was concluded that the ensemble of classifiers produced better results than the individual classifiers.
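
    The majority-rule combination mentioned among the statistical methods is straightforward to sketch. The per-pixel labels below (0 = no snow, 1 = snow) and the three classifier names are illustrative only.

```python
import numpy as np
from collections import Counter

def majority_vote(predictions):
    """Combine per-pixel class labels from several classifiers by majority rule.
    `predictions` is a list of equal-length label arrays, one per classifier."""
    stacked = np.stack(predictions, axis=0)        # (n_classifiers, n_pixels)
    return np.array([Counter(col).most_common(1)[0][0] for col in stacked.T])

# Hypothetical snow/no-snow labels from three individual classifiers:
svm = np.array([1, 0, 1, 1, 0])
ann = np.array([1, 1, 1, 0, 0])
sam = np.array([0, 1, 1, 1, 0])
print(majority_vote([svm, ann, sam]))              # -> [1 1 1 1 0]
```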

  12. Premature ventricular contraction detection combining deep neural networks and rules inference.

    Science.gov (United States)

    Zhou, Fei-Yan; Jin, Lin-Peng; Dong, Jun

    2017-06-01

    Premature ventricular contraction (PVC), a common form of cardiac arrhythmia caused by ectopic heartbeats, can lead to life-threatening cardiac conditions. Computer-aided PVC detection is of considerable importance in medical centers and outpatient ECG rooms. In this paper, we propose a new approach that combines deep neural networks and rules inference for PVC detection. The detection performance and generalization were studied using publicly available databases: the MIT-BIH arrhythmia database (MIT-BIH-AR) and the Chinese Cardiovascular Disease Database (CCDD). The PVC detection accuracy on the MIT-BIH-AR database was 99.41%, with a sensitivity and specificity of 97.59% and 99.54%, respectively, which were better than the results from other existing methods. To test the generalization capability, the detection performance was also evaluated on the CCDD. The effectiveness of the proposed method was confirmed by the accuracy (98.03%), sensitivity (96.42%) and specificity (98.06%) on a dataset of over 140,000 ECG recordings from the CCDD. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. An OFDM Receiver with Frequency Domain Diversity Combined Impulsive Noise Canceller for Underwater Network

    Science.gov (United States)

    Saotome, Rie; Hai, Tran Minh; Matsuda, Yasuto; Suzuki, Taisaku; Wada, Tomohisa

    2015-01-01

    In order to explore marine natural resources using remote robotic sensors, or to enable rapid information exchange between ROVs (remotely operated vehicles), AUVs (autonomous underwater vehicles), divers, and ships, ultrasonic underwater communication systems are used. However, if the communication system is applied in a marine environment rich in living creatures, such as a shallow sea, it suffers from impulsive noise, so-called shrimp noise, which is randomly generated in the time domain and seriously degrades communication performance in an underwater acoustic network. To support high-performance underwater communication, a digital communication method that is robust to impulsive noise environments is necessary. In this paper, we propose an OFDM ultrasonic communication system with a diversity receiver. The main feature of the receiver is a newly proposed Frequency Domain Diversity Combined Impulsive Noise Canceller. The OFDM receiver utilizes a 20–28 kHz ultrasonic channel and subcarrier spacings of 46.875 Hz (MODE3) and 93.750 Hz (MODE2) for the OFDM modulations. In addition, the paper presents impulsive noise distribution data measured at a fishing port in Okinawa and at a barge in Shizuoka prefectures, and then the proposed diversity OFDM transceiver architecture and experimental results are described. With the proposed Impulsive Noise Canceller, the frame bit error rate was decreased by 20–30%. PMID:26351656

  14. An OFDM Receiver with Frequency Domain Diversity Combined Impulsive Noise Canceller for Underwater Network.

    Science.gov (United States)

    Saotome, Rie; Hai, Tran Minh; Matsuda, Yasuto; Suzuki, Taisaku; Wada, Tomohisa

    2015-01-01

    In order to explore marine natural resources using remote robotic sensors, or to enable rapid information exchange between ROVs (remotely operated vehicles), AUVs (autonomous underwater vehicles), divers, and ships, ultrasonic underwater communication systems are used. However, if the communication system is applied in a marine environment rich in living creatures, such as a shallow sea, it suffers from impulsive noise, so-called shrimp noise, which is randomly generated in the time domain and seriously degrades communication performance in an underwater acoustic network. To support high-performance underwater communication, a digital communication method that is robust to impulsive noise environments is necessary. In this paper, we propose an OFDM ultrasonic communication system with a diversity receiver. The main feature of the receiver is a newly proposed Frequency Domain Diversity Combined Impulsive Noise Canceller. The OFDM receiver utilizes a 20-28 kHz ultrasonic channel and subcarrier spacings of 46.875 Hz (MODE3) and 93.750 Hz (MODE2) for the OFDM modulations. In addition, the paper presents impulsive noise distribution data measured at a fishing port in Okinawa and at a barge in Shizuoka prefectures, and then the proposed diversity OFDM transceiver architecture and experimental results are described. With the proposed Impulsive Noise Canceller, the frame bit error rate was decreased by 20-30%.

  15. Assessment of erosion and sedimentation dynamic in a combined sewer network using online turbidity monitoring.

    Science.gov (United States)

    Bersinger, T; Le Hécho, I; Bareille, G; Pigot, T

    2015-01-01

    Eroded sewer sediments are a significant source of the organic matter discharged by combined sewer overflows. Many authors have studied the erosion and sedimentation processes at the scale of a section of sewer pipe and over short time periods. The objective of this study was to assess these processes at the scale of an entire sewer network and over one month, to understand whether phenomena observed on a small scale of space and time remain valid on a larger scale. To achieve this objective, continuous monitoring of turbidity was used. First, the study of successive rain events allows observation of the reduction of the available sediment and highlights the widely different erosion resistance of the different sediment layers. Secondly, daily chemical oxygen demand (COD) fluxes were calculated for the entire month, showing that sediment storage in the sewer pipe after a rain period is substantial and stops after 5 days. Nevertheless, during rainfall events, the eroded fluxes exceed the whole sewer sediment accumulated during a dry-weather period. This means that the COD fluxes promoted by runoff are substantial. This work confirms, with online monitoring, most of the conclusions from other studies carried out on a smaller scale.

  16. Short-Term Wind Speed Forecasting Using Decomposition-Based Neural Networks Combining Abnormal Detection Method

    Directory of Open Access Journals (Sweden)

    Xuejun Chen

    2014-01-01

    Full Text Available As one of the most promising renewable resources for electricity generation, wind energy is acknowledged for its significant environmental contributions and economic competitiveness. Because wind fluctuates strongly, it is quite difficult to describe the characteristics of wind or to estimate the power output that will be injected into the grid. In particular, short-term wind speed forecasting, an essential support for regulatory actions and short-term load dispatching planning during the operation of wind farms, is currently regarded as one of the most difficult problems to be solved. This paper contributes to short-term wind speed forecasting by developing two three-stage hybrid approaches; both are combinations of the five-three-Hanning (53H) weighted average smoothing method, the ensemble empirical mode decomposition (EEMD) algorithm, and nonlinear autoregressive (NAR) neural networks. The chosen datasets are ten-minute wind speed observations, including twelve samples, and our simulations indicate that the proposed methods perform much better than the traditional ones when addressing short-term wind speed forecasting problems.
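
    The 53H smoother named in the first stage is commonly described as a five-point running median, a three-point running median and a Hanning-weighted (1/4, 1/2, 1/4) pass; the sketch below follows that reading. The endpoint handling and the sample values are chosen for illustration, so the paper's exact variant may differ.

```python
import numpy as np

def running_median(x, k):
    """Running median with window k; endpoints are left unchanged."""
    h = k // 2
    out = x.copy()
    for i in range(h, len(x) - h):
        out[i] = np.median(x[i - h:i + h + 1])
    return out

def smooth_53h(x):
    """Five-point median, three-point median, then a Hanning (1/4, 1/2, 1/4) pass."""
    x = np.asarray(x, dtype=float)
    y = running_median(running_median(x, 5), 3)
    z = y.copy()
    for i in range(1, len(y) - 1):
        z[i] = 0.25 * y[i - 1] + 0.5 * y[i] + 0.25 * y[i + 1]
    return z

# Ten-minute wind-speed samples with one outlier spike (illustrative values):
wind = [5.1, 5.3, 5.0, 12.8, 5.2, 5.4, 5.6, 5.5]
print(np.round(smooth_53h(wind), 2))
```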

  17. An OFDM Receiver with Frequency Domain Diversity Combined Impulsive Noise Canceller for Underwater Network

    Directory of Open Access Journals (Sweden)

    Rie Saotome

    2015-01-01

    Full Text Available In order to explore marine natural resources using remote robotic sensors, or to enable rapid information exchange between ROVs (remotely operated vehicles), AUVs (autonomous underwater vehicles), divers, and ships, ultrasonic underwater communication systems are used. However, if the communication system is applied in a marine environment rich in living creatures, such as a shallow sea, it suffers from impulsive noise, so-called shrimp noise, which is randomly generated in the time domain and seriously degrades communication performance in an underwater acoustic network. To support high-performance underwater communication, a digital communication method that is robust to impulsive noise environments is necessary. In this paper, we propose an OFDM ultrasonic communication system with a diversity receiver. The main feature of the receiver is a newly proposed Frequency Domain Diversity Combined Impulsive Noise Canceller. The OFDM receiver utilizes a 20–28 kHz ultrasonic channel and subcarrier spacings of 46.875 Hz (MODE3) and 93.750 Hz (MODE2) for the OFDM modulations. In addition, the paper presents impulsive noise distribution data measured at a fishing port in Okinawa and at a barge in Shizuoka prefectures, and then the proposed diversity OFDM transceiver architecture and experimental results are described. With the proposed Impulsive Noise Canceller, the frame bit error rate was decreased by 20–30%.

  18. Artificial neural network combined with principal component analysis for resolution of complex pharmaceutical formulations.

    Science.gov (United States)

    Ioele, Giuseppina; De Luca, Michele; Dinç, Erdal; Oliverio, Filomena; Ragno, Gaetano

    2011-01-01

    A chemometric approach based on the combined use of principal component analysis (PCA) and an artificial neural network (ANN) was developed for the multicomponent determination of caffeine (CAF), mepyramine (MEP), phenylpropanolamine (PPA) and pheniramine (PNA) in their pharmaceutical preparations without any chemical separation. The predictive ability of the ANN method was compared with that of the classical linear regression method Partial Least Squares 2 (PLS2). The UV spectral data between 220 and 300 nm of a training set of sixteen quaternary mixtures were processed by PCA to reduce the dimensions of the input data and eliminate the noise coming from the instrumentation. Several spectral ranges and different numbers of principal components (PCs) were tested to find the PCA-ANN and PLS2 models giving the best determination results. A two-layer ANN, using the first four PCs, was used with a log-sigmoid transfer function in the first hidden layer and a linear transfer function in the output layer. The standard error of prediction (SEP) was adopted to assess the predictive accuracy of the models when subjected to external validation. PCA-ANN showed better prediction ability in the determination of PPA and PNA in synthetic samples with added excipients and in pharmaceutical formulations. Since both components are characterized by low absorptivity, the better performance of PCA-ANN was ascribed to its ability to capture non-linear information arising from noise or interfering excipients.
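
    The two-step pipeline (PCA for compression, then a small ANN for calibration) can be sketched with standard tooling. The spectra and concentrations below are synthetic, and the layer size and training settings are placeholders rather than the study's optimised configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic UV spectra (220-300 nm) and concentrations of a 4-component mixture,
# standing in for the training set of sixteen quaternary mixtures.
wavelengths = np.arange(220, 301)
spectra = rng.random((16, wavelengths.size))
concentrations = rng.random((16, 4))

# Step 1: PCA compresses the spectra and filters instrumental noise.
pca = PCA(n_components=4)
scores = pca.fit_transform(spectra)

# Step 2: a small ANN maps the principal-component scores to the four
# analyte concentrations (log-sigmoid hidden layer, linear output).
ann = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                   max_iter=5000, random_state=0)
ann.fit(scores, concentrations)

new_spectrum = rng.random((1, wavelengths.size))
print(ann.predict(pca.transform(new_spectrum)))
```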

  19. Priority substances in combined sewer overflows: case study of the Paris sewer network.

    Science.gov (United States)

    Gasperi, J; Garnaud, S; Rocher, V; Moilleron, R

    2011-01-01

    This study was undertaken to supply data on both priority pollutant (PP) occurrence and concentrations in combined sewer overflows (CSOs). A single rain event was studied at 13 sites within the Paris sewer network. For each sample, a total of 66 substances, including metals, polycyclic aromatic hydrocarbons (PAHs), pesticides, organotins, volatile organic compounds, chlorobenzenes, phthalates and alkylphenols, were analyzed. Of the 66 compounds analyzed, 40 PPs, including 12 priority hazardous substances, were detected in CSOs. As expected, most metals were present in all samples, reflecting their ubiquitous nature. Chlorobenzenes and most pesticides were never quantified above the limit of quantification, while the majority of the other organic pollutants, except DEHP (median concentration: 22 μg.l(-1)), were found to lie in the μg.l(-1) range. For the particular rain event studied, the pollutant loads discharged by CSOs were evaluated and then compared to the pollutant loads conveyed by the Seine River. Under the hydraulic conditions considered and according to the estimations performed, this comparison suggests that CSOs are a potentially significant local source of metals, PAHs and DEHP. Depending on the substance, the ratio between the CSO and Seine River loads varied from 0.5 to 26, underscoring the important local impact of CSOs at the scale of this storm for most pollutants.

  20. Voice Quality Estimation in Combined Radio-VoIP Networks for Dispatching Systems

    Directory of Open Access Journals (Sweden)

    Jiri Vodrazka

    2016-01-01

    Full Text Available The field of voice quality modelling, assessment and planning is deeply and widely mastered, both theoretically and practically, for common voice communication systems, especially for public fixed and mobile telephone networks, including Next Generation Networks (NGN), i.e. Internet Protocol based networks. This article seeks to contribute to voice quality modelling, assessment and planning for dispatching communication systems based on the Internet Protocol (IP) and private radio networks. The network plan, corrections to the E-model calculation and default values for the model are presented and discussed.
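
    For readers unfamiliar with the E-model, the standard ITU-T G.107 mapping from the rating factor R to an estimated mean opinion score (MOS) is sketched below. The R values in the loop are arbitrary examples, and the network-specific corrections discussed in the article would enter through R itself.

```python
def r_to_mos(r):
    """ITU-T G.107 conversion from the E-model rating factor R to estimated MOS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

for r in (50, 70, 80, 93.2):
    print(f"R = {r:5.1f}  ->  MOS = {r_to_mos(r):.2f}")
```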

  1. Classified

    CERN Multimedia

    Computer Security Team

    2011-01-01

    In the last issue of the Bulletin, we discussed recent implications for privacy on the Internet. But privacy of personal data is just one facet of data protection. Confidentiality is another. However, confidentiality and data protection are often perceived as not relevant in the academic environment of CERN. But think twice! At CERN, your personal data, e-mails, medical records, financial and contractual documents, MARS forms, group meeting minutes (and of course your password!) are all considered to be sensitive, restricted or even confidential. And this is not all. Physics results, in particular when preliminary and pending scrutiny, are sensitive, too. Just recently, an ATLAS collaborator copy/pasted the abstract of an ATLAS note onto an external public blog, despite the fact that this document was clearly marked as an "Internal Note". Such an act was not only embarrassing to the ATLAS collaboration, but also had a negative impact on CERN’s reputation --- i...

  2. Classifying Sluice Occurrences in Dialogue

    DEFF Research Database (Denmark)

    Baird, Austin; Hamza, Anissa; Hardt, Daniel

    2018-01-01

    perform manual annotation with acceptable inter-coder agreement. We build classifier models with Decision Trees and Naive Bayes, with an accuracy of 67%. We deploy a classifier to automatically classify sluice occurrences in OpenSubtitles, resulting in a corpus with 1.7 million occurrences. This will support....... Despite this, the corpus can be of great use in research on sluicing and development of systems, and we are making the corpus freely available on request. Furthermore, we are in the process of improving the accuracy of sluice identification and annotation for the purpose of creating a subsequent version

  3. Material discovery by combining stochastic surface walking global optimization with a neural network.

    Science.gov (United States)

    Huang, Si-Da; Shang, Cheng; Zhang, Xiao-Jie; Liu, Zhi-Pan

    2017-09-01

    While the underlying potential energy surface (PES) determines the structure and other properties of a material, it has been frustrating to predict new materials from theory even with the advent of supercomputing facilities. The accuracy of the PES and the efficiency of PES sampling are two major bottlenecks, not least because of the great complexity of the material PES. This work introduces a "Global-to-Global" approach for material discovery by combining for the first time a global optimization method with neural network (NN) techniques. The novel global optimization method, named the stochastic surface walking (SSW) method, is carried out massively in parallel for generating a global training data set, the fitting of which by the atom-centered NN produces a multi-dimensional global PES; the subsequent SSW exploration of large systems with the analytical NN PES can provide key information on the thermodynamic and kinetic stability of unknown phases identified from global PESs. We describe in detail the current implementation of the SSW-NN method with particular focus on the size of the global data set and the simultaneous energy/force/stress NN training procedure. An important functional material, TiO2, is utilized as an example to demonstrate the automated global data set generation, the improved NN training procedure and the application in material discovery. Two new TiO2 porous crystal structures are identified, which have thermodynamic stability similar to the common TiO2 rutile phase, and the kinetic stability of one of them is further proved by SSW pathway sampling. As a general tool for material simulation, the SSW-NN method provides an efficient and predictive platform for large-scale computational material screening.

  4. Altered temporal features of intrinsic connectivity networks in boys with combined type of attention deficit hyperactivity disorder

    International Nuclear Information System (INIS)

    Wang, Xun-Heng; Li, Lihua

    2015-01-01

    Highlights: • Temporal patterns within ICNs provide new way to investigate ADHD brains. • ADHD exhibits enhanced temporal activities within and between ICNs. • Network-wise ALFF influences functional connectivity between ICNs. • Univariate patterns within ICNs are correlated to behavior scores. - Abstract: Purpose: Investigating the altered temporal features within and between intrinsic connectivity networks (ICNs) for boys with attention-deficit/hyperactivity disorder (ADHD); and analyzing the relationships between altered temporal features within ICNs and behavior scores. Materials and methods: A cohort of boys with combined type of ADHD and a cohort of age-matched healthy boys were recruited from ADHD-200 Consortium. All resting-state fMRI datasets were preprocessed and normalized into standard brain space. Using general linear regression, 20 ICNs were taken as spatial templates to analyze the time-courses of ICNs for each subject. Amplitude of low frequency fluctuations (ALFFs) were computed as univariate temporal features within ICNs. Pearson correlation coefficients and node strengths were computed as bivariate temporal features between ICNs. Additional correlation analysis was performed between temporal features of ICNs and behavior scores. Results: ADHD exhibited more activated network-wise ALFF than normal controls in attention and default mode-related network. Enhanced functional connectivities between ICNs were found in ADHD. The network-wise ALFF within ICNs might influence the functional connectivity between ICNs. The temporal pattern within posterior default mode network (pDMN) was positively correlated to inattentive scores. The subcortical network, fusiform-related DMN and attention-related networks were negatively correlated to Intelligence Quotient (IQ) scores. Conclusion: The temporal low frequency oscillations of ICNs in boys with ADHD were more activated than normal controls during resting state; the temporal features within ICNs could

  5. Altered temporal features of intrinsic connectivity networks in boys with combined type of attention deficit hyperactivity disorder

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xun-Heng, E-mail: xhwang@hdu.edu.cn [College of Life Information Science and Instrument Engineering, Hangzhou Dianzi University, Hangzhou 310018 (China); School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096 (China); Li, Lihua [College of Life Information Science and Instrument Engineering, Hangzhou Dianzi University, Hangzhou 310018 (China)

    2015-05-15

    Highlights: • Temporal patterns within ICNs provide new way to investigate ADHD brains. • ADHD exhibits enhanced temporal activities within and between ICNs. • Network-wise ALFF influences functional connectivity between ICNs. • Univariate patterns within ICNs are correlated to behavior scores. - Abstract: Purpose: Investigating the altered temporal features within and between intrinsic connectivity networks (ICNs) for boys with attention-deficit/hyperactivity disorder (ADHD); and analyzing the relationships between altered temporal features within ICNs and behavior scores. Materials and methods: A cohort of boys with combined type of ADHD and a cohort of age-matched healthy boys were recruited from ADHD-200 Consortium. All resting-state fMRI datasets were preprocessed and normalized into standard brain space. Using general linear regression, 20 ICNs were taken as spatial templates to analyze the time-courses of ICNs for each subject. Amplitude of low frequency fluctuations (ALFFs) were computed as univariate temporal features within ICNs. Pearson correlation coefficients and node strengths were computed as bivariate temporal features between ICNs. Additional correlation analysis was performed between temporal features of ICNs and behavior scores. Results: ADHD exhibited more activated network-wise ALFF than normal controls in attention and default mode-related network. Enhanced functional connectivities between ICNs were found in ADHD. The network-wise ALFF within ICNs might influence the functional connectivity between ICNs. The temporal pattern within posterior default mode network (pDMN) was positively correlated to inattentive scores. The subcortical network, fusiform-related DMN and attention-related networks were negatively correlated to Intelligence Quotient (IQ) scores. Conclusion: The temporal low frequency oscillations of ICNs in boys with ADHD were more activated than normal controls during resting state; the temporal features within ICNs could

  6. Classifying objects in LWIR imagery via CNNs

    Science.gov (United States)

    Rodger, Iain; Connor, Barry; Robertson, Neil M.

    2016-10-01

    The aim of the presented work is to demonstrate enhanced target recognition and improved false alarm rates for a mid to long range detection system, utilising a Long Wave Infrared (LWIR) sensor. By exploiting high quality thermal image data and recent techniques in machine learning, the system can provide automatic target recognition capabilities. A Convolutional Neural Network (CNN) is trained and the classifier achieves an overall accuracy of > 95% for 6 object classes related to land defence. While the highly accurate CNN struggles to recognise long range target classes, due to low signal quality, robust target discrimination is achieved for challenging candidates. The overall performance of the methodology presented is assessed using human ground truth information, generating classifier evaluation metrics for thermal image sequences.

  7. Novel amphiphilic poly(dimethylsiloxane) based polyurethane networks tethered with carboxybetaine and their combined antibacterial and anti-adhesive property

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Jingxian; Fu, Yuchen; Zhang, Qinghua, E-mail: qhzhang@zju.edu.cn; Zhan, Xiaoli; Chen, Fengqiu

    2017-08-01

    Highlights: • An amphiphilic poly(dimethylsiloxane) (PDMS) based polyurethane (PU) network tethered with carboxybetaine is prepared. • The surface distribution of PDMS and zwitterionic segments produces an obvious amphiphilic heterogeneous surface. • This designed PDMS-based amphiphilic PU network exhibits combined antibacterial and anti-adhesive properties. - Abstract: The traditional nonfouling materials are powerless against bacterial cell attachment, while the hydrophobic bactericidal surfaces always suffer from nonspecific protein adsorption and dead bacterial cell accumulation. Here, amphiphilic polyurethane (PU) networks modified with poly(dimethylsiloxane) (PDMS) and cationic carboxybetaine diol through a simple crosslinking reaction were developed, which had an antibacterial efficiency of 97.7%. Thereafter, the hydrolysis of carboxybetaine ester into zwitterionic groups brought about anti-adhesive properties against bacteria and proteins. The surface chemical composition and wettability performance of the PU network surfaces were investigated by attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR), X-ray photoelectron spectroscopy (XPS) and contact angle analysis. The surface distribution of PDMS and zwitterionic segments produced an obvious amphiphilic heterogeneous surface, which was demonstrated by atomic force microscopy (AFM). Enzyme-linked immunosorbent assays (ELISA) were used to test the nonspecific protein adsorption behaviors. With the advantages of the transition from excellent bactericidal performance to anti-adhesion and the combination of fouling resistance and fouling release property, the designed PDMS-based amphiphilic PU network shows great application potential in biomedical devices and marine facilities.

  8. A new and accurate fault location algorithm for combined transmission lines using Adaptive Network-Based Fuzzy Inference System

    Energy Technology Data Exchange (ETDEWEB)

    Sadeh, Javad; Afradi, Hamid [Electrical Engineering Department, Faculty of Engineering, Ferdowsi University of Mashhad, P.O. Box: 91775-1111, Mashhad (Iran)

    2009-11-15

    This paper presents a new and accurate algorithm for locating faults on a combined overhead transmission line with an underground power cable using an Adaptive Network-Based Fuzzy Inference System (ANFIS). The proposed method uses 10 ANFIS networks and consists of 3 stages: fault type classification, faulty section detection and exact fault location. In the first stage, an ANFIS is used to determine the fault type, applying four inputs, i.e., the fundamental components of the three phase currents and the zero-sequence current. Another ANFIS network is used to detect the faulty section, i.e., whether the fault is on the overhead line or on the underground cable. The other eight ANFIS networks are utilized to pinpoint the faults (two for each fault type). Four inputs, i.e., the dc component of the current, the fundamental-frequency voltage and current and the angle between them, are used to train the neuro-fuzzy inference systems in order to accurately locate faults on each part of the combined line. The proposed method is evaluated under different fault conditions such as different fault locations, different fault inception angles and different fault resistances. Simulation results confirm that the proposed method can be used as an efficient means for accurate fault location on combined transmission lines. (author)

  9. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
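
    The naive Bayes step for discrete texts can be illustrated with word-presence features. The vocabulary, documents and labels below are invented for the sketch; they are not from the paper's corpus, and the real system would use far richer features.

```python
import math

# Toy word-presence training data: ({words}, label), label 1 = pornographic.
train = [
    ({"free", "adult", "video"}, 1),
    ({"adult", "chat"}, 1),
    ({"news", "sports", "video"}, 0),
    ({"weather", "news"}, 0),
]
vocab = set().union(*(words for words, _ in train))

def fit_naive_bayes(data):
    """Class priors and Laplace-smoothed word-presence likelihoods."""
    priors, likelihood = {}, {}
    for c in (0, 1):
        docs = [w for w, label in data if label == c]
        priors[c] = len(docs) / len(data)
        likelihood[c] = {w: (sum(w in d for d in docs) + 1) / (len(docs) + 2)
                         for w in vocab}
    return priors, likelihood

def p_porn(words, priors, likelihood):
    """Posterior probability that a discrete text belongs to class 1."""
    logpost = {}
    for c in (0, 1):
        lp = math.log(priors[c])
        for w in vocab:
            p = likelihood[c][w]
            lp += math.log(p if w in words else 1.0 - p)
        logpost[c] = lp
    m = max(logpost.values())
    z = sum(math.exp(v - m) for v in logpost.values())
    return math.exp(logpost[1] - m) / z

priors, likelihood = fit_naive_bayes(train)
print(round(p_porn({"adult", "video"}, priors, likelihood), 3))
```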

  10. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen [Tianjin Key Laboratory of Process Measurement and Control, Institute of Robotics and Autonomous Systems, School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2014-05-15

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction, which combined feature selection with feature extraction, was proposed. The feature selection step used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of the feature space was reduced to 12. Classification of the Chinese liquors was performed using a back propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than that of LDA and BP-ANN. Finally, the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, with classification rates of 98.75% and 100%, respectively.

  11. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    International Nuclear Information System (INIS)

    Jing, Yaqi; Meng, Qinghao; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen

    2014-01-01

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction, which combined feature selection with feature extraction, was proposed. The feature selection step used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of the feature space was reduced to 12. Classification of the Chinese liquors was performed using a back propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than that of LDA and BP-ANN. Finally, the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, with classification rates of 98.75% and 100%, respectively.

  12. Combining social and genetic networks to study HIV transmission in mixing risk groups

    NARCIS (Netherlands)

    Zarrabi, N.; Prosperi, M.C.F.; Belleman, R.G.; Di Giambenedetto, S.; Fabbiani, M.; De Luca, A.; Sloot, P.M.A.

    2013-01-01

    Reconstruction of HIV transmission networks is important for understanding and preventing the spread of the virus and drug resistant variants. Mixing risk groups is important in network analysis of HIV in order to assess the role of transmission between risk groups in the HIV epidemic. Most of the

  13. Combining Host-based and network-based intrusion detection system

    African Journals Online (AJOL)

    These attacks were simulated using hping. The proposed system is implemented in Java. The results show that the proposed system is able to detect attacks both from within (host-based) and outside sources (network-based). Key Words: Intrusion Detection System (IDS), Host-based, Network-based, Signature, Security log.

  14. Combining epidemiological and genetic networks signifies the importance of early treatment in HIV-1 transmission

    NARCIS (Netherlands)

    Zarrabi, N.; Prosperi, M.; Belleman, R.G.; Colafigli, M.; De Luca, A.; Sloot, P.M.A.

    2012-01-01

    Inferring disease transmission networks is important in epidemiology in order to understand and prevent the spread of infectious diseases. Reconstruction of the infection transmission networks requires insight into viral genome data as well as social interactions. For the HIV-1 epidemic, current

  15. Facilitating sustainability through smart network design in combination with virtual power plant operation

    NARCIS (Netherlands)

    El Bakari, K.; Kling, W.L.

    2010-01-01

    While smart grids are considered as a way to integrate a high penetration level of dispersed generation (DG) into the power system, most distribution networks are still passively controlled. To accelerate the transition towards smart grids, network operators can take two important steps: 1.

  16. Smart grids : combination of 'Virtual Power Plant'-concept and 'smart network'-design

    NARCIS (Netherlands)

    El Bakari, K.; Kling, W.L.

    2010-01-01

    The concept of the virtual power plant (VPP) offers a solution for controlling and managing a higher level of dispersed generation in today's passive distribution networks. Under certain conditions the VPP is able to displace power and energy, which implies more control over the energy flow in the networks. To

  17. IAEA safeguards and classified materials

    International Nuclear Information System (INIS)

    Pilat, J.F.; Eccleston, G.W.; Fearey, B.L.; Nicholas, N.J.; Tape, J.W.; Kratzer, M.

    1997-01-01

    The international community in the post-Cold War period has suggested that the International Atomic Energy Agency (IAEA) utilize its expertise in support of the arms control and disarmament process in unprecedented ways. The pledges of the US and Russian presidents to place excess defense materials, some of which are classified, under some type of international inspections raise the prospect of using IAEA safeguards approaches for monitoring classified materials. A traditional safeguards approach, based on nuclear material accountancy, would seem unavoidably to reveal classified information. However, further analysis of the IAEA's safeguards approaches is warranted in order to understand fully the scope and nature of any problems. The issues are complex and difficult, and it is expected that common technical understandings will be essential for their resolution. Accordingly, this paper examines and compares traditional safeguards item accounting of fuel at a nuclear power station (especially spent fuel) with the challenges presented by inspections of classified materials. This analysis is intended to delineate more clearly the problems as well as reveal possible approaches, techniques, and technologies that could allow the adaptation of safeguards to the unprecedented task of inspecting classified materials. It is also hoped that a discussion of these issues can advance ongoing political-technical debates on international inspections of excess classified materials.

  18. Neural networks and traditional time series methods: a synergistic combination in state economic forecasts.

    Science.gov (United States)

    Hansen, J V; Nelson, R D

    1997-01-01

    Ever since the initial planning for the 1997 Utah legislative session, neural-network forecasting techniques have provided valuable insights for analysts forecasting tax revenues. These revenue estimates are critically important since agency budgets, support for education, and improvements to infrastructure all depend on their accuracy. Underforecasting generates windfalls that concern taxpayers, whereas overforecasting produces budget shortfalls that cause inadequately funded commitments. The pattern-finding ability of neural networks gives insightful and alternative views of the seasonal and cyclical components commonly found in economic time series data. Two applications of neural networks to revenue forecasting clearly demonstrate how these models complement traditional time series techniques. In the first, preoccupation with a potential downturn in the economy distracts analysis based on traditional time series methods so that it overlooks an emerging new phenomenon in the data. In this case, neural networks identify the new pattern that then allows modification of the time series models and finally gives more accurate forecasts. In the second application, data structure found by traditional statistical tools allows analysts to provide neural networks with important information that the networks then use to create more accurate models. In summary, for the Utah revenue outlook, the insights that result from a portfolio of forecasts that includes neural networks exceed the understanding generated from strictly statistical forecasting techniques. In this case, the synergy clearly results in the whole of the portfolio of forecasts being more accurate than the sum of the individual parts.

  19. Identification of T1D susceptibility genes within the MHC region by combining protein interaction networks and SNP genotyping data

    DEFF Research Database (Denmark)

    Brorsson, C.; Hansen, Niclas Tue; Hansen, Kasper Lage

    2009-01-01

    To develop novel methods for identifying new genes that contribute to the risk of developing type 1 diabetes within the Major Histocompatibility Complex (MHC) region on chromosome 6, independently of the known linkage disequilibrium (LD) between human leucocyte antigen (HLA)-DRB1, -DQA1, -DQB1 ... genes, we have developed a novel method that combines single nucleotide polymorphism (SNP) genotyping data with protein-protein interaction (ppi) networks to identify disease-associated network modules enriched for proteins encoded from the MHC region. Approximately 2500 SNPs located in the 4 Mb MHC region were analysed in 1000 affected offspring trios generated by the Type 1 Diabetes Genetics Consortium (T1DGC). The most associated SNP in each gene was chosen and genes were mapped to ppi networks for identification of interaction partners. The association testing and resulting interacting protein ...

  20. Identifying and ranking influential spreaders in complex networks by combining a local-degree sum and the clustering coefficient

    Science.gov (United States)

    Li, Mengtian; Zhang, Ruisheng; Hu, Rongjing; Yang, Fan; Yao, Yabing; Yuan, Yongna

    2018-03-01

    Identifying influential spreaders is a crucial problem that can help authorities to control the spreading process in complex networks. Based on the classical degree centrality (DC), several improved measures have been presented. However, these measures cannot rank spreaders accurately. In this paper, we first calculate the sum of the degrees of the nearest neighbors of a given node, and based on the calculated sum, a novel centrality named clustered local-degree (CLD) is proposed, which combines the sum and the clustering coefficients of nodes to rank spreaders. By assuming that the spreading process in networks follows the susceptible-infectious-recovered (SIR) model, we perform extensive simulations on a series of real networks to compare the performance of the CLD centrality with that of six other measures. The results show that the CLD centrality has a competitive performance in distinguishing the spreading ability of nodes, and shows the best performance in identifying influential spreaders accurately.
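
    The abstract names the two ingredients of CLD (the neighbour-degree sum and the node's clustering coefficient) but not the exact weighting, so the sketch below uses one plausible combination purely for illustration; the ranking it produces should not be read as the paper's.

```python
import networkx as nx

def clustered_local_degree(G):
    """Illustrative CLD-style score: the sum of the nearest neighbours' degrees,
    down-weighted as the node's clustering coefficient grows (the paper's
    exact weighting may differ)."""
    clustering = nx.clustering(G)
    scores = {}
    for v in G:
        neighbor_degree_sum = sum(G.degree(u) for u in G.neighbors(v))
        scores[v] = neighbor_degree_sum / (1.0 + clustering[v])
    return scores

G = nx.karate_club_graph()
ranked = sorted(clustered_local_degree(G).items(), key=lambda kv: -kv[1])
print("top-5 candidate spreaders:", ranked[:5])
```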

  1. Visualization and Analysis of a Cardio Vascular Disease and MUPP1-related Biological Network combining Text Mining and Data Warehouse Approaches

    Directory of Open Access Journals (Sweden)

    Sommer Björn

    2010-03-01

    Full Text Available Detailed investigation of socially important diseases with modern experimental methods has resulted in the generation of large volumes of valuable data. However, analysis and interpretation of these data need the application of efficient computational techniques and systems biology approaches. In particular, techniques allowing the reconstruction of associative networks of various biological objects and events can be useful. In this publication, the combination of different techniques to create such a network associated with an abstract cell environment is discussed in order to gain insights into the functional as well as spatial interrelationships. It is shown that experimentally gained knowledge enriched with data warehouse content and text mining data can be used for the reconstruction and localization of a cardiovascular disease developing network beginning with MUPP1/MPDZ (multi-PDZ domain protein).

  2. Combining Neural Networks with Existing Methods to Estimate 1 in 100-Year Flood Event Magnitudes

    Science.gov (United States)

    Newson, A.; See, L.

    2005-12-01

    Over the last fifteen years artificial neural networks (ANN) have been shown to be advantageous for the solution of many hydrological modelling problems. The use of ANNs for flood magnitude estimation in ungauged catchments, however, is a relatively new and under-researched area. In this paper ANNs are used to make estimates of the magnitude of the 100-year flood event (Q100) for a number of ungauged catchments. The data used in this study were provided by the Centre for Ecology and Hydrology's Flood Estimation Handbook (FEH), which contains information on catchments across the UK. Sixteen catchment descriptors for 719 catchments were used to train an ANN, with the data split into training, validation and test sets. The goodness-of-fit statistics on the test data set indicated good model performance, with an r-squared value of 0.8 and a coefficient of efficiency of 79 percent. Data for twelve ungauged catchments were then put through the trained ANN to produce estimates of Q100. Two other accepted methodologies were also employed: the FEH statistical method and the FSR (Flood Studies Report) design storm technique, both of which are used to produce flood frequency estimates. The advantage of developing an ANN model is that it provides a third figure to aid a hydrologist in making an accurate estimate. For six of the twelve catchments, there was a relatively low spread between estimates. In these instances, an estimate of Q100 could be made with a fair degree of certainty. Of the remaining six catchments, three had areas greater than 1000 km2, which means the FSR design storm estimate cannot be used. Armed with the ANN model and the FEH statistical method, the hydrologist still has two possible estimates to consider. For these three catchments, the estimates were also fairly similar, providing additional confidence to the estimation. In summary, the findings of this study have shown that an accurate estimation of Q100 can be made using the catchment descriptors of

  3. Combining Volcano Monitoring Timeseries Analyses with Bayesian Belief Networks to Update Hazard Forecast Estimates

    Science.gov (United States)

    Odbert, Henry; Hincks, Thea; Aspinall, Willy

    2015-04-01

    Volcanic hazard assessments must combine information about the physical processes of hazardous phenomena with observations that indicate the current state of a volcano. Incorporating both these lines of evidence can inform our belief about the likelihood (probability) and consequences (impact) of possible hazardous scenarios, forming a basis for formal quantitative hazard assessment. However, such evidence is often uncertain, indirect or incomplete. Approaches to volcano monitoring have advanced substantially in recent decades, increasing the variety and resolution of multi-parameter timeseries data recorded at volcanoes. Interpreting these multiple strands of parallel, partial evidence thus becomes increasingly complex. In practice, interpreting many timeseries requires an individual to be familiar with the idiosyncrasies of the volcano, monitoring techniques, configuration of recording instruments, observations from other datasets, and so on. In making such interpretations, an individual must consider how different volcanic processes may manifest as measurable observations, and then infer from the available data what can or cannot be deduced about those processes. We examine how parts of this process may be synthesised algorithmically using Bayesian inference. Bayesian Belief Networks (BBNs) use probability theory to treat and evaluate uncertainties in a rational and auditable scientific manner, but only to the extent warranted by the strength of the available evidence. The concept is a suitable framework for marshalling multiple strands of evidence (e.g. observations, model results and interpretations) and their associated uncertainties in a methodical manner. BBNs are usually implemented in graphical form and could be developed as a tool for near real-time, ongoing use in a volcano observatory, for example. We explore the application of BBNs in analysing volcanic data from the long-lived eruption at Soufriere Hills Volcano, Montserrat. We show how our method

  4. Performance of classification confidence measures in dynamic classifier systems

    Czech Academy of Sciences Publication Activity Database

    Štefka, D.; Holeňa, Martin

    2013-01-01

    Roč. 23, č. 4 (2013), s. 299-319 ISSN 1210-0552 R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : classifier combining * dynamic classifier systems * classification confidence Subject RIV: IN - Informatics, Computer Science Impact factor: 0.412, year: 2013

  5. Research on method of nuclear power plant operation fault diagnosis based on a combined artificial neural network

    International Nuclear Information System (INIS)

    Liu Feng; Yu Ren; Li Fengyu; Zhang Meng

    2007-01-01

    To solve the online real-time diagnosis problem of a nuclear power plant in operating condition, a method based on a combined artificial neural network is put forward in this paper. Its main principle is to use a BP neural network for fast group diagnosis and then an RBF neural network to distinguish and verify the diagnostic result. The accuracy of the method is verified using simulated values of the key parameters of a nuclear power plant in normal and malfunction states. The results show that the method, which combines the advantages of the two neural networks, can not only diagnose learned faults at similar power levels of the plant quickly and accurately, but can also identify faults at different power levels as well as unlearned faults. The diagnosis system outputs the reliability of each fault, which changes as plant operation continues, making the diagnostic results more acceptable to operators. (authors)
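    The two-stage idea can be sketched as follows: a backpropagation MLP proposes a fault class quickly, and a radial-basis-function model then confirms (or rejects) that proposal and reports a reliability. Using an RBF-kernel SVM as a stand-in for the paper's RBF network, and the synthetic parameter data, are assumptions made purely for illustration.

```python
# Illustrative two-stage diagnosis sketch: fast BP proposal + RBF verification.
# The RBF-kernel SVM stand-in and the synthetic plant data are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))                  # simulated key plant parameters
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # two synthetic fault classes

bp_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
rbf_net = SVC(kernel="rbf", probability=True).fit(X, y)

def diagnose(sample):
    sample = sample.reshape(1, -1)
    candidate = bp_net.predict(sample)[0]                       # fast group diagnosis
    reliability = rbf_net.predict_proba(sample)[0, candidate]   # verification score
    return candidate, reliability, reliability > 0.5            # fault, reliability, confirmed

print(diagnose(X[0]))
```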

  6. Comparing Attentional Networks in fetal alcohol spectrum disorder and the inattentive and combined subtypes of attention deficit hyperactivity disorder.

    Science.gov (United States)

    Kooistra, Libbe; Crawford, Susan; Gibbard, Ben; Kaplan, Bonnie J; Fan, Jin

    2011-01-01

    The Attention Network Test (ANT) was used to examine alerting, orienting, and executive control in fetal alcohol spectrum disorder (FASD) versus attention deficit hyperactivity disorder (ADHD). Participants were 113 children aged 7 to 10 years (31 ADHD-Combined, 16 ADHD-Primarily Inattentive, 28 FASD, 38 controls). Incongruent flanker trials triggered slower responses in both the ADHD-Combined and the FASD groups. Abnormal conflict scores in these same two groups provided additional evidence for the presence of executive function deficits. The ADHD-Primarily Inattentive group was indistinguishable from the controls on all three ANT indices, which highlights the possibility that this group constitutes a pathologically distinct entity.

  7. A grey neural network and input-output combined forecasting model. Primary energy consumption forecasts in Spanish economic sectors

    International Nuclear Information System (INIS)

    Liu, Xiuli; Moreno, Blanca; García, Ana Salomé

    2016-01-01

    A combined forecast of the Grey forecasting method and the back propagation neural network model, called the Grey Neural Network and Input-Output Combined Forecasting Model (GNF-IO model), is proposed. A real case of energy consumption forecasting is used to validate the effectiveness of the proposed model. The GNF-IO model predicts coal, crude oil, natural gas, renewable and nuclear primary energy consumption volumes for Spain's 36 sub-sectors from 2010 to 2015 according to three different GDP growth scenarios (optimistic, baseline and pessimistic). Model tests show that the proposed model has higher simulation and forecasting accuracy on energy consumption than the Grey models used separately and than other combination methods. The forecasts indicate that the primary energies coal, crude oil and natural gas will on average represent 83.6% of total primary energy consumption, raising concerns about security of supply and energy cost and adding risk for some industrial production processes. Thus, Spanish industry must speed up its transition to an energy-efficient economy, achieving cost reductions and increasing its level of self-supply. - Highlights: • Forecasting System Using Grey Models combined with Input-Output Models is proposed. • Primary energy consumption in Spain is used to validate the model. • The grey-based combined model has good forecasting performance. • Natural gas will represent the majority of the total of primary energy consumption. • Concerns about security of supply, energy cost and industry competitiveness are raised.
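    Only the basic grey component of such a hybrid can be shown compactly; the sketch below fits a GM(1,1) model with plain numpy on a made-up consumption series. The coupling to an input-output model and a BP network, as in the GNF-IO model, is not reproduced here.

```python
# Minimal GM(1,1) grey forecasting sketch (numpy only) on a hypothetical series.
import numpy as np

def gm11_forecast(x0, steps):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack((-z1, np.ones_like(z1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]     # development coeff. and grey input
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate(([x1_hat[0]], np.diff(x1_hat)))
    return x0_hat[n:]                               # forecasts beyond the sample

energy = [100.0, 104.2, 109.1, 113.5, 118.8]        # hypothetical consumption series
print(gm11_forecast(energy, steps=3))
```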

  8. Gene expression patterns combined with network analysis identify hub genes associated with bladder cancer.

    Science.gov (United States)

    Bi, Dongbin; Ning, Hao; Liu, Shuai; Que, Xinxiang; Ding, Kejia

    2015-06-01

    To explore the molecular mechanisms of bladder cancer (BC), a network strategy was used to find biomarkers for early detection and diagnosis. The differentially expressed genes (DEGs) between bladder carcinoma patients and normal subjects were screened using the empirical Bayes method of the linear models for microarray data package. Co-expression networks were constructed from differentially co-expressed genes and links. The regulatory impact factor (RIF) metric was used to identify critical transcription factors (TFs). Protein-protein interaction (PPI) networks were constructed with the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING), and clusters were obtained through the molecular complex detection (MCODE) algorithm. Centrality analyses of the complex networks were performed based on degree, stress and betweenness. Enrichment analyses were performed based on the Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) databases. Co-expression networks and TFs (based on expression data of global DEGs and of DEGs in different stages and grades) were identified. Hub genes of the complex networks, such as UBE2C, ACTA2, FABP4, CKS2, FN1 and TOP2A, were also obtained according to the degree analysis. In gene enrichment analyses of global DEGs, cell adhesion, proteinaceous extracellular matrix and extracellular matrix structural constituent were the top three GO terms; ECM-receptor interaction, focal adhesion, and cell cycle were significant pathways. Our results provide some potential underlying biomarkers of BC. However, further validation is required and deeper studies are needed to elucidate the pathogenesis of BC. Copyright © 2015 Elsevier Ltd. All rights reserved.
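    The degree-based hub-selection step can be illustrated in a few lines: rank the nodes of a co-expression or PPI network by degree and keep the top-k as candidate hubs. The edge list below is a placeholder wired around gene symbols named in the abstract, not data from the study.

```python
# Sketch of degree-based hub gene selection on a small placeholder network.
import networkx as nx

edges = [("UBE2C", "CKS2"), ("UBE2C", "TOP2A"), ("CKS2", "TOP2A"),
         ("FN1", "ACTA2"), ("FN1", "FABP4"), ("FN1", "TOP2A")]   # placeholder edges
G = nx.Graph(edges)

degree_ranking = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
hub_genes = [gene for gene, deg in degree_ranking[:3]]            # top-3 by degree
print(hub_genes)
```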

  9. Combined modal split and assignment model for the multimodal transportation network of the economic circle in China

    Directory of Open Access Journals (Sweden)

    Sh. Li

    2009-09-01

    Full Text Available Economic circles have formed and are developing in China. An economic circle consists of two or more closely adjoining central cities and their zones of influence, and is typically a major engine for the development of a country's economy and even for the world economy. A combined modal split and assignment model with deterministic travel demand is proposed for modelling passengers' choices between intercity bus and train, the two main competing modes in the multimodal transportation network of an economic circle. Generalized travel cost models for highway and railway are used, incorporating travel time, ticket fare and passenger discomfort. On the highway network, the interactions between private vehicles and intercity buses are asymmetric. Thus, a variational inequality formulation is proposed to describe the combined model. A streamlined diagonalization algorithm is presented to solve the combined model. The multimodal transportation network based on the Yangtze River Delta economic circle is presented to illustrate the proposed method. The results show the efficiency of the proposed model.

  10. Assessing sensory versus optogenetic network activation by combining (o)fMRI with optical Ca2+ recordings

    Science.gov (United States)

    Schmid, Florian; Wachsmuth, Lydia; Schwalm, Miriam; Prouvot, Pierre-Hugues; Jubal, Eduardo Rosales; Fois, Consuelo; Pramanik, Gautam; Zimmer, Claus; Stroh, Albrecht

    2015-01-01

    Encoding of sensory inputs in the cortex is characterized by sparse neuronal network activation. Optogenetic stimulation has previously been combined with fMRI (ofMRI) to probe functional networks. However, for a quantitative optogenetic probing of sensory-driven sparse network activation, the level of similarity between sensory and optogenetic network activation needs to be explored. Here, we complement ofMRI with optic fiber-based population Ca2+ recordings for a region-specific readout of neuronal spiking activity in rat brain. Comparing Ca2+ responses to the blood oxygenation level-dependent signal upon sensory stimulation with increasing frequencies showed adaptation of Ca2+ transients contrasted by an increase of blood oxygenation level-dependent responses, indicating that the optical recordings convey complementary information on neuronal network activity to the corresponding hemodynamic response. To study the similarity of optogenetic and sensory activation, we quantified the density of cells expressing channelrhodopsin-2 and modeled light propagation in the tissue. We estimated the effectively illuminated volume and numbers of optogenetically stimulated neurons, being indicative of sparse activation. At the functional level, upon either sensory or optogenetic stimulation we detected single-peak short-latency primary Ca2+ responses with similar amplitudes and found that blood oxygenation level-dependent responses showed similar time courses. These data suggest that ofMRI can serve as a representative model for functional brain mapping. PMID:26661247

  11. Assessing sensory versus optogenetic network activation by combining (o)fMRI with optical Ca2+ recordings.

    Science.gov (United States)

    Schmid, Florian; Wachsmuth, Lydia; Schwalm, Miriam; Prouvot, Pierre-Hugues; Jubal, Eduardo Rosales; Fois, Consuelo; Pramanik, Gautam; Zimmer, Claus; Faber, Cornelius; Stroh, Albrecht

    2016-11-01

    Encoding of sensory inputs in the cortex is characterized by sparse neuronal network activation. Optogenetic stimulation has previously been combined with fMRI (ofMRI) to probe functional networks. However, for a quantitative optogenetic probing of sensory-driven sparse network activation, the level of similarity between sensory and optogenetic network activation needs to be explored. Here, we complement ofMRI with optic fiber-based population Ca2+ recordings for a region-specific readout of neuronal spiking activity in rat brain. Comparing Ca2+ responses to the blood oxygenation level-dependent signal upon sensory stimulation with increasing frequencies showed adaptation of Ca2+ transients contrasted by an increase of blood oxygenation level-dependent responses, indicating that the optical recordings convey complementary information on neuronal network activity to the corresponding hemodynamic response. To study the similarity of optogenetic and sensory activation, we quantified the density of cells expressing channelrhodopsin-2 and modeled light propagation in the tissue. We estimated the effectively illuminated volume and numbers of optogenetically stimulated neurons, being indicative of sparse activation. At the functional level, upon either sensory or optogenetic stimulation we detected single-peak short-latency primary Ca2+ responses with similar amplitudes and found that blood oxygenation level-dependent responses showed similar time courses. These data suggest that ofMRI can serve as a representative model for functional brain mapping. © The Author(s) 2015.

  12. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  13. A combined geostatistical-optimization model for the optimal design of a groundwater quality monitoring network

    Science.gov (United States)

    Kolosionis, Konstantinos; Papadopoulou, Maria P.

    2017-04-01

    Monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation due to extensive agricultural activities. In this work, a simulation-optimization framework is developed based on heuristic optimization methodologies and geostatistical modeling approaches to obtain an optimal design for a groundwater quality monitoring network. Groundwater quantity and quality data obtained from 43 existing observation locations at 3 different hydrological periods in the Mires basin in Crete, Greece will be used in the proposed framework in terms of Regression Kriging to develop the spatial distribution of nitrate concentration in the aquifer of interest. Based on the existing groundwater quality mapping, the proposed optimization tool will determine a cost-effective observation well network that contributes significant information to water managers and authorities. The elimination of observation wells that add little or no beneficial information to groundwater level and quality mapping of the area can be achieved using estimation uncertainty and statistical error metrics without affecting the assessment of the groundwater quality. Given the high maintenance cost of groundwater monitoring networks, the proposed tool could be used by water regulators in the decision-making process to obtain an efficient network design.

  14. Is Congenital Amusia a Disconnection Syndrome? A Study Combining Tract- and Network-Based Analysis

    Directory of Open Access Journals (Sweden)

    Jieqiong Wang

    2017-09-01

    Full Text Available Previous studies on congenital amusia mainly focused on the impaired fronto-temporal pathway. It is possible that neural pathways of amusia patients on a larger scale are affected. In this study, we investigated changes in structural connections by applying both tract-based and network-based analysis to DTI data of 12 subjects with congenital amusia and 20 demographic-matched normal controls. TBSS (tract-based spatial statistics was used to detect microstructural changes. The results showed that amusics had higher diffusivity indices in the corpus callosum, the right inferior/superior longitudinal fasciculus, and the right inferior frontal-occipital fasciculus (IFOF. The axial diffusivity values of the right IFOF were negatively correlated with musical scores in the amusia group. Network-based analysis showed that the efficiency of the brain network was reduced in amusics. The impairments of WM tracts were also found to be correlated with reduced network efficiency in amusics. This suggests that impaired WM tracts may lead to the reduced network efficiency seen in amusics. Our findings suggest that congenital amusia is a disconnection syndrome.

  15. Is Congenital Amusia a Disconnection Syndrome? A Study Combining Tract- and Network-Based Analysis.

    Science.gov (United States)

    Wang, Jieqiong; Zhang, Caicai; Wan, Shibiao; Peng, Gang

    2017-01-01

    Previous studies on congenital amusia mainly focused on the impaired fronto-temporal pathway. It is possible that neural pathways of amusia patients on a larger scale are affected. In this study, we investigated changes in structural connections by applying both tract-based and network-based analysis to DTI data of 12 subjects with congenital amusia and 20 demographic-matched normal controls. TBSS (tract-based spatial statistics) was used to detect microstructural changes. The results showed that amusics had higher diffusivity indices in the corpus callosum, the right inferior/superior longitudinal fasciculus, and the right inferior frontal-occipital fasciculus (IFOF). The axial diffusivity values of the right IFOF were negatively correlated with musical scores in the amusia group. Network-based analysis showed that the efficiency of the brain network was reduced in amusics. The impairments of WM tracts were also found to be correlated with reduced network efficiency in amusics. This suggests that impaired WM tracts may lead to the reduced network efficiency seen in amusics. Our findings suggest that congenital amusia is a disconnection syndrome.

  16. Classifying and Analyzing 3d Cell Motion in Jammed Microgels

    Science.gov (United States)

    Bhattacharjee, Tapomoy; Sawyer, W. Gregory; Angelini, Thomas

    Soft granular polyelectrolyte microgels swell in liquid cell growth media to form a continuous elastic solid that can easily transition between solid and fluid states under low shear stress. Such liquid-like solids (LLS) have recently been used to create 3D cellular constructs as well as to support, culture and harvest cells in 3D. Current understanding of cell migration mechanics in 3D was established from experiments performed in natural and synthetic polymer networks. Spatial variation in network structure and the transience of degradable gels limit their usefulness in quantitative cell mechanics studies. By contrast, LLS growth media approximate a homogeneous continuum, enabling tractable cell mechanics measurements to be performed in 3D. Here, we introduce a process to understand and classify cytotoxic T cell motion in 3D by studying cellular motility in LLS media. General classification of T cell motion can be achieved with a very traditional statistical approach: the cell's mean squared displacement (MSD) as a function of delay time. We will also use Langevin approaches combined with the constitutive equations of the LLS medium to predict the statistics of T cell motion. Supported by the National Science Foundation under Grant No. DMR-1352043.
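    The "very traditional statistical approach" mentioned above, the mean squared displacement as a function of delay time, can be sketched as below for a 3D track; the random-walk trajectory is synthetic and stands in for a tracked T cell.

```python
# MSD(tau) = mean over t of |r(t + tau) - r(t)|^2, computed from a synthetic 3D track.
import numpy as np

def msd(track, max_lag):
    # track: (T, 3) array of x, y, z positions at equally spaced times.
    lags = np.arange(1, max_lag + 1)
    values = np.array([
        np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1)) for lag in lags
    ])
    return lags, values

rng = np.random.default_rng(1)
track = np.cumsum(rng.normal(scale=0.5, size=(500, 3)), axis=0)   # random walk
lags, values = msd(track, max_lag=50)
print(values[:5])   # MSD grows roughly linearly with lag for diffusive motion
```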

  17. Combining SDM-Based Circuit Switching with Packet Switching in a Router for On-Chip Networks

    Directory of Open Access Journals (Sweden)

    Angelo Kuti Lusala

    2012-01-01

    Full Text Available A hybrid router architecture for Networks-on-Chip “NoC” is presented; it combines Spatial Division Multiplexing “SDM” based circuit switching and packet switching in order to efficiently and separately handle both streaming and best-effort traffic generated in real-time applications. Furthermore, the SDM technique is combined with the Time Division Multiplexing “TDM” technique in the circuit switching part in order to increase path diversity, thus improving throughput while sharing communication resources among multiple connections. Combining these two techniques allows mitigating the poor resource usage inherent to circuit switching. In this way Quality of Service “QoS” is easily provided for the streaming traffic through the circuit-switched sub-router while the packet-switched sub-router handles best-effort traffic. The proposed hybrid router architectures were synthesized, placed and routed on an FPGA. Results show that a practicable Network-on-Chip “NoC” can be built using the proposed router architectures. 7 × 7 mesh NoCs were simulated in SystemC. Simulation results show that the probability of establishing paths through the NoC increases with the number of sub-channels and has its highest value when combining SDM with TDM, thereby significantly reducing contention in the NoC.

  18. Comparison of artificial intelligence classifiers for SIP attack data

    Science.gov (United States)

    Safarik, Jakub; Slachta, Jiri

    2016-05-01

    A honeypot application is a source of valuable data about attacks on the network. We run several SIP honeypots in various computer networks, which are separated geographically and logically. Each honeypot runs on a public IP address and uses standard SIP PBX ports. All information gathered via the honeypots is periodically sent to a centralized server. This server classifies all attack data with a neural network algorithm. The paper describes optimizations of a neural network classifier which lower the classification error. The article contains a comparison of two neural network algorithms used for the classification of validation data. The first is the original implementation of the neural network described in recent work; the second neural network uses further optimizations such as input normalization and a cross-entropy cost function. We also use other implementations of neural networks and machine learning classification algorithms. The comparison tests their capabilities on validation data to find the optimal classifier. The results show promise for further development of an accurate SIP attack classification engine.
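    A minimal sketch of the optimisations described above, input normalisation plus a cross-entropy-trained neural network, is shown below; the feature layout and attack labels are placeholders rather than real honeypot records.

```python
# Input normalisation + cross-entropy-trained MLP on placeholder SIP attack features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))                 # e.g. counts of SIP message types (placeholder)
y = rng.integers(0, 3, size=400)               # attack categories (placeholder)

# MLPClassifier minimises cross-entropy (log-loss); StandardScaler normalises the inputs.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000))
clf.fit(X, y)
print(clf.score(X, y))
```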

  19. Knowledge Uncertainty and Composed Classifier

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana; Ocelíková, E.

    2007-01-01

    Roč. 1, č. 2 (2007), s. 101-105 ISSN 1998-0140 Institutional research plan: CEZ:AV0Z10750506 Keywords : Boosting architecture * contextual modelling * composed classifier * knowledge management * knowledge * uncertainty Subject RIV: IN - Informatics, Computer Science

  20. Correlation Dimension-Based Classifier

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    2014-01-01

    Roč. 44, č. 12 (2014), s. 2253-2263 ISSN 2168-2267 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords : classifier * multidimensional data * correlation dimension * scaling exponent * polynomial expansion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014

  1. Ship localization in Santa Barbara Channel using machine learning classifiers.

    Science.gov (United States)

    Niu, Haiqiang; Ozanich, Emma; Gerstoft, Peter

    2017-11-01

    Machine learning classifiers are shown to outperform conventional matched field processing for a deep water (600 m depth) ocean acoustic-based ship range estimation problem in the Santa Barbara Channel Experiment when limited environmental information is known. Recordings of three different ships of opportunity on a vertical array were used as training and test data for the feed-forward neural network and support vector machine classifiers, demonstrating the feasibility of machine learning methods to locate unseen sources. The classifiers perform well up to 10 km range whereas the conventional matched field processing fails at about 4 km range without accurate environmental information.

  2. Combined effect of chemical and electrical synapses in Hindmarsh-Rose neural networks on synchronization and the rate of information.

    Science.gov (United States)

    Baptista, M S; Moukam Kakmeni, F M; Grebogi, C

    2010-09-01

    In this work we studied the combined action of chemical and electrical synapses in small networks of Hindmarsh-Rose (HR) neurons on the synchronous behavior and on the rate of information produced (per time unit) by the networks. We show that if the chemical synapse is excitatory, the larger the chemical synapse strength used the smaller the electrical synapse strength needed to achieve complete synchronization, and for moderate synaptic strengths one should expect to find desynchronous behavior. Otherwise, if the chemical synapse is inhibitory, the larger the chemical synapse strength used the larger the electrical synapse strength needed to achieve complete synchronization, and for moderate synaptic strengths one should expect to find synchronous behaviors. Finally, we show how to calculate semianalytically an upper bound for the rate of information produced per time unit (Kolmogorov-Sinai entropy) in larger networks. As an application, we show that this upper bound is linearly proportional to the number of neurons in a network whose neurons are highly connected.
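    For readers who want to reproduce the building block, a single Hindmarsh-Rose neuron can be integrated as below; coupled networks would add an electrical term proportional to voltage differences and a sigmoidal chemical synaptic term. The parameter values are the commonly used ones and are not necessarily those of the paper.

```python
# One Hindmarsh-Rose neuron integrated with SciPy; couplings are omitted in this sketch.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d, r, s, x_rest, I_ext = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6, 3.0

def hindmarsh_rose(t, state):
    x, y, z = state
    dx = y - a * x**3 + b * x**2 - z + I_ext    # membrane potential
    dy = c - d * x**2 - y                        # fast recovery variable
    dz = r * (s * (x - x_rest) - z)              # slow adaptation current
    return [dx, dy, dz]

sol = solve_ivp(hindmarsh_rose, (0.0, 1000.0), [-1.0, 0.0, 2.0], max_step=0.1)
print(sol.y[0, -5:])    # tail of the bursting membrane-potential trace
```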

  3. Combined IR imaging-neural network method for the estimation of internal temperature in cooked chicken meat

    Science.gov (United States)

    Ibarra, Juan G.; Tao, Yang; Xin, Hongwei

    2000-11-01

    A noninvasive method for the estimation of internal temperature in chicken meat immediately following cooking is proposed. The external temperature from IR images was correlated with measured internal temperature through a multilayer neural network. To provide inputs for the network, time series experiments were conducted to obtain simultaneous observations of internal and external temperatures immediately after cooking, during the cooling process. An IR camera working in the spectral band of 3.4 to 5.0 micrometers registered external temperature distributions without the interference of the close-to-oven environment, while conventional thermocouples registered internal temperatures. For an internal temperature at a given time, simultaneous and lagged external temperature observations were used as the input of the neural network. Based on practical and statistical considerations, a criterion is established to reduce the number of nodes in the neural network input. The combined method was able to estimate internal temperature for times between 0 and 540 s within a standard error of ±1.01 °C, and within an error of ±1.07 °C for short times after cooking (3 min), with two thermograms at times t and t + 30 s. The method has great potential for monitoring the doneness of chicken meat in conveyor-belt-type cooking and can be used as a platform for similar studies in other food products.

  4. NEpiC: a network-assisted algorithm for epigenetic studies using mean and variance combined signals.

    Science.gov (United States)

    Ruan, Peifeng; Shen, Jing; Santella, Regina M; Zhou, Shuigeng; Wang, Shuang

    2016-09-19

    DNA methylation plays an important role in many biological processes. Existing epigenome-wide association studies (EWAS) have successfully identified aberrantly methylated genes in many diseases and disorders, with most studies focusing on analysing methylation sites one at a time. Incorporating prior biological information such as biological networks has been proven to be powerful in identifying disease-associated genes in both gene expression studies and genome-wide association studies (GWAS) but has been understudied in EWAS. Although recent studies have noticed that there are differences in methylation variation in different groups, only a few existing methods consider variance signals in DNA methylation studies. Here, we present a network-assisted algorithm, NEpiC, that combines both mean and variance signals in searching for differentially methylated sub-networks using the protein-protein interaction (PPI) network. In simulation studies, we demonstrate the power gain from using both the prior biological information and variance signals compared to using either alone or neither. Applications to several DNA methylation datasets from the Cancer Genome Atlas (TCGA) project and DNA methylation data on hepatocellular carcinoma (HCC) from the Columbia University Medical Center (CUMC) suggest that the proposed NEpiC algorithm identifies more cancer-related genes and generates better replication results. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
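    One way to see how mean and variance signals can be combined per gene is sketched below, using a Welch t-test for the mean, Levene's test for the variance, and Fisher's method to merge the two p-values. This combination rule is an illustrative assumption; NEpiC defines its own score and then searches the PPI network for high-scoring sub-networks.

```python
# Illustrative per-gene combination of mean and variance signals (not the NEpiC score).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cases = rng.normal(0.3, 1.5, size=50)       # methylation of one gene, cases (synthetic)
controls = rng.normal(0.0, 1.0, size=60)    # ... and controls (synthetic)

def mean_variance_score(x, y):
    _, p_mean = stats.ttest_ind(x, y, equal_var=False)   # mean signal (Welch t-test)
    _, p_var = stats.levene(x, y)                        # variance signal (Levene's test)
    chi2 = -2.0 * (np.log(p_mean) + np.log(p_var))       # Fisher's combination
    return stats.chi2.sf(chi2, df=4)                     # combined p-value

print(mean_variance_score(cases, controls))
```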

  5. Implications of physical symmetries in adaptive image classifiers

    DEFF Research Database (Denmark)

    Sams, Thomas; Hansen, Jonas Lundbek

    2000-01-01

    It is demonstrated that rotational invariance and reflection symmetry of image classifiers lead to a reduction in the number of free parameters in the classifier. When used in adaptive detectors, e.g. neural networks, this may be used to decrease the number of training samples necessary to learn a given classification task, or to improve generalization of the neural network. Notably, the symmetrization of the detector does not compromise the ability to distinguish objects that break the symmetry. (C) 2000 Elsevier Science Ltd. All rights reserved.

  6. A Combination of Central Pattern Generator-based and Reflex-based Neural Networks for Dynamic, Adaptive, Robust Bipedal Locomotion

    DEFF Research Database (Denmark)

    Di Canio, Giuliano; Larsen, Jørgen Christian; Wörgötter, Florentin

    2016-01-01

    Robotic systems inspired from humans have always sparked the curiosity of engineers and scientists. Of many challenges, human locomotion is a very difficult one, where a number of different systems need to interact in order to generate a correct and balanced pattern. To simulate the interaction of these systems, implementations with reflex-based or central pattern generator (CPG)-based controllers have been tested on bipedal robot systems. In this paper we combine the two controller types into a controller that works with both reflex and CPG signals. We use a reflex-based neural network to generate basic walking patterns of a dynamic bipedal walking robot (DACBOT) and then a CPG-based neural network to ensure robust walking behavior.

  7. Classified facilities for environmental protection

    International Nuclear Information System (INIS)

    Anon.

    1993-02-01

    The legislation on classified facilities governs most dangerous or polluting industries or fixed activities. It rests on the law of 9 July 1976 concerning facilities classified for environmental protection and its application decree of 21 September 1977. This legislation, the general texts of which appear in this volume 1, aims to prevent all risks and harmful effects coming from an installation (air, water or soil pollution, waste, even aesthetic harm). The polluting or dangerous activities are defined in a list called the nomenclature, which subjects the facilities to a declaration or an authorization procedure. The authorization is delivered by the prefect at the end of an open and contradictory procedure after a public inquiry. In addition, the facilities can be subjected to technical regulations fixed by the Environment Minister (volume 2) or by the prefect for facilities subjected to declaration (volume 3). (A.B.)

  8. Combined centralised and distributed mechanism for utilisation of node association in broadband wireless network

    Science.gov (United States)

    Ulvan, A.; Ulvan, M.; Pranoto, H.

    2018-02-01

    A mobile broadband wireless access system has stations that may be fixed, nomadic or mobile. Regarding mobility, the node association procedure is critical for network entry as well as network re-entry during handover. The flexibility and utilisation of MAC protocol scheduling play an important role. The standard provides the Partition Scheme as the scheduling mechanism, which separates the allocation of minislots for scheduling. However, minislots cannot be flexibly reserved for centralised and distributed scheduling. In this paper we analyse the scheduling mechanism to improve the utilisation of minislot allocation during the exchange of MAC messages. Centralised and distributed scheduling is implemented in several topology scenarios. The results show that the proposed mechanism has better performance for node association than the partition scheme.

  9. Combining evolutionary game theory and network theory to analyze human cooperation patterns

    International Nuclear Information System (INIS)

    Scatà, Marialisa; Di Stefano, Alessandro; La Corte, Aurelio; Liò, Pietro; Catania, Emanuele; Guardo, Ermanno; Pagano, Salvatore

    2016-01-01

    Highlights: • We investigate the evolutionary dynamics of human cooperation in a social network. • We introduce the concepts of “Critical Mass”, centrality measure and homophily. • The emergence of cooperation is affected by the spatial choice of the “Critical Mass”. • Our findings show that homophily speeds up the convergence towards cooperation. • Centrality and “Critical Mass” spatial choice partially offset the impact of homophily. - Abstract: As natural systems continuously evolve, the human cooperation dilemma represents an increasingly challenging question. Humans cooperate in natural and social systems, but how this happens and what mechanisms rule the emergence of cooperation remain an open and fascinating issue. In this work, we investigate the evolution of cooperation through the analysis of the evolutionary dynamics of behaviours within the social network, where nodes can choose to cooperate or defect following the classical social dilemmas represented by the Prisoner's Dilemma and Snowdrift games. To this aim, we introduce a sociological concept and statistical estimator, “Critical Mass”, to detect the minimum initial seed of cooperators able to trigger the diffusion process, together with a centrality measure used to select these nodes within the social network. Selecting different spatial configurations of the Critical Mass nodes, we highlight how the emergence of cooperation can be influenced by this spatial choice of the initial core in the network. Moreover, we aim to shed light on how the concept of homophily, a social shaping factor for which “birds of a feather flock together”, can affect the evolutionary process. Our findings show that homophily speeds up the diffusion process and makes the convergence towards human cooperation quicker, while the centrality measure, and thus the Critical Mass selection, plays a key role in the evolution, showing how the spatial configurations can create some hidden patterns, partially
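    A toy version of this setting can be simulated as below: a Prisoner's Dilemma on a scale-free network, with the "Critical Mass" seeded as the highest-degree nodes and synchronous imitate-the-best dynamics. The payoff values and update rule are illustrative assumptions, not the paper's exact model.

```python
# Toy evolutionary Prisoner's Dilemma with a degree-based "Critical Mass" seed.
import networkx as nx

R, S, T, P = 1.0, 0.0, 1.5, 0.1           # PD payoffs (T > R > P > S); illustrative values
G = nx.barabasi_albert_graph(200, 3, seed=0)

# Critical Mass: seed the top 5% of nodes by degree as cooperators.
seed = {n for n, _ in sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10]}
coop = {n: (n in seed) for n in G}

def payoff(n):
    return sum((R if coop[m] else S) if coop[n] else (T if coop[m] else P)
               for m in G.neighbors(n))

for _ in range(50):                        # synchronous imitate-the-best dynamics
    scores = {n: payoff(n) for n in G}
    coop = {n: coop[max(list(G.neighbors(n)) + [n], key=scores.get)] for n in G}

print(sum(coop.values()) / G.number_of_nodes())   # final fraction of cooperators
```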

  10. Neural networks for combined control of capacitor banks and voltage regulators in distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Gu, Z.; Rizy, D.T.

    1996-02-01

    A neural network for controlling shunt capacitor banks and feeder voltage regulators in electric distribution systems is presented. The objective of the neural controller is to minimize total I²R losses and maintain all bus voltages within standard limits. The performance of the neural network for different input selections and training data is discussed and compared. Two different input selections are tried: one using the previous control states of the capacitors and regulator along with measured line flows and voltage, which is equivalent to having feedback, and the other using measured line flows and voltage without previous control settings. The results indicate that the neural net controller with feedback can outperform the one without. Also, proper selection of a training data set that adequately covers the operating space of the distribution system is important for achieving satisfactory performance with the neural controller. The neural controller is tested on a radially configured distribution system with 30 buses, 5 switchable capacitor banks and one nine-tap line regulator to demonstrate the performance characteristics associated with these principles. Monte Carlo simulations show that a carefully designed and relatively compact neural network with a small but carefully developed training set can perform quite well under both slight and extreme variations of loading conditions.

  11. An expert-based approach to forest road network planning by combining Delphi and spatial multi-criteria evaluation.

    Science.gov (United States)

    Hayati, Elyas; Majnounian, Baris; Abdi, Ehsan; Sessions, John; Makhdoum, Majid

    2013-02-01

    Changes in forest landscapes resulting from road construction have increased remarkably in the last few years. On the other hand, the sustainable management of forest resources can only be achieved through a well-organized road network. In order to minimize the environmental impacts of forest roads, forest road managers must design the road network both efficiently and in an environmentally sound manner. Efficient planning methodologies can assist forest road managers in considering the technical, economic, and environmental factors that affect forest road planning. This paper describes a three-stage methodology using the Delphi method for selecting the important criteria, the Analytic Hierarchy Process for obtaining the relative importance of the criteria, and finally, a spatial multi-criteria evaluation in a geographic information system (GIS) environment for identifying the lowest-impact road network alternative. Results of the Delphi method revealed that ground slope, lithology, distance from stream network, distance from faults, landslide susceptibility, erosion susceptibility, geology, and soil texture are the most important criteria for forest road planning in the study area. The suitability map for road planning was then obtained by combining the fuzzy map layers of these criteria with respect to their weights. Nine road network alternatives were designed using PEGGER, an ArcView GIS extension, and finally, their values were extracted from the suitability map. Results showed that the methodology was useful for identifying roads that met environmental and cost considerations. Based on this work, we suggest that future work in forest road planning using multi-criteria evaluation and decision making be considered in other regions, and that the road planning criteria identified in this study may be useful there.
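    The spatial multi-criteria evaluation step reduces, in its simplest form, to a weighted linear combination of fuzzy criterion layers using AHP-derived weights, as sketched below; the raster layers and weights are synthetic placeholders.

```python
# Weighted linear combination of fuzzy criterion layers (synthetic placeholders).
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)                             # raster grid
layers = {                                     # fuzzy suitability values in [0, 1]
    "slope": rng.random(shape),
    "distance_to_streams": rng.random(shape),
    "landslide_susceptibility": rng.random(shape),
}
weights = {"slope": 0.5, "distance_to_streams": 0.3,
           "landslide_susceptibility": 0.2}    # AHP-style weights summing to 1

suitability = sum(weights[name] * layer for name, layer in layers.items())
print(suitability.mean(), suitability.max())   # map used to compare road alternatives
```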

  12. 76 FR 34761 - Classified National Security Information

    Science.gov (United States)

    2011-06-14

    ... MARINE MAMMAL COMMISSION Classified National Security Information [Directive 11-01] AGENCY: Marine... Commission's (MMC) policy on classified information, as directed by Information Security Oversight Office... of Executive Order 13526, ``Classified National Security Information,'' and 32 CFR part 2001...

  13. Application of a Hybrid Method Combining Grey Model and Back Propagation Artificial Neural Networks to Forecast Hepatitis B in China

    Directory of Open Access Journals (Sweden)

    Ruijing Gan

    2015-01-01

    Full Text Available Accurate incidence forecasting of infectious disease provides potentially valuable insights in its own right. It is critical for early prevention and may contribute to health services management and syndrome surveillance. This study investigates the use of a hybrid algorithm combining the grey model (GM) and back propagation artificial neural networks (BP-ANN) to forecast hepatitis B in China based on the yearly numbers of hepatitis B cases and to evaluate the method's feasibility. The results showed that the proposed method has advantages over GM (1, 1) and GM (2, 1) in all the evaluation indexes.

  14. Application of a hybrid method combining grey model and back propagation artificial neural networks to forecast hepatitis B in china.

    Science.gov (United States)

    Gan, Ruijing; Chen, Xiaojun; Yan, Yu; Huang, Daizheng

    2015-01-01

    Accurate incidence forecasting of infectious disease provides potentially valuable insights in its own right. It is critical for early prevention and may contribute to health services management and syndrome surveillance. This study investigates the use of a hybrid algorithm combining the grey model (GM) and back propagation artificial neural networks (BP-ANN) to forecast hepatitis B in China based on the yearly numbers of hepatitis B cases and to evaluate the method's feasibility. The results showed that the proposed method has advantages over GM (1, 1) and GM (2, 1) in all the evaluation indexes.

  15. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.; Ratterman, Joseph D.

    2018-01-30

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
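    The forwarding rule reads, in essence: on receiving a packet, pick the neighbouring link that lies on a path toward the destination and send the packet along it. A toy sketch over a tree-shaped stand-in for the global combining network is given below; the topology and the shortest-path link selection are illustrative assumptions, not the patented method itself.

```python
# Toy link selection and forwarding over a tree-shaped placeholder network.
import networkx as nx

tree = nx.balanced_tree(r=2, h=3)              # placeholder for the combining network

def forward(node, destination):
    path = nx.shortest_path(tree, node, destination)
    return path[1] if len(path) > 1 else node  # adjacent node on the selected link

hops, current = [], 11
while current != 4:
    current = forward(current, 4)              # forward along the chosen link
    hops.append(current)
print(hops)                                    # route taken by the packet
```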

  16. Towards a proper assignment of systemic risk: the combined roles of network topology and shock characteristics.

    Science.gov (United States)

    Loepfe, Lasse; Cabrales, Antonio; Sánchez, Angel

    2013-01-01

    The 2007-2008 financial crisis solidified the consensus among policymakers that a macro-prudential approach to regulation and supervision should be adopted. The currently preferred policy option is the regulation of capital requirements, with the main focus on combating procyclicality and on identifying the banks that have a high systemic importance, those that are "too big to fail". Here we argue that the concept of systemic risk should include the analysis of the system as a whole, and we systematically explore the properties of network topology that are most important, for policy purposes, for resistance to shocks. In a thorough study going from analytical models to empirical data, we show two sharp transitions from safe to risky regimes: 1) diversification becomes harmful with just a small fraction (~2%) of the shocks sampled from fat-tailed shock distributions, and 2) when large shocks are present, a critical link density exists at which an effective giant cluster forms and most firms become vulnerable. This threshold depends on the network topology, especially on modularity. Firm size heterogeneity has important but diverse effects that are heavily dependent on shock characteristics. Similarly, degree heterogeneity increases vulnerability only when shocks are directed at the most connected firms. Furthermore, by studying the structure of the core of the transnational corporation network from real data, we show that its stability could be clearly increased by removing some of the links with the highest betweenness centrality. Our results provide novel insights and arguments for policy makers to focus surveillance on the connections between firms, in addition to capital requirements directed at the nodes.

  17. Predicting targeted drug combinations based on Pareto optimal patterns of coexpression network connectivity.

    Science.gov (United States)

    Penrod, Nadia M; Greene, Casey S; Moore, Jason H

    2014-01-01

    Molecularly targeted drugs promise a safer and more effective treatment modality than conventional chemotherapy for cancer patients. However, tumors are dynamic systems that readily adapt to these agents activating alternative survival pathways as they evolve resistant phenotypes. Combination therapies can overcome resistance but finding the optimal combinations efficiently presents a formidable challenge. Here we introduce a new paradigm for the design of combination therapy treatment strategies that exploits the tumor adaptive process to identify context-dependent essential genes as druggable targets. We have developed a framework to mine high-throughput transcriptomic data, based on differential coexpression and Pareto optimization, to investigate drug-induced tumor adaptation. We use this approach to identify tumor-essential genes as druggable candidates. We apply our method to a set of ER(+) breast tumor samples, collected before (n = 58) and after (n = 60) neoadjuvant treatment with the aromatase inhibitor letrozole, to prioritize genes as targets for combination therapy with letrozole treatment. We validate letrozole-induced tumor adaptation through coexpression and pathway analyses in an independent data set (n = 18). We find pervasive differential coexpression between the untreated and letrozole-treated tumor samples as evidence of letrozole-induced tumor adaptation. Based on patterns of coexpression, we identify ten genes as potential candidates for combination therapy with letrozole including EPCAM, a letrozole-induced essential gene and a target to which drugs have already been developed as cancer therapeutics. Through replication, we validate six letrozole-induced coexpression relationships and confirm the epithelial-to-mesenchymal transition as a process that is upregulated in the residual tumor samples following letrozole treatment. To derive the greatest benefit from molecularly targeted drugs it is critical to design combination
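    The differential co-expression signal at the heart of this approach can be sketched as the change in pairwise gene correlation between pre- and post-treatment samples, as below; the synthetic expression matrices and the simple correlation-difference score are placeholders for the published Pareto-based procedure.

```python
# Differential co-expression as the change in pairwise correlation (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
pre = rng.normal(size=(58, 5))                      # samples x genes, before treatment
post = rng.normal(size=(60, 5))                     # samples x genes, after treatment
post[:, 1] = 0.8 * post[:, 0] + 0.2 * post[:, 1]    # induce co-expression of genes 0 and 1

def differential_coexpression(a, b):
    return np.corrcoef(b, rowvar=False) - np.corrcoef(a, rowvar=False)

delta = differential_coexpression(pre, post)
i, j = np.unravel_index(np.abs(np.triu(delta, k=1)).argmax(), delta.shape)
print(i, j, delta[i, j])                            # most rewired gene pair
```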

  18. The spatial decision-supporting system combination of RBR & CBR based on artificial neural network and association rules

    Science.gov (United States)

    Tian, Yangge; Bian, Fuling

    2007-06-01

    Artificial intelligence technology should be integrated with geographic information systems to build a spatial decision-supporting system (SDSS). The paper discusses the structure of an SDSS and, after comparing the characteristics of RBR and CBR, proposes the framework of a spatial decision system that combines RBR and CBR, drawing on the advantages of both. The paper also discusses CBR in agricultural spatial decisions, the application of artificial neural networks (ANN) in CBR, and the enrichment of the inference rule base based on association rules, and it tests and verifies the system design with an example of evaluating crop adaptability.

  19. A neural-fuzzy approach to classify the ecological status in surface waters

    International Nuclear Information System (INIS)

    Ocampo-Duque, William; Schuhmacher, Marta; Domingo, Jose L.

    2007-01-01

    A methodology based on a hybrid approach that combines fuzzy inference systems and artificial neural networks has been used to classify ecological status in surface waters. This methodology has been proposed to deal efficiently with the non-linearity and highly subjective nature of the variables involved in this serious problem. Ecological status has been assessed with biological, hydro-morphological, and physicochemical indicators. A data set collected from 378 sampling sites in the Ebro river basin has been used to train and validate the hybrid model. Up to 97.6% of sampling sites have been correctly classified with neural-fuzzy models. This performance proved very competitive when compared with other classification algorithms: with non-parametric classification-regression trees and probabilistic neural networks, the predictive capacities were 90.7% and 97.0%, respectively. The proposed methodology can support decision-makers in the evaluation and classification of ecological status, as required by the EU Water Framework Directive. - Fuzzy inference systems can be used as environmental classifiers

  20. Two channel EEG thought pattern classifier.

    Science.gov (United States)

    Craig, D A; Nguyen, H T; Burchey, H A

    2006-01-01

    This paper presents a real-time electro-encephalogram (EEG) identification system with the goal of achieving hands-free control. With two EEG electrodes placed on the scalp of the user, EEG signals are amplified and digitised directly using a ProComp+ encoder and transferred to the host computer through the RS232 interface. Using a real-time multilayer neural network, the actual classification for the control of a powered wheelchair has a very fast response: it can detect changes in the user's thought pattern within 1 second. Using only two EEG electrodes at positions O1 and C4, the system can classify three mental commands (forward, left and right) with an accuracy of more than 79%.

  1. STATISTICAL TOOLS FOR CLASSIFYING GALAXY GROUP DYNAMICS

    International Nuclear Information System (INIS)

    Hou, Annie; Parker, Laura C.; Harris, William E.; Wilman, David J.

    2009-01-01

    The dynamical state of galaxy groups at intermediate redshifts can provide information about the growth of structure in the universe. We examine three goodness-of-fit tests, the Anderson-Darling (A-D), Kolmogorov, and χ² tests, in order to determine which statistical tool is best able to distinguish between groups that are relaxed and those that are dynamically complex. We perform Monte Carlo simulations of these three tests and show that the χ² test is profoundly unreliable for groups with fewer than 30 members. Power studies of the Kolmogorov and A-D tests are conducted to test their robustness for various sample sizes. We then apply these tests to a sample of the second Canadian Network for Observational Cosmology Redshift Survey (CNOC2) galaxy groups and find that the A-D test is far more reliable and powerful at detecting real departures from an underlying Gaussian distribution than the more commonly used χ² and Kolmogorov tests. We use this statistic to classify a sample of the CNOC2 groups and find that 34 of 106 groups are inconsistent with an underlying Gaussian velocity distribution, and thus do not appear relaxed. In addition, we compute velocity dispersion profiles (VDPs) for all groups with more than 20 members and compare the overall features of the Gaussian and non-Gaussian groups, finding that the VDPs of the non-Gaussian groups are distinct from those classified as Gaussian.
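    The A-D screen for a single group can be reproduced with SciPy as sketched below; the member velocities are synthetic placeholders rather than CNOC2 data.

```python
# Anderson-Darling normality screen on one group's line-of-sight velocities (synthetic).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
velocities = rng.normal(loc=45000.0, scale=350.0, size=25)     # km/s, 25 members

result = stats.anderson(velocities, dist="norm")
# Compare the A-D statistic with the critical value at the 5% significance level.
level = list(result.significance_level).index(5.0)
gaussian = result.statistic < result.critical_values[level]
print(result.statistic, gaussian)    # True -> consistent with a relaxed (Gaussian) group
```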

  2. Combining a dispersal model with network theory to assess habitat connectivity.

    Science.gov (United States)

    Lookingbill, Todd R; Gardner, Robert H; Ferrari, Joseph R; Keller, Cherry E

    2010-03-01

    Assessing the potential for threatened species to persist and spread within fragmented landscapes requires the identification of core areas that can sustain resident populations and dispersal corridors that can link these core areas with isolated patches of remnant habitat. We developed a set of GIS tools, simulation methods, and network analysis procedures to assess potential landscape connectivity for the Delmarva fox squirrel (DFS; Sciurus niger cinereus), an endangered species inhabiting forested areas on the Delmarva Peninsula, USA. Information on the DFS's life history and dispersal characteristics, together with data on the composition and configuration of land cover on the peninsula, were used as input data for an individual-based model to simulate dispersal patterns of millions of squirrels. Simulation results were then assessed using methods from graph theory, which quantifies habitat attributes associated with local and global connectivity. Several bottlenecks to dispersal were identified that were not apparent from simple distance-based metrics, highlighting specific locations for landscape conservation, restoration, and/or squirrel translocations. Our approach links simulation models, network analysis, and available field data in an efficient and general manner, making these methods useful and appropriate for assessing the movement dynamics of threatened species within landscapes being altered by human and natural disturbances.

  3. A New Processing Method Combined with BP Neural Network for Francis Turbine Synthetic Characteristic Curve Research

    Directory of Open Access Journals (Sweden)

    Junyi Li

    2017-01-01

    Full Text Available A BP (backpropagation) neural network method is employed to address a problem in the current processing of hydroturbine synthetic characteristic curves: most studies only consider data in the high-efficiency, large guide-vane-opening area, which can hardly meet the research requirements of transition processes, especially in large-fluctuation situations. The principle of the proposed method is to convert the nonlinear characteristics of the turbine to torque and flow characteristics, which can be used directly for real-time simulation based on the neural network. Results show that the obtained sample data can be extended successfully to cover wider working areas under different operation conditions. Another major contribution of this paper is the proposed resampling technique, which overcomes the limitation imposed by the sampling period on simulation. In addition, a detailed analysis of improvements to the iteration convergence of the pressure loop is presented, leading to better iterative convergence during the head pressure calculation. Actual applications verify that the methods proposed in this paper give better simulation results, closer to the field, and provide a new perspective for hydroturbine synthetic characteristic curve fitting and modeling.

  4. The development of artificial neural networks to predict virological response to combination HIV therapy

    NARCIS (Netherlands)

    Larder, Brendan; Wang, Dechao; Revell, Andrew; Montaner, Julio; Harrigan, Richard; de Wolf, Frank; Lange, Joep; Wegner, Scott; Ruiz, Lidia; Pérez-Elías, Maria Jésus; Emery, Sean; Gatell, Jose; D'Arminio Monforte, Antonella; Torti, Carlo; Zazzi, Maurizio; Lane, Clifford

    2007-01-01

    When used in combination, antiretroviral drugs are highly effective for suppressing HIV replication. Nevertheless, treatment failure commonly occurs and is generally associated with viral drug resistance. The choice of an alternative regimen may be guided by a drug-resistance test. However,

  5. Blind source extraction for a combined fixed and wireless sensor network

    NARCIS (Netherlands)

    Bloemendal, B.B.A.J.; Laar, van de J.; Sommen, P.C.W.

    2012-01-01

    The emergence of wireless microphones in everyday life creates opportunities to exploit spatial diversity when using fixed microphone arrays combined with these wireless microphones. Traditional array signal processing (ASP) techniques are not suitable for such a scenario since the locations of the

  6. Equal gain combining for cooperative spectrum sensing in cognitive radio networks

    KAUST Repository

    Hamza, Doha R.; Aï ssa, Sonia; Aniba, Ghassane

    2014-01-01

    are not tight. The cases of hard sensing and soft sensing are considered and we provide examples in which hard sensing is advantageous to soft sensing. We contrast the performance of SEGC with maximum ratio combining of the sensors' results and provide examples

  7. Classifier-guided sampling for discrete variable, discontinuous design space exploration: Convergence and computational performance

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shahan, David W. [HRL Labs., LLC, Malibu, CA (United States); Seepersad, Carolyn Conner [Univ. of Texas, Austin, TX (United States)

    2014-04-22

    A classifier-guided sampling (CGS) method is introduced for solving engineering design optimization problems with discrete and/or continuous variables and continuous and/or discontinuous responses. The method merges concepts from metamodel-guided sampling and population-based optimization algorithms. The CGS method uses a Bayesian network classifier for predicting the performance of new designs based on a set of known observations or training points. Unlike most metamodeling techniques, however, the classifier assigns a categorical class label to a new design, rather than predicting the resulting response in continuous space, and thereby accommodates nondifferentiable and discontinuous functions of discrete or categorical variables. The CGS method uses these classifiers to guide a population-based sampling process towards combinations of discrete and/or continuous variable values with a high probability of yielding preferred performance. Accordingly, the CGS method is appropriate for discrete/discontinuous design problems that are ill-suited for conventional metamodeling techniques and too computationally expensive to be solved by population-based algorithms alone. In addition, the rates of convergence and computational properties of the CGS method are investigated when applied to a set of discrete variable optimization problems. Results show that the CGS method significantly improves the rate of convergence towards known global optima, on average, when compared to genetic algorithms.
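
    A minimal sketch of the sampling loop described above is given below: evaluated designs are labelled preferred/not preferred, a classifier is trained on those labels, and only the candidates the classifier rates as most likely preferred are evaluated next. A Gaussian naive Bayes model stands in for the Bayesian network classifier, and the objective function, variable bounds and batch sizes are illustrative assumptions.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(0)
      objective = lambda x: np.sum((x - 3) ** 2, axis=1)      # toy discrete objective, optimum at x = 3

      X = rng.integers(0, 7, size=(20, 4)).astype(float)      # initial evaluated designs
      f = objective(X)

      for _ in range(10):
          labels = (f <= np.median(f)).astype(int)            # 1 = "preferred" class
          clf = GaussianNB().fit(X, labels)
          cand = rng.integers(0, 7, size=(200, 4)).astype(float)
          p = clf.predict_proba(cand)[:, 1]                   # probability of being preferred
          pick = cand[np.argsort(p)[-5:]]                     # evaluate only the 5 most promising candidates
          X, f = np.vstack([X, pick]), np.concatenate([f, objective(pick)])

      print("best design:", X[np.argmin(f)], "objective:", f.min())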

  8. Combining wireless sensor networks and semantic middleware for an Internet of Things-based sportsman/woman monitoring application.

    Science.gov (United States)

    Rodríguez-Molina, Jesús; Martínez, José-Fernán; Castillejo, Pedro; López, Lourdes

    2013-01-31

    Wireless Sensor Networks (WSNs) are spearheading the efforts taken to build and deploy systems aiming to accomplish the ultimate objectives of the Internet of Things. Due to the sensors WSNs nodes are provided with, and to their ubiquity and pervasive capabilities, these networks become extremely suitable for many applications that so-called conventional cabled or wireless networks are unable to handle. One of these still underdeveloped applications is monitoring physical parameters on a person. This is an especially interesting application regarding their age or activity, for any detected hazardous parameter can be notified not only to the monitored person as a warning, but also to any third party that may be helpful under critical circumstances, such as relatives or healthcare centers. We propose a system built to monitor a sportsman/woman during a workout session or performing a sport-related indoor activity. Sensors have been deployed by means of several nodes acting as the nodes of a WSN, along with a semantic middleware development used for hardware complexity abstraction purposes. The data extracted from the environment, combined with the information obtained from the user, will compose the basis of the services that can be obtained.

  9. Combining Wireless Sensor Networks and Semantic Middleware for an Internet of Things-Based Sportsman/Woman Monitoring Application

    Science.gov (United States)

    Rodríguez-Molina, Jesús; Martínez, José-Fernán; Castillejo, Pedro; López, Lourdes

    2013-01-01

    Wireless Sensor Networks (WSNs) are spearheading the efforts taken to build and deploy systems aiming to accomplish the ultimate objectives of the Internet of Things. Due to the sensors WSNs nodes are provided with, and to their ubiquity and pervasive capabilities, these networks become extremely suitable for many applications that so-called conventional cabled or wireless networks are unable to handle. One of these still underdeveloped applications is monitoring physical parameters on a person. This is an especially interesting application regarding their age or activity, for any detected hazardous parameter can be notified not only to the monitored person as a warning, but also to any third party that may be helpful under critical circumstances, such as relatives or healthcare centers. We propose a system built to monitor a sportsman/woman during a workout session or performing a sport-related indoor activity. Sensors have been deployed by means of several nodes acting as the nodes of a WSN, along with a semantic middleware development used for hardware complexity abstraction purposes. The data extracted from the environment, combined with the information obtained from the user, will compose the basis of the services that can be obtained. PMID:23385405

  10. Combination of Markov state models and kinetic networks for the analysis of molecular dynamics simulations of peptide folding.

    Science.gov (United States)

    Radford, Isolde H; Fersht, Alan R; Settanni, Giovanni

    2011-06-09

    Atomistic molecular dynamics simulations of the TZ1 beta-hairpin peptide have been carried out using an implicit model for the solvent. The trajectories have been analyzed using a Markov state model defined on the projections along two significant observables and a kinetic network approach. The Markov state model allowed for an unbiased identification of the metastable states of the system, and provided the basis for commitment probability calculations performed on the kinetic network. The kinetic network analysis served to extract the main transition state for folding of the peptide and to validate the results from the Markov state analysis. The combination of the two techniques allowed for a consistent and concise characterization of the dynamics of the peptide. The slowest relaxation process identified is the exchange between variably folded and denatured species, and the second slowest process is the exchange between two different subsets of the denatured state which could not be otherwise identified by simple inspection of the projected trajectory. The third slowest process is the exchange between a fully native and a partially folded intermediate state characterized by a native turn with a proximal backbone H-bond, and frayed side-chain packing and termini. The transition state for the main folding reaction is similar to the intermediate state, although a more native like side-chain packing is observed.
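
    The core object of the Markov state analysis mentioned above is a transition matrix estimated from the discretized trajectory. The sketch below, which uses a synthetic state sequence and an arbitrary lag time, shows that estimation step and the implied timescales obtained from the eigenvalues; it illustrates the general technique, not the authors' exact procedure.

      import numpy as np

      rng = np.random.default_rng(1)
      traj = rng.integers(0, 4, size=1000)              # placeholder sequence of discrete state indices
      lag = 5                                           # lag time in frames (an assumption)

      n = traj.max() + 1
      counts = np.zeros((n, n))
      for t in range(len(traj) - lag):
          counts[traj[t], traj[t + lag]] += 1           # count transitions at the chosen lag

      T = counts / counts.sum(axis=1, keepdims=True)    # row-stochastic transition matrix

      # the slowest relaxation processes correspond to the eigenvalues of T closest to 1
      lam = np.sort(np.linalg.eigvals(T).real)[::-1]
      timescales = -lag / np.log(np.clip(lam[1:], 1e-12, None))
      print(T.round(2), timescales.round(1))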

  11. Novel amphiphilic poly(dimethylsiloxane) based polyurethane networks tethered with carboxybetaine and their combined antibacterial and anti-adhesive property

    Science.gov (United States)

    Jiang, Jingxian; Fu, Yuchen; Zhang, Qinghua; Zhan, Xiaoli; Chen, Fengqiu

    2017-08-01

    Traditional nonfouling materials are powerless against bacterial cell attachment, while hydrophobic bactericidal surfaces always suffer from nonspecific protein adsorption and the accumulation of dead bacterial cells. Here, amphiphilic polyurethane (PU) networks modified with poly(dimethylsiloxane) (PDMS) and cationic carboxybetaine diol through a simple crosslinking reaction were developed, which had an antibacterial efficiency of 97.7%. Thereafter, the hydrolysis of carboxybetaine ester into zwitterionic groups brought about anti-adhesive properties against bacteria and proteins. The surface chemical composition and wettability of the PU network surfaces were investigated by attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR), X-ray photoelectron spectroscopy (XPS) and contact angle analysis. The surface distribution of PDMS and zwitterionic segments produced an obvious amphiphilic heterogeneous surface, which was demonstrated by atomic force microscopy (AFM). Enzyme-linked immunosorbent assays (ELISA) were used to test the nonspecific protein adsorption behavior. With the advantages of the transition from excellent bactericidal performance to anti-adhesion and the combination of fouling resistance and fouling release properties, the designed PDMS-based amphiphilic PU network shows great application potential in biomedical devices and marine facilities.

  12. Combining Wireless Sensor Networks and Semantic Middleware for an Internet of Things-Based Sportsman/Woman Monitoring Application

    Directory of Open Access Journals (Sweden)

    Lourdes López

    2013-01-01

    Full Text Available Wireless Sensor Networks (WSNs) are spearheading the efforts taken to build and deploy systems aiming to accomplish the ultimate objectives of the Internet of Things. Due to the sensors WSNs nodes are provided with, and to their ubiquity and pervasive capabilities, these networks become extremely suitable for many applications that so-called conventional cabled or wireless networks are unable to handle. One of these still underdeveloped applications is monitoring physical parameters on a person. This is an especially interesting application regarding their age or activity, for any detected hazardous parameter can be notified not only to the monitored person as a warning, but also to any third party that may be helpful under critical circumstances, such as relatives or healthcare centers. We propose a system built to monitor a sportsman/woman during a workout session or performing a sport-related indoor activity. Sensors have been deployed by means of several nodes acting as the nodes of a WSN, along with a semantic middleware development used for hardware complexity abstraction purposes. The data extracted from the environment, combined with the information obtained from the user, will compose the basis of the services that can be obtained.

  13. Development of a platform to combine sensor networks and home robots to improve fall detection in the home environment.

    Science.gov (United States)

    Della Toffola, Luca; Patel, Shyamal; Chen, Bor-rong; Ozsecen, Yalgin M; Puiatti, Alessandro; Bonato, Paolo

    2011-01-01

    Over the last decade, significant progress has been made in the development of wearable sensor systems for continuous health monitoring in home and community settings. One of the main areas of application for these wearable sensor systems is detecting emergency events such as falls. Wearable sensors like accelerometers are increasingly being used to monitor the daily activities of individuals at risk of falls, detect emergency events and send alerts to caregivers. However, such systems tend to have a high rate of false alarms, which leads to low compliance levels. Home robots can give caregivers the ability to quickly make an assessment and intervene if an emergency event is detected. This can provide an additional layer for detecting false positives, which can lead to improved compliance. In this paper, we present preliminary work on the development of a fall detection system based on a combination of sensor networks and home robots. The sensor network architecture comprises body-worn sensors and ambient sensors distributed in the environment. We present the software architecture and conceptual design of the home robotic platform. We also perform a preliminary characterization of the sensor network in terms of latencies and battery lifetime.

  14. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    International Nuclear Information System (INIS)

    Blanco, A; Rodriguez, R; Martinez-Maranon, I

    2014-01-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrosted) to identify differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect the sample proximities accurately. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.

  15. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    Science.gov (United States)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrosted) to identify differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect the sample proximities accurately. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.
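
    A minimal sketch of the combining idea is shown below: one nearest-neighbour classifier is built per dissimilarity measure and their predictions are merged by majority vote. The colour data, the choice of measures (Euclidean, city-block, correlation) and the k-NN base classifier are placeholders for illustration, not the exact models of the paper.

      import numpy as np
      from scipy.spatial.distance import cdist

      def knn_predict(D_test_train, y_train, k=3):
          # classify each test sample by the majority label of its k nearest training
          # samples in a precomputed dissimilarity matrix (labels are 0/1 here)
          nearest = np.argsort(D_test_train, axis=1)[:, :k]
          return (y_train[nearest].mean(axis=1) > 0.5).astype(int)

      rng = np.random.default_rng(0)
      X_train, y_train = rng.normal(size=(40, 3)), rng.integers(0, 2, 40)   # placeholder colour features
      X_test = rng.normal(size=(10, 3))

      votes = [knn_predict(cdist(X_test, X_train, metric=m), y_train)
               for m in ("euclidean", "cityblock", "correlation")]

      ensemble = (np.mean(votes, axis=0) > 0.5).astype(int)                 # majority vote over classifiers
      print(ensemble)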

  16. Application of a hybrid method based on the combination of genetic algorithm and Hopfield neural network for burnable poison placement

    International Nuclear Information System (INIS)

    Khoshahval, F.; Fadaei, A.

    2012-01-01

    Highlights: ► The performance of GA, HNN and their combination for BPP optimization in a PWR core is adequate. ► HNN + GA appears to reach a better final parameter value than the two other methods. ► The computation time for HNN + GA is higher than for GA and HNN, so a trade-off is necessary. - Abstract: In the last decades the genetic algorithm (GA) and the Hopfield Neural Network (HNN) have attracted considerable attention for the solution of optimization problems. In this paper, a hybrid optimization method based on the combination of the GA and HNN is introduced and applied to the burnable poison placement (BPP) problem to increase the quality of the results. BPP in a nuclear reactor core is a combinatorial and complicated problem. The arrangement and worth of the burnable poisons (BPs) have a considerable effect on the main control parameters of a nuclear reactor. Improper design and arrangement of the BPs can be dangerous with respect to nuclear reactor safety. In this paper, increasing BP worth along with minimizing the radial power peaking are considered as objective functions. Three optimization algorithms, the genetic algorithm, Hopfield neural network optimization and a hybrid optimization method, are applied to the BPP problem and their efficiencies are compared. The hybrid optimization method gives better results in finding a BP arrangement.

  17. Semiautomated tremor detection using a combined cross-correlation and neural network approach

    Science.gov (United States)

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2013-01-01

    Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low‒amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross‒correlation technique, followed by a Self‒Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as being “semiautomated”. We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3 week long test data set and compare to the SOM output and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal‒to‒noise ratios and number of available stations. We find detection completeness of 96% for tremor events with signal‒to‒noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13 month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.
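
    The first stage of the workflow described above reduces the data volume with envelope cross-correlation before the Self-Organizing Map step. The sketch below shows that stage only, on synthetic traces: a smoothed Hilbert envelope is computed per station and the normalized cross-correlation of a station pair is taken; the window length and signals are placeholders, and the SOM classification is not reproduced.

      import numpy as np
      from scipy.signal import hilbert

      def envelope(x, win=200):
          env = np.abs(hilbert(x))                                      # analytic-signal envelope
          return np.convolve(env, np.ones(win) / win, mode="same")      # moving-average smoothing

      def normalized_xcorr(a, b):
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          return np.correlate(a, b, mode="full") / len(a)

      rng = np.random.default_rng(0)
      sta1 = rng.normal(size=6000)                                      # placeholder seismogram, station 1
      sta2 = np.roll(sta1, 40) + 0.5 * rng.normal(size=6000)            # delayed, noisier copy at station 2

      cc = normalized_xcorr(envelope(sta1), envelope(sta2))
      print("peak envelope correlation:", round(float(cc.max()), 2))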

  18. Application of 1 D Finite Element Method in Combination with Laminar Solution Method for Pipe Network Analysis

    Science.gov (United States)

    Dudar, O. I.; Dudar, E. S.

    2017-11-01

    The features of applying the one-dimensional (1D) finite element method (FEM) in combination with the laminar solutions method (LSM) to the calculation of underground ventilation networks are considered. In this case the processes of heat and mass transfer change the properties of the fluid (a binary vapour-air mix). Under the action of gravitational forces this leads to such phenomena as natural draft, local circulation, etc. The FEM relations accounting for the action of gravity, the mass conservation law and the dependence of the vapour-air mix properties on the thermodynamic parameters are derived so that these phenomena can be modelled. The analogy between the elastic and plastic rod deformation processes and the processes of laminar and turbulent flow in a pipe is described. Owing to this analogy, the guaranteed convergence of the elastic solutions method for materials of plastic type implies the guaranteed convergence of the LSM for any regime of turbulent flow in a rough pipe. The convergence rate of the FEM - LSM is investigated by means of numerical experiments and proves to be much higher than the convergence rate of the Cross - Andriyashev method. Data from other authors on the convergence rate comparison for the finite element method, the Newton method and the gradient method are provided. These data allow one to conclude that the FEM in combination with the LSM is one of the most effective methods for the calculation of hydraulic and ventilation networks. The FEM - LSM has been used to create the research application programme package “MineClimate”, which calculates the microclimate parameters in underground ventilation networks.
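
    The successive-linearization idea behind such laminar/elastic solution methods can be illustrated on a trivial network: at each iteration the turbulent pipe law dH = r·q·|q| is replaced by an equivalent linear conductance computed from the previous flow, and the linear nodal equation is re-solved until the flow stops changing. The sketch below uses two pipes in series with placeholder resistances and only illustrates the general principle, not the authors' FEM - LSM formulation.

      r1, r2 = 500.0, 800.0          # pipe resistances in dH = r * q * |q| (placeholders)
      H_in, H_out = 40.0, 0.0        # fixed boundary heads

      q = 0.05                       # initial flow guess
      for it in range(100):
          g1, g2 = 1.0 / (r1 * abs(q)), 1.0 / (r2 * abs(q))    # linearized ("laminar") conductances
          Hj = (g1 * H_in + g2 * H_out) / (g1 + g2)            # junction head from the linear nodal equation
          q_new = g1 * (H_in - Hj)                             # flow implied by the linearized system
          if abs(q_new - q) < 1e-9:
              break
          q = 0.5 * (q + q_new)      # average successive iterates to damp the typical oscillation

      print(f"converged after {it} iterations: q = {q:.4f}, junction head = {Hj:.2f}")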

  19. The combined geodetic network adjusted on the reference ellipsoid – a comparison of three functional models for GNSS observations

    Directory of Open Access Journals (Sweden)

    Kadaj Roman

    2016-12-01

    Full Text Available The adjustment problem of the so-called combined (hybrid, integrated) network created from GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. Network adjustment in various mathematical spaces has been considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach with respect to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). By retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of geodesic parameters. The problem is developed theoretically and tested numerically. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional

  20. Diagnosis of Broiler Livers by Classifying Image Patches

    DEFF Research Database (Denmark)

    Jørgensen, Anders; Fagertun, Jens; Moeslund, Thomas B.

    2017-01-01

    Manual health inspection is becoming the bottleneck at poultry processing plants. We present a computer vision method for automatic diagnosis of broiler livers. The non-rigid livers, of varying shapes and sizes, are classified in patches by a convolutional neural network, outputting maps...

  1. The Closing of the Classified Catalog at Boston University

    Science.gov (United States)

    Hazen, Margaret Hindle

    1974-01-01

    Although the classified catalog at Boston University libraries has been a useful research tool, it has proven too expensive to keep current. The library has converted to a traditional alphabetic subject catalog and will receive catalog cards from the Ohio College Library Center through the New England Library Network. (Author/LS)

  2. Growing adaptive machines combining development and learning in artificial neural networks

    CERN Document Server

    Bredeche, Nicolas; Doursat, René

    2014-01-01

    The pursuit of artificial intelligence has been a highly active domain of research for decades, yielding exciting scientific insights and productive new technologies. In terms of generating intelligence, however, this pursuit has yielded only limited success. This book explores the hypothesis that adaptive growth is a means of moving forward. By emulating the biological process of development, we can incorporate desirable characteristics of natural neural systems into engineered designs, and thus move closer towards the creation of brain-like systems. The particular focus is on how to design artificial neural networks for engineering tasks. The book consists of contributions from 18 researchers, ranging from detailed reviews of recent domains by senior scientists, to exciting new contributions representing the state of the art in machine learning research. The book begins with broad overviews of artificial neurogenesis and bio-inspired machine learning, suitable both as an introduction to the domains and as a...

  3. Design and Performance Investigation for the Optical Combinational Networks at High Data Rate

    Science.gov (United States)

    Tripathi, Devendra Kr.

    2017-05-01

    This article explores a performance study of optical combinational designs based on the nonlinear characteristics of the semiconductor optical amplifier (SOA). Two configurations of an optical half-adder with a non-return-to-zero modulation pattern, together with a Mach-Zehnder modulator and interferometer, have been successfully realized at a 50-Gbps data rate. Accordingly, the SUM and CARRY outputs have been concurrently executed and their output waveforms verified. Numerical simulations varying the data rate and key design parameters have been carried out, yielding optimum performance. The investigations depict overall good performance of the design in terms of the extinction factor. It is also inferred that an all-optical realization based on the SOA is a competent scheme, as it circumvents costly optoelectronic conversion. This could well support the construction of larger, more complex optical combinational circuits.

  4. Combining affinity proteomics and network context to identify new phosphatase substrates and adapters in growth pathways.

    Directory of Open Access Journals (Sweden)

    Francesca Sacco

    2014-05-01

    Full Text Available Protein phosphorylation homoeostasis is tightly controlled, and pathological conditions are caused by subtle alterations of the cell phosphorylation profile. Altered levels of kinase activity have already been associated with specific diseases. Less is known about the impact of phosphatases, the enzymes that down-regulate phosphorylation by removing the phosphate groups. This is partly due to our poor understanding of the phosphatase-substrate network. Much of phosphatase substrate specificity is not based on intrinsic enzyme specificity, with the catalytic pocket recognizing the sequence/structure context of the phosphorylated residue. In addition, many phosphatase catalytic subunits do not form a stable complex with their substrates. This makes the inference and validation of phosphatase substrates a non-trivial task. Here, we present a novel approach that builds on the observation that much of phosphatase substrate selection is based on the network of physical interactions linking the phosphatase to the substrate. We first used affinity proteomics coupled to quantitative mass spectrometry to saturate the interactome of eight phosphatases whose down-regulation was shown to affect the activation of the RAS-PI3K pathway. By integrating information from functional siRNA with protein interaction information, we develop a strategy that aims at inferring physiological phosphatase substrates. Graph analysis is used to identify protein scaffolds that may link the catalytic subunits to their substrates. By this approach we rediscover several previously described phosphatase-substrate interactions and characterize two new protein scaffolds that promote the dephosphorylation of PTPN11 and ERK by DUSP18 and DUSP26, respectively.

  5. A diversity compression and combining technique based on channel shortening for cooperative networks

    KAUST Repository

    Hussain, Syed Imtiaz

    2012-02-01

    The cooperative relaying process with multiple relays needs proper coordination among the communicating and the relaying nodes. This coordination and the required capabilities may not be available in some wireless systems where the nodes are equipped with very basic communication hardware. We consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination captures the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than that of the relaying nodes L. In this case, the receiver can combine N strongest signals out of L. The remaining signals will be lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening principles. This technique proposes a processing block before the Rake reception which compresses the energy of L signal components over N branches while keeping the noise level at its minimum. The proposed scheme saves the system resources and makes the received signal compatible to the available hardware. Simulation results show that it outperforms the selection combining scheme. © 2012 IEEE.

  6. Waste classifying and separation device

    International Nuclear Information System (INIS)

    Kakiuchi, Hiroki.

    1997-01-01

    Flexible plastic bags containing solid wastes of indefinite shape are broken open and the wastes are classified. The bag-cutting portion of the device has an ultrasonic-type or heater-type cutting means, and the cutting means moves in parallel with the transfer direction of the plastic bags. A classification portion separates the plastic bag from its contents and conducts classification while rotating a classification table. Accordingly, plastic bags containing solids of indefinite shape can be broken open and classification can be conducted efficiently and reliably. The device of the present invention has a simple structure which requires little installation space and enables easy maintenance. (T.M.)

  7. Defining and Classifying Interest Groups

    DEFF Research Database (Denmark)

    Baroni, Laura; Carroll, Brendan; Chalmers, Adam

    2014-01-01

    The interest group concept is defined in many different ways in the existing literature and a range of different classification schemes are employed. This complicates comparisons between different studies and their findings. One of the important tasks faced by interest group scholars engaged… in large-N studies is therefore to define the concept of an interest group and to determine which classification scheme to use for different group types. After reviewing the existing literature, this article sets out to compare different approaches to defining and classifying interest groups with a sample… in the organizational attributes of specific interest group types. As expected, our comparison of coding schemes reveals a closer link between group attributes and group type in narrower classification schemes based on group organizational characteristics than those based on a behavioral definition of lobbying…

  8. The interventional effect of new drugs combined with the Stupp protocol on glioblastoma: A network meta-analysis.

    Science.gov (United States)

    Li, Mei; Song, Xiangqi; Zhu, Jun; Fu, Aijun; Li, Jianmin; Chen, Tong

    2017-08-01

    New therapeutic agents in combination with the standard Stupp protocol (a protocol combining temozolomide with radiotherapy for glioblastoma, reported by Stupp R in 2005) were assessed to evaluate whether they were superior to the Stupp protocol alone, to determine the optimum treatment regimen for patients with newly diagnosed glioblastoma. We implemented a search strategy to identify studies in the following databases: PubMed, Cochrane Library, EMBASE, CNKI, CBM, Wanfang, and VIP, and assessed the quality of data extracted from the included trials. Statistical software was used to perform the network meta-analysis. The novel therapeutic agents in combination with the Stupp protocol were all shown to be superior to the Stupp protocol alone for the treatment of newly diagnosed glioblastoma, ranked as follows: cilengitide 2000mg/5/week, bevacizumab in combination with irinotecan, nimotuzumab, bevacizumab, cilengitide 2000mg/2/week, cytokine-induced killer cell immunotherapy, and the Stupp protocol. In terms of serious adverse effects, the intervention group showed a 29% increase in the incidence of adverse events compared with the control group (patients treated only with the Stupp protocol), a statistically significant difference (RR=1.29; 95%CI 1.17-1.43; P<0.001). The most common adverse events were thrombocytopenia, lymphopenia, neutropenia, pneumonia, nausea, and vomiting, none of which differed significantly between the groups except for neutropenia, pneumonia, and embolism. All intervention drugs evaluated in our study were superior to the Stupp protocol alone when used in combination with it. However, we could not conclusively confirm whether cilengitide 2000mg/5/week was the optimum regimen, as only one trial using this protocol was included in our study. Copyright © 2017. Published by Elsevier B.V.

  9. Compression and Combining Based on Channel Shortening and Rank Reduction Technique for Cooperative Wireless Sensor Networks

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-12-18

    This paper investigates and compares the performance of wireless sensor networks where sensors operate on the principles of cooperative communications. We consider a scenario where the source transmits signals to the destination with the help of L sensors. As the destination has the capacity to process only U out of these L signals, the strongest U signals are selected while the remaining (L-U) signals are suppressed. A preprocessing block similar to channel shortening is proposed in this contribution. However, this preprocessing block employs a rank-reduction technique instead of channel shortening. By employing this preprocessing, we are able to decrease the computational complexity of the system without affecting the bit error rate (BER) performance. Our simulations show that these schemes outperform the channel-shortening schemes in terms of computational complexity. In addition, the proposed schemes have a superior BER performance compared to channel-shortening schemes when sensors employ fixed gain amplification. However, for sensors which employ variable gain amplification, a tradeoff exists in terms of BER performance between channel shortening and these schemes. These schemes outperform the channel-shortening scheme at lower signal-to-noise ratios.
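
    The selection step described above (keep the strongest U of the L relayed copies and combine them) can be sketched as below; maximum-ratio weights are used for the combining purely for illustration, and the rank-reduction preprocessing of the paper is not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)
      L, U = 8, 3
      symbol = 1.0                                   # transmitted (real) symbol
      h = rng.rayleigh(scale=1.0, size=L)            # per-branch channel gains (placeholders)
      r = h * symbol + 0.1 * rng.normal(size=L)      # received copies at the destination

      strongest = np.argsort(h)[-U:]                 # indices of the U strongest branches
      # maximum-ratio style combining restricted to the selected branches
      estimate = np.sum(h[strongest] * r[strongest]) / np.sum(h[strongest] ** 2)
      print("estimated symbol:", round(float(estimate), 3))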

  10. Adaptive Steganalysis Based on Selection Region and Combined Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Donghui Hu

    2017-01-01

    Full Text Available Digital image steganalysis is the art of detecting the presence of information hiding in carrier images. When detecting recently developed adaptive image steganography methods, state-of-the-art steganalysis methods cannot achieve satisfactory detection accuracy, because the adaptive steganography methods can adaptively embed information into regions with rich textures via the guidance of a distortion function and thus make effective steganalysis features hard to extract. Inspired by the promising success that the convolutional neural network (CNN) has achieved in the field of digital image analysis, increasing numbers of researchers are devoted to designing CNN-based steganalysis methods. But for detecting adaptive steganography methods, the results achieved by CNN-based methods still fall short of expectations. In this paper, we propose a hybrid approach by designing a region selection method and a new CNN framework. In order to make the CNN focus on the regions with complex textures, we design a region selection method that finds the region with the maximal sum of embedding probabilities. To evolve more diverse and effective steganalysis features, we design a new CNN framework consisting of three separate subnets with independent structures and configuration parameters, and then merge and split the three subnets repeatedly. Experimental results indicate that our approach leads to performance improvements in detecting adaptive steganography.
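
    The region-selection idea can be sketched as follows: given a per-pixel embedding-probability map, an integral image is used to find the window with the maximal summed probability, and that window is the region fed to the CNN (the network itself is not shown). The probability map, window size and image size below are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      prob = rng.random((512, 512))          # placeholder embedding-probability map
      win = 128                              # selected-region size (an assumption)

      # integral image with a zero first row/column so window sums are four lookups
      ii = np.zeros((513, 513))
      ii[1:, 1:] = prob.cumsum(0).cumsum(1)
      sums = ii[win:, win:] - ii[:-win, win:] - ii[win:, :-win] + ii[:-win, :-win]

      r, c = np.unravel_index(np.argmax(sums), sums.shape)
      region = prob[r:r + win, c:c + win]    # region that would be fed to the steganalysis CNN
      print("top-left corner of selected region:", (int(r), int(c)))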

  11. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method...

  12. Multiple-instance learning as a classifier combining problem

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M. J.; Duin, Robert P. W.

    2013-01-01

    In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the ass...

  13. Learning multiscale and deep representations for classifying remotely sensed imagery

    Science.gov (United States)

    Zhao, Wenzhi; Du, Shihong

    2016-03-01

    It is widely agreed that spatial features can be combined with spectral properties to improve interpretation performance on very-high-resolution (VHR) images in urban areas. However, many existing methods for extracting spatial features can only generate low-level features and consider limited scales, leading to unsatisfactory classification results. In this study, a multiscale convolutional neural network (MCNN) algorithm is presented to learn spatially related deep features for hyperspectral remote imagery classification. Unlike traditional methods for extracting spatial features, the MCNN first transforms the original data sets into a pyramid structure containing spatial information at multiple scales, and then automatically extracts high-level spatial features using multiscale training data sets. Specifically, the MCNN has two merits: (1) high-level spatial features can be effectively learned by using the hierarchical learning structure and (2) the multiscale learning scheme can capture contextual information at different scales. To evaluate the effectiveness of the proposed approach, the MCNN was applied to classify well-known hyperspectral data sets and compared with traditional methods. The experimental results showed a significant increase in classification accuracy, especially for urban areas.
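
    The pyramid construction described above can be sketched as follows: for each pixel, patches are cropped at several spatial scales and resized to a common size, producing the multiscale stack the network is trained on (the CNN itself is omitted). The scales, patch size and single-band input are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import zoom

      def multiscale_patches(image, row, col, scales=(8, 16, 32), out=16):
          # return patches centred on (row, col) at several scales, resized to out x out
          padded = np.pad(image, max(scales), mode="reflect")
          r, c = row + max(scales), col + max(scales)
          patches = []
          for s in scales:
              patch = padded[r - s:r + s, c - s:c + s]
              patches.append(zoom(patch, out / (2 * s), order=1))
          return np.stack(patches)                         # shape: (n_scales, out, out)

      band = np.random.default_rng(0).random((200, 200))   # placeholder single-band image
      print(multiscale_patches(band, 100, 50).shape)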

  14. Risk assessment of 170 kV GIS connected to combined cable/OHL network

    DEFF Research Database (Denmark)

    Bak, Claus Leth; Kessel, Jakob; Atlason, Vidir

    2009-01-01

    This paper concerns different investigations of lightning simulation of a combined 170 kV overhead line/cable connected GIS. This is interesting due to the increasing amount of underground cables and GIS in the Danish transmission system. This creates a different system with respect to lightning performance, compared to a system consisting solely of AIS connected through overhead lines. The main purpose is to investigate whether overvoltage protection is necessary at the GIS busbar. The analysis is conducted by implementing a simulation model in PSCAD/EMTDC. Simulations are conducted for both SF… and BFO. Overvoltages are evaluated for varying front times of the lightning surge, different soil resistivities at the surge arrester grounding in the overhead line/cable transition point and a varying length of the connection cable between the transformer and the GIS busbar with a SA implemented...

  15. PRODIAG: Combined expert system/neural network for process fault diagnosis. Volume 1, Theory

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J.; Wei, T.Y.C.; Vitela, J.E.

    1995-09-01

    The function of the PRODIAG code is to diagnose on-line the root cause of a thermal-hydraulic (T-H) system transient, with trace-back to the identification of the malfunctioning component, using the T-H instrumentation signals exclusively. The code methodology is based on the AI techniques of automated reasoning/expert systems (ES) and artificial neural networks (ANN). The research and development objective is to develop a generic code methodology which would be plant- and T-H-system-independent. For the ES part, the only plant- or T-H-system-specific code requirements would be implemented through input only, and then only through a Piping and Instrumentation Diagram (PID) database. For the ANN part, the only plant- or T-H-system-specific code requirements would be the ANN training data for normal component characteristics and the same PID database information. PRODIAG would, therefore, be generic and portable from T-H system to T-H system and from plant to plant without requiring any code-related modifications except for the PID database and the ANN training with the normal component characteristics. This would give PRODIAG the generic feature which numerical simulation plant codes such as TRAC or RELAP5 have. As the code is applied to different plants and different T-H systems, only the connectivity information, the operating conditions and the normal component characteristics are changed, and the changes are made entirely through input. Verification and validation of PRODIAG would be T-H system independent and would be performed only "once".

  16. Combining advanced networked technology and pedagogical methods to improve collaborative distance learning.

    Science.gov (United States)

    Staccini, Pascal; Dufour, Jean-Charles; Raps, Hervé; Fieschi, Marius

    2005-01-01

    Making educational material available on a network cannot be reduced to merely implementing hypermedia and interactive resources on a server. A pedagogical schema has to be defined to guide students in their learning and to provide teachers with guidelines to prepare valuable and upgradeable resources. The components of a learning environment, as well as the interactions between students and other roles such as author, tutor and manager, can be deduced from the cognitive foundations of learning, such as the constructivist approach. Scripting the way a student will navigate among information nodes and interact with tools to build his/her own knowledge can be a good way of deducing the features of the graphic interface related to the management of the objects. We defined a typology of pedagogical resources, their data model and their logic of use. We implemented a generic, web-based authoring and publishing platform (called J@LON for Join And Learn On the Net) within an object-oriented and open-source programming environment (called Zope) embedding a content management system (called Plone). Workflow features have been used to mark the progress of students and to trace the life cycle of resources shared by the teaching staff. The platform integrates advanced online authoring features to create interactive exercises and supports live course diffusion. The platform engine has been generalized to the whole curriculum of medical studies in our faculty; it also supports an international master's degree in health care risk management and will be extended to all other continuing education diplomas.

  17. Combining CFD simulations with blockoriented heatflow-network model for prediction of photovoltaic energy-production

    International Nuclear Information System (INIS)

    Haber, I E; Farkas, I

    2011-01-01

    The external factors influencing the working conditions of photovoltaic modules are the irradiation, the optical air layer (Air Mass - AM), the irradiation angle, the environmental temperature and the cooling effect of the wind. The efficiency of photovoltaic (PV) devices is inversely proportional to the cell temperature, and therefore the mounting of the PV modules can have a large effect on cooling, due to wind flow around the modules and natural convection. The construction of the modules can be described by a heatflow-network model, which defines the equation that determines the cell temperature. An equation like this can be solved as a block-oriented model with a hybrid-analogue simulator such as Matlab/Simulink. From the flow field and the heat transfer, which were calculated numerically, the heat transfer coefficients can be determined. Five inflow rates were set up for both the pitched-roof and flat-roof cases to establish the trend of the heat transfer coefficient; the resulting functions can be used in the Matlab/Simulink model. To model the free convection flows, the Boussinesq approximation was used, integrated into the Navier-Stokes equations and the energy equation. It was found that, under a constant solar heat gain, the air velocity around the modules and behind the pitched-roof-mounted module increases proportionally to the wind velocity, and as a result the heat transfer coefficient increases linearly and can be described by a function in both cases. The meteorological parameters and the results of the CFD simulations, as single functions, were attached to the block-based model. The final aim was to make a model that can be used for planning photovoltaic systems and to define their performance accurately for better sizing of an array of modules.
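
    The coupling described above can be sketched in a few lines: a linear heat-transfer-coefficient function of wind speed (standing in for the fit to the CFD results) feeds a lumped steady-state energy balance that yields the cell temperature, which in turn corrects the electrical efficiency. All numerical coefficients below are placeholders, not values from the paper.

      def h_coeff(wind_speed, a=5.0, b=3.8):
          # assumed linear fit h(v) from the CFD runs, in W/(m^2 K)
          return a + b * wind_speed

      def cell_temperature(irradiance, t_ambient, wind_speed, absorbed=0.9):
          # steady state: absorbed solar flux = convective loss -> T_cell = T_amb + alpha * G / h(v)
          return t_ambient + absorbed * irradiance / h_coeff(wind_speed)

      def pv_efficiency(t_cell, eta_ref=0.17, beta=0.004, t_ref=25.0):
          # standard linear temperature derating of the module efficiency
          return eta_ref * (1 - beta * (t_cell - t_ref))

      t_cell = cell_temperature(irradiance=800.0, t_ambient=25.0, wind_speed=3.0)
      print(round(t_cell, 1), round(pv_efficiency(t_cell), 3))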

  18. GANN: Genetic algorithm neural networks for the detection of conserved combinations of features in DNA

    Directory of Open Access Journals (Sweden)

    Beiko Robert G

    2005-02-01

    Full Text Available Abstract Background The multitude of motif detection algorithms developed to date have largely focused on the detection of patterns in primary sequence. Since sequence-dependent DNA structure and flexibility may also play a role in protein-DNA interactions, the simultaneous exploration of sequence- and structure-based hypotheses about the composition of binding sites and the ordering of features in a regulatory region should be considered as well. The consideration of structural features requires the development of new detection tools that can deal with data types other than primary sequence. Results GANN (available at http://bioinformatics.org.au/gann) is a machine learning tool for the detection of conserved features in DNA. The software suite contains programs to extract different regions of genomic DNA from flat files and convert these sequences to indices that reflect sequence and structural composition or the presence of specific protein binding sites. The machine learning component allows the classification of different types of sequences based on subsamples of these indices, and can identify the best combinations of indices and machine learning architecture for sequence discrimination. Another key feature of GANN is the replicated splitting of data into training and test sets, and the implementation of negative controls. In validation experiments, GANN successfully merged important sequence and structural features to yield good predictive models for synthetic and real regulatory regions. Conclusion GANN is a flexible tool that can search through large sets of sequence and structural feature combinations to identify those that best characterize a set of sequences.

  19. A stereo-compound hybrid microscope for combined intracellular and optical recording of invertebrate neural network activity.

    Science.gov (United States)

    Frost, William N; Wang, Jean; Brandon, Christopher J

    2007-05-15

    Optical recording studies of invertebrate neural networks with voltage-sensitive dyes seldom employ conventional intracellular electrodes. This may in part be due to the traditional reliance on compound microscopes for such work. While such microscopes have high light-gathering power, they do not provide depth of field, making working with sharp electrodes difficult. Here we describe a hybrid microscope design, with switchable compound and stereo objectives, that eases the use of conventional intracellular electrodes in optical recording experiments. We use it, in combination with a voltage-sensitive dye and photodiode array, to identify neurons participating in the swim motor program of the marine mollusk Tritonia. This microscope design should be applicable to optical recording studies in many preparations.

  20. Study of Aided Diagnosis of Hepatic Carcinoma Based on Artificial Neural Network Combined with Tumor Marker Group

    Science.gov (United States)

    Tan, Shanjuan; Feng, Feifei; Wu, Yongjun; Wu, Yiming

    The aim was to develop a computer-aided diagnostic scheme using an artificial neural network (ANN) combined with a tumor marker group for the diagnosis of hepatic carcinoma (HCC), as a clinical assistant method. 140 serum samples (50 malignant, 40 benign and 50 normal) were analyzed for α-fetoprotein (AFP), carbohydrate antigen 125 (CA125), carcinoembryonic antigen (CEA), sialic acid (SA) and calcium (Ca). The five tumor marker values were then used as ANN input data. The result of the ANN was compared with that of discriminant analysis by receiver operating characteristic (ROC) curve area (AUC) analysis. The diagnostic accuracy of the ANN and of discriminant analysis among all samples of the test group was 95.5% and 79.3%, respectively. Analysis of multiple tumor markers based on an ANN may be a better choice than traditional statistical methods for differentiating HCC from benign or normal cases.
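
    A minimal sketch of such a classifier is given below: a small feedforward network takes the five marker values (AFP, CA125, CEA, SA, Ca) as inputs and outputs one of the three classes. The data are random placeholders and scikit-learn's MLPClassifier stands in for the original ANN.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.normal(size=(140, 5))                 # placeholder marker measurements (AFP, CA125, CEA, SA, Ca)
      y = rng.integers(0, 3, size=140)              # 0 = normal, 1 = benign, 2 = malignant

      X_scaled = StandardScaler().fit_transform(X)
      ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
      ann.fit(X_scaled[:100], y[:100])              # train on the first 100 samples
      print("test accuracy:", ann.score(X_scaled[100:], y[100:]))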

  1. Impact of dam failure-induced flood on road network using combined remote sensing and geospatial approach

    Science.gov (United States)

    Foumelis, Michael

    2017-01-01

    The applicability of the normalized difference water index (NDWI) to the delineation of dam failure-induced floods is demonstrated for the case of the Sparmos dam (Larissa, Central Greece). The approach followed was based on the differentiation of NDWI maps to accurately define the extent of the inundated area over different time spans using multimission Earth observation optical data. Besides using Landsat data, for which the index was initially designed, higher spatial resolution data from Sentinel-2 mission were also successfully exploited. A geospatial analysis approach was then introduced to rapidly identify potentially affected segments of the road network. This allowed for further correlation to actual damages in the following damage assessment and remediation activities. The proposed combination of geographic information systems and remote sensing techniques can be easily implemented by local authorities and civil protection agencies for mapping and monitoring flood events.
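
    The index computation underlying this approach, NDWI = (Green - NIR) / (Green + NIR), and the differencing of pre- and post-event maps can be sketched as below; the band arrays and the change threshold are placeholders for illustration.

      import numpy as np

      def ndwi(green, nir):
          # normalized difference water index; small constant avoids division by zero
          return (green - nir) / (green + nir + 1e-9)

      rng = np.random.default_rng(0)
      green_pre, nir_pre = rng.random((100, 100)), rng.random((100, 100))     # placeholder reflectances
      green_post, nir_post = rng.random((100, 100)), rng.random((100, 100))

      delta = ndwi(green_post, nir_post) - ndwi(green_pre, nir_pre)
      flooded = delta > 0.3                      # threshold chosen purely for illustration
      print("inundated pixels:", int(flooded.sum()))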

  2. Blood hyperviscosity identification with reflective spectroscopy of tongue tip based on principal component analysis combining artificial neural network.

    Science.gov (United States)

    Liu, Ming; Zhao, Jing; Lu, XiaoZuo; Li, Gang; Wu, Taixia; Zhang, LiFu

    2018-05-10

    With spectral methods, noninvasive determination of blood hyperviscosity in vivo is very promising and meaningful in clinical diagnosis. In this study, 67 male subjects (41 healthy and 26 with hyperviscosity according to blood sample analysis results) participated. Reflectance spectra of the subjects' tongue tips were measured, and a classification method based on principal component analysis combined with an artificial neural network model was built to identify hyperviscosity. Hold-out and leave-one-out methods were used to avoid significant bias and lessen the overfitting problem, which is widely accepted in model validation. To measure the performance of the classification, sensitivity, specificity, accuracy and F-measure were calculated. The accuracies with 100 runs of the hold-out method and 67 runs of the leave-one-out method were 88.05% and 97.01%, respectively. Experimental results indicate that the classification model has practical value and demonstrate the feasibility of using spectroscopy to identify hyperviscosity noninvasively.
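
    The pipeline described above can be sketched as a PCA step followed by a small neural network, evaluated with leave-one-out cross-validation; the spectra, label split and component count below are placeholders, with scikit-learn standing in for the original implementation.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import LeaveOneOut, cross_val_score

      rng = np.random.default_rng(0)
      spectra = rng.random((67, 200))              # 67 subjects x 200 wavelength samples (placeholder)
      labels = np.array([0] * 41 + [1] * 26)       # 0 = healthy, 1 = hyperviscosity

      model = make_pipeline(PCA(n_components=10),
                            MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
      accuracy = cross_val_score(model, spectra, labels, cv=LeaveOneOut()).mean()
      print("leave-one-out accuracy:", round(accuracy, 3))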

  3. A Practical Application Combining Wireless Sensor Networks and Internet of Things: Safety Management System for Tower Crane Groups

    Directory of Open Access Journals (Sweden)

    Dexing Zhong

    2014-07-01

    Full Text Available The so-called Internet of Things (IoT) has attracted increasing attention in the field of computer and information science. In this paper, a specific application of IoT, named Safety Management System for Tower Crane Groups (SMS-TC), is proposed for use in the construction industry field. The operating status of each tower crane was detected by a set of customized sensors, including horizontal and vertical position sensors for the trolley, angle sensors for the jib and load, tilt and wind speed sensors for the tower body. The sensor data is collected and processed by the Tower Crane Safety Terminal Equipment (TC-STE) installed in the driver’s operating room. Wireless communication between each TC-STE and the Local Monitoring Terminal (LMT) at the ground worksite were fulfilled through a Zigbee wireless network. LMT can share the status information of the whole group with each TC-STE, while the LMT records the real-time data and reports it to the Remote Supervision Platform (RSP) through General Packet Radio Service (GPRS). Based on the global status data of the whole group, an anti-collision algorithm was executed in each TC-STE to ensure the safety of each tower crane during construction. Remote supervision can be fulfilled using our client software installed on a personal computer (PC) or smartphone. SMS-TC could be considered as a promising practical application that combines a Wireless Sensor Network with the Internet of Things.

  4. A Practical Application Combining Wireless Sensor Networks and Internet of Things: Safety Management System for Tower Crane Groups

    Science.gov (United States)

    Zhong, Dexing; Lv, Hongqiang; Han, Jiuqiang; Wei, Quanrui

    2014-01-01

    The so-called Internet of Things (IoT) has attracted increasing attention in the field of computer and information science. In this paper, a specific application of IoT, named Safety Management System for Tower Crane Groups (SMS-TC), is proposed for use in the construction industry field. The operating status of each tower crane was detected by a set of customized sensors, including horizontal and vertical position sensors for the trolley, angle sensors for the jib and load, tilt and wind speed sensors for the tower body. The sensor data is collected and processed by the Tower Crane Safety Terminal Equipment (TC-STE) installed in the driver's operating room. Wireless communication between each TC-STE and the Local Monitoring Terminal (LMT) at the ground worksite were fulfilled through a Zigbee wireless network. LMT can share the status information of the whole group with each TC-STE, while the LMT records the real-time data and reports it to the Remote Supervision Platform (RSP) through General Packet Radio Service (GPRS). Based on the global status data of the whole group, an anti-collision algorithm was executed in each TC-STE to ensure the safety of each tower crane during construction. Remote supervision can be fulfilled using our client software installed on a personal computer (PC) or smartphone. SMS-TC could be considered as a promising practical application that combines a Wireless Sensor Network with the Internet of Things. PMID:25196106

  5. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    Science.gov (United States)

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.
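
    The convex-combination step at the heart of the method can be sketched as below: each sensor repeatedly replaces its current estimate of the source location with a convex combination of its own and its neighbours' estimates, so the estimates converge toward a common value. The local least-squares update of the paper is not reproduced, and the ring topology, weights and noise levels are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 6
      true_source = np.array([12.0, 7.0])
      estimates = true_source + rng.normal(scale=2.0, size=(n, 2))   # noisy local source estimates

      # ring topology: each sensor exchanges estimates with its two neighbours
      neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

      for _ in range(30):
          new = np.empty_like(estimates)
          for i in range(n):
              group = [i] + neighbours[i]
              weights = np.full(len(group), 1.0 / len(group))        # convex weights summing to 1
              new[i] = weights @ estimates[group]
          estimates = new

      print("per-sensor estimates after consensus:\n", estimates.round(3))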

  6. Assessment of the service performance of drainage system and transformation of pipeline network based on urban combined sewer system model.

    Science.gov (United States)

    Peng, Hai-Qin; Liu, Yan; Wang, Hong-Wu; Ma, Lu-Ming

    2015-10-01

    In recent years, due to global climate change and rapid urbanization, extreme weather events have struck cities with increasing frequency, and waterlogging caused by heavy rains is common. In such cases the urban drainage system can no longer meet its original design requirements, resulting in traffic jams or even paralysis, and posing a threat to urban safety. Accurately assessing the capacity of the drainage system and correctly simulating the transport behaviour of the drainage network and the carrying capacity of drainage facilities therefore provide a necessary foundation for urban drainage planning and design. This study adopts InfoWorks Integrated Catchment Management (ICM) to model the two combined sewer drainage systems in Yangpu District, Shanghai (China). The model can assist the design of the drainage system. Model calibration is performed based on historical rainfall events. The calibrated model is used to assess the outlet drainage and pipe loads for storm scenarios currently existing or possibly occurring in the future. The study found that the simulation and analysis results of the drainage system model were reliable; they fully reflect the service performance of the drainage system in the study area and provide decision-making support for regional flood control and transformation of the pipeline network.

  7. Distributed Least-Squares Estimation of a Remote Chemical Source via Convex Combination in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Meng-Li Cao

    2014-06-01

    Full Text Available This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.

  8. A practical application combining wireless sensor networks and Internet of Things: Safety Management System for Tower Crane Groups.

    Science.gov (United States)

    Zhong, Dexing; Lv, Hongqiang; Han, Jiuqiang; Wei, Quanrui

    2014-07-30

    The so-called Internet of Things (IoT) has attracted increasing attention in the field of computer and information science. In this paper, a specific application of IoT, named Safety Management System for Tower Crane Groups (SMS-TC), is proposed for use in the construction industry. The operating status of each tower crane is detected by a set of customized sensors, including horizontal and vertical position sensors for the trolley, angle sensors for the jib and load, and tilt and wind speed sensors for the tower body. The sensor data is collected and processed by the Tower Crane Safety Terminal Equipment (TC-STE) installed in the driver's operating room. Wireless communication between each TC-STE and the Local Monitoring Terminal (LMT) at the ground worksite is fulfilled through a Zigbee wireless network. The LMT can share the status information of the whole group with each TC-STE, while the LMT records the real-time data and reports it to the Remote Supervision Platform (RSP) through General Packet Radio Service (GPRS). Based on the global status data of the whole group, an anti-collision algorithm is executed in each TC-STE to ensure the safety of each tower crane during construction. Remote supervision can be fulfilled using our client software installed on a personal computer (PC) or smartphone. SMS-TC could be considered a promising practical application that combines a Wireless Sensor Network with the Internet of Things.

  9. Combining Image and Non-Image Data for Automatic Detection of Retina Disease in a Telemedicine Network

    Energy Technology Data Exchange (ETDEWEB)

    Aykac, Deniz [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK); Fox, Karen [Delta Health Alliance; Garg, Seema [University of North Carolina; Giancardo, Luca [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Nichols, Trent L [ORNL; Tobin Jr, Kenneth William [ORNL

    2011-01-01

    A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening for diabetic retinopathy (DR) and other eye diseases. In the process of a routine eye-screening examination, other non-image data is often available which may be useful in automated diagnosis of disease. In this work, we report on the results of combining this non-image data with image data, using the protocol and processing steps of a prototype system for automated disease diagnosis of retina examinations from a telemedicine network. The system includes quality assessments, automated physiology detection, and automated lesion detection to create an archive of known cases. Non-image data such as diabetes onset date and hemoglobin A1c (HgA1c) for each patient examination are included as well, and the system is used to create a content-based image retrieval engine capable of automated diagnosis of disease into 'normal' and 'abnormal' categories. The system achieves a sensitivity and specificity of 91.2% and 71.6% using hold-one-out validation testing.

  10. Classifying Drivers' Cognitive Load Using EEG Signals.

    Science.gov (United States)

    Barua, Shaibal; Ahmed, Mobyen Uddin; Begum, Shahina

    2017-01-01

    A growing traffic safety issue is the effect of cognitive loading activities on traffic safety and driving performance. To monitor drivers' mental state it is important to understand cognitive load, since performing a cognitively loading secondary task while driving, for example talking on the phone, can affect performance in the primary task, i.e. driving. Electroencephalography (EEG) is one of the reliable measures of cognitive load that can detect changes in instantaneous load and the effect of a cognitively loading secondary task. In this driving simulator study, a 1-back task was carried out while the driver performed three different simulated driving scenarios. This paper presents an EEG-based approach to classify a driver's level of cognitive load using Case-Based Reasoning (CBR). The results show that, for each individual scenario as well as for data combined from the different scenarios, the CBR-based system achieved over 70% classification accuracy.

  11. On Singularities and Black Holes in Combination-Driven Models of Technological Innovation Networks.

    Directory of Open Access Journals (Sweden)

    Ricard Solé

    Full Text Available It has been suggested that innovations occur mainly by combination: the more inventions accumulate, the higher the probability that new inventions are obtained from previous designs. Additionally, it has been conjectured that the combinatorial nature of innovations naturally leads to a singularity: at some finite time, the number of innovations should diverge. Although these ideas are certainly appealing, no general models have been yet developed to test the conditions under which combinatorial technology should become explosive. Here we present a generalised model of technological evolution that takes into account two major properties: the number of previous technologies needed to create a novel one and how rapidly technology ages. Two different models of combinatorial growth are considered, involving different forms of ageing. When long-range memory is used and thus old inventions are available for novel innovations, singularities can emerge under some conditions with two phases separated by a critical boundary. If the ageing has a characteristic time scale, it is shown that no singularities will be observed. Instead, a "black hole" of old innovations appears and expands in time, making the rate of invention creation slow down into a linear regime.

  12. On Singularities and Black Holes in Combination-Driven Models of Technological Innovation Networks.

    Science.gov (United States)

    Solé, Ricard; Amor, Daniel R; Valverde, Sergi

    2016-01-01

    It has been suggested that innovations occur mainly by combination: the more inventions accumulate, the higher the probability that new inventions are obtained from previous designs. Additionally, it has been conjectured that the combinatorial nature of innovations naturally leads to a singularity: at some finite time, the number of innovations should diverge. Although these ideas are certainly appealing, no general models have been yet developed to test the conditions under which combinatorial technology should become explosive. Here we present a generalised model of technological evolution that takes into account two major properties: the number of previous technologies needed to create a novel one and how rapidly technology ages. Two different models of combinatorial growth are considered, involving different forms of ageing. When long-range memory is used and thus old inventions are available for novel innovations, singularities can emerge under some conditions with two phases separated by a critical boundary. If the ageing has a characteristic time scale, it is shown that no singularities will be observed. Instead, a "black hole" of old innovations appears and expands in time, making the rate of invention creation slow down into a linear regime.

  13. Classifying Transition Behaviour in Postural Activity Monitoring

    Directory of Open Access Journals (Sweden)

    James BRUSEY

    2009-10-01

    Full Text Available A few accelerometers positioned on different parts of the body can be used to accurately classify steady state behaviour, such as walking, running, or sitting. Such systems are usually built using supervised learning approaches. Transitions between postures are, however, difficult to deal with using posture classification systems proposed to date, since there is no label set for intermediary postures and also the exact point at which the transition occurs can sometimes be hard to pinpoint. The usual workaround when using supervised learning to train such systems is to discard a section of the dataset around each transition. This leads to poorer classification performance when the systems are deployed out of the laboratory and used on-line, particularly if the regimes monitored involve fast paced activity changes. Time-based filtering that takes advantage of sequential patterns is a potential mechanism to improve posture classification accuracy in such real-life applications. Also, such filtering should reduce the number of event messages needed to be sent across a wireless network to track posture remotely, hence extending the system’s life. To support time-based filtering, understanding transitions, which are the major event generators in a classification system, is key. This work examines three approaches to post-process the output of a posture classifier using time-based filtering: a naïve voting scheme, an exponentially weighted voting scheme, and a Bayes filter. Best performance is obtained from the exponentially weighted voting scheme although it is suspected that a more sophisticated treatment of the Bayes filter might yield better results.
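
    A minimal sketch of an exponentially weighted voting filter of the kind compared above is given below; the decay factor, the label stream and the function name are illustrative assumptions, not the parameters used in the study. Older frames decay geometrically, so brief misclassifications around a transition tend to be outvoted by the surrounding stable frames.

      from collections import defaultdict

      def ewma_vote(labels, decay=0.7):
          """Smooth a stream of per-frame posture labels with exponentially decaying votes."""
          scores = defaultdict(float)
          smoothed = []
          for label in labels:
              for k in scores:             # decay every accumulated vote
                  scores[k] *= decay
              scores[label] += 1.0         # the newest frame votes at full weight
              smoothed.append(max(scores, key=scores.get))
          return smoothed

      raw = ["sit", "sit", "stand", "sit", "stand", "stand", "walk", "stand", "walk", "walk"]
      print(ewma_vote(raw))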

  14. Local-global classifier fusion for screening chest radiographs

    Science.gov (United States)

    Ding, Meng; Antani, Sameer; Jaeger, Stefan; Xue, Zhiyun; Candemir, Sema; Kohli, Marc; Thoma, George

    2017-03-01

    Tuberculosis (TB) is a severe comorbidity of HIV and chest x-ray (CXR) analysis is a necessary step in screening for the infective disease. Automatic analysis of digital CXR images for detecting pulmonary abnormalities is critical for population screening, especially in medical resource constrained developing regions. In this article, we describe steps that improve previously reported performance of NLM's CXR screening algorithms and help advance the state of the art in the field. We propose a local-global classifier fusion method where two complementary classification systems are combined. The local classifier focuses on subtle and partial presentation of the disease leveraging information in radiology reports that roughly indicates locations of the abnormalities. In addition, the global classifier models the dominant spatial structure in the gestalt image using GIST descriptor for the semantic differentiation. Finally, the two complementary classifiers are combined using linear fusion, where the weight of each decision is calculated by the confidence probabilities from the two classifiers. We evaluated our method on three datasets in terms of the area under the Receiver Operating Characteristic (ROC) curve, sensitivity, specificity and accuracy. The evaluation demonstrates the superiority of our proposed local-global fusion method over any single classifier.
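
    The linear fusion step can be pictured with the toy sketch below, in which two abnormality probabilities are mixed with a fixed weight. The weight and scores are invented numbers; in the paper the weights are derived from the two classifiers' confidence probabilities rather than set by hand.

      def fuse(p_local, p_global, w_local=0.5):
          """Linear fusion of two abnormality probabilities in [0, 1]."""
          return w_local * p_local + (1.0 - w_local) * p_global

      # Example: the local (patch-level) classifier is fairly confident, the global (GIST) one is not.
      score = fuse(p_local=0.82, p_global=0.55, w_local=0.6)
      print("abnormal" if score >= 0.5 else "normal", round(score, 3))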

  15. Frog sound identification using extended k-nearest neighbor classifier

    Science.gov (United States)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

    Frog sound identification based on the vocalization becomes important for biological research and environmental monitoring. As a result, different types of feature extractions and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest neighbors and mutual sharing of neighborhood concepts, with the aim of improving the classification performance. It makes a prediction based on which points are the nearest neighbors of the testing sample and which points consider the testing sample as their nearest neighbor. In order to evaluate the classification performance in frog sound identification, the EKNN classifier is compared with the competing classifiers k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN) on the recorded sounds of 15 frog species obtained in Malaysian forests. The recorded sounds have been segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted by Mel Frequency Cepstrum Coefficient (MFCC). The experimental results have shown that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers KNN, FKNN, KGNN and MKNN for all cases.
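
    The mutual-neighbourhood idea can be sketched as follows: a query is labelled by voting over its own k nearest neighbours together with the training samples that would count the query among their own k nearest neighbours. This is an illustrative Python simplification, not the exact EKNN algorithm or its published parameter settings.

      import numpy as np

      def eknn_predict(X, y, query, k=2):
          d_q = np.linalg.norm(X - query, axis=1)
          forward = set(np.argsort(d_q)[:k])        # the query's own k nearest neighbours

          reverse = set()                           # training points that would pick the query
          for i in range(len(X)):
              d_i = np.linalg.norm(X - X[i], axis=1)
              d_i[i] = np.inf                       # a point is not its own neighbour
              kth = np.sort(d_i)[k - 1]
              if d_q[i] <= kth:                     # query would be among point i's k neighbours
                  reverse.add(i)

          votes = {}
          for i in forward | reverse:
              votes[y[i]] = votes.get(y[i], 0) + 1
          return max(votes, key=votes.get)

      X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, -0.1], [1.0, 1.1], [1.2, 0.9]])
      y = np.array(["A", "A", "A", "B", "B"])
      print(eknn_predict(X, y, np.array([0.1, 0.0])))   # -> A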

  16. Energy-efficient neuromorphic classifiers

    OpenAIRE

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2015-01-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. Neuromorphic engineering promises extremely low energy consumptions, comparable to those of the nervous system. However, until now the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, rendering el...

  17. Aggregation Operator Based Fuzzy Pattern Classifier Design

    DEFF Research Database (Denmark)

    Mönks, Uwe; Larsen, Henrik Legind; Lohweg, Volker

    2009-01-01

    This paper presents a novel modular fuzzy pattern classifier design framework for intelligent automation systems, developed on the basis of the established Modified Fuzzy Pattern Classifier (MFPC), which allows the design of novel classifier models that are hardware-efficiently implementable. ... The performances of novel classifiers using substitutes of MFPC's geometric mean aggregator are benchmarked, in the scope of an image processing application, against the MFPC to reveal potential for obtaining higher classification rates.

  18. 15 CFR 4.8 - Classified Information.

    Science.gov (United States)

    2010-01-01

    15 CFR 4.8, Commerce and Foreign Trade (2010-01-01), Freedom of Information Act, § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily the...

  19. Identifying and Classifying Mobile Business Models Based on Meta-Synthesis Approach

    Directory of Open Access Journals (Sweden)

    Porrandokht Niroomand

    2012-03-01

    Full Text Available The emergence of mobile technology has provided unique opportunities for developing and creating businesses and has generated new job opportunities. The current research aims to familiarize entrepreneurs, especially those running businesses in the area of mobile services, with business models that can help them implement new ideas and designs as they enter the market. A search of the literature shows that there are no suitable papers or studies that identify, categorize and analyze mobile business models, so this paper makes a novel contribution. The first part of the paper presents a review of the concepts and theories concerning the different mobile generations, mobile commerce and business models. Afterwards, 92 models drawn from 33 papers and books are compared, interpreted, translated and combined according to two different criteria: the expert criterion and the kind-of-product criterion. In the expert-based classification, the models are classified by criteria such as business fields, business partners, the rate of dynamism, the kind of activity, the focus areas, the mobile generations, transparency, the type of operator activities, marketing and advertisements. The models classified by kind of product are analyzed and grouped into four different areas of mobile commerce: content production, technology (software and hardware), network and synthetic.

  20. Evidence for network formation during the carbonization of coal from the combination of rheometry and ¹H NMR techniques

    Energy Technology Data Exchange (ETDEWEB)

    Karen M. Steel; Miguel C. Diaz; John W. Patrick; Colin E. Snape [University of Nottingham, Nottingham (United Kingdom). Nottingham Fuel and Energy Centre, School of Chemical, Environmental and Mining Engineering

    2006-09-15

    High-temperature rheometry and ¹H NMR have been combined to assess the microstructural changes taking place during carbonization of a number of different coals. A linear relationship exists between the logarithm of the material's complex viscosity (η*) and the fraction of hydrogen present in rigid structures (φ_rh) for the resolidification region in which the material is liquid-like with small amounts of dispersed solid. The relationship is best characterized by the Arrhenius viscosity equation given by η* = η₀* exp([η]·φ_rh), where η₀* is the complex viscosity of the liquid medium and [η] is the intrinsic viscosity of the resolidified material. Attempts to fit the Krieger-Dougherty suspension equation showed that the solid regions formed do not pack together like a normal suspension. Instead, it is more likely that cross-linking and cyclization reactions within the liquid medium give rise to a network structure of solid material and a characteristic gel point. The ratio of hydrogen present in rigid structures to that still present in liquid form at the gel point is approximately 2:3. The resolidified material was found to have a higher [η] than the components of the coal that remained unsoftened, which suggests that while the unsoftened components have a fairly equant shape, the resolidified components have a much higher hydrodynamic volume. The resolidification process bears similarity with thermosetting polymer networks and the measurements taken for a blend of two coals follow a common two-component polymer blending rule. 35 refs., 13 figs., 4 tabs.
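
    Assuming the reconstructed form of the relation, η* = η₀*·exp([η]·φ_rh), the two parameters can be recovered from a straight-line fit of ln η* against φ_rh, as in the short sketch below. The data points are invented for demonstration only and are not measurements from the paper.

      import numpy as np

      phi_rh = np.array([0.10, 0.20, 0.30, 0.40, 0.50])         # fraction of rigid hydrogen
      eta_star = np.array([2.0e2, 5.5e2, 1.5e3, 4.1e3, 1.1e4])  # complex viscosity, Pa.s

      slope, intercept = np.polyfit(phi_rh, np.log(eta_star), 1)
      print("intrinsic viscosity [eta] ~", round(slope, 2))
      print("liquid-medium viscosity eta0* ~", round(float(np.exp(intercept)), 1), "Pa.s")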

  1. Networking

    OpenAIRE

    Rauno Lindholm, Daniel; Boisen Devantier, Lykke; Nyborg, Karoline Lykke; Høgsbro, Andreas; Fries, de; Skovlund, Louise

    2016-01-01

    The purpose of this project was to examine which factors have influenced the presumed increase in the use of networking among academics on the labour market and how this is expressed. On the basis of the influence of globalization on the labour market, it can be concluded that globalization has transformed the labour market into one based on the organization of networks. In this new organization there is a greater emphasis on employees having social qualificati...

  2. Online monitoring and conditional regression tree test: Useful tools for a better understanding of combined sewer network behavior.

    Science.gov (United States)

    Bersinger, T; Bareille, G; Pigot, T; Bru, N; Le Hécho, I

    2018-06-01

    A good knowledge of the dynamic of pollutant concentration and flux in a combined sewer network is necessary when considering solutions to limit the pollutants discharged by combined sewer overflow (CSO) into receiving water during wet weather. Identification of the parameters that influence pollutant concentration and flux is important. Nevertheless, few studies have obtained satisfactory results for the identification of these parameters using statistical tools. Thus, this work uses a large database of rain events (116 over one year) obtained via continuous measurement of rainfall, discharge flow and chemical oxygen demand (COD) estimated using online turbidity for the identification of these parameters. We carried out a statistical study of the parameters influencing the maximum COD concentration, the discharge flow and the discharge COD flux. In this study a new test was used that has never been used in this field: the conditional regression tree test. We have demonstrated that the antecedent dry weather period, the rain event average intensity and the flow before the event are the three main factors influencing the maximum COD concentration during a rainfall event. Regarding the discharge flow, it is mainly influenced by the overall rainfall height but not by the maximum rainfall intensity. Finally, COD discharge flux is influenced by the discharge volume and the maximum COD concentration. Regression trees seem much more appropriate than common tests like PCA and PLS for this type of study as they take into account the thresholds and cumulative effects of various parameters as a function of the target variable. These results could help to improve sewer and CSO management in order to decrease the discharge of pollutants into receiving waters. Copyright © 2017 Elsevier B.V. All rights reserved.
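
    Conditional inference trees are typically fitted with R's partykit::ctree; as a hedged stand-in for the same idea, the sketch below fits an ordinary regression tree to a handful of invented rain-event descriptors (antecedent dry period, mean intensity, pre-event flow) against maximum COD and prints the learned splits. The event table is fabricated for illustration and carries no relation to the study's measurements.

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor, export_text

      # Columns: antecedent dry period (d), mean rain intensity (mm/h), pre-event flow (L/s)
      X = np.array([[1, 2, 120], [6, 8, 60], [12, 3, 50], [2, 15, 200],
                    [9, 10, 55], [0.5, 4, 300], [14, 6, 45], [3, 12, 150]])
      y = np.array([180, 420, 510, 220, 460, 150, 540, 260])   # maximum COD (mg/L)

      tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
      print(export_text(tree, feature_names=["dry_days", "intensity", "pre_flow"]))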

  3. Hallucination- and speech-specific hypercoupling in frontotemporal auditory and language networks in schizophrenia using combined task-based fMRI data: An fBIRN study.

    Science.gov (United States)

    Lavigne, Katie M; Woodward, Todd S

    2018-04-01

    Hypercoupling of activity in speech-perception-specific brain networks has been proposed to play a role in the generation of auditory-verbal hallucinations (AVHs) in schizophrenia; however, it is unclear whether this hypercoupling extends to nonverbal auditory perception. We investigated this by comparing schizophrenia patients with and without AVHs, and healthy controls, on task-based functional magnetic resonance imaging (fMRI) data combining verbal speech perception (SP), inner verbal thought generation (VTG), and nonverbal auditory oddball detection (AO). Data from two previously published fMRI studies were simultaneously analyzed using group constrained principal component analysis for fMRI (group fMRI-CPCA), which allowed for comparison of task-related functional brain networks across groups and tasks while holding the brain networks under study constant, leading to determination of the degree to which networks are common to verbal and nonverbal perception conditions, and which show coordinated hyperactivity in hallucinations. Three functional brain networks emerged: (a) auditory-motor, (b) language processing, and (c) default-mode (DMN) networks. Combining the AO and sentence tasks allowed the auditory-motor and language networks to separately emerge, whereas they were aggregated when individual tasks were analyzed. AVH patients showed greater coordinated activity (deactivity for DMN regions) than non-AVH patients during SP in all networks, but this did not extend to VTG or AO. This suggests that the hypercoupling in AVH patients in speech-perception-related brain networks is specific to perceived speech, and does not extend to perceived nonspeech or inner verbal thought generation. © 2017 Wiley Periodicals, Inc.

  4. Current Directional Protection of Series Compensated Line Using Intelligent Classifier

    Directory of Open Access Journals (Sweden)

    M. Mollanezhad Heydarabadi

    2016-12-01

    Full Text Available The current inversion condition leads to incorrect operation of current-based directional relays in power systems with series compensated devices. The application of an intelligent system for fault direction classification is suggested in this paper. A new current directional protection scheme based on an intelligent classifier is proposed for the series compensated line. The proposed scheme uses only half a cycle of pre-fault and post-fault current samples at the relay location to feed the classifier. A large number of forward and backward fault simulations under different system conditions, on a transmission line with a fixed series capacitor, are carried out using PSCAD/EMTDC software. The applicability of the decision tree (DT), probabilistic neural network (PNN) and support vector machine (SVM) is investigated using the simulated data. The performance comparison of the classifiers indicates that the SVM is the most suitable classifier for fault direction discrimination. Backward faults can be accurately distinguished from forward faults even under current inversion, without requiring detection of the current inversion condition.
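
    The classifier stage alone can be mocked up as below: an SVM trained on feature vectors built from half a cycle of pre-fault plus post-fault current samples. The synthetic sine-wave "currents", the window length and the class separation are invented stand-ins for the PSCAD/EMTDC simulation data used in the study.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      n_cases, n_points = 40, 32                    # 32 samples standing in for one half cycle

      def make_case(direction):
          t = np.linspace(0, np.pi, n_points)
          pre = np.sin(t) + 0.05 * rng.normal(size=n_points)
          gain = 2.5 if direction == "forward" else 0.8
          post = gain * np.sin(t + 0.6) + 0.05 * rng.normal(size=n_points)
          return np.concatenate([pre, post])        # feature vector: pre-fault + post-fault window

      X = np.array([make_case(d) for d in ["forward"] * n_cases + ["backward"] * n_cases])
      y = ["forward"] * n_cases + ["backward"] * n_cases

      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict([make_case("backward")]))   # expected: ['backward']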

  5. Dynamic cluster generation for a fuzzy classifier with ellipsoidal regions.

    Science.gov (United States)

    Abe, S

    1998-01-01

    In this paper, we discuss a fuzzy classifier with ellipsoidal regions that dynamically generates clusters. First, for the data belonging to a class we define a fuzzy rule with an ellipsoidal region. Namely, using the training data for each class, we calculate the center and the covariance matrix of the ellipsoidal region for the class. Then we tune the fuzzy rules, i.e., the slopes of the membership functions, successively until there is no improvement in the recognition rate of the training data. Then if the number of the data belonging to a class that are misclassified into another class exceeds a prescribed number, we define a new cluster to which those data belong and the associated fuzzy rule. Then we tune the newly defined fuzzy rules in a similar way as stated above, fixing the already obtained fuzzy rules. We iterate generation of clusters and tuning of the newly generated fuzzy rules until the number of the data belonging to a class that are misclassified into another class does not exceed the prescribed number. We evaluate our method using thyroid data, Japanese Hiragana data of vehicle license plates, and blood cell data. By dynamic cluster generation, the generalization ability of the classifier is improved and the recognition rate of the fuzzy classifier for the test data is the best among the neural network classifiers and other fuzzy classifiers if there are no discrete input variables.
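
    Stripped of the rule tuning and dynamic cluster generation, the core of such a classifier reduces to ellipsoidal class regions defined by per-class centres and covariance matrices; the sketch below assigns a sample to the class with the smallest squared Mahalanobis distance. The data, the single-cluster-per-class restriction and the omission of membership-slope tuning are simplifying assumptions.

      import numpy as np

      def fit_ellipsoids(X, y):
          """Summarise each class by the centre and inverse covariance of its training data."""
          params = {}
          for c in np.unique(y):
              Xc = X[y == c]
              params[c] = (Xc.mean(axis=0), np.linalg.inv(np.cov(Xc, rowvar=False)))
          return params

      def classify(x, params):
          def maha2(centre, inv_cov):
              d = x - centre
              return float(d @ inv_cov @ d)         # squared Mahalanobis distance
          return min(params, key=lambda c: maha2(*params[c]))

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal([0, 0], [1.0, 0.3], (50, 2)),
                     rng.normal([3, 3], [0.3, 1.0], (50, 2))])
      y = np.array([0] * 50 + [1] * 50)
      print(classify(np.array([2.8, 2.5]), fit_ellipsoids(X, y)))   # -> 1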

  6. Economic competitiveness of underground coal gasification combined with carbon capture and storage in the Bulgarian energy network

    Energy Technology Data Exchange (ETDEWEB)

    Nakaten, Natalie Christine

    2014-11-15

    Underground coal gasification (UCG) allows for exploitation of deep-seated coal seams not economically exploitable by conventional coal mining. The aim of the present study is to examine UCG economics based on coal conversion into a synthesis gas to fuel a combined cycle gas turbine power plant (CCGT) with CO2 capture and storage (CCS). To this end, a techno-economic model is developed for determining the UCG-CCGT-CCS costs of electricity (COE) which, considering site-specific data of a selected target area in Bulgaria, sum up to 72 Euro/MWh in total. To quantify the impact of model constraints on COE, sensitivity analyses are undertaken revealing that varying geological model constraints impact COE by 0.4% to 4%, chemical constraints by 13%, technical constraints by 8% to 17% and market-dependent constraints by 2% to 25%. Besides site-specific boundary conditions, UCG-CCGT-CCS economics depend on resource availability and infrastructural characteristics of the overall energy system. Assessing a model-based implementation of UCG-CCGT-CCS and CCS power plants into the Bulgarian energy network revealed that both technologies provide essential and economically competitive options to achieve the EU environmental targets and a complete substitution of gas imports by UCG synthesis gas production.

  7. Combining natural language processing and network analysis to examine how advocacy organizations stimulate conversation on social media.

    Science.gov (United States)

    Bail, Christopher Andrew

    2016-10-18

    Social media sites are rapidly becoming one of the most important forums for public deliberation about advocacy issues. However, social scientists have not explained why some advocacy organizations produce social media messages that inspire far-ranging conversation among social media users, whereas the vast majority of them receive little or no attention. I argue that advocacy organizations are more likely to inspire comments from new social media audiences if they create "cultural bridges," or produce messages that combine conversational themes within an advocacy field that are seldom discussed together. I use natural language processing, network analysis, and a social media application to analyze how cultural bridges shaped public discourse about autism spectrum disorders on Facebook over the course of 1.5 years, controlling for various characteristics of advocacy organizations, their social media audiences, and the broader social context in which they interact. I show that organizations that create substantial cultural bridges provoke 2.52 times more comments about their messages from new social media users than those that do not, controlling for these factors. This study thus offers a theory of cultural messaging and public deliberation and computational techniques for text analysis and application-based survey research.

  8. Forecasting of UV-Vis absorbance time series using artificial neural networks combined with principal component analysis.

    Science.gov (United States)

    Plazas-Nossa, Leonardo; Hofer, Thomas; Gruber, Günter; Torres, Andres

    2017-02-01

    This work proposes a methodology for the forecasting of online water quality data provided by UV-Vis spectrometry. Therefore, a combination of principal component analysis (PCA) to reduce the dimensionality of a data set and artificial neural networks (ANNs) for forecasting purposes was used. The results obtained were compared with those obtained by using discrete Fourier transform (DFT). The proposed methodology was applied to four absorbance time series data sets composed by a total number of 5705 UV-Vis spectra. Absolute percentage errors obtained by applying the proposed PCA/ANN methodology vary between 10% and 13% for all four study sites. In general terms, the results obtained were hardly generalizable, as they appeared to be highly dependent on specific dynamics of the water system; however, some trends can be outlined. PCA/ANN methodology gives better results than PCA/DFT forecasting procedure by using a specific spectra range for the following conditions: (i) for Salitre wastewater treatment plant (WWTP) (first hour) and Graz West R05 (first 18 min), from the last part of UV range to all visible range; (ii) for Gibraltar pumping station (first 6 min) for all UV-Vis absorbance spectra; and (iii) for San Fernando WWTP (first 24 min) for all of UV range to middle part of visible range.
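
    The forecasting chain can be outlined as follows: compress each absorbance spectrum with PCA, train a small neural network to map the current scores to the next time step's scores, and project the prediction back to absorbance space. The random-walk "spectra", the component count and the network size below are placeholders, not the settings used for the four study sites.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      spectra = np.cumsum(rng.normal(size=(300, 220)), axis=0)   # 300 time steps x 220 wavelengths

      pca = PCA(n_components=5)
      scores = pca.fit_transform(spectra)                        # dimensionality reduction

      ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
      ann.fit(scores[:-1], scores[1:])                           # one-step-ahead model on PCA scores

      next_scores = ann.predict(scores[-1:])
      next_spectrum = pca.inverse_transform(next_scores)         # back to absorbance space
      print(next_spectrum.shape)                                 # (1, 220)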

  9. Age-related reorganization of functional networks for successful conflict resolution: a combined functional and structural MRI study.

    Science.gov (United States)

    Schulte, Tilman; Müller-Oehring, Eva M; Chanraud, Sandra; Rosenbloom, Margaret J; Pfefferbaum, Adolf; Sullivan, Edith V

    2011-11-01

    Aging has readily observable effects on the ability to resolve conflict between competing stimulus attributes that are likely related to selective structural and functional brain changes. To identify age-related differences in neural circuits subserving conflict processing, we combined structural and functional MRI and a Stroop Match-to-Sample task involving perceptual cueing and repetition to modulate resources in healthy young and older adults. In our Stroop Match-to-Sample task, older adults handled conflict by activating a frontoparietal attention system more than young adults and engaged a visuomotor network more than young adults when processing repetitive conflict and when processing conflict following valid perceptual cueing. By contrast, young adults activated frontal regions more than older adults when processing conflict with perceptual cueing. These differential activation patterns were not correlated with regional gray matter volume despite smaller volumes in older than young adults. Given comparable performance in speed and accuracy of responding between both groups, these data suggest that successful aging is associated with functional reorganization of neural systems to accommodate functionally increasing task demands on perceptual and attentional operations. Copyright © 2009 Elsevier Inc. All rights reserved.

  10. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests, and SAN normalization performed better than normalization using the other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
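
    A toy version of subgroup-adjusted normalization might look like the pandas sketch below: each laboratory value is standardised within its site-specific (sex, age-band) subgroup and then re-expressed on the scale of the matching subgroup at a reference site. The data frame, the grouping variables and the reference-site choice are illustrative assumptions, not the published procedure's exact details.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(0)
      df = pd.DataFrame({
          "site": rng.choice(["A", "B"], 400),
          "sex": rng.choice(["F", "M"], 400),
          "age_band": rng.choice(["<50", ">=50"], 400),
          "creatinine": rng.normal(1.0, 0.3, 400),
      })
      df.loc[df.site == "B", "creatinine"] *= 1.2        # site B reports on a different assay scale

      # Standardise each value within its site-specific (sex, age-band) subgroup ...
      grp = df.groupby(["site", "sex", "age_band"])["creatinine"]
      z = (df["creatinine"] - grp.transform("mean")) / grp.transform("std")

      # ... then re-express it on the scale of the matching subgroup at reference site A.
      ref = (df[df.site == "A"]
             .groupby(["sex", "age_band"])["creatinine"]
             .agg(ref_mean="mean", ref_std="std")
             .reset_index())
      out = df.join(z.rename("z")).merge(ref, on=["sex", "age_band"], how="left")
      out["creatinine_san"] = out["z"] * out["ref_std"] + out["ref_mean"]
      print(out.groupby("site")["creatinine_san"].mean().round(3))   # site means now agree closely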

  11. A robust observer based on H∞ filtering with parameter uncertainties combined with Neural Networks for estimation of vehicle roll angle

    Science.gov (United States)

    Boada, Beatriz L.; Boada, Maria Jesus L.; Vargas-Melendez, Leandro; Diaz, Vicente

    2018-01-01

    Nowadays, one of the main objectives in road transport is to decrease the number of accident victims. Rollover accidents caused nearly 33% of all deaths from passenger vehicle crashes. Roll Stability Control (RSC) systems prevent vehicles from untripped rollover accidents. The lateral load transfer is the main parameter which is taken into account in the RSC systems. This parameter is related to the roll angle, which can be directly measured from a dual-antenna GPS. Nevertheless, this is a costly technique. For this reason, roll angle has to be estimated. In this paper, a novel observer based on H∞ filtering in combination with a neural network (NN) for the vehicle roll angle estimation is proposed. The design of this observer is based on four main criteria: to use a simplified vehicle model, to use signals of sensors which are installed onboard in current vehicles, to consider the inaccuracy in the system model and to attenuate the effect of the external disturbances. Experimental results show the effectiveness of the proposed observer.

  12. Economic competitiveness of underground coal gasification combined with carbon capture and storage in the Bulgarian energy network

    International Nuclear Information System (INIS)

    Nakaten, Natalie Christine

    2014-01-01

    Underground coal gasification (UCG) allows for exploitation of deep-seated coal seams not economically exploitable by conventional coal mining. The aim of the present study is to examine UCG economics based on coal conversion into a synthesis gas to fuel a combined cycle gas turbine power plant (CCGT) with CO2 capture and storage (CCS). To this end, a techno-economic model is developed for determining the UCG-CCGT-CCS costs of electricity (COE) which, considering site-specific data of a selected target area in Bulgaria, sum up to 72 Euro/MWh in total. To quantify the impact of model constraints on COE, sensitivity analyses are undertaken revealing that varying geological model constraints impact COE by 0.4% to 4%, chemical constraints by 13%, technical constraints by 8% to 17% and market-dependent constraints by 2% to 25%. Besides site-specific boundary conditions, UCG-CCGT-CCS economics depend on resource availability and infrastructural characteristics of the overall energy system. Assessing a model-based implementation of UCG-CCGT-CCS and CCS power plants into the Bulgarian energy network revealed that both technologies provide essential and economically competitive options to achieve the EU environmental targets and a complete substitution of gas imports by UCG synthesis gas production.

  13. Alcoholic fermentation under oenological conditions. Use of a combination of data analysis and neural networks to predict sluggish and stuck fermentations

    Energy Technology Data Exchange (ETDEWEB)

    Insa, G. [Inst. National de la Recherche Agronomique, Inst. des Produits de la Vigne, Lab. de Microbiologie et Technologie des Fermentations, 34 - Montpellier (France); Sablayrolles, J.M. [Inst. National de la Recherche Agronomique, Inst. des Produits de la Vigne, Lab. de Microbiologie et Technologie des Fermentations, 34 - Montpellier (France); Douzal, V. [Centre National du Machinisme Agricole du Genie Rural des Eaux et Forets, 92 - Antony (France)

    1995-09-01

    The possibility of predicting sluggish fermentations under oenological conditions was investigated by studying 117 musts of different French grape varieties using an automatic device for fermentation monitoring. The objective was to detect sluggish or stuck fermentations at the halfway point of fermentation. Seventy nine percent of fermentations were correctly predicted by combining data analysis and neural networks. (orig.)

  14. Identification of flooded area from satellite images using Hybrid Kohonen Fuzzy C-Means sigma classifier

    Directory of Open Access Journals (Sweden)

    Krishna Kant Singh

    2017-06-01

    Full Text Available A novel neuro-fuzzy classifier, Hybrid Kohonen Fuzzy C-Means-σ (HKFCM-σ), is proposed in this paper. The proposed classifier is a hybridization of the Kohonen Clustering Network (KCN) with the FCM-σ clustering algorithm. The network architecture of HKFCM-σ is similar to a simple KCN network having only two layers, i.e., input and output layer. However, the selection of the winner neuron is done based on the FCM-σ algorithm, thus embedding the features of both a neural network and a fuzzy clustering algorithm in the classifier. This hybridization results in a more efficient, less complex and faster classifier for classifying satellite images. HKFCM-σ is used to identify the flooding that occurred in the Kashmir area in September 2014. The HKFCM-σ classifier is applied to pre- and post-flooding Landsat 8 OLI images of Kashmir to detect the areas that were flooded due to the heavy rainfall of September 2014. The classifier is trained using the mean values of various spectral indices such as NDVI, NDWI and NDBI and the first component of Principal Component Analysis. The error matrix was computed to test the performance of the method. The method yields high producer's accuracy, consumer's accuracy and kappa coefficient values, indicating that the proposed classifier is highly effective and efficient.
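
    A very loose sketch of the hybrid idea follows: a two-layer Kohonen-style network whose winning output neuron is selected by a fuzzy-c-means membership rather than by plain Euclidean distance, with only the winner's prototype updated. The σ weighting of FCM-σ, the spectral-index inputs and all hyperparameters are omitted or invented here, so this is an illustration of the general mechanism rather than the published classifier.

      import numpy as np

      def fcm_memberships(x, prototypes, m=2.0, eps=1e-9):
          """Fuzzy-c-means membership of sample x in each prototype."""
          d = np.linalg.norm(prototypes - x, axis=1) + eps
          inv = d ** (-2.0 / (m - 1.0))
          return inv / inv.sum()

      def train(X, n_clusters=3, lr=0.1, epochs=20, seed=0):
          rng = np.random.default_rng(seed)
          prototypes = X[rng.choice(len(X), n_clusters, replace=False)].copy()
          for _ in range(epochs):
              for x in X:
                  u = fcm_memberships(x, prototypes)
                  winner = int(np.argmax(u))                   # winner chosen by fuzzy membership
                  prototypes[winner] += lr * u[winner] * (x - prototypes[winner])
          return prototypes

      data_rng = np.random.default_rng(1)
      X = np.vstack([data_rng.normal(c, 0.2, (40, 2)) for c in ([0, 0], [2, 2], [0, 2])])
      print(train(X).round(2))                                 # learned prototypes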

  15. Error minimizing algorithms for nearest neighbor classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Hush, Don [Los Alamos National Laboratory; Zimmer, G. Beate [Texas A&M

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.

  16. Molecular Determinants Underlying Binding Specificities of the ABL Kinase Inhibitors: Combining Alanine Scanning of Binding Hot Spots with Network Analysis of Residue Interactions and Coevolution.

    Directory of Open Access Journals (Sweden)

    Amanda Tse

    Full Text Available Quantifying binding specificity and drug resistance of protein kinase inhibitors is of fundamental importance and remains highly challenging due to complex interplay of structural and thermodynamic factors. In this work, molecular simulations and computational alanine scanning are combined with the network-based approaches to characterize molecular determinants underlying binding specificities of the ABL kinase inhibitors. The proposed theoretical framework unveiled a relationship between ligand binding and inhibitor-mediated changes in the residue interaction networks. By using topological parameters, we have described the organization of the residue interaction networks and networks of coevolving residues in the ABL kinase structures. This analysis has shown that functionally critical regulatory residues can simultaneously embody strong coevolutionary signal and high network centrality with a propensity to be energetic hot spots for drug binding. We have found that selective (Nilotinib) and promiscuous (Bosutinib, Dasatinib) kinase inhibitors can use their energetic hot spots to differentially modulate stability of the residue interaction networks, thus inhibiting or promoting conformational equilibrium between inactive and active states. According to our results, Nilotinib binding may induce a significant network-bridging effect and enhance centrality of the hot spot residues that stabilize structural environment favored by the specific kinase form. In contrast, Bosutinib and Dasatinib can incur modest changes in the residue interaction network in which ligand binding is primarily coupled only with the identity of the gate-keeper residue. These factors may promote structural adaptability of the active kinase states in binding with these promiscuous inhibitors. Our results have related ligand-induced changes in the residue interaction networks with drug resistance effects, showing that network robustness may be compromised by targeted mutations

  17. Molecular Determinants Underlying Binding Specificities of the ABL Kinase Inhibitors: Combining Alanine Scanning of Binding Hot Spots with Network Analysis of Residue Interactions and Coevolution

    Science.gov (United States)

    Tse, Amanda; Verkhivker, Gennady M.

    2015-01-01

    Quantifying binding specificity and drug resistance of protein kinase inhibitors is of fundamental importance and remains highly challenging due to complex interplay of structural and thermodynamic factors. In this work, molecular simulations and computational alanine scanning are combined with the network-based approaches to characterize molecular determinants underlying binding specificities of the ABL kinase inhibitors. The proposed theoretical framework unveiled a relationship between ligand binding and inhibitor-mediated changes in the residue interaction networks. By using topological parameters, we have described the organization of the residue interaction networks and networks of coevolving residues in the ABL kinase structures. This analysis has shown that functionally critical regulatory residues can simultaneously embody strong coevolutionary signal and high network centrality with a propensity to be energetic hot spots for drug binding. We have found that selective (Nilotinib) and promiscuous (Bosutinib, Dasatinib) kinase inhibitors can use their energetic hot spots to differentially modulate stability of the residue interaction networks, thus inhibiting or promoting conformational equilibrium between inactive and active states. According to our results, Nilotinib binding may induce a significant network-bridging effect and enhance centrality of the hot spot residues that stabilize structural environment favored by the specific kinase form. In contrast, Bosutinib and Dasatinib can incur modest changes in the residue interaction network in which ligand binding is primarily coupled only with the identity of the gate-keeper residue. These factors may promote structural adaptability of the active kinase states in binding with these promiscuous inhibitors. Our results have related ligand-induced changes in the residue interaction networks with drug resistance effects, showing that network robustness may be compromised by targeted mutations of key mediating

  18. Combination of DTI and fMRI reveals the white matter changes correlating with the decline of default-mode network activity in Alzheimer's disease

    Science.gov (United States)

    Wu, Xianjun; Di, Qian; Li, Yao; Zhao, Xiaojie

    2009-02-01

    Recently, evidence from fMRI studies has shown that there is decreased activity in the default-mode network in Alzheimer's disease (AD), and DTI research has also demonstrated that demyelination exists in the white matter of AD patients. Therefore, combining these two MRI methods may help to reveal the relationship between white matter damage and alterations of the resting-state functional connectivity network. In the present study, we tried to address this issue by means of correlation analysis between DTI and resting-state fMRI images. The default-mode networks of the AD and normal control groups were first compared to find the areas with significantly declined activity. Then, the white matter regions whose fractional anisotropy (FA) value correlated with this decline were located through multiple regressions between the FA values and the BOLD response of the default networks. Among these correlating white matter regions, those whose FA values also declined were found by a group comparison between AD patients and healthy elderly control subjects. Our results showed that the areas with decreased activity in the default-mode network included the left posterior cingulate cortex (PCC) and the left medial temporal gyrus, among others. The damaged white matter areas correlated with the default-mode network alterations were located around the left sub-gyral temporal lobe. These changes may relate to the decreased connectivity between the PCC and the medial temporal lobe (MTL), and thus correlate with the deficiency of default-mode network activity.

  19. Water balance estimation in high Alpine terrain by combining distributed modeling and a neural network approach (Berchtesgaden Alps, Germany

    Directory of Open Access Journals (Sweden)

    G. Kraller

    2012-07-01

    Full Text Available The water balance in high Alpine regions is often characterized by significant variation of meteorological variables in space and time, a complex hydrogeological situation and steep gradients. The system is even more complex when the rock composition is dominated by soluble limestone, because unknown underground flow conditions and flow directions lead to unknown storage quantities. Reliable distributed modeling cannot be implemented by traditional approaches due to unknown storage processes at local and catchment scale. We present an artificial neural network extension of a distributed hydrological model (WaSiM-ETH) that allows to account for subsurface water transfer in a karstic environment. The extension was developed for the Alpine catchment of the river "Berchtesgadener Ache" (Berchtesgaden Alps, Germany), which is characterized by extreme topography and calcareous rocks. The model assumes porous conditions and does not account for karstic environments, resulting in systematic mismatch of modeled and measured runoff in discharge curves at the outlet points of neighboring high alpine subbasins. Various precipitation interpolation methods did not allow to explain systematic mismatches, and unknown subsurface hydrological processes were concluded as the underlying reason. We introduce a new method that allows to describe the unknown subsurface boundary fluxes, and account for them in the hydrological model. This is achieved by an artificial neural network approach (ANN), where four input variables are taken to calculate the unknown subsurface storage conditions. This was first developed for the high Alpine subbasin Königsseer Ache to improve the monthly water balance. We explicitly derive the algebraic transfer function of an artificial neural net to calculate the missing boundary fluxes. The result of the ANN is then implemented in the groundwater module of the hydrological model as boundary flux, and considered during the consecutive model
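
    The "algebraic transfer function" of a small feed-forward network mentioned above has the general form q = W2·tanh(W1·x + b1) + b2; the sketch below evaluates such a function for four inputs. The weights, the hidden-layer size and the particular input variables named in the comment are placeholders, since the study's calibrated values and exact predictors are not reproduced here.

      import numpy as np

      W1 = np.array([[ 0.8, -0.3,  0.5,  0.1],
                     [-0.2,  0.6, -0.4,  0.7],
                     [ 0.3,  0.1,  0.9, -0.5]])   # hidden-layer weights (3 neurons x 4 inputs)
      b1 = np.array([0.1, -0.2, 0.05])
      W2 = np.array([0.7, -0.4, 0.6])             # output weights
      b2 = 0.02

      def boundary_flux(x):
          """Algebraic form of the net: q = W2 . tanh(W1 . x + b1) + b2"""
          return float(W2 @ np.tanh(W1 @ np.asarray(x) + b1) + b2)

      # Hypothetical normalised inputs, e.g. precipitation, temperature, runoff, snow storage.
      print(round(boundary_flux([0.4, 0.1, 0.7, 0.2]), 4))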

  20. Improved diagnostic accuracy of Alzheimer's disease by combining regional cortical thickness and default mode network functional connectivity: Validated in the Alzheimer's disease neuroimaging initiative set

    International Nuclear Information System (INIS)

    Park, Ji Eun; Park, Bum Woo; Kim, Sang Joon; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Jung; Oh, Joo Young; Shim, Woo Hyun; Lee, Jae Hong; Roh, Jee Hoon

    2017-01-01

    To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity and to validate this model's diagnostic accuracy in a validation set. Data from 98 subjects was retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network was extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions were more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Combining functional information with CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease

  1. Classifying the Diversity of Bus Mapping Systems

    Science.gov (United States)

    Said, Mohd Shahmy Mohd; Forrest, David

    2018-05-01

    This study represents the first stage of an investigation into understanding the nature of different approaches to mapping bus routes and bus networks, and how they may best be applied in different public transport situations. In many cities, bus services represent an important facet of easing traffic congestion and reducing pollution. However, with the entrenched car culture in many countries, persuading people to change their mode of transport is a major challenge. To promote this modal shift, people need to know what services are available and where (and when) they go. Bus service maps provide an invaluable element of suitable public transport information, but are often overlooked by transport planners and under-researched by cartographers. The method consists of creating a map evaluation form and assessing published bus network maps. The analyses were completed by a combination of quantitative and qualitative analysis of various aspects of cartographic design and classification. This paper focuses on the resulting classification, which is illustrated by a series of examples. This classification will facilitate more in-depth investigations into the details of cartographic design for such maps and help direct areas for user evaluation.

  2. A Topic Model Approach to Representing and Classifying Football Plays

    KAUST Repository

    Varadarajan, Jagannadan

    2013-09-09

    We address the problem of modeling and classifying American Football offense teams' plays in video, a challenging example of group activity analysis. Automatic play classification will allow coaches to infer patterns and tendencies of opponents more efficiently, resulting in better strategy planning in a game. We define a football play as a unique combination of player trajectories. To this end, we develop a framework that uses player trajectories as inputs to MedLDA, a supervised topic model. The joint maximization of both likelihood and inter-class margins of MedLDA in learning the topics allows us to learn semantically meaningful play type templates, as well as classify different play types with 70% average accuracy. Furthermore, this method is extended to analyze individual player roles in classifying each play type. We validate our method on a large dataset comprising 271 play clips from real-world football games, which will be made publicly available for future comparisons.

  3. Structural and functional abnormalities of default mode network in minimal hepatic encephalopathy: a study combining DTI and fMRI.

    Directory of Open Access Journals (Sweden)

    Rongfeng Qi

    Full Text Available BACKGROUND AND PURPOSE: Liver failure can cause brain edema and aberrant brain function in cirrhotic patients. In particular, decreased functional connectivity within the brain default-mode network (DMN) has been recently reported in overt hepatic encephalopathy (HE) patients. However, so far, little is known about the connectivity within the DMN in minimal HE (MHE), the mildest form of HE. Here, we combined diffusion tensor imaging (DTI) and resting-state functional MRI (rs-fMRI) to test our hypothesis that both structural and functional connectivity within the DMN are disturbed in MHE. MATERIALS AND METHODS: Twenty MHE patients and 20 healthy controls participated in the study. We explored the changes of structural (path length, tract count, fractional anisotropy [FA] and mean diffusivity [MD], derived from DTI tractography) and functional (temporal correlation coefficient, derived from rs-fMRI) connectivity of the DMN in MHE patients. Pearson correlation analysis was performed between the structural/functional indices and the venous blood ammonia levels/neuropsychological test scores of the patients. All thresholds were set at P<0.05, Bonferroni corrected. RESULTS: Compared to the healthy controls, MHE patients showed both decreased FA and increased MD in the tract connecting the posterior cingulate cortex/precuneus (PCC/PCUN) to the left parahippocampal gyrus (PHG), and decreased functional connectivity between the PCC/PCUN and the left PHG and medial prefrontal cortex (MPFC). MD values of the tract connecting the PCC/PCUN to the left PHG correlated positively with the ammonia levels, and the temporal correlation coefficients between the PCC/PCUN and the MPFC correlated positively with the digit symbol test scores of the patients. CONCLUSION: MHE patients have both disturbed structural and functional connectivity within the DMN. The decreased functional connectivity was also detected between some regions without abnormal structural connectivity, suggesting that the

  4. Comparative efficacy of inhaled corticosteroid and long-acting beta agonist combinations in preventing COPD exacerbations: a Bayesian network meta-analysis.

    Science.gov (United States)

    Oba, Yuji; Lone, Nazir A

    2014-01-01

    A combination therapy with inhaled corticosteroid (ICS) and a long-acting beta agonist (LABA) is recommended in severe chronic obstructive pulmonary disease (COPD) patients experiencing frequent exacerbations. Currently, there are five ICS/LABA combination products available on the market. The purpose of this study was to systematically review the efficacy of various ICS/LABA combinations with a network meta-analysis. Several databases and manufacturer's websites were searched for relevant clinical trials. Randomized control trials, at least 12 weeks duration, comparing an ICS/LABA combination with active control or placebo were included. Moderate and severe exacerbations were chosen as the outcome assessment criteria. The primary analyses were conducted with a Bayesian Markov chain Monte Carlo method. Most of the ICS/LABA combinations reduced moderate-to-severe exacerbations as compared with placebo and LABA, but none of them reduced severe exacerbations. However, many studies excluded patients receiving long-term oxygen therapy. Moderate-dose ICS was as effective as high-dose ICS in reducing exacerbations when combined with LABA. ICS/LABA combinations had a class effect with regard to the prevention of COPD exacerbations. Moderate-dose ICS/LABA combination therapy would be sufficient for COPD patients when indicated. The efficacy of ICS/LABA combination therapy appeared modest and had no impact in reducing severe exacerbations. Further studies are needed to evaluate the efficacy of ICS/LABA combination therapy in severely affected COPD patients requiring long-term oxygen therapy.

  5. Comparing classifiers for pronunciation error detection

    NARCIS (Netherlands)

    Strik, H.; Truong, K.; Wet, F. de; Cucchiarini, C.

    2007-01-01

    Providing feedback on pronunciation errors in computer assisted language learning systems requires that pronunciation errors be detected automatically. In the present study we compare four types of classifiers that can be used for this purpose: two acoustic-phonetic classifiers (one of which employs

  6. Feature extraction for dynamic integration of classifiers

    NARCIS (Netherlands)

    Pechenizkiy, M.; Tsymbal, A.; Puuronen, S.; Patterson, D.W.

    2007-01-01

    Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. In this paper, we present an algorithm for the dynamic integration of classifiers in the space of extracted features (FEDIC). It is based on the technique

  7. Speaker emotion recognition: from classical classifiers to deep neural networks

    Science.gov (United States)

    Mezghani, Eya; Charfeddine, Maha; Nicolas, Henri; Ben Amar, Chokri

    2018-04-01

    Speaker emotion recognition is considered among the most challenging tasks in recent years. In fact, automatic systems for security, medicine or education can be improved when considering the speech affective state. In this paper, a twofold approach for speech emotion classification is proposed: first, a relevant set of features is adopted, and second, numerous supervised training techniques, involving classic methods as well as deep learning, are experimented with. Experimental results indicate that a deep architecture can improve classification performance on two affective databases, the Berlin Dataset of Emotional Speech and the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset.

  8. Evaluating Machine Learning Classifiers for Hybrid Network Intrusion Detection Systems

    Science.gov (United States)

    2015-03-26

    ...and the value-focused method. Comparing results from the two evaluation methods, fallacies are revealed with 2 of the 5 notional weighting schemes... though AdaBoost.BayesNet dominated the traditional PR space using a single curve approach. This evaluation fallacy has not been demonstrated prior to...

  9. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  10. Deconvolution When Classifying Noisy Data Involving Transformations.

    Science.gov (United States)

    Carroll, Raymond; Delaigle, Aurore; Hall, Peter

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  11. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond; Delaigle, Aurore; Hall, Peter

    2012-01-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  12. Detection of microaneurysms in retinal images using an ensemble classifier

    Directory of Open Access Journals (Sweden)

    M.M. Habib

    2017-01-01

    Full Text Available This paper introduces, and reports on the performance of, a novel combination of algorithms for automated microaneurysm (MA) detection in retinal images. The presence of MAs in retinal images is a pathognomonic sign of Diabetic Retinopathy (DR), which is one of the leading causes of blindness amongst the working-age population. An extensive survey of the literature is presented and current techniques in the field are summarised. The proposed technique first detects an initial set of candidates using a Gaussian Matched Filter and then classifies this set to reduce the number of false positives. A Tree Ensemble classifier is used with a set of 70 features (the most common features in the literature). A new set of 32 MA groundtruth images (with a total of 256 labelled MAs), based on images from the MESSIDOR dataset, is introduced as a public dataset for benchmarking MA detection algorithms. We evaluate our algorithm on this dataset as well as another public dataset (DIARETDB1 v2.1) and compare it against the best available alternative. Results show that the proposed classifier is superior in terms of eliminating false positive MA detections from the initial set of candidates. The proposed method achieves an ROC score of 0.415, compared to 0.2636 achieved by the best available technique. Furthermore, results show that the classifier model maintains consistent performance across datasets, illustrating the generalisability of the classifier and that overfitting does not occur.
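
    The two-stage pipeline described above (matched-filter candidate detection followed by tree-ensemble false-positive reduction) can be illustrated with a minimal sketch. The kernel parameters, the tiny feature set and the random-forest stand-in for the paper's Tree Ensemble are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from scipy.ndimage import correlate, label, find_objects
      from sklearn.ensemble import RandomForestClassifier

      def gaussian_matched_filter(image, sigma=1.5, size=11):
          """Correlate the image with a zero-mean 2D Gaussian kernel."""
          ax = np.arange(size) - size // 2
          xx, yy = np.meshgrid(ax, ax)
          kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
          kernel -= kernel.mean()                      # zero-mean matched filter
          return correlate(image.astype(float), kernel)

      def candidate_regions(response, thresh):
          """Threshold the filter response and return one slice per candidate blob."""
          labels, _ = label(response > thresh)
          return find_objects(labels)

      def region_features(image, region):
          """A tiny illustrative feature vector (the paper uses 70 features)."""
          patch = image[region]
          return [patch.mean(), patch.std(), patch.min(), patch.max(), patch.size]

      # Training: features of labelled candidates -> true MA vs. false positive
      rng = np.random.default_rng(0)
      train_img = rng.random((256, 256))               # placeholder retinal image
      resp = gaussian_matched_filter(train_img)
      cands = candidate_regions(resp, np.percentile(resp, 99.5))
      X = np.array([region_features(train_img, r) for r in cands])
      y = rng.integers(0, 2, size=len(X))              # placeholder ground-truth labels
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

      # Inference: keep only candidates the ensemble scores as likely MAs
      scores = clf.predict_proba(X)[:, 1]
      kept = [r for r, s in zip(cands, scores) if s > 0.5]
      print(f"{len(cands)} candidates, {len(kept)} retained after classification")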

  13. Comparison of Classifier Architectures for Online Neural Spike Sorting.

    Science.gov (United States)

    Saeed, Maryam; Khan, Amir Ali; Kamboh, Awais Mehmood

    2017-04-01

    High-density, intracranial recordings from micro-electrode arrays need to undergo Spike Sorting in order to associate the recorded neuronal spikes to particular neurons. This involves spike detection, feature extraction, and classification. To reduce the data transmission and power requirements, on-chip real-time processing is becoming very popular. However, high computational resources are required for classifiers in on-chip spike-sorters, making scalability a great challenge. In this review paper, we analyze several popular classifiers to propose five new hardware architectures using the off-chip training with on-chip classification approach. These include support vector classification, fuzzy C-means classification, self-organizing maps classification, moving-centroid K-means classification, and Cosine distance classification. The performance of these architectures is analyzed in terms of accuracy and resource requirement. We establish that the neural networks based Self-Organizing Maps classifier offers the most viable solution. A spike sorter based on the Self-Organizing Maps classifier, requires only 7.83% of computational resources of the best-reported spike sorter, hierarchical adaptive means, while offering a 3% better accuracy at 7 dB SNR.

  14. Parallel consensual neural networks.

    Science.gov (United States)

    Benediktsson, J A; Sveinsson, J R; Ersoy, O K; Swain, P H

    1997-01-01

    A new type of a neural-network architecture, the parallel consensual neural network (PCNN), is introduced and applied in classification/data fusion of multisource remote sensing and geographic data. The PCNN architecture is based on statistical consensus theory and involves using stage neural networks with transformed input data. The input data are transformed several times and the different transformed data are used as if they were independent inputs. The independent inputs are first classified using the stage neural networks. The output responses from the stage networks are then weighted and combined to make a consensual decision. In this paper, optimization methods are used in order to weight the outputs from the stage networks. Two approaches are proposed to compute the data transforms for the PCNN, one for binary data and another for analog data. The analog approach uses wavelet packets. The experimental results obtained with the proposed approach show that the PCNN outperforms both a conjugate-gradient backpropagation neural network and conventional statistical methods in terms of overall classification accuracy of test data.
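
    The consensus idea described above can be sketched as follows: several stage classifiers are trained on different transforms of the same input, and their class-probability outputs are combined with weights proportional to each stage's training accuracy. The transforms, the MLP stand-ins for the stage networks and the weighting rule are assumptions for illustration only, not the PCNN's wavelet-packet transforms or consensus-theoretic weights.

      import numpy as np
      from sklearn.datasets import load_digits
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.preprocessing import StandardScaler

      X, y = load_digits(return_X_y=True)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # Hypothetical input transforms standing in for the PCNN data transforms
      transforms = [
          lambda A: A,                                         # identity
          lambda A: np.sqrt(A + 1.0),                          # compressive transform
          lambda A: StandardScaler().fit(X_tr).transform(A),   # standardization
      ]

      stage_nets, weights = [], []
      for t in transforms:
          net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
          net.fit(t(X_tr), y_tr)
          stage_nets.append(net)
          weights.append(net.score(t(X_tr), y_tr))   # weight each stage by its accuracy

      # Consensual decision: weighted average of stage-network class probabilities
      probs = sum(w * net.predict_proba(t(X_te))
                  for w, net, t in zip(weights, stage_nets, transforms)) / sum(weights)
      y_hat = probs.argmax(axis=1)
      print("consensus accuracy:", (y_hat == y_te).mean())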

  15. A CLASSIFIER SYSTEM USING SMOOTH GRAPH COLORING

    Directory of Open Access Journals (Sweden)

    JORGE FLORES CRUZ

    2017-01-01

    Full Text Available Unsupervised classifiers allow clustering methods with less or no human intervention. Therefore it is desirable to group the set of items with less data processing. This paper proposes an unsupervised classifier system using the model of soft graph coloring. This method was tested with some classic instances in the literature and the results obtained were compared with classifications made with human intervention, yielding as good or better results than supervised classifiers, sometimes providing alternative classifications that considers additional information that humans did not considered.

  16. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

    We consider the binary classification problem in the imbalanced case where the number of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads...... to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...

  17. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    from data rather than having a predefined feature set. We explore deep learning approach of convolutional neural network (CNN) for segmenting three dimensional medical images. We propose a novel system integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D......This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge...... amount of training data to cover sufficient biological variability. Learning methods scaling badly with number of training data points cannot be used in such scenarios. This may restrict the usage of many powerful classifiers having excellent generalization ability. We propose a cascaded classifier which...

  18. Security Enrichment in Intrusion Detection System Using Classifier Ensemble

    Directory of Open Access Journals (Sweden)

    Uma R. Salunkhe

    2017-01-01

    Full Text Available In the era of the Internet and with an increasing number of people as its end users, a large number of attack categories are introduced daily. Hence, effective detection of various attacks with the help of Intrusion Detection Systems is an emerging trend in research these days. Existing studies show the effectiveness of machine learning approaches in handling Intrusion Detection Systems. In this work, we aim to enhance the detection rate of an Intrusion Detection System by using machine learning techniques. We propose a novel classifier-ensemble-based IDS that is constructed using a hybrid approach which combines data-level and feature-level approaches. Classifier ensembles combine the opinions of different experts and improve the intrusion detection rate. Experimental results show the improved detection rates of our system compared to the reference technique.
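
    A minimal sketch of a classifier ensemble for intrusion detection follows, assuming a generic labelled flow-feature matrix rather than the authors' dataset; the choice of base learners and soft (probability-averaging) voting are illustrative and do not reproduce the hybrid data-level/feature-level scheme of the paper.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier

      # Synthetic stand-in for network-flow features labelled attack / normal
      X, y = make_classification(n_samples=3000, n_features=20, weights=[0.8, 0.2],
                                 random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      ensemble = VotingClassifier(
          estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                      ("knn", KNeighborsClassifier(n_neighbors=5)),
                      ("lr", LogisticRegression(max_iter=1000))],
          voting="soft")            # combine the "opinions" of the base experts
      ensemble.fit(X_tr, y_tr)
      print("ensemble detection accuracy:", ensemble.score(X_te, y_te))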

  19. Prediction of phenotypic susceptibility to antiretroviral drugs using physiochemical properties of the primary enzymatic structure combined with artificial neural networks

    DEFF Research Database (Denmark)

    Kjaer, J; Høj, L; Fox, Z

    2008-01-01

    OBJECTIVES: Genotypic interpretation systems extrapolate observed associations in datasets to predict viral susceptibility to antiretroviral drugs (ARVs) for given isolates. We aimed to develop and validate an approach using artificial neural networks (ANNs) that employ descriptors...

  20. Frequency of victimization experiences and well-being among online, offline and combined victims on social online network sites of German children and adolescents

    Directory of Open Access Journals (Sweden)

    Michael eGlüer

    2015-12-01

    Full Text Available Victimization is associated with negative developmental outcomes in childhood and adolescence. However, previous studies have provided mixed results regarding the association between offline and online victimization and indicators of social, psychological, and somatic well-being. In this study, we investigated 1,906 German children and adolescents (grades 5 to 10, mean age = 13.9; SD = 2.1) with and without offline or online victimization experiences who participated in a social online network (SNS). Online questionnaires were used to assess previous victimization (offline, online, combined, and without), somatic and psychological symptoms, self-esteem, and social self-concept (social competence, resistance to peer influence, esteem by others). In total, 1,362 (71.4%) children and adolescents reported being a member of at least one social online network, and 377 students (28.8%) reported previous victimization. Most children and adolescents had offline victimization experiences (17.5%), whereas 2.7% reported online victimization, and 8.6% reported combined experiences. Girls reported more online and combined victimization, and boys reported more offline victimization. The type of victimization (offline, online, combined) was associated with increased reports of psychological and somatic symptoms, lower self-esteem and esteem by others, and lower resistance to peer influences. The effects were comparable for the groups with offline and online victimization. They were, however, increased in the combined group in comparison to victims with offline experiences alone.

  1. Exploring patterns of alteration in Alzheimer’s disease brain networks: a combined structural and functional connectomics analysis

    Directory of Open Access Journals (Sweden)

    Fulvia Palesi

    2016-09-01

    Full Text Available Alzheimer’s disease (AD) is a neurodegenerative disorder characterized by a severe derangement of cognitive functions, primarily memory, in elderly subjects. As far as the functional impairment is concerned, growing evidence supports the disconnection syndrome hypothesis. Recent investigations using fMRI have revealed a generalized alteration of resting state networks in patients affected by AD and mild cognitive impairment (MCI). However, it was unclear whether the changes in functional connectivity were accompanied by corresponding structural network changes. In this work, we have developed a novel structural/functional connectomic approach: resting state fMRI was used to identify the functional cortical network nodes and diffusion MRI to reconstruct the fiber tracts to give a weight to internodal subcortical connections. Then, local and global efficiency were determined for different networks, exploring specific alterations of integration and segregation patterns in AD and MCI patients compared to healthy controls (HC). In the default mode network (DMN), which was the most affected, axonal loss and reduced axonal integrity appeared to compromise both local and global efficiency along posterior-anterior connections. In the basal ganglia network (BGN), disruption of white matter integrity implied that the main alterations occurred in local microstructure. In the anterior insular network (AIN), neuronal loss probably subtended a compromised communication with the insular cortex. Cognitive performance, evaluated by neuropsychological examinations, revealed a dependency on the integration and segregation of brain networks. These findings are indicative of the fact that cognitive deficits in AD could be associated not only with cortical alterations (revealed by fMRI) but also with subcortical alterations (revealed by diffusion MRI) that extend beyond the areas primarily damaged by neurodegeneration, towards the support of an emerging concept of AD as a
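
    The integration and segregation measures used above can be computed directly from a connectivity graph; a minimal NetworkX sketch is shown below. The random adjacency matrix is a placeholder for a DTI-weighted network between fMRI-defined nodes, and note that NetworkX's built-in efficiency routines treat the graph topology as unweighted.

      import networkx as nx
      import numpy as np

      # Placeholder adjacency matrix standing in for a tractography-weighted
      # network between fMRI-defined nodes (e.g. DMN regions).
      rng = np.random.default_rng(0)
      n_nodes = 10
      A = rng.random((n_nodes, n_nodes))
      A = (A + A.T) / 2                      # symmetrize
      A[A < 0.6] = 0                         # keep only the stronger connections
      np.fill_diagonal(A, 0)

      G = nx.from_numpy_array(A)

      # Global efficiency captures integration, local efficiency segregation;
      # weighted variants would need a custom shortest-path routine.
      print("global efficiency:", nx.global_efficiency(G))
      print("local efficiency:", nx.local_efficiency(G))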

  2. High speed VLSI neural network for high energy physics

    NARCIS (Netherlands)

    Masa, P.; Masa, P.; Hoen, K.; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    A CMOS neural network IC is discussed which was designed for very high speed applications. The parallel architecture, analog computing and digital weight storage provides unprecedented computing speed combined with ease of use. The circuit classifies up to 70 dimensional vectors within 20

  3. Precise deformation measurement of prestressed concrete beam during a strain test using the combination of intersection photogrammetry and micro-network measurement

    Science.gov (United States)

    Urban, Rudolf; Braun, Jaroslav; Štroner, Martin

    2015-05-01

    Prestressed thin-walled concrete elements enable bridges with relatively large spans. These structures are advantageous in economic and environmental terms due to their thickness and lower consumption of materials. The bending moments can be effectively influenced by using the pre-stress. The experiment was done to monitor the deformation of the beam under load. During the experiment, discrete points were monitored. To determine a large number of points, intersection photogrammetry combined with a precise micro-network was chosen.

  4. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on dynamic programming approach we design algorithms for sequential optimization of exact and approximate decision rules relative to the length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification-exact or approximate; and (ii) which order of optimization gives better results of classifier work: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than the ordinary optimization (length or coverage).

  5. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    Based on dynamic programming approach we design algorithms for sequential optimization of exact and approximate decision rules relative to the length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification-exact or approximate; and (ii) which order of optimization gives better results of classifier work: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than the ordinary optimization (length or coverage).

  6. Consistency Analysis of Nearest Subspace Classifier

    OpenAIRE

    Wang, Yi

    2015-01-01

    The Nearest subspace classifier (NSS) finds an estimation of the underlying subspace within each class and assigns data points to the class that corresponds to its nearest subspace. This paper mainly studies how well NSS can be generalized to new samples. It is proved that NSS is strongly consistent under certain assumptions. For completeness, NSS is evaluated through experiments on various simulated and real data sets, in comparison with some other linear model based classifiers. It is also ...
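
    A compact sketch of the NSS rule described above: estimate a low-dimensional subspace per class via SVD of the centered class samples, and assign each test point to the class whose subspace gives the smallest reconstruction residual. The subspace dimension and the toy data are assumptions.

      import numpy as np

      def fit_subspaces(X, y, dim=2):
          """Per class: mean and an orthonormal basis of the leading `dim` directions."""
          models = {}
          for c in np.unique(y):
              Xc = X[y == c]
              mu = Xc.mean(axis=0)
              # rows of Vt are the right singular vectors = principal directions
              _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
              models[c] = (mu, Vt[:dim])
          return models

      def predict_nss(models, X):
          """Assign each sample to the class with the smallest residual to its subspace."""
          labels, residuals = [], []
          for c, (mu, B) in models.items():
              Z = X - mu
              proj = Z @ B.T @ B                      # projection onto the class subspace
              residuals.append(np.linalg.norm(Z - proj, axis=1))
              labels.append(c)
          residuals = np.vstack(residuals)
          return np.array(labels)[residuals.argmin(axis=0)]

      # Toy data: two classes living near different 2-D subspaces of R^5
      rng = np.random.default_rng(0)
      basis0, basis1 = rng.random((2, 5)), rng.random((2, 5))
      X = np.vstack([rng.normal(size=(100, 2)) @ basis0,
                     rng.normal(size=(100, 2)) @ basis1 + 3.0])
      y = np.array([0] * 100 + [1] * 100)

      models = fit_subspaces(X, y, dim=2)
      print("training accuracy:", (predict_nss(models, X) == y).mean())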

  7. Network structure and travel time perception.

    Science.gov (United States)

    Parthasarathi, Pavithra; Levinson, David; Hochmair, Hartwig

    2013-01-01

    The purpose of this research is to test the systematic variation in the perception of travel time among travelers and relate the variation to the underlying street network structure. Travel survey data from the Twin Cities metropolitan area (which includes the cities of Minneapolis and St. Paul) is used for the analysis. Travelers are classified into two groups based on the ratio of perceived and estimated commute travel time. The measures of network structure are estimated using the street network along the identified commute route. T-test comparisons are conducted to identify statistically significant differences in estimated network measures between the two traveler groups. The combined effect of these estimated network measures on travel time is then analyzed using regression models. The results from the t-test and regression analyses confirm the influence of the underlying network structure on the perception of travel time.

  8. Combining Community Engagement and Scientific Approaches in Next-Generation Monitor Siting: The Case of the Imperial County Community Air Network

    Directory of Open Access Journals (Sweden)

    Michelle Wong

    2018-03-01

    Full Text Available Air pollution continues to be a global public health threat, and the expanding availability of small, low-cost air sensors has led to increased interest in both personal and crowd-sourced air monitoring. However, to date, few low-cost air monitoring networks have been developed with the scientific rigor or continuity needed to conduct public health surveillance and inform policy. In Imperial County, California, near the U.S./Mexico border, we used a collaborative, community-engaged process to develop a community air monitoring network that attains the scientific rigor required for research, while also achieving community priorities. By engaging community residents in the project design, monitor siting processes, data dissemination, and other key activities, the resulting air monitoring network data are relevant, trusted, understandable, and used by community residents. Integration of spatial analysis and air monitoring best practices into the network development process ensures that the data are reliable and appropriate for use in research activities. This combined approach results in a community air monitoring network that is better able to inform community residents, support research activities, guide public policy, and improve public health. Here we detail the monitor siting process and outline the advantages and challenges of this approach.

  9. Credit scoring using ensemble of various classifiers on reduced feature set

    Directory of Open Access Journals (Sweden)

    Dahiya Shashi

    2015-01-01

    Full Text Available Credit scoring methods are widely used for evaluating loan applications in financial and banking institutions. A credit score identifies whether applicant customers belong to a good-risk or a bad-risk applicant group. These decisions are based on the demographic data of the customers, the overall business of the customer with the bank, and the loan payment history of the loan applicants. The advantages of using credit scoring models include reducing the cost of credit analysis, enabling faster credit decisions and diminishing possible risk. Many statistical and machine learning techniques such as Logistic Regression, Support Vector Machines, Neural Networks and Decision Tree algorithms have been used independently and as hybrid credit scoring models. This paper proposes an ensemble-based technique combining seven individual models to increase the classification accuracy. Feature selection has also been used for selecting important attributes for classification. Cross classification was conducted using three data partitions. The German credit dataset, having 1,000 instances and 21 attributes, is used in the present study. The results of the experiments revealed that the ensemble model yielded a very good accuracy when compared to the individual models. In all three partitions, the ensemble model was able to classify more than 80% of the loan customers as good creditors correctly. Also, for the 70:30 partition there was a good impact of feature selection on the accuracy of the classifiers. The results were improved for almost all individual models, including the ensemble model.
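
    The feature-selection-plus-ensemble idea can be sketched on synthetic data of roughly the German-credit shape (1,000 instances, 21 attributes). The specific selector, the three base learners and the 70:30 split mirror the description only loosely and are assumptions; the paper combines seven individual models.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import VotingClassifier
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      # Synthetic stand-in for the German credit data (1,000 applicants, 21 attributes)
      X, y = make_classification(n_samples=1000, n_features=21, n_informative=10,
                                 weights=[0.7, 0.3], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                                random_state=0)   # 70:30 partition

      ensemble = make_pipeline(
          StandardScaler(),
          SelectKBest(f_classif, k=12),          # keep the most informative attributes
          VotingClassifier(
              estimators=[("lr", LogisticRegression(max_iter=1000)),
                          ("svm", SVC(probability=True, random_state=0)),
                          ("tree", DecisionTreeClassifier(random_state=0))],
              voting="soft"))
      ensemble.fit(X_tr, y_tr)
      print("share of applicants classified correctly:", ensemble.score(X_te, y_te))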

  10. Stacking machine learning classifiers to identify Higgs bosons at the LHC

    International Nuclear Information System (INIS)

    Alves, A.

    2017-01-01

    Machine learning (ML) algorithms have been employed in the problem of classifying signal and background events with high accuracy in particle physics. In this paper, we compare the performance of a widespread ML technique, namely stacked generalization, against the results of two state-of-the-art algorithms: (1) a deep neural network (DNN) in the task of discovering a new neutral Higgs boson and (2) a scalable machine learning system for tree boosting, in the Standard Model Higgs to tau leptons channel, both at the 8 TeV LHC. In a cut-and-count analysis, stacking three algorithms performed around 16% worse than the DNN while demanding far less computational effort; however, the same stacking outperforms boosted decision trees. Using the stacked classifiers in a multivariate statistical analysis (MVA), on the other hand, significantly enhances the statistical significance compared to cut-and-count in both Higgs processes, suggesting that combining an ensemble of simpler and faster ML algorithms with MVA tools is a better approach than building a complex state-of-the-art algorithm for cut-and-count.
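
    Stacked generalization as described can be sketched with scikit-learn's StackingClassifier; the base learners, the meta-learner and the synthetic "signal vs. background" data are assumptions standing in for the kinematic features and specific algorithms used in the paper.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                                    StackingClassifier)
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      # Synthetic stand-in for signal/background events with kinematic features
      X, y = make_classification(n_samples=5000, n_features=25, n_informative=12,
                                 random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      stack = StackingClassifier(
          estimators=[("bdt", GradientBoostingClassifier(random_state=0)),
                      ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                      ("mlp", MLPClassifier(hidden_layer_sizes=(50,), max_iter=500,
                                            random_state=0))],
          final_estimator=LogisticRegression(max_iter=1000),   # meta-learner
          stack_method="predict_proba")
      stack.fit(X_tr, y_tr)
      print("stacked classifier accuracy:", stack.score(X_te, y_te))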

  11. Facial expression recognition based on improved deep belief networks

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method of facial expression recognition based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. This method uses LBP to extract features and then uses the improved deep belief networks as the detector and classifier of the LBP features. The combination of LBP and improved deep belief networks is realized in facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate has improved significantly.

  12. Pixel Classification of SAR ice images using ANFIS-PSO Classifier

    Directory of Open Access Journals (Sweden)

    G. Vasumathi

    2016-12-01

    Full Text Available Synthetic Aperture Radar (SAR) is playing a vital role in taking extremely high resolution radar images. It is greatly used to monitor ice-covered ocean regions. Sea monitoring is important for various purposes, including global climate systems and ship navigation. Classification of the ice-infested area gives important features which will be further useful for various monitoring processes around the ice regions. The main objective of this paper is to classify the SAR ice image, which helps in identifying the regions around the ice-infested areas. In this paper three stages are considered in the classification of SAR ice images. It starts with preprocessing, in which the speckled SAR ice images are denoised using various speckle removal filters; a comparison is made of all these filters to find the best filter for speckle removal. The second stage includes segmentation, in which different regions are segmented using K-means and watershed segmentation algorithms; a comparison is made between these two algorithms to find the best one for segmenting SAR ice images. The last stage includes pixel-based classification, which identifies and classifies the segmented regions using various supervised learning classifiers. The algorithms include the Back Propagation Neural Network (BPN), Fuzzy Classifier, Adaptive Neuro Fuzzy Inference System (ANFIS) classifier and the proposed ANFIS with Particle Swarm Optimization (PSO) classifier; a comparison is made of all these classifiers to propose which classifier is best suited for classifying the SAR ice image. Various evaluation metrics are computed separately at all these three stages.

  13. Detection of Oil Chestnuts Infected by Blue Mold Using Near-Infrared Hyperspectral Imaging Combined with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Lei Feng

    2018-06-01

    Full Text Available Mildew damage is a major cause of poor chestnut quality and yield loss. In this study, a near-infrared hyperspectral imaging system covering the 874–1734 nm spectral range was applied to detect the mildew damage to chestnuts caused by blue mold. Principal component analysis (PCA) scored images were first employed to qualitatively and intuitively distinguish moldy chestnuts from healthy chestnuts. Spectral data were extracted from the hyperspectral images. A successive projections algorithm (SPA) was used to select 12 optimal wavelengths. Artificial neural networks, including the back propagation neural network (BPNN), evolutionary neural network (ENN), extreme learning machine (ELM), general regression neural network (GRNN) and radial basis neural network (RBNN), were used to build models on the full spectra and the optimal wavelengths to distinguish moldy chestnuts. BPNN and ENN models using the full spectra and the optimal wavelengths obtained satisfactory performance, with classification accuracies all surpassing 99%. The results indicate the potential for rapid and non-destructive detection of moldy chestnuts by hyperspectral imaging, which would help to develop an online detection system for healthy and blue-mold-infected chestnuts.

  14. Localizing genes to cerebellar layers by classifying ISH images.

    Directory of Open Access Journals (Sweden)

    Lior Kirsch

    Full Text Available Gene expression controls how the brain develops and functions. Understanding control processes in the brain is particularly hard since they involve numerous types of neurons and glia, and very little is known about which genes are expressed in which cells and brain layers. Here we describe an approach to detect genes whose expression is primarily localized to a specific brain layer and apply it to the mouse cerebellum. We learn typical spatial patterns of expression from a few markers that are known to be localized to specific layers, and use these patterns to predict localization for new genes. We analyze images of in-situ hybridization (ISH) experiments, which we represent using histograms of local binary patterns (LBP), and train image classifiers and gene classifiers for four layers of the cerebellum: the Purkinje, granular, molecular and white matter layer. On held-out data, the layer classifiers achieve accuracy above 94% (AUC) by representing each image at multiple scales and by combining multiple image scores into a single gene-level decision. When applied to the full mouse genome, the classifiers predict specific layer localization for hundreds of new genes in the Purkinje and granular layers. Many genes localized to the Purkinje layer are likely to be expressed in astrocytes, and many others are involved in lipid metabolism, possibly due to the unusual size of Purkinje cells.
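
    The LBP-histogram representation of ISH images can be sketched as below; the random placeholder images, the LBP parameters and the logistic-regression layer classifier are assumptions, not the study's exact multi-scale pipeline.

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.linear_model import LogisticRegression

      P, R = 8, 1    # 8 neighbours at radius 1; "uniform" coding gives P + 2 pattern bins

      def lbp_histogram(image):
          """Histogram of uniform local binary patterns for one grayscale image."""
          codes = local_binary_pattern(image, P, R, method="uniform")
          hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
          return hist

      # Placeholder "ISH images": random textures standing in for two layers
      rng = np.random.default_rng(0)
      images = rng.random((40, 64, 64))
      labels = np.array([0] * 20 + [1] * 20)            # e.g. Purkinje vs. granular layer
      images[labels == 1] = images[labels == 1] ** 2    # give class 1 a different texture

      X = np.array([lbp_histogram(im) for im in images])
      clf = LogisticRegression(max_iter=1000).fit(X, labels)
      print("training accuracy on LBP histograms:", clf.score(X, labels))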

  15. An ensemble self-training protein interaction article classifier.

    Science.gov (United States)

    Chen, Yifei; Hou, Ping; Manderick, Bernard

    2014-01-01

    Protein-protein interaction (PPI) is essential to understand the fundamental processes governing cell biology. The mining and curation of PPI knowledge are critical for analyzing proteomics data. Hence it is desired to classify articles PPI-related or not automatically. In order to build interaction article classification systems, an annotated corpus is needed. However, it is usually the case that only a small number of labeled articles can be obtained manually. Meanwhile, a large number of unlabeled articles are available. By combining ensemble learning and semi-supervised self-training, an ensemble self-training interaction classifier called EST_IACer is designed to classify PPI-related articles based on a small number of labeled articles and a large number of unlabeled articles. A biological background based feature weighting strategy is extended using the category information from both labeled and unlabeled data. Moreover, a heuristic constraint is put forward to select optimal instances from unlabeled data to improve the performance further. Experiment results show that the EST_IACer can classify the PPI related articles effectively and efficiently.

  16. Deep learning classifier with optical coherence tomography images for early dental caries detection

    Science.gov (United States)

    Karimian, Nima; Salehi, Hassan S.; Mahdian, Mina; Alnajjar, Hisham; Tadinada, Aditya

    2018-02-01

    Dental caries is a microbial disease that results in localized dissolution of the mineral content of dental tissue. Despite considerable decline in the incidence of dental caries, it remains a major health problem in many societies. Early detection of incipient lesions at initial stages of demineralization can result in the implementation of non-surgical preventive approaches to reverse the demineralization process. In this paper, we present a novel approach combining deep convolutional neural networks (CNN) and optical coherence tomography (OCT) imaging modality for classification of human oral tissues to detect early dental caries. OCT images of oral tissues with various densities were input to a CNN classifier to determine variations in tissue densities resembling the demineralization process. The CNN automatically learns a hierarchy of increasingly complex features and a related classifier directly from training data sets. The initial CNN layer parameters were randomly selected. The training set is split into minibatches, with 10 OCT images per batch. Given a batch of training patches, the CNN employs two convolutional and pooling layers to extract features and then classify each patch based on the probabilities from the SoftMax classification layer (output-layer). Afterward, the CNN calculates the error between the classification result and the reference label, and then utilizes the backpropagation process to fine-tune all the layer parameters to minimize this error using batch gradient descent algorithm. We validated our proposed technique on ex-vivo OCT images of human oral tissues (enamel, cortical-bone, trabecular-bone, muscular-tissue, and fatty-tissue), which attested to effectiveness of our proposed method.
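
    A minimal Keras sketch of the described architecture follows (two convolution-plus-pooling layers, a SoftMax output, mini-batches of 10 and stochastic gradient descent); the input patch size, filter counts, learning rate and random placeholder data are assumptions, as the paper's exact hyperparameters are not given here.

      import numpy as np
      from tensorflow import keras
      from tensorflow.keras import layers

      n_classes = 5    # enamel, cortical bone, trabecular bone, muscular tissue, fatty tissue

      model = keras.Sequential([
          layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),  # conv layer 1
          layers.MaxPooling2D(2),                                            # pooling layer 1
          layers.Conv2D(32, 3, activation="relu"),                           # conv layer 2
          layers.MaxPooling2D(2),                                            # pooling layer 2
          layers.Flatten(),
          layers.Dense(n_classes, activation="softmax"),       # SoftMax classification layer
      ])
      model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
                    loss="sparse_categorical_crossentropy", metrics=["accuracy"])

      # Random placeholder patches standing in for labelled OCT training data
      rng = np.random.default_rng(0)
      X = rng.random((200, 64, 64, 1)).astype("float32")
      y = rng.integers(0, n_classes, size=200)

      # Backpropagation with mini-batches of 10 OCT patches, as described
      model.fit(X, y, batch_size=10, epochs=2, verbose=0)
      print("training accuracy:", model.evaluate(X, y, verbose=0)[1])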

  17. Fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave and free-space-optics architecture with an adaptive diversity combining technique.

    Science.gov (United States)

    Zhang, Junwen; Wang, Jing; Xu, Yuming; Xu, Mu; Lu, Feng; Cheng, Lin; Yu, Jianjun; Chang, Gee-Kung

    2016-05-01

    We propose and experimentally demonstrate a novel fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave (MMW) and free-space-optics (FSO) architecture using an adaptive combining technique. Both 60 GHz MMW and FSO links are demonstrated and fully integrated with optical fibers in a scalable and cost-effective backhaul system setup. Joint signal processing with an adaptive diversity combining technique (ADCT) is utilized at the receiver side based on a maximum ratio combining algorithm. Mobile backhaul transportation of 4-Gb/s 16-quadrature-amplitude-modulation orthogonal frequency-division multiplexing (QAM-OFDM) data is experimentally demonstrated and tested under various weather conditions synthesized in the lab. Performance improvements in terms of reduced error vector magnitude (EVM) and enhanced link reliability are validated under fog, rain, and turbulence conditions.
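
    The diversity combining step can be illustrated with a pure-NumPy maximum-ratio-combining sketch over two noisy branches (e.g. one MMW and one FSO path). The channel gains, noise levels and toy QPSK symbols are assumptions for illustration, not the experimental 16-QAM-OFDM setup or the adaptive weight-tracking of the ADCT.

      import numpy as np

      rng = np.random.default_rng(0)
      n_sym = 10_000

      # Toy QPSK symbols standing in for the transmitted payload
      bits = rng.integers(0, 2, size=(n_sym, 2))
      symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

      # Two diversity branches (e.g. MMW and FSO) with different gains and noise powers
      h = np.array([0.9 * np.exp(1j * 0.3), 0.5 * np.exp(-1j * 1.1)])   # channel gains
      sigma2 = np.array([0.05, 0.2])                                    # noise variances
      noise = rng.normal(size=(2, n_sym)) + 1j * rng.normal(size=(2, n_sym))
      received = h[:, None] * symbols + np.sqrt(sigma2 / 2)[:, None] * noise

      # Maximum ratio combining: weight each branch by conj(h) / noise power
      w = np.conj(h) / sigma2
      combined = (w[:, None] * received).sum(axis=0) / (np.abs(h) ** 2 / sigma2).sum()

      # Hard-decision demodulation and bit error rates, per branch vs. combined
      def ber(rx, ref_bits):
          est = np.stack([rx.real > 0, rx.imag > 0], axis=1).astype(int)
          return (est != ref_bits).mean()

      for i in range(2):
          print(f"branch {i} BER:", ber(received[i] / h[i], bits))
      print("MRC combined BER:", ber(combined, bits))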

  18. Effects of the distribution density of a biomass combined heat and power plant network on heat utilisation efficiency in village-town systems.

    Science.gov (United States)

    Zhang, Yifei; Kang, Jian

    2017-11-01

    The building of biomass combined heat and power (CHP) plants is an effective means of developing biomass energy because they can satisfy demands for winter heating and electricity consumption. The purpose of this study was to analyse the effect of the distribution density of a biomass CHP plant network on heat utilisation efficiency in a village-town system. The distribution density is determined based on the heat transmission threshold, and the heat utilisation efficiency is determined based on the heat demand distribution, heat output efficiency, and heat transmission loss. The objective of this study was to ascertain the optimal value for the heat transmission threshold using a multi-scheme comparison based on an analysis of these factors. To this end, a model of a biomass CHP plant network was built using geographic information system tools to simulate and generate three planning schemes with different heat transmission thresholds (6, 8, and 10 km) according to the heat demand distribution. The heat utilisation efficiencies of these planning schemes were then compared by calculating the gross power, heat output efficiency, and heat transmission loss of the biomass CHP plant for each scenario. This multi-scheme comparison yielded the following results: when the heat transmission threshold was low, the distribution density of the biomass CHP plant network was high and the biomass CHP plants tended to be relatively small. In contrast, when the heat transmission threshold was high, the distribution density of the network was low and the biomass CHP plants tended to be relatively large. When the heat transmission threshold was 8 km, the distribution density of the biomass CHP plant network was optimised for efficient heat utilisation. To promote the development of renewable energy sources, a planning scheme for a biomass CHP plant network that maximises heat utilisation efficiency can be obtained using the optimal heat transmission threshold and the nonlinearity

  19. Resilience to climate change in a cross-scale tourism governance context: a combined quantitative-qualitative network analysis

    Directory of Open Access Journals (Sweden)

    Tobias Luthe

    2016-03-01

    Full Text Available Social systems in mountain regions are exposed to a number of disturbances, such as climate change. Calls for conceptual and practical approaches on how to address climate change have been taken up in the literature. The resilience concept as a comprehensive theory-driven approach to address climate change has only recently increased in importance. Limited research has been undertaken concerning tourism and resilience from a network governance point of view. We analyze tourism supply chain networks with regard to resilience to climate change at the municipal governance scale of three Alpine villages. We compare these with a planned destination management organization (DMO as a governance entity of the same three municipalities on the regional scale. Network measures are analyzed via a quantitative social network analysis (SNA focusing on resilience from a tourism governance point of view. Results indicate higher resilience of the regional DMO because of a more flexible and diverse governance structure, more centralized steering of fast collective action, and improved innovative capacity, because of higher modularity and better core-periphery integration. Interpretations of quantitative results have been qualitatively validated by interviews and a workshop. We conclude that adaptation of tourism-dependent municipalities to gradual climate change should be dealt with at a regional governance scale and adaptation to sudden changes at a municipal scale. Overall, DMO building at a regional scale may enhance the resilience of tourism destinations, if the municipalities are well integrated.

  20. Integrative analysis of kinase networks in TRAIL-induced apoptosis provides a source of potential targets for combination therapy

    DEFF Research Database (Denmark)

    So, Jonathan; Pasculescu, Adrian; Dai, Anna Y.

    2015-01-01

    phosphoproteomics. With these protein interaction maps, we modeled information flow through the networks and identified apoptosis-modifying kinases that are highly connected to regulated substrates downstream of TRAIL. The results of this analysis provide a resource of potential targets for the development of TRAIL...

  1. Self-organizing map classifier for stressed speech recognition

    Science.gov (United States)

    Partila, Pavol; Tovarek, Jaromir; Voznak, Miroslav

    2016-05-01

    This paper presents a method for detecting speech under stress using Self-Organizing Maps. Most people who are exposed to stressful situations cannot respond adequately to stimuli. The army, police, and fire departments make up the largest part of the workforce that typically faces an increased number of stressful situations. The role of personnel in action is coordinated by a control center, and control commands should be adapted to the psychological state of the person in action. It is known that psychological changes in the human body are also reflected physiologically, which consequently means that stress affects speech. Therefore, it is clear that a system for recognizing stressed speech is required in the security forces. One possible classifier, popular for its flexibility, is the self-organizing map, a type of artificial neural network. Flexibility here means the classifier's independence from the character of the input data, a feature that is suitable for speech processing. Human stress can be seen as a kind of emotional state. Mel-frequency cepstral coefficients, LPC coefficients, and prosody features were selected as input data; these coefficients were selected for their sensitivity to emotional changes. The calculation of the parameters was performed on speech recordings, which can be divided into two classes, namely stress-state recordings and normal-state recordings. The benefit of the experiment is a method using a SOM classifier for stressed speech detection. Results showed the advantage of this method, which is its flexibility with respect to the input data.

  2. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    Full Text Available The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although a k-dependence Bayesian (KDB) classifier can be constructed at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes in interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization in order to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  3. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS were developed and tested on a set of benchmark problems. As a domain-specific case study, CGS was used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
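
    A rough sketch of the classifier-guided sampling idea: candidate discrete designs are screened by a Bayesian classifier (here a naive Bayes stand-in for the report's Bayesian network classifier) trained on previously evaluated designs, and only candidates predicted to be promising are passed to the expensive objective. The toy objective, binary encoding, mutation scheme and thresholds are all assumptions, not the LDRD implementation.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(0)
      n_vars = 12

      def expensive_objective(design):
          """Toy stand-in for a costly simulation; lower is better."""
          return np.sum((design - (np.arange(n_vars) % 2)) ** 2)

      # Seed the archive with random discrete designs and their objective values
      designs = rng.integers(0, 2, size=(40, n_vars))
      values = np.array([expensive_objective(d) for d in designs])
      evaluations = len(designs)

      for generation in range(10):
          # Generate candidates by mutating the best designs found so far
          parents = designs[values.argsort()[:10]]
          candidates = parents[rng.integers(0, len(parents), size=50)].copy()
          flips = rng.random(candidates.shape) < 0.1
          candidates[flips] = 1 - candidates[flips]

          # Label archived designs: "promising" = better than the median so far,
          # then filter the candidates with the classifier before evaluation
          labels = (values < np.median(values)).astype(int)
          if len(np.unique(labels)) < 2:
              promising = candidates
          else:
              clf = GaussianNB().fit(designs, labels)
              promising = candidates[clf.predict_proba(candidates)[:, 1] > 0.5]

          # Evaluate only the promising candidates with the expensive objective
          new_vals = np.array([expensive_objective(d) for d in promising])
          evaluations += len(promising)
          designs = np.vstack([designs, promising])
          values = np.concatenate([values, new_vals])

      print("best objective:", values.min(), "after", evaluations, "evaluations")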

  4. Reinforcement Learning Based Artificial Immune Classifier

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

    Full Text Available Artificial immune systems are among the widely used methods for classification, which is a decision-making process. Artificial immune systems, based on the natural immune system, can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. This approach uses reinforcement learning to find better antibodies with immune operators. The proposed approach has several advantages over other methods in the literature, such as effectiveness, fewer memory cells, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and on an FPGA. Some benchmark data and remote image data are used for the experimental results. Comparative results with supervised/unsupervised artificial immune systems, the negative selection classifier, and the resource-limited artificial immune classifier are given to demonstrate the effectiveness of the proposed new method.

  5. Combined effect of CVR and penetration of DG in the voltage profile and losses of lowvoltage secondary distribution networks

    Science.gov (United States)

    Bokhari, Abdullah

    Demarcations between traditional distribution power systems and distributed generation (DG) architectures are increasingly evolving as higher DG penetration is introduced into the system. The challenge of existing electric power systems (EPSs) accommodating less restrictive interconnection policies while maintaining reliability and performance of power delivery has been the major barrier to DG growth. In this dissertation, the work aims to study power quality, energy savings and losses in a low-voltage distribution network under various DG penetration cases. A simulation platform suite that includes electric power system, distributed generation and ZIP load models is implemented to determine the impact of DGs on power system steady-state performance and the voltage profile of the customers/loads in the network under voltage reduction events. The investigation is designed to test the DG impact on the power system starting with one type of DG, then moving on to multiple DG types distributed in a random case and a realistic/balanced case. The functionality of the proposed DG interconnection is designed to meet the basic requirements imposed by the various interconnection standards, most notably IEEE 1547, public service commission, and local utility regulations. It is found that the implementation of DGs on the low-voltage secondary network would improve customers' voltage profiles and system losses, and would provide significant energy savings and economic benefits for utilities. In a network populated with DGs, the utility would have a uniform voltage profile at the customers' end, as the voltage profile becomes more concentrated around the targeted voltage level. The study further reinforced the concept that the behavior of DG in the distribution network would improve voltage regulation, as a certain percentage reduction on the utility side would ensure a uniform percentage reduction seen by all customers and reduce the number of voltage violations.

  6. Classifying sows' activity types from acceleration patterns

    DEFF Research Database (Denmark)

    Cornou, Cecile; Lundbye-Christensen, Søren

    2008-01-01

    An automated method of classifying sow activity using acceleration measurements would allow the individual sow's behavior to be monitored throughout the reproductive cycle; applications for detecting behaviors characteristic of estrus and farrowing or to monitor illness and welfare can be foreseen....... This article suggests a method of classifying five types of activity exhibited by group-housed sows. The method involves the measurement of acceleration in three dimensions. The five activities are: feeding, walking, rooting, lying laterally and lying sternally. Four time series of acceleration (the three...

  7. Data characteristics that determine classifier performance

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2006-11-01

    Full Text Available ...available at [11]. The kNN uses a LinearNN nearest-neighbour search algorithm with a Euclidean distance metric [8]. The optimal k value is determined by performing 10-fold cross-validation; an optimal k value between 1 and 10 is used for Experiments 1... 10-fold cross-validation is used to evaluate and compare the performance of the classifiers on the different data sets... Artificial data generation: multivariate Gaussian distributions are used to generate the artificial data sets. We use d...
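
    The kNN setup described (Euclidean distance, optimal k chosen from 1 to 10 by 10-fold cross-validation) can be reproduced in a few lines; the two-Gaussian artificial data is only a stand-in for the paper's generated data sets.

      import numpy as np
      from sklearn.model_selection import GridSearchCV
      from sklearn.neighbors import KNeighborsClassifier

      # Artificial data from two multivariate Gaussian distributions
      rng = np.random.default_rng(0)
      X = np.vstack([rng.multivariate_normal([0, 0, 0], np.eye(3), size=200),
                     rng.multivariate_normal([1.5, 1.5, 1.5], np.eye(3), size=200)])
      y = np.array([0] * 200 + [1] * 200)

      # Euclidean-distance kNN; pick k in 1..10 by 10-fold cross-validation
      search = GridSearchCV(KNeighborsClassifier(metric="euclidean"),
                            param_grid={"n_neighbors": range(1, 11)},
                            cv=10)
      search.fit(X, y)
      print("optimal k:", search.best_params_["n_neighbors"])
      print("10-fold CV accuracy:", search.best_score_)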

  8. A Customizable Text Classifier for Text Mining

    Directory of Open Access Journals (Sweden)

    Yun-liang Zhang

    2007-12-01

    Full Text Available Text mining deals with complex and unstructured texts. Usually a particular collection of texts that is specified to one or more domains is necessary. We have developed a customizable text classifier for users to mine the collection automatically. It derives from the sentence category of the HNC theory and corresponding techniques. It can start with a few texts, and it can adjust automatically or be adjusted by user. The user can also control the number of domains chosen and decide the standard with which to choose the texts based on demand and abundance of materials. The performance of the classifier varies with the user's choice.

  9. A survey of decision tree classifier methodology

    Science.gov (United States)

    Safavian, S. R.; Landgrebe, David

    1991-01-01

    Decision tree classifiers (DTCs) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTCs is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods is presented for DTC designs and the various existing issues. After considering potential advantages of DTCs over single-state classifiers, subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.

  10. A Novel Approach for Multi Class Fault Diagnosis in Induction Machine Based on Statistical Time Features and Random Forest Classifier

    Science.gov (United States)

    Sonje, M. Deepak; Kundu, P.; Chowdhury, A.

    2017-08-01

    Fault diagnosis and detection is an important area in health monitoring of electrical machines. This paper proposes a recently developed machine learning classifier for multi-class fault diagnosis in induction machines. The classification is based on the random forest (RF) algorithm. Initially, stator currents are acquired from the induction machine under various conditions. After preprocessing the currents, fourteen statistical time features are estimated for each phase of the current. These parameters are considered as inputs to the classifier. The main scope of the paper is to evaluate the effectiveness of the RF classifier for individual and mixed fault diagnosis in induction machines. The stator, rotor and mixed faults (stator and rotor faults) are classified using the proposed classifier. The obtained performance measures are compared with a multilayer perceptron neural network (MLPNN) classifier. The results show much better performance measures and higher accuracy than the MLPNN classifier. To demonstrate the planned fault diagnosis algorithm, experimentally obtained results are considered, making the classifier more practical.
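
    A compact sketch of the feature pipeline: compute statistical time features per phase of a current signal and feed them to a random forest. The handful of example features, the synthetic three-phase signals and the fault labels are assumptions; the paper uses fourteen features per phase computed from measured stator currents.

      import numpy as np
      from scipy.stats import kurtosis, skew
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      def time_features(signal):
          """A few statistical time features of one phase current (the paper uses 14)."""
          rms = np.sqrt(np.mean(signal ** 2))
          return [signal.mean(), signal.std(), rms,
                  skew(signal), kurtosis(signal), np.abs(signal).max() / rms]

      def features_three_phase(currents):
          """Concatenate features of the three phases into one sample vector."""
          return np.concatenate([time_features(phase) for phase in currents])

      # Synthetic three-phase currents for healthy / stator-fault / rotor-fault classes
      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 2000)
      X, y = [], []
      for label, (amp, noise) in enumerate([(1.0, 0.05), (1.3, 0.15), (1.0, 0.40)]):
          for _ in range(60):
              phases = np.stack([amp * np.sin(2 * np.pi * 50 * t + k * 2 * np.pi / 3)
                                 + noise * rng.normal(size=t.size) for k in range(3)])
              X.append(features_three_phase(phases))
              y.append(label)
      X, y = np.array(X), np.array(y)

      rf = RandomForestClassifier(n_estimators=200, random_state=0)
      print("cross-validated accuracy:", cross_val_score(rf, X, y, cv=5).mean())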

  11. Evaluation of LDA Ensembles Classifiers for Brain Computer Interface

    International Nuclear Information System (INIS)

    Arjona, Cristian; Pentácolo, José; Gareis, Iván; Atum, Yanina; Gentiletti, Gerardo; Acevedo, Rubén; Rufiner, Leonardo

    2011-01-01

    The Brain Computer Interface (BCI) translates brain activity into computer commands. To increase the performance of a BCI, it is necessary to improve the feature extraction and classification techniques used to decode the user's intentions. In this article, the performance of an ensemble of three linear discriminant analysis (LDA) classifiers is studied. An ensemble-based system can, in theory, achieve better classification results than its individual members, depending on the algorithm used to generate the individual classifiers and on the procedure used to combine their outputs. Classic ensemble algorithms such as bagging and boosting are discussed here. For the BCI application, it was concluded that the results obtained using ER and AUC as performance indices do not give enough information to establish which configuration is better.
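    A minimal sketch of one such configuration, a bagged ensemble of three LDA classifiers compared against a single LDA, assuming scikit-learn and synthetic two-class features standing in for BCI features (the feature set and the exact combination rule are assumptions, not details from the record):

        # Compare a single LDA with a bagged ensemble of three LDAs via 10-fold cross-validation.
        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.ensemble import BaggingClassifier
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=400, n_features=20, n_informative=5, random_state=1)

        single = LinearDiscriminantAnalysis()
        ensemble = BaggingClassifier(LinearDiscriminantAnalysis(), n_estimators=3, random_state=1)

        print("single LDA:", cross_val_score(single, X, y, cv=10).mean().round(3))
        print("3-LDA bag :", cross_val_score(ensemble, X, y, cv=10).mean().round(3))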

  12. Network Simulation solution of free convective flow from a vertical cone with combined effect of non-uniform surface heat flux and heat generation or absorption

    Science.gov (United States)

    Immanuel, Y.; Pullepu, Bapuji; Sambath, P.

    2018-04-01

    A two-dimensional mathematical model is formulated for transient, laminar, free-convective flow of an incompressible viscous fluid over a vertical cone with variable surface heat flux, combined with the effects of heat generation and absorption. Using a computational method based on a thermoelectric analogy, the Network Simulation Method (NSM), solutions of the governing nondimensional, coupled, unsteady, nonlinear partial differential conservation equations of the flow are obtained. The numerical technique is always stable and convergent, and achieves high efficiency and accuracy when implemented with the network simulator code Pspice. The velocity and temperature profiles are analyzed graphically for various parameters, namely the Prandtl number Pr, the heat-flux power-law exponent n and the heat generation/absorption parameter Δ.

  13. Cortical sensorimotor alterations classify clinical phenotype and putative genotype of spasmodic dysphonia

    Science.gov (United States)

    Battistella, Giovanni; Fuertinger, Stefan; Fleysher, Lazar; Ozelius, Laurie J.; Simonyan, Kristina

    2017-01-01

    Background: Spasmodic dysphonia (SD), or laryngeal dystonia, is a task-specific isolated focal dystonia of unknown causes and pathophysiology. Although functional and structural abnormalities have been described in this disorder, the influence of its different clinical phenotypes and genotypes remains scant, making it difficult to explain SD pathophysiology and to identify potential biomarkers. Methods: We used a combination of independent component analysis and linear discriminant analysis of resting-state functional MRI data to investigate brain organization in different SD phenotypes (abductor vs. adductor type) and putative genotypes (familial vs. sporadic cases) and to characterize neural markers for genotype/phenotype categorization. Results: We found abnormal functional connectivity within sensorimotor and frontoparietal networks in SD patients compared to healthy individuals as well as phenotype- and genotype-distinct alterations of these networks, involving primary somatosensory, premotor and parietal cortices. The linear discriminant analysis achieved 71% accuracy classifying SD and healthy individuals using connectivity measures in the left inferior parietal and sensorimotor cortex. When categorizing between different forms of SD, the combination of measures from left inferior parietal, premotor and right sensorimotor cortices achieved 81% discriminatory power between familial and sporadic SD cases, whereas the combination of measures from the right superior parietal, primary somatosensory and premotor cortices led to 71% accuracy in the classification of adductor and abductor SD forms. Conclusions: Our findings present the first effort to identify and categorize isolated focal dystonia based on its brain functional connectivity profile, which may have a potential impact on the future development of biomarkers for this rare disorder. PMID:27346568
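    A minimal sketch of the classification component only, LDA with cross-validated accuracy on a handful of connectivity measures, using synthetic placeholder values rather than the study's resting-state fMRI data (group sizes and feature count are assumptions):

        # Classify two groups from a few connectivity measures with LDA and report
        # leave-one-out accuracy, mirroring the discriminant-analysis step described above.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        n_per_group, n_features = 20, 3           # e.g., connectivity in three cortical regions
        patients = rng.normal(0.4, 1.0, (n_per_group, n_features))
        controls = rng.normal(-0.4, 1.0, (n_per_group, n_features))
        X = np.vstack([patients, controls])
        y = np.array([1] * n_per_group + [0] * n_per_group)

        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut()).mean()
        print(f"leave-one-out accuracy: {acc:.0%}")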

  14. Cortical sensorimotor alterations classify clinical phenotype and putative genotype of spasmodic dysphonia.

    Science.gov (United States)

    Battistella, G; Fuertinger, S; Fleysher, L; Ozelius, L J; Simonyan, K

    2016-10-01

    Spasmodic dysphonia (SD), or laryngeal dystonia, is a task-specific isolated focal dystonia of unknown causes and pathophysiology. Although functional and structural abnormalities have been described in this disorder, the influence of its different clinical phenotypes and genotypes remains scant, making it difficult to explain SD pathophysiology and to identify potential biomarkers. We used a combination of independent component analysis and linear discriminant analysis of resting-state functional magnetic resonance imaging data to investigate brain organization in different SD phenotypes (abductor versus adductor type) and putative genotypes (familial versus sporadic cases) and to characterize neural markers for genotype/phenotype categorization. We found abnormal functional connectivity within sensorimotor and frontoparietal networks in patients with SD compared with healthy individuals as well as phenotype- and genotype-distinct alterations of these networks, involving primary somatosensory, premotor and parietal cortices. The linear discriminant analysis achieved 71% accuracy classifying SD and healthy individuals using connectivity measures in the left inferior parietal and sensorimotor cortices. When categorizing between different forms of SD, the combination of measures from the left inferior parietal, premotor and right sensorimotor cortices achieved 81% discriminatory power between familial and sporadic SD cases, whereas the combination of measures from the right superior parietal, primary somatosensory and premotor cortices led to 71% accuracy in the classification of adductor and abductor SD forms. Our findings present the first effort to identify and categorize isolated focal dystonia based on its brain functional connectivity profile, which may have a potential impact on the future development of biomarkers for this rare disorder. © 2016 EAN.

  15. Generic Black-Box End-to-End Attack Against State of the Art API Call Based Malware Classifiers

    OpenAIRE

    Rosenberg, Ishai; Shabtai, Asaf; Rokach, Lior; Elovici, Yuval

    2017-01-01

    In this paper, we present a black-box attack against API call based machine learning malware classifiers, focusing on generating adversarial sequences combining API calls and static features (e.g., printable strings) that will be misclassified by the classifier without affecting the malware functionality. We show that this attack is effective against many classifiers due to the transferability principle between RNN variants, feed forward DNNs, and traditional machine learning classifiers such...
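    The record's attack generates adversarial API-call sequences against a surrogate model and relies on transferability; the toy sketch below only illustrates the general idea of appending functionality-preserving API calls until a classifier's decision flips. The mock classifier and the call names are invented placeholders, not the paper's method:

        # Toy illustration: pad an API-call sequence with benign, functionality-preserving
        # calls until a mock classifier (fraction of "suspicious" calls) stops flagging it.
        SUSPICIOUS = {"CreateRemoteThread", "WriteProcessMemory", "SetWindowsHookEx"}

        def mock_classifier(seq, threshold=0.5):
            """Flag as malicious if suspicious calls dominate the sequence."""
            return sum(c in SUSPICIOUS for c in seq) / len(seq) > threshold

        def pad_attack(seq, benign_call="GetTickCount", max_inserts=100):
            """Greedily append a no-op call until the mock classifier's decision flips."""
            adv = list(seq)
            for _ in range(max_inserts):
                if not mock_classifier(adv):
                    break
                adv.append(benign_call)
            return adv

        original = ["OpenProcess", "WriteProcessMemory", "CreateRemoteThread"]
        adversarial = pad_attack(original)
        print(mock_classifier(original), "->", mock_classifier(adversarial), len(adversarial))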

  16. High-throughput profiling of signaling networks identifies mechanism-based combination therapy to eliminate microenvironmental resistance in acute myeloid leukemia.

    Science.gov (United States)

    Zeng, Zhihong; Liu, Wenbin; Tsao, Twee; Qiu, YiHua; Zhao, Yang; Samudio, Ismael; Sarbassov, Dos D; Kornblau, Steven M; Baggerly, Keith A; Kantarjian, Hagop M; Konopleva, Marina; Andreeff, Michael

    2017-09-01

    The bone marrow microenvironment is known to provide a survival advantage to residual acute myeloid leukemia cells, possibly contributing to disease recurrence. The mechanisms by which stroma in the microenvironment regulates leukemia survival remain largely unknown. Using reverse-phase protein array technology, we profiled 53 key protein molecules in 11 signaling pathways in 20 primary acute myeloid leukemia samples and two cell lines, aiming to understand stroma-mediated signaling modulation in response to the targeted agents temsirolimus (MTOR), ABT737 (BCL2/BCL-XL), and Nutlin-3a (MDM2), and to identify the effective combination therapy targeting acute myeloid leukemia in the context of the leukemia microenvironment. Stroma reprogrammed signaling networks and modified the sensitivity of acute myeloid leukemia samples to all three targeted inhibitors. Stroma activated AKT at Ser473 in the majority of samples treated with single-agent ABT737 or Nutlin-3a. This survival mechanism was partially abrogated by concomitant treatment with temsirolimus plus ABT737 or Nutlin-3a. Mapping the signaling networks revealed that combinations of two inhibitors increased the number of affected proteins in the targeted pathways and in multiple parallel signaling, translating into facilitated cell death. These results demonstrated that a mechanism-based selection of combined inhibitors can be used to guide clinical drug selection and tailor treatment regimens to eliminate microenvironment-mediated resistance in acute myeloid leukemia. Copyright© 2017 Ferrata Storti Foundation.

  17. Determination of Elastic and Dissipative Properties of Material Using Combination of FEM and Complex Artificial Neural Networks

    Science.gov (United States)

    Soloviev, A. N.; Giang, N. D. T.; Chang, S.-H.

    This paper describes the application of complex artificial neural networks (CANN) to the inverse problem of identifying the elastic and dissipative properties of solids. The components of the displacement vector, measured on the boundary of a body performing harmonic oscillations at its first resonant frequency, serve as additional information for the inverse problem. In this paper the displacement measurement process is simulated by finite element (FE) calculations in the ANSYS software. In the numerical example shown, we focus on the accuracy of identification of the elastic modulus and quality factor of the material depending on the number and location of the measurement points, as well as on the neural network architecture and the duration of the training process, which is carried out using the RProp and QuickProp algorithms.
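    A sketch of the identification step only: a neural network maps boundary-displacement amplitudes to material parameters. Here a made-up analytic forward model stands in for the ANSYS FE simulation, and a plain real-valued MLP replaces the complex-valued network used in the paper; parameter names and ranges are assumptions:

        # Train a regressor from simulated boundary displacements to (E, Q) and
        # use it to identify the parameters of a new "measured" displacement vector.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def forward_model(E, Q, n_points=8):
            """Fake displacement amplitudes at n_points boundary points for parameters (E, Q)."""
            x = np.linspace(0.1, 1.0, n_points)
            return np.sin(5.0 * x / np.sqrt(E)) * np.exp(-x / Q) + 0.01 * rng.standard_normal(n_points)

        # Training set: sample (E, Q) pairs and compute the corresponding displacement vectors.
        params = rng.uniform([1.0, 5.0], [4.0, 50.0], size=(2000, 2))
        displacements = np.array([forward_model(E, Q) for E, Q in params])

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        net.fit(displacements, params)   # for brevity, no feature scaling in this sketch

        E_true, Q_true = 2.5, 20.0
        print("identified (E, Q):", net.predict(forward_model(E_true, Q_true).reshape(1, -1)).round(2))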

  18. Moves on the Street: Classifying Crime Hotspots Using Aggregated Anonymized Data on People Dynamics.

    Science.gov (United States)

    Bogomolov, Andrey; Lepri, Bruno; Staiano, Jacopo; Letouzé, Emmanuel; Oliver, Nuria; Pianesi, Fabio; Pentland, Alex

    2015-09-01

    The wealth of information provided by real-time streams of data has paved the way for life-changing technological advancements, improving the quality of life of people in many ways, from facilitating knowledge exchange to self-understanding and self-monitoring. Moreover, the analysis of anonymized and aggregated large-scale human behavioral data offers new possibilities to understand global patterns of human behavior and helps decision makers tackle problems of societal importance. In this article, we highlight the potential societal benefits derived from big data applications with a focus on citizen safety and crime prevention. First, we introduce the emergent new research area of big data for social good. Next, we detail a case study tackling the problem of crime hotspot classification, that is, the classification of which areas in a city are more likely to witness crimes based on past data. In the proposed approach we use demographic information along with human mobility characteristics as derived from anonymized and aggregated mobile network data. The hypothesis that aggregated human behavioral data captured from the mobile network infrastructure, in combination with basic demographic information, can be used to predict crime is supported by our findings. Our models, built on and evaluated against real crime data from London, obtain accuracy of almost 70% when classifying whether a specific area in the city will be a crime hotspot or not in the following month.
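    A sketch of the hotspot-classification setup described above: demographic and mobility-derived features are combined into one table per area and fed to a binary classifier. The feature names and synthetic values below are invented placeholders for the anonymized, aggregated data used in the study:

        # Combine demographic and mobility features per area and classify hotspot vs. not.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_areas = 1000
        features = np.column_stack([
            rng.poisson(5000, n_areas),           # residential population (demographics)
            rng.uniform(0, 1, n_areas),           # daytime/nighttime presence ratio (mobility)
            rng.exponential(1.0, n_areas),        # footfall volatility (mobility)
            rng.uniform(20, 60, n_areas),         # median age (demographics)
        ])
        # Toy label: busy, volatile areas tend to be hotspots; labels are partly noisy.
        hotspot = ((features[:, 1] > 0.6) & (features[:, 2] > 1.0)).astype(int)
        hotspot = np.where(rng.random(n_areas) < 0.15, 1 - hotspot, hotspot)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print("10-fold CV accuracy:", cross_val_score(clf, features, hotspot, cv=10).mean().round(3))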

  19. A stereo-compound hybrid microscope for combined intracellular and optical recording of invertebrate neural network activity

    OpenAIRE

    Frost, William N.; Wang, Jean; Brandon, Christopher J.

    2007-01-01

    Optical recording studies of invertebrate neural networks with voltage-sensitive dyes seldom employ conventional intracellular electrodes. This may in part be due to the traditional reliance on compound microscopes for such work. While such microscopes have high light-gathering power, they do not provide depth of field, making working with sharp electrodes difficult. Here we describe a hybrid microscope design, with switchable compound and stereo objectives, that eases the use of conventional...

  20. Combined Rate and Power Allocation with Link Scheduling in Wireless Data Packet Relay Networks with Fading Channels

    OpenAIRE

    Subhrakanti Dey; Minyi Huang

    2007-01-01

    We consider a joint rate and power control problem in a wireless data traffic relay network with fading channels. The optimization problem is formulated in terms of power and rate selection, and link transmission scheduling. The objective is to seek high aggregate utility of the relay node when taking into account buffer load management and power constraints. The optimal solution for a single transmitting source is computed by a two-layer dynamic programming algorithm which leads to optimal ...