WorldWideScience

Sample records for network classifiers combined

  1. Hybrid classifiers: methods of data, knowledge, and classifier combination

    CERN Document Server

    Wozniak, Michal

    2014-01-01

    This book delivers definite and compact knowledge of how hybridization can help improve the quality of computer classification systems. To help readers clearly grasp hybridization, the book primarily focuses on introducing the different levels of hybridization and illuminating the problems faced when dealing with such projects. The data and knowledge incorporated in hybridization are treated first, followed by the still-growing area of classifier systems known as combined classifiers. The book covers these state-of-the-art topics together with the latest research results of the author and his team from the Department of Systems and Computer Networks, Wroclaw University of Technology, including classifiers based on feature-space splitting, one-class classification, imbalanced data, and data stream classification.

  2. Combining deep residual neural network features with supervised machine learning algorithms to classify diverse food image datasets.

    Science.gov (United States)

    McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne

    2018-04-01

    Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method to maintain a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work we applied pretrained ResNet-152 and GoogleNet convolutional neural networks (CNNs), initially trained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with the MatConvNet package, to extract features from the food image datasets Food-5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from the CNNs and used to train machine learning classifiers including an artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that using ResNet-152 deep features with an SVM with RBF kernel can accurately detect food items with 99.4% accuracy on the Food-5K validation food image dataset, and 98.8% on the Food-5K evaluation dataset using ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, an ANN achieves 91.34% and 99.28% accuracy when applied to the Food-11 and RawFooT-DB food image datasets respectively, and an SVM with RBF kernel achieves 64.98% on the Food-101 image dataset. From this research it is clear that deep CNN features can be used efficiently for diverse food item image classification. The work presented in this research shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
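    The pipeline described above (pretrained CNN features fed to a conventional classifier) can be sketched as follows. This is a minimal illustration, not the authors' MatConvNet implementation: it assumes PyTorch/torchvision (>= 0.13) and scikit-learn, and the image lists and labels are hypothetical placeholders for a Food-5K-style food vs. non-food split.

```python
# Minimal sketch, assuming torchvision >= 0.13 and scikit-learn.
# train_images / test_images and their labels are hypothetical placeholders;
# they are not defined here.
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.svm import SVC

resnet = models.resnet152(weights="IMAGENET1K_V1")
resnet.eval()
# Drop the final fully-connected layer to obtain 2048-dimensional deep features.
feature_extractor = nn.Sequential(*list(resnet.children())[:-1])

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(pil_images):
    """Map a list of PIL images to an (n, 2048) NumPy array of ResNet-152 features."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    with torch.no_grad():
        feats = feature_extractor(batch).flatten(1)
    return feats.numpy()

# Hypothetical usage:
# X_train = extract_features(train_images)
# clf = SVC(kernel="rbf").fit(X_train, train_labels)
# print(clf.score(extract_features(test_images), test_labels))
```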

  3. Combining multiple classifiers for age classification

    CSIR Research Space (South Africa)

    Van Heerden, C

    2009-11-01

    Full Text Available classifier is also developed by using an SVM to predict posterior class probabilities using two different types of classifier outputs: gender classification results and regression age estimates. The authors show that for combining posterior probabilities...

  4. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    based classifier combination is the simplest method, in which the final decision is the class for which the maximum number (greater than N/2) of participating classifiers vote, where N is the number of classifiers. 3.2b Decision templates: The decision template (DT) method (Kuncheva et al 2001) first creates a DT for each class using ...
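    The majority-voting rule quoted above can be illustrated with a short sketch; the vote matrix is invented and the classifiers producing the votes are not specified here.

```python
# Sketch of the plain majority-vote rule: the predicted class is the one that
# receives the most votes from the N component classifiers.
from collections import Counter
import numpy as np

def majority_vote(predictions):
    """predictions: (n_classifiers, n_samples) array of predicted class labels."""
    predictions = np.asarray(predictions)
    return np.array([Counter(column).most_common(1)[0][0] for column in predictions.T])

votes = [[0, 1, 2, 1],   # classifier 1
         [0, 1, 1, 1],   # classifier 2
         [2, 1, 2, 0]]   # classifier 3
print(majority_vote(votes))  # -> [0 1 2 1]
```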

  5. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    2016-08-26

    Aug 26, 2016 ... Feature selection based classifier combination approach for handwritten Devanagari numeral recognition. Pratibha Singh Ajay Verma ... ensemble of classifiers. The main contribution of the proposed method is that it gives quite efficient results while utilizing only 10% of the patterns of the available dataset.

  6. Artificial neural networks for classifying olfactory signals.

    Science.gov (United States)

    Linder, R; Pöppl, S J

    2000-01-01

    For practical applications, artificial neural networks have to meet several requirements: mainly, they should learn quickly, classify accurately, and behave robustly. Programs should be user-friendly and should not require the presence of an expert for fine-tuning diverse learning parameters. The present paper demonstrates an approach using an oversized network topology, adaptive propagation (APROP), a modified error function, and averaging of the outputs of four networks, described here for the first time. As an example, signals from different semiconductor gas sensors of an electronic nose were classified. The electronic nose smelt different types of edible oil with extremely different a priori probabilities. The fully specified neural network classifier fulfilled the above-mentioned demands. The new approach will be helpful not only for classifying olfactory signals automatically but also in many other fields in medicine, e.g. in data mining from medical databases.

  7. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    3.2c Dempster-Shafer rule based classifier combination: The Dempster–Shafer (DS) method is based on evidence theory, proposed by Glenn Shafer as a way to represent cognitive knowledge. Here the probability is obtained using a belief function instead of a Bayesian distribution. Probability values are assigned to a ...
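    A minimal sketch of Dempster's rule of combination, restricted to the simple case where each classifier assigns mass to singleton classes plus the whole frame of discernment (ignorance), may make the excerpt concrete. This illustrates the generic DS rule with invented masses, not the paper's exact formulation.

```python
# Sketch of Dempster's rule for two belief assignments over singleton classes
# plus the full frame "THETA" (ignorance). Masses are invented.
def dempster_combine(m1, m2, classes):
    combined = {c: 0.0 for c in classes}
    combined["THETA"] = m1.get("THETA", 0.0) * m2.get("THETA", 0.0)
    conflict = 0.0
    for a, mass_a in m1.items():
        for b, mass_b in m2.items():
            if a == "THETA" and b == "THETA":
                continue  # already handled above
            if a == b or a == "THETA" or b == "THETA":
                target = b if a == "THETA" else a   # non-empty intersection
                combined[target] += mass_a * mass_b
            else:
                conflict += mass_a * mass_b         # empty intersection
    norm = 1.0 - conflict
    return {focal: mass / norm for focal, mass in combined.items()}

m1 = {"cat": 0.6, "dog": 0.1, "THETA": 0.3}
m2 = {"cat": 0.5, "dog": 0.3, "THETA": 0.2}
print(dempster_combine(m1, m2, ["cat", "dog"]))
```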

  8. Neural Network Classifiers for Local Wind Prediction.

    Science.gov (United States)

    Kretzschmar, Ralf; Eckert, Pierre; Cattani, Daniel; Eggimann, Fritz

    2004-05-01

    This paper evaluates the quality of neural network classifiers for wind speed and wind gust prediction with prediction lead times between +1 and +24 h. The predictions were realized based on local time series and model data. The selection of appropriate input features was initiated by time series analysis and completed by empirical comparison of neural network classifiers trained on several choices of input features. The selected input features involved day time, yearday, features from a single wind observation device at the site of interest, and features derived from model data. The quality of the resulting classifiers was benchmarked against persistence for two different sites in Switzerland. The neural network classifiers exhibited superior quality when compared with persistence judged on a specific performance measure, hit and false-alarm rates.

  9. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Vol. 10, No. 3 (2000), pp. 417-428 ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant - others:MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords: neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  10. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We...

  11. Combining classifiers for robust PICO element detection

    Directory of Open Access Journals (Sweden)

    Grad Roland

    2010-05-01

    Full Text Available Abstract Background Formulating a clinical information need in terms of the four atomic parts which are Population/Problem, Intervention, Comparison and Outcome (known as PICO elements) facilitates searching for a precise answer within a large medical citation database. However, using PICO defined items in the information retrieval process requires a search engine to be able to detect and index PICO elements in the collection in order for the system to retrieve relevant documents. Methods In this study, we tested multiple supervised classification algorithms and their combinations for detecting PICO elements within medical abstracts. Using the structural descriptors that are embedded in some medical abstracts, we have automatically gathered large training/testing data sets for each PICO element. Results Combining multiple classifiers using a weighted linear combination of their prediction scores achieves promising results with an f-measure score of 86.3% for P, 67% for I and 56.6% for O. Conclusions Our experiments on the identification of PICO elements showed that the task is very challenging. Nevertheless, the performance achieved by our identification method is competitive with previously published results and shows that this task can be achieved with high accuracy for the P element but lower accuracy for the I and O elements.
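    The weighted linear combination of prediction scores mentioned in the Results can be sketched as follows; the score matrices and weights below are invented for illustration.

```python
# Sketch of fusing per-class prediction scores with a weighted linear combination.
import numpy as np

def weighted_fusion(score_matrices, weights):
    """score_matrices: list of (n_samples, n_classes) arrays, one per classifier."""
    fused = sum(w * np.asarray(s) for w, s in zip(weights, score_matrices))
    return fused.argmax(axis=1)

scores_a = np.array([[0.8, 0.2], [0.4, 0.6]])   # e.g. scores from one classifier
scores_b = np.array([[0.6, 0.4], [0.3, 0.7]])   # e.g. scores from another
print(weighted_fusion([scores_a, scores_b], weights=[0.7, 0.3]))  # -> [0 1]
```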

  12. Ensemble of classifiers based network intrusion detection system performance bound

    CSIR Research Space (South Africa)

    Mkuzangwe, Nenekazi NP

    2017-11-01

    Full Text Available This paper provides a performance bound of a network intrusion detection system (NIDS) that uses an ensemble of classifiers. Currently researchers rely on implementing the ensemble of classifiers based NIDS before they can determine the performance...

  13. Multiple classifier fusion in probabilistic neural networks

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Kittler, J.; Pudil, Pavel; Somol, Petr

    2002-01-01

    Roč. 5, č. 7 (2002), s. 221-233 ISSN 1433-7541 R&D Projects: GA ČR GA402/01/0981 Institutional research plan: CEZ:AV0Z1075907 Keywords : EM algorithm * information preserving transform * multiple classifier fusion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.667, year: 2002

  14. Revealing effective classifiers through network comparison

    Science.gov (United States)

    Gallos, Lazaros K.; Fefferman, Nina H.

    2014-11-01

    The ability to compare complex systems can provide new insight into the fundamental nature of the processes captured, in ways that are otherwise inaccessible to observation. Here, we introduce the n-tangle method to directly compare two networks for structural similarity, based on the distribution of edge density in network subgraphs. We demonstrate that this method can efficiently introduce comparative analysis into network science and opens the road for many new applications. For example, we show how the construction of a “phylogenetic tree” across animal taxa according to their social structure can reveal commonalities in the behavioral ecology of the populations, or how students create similar networks according to university size. Our method can be expanded to study many additional properties, such as network classification, changes during time evolution, convergence of growth models, and detection of structural changes during damage.

  15. Neural network classifier of attacks in IP telephony

    Science.gov (United States)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and for hardening the network. This analysis is typically based on statistical methods, and this article brings a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms, one of which is the multilayer perceptron neural network. The article describes the inner structure of the neural network used and information about its implementation. The learning set for this neural network is based on real attack data collected from the IP telephony honeypot called Dionaea. We prepare the learning set from real attack data after collecting, cleaning and aggregating this information. After proper learning, the neural network is capable of classifying 6 types of the most commonly used VoIP attacks. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of networks that are logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precautionary steps against attacks.
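    A minimal multilayer-perceptron classifier in the spirit of the one described above can be sketched with scikit-learn; the feature vectors, feature count and attack labels below are synthetic stand-ins for the Dionaea honeypot data.

```python
# Sketch of an MLP attack classifier on synthetic data (placeholders for
# real honeypot records).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))      # hypothetical feature vectors per attack record
y = rng.integers(0, 6, size=600)    # six attack classes, as in the abstract

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy on held-out synthetic data:", clf.score(X_te, y_te))
```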

  16. One pass learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

    The generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a gradient descent based optimized smoothing parameter value to provide efficient classification. However, optimization consumes quite a long time and may be a drawback. In this work, one pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method utilizes the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the proposed method tries to handle these differences by defining two functions for smoothing parameter calculation. Thresholding is applied to determine which function will be used. One of these functions is defined for datasets having different ranges of values. It provides balanced smoothing parameters for these datasets through a logarithmic function and by changing the operation range to the lower boundary. The other function calculates the smoothing parameter value for classes having a standard deviation smaller than the threshold value. The proposed method is tested on 14 datasets and the performance of the one pass learning generalized classifier neural network is compared with that of the probabilistic neural network, radial basis function neural network, extreme learning machines, and standard and logarithmic learning generalized classifier neural networks in the MATLAB environment. One pass learning generalized classifier neural network provides more than a thousand times faster classification than the standard and logarithmic generalized classifier neural network. Due to its classification accuracy and speed, the one pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase the classification performance.

  17. Classifying emotion in Twitter using Bayesian network

    Science.gov (United States)

    Surya Asriadie, Muhammad; Syahrul Mubarok, Mohamad; Adiwijaya

    2018-03-01

    Language is used to express not only facts, but also emotions. Emotions are noticeable in behavior and even in the social media statuses written by a person. Analysis of emotions in text is done in a variety of media such as Twitter. This paper studies classification of emotions on Twitter using a Bayesian network because of its ability to model uncertainty and relationships between features. The result is two models based on Bayesian networks: the Full Bayesian Network (FBN) and the Bayesian Network with Mood Indicator (BNM). FBN is a massive Bayesian network where each word is treated as a node. The study shows that the method used to train FBN is not very effective at creating the best model and performs worse compared to Naive Bayes. The F1-score for FBN is 53.71%, while for Naive Bayes it is 54.07%. BNM is proposed as an alternative method which is based on an improvement of Multinomial Naive Bayes and has much lower computational complexity compared to FBN. Even though it is not better than FBN, the resulting model successfully improves the performance of Multinomial Naive Bayes. The F1-score for the Multinomial Naive Bayes model is 51.49%, while for BNM it is 52.14%.
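    The Multinomial Naive Bayes baseline referred to above corresponds to a standard bag-of-words text classifier; a minimal sketch with placeholder tweets and emotion labels (not the paper's dataset) is:

```python
# Minimal bag-of-words Multinomial Naive Bayes baseline on placeholder data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = ["i am so happy today", "this is terrible news", "what a wonderful day"]
labels = ["joy", "sadness", "joy"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(tweets, labels)
print(model.predict(["such a happy moment"]))
```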

  18. Classifying Radio Galaxies with the Convolutional Neural Network

    Energy Technology Data Exchange (ETDEWEB)

    Aniyan, A. K.; Thorat, K. [Department of Physics and Electronics, Rhodes University, Grahamstown (South Africa)

    2017-06-01

    We present the application of a deep machine learning technique to classify radio images of extended sources on a morphological basis using convolutional neural networks (CNN). In this study, we have taken the case of the Fanaroff–Riley (FR) class of radio galaxies as well as radio galaxies with bent-tailed morphology. We have used archival data from the Very Large Array (VLA)—Faint Images of the Radio Sky at Twenty Centimeters survey and existing visually classified samples available in the literature to train a neural network for morphological classification of these categories of radio sources. Our training sample size for each of these categories is ∼200 sources, which has been augmented by rotated versions of the same. Our study shows that CNNs can classify images of the FRI and FRII and bent-tailed radio galaxies with high accuracy (maximum precision at 95%) using well-defined samples and a “fusion classifier,” which combines the results of binary classifications, while allowing for a mechanism to find sources with unusual morphologies. The individual precision is highest for bent-tailed radio galaxies at 95% and is 91% and 75% for the FRI and FRII classes, respectively, whereas the recall is highest for FRI and FRIIs at 91% each, while the bent-tailed class has a recall of 79%. These results show that our classifications are comparable to those of manual classification, while being much faster. Finally, we discuss the computational and data-related challenges associated with the morphological classification of radio galaxies with CNNs.

  19. Classifying Radio Galaxies with the Convolutional Neural Network

    Science.gov (United States)

    Aniyan, A. K.; Thorat, K.

    2017-06-01

    We present the application of a deep machine learning technique to classify radio images of extended sources on a morphological basis using convolutional neural networks (CNN). In this study, we have taken the case of the Fanaroff-Riley (FR) class of radio galaxies as well as radio galaxies with bent-tailed morphology. We have used archival data from the Very Large Array (VLA)—Faint Images of the Radio Sky at Twenty Centimeters survey and existing visually classified samples available in the literature to train a neural network for morphological classification of these categories of radio sources. Our training sample size for each of these categories is ˜200 sources, which has been augmented by rotated versions of the same. Our study shows that CNNs can classify images of the FRI and FRII and bent-tailed radio galaxies with high accuracy (maximum precision at 95%) using well-defined samples and a “fusion classifier,” which combines the results of binary classifications, while allowing for a mechanism to find sources with unusual morphologies. The individual precision is highest for bent-tailed radio galaxies at 95% and is 91% and 75% for the FRI and FRII classes, respectively, whereas the recall is highest for FRI and FRIIs at 91% each, while the bent-tailed class has a recall of 79%. These results show that our classifications are comparable to those of manual classification, while being much faster. Finally, we discuss the computational and data-related challenges associated with the morphological classification of radio galaxies with CNNs.

  20. Classifying Radio Galaxies with the Convolutional Neural Network

    International Nuclear Information System (INIS)

    Aniyan, A. K.; Thorat, K.

    2017-01-01

    We present the application of a deep machine learning technique to classify radio images of extended sources on a morphological basis using convolutional neural networks (CNN). In this study, we have taken the case of the Fanaroff–Riley (FR) class of radio galaxies as well as radio galaxies with bent-tailed morphology. We have used archival data from the Very Large Array (VLA)—Faint Images of the Radio Sky at Twenty Centimeters survey and existing visually classified samples available in the literature to train a neural network for morphological classification of these categories of radio sources. Our training sample size for each of these categories is ∼200 sources, which has been augmented by rotated versions of the same. Our study shows that CNNs can classify images of the FRI and FRII and bent-tailed radio galaxies with high accuracy (maximum precision at 95%) using well-defined samples and a “fusion classifier,” which combines the results of binary classifications, while allowing for a mechanism to find sources with unusual morphologies. The individual precision is highest for bent-tailed radio galaxies at 95% and is 91% and 75% for the FRI and FRII classes, respectively, whereas the recall is highest for FRI and FRIIs at 91% each, while the bent-tailed class has a recall of 79%. These results show that our classifications are comparable to those of manual classification, while being much faster. Finally, we discuss the computational and data-related challenges associated with the morphological classification of radio galaxies with CNNs.

  1. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    Directory of Open Access Journals (Sweden)

    Shehzad Khalid

    2014-01-01

    Full Text Available We have presented a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as Adaboost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method as compared to its competitors, especially in the presence of class label noise and imbalanced classes.

  2. Principal component analysis coupled with artificial neural networks--a combined technique classifying small molecular structures using a concatenated spectral database.

    Science.gov (United States)

    Gosav, Steluţa; Praisler, Mirela; Birsa, Mihail Lucian

    2011-01-01

    In this paper we present several expert systems that predict the class identity of the modeled compounds, based on a preprocessed spectral database. The expert systems were built using Artificial Neural Networks (ANN) and are designed to predict if an unknown compound has the toxicological activity of amphetamines (stimulant and hallucinogen), or whether it is a nonamphetamine. In attempts to circumvent the laws controlling drugs of abuse, new chemical structures are very frequently introduced on the black market. They are obtained by slightly modifying the controlled molecular structures by adding or changing substituents at various positions on the banned molecules. As a result, no substance similar to those forming a prohibited class may be used nowadays, even if it has not been specifically listed. Therefore, reliable, fast and accessible systems capable of modeling and then identifying similarities at molecular level, are highly needed for epidemiological, clinical, and forensic purposes. In order to obtain the expert systems, we have preprocessed a concatenated spectral database, representing the GC-FTIR (gas chromatography-Fourier transform infrared spectrometry) and GC-MS (gas chromatography-mass spectrometry) spectra of 103 forensic compounds. The database was used as input for a Principal Component Analysis (PCA). The scores of the forensic compounds on the main principal components (PCs) were then used as inputs for the ANN systems. We have built eight PC-ANN systems (principal component analysis coupled with artificial neural network) with a different number of input variables: 15 PCs, 16 PCs, 17 PCs, 18 PCs, 19 PCs, 20 PCs, 21 PCs and 22 PCs. The best expert system was found to be the ANN network built with 18 PCs, which accounts for an explained variance of 77%. This expert system has the best sensitivity (a rate of classification C = 100% and a rate of true positives TP = 100%), as well as a good selectivity (a rate of true negatives TN = 92.77%). A
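    The PC-ANN construction (principal component scores feeding a neural-network classifier) can be sketched with scikit-learn. The 18 components follow the abstract; the spectra and class labels below are synthetic placeholders, not the concatenated GC-FTIR/GC-MS database.

```python
# Sketch of a PCA -> neural network pipeline on synthetic spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
spectra = rng.normal(size=(103, 500))    # 103 compounds x concatenated spectral bins
classes = rng.integers(0, 3, size=103)   # e.g. stimulant / hallucinogen / non-amphetamine

pc_ann = make_pipeline(PCA(n_components=18), MLPClassifier(max_iter=1000, random_state=1))
pc_ann.fit(spectra, classes)
print(pc_ann.predict(spectra[:5]))
```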

  3. A convolutional neural network neutrino event classifier

    International Nuclear Information System (INIS)

    Aurisano, A.; Sousa, A.; Radovic, A.; Vahle, P.; Rocco, D.; Pawloski, G.; Himmel, A.; Niner, E.; Messier, M.D.; Psihas, F.

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  4. Multiple-instance learning as a classifier combining problem

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M. J.; Duin, Robert P. W.

    2013-01-01

    In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the assumption that instances are drawn from a mixture distribution of the concept and the non-concept, which leads to a convenient way to solve MIL as a classifier combining problem. It is shown that instances can be classified with any standard supervised classifier by re-weighting the classification...

  5. A network-based approach to classify the three domains of life

    Directory of Open Access Journals (Sweden)

    Mueller Laurin AJ

    2011-10-01

    Full Text Available Abstract Background Identifying group-specific characteristics in metabolic networks can provide better insight into evolutionary developments. Here, we present an approach to classify the three domains of life using topological information about the underlying metabolic networks. These networks have been shown to share domain-independent structural similarities, which pose a special challenge for our endeavour. We quantify specific structural information by using topological network descriptors to classify this set of metabolic networks. Such measures quantify the structural complexity of the underlying networks. In this study, we use such measures to capture domain-specific structural features of the metabolic networks to classify the data set. So far, it has been a challenging undertaking to examine what kind of structural complexity such measures do detect. In this paper, we apply two groups of topological network descriptors to metabolic networks and evaluate their classification performance. Moreover, we combine the two groups to perform a feature selection to estimate the structural features with the highest classification ability in order to optimize the classification performance. Results By combining the two groups, we can identify seven topological network descriptors that show a group-specific characteristic by ANOVA. A multivariate analysis using feature selection and supervised machine learning leads to a reasonable classification performance with a weighted F-score of 83.7% and an accuracy of 83.9%. We further demonstrate that our approach outperforms alternative methods. Also, our results reveal that entropy-based descriptors show the highest classification ability for this set of networks. Conclusions Our results show that these particular topological network descriptors are able to capture domain-specific structural characteristics for classifying metabolic networks between the three domains of life.

  6. Reconstructing Curvilinear Networks Using Path Classifiers and Integer Programming.

    Science.gov (United States)

    Turetken, Engin; Benmansour, Fethallah; Andres, Bjoern; Glowacki, Przemyslaw; Pfister, Hanspeter; Fua, Pascal

    2016-12-01

    We propose a novel approach to automated delineation of curvilinear structures that form complex and potentially loopy networks. By representing the image data as a graph of potential paths, we first show how to weight these paths using discriminatively-trained classifiers that are both robust and generic enough to be applied to very different imaging modalities. We then present an Integer Programming approach to finding the optimal subset of paths, subject to structural and topological constraints that eliminate implausible solutions. Unlike earlier approaches that assume a tree topology for the networks, ours explicitly models the fact that the networks may contain loops, and can reconstruct both cyclic and acyclic ones. We demonstrate the effectiveness of our approach on a variety of challenging datasets including aerial images of road networks and micrographs of neural arbors, and show that it outperforms state-of-the-art techniques.

  7. Transforming Musical Signals through a Genre Classifying Convolutional Neural Network

    Science.gov (United States)

    Geng, S.; Ren, G.; Ogihara, M.

    2017-05-01

    Convolutional neural networks (CNNs) have been successfully applied on both discriminative and generative modeling for music-related tasks. For a particular task, the trained CNN contains information representing the decision making or the abstracting process. One can hope to manipulate existing music based on this 'informed' network and create music with new features corresponding to the knowledge obtained by the network. In this paper, we propose a method to utilize the stored information from a CNN trained on musical genre classification task. The network was composed of three convolutional layers, and was trained to classify five-second song clips into five different genres. After training, randomly selected clips were modified by maximizing the sum of outputs from the network layers. In addition to the potential of such CNNs to produce interesting audio transformation, more information about the network and the original music could be obtained from the analysis of the generated features since these features indicate how the network 'understands' the music.

  8. A critical evaluation of network and pathway based classifiers for outcome prediction in breast cancer

    NARCIS (Netherlands)

    C. Staiger (Christine); S. Cadot; R Kooter; M. Dittrich (Marcus); T. Müller (Tobias); G.W. Klau (Gunnar); L.F.A. Wessels (Lodewyk)

    2011-01-01

    Recently, several classifiers that combine primary tumor data, like gene expression data, and secondary data sources, such as protein-protein interaction networks, have been proposed for predicting outcome in breast cancer. In these approaches, new composite features are typically

  9. A Critical Evaluation of Network and Pathway-Based Classifiers for Outcome Prediction in Breast Cancer

    NARCIS (Netherlands)

    C. Staiger (Christine); S. Cadot; R Kooter; M. Dittrich (Marcus); T. Müller (Tobias); G.W. Klau (Gunnar); L.F.A. Wessels (Lodewyk)

    2012-01-01

    Recently, several classifiers that combine primary tumor data, like gene expression data, and secondary data sources, such as protein-protein interaction networks, have been proposed for predicting outcome in breast cancer. In these approaches, new composite features are typically

  10. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

    Full Text Available We introduce a novel approach to detect salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses the logistic regression approach to map the regional feature vector to a saliency score. Four saliency cues are used in our approach, including color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which is an atomic feature of a segmented region in the image. By mapping a four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the clustered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space and finally fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.

  11. Combining Biometric Fractal Pattern and Particle Swarm Optimization-Based Classifier for Fingerprint Recognition

    Directory of Open Access Journals (Sweden)

    Chia-Hung Lin

    2010-01-01

    Full Text Available This paper proposes combining the biometric fractal pattern and a particle swarm optimization (PSO)-based classifier for fingerprint recognition. Fingerprints have arch, loop, whorl, and accidental morphologies, and embed singular points, resulting in the establishment of fingerprint individuality. An automatic fingerprint identification system consists of two stages: digital image processing (DIP) and pattern recognition. DIP is used to convert to binary images, remove noise, and locate the reference point. For binary images, Katz's algorithm is employed to estimate the fractal dimension (FD) from a two-dimensional (2D) image. Biometric features are extracted as fractal patterns using different FDs. A probabilistic neural network (PNN) classifier is used to compare the fractal patterns among a small-scale database. A PSO algorithm is used to tune the optimal parameters and heighten the accuracy. For 30 subjects in the laboratory, the proposed classifier demonstrates greater efficiency and higher accuracy in fingerprint recognition.

  12. Classifying images using restricted Boltzmann machines and convolutional neural networks

    Science.gov (United States)

    Zhao, Zhijun; Xu, Tongde; Dai, Chenyu

    2017-07-01

    To improve the feature recognition ability of deep model transfer learning, we propose a hybrid deep transfer learning method for image classification based on restricted Boltzmann machines (RBM) and convolutional neural networks (CNNs). It integrates the learning abilities of the two models, conducting subject classification by extracting structural higher-order statistical features of images. When the method transfers the trained convolutional neural networks to the target datasets, fully-connected layers can be replaced by restricted Boltzmann machine layers; then the restricted Boltzmann machine layers and Softmax classifier are retrained, and a BP neural network can be used to fine-tune the hybrid model. The restricted Boltzmann machine layers not only fully integrate the whole feature maps, but also learn the statistical features of the target datasets in the sense of the largest logarithmic likelihood, thus removing the effects caused by content differences between datasets. The experimental results show that the proposed method improves the accuracy of image classification, outperforming other methods on the Pascal VOC2007 and Caltech101 datasets.

  13. Learning Bayesian network classifiers for credit scoring using Markov Chain Monte Carlo search

    NARCIS (Netherlands)

    Baesens, B.; Egmont-Petersen, M.; Castelo, R.; Vanthienen, J.

    2001-01-01

    In this paper, we will evaluate the power and usefulness of Bayesian network classifiers for credit scoring. Various types of Bayesian network classifiers will be evaluated and contrasted including unrestricted Bayesian network classifiers learnt using Markov Chain Monte Carlo (MCMC) search.

  14. Generating prior probabilities for classifiers of brain tumours using belief networks

    Directory of Open Access Journals (Sweden)

    Arvanitis Theodoros N

    2007-09-01

    Full Text Available Abstract Background Numerous methods for classifying brain tumours based on magnetic resonance spectra and imaging have been presented in the last 15 years. Generally, these methods use supervised machine learning to develop a classifier from a database of cases for which the diagnosis is already known. However, little has been published on developing classifiers based on mixed modalities, e.g. combining imaging information with spectroscopy. In this work a method of generating probabilities of tumour class from anatomical location is presented. Methods The method of "belief networks" is introduced as a means of generating probabilities that a tumour is any given type. The belief networks are constructed using a database of paediatric tumour cases consisting of data collected over five decades; the problems associated with using this data are discussed. To verify the usefulness of the networks, an application of the method is presented in which prior probabilities were generated and combined with a classification of tumours based solely on MRS data. Results Belief networks were constructed from a database of over 1300 cases. These can be used to generate a probability that a tumour is any given type. Networks are presented for astrocytoma grades I and II, astrocytoma grades III and IV, ependymoma, pineoblastoma, primitive neuroectodermal tumour (PNET), germinoma, medulloblastoma, craniopharyngioma and a group representing rare tumours, "other". Using the network to generate prior probabilities for classification improves the accuracy when compared with generating prior probabilities based on class prevalence. Conclusion Bayesian belief networks are a simple way of using discrete clinical information to generate probabilities usable in classification. The belief network method can be robust to incomplete datasets. Inclusion of a priori knowledge is an effective way of improving classification of brain tumours by non-invasive methods.
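    Combining location-derived prior probabilities with a spectroscopy-based classification amounts, in the simplest reading, to an application of Bayes' rule; a minimal sketch with invented numbers is shown below.

```python
# Sketch of combining belief-network priors with a classifier's per-class
# output via Bayes' rule; all numbers are invented for illustration.
import numpy as np

priors = np.array([0.50, 0.35, 0.15])        # P(class) from anatomical location
mrs_scores = np.array([0.20, 0.60, 0.20])    # MRS-based classifier output for one case

posterior = priors * mrs_scores
posterior /= posterior.sum()
print(posterior)   # the prior shifts the decision relative to the MRS-only scores
```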

  15. Monitoring industrial facilities using principles of integration of fiber classifier and local sensor networks

    Science.gov (United States)

    Korotaev, Valery V.; Denisov, Victor M.; Rodrigues, Joel J. P. C.; Serikova, Mariya G.; Timofeev, Andrey V.

    2015-05-01

    The paper deals with the creation of integrated monitoring systems. They combine fiber-optic classifiers and local sensor networks. These systems allow for the monitoring of complex industrial objects. Together with adjacent natural objects, they form the so-called geotechnical systems. An integrated monitoring system may include one or more spatially continuous fiber-optic classifiers based on optic fiber and one or more arrays of discrete measurement sensors, which are usually combined in sensor networks. Fiber-optic classifiers are already widely used for the control of hazardous extended objects (oil and gas pipelines, railways, high-rise buildings, etc.). To monitor local objects, discrete measurement sensors are generally used (temperature, pressure, inclinometers, strain gauges, accelerometers, sensors measuring the composition of impurities in the air, and many others). However, monitoring complex geotechnical systems requires the simultaneous use of continuous spatially distributed sensors based on fiber-optic cable and connected local discrete sensor networks. In fact, we are talking about integration of the two monitoring methods. This combination provides an additional way to create intelligent monitoring systems. Modes of operation of intelligent systems can automatically adapt to changing environmental conditions. For this purpose, context data received from one sensor (e.g., optical channel) may be used to change modes of work of other sensors within the same monitoring system. This work also presents experimental results of the prototype of the integrated monitoring system.

  16. MAMMOGRAMS ANALYSIS USING SVM CLASSIFIER IN COMBINED TRANSFORMS DOMAIN

    Directory of Open Access Journals (Sweden)

    B.N. Prathibha

    2011-02-01

    Full Text Available Breast cancer is a primary cause of mortality and morbidity in women. Reports reveal that the earlier abnormalities are detected, the better the improvement in survival. Digital mammograms are one of the most effective means for detecting possible breast anomalies at early stages. Digital mammograms supported with Computer Aided Diagnostic (CAD) systems help radiologists in taking reliable decisions. The proposed CAD system extracts wavelet features and spectral features for better classification of mammograms. A Support Vector Machines classifier is used to analyze 206 mammogram images from the Mias database pertaining to the severity of abnormality, i.e., benign and malign. The proposed system gives 93.14% accuracy for discrimination between normal-malign samples, 87.25% accuracy for normal-benign samples and 89.22% accuracy for benign-malign samples. The study reveals that features extracted in a hybrid transform domain with an SVM classifier prove to be a promising tool for analysis of mammograms.

  17. Using Bayesian networks in the construction of a bi-level multi-classifier. A case study using intensive care unit patients data.

    Science.gov (United States)

    Sierra, B; Serrano, N; Larrañaga, P; Plasencia, E J; Inza, I; Jiménez, J J; Revuelta, P; Mora, M L

    2001-06-01

    Combining the predictions of a set of classifiers has been shown to be an effective way to create composite classifiers that are more accurate than any of the component classifiers. There are many methods for combining the predictions given by component classifiers. We introduce a new method that combines a number of component classifiers using a Bayesian network as a classifier system over the component classifiers' predictions. The component classifiers are standard machine learning classification algorithms, and the Bayesian network structure is learned using a genetic algorithm that searches for the structure that maximises the classification accuracy given the predictions of the component classifiers. Experimental results have been obtained on a datafile of cases containing information about ICU patients at the Canary Islands University Hospital. The accuracy obtained using the presented new approach statistically improves that obtained using standard machine learning methods.
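    The overall pattern (component classifiers whose predictions feed a second-level combiner) is stacked generalisation. The sketch below uses scikit-learn's StackingClassifier with a logistic-regression meta-learner as a stand-in for the paper's GA-structured Bayesian network combiner, and a public dataset rather than the ICU data.

```python
# Sketch of stacked generalisation: component classifiers feed a meta-classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)), ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
print(cross_val_score(stack, X, y, cv=5).mean())
```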

  18. A Study of Different Classifier Combination Approaches for Handwritten Indic Script Recognition

    Directory of Open Access Journals (Sweden)

    Anirban Mukhopadhyay

    2018-02-01

    Full Text Available Script identification is an essential step in document image processing, especially when the environment is multi-script/multilingual. To date, researchers have developed several methods for the said problem. For this kind of complex pattern recognition problem, it is always difficult to decide which classifier would be the best choice. Moreover, it is also true that different classifiers offer complementary information about the patterns to be classified. Therefore, combining classifiers in an intelligent way can be beneficial compared to using any single classifier. Keeping these facts in mind, in this paper, information provided by one shape based and two texture based features is combined using classifier combination techniques for script recognition (word-level) purposes from handwritten document images. CMATERdb8.4.1 contains 7200 handwritten word samples belonging to 12 Indic scripts (600 per script) and the database is made freely available at https://code.google.com/p/cmaterdb/. The word samples from the mentioned database are classified based on the confidence scores provided by a Multi-Layer Perceptron (MLP) classifier. Major classifier combination techniques including majority voting, Borda count, sum rule, product rule, max rule, the Dempster-Shafer (DS) rule of combination and secondary classifiers are evaluated for this pattern recognition problem. A maximum accuracy of 98.45% is achieved on the validation set, an improvement of 7% over the best performing individual classifier.
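    Three of the fixed combination rules evaluated above (sum, product and max) operate directly on per-class confidence scores; a minimal sketch with invented scores is given below.

```python
# Sketch of the sum, product and max combination rules over per-class scores.
import numpy as np

def combine(score_list, rule="sum"):
    """score_list: list of (n_samples, n_classes) confidence arrays."""
    scores = np.stack(score_list)   # (n_classifiers, n_samples, n_classes)
    fused = {"sum": scores.sum(0), "product": scores.prod(0), "max": scores.max(0)}[rule]
    return fused.argmax(axis=1)

s1 = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
s2 = np.array([[0.5, 0.3, 0.2], [0.1, 0.2, 0.7]])
for rule in ("sum", "product", "max"):
    print(rule, combine([s1, s2], rule))
```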

  19. Variants of the Borda count method for combining ranked classifier hypotheses

    NARCIS (Netherlands)

    van Erp, Merijn; Schomaker, Lambert; Schomaker, Lambert; Vuurpijl, Louis

    2000-01-01

    The Borda count is a simple yet effective method of combining rankings. In pattern recognition, classifiers are often able to return a ranked set of results. Several experiments have been conducted to test the ability of the Borda count and two variant methods to combine these ranked classifier
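    The basic Borda count over ranked classifier outputs can be sketched as follows; the rankings are illustrative only.

```python
# Sketch of the Borda count: each classifier ranks the classes, a class earns
# points inversely proportional to its rank, and points are summed over classifiers.
from collections import defaultdict

def borda_count(rankings):
    """rankings: list of class-label lists, best first, one list per classifier."""
    points = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, label in enumerate(ranking):
            points[label] += n - 1 - position
    return max(points, key=points.get)

rankings = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
print(borda_count(rankings))  # -> 'a'
```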

  20. Lung Nodule Image Classification Based on Local Difference Pattern and Combined Classifier.

    Science.gov (United States)

    Mao, Keming; Deng, Zhuofu

    2016-01-01

    This paper proposes a novel lung nodule classification method for low-dose CT images. The method includes two stages. First, the Local Difference Pattern (LDP) is proposed to encode the feature representation, which is extracted by comparing intensity differences along circular regions centered at the lung nodule. Then, the single-center classifier is trained based on LDP. Due to the diversity of feature distribution for different classes, the training images are further clustered into multiple cores and the multicenter classifier is constructed. The two classifiers are combined to make the final decision. Experimental results on a public dataset show the superior performance of LDP and the combined classifier.

  1. Lung Nodule Image Classification Based on Local Difference Pattern and Combined Classifier

    Directory of Open Access Journals (Sweden)

    Keming Mao

    2016-01-01

    Full Text Available This paper proposes a novel lung nodule classification method for low-dose CT images. The method includes two stages. First, the Local Difference Pattern (LDP) is proposed to encode the feature representation, which is extracted by comparing intensity differences along circular regions centered at the lung nodule. Then, the single-center classifier is trained based on LDP. Due to the diversity of feature distribution for different classes, the training images are further clustered into multiple cores and the multicenter classifier is constructed. The two classifiers are combined to make the final decision. Experimental results on a public dataset show the superior performance of LDP and the combined classifier.
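    As a loose illustration of the idea of comparing intensity differences along circular regions centred at the nodule, one can compute mean intensities over concentric rings and take differences between adjacent rings. This is an assumption-laden sketch, not the authors' exact Local Difference Pattern definition.

```python
# Assumption-laden sketch (not the authors' exact LDP): mean intensity per
# concentric ring around the nodule centre, then differences between rings.
import numpy as np

def ring_difference_pattern(image, center, radii):
    yy, xx = np.indices(image.shape)
    dist = np.hypot(yy - center[0], xx - center[1])
    ring_means = []
    for r_in, r_out in zip(radii[:-1], radii[1:]):
        mask = (dist >= r_in) & (dist < r_out)
        ring_means.append(image[mask].mean())
    return np.diff(ring_means)

image = np.random.default_rng(2).random((64, 64))   # placeholder for a CT patch
print(ring_difference_pattern(image, center=(32, 32), radii=[0, 4, 8, 12, 16]))
```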

  2. Complex network approach to classifying classical piano compositions

    Science.gov (United States)

    Xin, Chen; Zhang, Huishu; Huang, Jiping

    2016-10-01

    Complex networks have been regarded as a useful tool for handling systems with vague interactions. Hence, numerous applications have arisen. In this paper we construct complex networks for 770 classical piano compositions of Mozart, Beethoven and Chopin based on musical note pitches and lengths. We find prominent distinctions among the network edges of different composers. Some stylized facts can be explained by such parameters of network structure and topology. Further, we propose two classification methods for music styles and genres according to the discovered distinctions. These methods are easy to implement and the results are sound. This work suggests that complex networks could be a decent way to analyze the characteristics of musical notes, since they could provide a deep view into the understanding of the relationships among notes in musical compositions and evidence for classification of different composers, styles and genres of music.

  3. Combining classifiers generated by multi-gene genetic programming for protein fold recognition using genetic algorithm.

    Science.gov (United States)

    Bardsiri, Mahshid Khatibi; Eftekhari, Mahdi; Mousavi, Reza

    2015-01-01

    In this study the problem of protein fold recognition, which is a classification task, is solved via a hybrid of evolutionary algorithms, namely multi-gene Genetic Programming (GP) and a Genetic Algorithm (GA). Our proposed method consists of two main stages and is performed on three datasets taken from the literature. Each dataset contains different feature groups and classes. In the first step, multi-gene GP is used for producing binary classifiers based on various feature groups for each class. Then, the different classifiers obtained for each class are combined via weighted voting, with the weights determined through the GA. At the end of the first step, there is a separate binary classifier for each class. In the second stage, the obtained binary classifiers are combined via GA weighting in order to generate the overall classifier. The final classifier is superior to previous works found in the literature in terms of classification accuracy.

  4. A Combined Stable Isotope And Machine Learning Approach To Quantify And Classify Of Nitrate Pollution Sources

    Science.gov (United States)

    Boeckx, P. F.; Xue, D.; De Baets, B.

    2011-12-01

    Stable isotope analyses of NO3- (δ15N and δ18O) are widely used to determine the sources of nitrate pollution in water. The objective of our study was (1) to quantify NO3- sources in surface water and to classify surface waters in NO3- pollution classes via a combined stable isotope and machine learning approach; and (2) to assess a decision tree model with physicochemical data for retrieving the latter classification. A logical approach has been followed: (1) 2-year monthly sampling of 30 sampling points from different river basins in Belgium, which were classified into 5 different NO3- pollution classes using experts knowledge (Agriculture (A), Agriculture with groundwater compensation (AGC), Combination of agriculture and horticulture (AH), Greenhouses in an agricultural area (G) and Households (H)); (2) estimating proportional NO3- source contribution per NO3- pollution class by applying a Bayesian isotopic mixing model (SIAR) for measured isotopic data of NO3-; (3) re-classifying the 30 sampling points into NO3- pollution classes via a k-means clustering of the SIAR outputs; and (4) building a decision tree model using physicochemical data to retrieve expert knowledge and k-means clustering classification. SIAR successfully estimated proportional contribution ranges of five potential NO3- sources: NO3- in precipitation, NO3- in fertilizer, NH4+ in fertilizer and precipitation, manure and sewage and soil N. For classes A, AGC, AH and H in winter manure and sewage were major (40 - 60%), NO3- in precipitation minor (pollution classes were optimal for both winter and summer. Thus the 30 sampling points were divided into four classes: classes A, class AGC, class G, and class H (class AH was not retained). Finally, a decision tree model built on physicochemical data using expert classification labels or k-means clustering labels could retrieve ca. 70% of the nitrate pollution classes in both cases. The later suggests that physicochemical data could be applied to

  5. Combining binary classifiers to improve tree species discrimination at leaf level

    CSIR Research Space (South Africa)

    Dastile, X

    2012-11-01

    Full Text Available , direct 7-class prediction results in high misclassification rates. We therefore construct binary classifiers for all possible binary classification problems and combine them using Error Correcting Output Codes (ECOC) to form a 7-class predictor. ECOC...
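    The Error Correcting Output Codes strategy mentioned above decomposes a multi-class problem into binary problems whose outputs are combined through a code matrix; scikit-learn's OutputCodeClassifier illustrates the idea on a public dataset standing in for the leaf-level tree species data.

```python
# Sketch of error-correcting output codes with a placeholder dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
ecoc = OutputCodeClassifier(estimator=SVC(), code_size=2.0, random_state=0)
print(cross_val_score(ecoc, X, y, cv=5).mean())
```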

  6. Strategies for Transporting Data Between Classified and Unclassified Networks

    Science.gov (United States)

    2016-03-01

    Keywords: Sustainment, Logistics, CPC, system mission command (S2MC). ...analyze the national enterprise data portal (NEDP), a foundational component of the sustainment system mission command. The analysis focused on...

  7. Use of artificial neural networks and geographic objects for classifying remote sensing imagery

    Directory of Open Access Journals (Sweden)

    Pedro Resende Silva

    2014-06-01

    Full Text Available The aim of this study was to develop a methodology for mapping land use and land cover in the northern region of Minas Gerais state, where, in addition to agricultural land, the landscape is dominated by native cerrado, deciduous forests, and extensive areas of vereda. Using forest inventory data, as well as RapidEye, Landsat TM and MODIS imagery, three specific objectives were defined: (1) to test the use of image segmentation techniques for an object-based classification encompassing spectral, spatial and temporal information, (2) to test the use of high spatial resolution RapidEye imagery combined with Landsat TM time series imagery for capturing the effects of seasonality, and (3) to classify data using Artificial Neural Networks. Using MODIS time series and forest inventory data, time signatures were extracted from the dominant vegetation formations, enabling selection of the best periods of the year to be represented in the classification process. Objects created with the segmentation of RapidEye images, along with the Landsat TM time series images, were classified by ten different Multilayer Perceptron network architectures. Results showed that the methodology in question meets both the purposes of this study and the characteristics of the local plant life. With excellent accuracy values for native classes, the study showed the importance of a well-structured database for classification and the importance of suitable image segmentation to meet specific purposes.

  8. Neural Networks Classifier for Data Selection in Statistical Machine Translation

    Directory of Open Access Journals (Sweden)

    Peris Álvaro

    2017-06-01

    Full Text Available Corpora are precious resources, as they allow for a proper estimation of statistical machine translation models. Data selection is a variant of the domain adaptation field, aimed to extract those sentences from an out-of-domain corpus that are the most useful to translate a different target domain. We address the data selection problem in statistical machine translation as a classification task. We present a new method, based on neural networks, able to deal with monolingual and bilingual corpora. Empirical results show that our data selection method provides slightly better translation quality, compared to a state-of-the-art method (cross-entropy), requiring substantially less data. Moreover, the results obtained are coherent across different language pairs, demonstrating the robustness of our proposal.

  9. Feature selection for Bayesian network classifiers using the MDL-FS score

    NARCIS (Netherlands)

    Drugan, Madalina M.; Wiering, Marco A.

    When constructing a Bayesian network classifier from data, the more or less redundant features included in a dataset may bias the classifier and as a consequence may result in a relatively poor classification accuracy. In this paper, we study the problem of selecting appropriate subsets of features

  10. 76 FR 63811 - Structural Reforms To Improve the Security of Classified Networks and the Responsible Sharing and...

    Science.gov (United States)

    2011-10-13

    ... Structural Reforms To Improve the Security of Classified Networks and the Responsible Sharing and... classified national security information (classified information) on computer networks, it is hereby ordered as follows: Section 1. Policy. Our Nation's security requires classified information to be shared...

  11. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations

    Directory of Open Access Journals (Sweden)

    Yi Zhang

    2015-01-01

    Full Text Available Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization based nonparametric method in this context. Recently, it has been found that SVM in some cases is equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitate soft decision making. In total four groups of data are used for evaluations, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized in terms of Gaussian/non-Gaussian distributed and balanced/unbalanced samples, which are then further used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions.
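
    One plausible reading of the MLC-SVM combination (an illustrative sketch, not the paper's exact formulation) is to average the posterior estimates of a Gaussian maximum likelihood classifier with the Platt-scaled probabilities of an SVM; the averaging rule and the breast cancer dataset below are assumptions made for the example.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    mlc = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)   # Gaussian maximum likelihood classifier
    svm = SVC(probability=True).fit(X_tr, y_tr)             # SVM with Platt-scaled probabilities

    # Soft decision making: average the two posterior estimates, then take argmax.
    p_combined = 0.5 * (mlc.predict_proba(X_te) + svm.predict_proba(X_te))
    y_hat = p_combined.argmax(axis=1)
    print("combined accuracy:", round((y_hat == y_te).mean(), 3))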

  12. Improving Biochemical Named Entity Recognition Using PSO Classifier Selection and Bayesian Combination Methods.

    Science.gov (United States)

    Akkasi, Abbas; Varoglu, Ekrem

    2017-01-01

    Named Entity Recognition (NER) is a basic step for a large number of consequent text mining tasks in the biochemical domain. Increasing the performance of such recognition systems is of high importance and always poses a challenge. In this study, a new community based decision making system is proposed which aims at increasing the efficiency of NER systems in the chemical/drug name context. The Particle Swarm Optimization (PSO) algorithm is chosen as the expert selection strategy along with the Bayesian combination method to merge the outputs of the selected classifiers as well as evaluate the fitness of the selected candidates. The proposed system performs in two steps. The first step focuses on creating various numbers of baseline classifiers for NER with different feature sets using Conditional Random Fields (CRFs). The second step involves the selection and efficient combination of the classifiers using PSO and Bayesian combination. Two comprehensive corpora from BioCreative events, namely ChemDNER and CEMP, are used for the experiments conducted. Results show that the ensemble of classifiers selected by means of the proposed approach perform better than the single best classifier as well as ensembles formed using other popular selection/combination strategies for both corpora. Furthermore, the proposed method outperforms the best performing system at the BioCreative IV ChemDNER track by achieving an F-score of 87.95 percent.
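
    The Bayesian combination step can be sketched independently of the CRF baselines and the PSO selection: each member's reliability is summarised by a validation confusion matrix and member decisions are fused by a naive-Bayes rule. The confusion matrices and priors below are toy values, and the selection strategy itself is not reproduced.

    import numpy as np

    def bayesian_combination(decisions, confusions, priors):
        """decisions: (K,) predicted labels from K classifiers for one sample.
        confusions: (K, C, C) row-normalised matrices, confusions[k, true, pred].
        priors:     (C,) class prior probabilities."""
        log_post = np.log(priors)
        for k, d in enumerate(decisions):
            # Likelihood of classifier k emitting decision d given each true class.
            log_post = log_post + np.log(confusions[k][:, d] + 1e-12)
        post = np.exp(log_post - log_post.max())
        return post / post.sum()

    # Toy example: 3 classifiers, 2 classes, the third classifier is the least reliable.
    conf = np.array([[[0.9, 0.1], [0.2, 0.8]],
                     [[0.8, 0.2], [0.3, 0.7]],
                     [[0.6, 0.4], [0.4, 0.6]]])
    print(bayesian_combination(np.array([0, 0, 1]), conf, np.array([0.5, 0.5])))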

  13. Combined Approach of PNN and Time-Frequency as the Classifier for Power System Transient Problems

    Directory of Open Access Journals (Sweden)

    Aslam Pervez Memon

    2013-04-01

    Full Text Available The transients in power systems cause serious disturbances in the reliability, safety and economy of the system. Transient signals possess nonstationary characteristics, so both frequency and time-varying information are needed for their analysis. Hence, it is vital first to detect and classify the type of transient fault and then to mitigate it. This article proposes a time-frequency and FFNN (Feedforward Neural Network) approach for the classification of power system transient problems. In this work it is suggested that all the major categories of transients are simulated, de-noised, and decomposed with the DWT (Discrete Wavelet Transform) and MRA (Multiresolution Analysis) algorithms, and then distinctive features are extracted to obtain an optimal vector as input for training the PNN (Probabilistic Neural Network) classifier. The simulation results of the proposed approach prove its simplicity, accuracy and effectiveness for the automatic detection and classification of PST (Power System Transient) types.

  14. An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data

    Directory of Open Access Journals (Sweden)

    Evangelos Stromatias

    2017-06-01

    Full Text Available This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) system capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77% and Poker-DVS (100% real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.
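
    A minimal sketch of the histogram idea only (not the authors' full SNN system): spike events from the final SNN layer are accumulated into per-neuron count histograms per sample, and a classifier over those frames is trained with stochastic gradient descent. The synthetic spike rates below are assumptions for illustration.

    import numpy as np
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_neurons, n_classes = 1000, 64, 10

    # Synthetic per-neuron spike-count histograms: each class drives a different
    # small group of neurons to fire at a higher rate.
    y = rng.integers(0, n_classes, size=n_samples)
    rates = np.ones((n_classes, n_neurons))
    for k in range(n_classes):
        rates[k, 6 * k:6 * k + 6] = 5.0
    X = rng.poisson(rates[y]).astype(float)

    # Frame-domain classifier trained with stochastic gradient descent.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SGDClassifier(max_iter=1000, random_state=0).fit(X_tr, y_tr)
    print("frame-domain accuracy:", round(clf.score(X_te, y_te), 3))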

  15. The use of hyperspectral data for tree species discrimination: Combining binary classifiers

    CSIR Research Space (South Africa)

    Dastile, X

    2010-11-01

    Full Text Available ... A review on the combination of binary classifiers in multiclass problems. Springer Science and Business Media B.V. [7] Dietterich, T. G. and Bakiri, G. (1995). Solving multiclass learning problems via error-correcting output codes. AI Access Foundation...

  16. Automatic construction of a recurrent neural network based classifier for vehicle passage detection

    Science.gov (United States)

    Burnaev, Evgeny; Koptelov, Ivan; Novikov, German; Khanipov, Timur

    2017-03-01

    Recurrent Neural Networks (RNNs) are extensively used for time-series modeling and prediction. We propose an approach for automatic construction of a binary classifier based on Long Short-Term Memory RNNs (LSTM-RNNs) for detection of a vehicle passage through a checkpoint. As an input to the classifier we use multidimensional signals of various sensors that are installed on the checkpoint. Obtained results demonstrate that the previous approach to handcrafting a classifier, consisting of a set of deterministic rules, can be successfully replaced by automatic RNN training on appropriately labelled data.

  17. Prediction models in the design of neural network based ECG classifiers: A neural network and genetic programming approach

    Directory of Open Access Journals (Sweden)

    Smith Ann E

    2002-01-01

    Full Text Available Abstract Background Classification of the electrocardiogram using Neural Networks has become a widely used method in recent years. The efficiency of these classifiers depends upon a number of factors including network training. Unfortunately, there is a shortage of evidence available to enable specific design choices to be made and as a consequence, many designs are made on the basis of trial and error. In this study we develop prediction models to indicate the point at which training should stop for Neural Network based Electrocardiogram classifiers in order to ensure maximum generalisation. Methods Two prediction models have been presented; one based on Neural Networks and the other on Genetic Programming. The inputs to the models were 5 variable training parameters and the output indicated the point at which training should stop. Training and testing of the models was based on the results from 44 previously developed bi-group Neural Network classifiers, discriminating between Anterior Myocardial Infarction and normal patients. Results Our results show that both approaches provide close fits to the training data; p = 0.627 and p = 0.304 for the Neural Network and Genetic Programming methods respectively. For unseen data, the Neural Network exhibited no significant differences between actual and predicted outputs (p = 0.306) while the Genetic Programming method showed a marginally significant difference (p = 0.047). Conclusions The approaches provide reverse engineering solutions to the development of Neural Network based Electrocardiogram classifiers. That is, given the network design and architecture, an indication can be given as to when training should stop to obtain maximum network generalisation.

  18. A Robust Text Classifier Based on Denoising Deep Neural Network in the Analysis of Big Data

    Directory of Open Access Journals (Sweden)

    Wulamu Aziguli

    2017-01-01

    Full Text Available Text classification has always been an interesting issue in the research area of natural language processing (NLP). While entering the era of big data, a good text classifier is critical to achieving NLP for scientific big data analytics. The ever-increasing size of text data poses important challenges in developing effective algorithms for text classification. Given the success of deep neural networks (DNN) in analyzing big data, this article proposes a novel text classifier using DNN, in an effort to improve the computational performance of addressing big text data with hybrid outliers. Specifically, through the use of a denoising autoencoder (DAE) and a restricted Boltzmann machine (RBM), our proposed method, named denoising deep neural network (DDNN), is able to achieve significant improvement, with better noise resistance and feature extraction, compared to the traditional text classification algorithms. The simulations on benchmark datasets verify the effectiveness and robustness of our proposed text classifier.

  19. FERAL : Network-based classifier with application to breast cancer outcome prediction

    NARCIS (Netherlands)

    Allahyar, A.; De Ridder, J.

    2015-01-01

    Motivation: Breast cancer outcome prediction based on gene expression profiles is an important strategy for personalized patient care. To improve the performance and consistency of discovered markers of the initial molecular classifiers, network-based outcome prediction methods (NOPs) have been proposed.

  20. Feature extraction using convolutional neural network for classifying breast density in mammographic images

    Science.gov (United States)

    Thomaz, Ricardo L.; Carneiro, Pedro C.; Patrocinio, Ana C.

    2017-03-01

    Breast cancer is the leading cause of death for women in most countries. The high levels of mortality relate mostly to late diagnosis and to the directly proportional relationship between breast density and breast cancer development. Therefore, the correct assessment of breast density is important to provide better screening for higher risk patients. However, in modern digital mammography the discrimination among breast densities is highly complex due to increased contrast and visual information for all densities. Thus, a computational system for classifying breast density might be a useful tool for aiding medical staff. Several machine-learning algorithms are already capable of classifying a small number of classes with good accuracy. However, the main constraint of machine-learning algorithms relates to the set of features extracted and used for classification. Although well-known feature extraction techniques might provide a good set of features, it is a complex task to select an initial set during the design of a classifier. Thus, we propose feature extraction using a Convolutional Neural Network (CNN) for classifying breast density with a usual machine-learning classifier. We used 307 mammographic images downsampled to 260x200 pixels to train a CNN and extract features from a deep layer. After training, the activations of 8 neurons from a deep fully connected layer are extracted and used as features. Then, these features are fed forward to a single hidden layer neural network that is cross-validated using 10-folds to classify among four classes of breast density. The global accuracy of this method is 98.4%, presenting only 1.6% of misclassification. However, the small set of samples and memory constraints required the reuse of data in both the CNN and the MLP-NN, therefore overfitting might have influenced the results even though we cross-validated the network. Thus, although we presented a promising method for extracting features and classifying breast density, a greater database is
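
    The two-stage recipe (pretrained CNN as a fixed feature extractor, small single-hidden-layer classifier on top) can be sketched as below; the network choice (ResNet-18), input size and the random stand-in images are assumptions for illustration and do not match the paper's CNN or data.

    import numpy as np
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image
    from sklearn.neural_network import MLPClassifier

    # Fixed feature extractor: a pretrained CNN with its classification head removed.
    cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    cnn.fc = torch.nn.Identity()            # expose the 512-d penultimate activations
    cnn.eval()

    preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                            T.Normalize(mean=[0.485, 0.456, 0.406],
                                        std=[0.229, 0.224, 0.225])])

    def deep_features(pil_images):
        batch = torch.stack([preprocess(img) for img in pil_images])
        with torch.no_grad():
            return cnn(batch).numpy()

    # Random stand-ins for mammographic ROIs and their four density labels.
    rng = np.random.default_rng(0)
    imgs = [Image.fromarray(rng.integers(0, 255, (260, 200, 3), dtype=np.uint8))
            for _ in range(40)]
    labels = rng.integers(0, 4, size=40)

    # Single-hidden-layer classifier trained on the extracted deep features.
    mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
    mlp.fit(deep_features(imgs), labels)
    print("toy training accuracy:", mlp.score(deep_features(imgs), labels))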

  1. Classifying Algorithm Based on a Fuzzy Neural network for the control of a Network Attached Optical Jukebox

    Science.gov (United States)

    Liu, Xuan; Jia, Hui-Bo; Cheng, Ming

    2006-11-01

    A new analytical method for improving the performance of a network attached optical jukebox is presented by means of artificial neural networks. Through analyzing the operation (request) process in this system, a mathematical model and algorithm are built for this storage system, and then a classification method based on artificial neural networks is proposed. Simulation results verified the feasibility and validity of the proposed method, showing that it can overcome the drawbacks of frequent I/O operations and provide an effective way of using the Network Attached Optical Jukebox.

  2. Zooniverse: Combining Human and Machine Classifiers for the Big Survey Era

    Science.gov (United States)

    Fortson, Lucy; Wright, Darryl; Beck, Melanie; Lintott, Chris; Scarlata, Claudia; Dickinson, Hugh; Trouille, Laura; Willi, Marco; Laraia, Michael; Boyer, Amy; Veldhuis, Marten; Zooniverse

    2018-01-01

    Many analyses of astronomical data sets, ranging from morphological classification of galaxies to identification of supernova candidates, have relied on humans to classify data into distinct categories. Crowdsourced galaxy classifications via the Galaxy Zoo project provided a solution that scaled visual classification for extant surveys by harnessing the combined power of thousands of volunteers. However, the much larger data sets anticipated from upcoming surveys will require a different approach. Automated classifiers using supervised machine learning have improved considerably over the past decade but their increasing sophistication comes at the expense of needing ever more training data. Crowdsourced classification by human volunteers is a critical technique for obtaining these training data. But several improvements can be made on this zeroth order solution. Efficiency gains can be achieved by implementing a “cascade filtering” approach whereby the task structure is reduced to a set of binary questions that are more suited to simpler machines while demanding lower cognitive loads for humans. Intelligent subject retirement based on quantitative metrics of volunteer skill and subject label reliability also leads to dramatic improvements in efficiency. We note that human and machine classifiers may retire subjects differently leading to trade-offs in performance space. Drawing on work with several Zooniverse projects including Galaxy Zoo and Supernova Hunter, we will present recent findings from experiments that combine cohorts of human and machine classifiers. We show that the most efficient system results when appropriate subsets of the data are intelligently assigned to each group according to their particular capabilities. With sufficient online training, simple machines can quickly classify “easy” subjects, leaving more difficult (and discovery-oriented) tasks for volunteers. We also find humans achieve higher classification purity while samples

  3. Training neural network classifiers for medical decision making: the effects of imbalanced datasets on classification performance.

    Science.gov (United States)

    Mazurowski, Maciej A; Habas, Piotr A; Zurada, Jacek M; Lo, Joseph Y; Baker, Jay A; Tourassi, Georgia D

    2008-01-01

    This study investigates the effect of class imbalance in training data when developing neural network classifiers for computer-aided medical diagnosis. The investigation is performed in the presence of other characteristics that are typical among medical data, namely small training sample size, large number of features, and correlations between features. Two methods of neural network training are explored: classical backpropagation (BP) and particle swarm optimization (PSO) with clinically relevant training criteria. An experimental study is performed using simulated data and the conclusions are further validated on real clinical data for breast cancer diagnosis. The results show that classifier performance deteriorates with even modest class imbalance in the training data. Further, it is shown that BP is generally preferable over PSO for imbalanced training data, especially with small data samples and a large number of features. Finally, it is shown that there is no clear preference between the oversampling and no-compensation approaches, and some guidance is provided regarding a proper selection.
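
    As a hedged illustration of one compensation approach discussed in this line of work, the sketch below compares a neural network trained with no compensation against one trained after random oversampling of the minority class; the synthetic 90/10 class split is an assumption, not the study's data or protocol.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.utils import resample

    X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Oversample the minority class so both classes are equally represented.
    minority, majority = X_tr[y_tr == 1], X_tr[y_tr == 0]
    minority_up = resample(minority, n_samples=len(majority), random_state=0)
    X_bal = np.vstack([majority, minority_up])
    y_bal = np.array([0] * len(majority) + [1] * len(minority_up))

    for name, (Xf, yf) in {"no compensation": (X_tr, y_tr),
                           "oversampled": (X_bal, y_bal)}.items():
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                            random_state=0).fit(Xf, yf)
        print(name, "test accuracy:", round(clf.score(X_te, y_te), 3))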

  4. Autoregressive Integrated Adaptive Neural Networks Classifier for EEG-P300 Classification

    Directory of Open Access Journals (Sweden)

    Demi Soetraprawata

    2013-06-01

    Full Text Available The Brain Computer Interface has the potential to be applied in mechatronic apparatus and vehicles in the future. Compared to other techniques, EEG is the most preferred for BCI designs. In this paper, a new adaptive neural network classifier of different mental activities from EEG-based P300 signals is proposed. To overcome the over-training caused by noisy and non-stationary data, the EEG signals are filtered and features are extracted using autoregressive models before being passed to the adaptive neural network classifier. To test the improvement in EEG classification performance with the proposed method, comparative experiments were conducted using Bayesian Linear Discriminant Analysis. The experimental results show that all subjects achieve a classification accuracy of 100%.

  5. ELHnet: a convolutional neural network for classifying cochlear endolymphatic hydrops imaged with optical coherence tomography.

    Science.gov (United States)

    Liu, George S; Zhu, Michael H; Kim, Jinkyung; Raphael, Patrick; Applegate, Brian E; Oghalai, John S

    2017-10-01

    Detection of endolymphatic hydrops is important for diagnosing Meniere's disease, and can be performed non-invasively using optical coherence tomography (OCT) in animal models as well as potentially in the clinic. Here, we developed ELHnet, a convolutional neural network to classify endolymphatic hydrops in a mouse model using learned features from OCT images of mice cochleae. We trained ELHnet on 2159 training and validation images from 17 mice, using only the image pixels and observer-determined labels of endolymphatic hydrops as the inputs. We tested ELHnet on 37 images from 37 mice that were previously not used, and found that the neural network correctly classified 34 of the 37 mice. This demonstrates an improvement in performance from previous work on computer-aided classification of endolymphatic hydrops. To the best of our knowledge, this is the first deep CNN designed for endolymphatic hydrops classification.

  6. Covert Network Analysis for Key Player Detection and Event Prediction Using a Hybrid Classifier

    Directory of Open Access Journals (Sweden)

    Wasi Haider Butt

    2014-01-01

    attraction of researchers and practitioners to design systems which can detect the main members who are actually responsible for such kinds of events. In this paper, we present a novel method to predict key players from a covert network by applying a hybrid framework. The proposed system calculates certain centrality measures for each node in the network and then applies a novel hybrid classifier for detection of key players. Our system also applies anomaly detection to predict any terrorist activity in order to help law enforcement agencies destabilize the involved network. As a proof of concept, the proposed framework has been implemented and tested using different case studies including two publicly available datasets and one local network.

  7. Covert network analysis for key player detection and event prediction using a hybrid classifier.

    Science.gov (United States)

    Butt, Wasi Haider; Akram, M Usman; Khan, Shoab A; Javed, Muhammad Younus

    2014-01-01

    National security has gained vital importance due to the increasing number of suspicious and terrorist events across the globe. Use of different subfields of information technology has also attracted researchers and practitioners to design systems which can detect the main members who are actually responsible for such kinds of events. In this paper, we present a novel method to predict key players from a covert network by applying a hybrid framework. The proposed system calculates certain centrality measures for each node in the network and then applies a novel hybrid classifier for detection of key players. Our system also applies anomaly detection to predict any terrorist activity in order to help law enforcement agencies destabilize the involved network. As a proof of concept, the proposed framework has been implemented and tested using different case studies including two publicly available datasets and one local network.

  8. Protein Secondary Structure Prediction Using AutoEncoder Network and Bayes Classifier

    Science.gov (United States)

    Wang, Leilei; Cheng, Jinyong

    2018-03-01

    Protein secondary structure prediction belongs to bioinformatics and is an important research area. In this paper, we propose a new method for predicting protein secondary structure using a Bayes classifier and an autoencoder network. Our experiments cover several aspects of the algorithm, including the construction of the model and its classification parameters. The data set is the typical CB513 protein data set. Accuracy is assessed with 3-fold cross-validation, from which the Q3 accuracy is obtained. The results illustrate that the autoencoder network improved the prediction accuracy of protein secondary structure.

  9. Patient Specific Seizure Prediction System Using Hilbert Spectrum and Bayesian Networks Classifiers

    OpenAIRE

    Ozdemir, Nilufer; Yildirim, Esen

    2014-01-01

    The aim of this paper is to develop an automated system for epileptic seizure prediction from intracranial EEG signals based on Hilbert-Huang transform (HHT) and Bayesian classifiers. Proposed system includes decomposition of the signals into intrinsic mode functions for obtaining features and use of Bayesian networks with correlation based feature selection for binary classification of preictal and interictal recordings. The system was trained and tested on Freiburg EEG database. 58 hours of...

  10. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition.

    Directory of Open Access Journals (Sweden)

    QingJun Song

    Full Text Available Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. The best combination of feature variables can be automatically determined by using the multi-class F-score (MF-Score) feature selection. In terms of nonlinear mapping in real-world optimization problems, an effective minimum enclosing ball (MEB) algorithm plus Support Vector Machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier in coal-rock recognition, which exhibits inherently complex data distributions. The proposed method is examined on UCI data sets and the caving dataset, and compared with several recent high-performing SVM classifiers. We conduct experiments with accuracy and the Friedman test for comparison of multiple classifiers over the UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The experiments on the caving dataset show better performance, indicating a promising approach to feature selection and multi-class recognition in coal-rock recognition.

  11. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition.

    Science.gov (United States)

    Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan

    2017-01-01

    Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. The best combination of feature variables can be automatically determined by using the multi-class F-score (MF-Score) feature selection. In terms of nonlinear mapping in real-world optimization problems, an effective minimum enclosing ball (MEB) algorithm plus Support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier in coal-rock recognition, which exhibits inherently complex data distributions. The proposed method is examined on UCI data sets and the caving dataset, and compared with several recent high-performing SVM classifiers. We conduct experiments with accuracy and the Friedman test for comparison of multiple classifiers over the UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The experiments on the caving dataset show better performance, indicating a promising approach to feature selection and multi-class recognition in coal-rock recognition.
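
    The geometric primitive behind the MEB-SVM framework can be sketched with a simple approximate minimum enclosing ball via the Badoiu-Clarkson iteration; the random 10-dimensional points below stand in for the caving features, and the coupling with the SVM classifier is not shown.

    import numpy as np

    def minimum_enclosing_ball(points, n_iter=200):
        """Approximate center and radius of the smallest ball containing `points`."""
        center = points.mean(axis=0)
        for t in range(1, n_iter + 1):
            dists = np.linalg.norm(points - center, axis=1)
            farthest = points[dists.argmax()]
            # Move the center a shrinking step toward the current farthest point.
            center = center + (farthest - center) / (t + 1)
        radius = np.linalg.norm(points - center, axis=1).max()
        return center, radius

    rng = np.random.default_rng(0)
    pts = rng.normal(size=(300, 10))          # stand-in for 10 feature variables
    c, r = minimum_enclosing_ball(pts)
    print("approx. MEB radius:", round(float(r), 3))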

  12. Algorithms for Image Analysis and Combination of Pattern Classifiers with Application to Medical Diagnosis

    Science.gov (United States)

    Georgiou, Harris

    2009-10-01

    Medical Informatics and the application of modern signal processing in the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. At a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended in multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.

  13. Classifying the Perceptual Interpretations of a Bistable Image Using EEG and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Alexander E. Hramov

    2017-12-01

    Full Text Available In order to classify different human brain states related to visual perception of ambiguous images, we use an artificial neural network (ANN) to analyze multichannel EEG. The classifier built on the basis of a multilayer perceptron achieves up to 95% accuracy in classifying EEG patterns corresponding to two different interpretations of the Necker cube. An important feature of our classifier is that, trained on one subject, it can be used for the classification of EEG traces of other subjects. This result suggests the existence of common features in the EEG structure associated with distinct interpretations of bistable objects. We firmly believe that the significance of our results is not limited to visual perception of the Necker cube images; the proposed experimental approach and developed computational technique based on ANN can also be applied to study and classify different brain states using neurophysiological data recordings. This may give new directions for future research in the field of cognitive and pathological brain activity, and for the development of brain-computer interfaces.

  14. Biomolecular Network-Based Synergistic Drug Combination Discovery

    Directory of Open Access Journals (Sweden)

    Xiangyi Li

    2016-01-01

    Full Text Available Drug combination is a powerful and promising approach for complex disease therapy such as cancer and cardiovascular disease. However, the number of synergistic drug combinations approved by the Food and Drug Administration is very small. To bridge the gap between urgent need and low yield, researchers have constructed various models to identify synergistic drug combinations. Among these models, the biomolecular network-based model is outstanding because of its ability to reflect and illustrate the relationships among drugs, disease-related genes, therapeutic targets, and disease-specific signaling pathways as a system. In this review, we analyzed and classified models for synergistic drug combination prediction from the recent decade according to their respective algorithms. In addition, we collected useful resources including databases and analysis tools for synergistic drug combination prediction. This should provide a quick resource for computational biologists who work with network medicine or synergistic drug combination design.

  15. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    Science.gov (United States)

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Infrared dim and small target tracking is a highly challenging task. The main challenge for target tracking is to account for appearance change of an object, which submerges in the cluttered background. An efficient appearance model that exploits both the global template and local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse representation-based algorithm is adopted to calculate the confidence value that assigns more weights to target templates than negative background templates. In the CNGM model, simple cell feature maps are obtained by calculating the convolution between target templates and fixed filters, which are extracted from the target region at the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, therefore encoding its local structural information. Then, all the maps form a representation, preserving the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model. The same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. Finally, the collaborative confidence values of particles are utilized to generate the particles' importance weights. Experiments on various infrared sequences have validated the tracking capability of the presented algorithm. Experimental results show that this algorithm runs in real time and provides higher accuracy than state-of-the-art algorithms.

  16. Combination of Classifiers Identifies Fungal-Specific Activation of Lysosome Genes in Human Monocytes

    Directory of Open Access Journals (Sweden)

    João P. Leonor Fernandes Saraiva

    2017-11-01

    Full Text Available Blood stream infections can be caused by several pathogens such as viruses, fungi and bacteria and can cause severe clinical complications including sepsis. Delivery of appropriate and quick treatment is mandatory. However, it requires a rapid identification of the invading pathogen. The current gold standard for pathogen identification relies on blood cultures, and these methods require a long time to reach the needed diagnosis. The use of in situ experiments attempts to identify pathogen-specific immune responses, but these often lead to heterogeneous biomarkers due to the high variability in methods and materials used. Using gene expression profiles for machine learning is a developing approach to discriminate between types of infection, but also shows a high degree of inconsistency. To produce consistent gene signatures, capable of discriminating fungal from bacterial infection, we have employed Support Vector Machines (SVMs) based on Mixed Integer Linear Programming (MILP). Combining classifiers by joint optimization, constraining them to the same set of discriminating features, increased the consistency of our biomarker list independently of leukocyte type or experimental setup. Our gene signature showed an enrichment of genes of the lysosome pathway which was not uncovered by the use of independent classifiers. Moreover, our results suggest that the lysosome genes are specifically induced in monocytes. Real time qPCR of the identified lysosome-related genes confirmed the distinct gene expression increase in monocytes during fungal infections. In conclusion, our combined classifier approach presented increased consistency and was able to “unmask” signaling pathways of less-present immune cells in the used datasets.

  17. A General Fuzzy Cerebellar Model Neural Network Multidimensional Classifier Using Intuitionistic Fuzzy Sets for Medical Identification

    Directory of Open Access Journals (Sweden)

    Jing Zhao

    2016-01-01

    Full Text Available The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnosis. In this paper, a new multidimensional classifier is proposed by using an intelligent algorithm, which is the general fuzzy cerebellar model neural network (GFCMNN). To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The solution of classification is obtained by a similarity measurement. The advantages of the novel classifier proposed here are drawn out by comparing the same medical example under the methods of intuitionistic fuzzy sets (IFSs) and intuitionistic fuzzy cross-entropy (IFCE) with different score functions. Cross verification experiments are also conducted to further test the classification ability of the GFCMNN multidimensional classifier. All of these experimental results show the effectiveness of the proposed GFCMNN multidimensional classifier and point out that it can assist in supporting correct medical diagnoses associated with multiple categories.

  18. ELM BASED CAD SYSTEM TO CLASSIFY MAMMOGRAMS BY THE COMBINATION OF CLBP AND CONTOURLET

    Directory of Open Access Journals (Sweden)

    S Venkatalakshmi

    2017-05-01

    Full Text Available Breast cancer is a serious threat to women's lives worldwide. Mammography is a promising screening tool which can show the abnormality being detected. However, physicians find it difficult to detect the affected regions, as the size of microcalcifications is very small. Hence it would be better if a CAD system could accompany the physician in detecting the suspicious regions. Taking this as a challenge, this paper presents a CAD system for mammogram classification which is shown to be accurate and reliable. The entire work is decomposed into four different stages and the outcome of each phase is passed as the input of the following phase. Initially, the mammogram is pre-processed by an adaptive median filter and the segmentation is done by GHFCM. The features are extracted by combining the texture feature descriptors Completed Local Binary Pattern (CLBP) and contourlet to frame the feature sets. In the training phase, an Extreme Learning Machine (ELM) is trained with the feature sets. During the testing phase, the ELM can classify between normal, malignant and benign types of cancer. The performance of the proposed approach is analysed by varying the classifier, feature extractors and parameters of the feature extractor. From the experimental analysis, it is evident that the proposed work outperforms the analogous techniques in terms of accuracy, sensitivity and specificity.

  19. Intelligent Recognition of Lung Nodule Combining Rule-based and C-SVM Classifiers

    Directory of Open Access Journals (Sweden)

    Bin Li

    2012-02-01

    Full Text Available A computer-aided detection (CAD) system for lung nodules plays an important role in the diagnosis of lung cancer. In this paper, an improved intelligent recognition method for lung nodules in HRCT, combining rule-based and cost-sensitive support vector machine (C-SVM) classifiers, is proposed for detecting both solid nodules and ground-glass opacity (GGO) nodules (part-solid and nonsolid). This method consists of several steps. Firstly, segmentation of regions of interest (ROIs), including pulmonary parenchyma and lung nodule candidates, is a difficult task. On one side, the presence of noise lowers the visibility of low-contrast objects. On the other side, different types of nodules, including small nodules, nodules connecting to vasculature or other structures, and part-solid or nonsolid nodules, are complex, noisy, and have weak edges or boundaries that are difficult to define. In order to overcome the difficulties of the obvious boundary-leak and slow evolution speed problems in segmentation of weak edges, an overall segmentation method is proposed: the lung parenchyma is extracted based on a threshold and morphologic segmentation method; image denoising and enhancement are realized by a nonlinear anisotropic diffusion filtering (NADF) method; candidate pulmonary nodules are segmented by the improved C-V level set method, in which the segmentation result of an EM-based fuzzy threshold method is used as the initial contour of the active contour model and a constrained energy term is added into the PDE of the level set function. Then, lung nodules are classified by using the intelligent classifiers combining rules and C-SVM. Rule-based classification is first used to remove easily dismissible non-nodule objects; then C-SVM classification is used to further classify nodule candidates and reduce the number of false positive (FP) objects. In order to increase the efficiency of the SVM, an improved training method is used to train the SVM, which uses the grid search method to search the optimal

  20. Intelligent Recognition of Lung Nodule Combining Rule-based and C-SVM Classifiers

    Directory of Open Access Journals (Sweden)

    Bin Li

    2011-10-01

    Full Text Available A computer-aided detection (CAD) system for lung nodules plays an important role in the diagnosis of lung cancer. In this paper, an improved intelligent recognition method for lung nodules in HRCT, combining rule-based and cost-sensitive support vector machine (C-SVM) classifiers, is proposed for detecting both solid nodules and ground-glass opacity (GGO) nodules (part-solid and nonsolid). This method consists of several steps. Firstly, segmentation of regions of interest (ROIs), including pulmonary parenchyma and lung nodule candidates, is a difficult task. On one side, the presence of noise lowers the visibility of low-contrast objects. On the other side, different types of nodules, including small nodules, nodules connecting to vasculature or other structures, and part-solid or nonsolid nodules, are complex, noisy, and have weak edges or boundaries that are difficult to define. In order to overcome the difficulties of the obvious boundary-leak and slow evolution speed problems in segmentation of weak edges, an overall segmentation method is proposed: the lung parenchyma is extracted based on a threshold and morphologic segmentation method; image denoising and enhancement are realized by a nonlinear anisotropic diffusion filtering (NADF) method; candidate pulmonary nodules are segmented by the improved C-V level set method, in which the segmentation result of an EM-based fuzzy threshold method is used as the initial contour of the active contour model and a constrained energy term is added into the PDE of the level set function. Then, lung nodules are classified by using the intelligent classifiers combining rules and C-SVM. Rule-based classification is first used to remove easily dismissible non-nodule objects; then C-SVM classification is used to further classify nodule candidates and reduce the number of false positive (FP) objects. In order to increase the efficiency of the SVM, an improved training method is used to train the SVM, which uses the grid search method to search the optimal parameters

  1. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Science.gov (United States)

    McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
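
    A minimal numpy sketch of the basic ELM recipe described above: random, sparsely masked input weights (a crude stand-in for the random 'receptive field' sampling), a nonlinearity, and output weights solved by regularised least squares. Layer sizes, the masking rule and the toy data are assumptions, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_elm(X, y, n_hidden=1000, density=0.1, reg=1e-3):
        n_features = X.shape[1]
        W = rng.normal(size=(n_features, n_hidden))
        W *= rng.random((n_features, n_hidden)) < density    # sparse random input weights
        H = np.tanh(X @ W)                                   # hidden-layer activations
        Y = np.eye(int(y.max()) + 1)[y]                      # one-hot targets
        # Output weights: single regularised least-squares solve, no iterative training.
        beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
        return W, beta

    def predict_elm(X, W, beta):
        return (np.tanh(X @ W) @ beta).argmax(axis=1)

    # Toy usage on random data standing in for flattened digit images.
    X = rng.normal(size=(2000, 784))
    y = rng.integers(0, 10, size=2000)
    W, beta = train_elm(X, y)
    print("training accuracy on toy data:", (predict_elm(X, W, beta) == y).mean())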

  2. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Directory of Open Access Journals (Sweden)

    Mark D McDonnell

    Full Text Available Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.

  3. Multilayer cellular neural network and fuzzy C-mean classifiers: comparison and performance analysis

    Science.gov (United States)

    Trujillo San-Martin, Maite; Hlebarov, Vejen; Sadki, Mustapha

    2004-11-01

    Neural Networks and Fuzzy systems are considered two of the most important artificial intelligence algorithms, which provide classification capabilities obtained through different learning schemes that capture knowledge and process it according to particular rule-based algorithms. These methods are especially suited to exploit the tolerance for uncertainty and vagueness in cognitive reasoning. By applying these methods with some relevant knowledge-based rules extracted using different data analysis tools, it is possible to obtain a robust classification performance for a wide range of applications. This paper focuses on non-destructive testing quality control systems, in particular the classification of metallic structures according to corrosion time using a novel cellular neural network architecture, which is explained in detail. Additionally, we compare these results with the ones obtained using the Fuzzy C-means clustering algorithm and analyse both classifiers according to their classification capabilities.

  4. Hierarchical Wireless Multimedia Sensor Networks for Collaborative Hybrid Semi-Supervised Classifier Learning

    Directory of Open Access Journals (Sweden)

    Liang Ding

    2007-11-01

    Full Text Available Wireless multimedia sensor networks (WMSN) have recently emerged as one of the most important technologies, driven by powerful multimedia signal acquisition and processing abilities. Target classification is an important research issue addressed in WMSN, which has strict requirements in robustness, quickness and accuracy. This paper proposes a collaborative semi-supervised classifier learning algorithm to achieve durative online learning for support vector machine (SVM) based robust target classification. The proposed algorithm incrementally carries out the semi-supervised classifier learning process in hierarchical WMSN, with the collaboration of multiple sensor nodes in a hybrid computing paradigm. For decreasing the energy consumption and improving the performance, some metrics are introduced to evaluate the effectiveness of the samples in specific sensor nodes, and a sensor node selection strategy is also proposed to reduce the impact of inevitable missing detection and false detection. With the ant optimization routing, the learning process is implemented with the selected sensor nodes, which can decrease the energy consumption. Experimental results demonstrate that the collaborative hybrid semi-supervised classifier learning algorithm can effectively implement target classification in hierarchical WMSN. It has outstanding performance in terms of energy efficiency and time cost, which verifies the effectiveness of the sensor node selection and ant optimization routing.

  5. Segment convolutional neural networks (Seg-CNNs) for classifying relations in clinical notes.

    Science.gov (United States)

    Luo, Yuan; Cheng, Yu; Uzuner, Özlem; Szolovits, Peter; Starren, Justin

    2018-01-01

    We propose Segment Convolutional Neural Networks (Seg-CNNs) for classifying relations from clinical notes. Seg-CNNs use only word-embedding features without manual feature engineering. Unlike typical CNN models, relations between 2 concepts are identified by simultaneously learning separate representations for text segments in a sentence: preceding, concept1, middle, concept2, and succeeding. We evaluate Seg-CNN on the i2b2/VA relation classification challenge dataset. We show that Seg-CNN achieves a state-of-the-art micro-average F-measure of 0.742 for overall evaluation, 0.686 for classifying medical problem-treatment relations, 0.820 for medical problem-test relations, and 0.702 for medical problem-medical problem relations. We demonstrate the benefits of learning segment-level representations. We show that medical domain word embeddings help improve relation classification. Seg-CNNs can be trained quickly for the i2b2/VA dataset on a graphics processing unit (GPU) platform. These results support the use of CNNs computed over segments of text for classifying medical relations, as they show state-of-the-art performance while requiring no manual feature engineering. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
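
    A hedged sketch of the segment construction step only: a tokenised sentence is split into the five segments for which Seg-CNN learns separate representations (preceding, concept1, middle, concept2, succeeding); the embeddings and convolutional layers are omitted, and the example sentence and concept indices are invented for illustration.

    def split_segments(tokens, c1_span, c2_span):
        """c1_span/c2_span: (start, end) token indices of the two concepts, end exclusive.
        Segments follow sentence order, so concept1 is whichever concept appears first."""
        (s1, e1), (s2, e2) = sorted([c1_span, c2_span])
        return {
            "preceding": tokens[:s1],
            "concept1": tokens[s1:e1],
            "middle": tokens[e1:s2],
            "concept2": tokens[s2:e2],
            "succeeding": tokens[e2:],
        }

    tokens = "the patient was given aspirin to relieve the chest pain".split()
    print(split_segments(tokens, c1_span=(4, 5), c2_span=(8, 10)))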

  6. Performance Analysis and Optimization for Cognitive Radio Networks with Classified Secondary Users and Impatient Packets

    Directory of Open Access Journals (Sweden)

    Yuan Zhao

    2017-01-01

    Full Text Available A cognitive radio network with classified Secondary Users (SUs) is considered. There are two types of SU packets, namely, SU1 packets and SU2 packets, in the system. The SU1 packets have higher priority than the SU2 packets. Considering the diversity of the SU packets and the real-time need of the interrupted SU packets, a novel spectrum allocation strategy with classified SUs and impatient packets is proposed. Based on the number of PU packets, SU1 packets, and SU2 packets in the system, by modeling the queue dynamics of the network's users as a three-dimensional discrete-time Markov chain, the transition probability matrix of the Markov chain is given. Then, with the steady-state analysis, some important performance measures of the SU2 packets are derived to show the system performance with numerical results. Specifically, in order to optimize the system actions of the SU2 packets, the individually optimal strategy and the socially optimal strategy for the SU2 packets are demonstrated. Finally, a pricing mechanism is provided to oblige the SU2 packets to follow the socially optimal strategy.

  7. Classifying Sources Influencing Indoor Air Quality (IAQ Using Artificial Neural Network (ANN

    Directory of Open Access Journals (Sweden)

    Shaharil Mad Saad

    2015-05-01

    Full Text Available Monitoring indoor air quality (IAQ) is deemed important nowadays. A sophisticated IAQ monitoring system which could classify the source influencing the IAQ is definitely going to be very helpful to the users. Therefore, in this paper, an IAQ monitoring system has been proposed with a newly added feature which enables the system to identify the sources influencing the level of IAQ. In order to achieve this, the data collected has been trained with an artificial neural network or ANN—a proven method for pattern recognition. Basically, the proposed system consists of sensor module cloud (SMC), base station and service-oriented client. The SMC contains collections of sensor modules that measure the air quality data and transmit the captured data to the base station through a wireless network. The IAQ monitoring system is also equipped with an IAQ Index and thermal comfort index which could tell the users about the room's conditions. The results showed that the system is able to measure the level of air quality and successfully classify the sources influencing IAQ in various environments like ambient air, chemical presence, fragrance presence, foods and beverages and human activity.

  8. Classifying Sources Influencing Indoor Air Quality (IAQ) Using Artificial Neural Network (ANN).

    Science.gov (United States)

    Saad, Shaharil Mad; Andrew, Allan Melvin; Shakaff, Ali Yeon Md; Saad, Abdul Rahman Mohd; Kamarudin, Azman Muhamad Yusof; Zakaria, Ammar

    2015-05-20

    Monitoring indoor air quality (IAQ) is deemed important nowadays. A sophisticated IAQ monitoring system which could classify the source influencing the IAQ is definitely going to be very helpful to the users. Therefore, in this paper, an IAQ monitoring system has been proposed with a newly added feature which enables the system to identify the sources influencing the level of IAQ. In order to achieve this, the data collected has been trained with an artificial neural network or ANN--a proven method for pattern recognition. Basically, the proposed system consists of sensor module cloud (SMC), base station and service-oriented client. The SMC contains collections of sensor modules that measure the air quality data and transmit the captured data to the base station through a wireless network. The IAQ monitoring system is also equipped with an IAQ Index and thermal comfort index which could tell the users about the room's conditions. The results showed that the system is able to measure the level of air quality and successfully classify the sources influencing IAQ in various environments like ambient air, chemical presence, fragrance presence, foods and beverages and human activity.

  9. Classifying region of interests from mammograms with breast cancer into BIRADS using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Estefanía D. Avalos-Rivera

    2017-05-01

    Full Text Available Breast cancer is one of the most common cancers among women all over the world. Early diagnosis and treatment are particularly important in reducing the mortality rate. This research is focused on the prevention of breast cancer; it is therefore important to detect micro-calcifications (MCs), which are a sign of early-stage breast cancer. Micro-calcifications are tiny deposits of calcium which are visible on mammograms as tiny white spots. Computer-aided diagnosis (CAD) systems have been created with the development of computer technology so that radiologists can improve their diagnostics while using CAD as a second reader. We aim to classify into BIRADS 2, 3 and 4, which are the stages at which the cancer can be prevented, and a fourth category called "No lesion", which covers veins and tissue that our high-pass Gaussian filter detects. This research focuses on classification using an artificial neural network (ANN). Experimenting with the categories to classify into using the ANN, the results were the following: with the four categories mentioned before, an overall accuracy of 71% was obtained; joining BIRADS 2 and 3 into one category and classifying into 3 categories gave 80% accuracy. Joining these two categories was the result of analyzing the ROC curve and observing the ROI images of the MCs, as the regions measured are very alike in these two categories and the variation is that MCs are more present in BIRADS 3 than in BIRADS 2. The data matrix was also reduced using principal component analysis (PCA), but this did not give better results and was discarded, as the ANN classification accuracy was reduced to 69.8%.
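
    To illustrate the comparison reported above (an ANN on raw ROI features versus an ANN on PCA-reduced features), here is a hedged scikit-learn sketch on synthetic data; the feature matrix, class labels, and network size are placeholders, not the study's.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for the ROI feature matrix (rows: micro-calcification ROIs,
    # columns: measured descriptors); labels mimic {BIRADS 2+3, BIRADS 4, no lesion}.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 40))
    y = rng.integers(0, 3, size=300)

    ann = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=1))
    ann_pca = make_pipeline(StandardScaler(), PCA(n_components=10),
                            MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=1))

    # Cross-validated accuracy with and without the PCA reduction step.
    print("ANN accuracy:      ", cross_val_score(ann, X, y, cv=5).mean())
    print("PCA + ANN accuracy:", cross_val_score(ann_pca, X, y, cv=5).mean())
    ```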

  10. Combined Heuristic Attack Strategy on Complex Networks

    Directory of Open Access Journals (Sweden)

    Marek Šimon

    2017-01-01

    Full Text Available Usually, the existence of a complex network is considered an advantageous feature and efforts are made to increase its robustness against an attack. However, there also exist harmful and/or malicious networks, from social ones like those spreading hoaxes, corruption, phishing, extremist ideology, and terrorist support, up to computer networks spreading computer viruses or DDoS attack software, or even biological networks of carriers or transport centers spreading disease among the population. A new attack strategy can therefore be used against malicious networks, as well as in a worst-case-scenario test for the robustness of a useful network. A common measure of the robustness of networks is their disintegration level after removal of a fraction of nodes. This robustness can be calculated as the ratio of the number of nodes of the greatest remaining network component to the number of nodes in the original network. Our paper presents a combination of heuristics optimized for an attack on a complex network to achieve its greatest disintegration. Nodes are deleted sequentially based on a heuristic criterion. The efficiency of classical attack approaches is compared with the proposed approach on Barabási-Albert, scale-free with tunable power-law exponent, and Erdős-Rényi models of complex networks and on real-world networks. Our attack strategy results in a faster disintegration, which is counterbalanced by its slightly increased computational demands.
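
    A minimal NetworkX sketch of the robustness measure described above, assuming a simple highest-degree heuristic rather than the paper's combined heuristic: nodes are removed sequentially and the relative size of the largest remaining component is tracked.

    ```python
    import networkx as nx

    def robustness_curve(G, fraction=0.2):
        """Sequentially remove the current highest-degree node (a simple heuristic
        criterion) and record the largest remaining component's relative size."""
        G = G.copy()
        n0 = G.number_of_nodes()
        sizes = []
        for _ in range(int(fraction * n0)):
            node = max(G.degree, key=lambda kv: kv[1])[0]   # recomputed after each removal
            G.remove_node(node)
            giant = max(nx.connected_components(G), key=len)
            sizes.append(len(giant) / n0)
        return sizes

    # Example on a Barabási-Albert graph, one of the model networks used in the paper.
    G = nx.barabasi_albert_graph(1000, 3, seed=0)
    print("giant component after removing 5% of nodes:",
          robustness_curve(G, fraction=0.05)[-1])
    ```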

  11. Improved Bevirimat resistance prediction by combination of structural and sequence-based classifiers

    Directory of Open Access Journals (Sweden)

    Dybowski J Nikolaj

    2011-11-01

    Full Text Available Abstract Background Maturation inhibitors such as Bevirimat are a new class of antiretroviral drugs that hamper the cleavage of HIV-1 proteins into their functionally active forms. They bind to these preproteins and inhibit their cleavage by the HIV-1 protease, resulting in non-functional virus particles. Nevertheless, there exist mutations in this region leading to resistance against Bevirimat. Highly specific and accurate tools to predict resistance to maturation inhibitors can help to identify patients who might benefit from the use of these new drugs. Results We tested several methods to improve Bevirimat resistance prediction in HIV-1. It turned out that combining structural and sequence-based information in classifier ensembles led to accurate and reliable predictions. Moreover, we were able to identify the most crucial regions for Bevirimat resistance computationally, which are in line with experimental results from other studies. Conclusions Our analysis demonstrated the use of machine learning techniques to predict HIV-1 resistance against maturation inhibitors such as Bevirimat. New maturation inhibitors are already under development and might enlarge the arsenal of antiretroviral drugs in the future. Thus, accurate prediction tools are very useful to enable a personalized therapy.
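
    The exact ensemble used in the paper is not reproduced here; the sketch below shows one generic way to combine a sequence-based and a structure-based classifier by soft voting in scikit-learn, with synthetic features standing in for both views.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import FunctionTransformer

    # Synthetic stand-in: columns 0-9 act as sequence-derived features and
    # columns 10-19 as structure-derived descriptors.
    X, y = make_classification(n_samples=400, n_features=20, n_informative=8, random_state=0)

    seq_clf = make_pipeline(FunctionTransformer(lambda X: X[:, :10]),
                            LogisticRegression(max_iter=1000))
    struct_clf = make_pipeline(FunctionTransformer(lambda X: X[:, 10:]),
                               RandomForestClassifier(n_estimators=200, random_state=0))

    # Soft voting averages the two members' predicted probabilities.
    ensemble = VotingClassifier([("seq", seq_clf), ("struct", struct_clf)], voting="soft")
    print("ensemble CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
    ```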

  12. Evaluation of the Diagnostic Power of Thermography in Breast Cancer Using Bayesian Network Classifiers

    Science.gov (United States)

    Nicandro, Cruz-Ramírez; Efrén, Mezura-Montes; María Yaneli, Ameca-Alducin; Enrique, Martín-Del-Campo-Mena; Héctor Gabriel, Acosta-Mesa; Nancy, Pérez-Castro; Alejandro, Guerra-Hernández; Guillermo de Jesús, Hoyos-Rivera; Rocío Erandi, Barrientos-Martínez

    2013-01-01

    Breast cancer is one of the leading causes of death among women worldwide. There are a number of techniques used for diagnosing this disease: mammography, ultrasound, and biopsy, among others. Each of these has well-known advantages and disadvantages. A relatively new method, based on the temperature a tumor may produce, has recently been explored: thermography. In this paper, we will evaluate the diagnostic power of thermography in breast cancer using Bayesian network classifiers. We will show how the information provided by the thermal image can be used in order to characterize patients suspected of having cancer. Our main contribution is the proposal of a score, based on the aforementioned information, that could help distinguish sick patients from healthy ones. Our main results suggest the potential of this technique for such a goal, but also show its main limitations, which have to be overcome before it can be considered an effective complementary diagnostic tool. PMID:23762182

  13. Deep convolutional neural network for classifying Fusarium wilt of radish from unmanned aerial vehicles

    Science.gov (United States)

    Ha, Jin Gwan; Moon, Hyeonjoon; Kwak, Jin Tae; Hassan, Syed Ibrahim; Dang, Minh; Lee, O. New; Park, Han Yong

    2017-10-01

    Recently, unmanned aerial vehicles (UAVs) have gained much attention. In particular, there is a growing interest in utilizing UAVs for agricultural applications such as crop monitoring and management. We propose a computerized system that is capable of detecting Fusarium wilt of radish with high accuracy. The system adopts computer vision and machine learning techniques, including deep learning, to process the images captured by UAVs at low altitudes and to identify the infected radish. The whole radish field is first segmented into three distinctive regions (radish, bare ground, and mulching film) via a softmax classifier and K-means clustering. Then, the identified radish regions are further classified into healthy radish and Fusarium wilt of radish using a deep convolutional neural network (CNN). In identifying radish, bare ground, and mulching film from a radish field, we achieved an accuracy of ≥97.4%. In detecting Fusarium wilt of radish, the CNN obtained an accuracy of 93.3%. It also outperformed the standard machine learning algorithm, obtaining 82.9% accuracy. Therefore, UAVs equipped with computational techniques are promising tools for improving the quality and efficiency of agriculture today.
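
    As an illustration of the segmentation step only (the softmax classifier and the CNN stage are omitted), here is a minimal scikit-learn sketch that clusters the pixels of a synthetic frame into three regions with K-means; the image and the cluster count are assumptions for demonstration.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic stand-in for a low-altitude RGB frame (H x W x 3); in the paper the
    # field is partitioned into radish, bare ground, and mulching film regions.
    rng = np.random.default_rng(0)
    image = rng.random((120, 160, 3))

    pixels = image.reshape(-1, 3)
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
    segmentation = kmeans.labels_.reshape(image.shape[:2])

    # Regions assigned to one cluster could then be cropped and passed to a CNN
    # that separates healthy radish from Fusarium wilt.
    print("pixels per cluster:", np.bincount(segmentation.ravel()))
    ```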

  14. Combining morphological analysis and Bayesian networks for ...

    African Journals Online (AJOL)

    ... how these two computer aided methods may be combined to better facilitate modelling procedures. A simple example is presented, concerning a recent application in the field of environmental decision support. Keywords: Morphological analysis, Bayesian networks, strategic decision support. ORiON Vol. 23 (2) 2007: pp.

  15. Gas chimney detection based on improving the performance of combined multilayer perceptron and support vector classifier

    NARCIS (Netherlands)

    Hashemi, H.; Tax, D.M.J.; Duin, R.P.W.; Javaherian, A.; De Groot, P.

    2008-01-01

    Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a

  16. Comparative study of artificial neural network and multivariate methods to classify Spanish DO rose wines.

    Science.gov (United States)

    Pérez-Magariño, S; Ortega-Heras, M; González-San José, M L; Boger, Z

    2004-04-19

    Classical multivariate analysis techniques, such as factor analysis and stepwise linear discriminant analysis, and the artificial neural network (ANN) method have been applied to the classification of Spanish denomination of origin (DO) rose wines according to their geographical origin. Seventy commercial rose wines from four different Spanish DOs (Ribera del Duero, Rioja, Valdepeñas and La Mancha) and two successive vintages were studied. Nineteen different variables were measured in these wines. The stepwise linear discriminant analysis (SLDA) model selected 10 variables, obtaining a global percentage of correct classification of 98.8% and of global prediction of 97.3%. The ANN model selected seven variables, five of which were also selected by the SLDA model, and it gave 100% correct classification for training and prediction. Both models can therefore be considered satisfactory and acceptable, and the selected variables are useful to classify and differentiate these wines by their origin. Furthermore, the causal index analysis gave information that can be easily explained from an enological point of view.

  17. An Unobtrusive Fall Detection and Alerting System Based on Kalman Filter and Bayes Network Classifier.

    Science.gov (United States)

    He, Jian; Bai, Shuang; Wang, Xiaoyi

    2017-06-16

    Falls are one of the main health risks among the elderly. A fall detection system based on inertial sensors can automatically detect fall events and alert a caregiver for immediate assistance, so as to reduce injuries caused by falls. Nevertheless, most inertial-sensor-based fall detection technologies have focused on the accuracy of detection while neglecting the quantization noise caused by the inertial sensor. In this paper, an activity model based on tri-axial acceleration and gyroscope data is proposed, and the difference between activities of daily living (ADLs) and falls is analyzed. Meanwhile, a Kalman filter is proposed to preprocess the raw data so as to reduce noise. A sliding window and a Bayes network classifier are introduced to develop a wearable fall detection system, which is composed of a wearable motion sensor and a smart phone. The experiment shows that the proposed system distinguishes simulated falls from ADLs with a high accuracy of 95.67%, while sensitivity and specificity are 99.0% and 95.0%, respectively. Furthermore, the smart phone can issue an alarm to caregivers so as to provide timely and accurate help for the elderly as soon as the system detects a fall.
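
    The paper's filter parameters are not given, so the following is only a toy scalar Kalman filter smoothing a simulated acceleration-magnitude signal; the process and measurement variances are assumed values.

    ```python
    import numpy as np

    def kalman_1d(z, q=1e-3, r=0.05):
        """Minimal scalar Kalman filter (random-walk state model) used here only to
        illustrate smoothing of a noisy acceleration-magnitude signal."""
        x, p = z[0], 1.0              # state estimate and its variance
        out = np.empty_like(z)
        for k, zk in enumerate(z):
            p = p + q                 # predict step
            gain = p / (p + r)        # update step
            x = x + gain * (zk - x)
            p = (1 - gain) * p
            out[k] = x
        return out

    # Simulated acceleration magnitude: quiet standing, a fall-like spike, then rest.
    t = np.arange(0, 4, 0.01)
    signal = 1.0 + 2.5 * np.exp(-((t - 2.0) ** 2) / 0.005)
    noisy = signal + np.random.default_rng(0).normal(0, 0.2, t.size)
    smoothed = kalman_1d(noisy)
    print("peak before/after filtering:", round(noisy.max(), 2), round(smoothed.max(), 2))
    ```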

  18. Automatic Assessing of Tremor Severity Using Nonlinear Dynamics, Artificial Neural Networks and Neuro-Fuzzy Classifier

    Directory of Open Access Journals (Sweden)

    GEMAN, O.

    2014-02-01

    Full Text Available Neurological diseases like Alzheimer's disease, epilepsy, Parkinson's disease, multiple sclerosis and other dementias influence the lives of patients, their families and society. Parkinson's disease (PD) is a neurodegenerative disease that occurs due to loss of dopamine, a neurotransmitter, and slow destruction of neurons. The brain area affected by progressive destruction of neurons is responsible for controlling movements, and patients with PD reveal rigid and uncontrollable gestures, postural instability, small handwriting and tremor. Commercial activity-promoting gaming systems such as the Nintendo Wii and Xbox Kinect can be used as tools for the acquisition of tremor, gait or other biomedical signals. They can also aid rehabilitation in clinical settings. This paper emphasizes the use of intelligent optical sensors or accelerometers in biomedical signal acquisition, and of specific nonlinear dynamics parameters or fuzzy logic in Parkinson's disease tremor analysis. Nowadays, there is no screening test for early detection of PD. We therefore investigated a method to predict PD based on image processing of the handwriting of a candidate PD patient. For classification and discrimination between healthy people and people with PD we used Artificial Neural Networks (Radial Basis Function - RBF and Multilayer Perceptron - MLP) and an Adaptive Neuro-Fuzzy Classifier (ANFC). In general, the results may be expressed as a prognostic risk degree of developing PD.

  19. Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction.

    Science.gov (United States)

    Kumudha, P; Venkatesan, R

    Effective prediction of software modules that are prone to defects will enable software developers to achieve efficient allocation of resources and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Generally, software testing is a critical task in the software development process, wherein the aim is to save time and budget by detecting defects as early as possible and to deliver a defect-free product to the customers. This testing phase should be carefully operated in an effective manner to release a defect-free (bug-free) software product to the customers. In order to improve the software testing process, fault prediction methods identify the software parts that are most likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with the early predictors available in the literature for the same datasets.

  20. Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction

    Directory of Open Access Journals (Sweden)

    P. Kumudha

    2016-01-01

    Full Text Available Effective prediction of software modules that are prone to defects will enable software developers to achieve efficient allocation of resources and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Generally, software testing is a critical task in the software development process, wherein the aim is to save time and budget by detecting defects as early as possible and to deliver a defect-free product to the customers. This testing phase should be carefully operated in an effective manner to release a defect-free (bug-free) software product to the customers. In order to improve the software testing process, fault prediction methods identify the software parts that are most likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and the novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with the early predictors available in the literature for the same datasets.

  1. WAVELET ANALYSIS AND NEURAL NETWORK CLASSIFIERS TO DETECT MID-SAGITTAL SECTIONS FOR NUCHAL TRANSLUCENCY MEASUREMENT

    Directory of Open Access Journals (Sweden)

    Giuseppa Sciortino

    2016-04-01

    Full Text Available We propose a methodology to support the physician in the automatic identification of mid-sagittal sections of the fetus in ultrasound videos acquired during the first trimester of pregnancy. A good mid-sagittal section is a key requirement for making a correct measurement of nuchal translucency (NT), which is one of the main markers for screening of chromosomal defects such as trisomy 13, 18 and 21. NT measurement is beyond the scope of this article. The proposed methodology is mainly based on wavelet analysis and neural network classifiers to detect the jawbone, and on radial symmetry analysis to detect the choroid plexus. These steps allow the identification of the frames which represent correct mid-sagittal sections to be processed. The performance of the proposed methodology was analyzed on 3000 random frames uniformly extracted from 10 real clinical ultrasound videos. With respect to a ground truth provided by an expert physician, we obtained a true positive rate, a true negative rate and a balanced accuracy equal to 87.26%, 94.98% and 91.12%, respectively.

  2. NeBcon: protein contact map prediction using neural network training coupled with naïve Bayes classifiers.

    Science.gov (United States)

    He, Baoji; Mortuza, S M; Wang, Yanting; Shen, Hong-Bin; Zhang, Yang

    2017-08-01

    Recent CASP experiments have witnessed exciting progress on folding large-size non-homologous proteins with the assistance of co-evolution based contact predictions. The success is however anecdotal due to the requirement of the contact prediction methods for a high volume of sequence homologs that are not available for most non-homologous protein targets. Development of efficient methods that can generate balanced and reliable contact maps for different types of protein targets is essential to enhance the success rate of ab initio protein structure prediction. We developed a new pipeline, NeBcon, which uses the naïve Bayes classifier (NBC) theorem to combine eight state-of-the-art contact methods that are built from co-evolution and machine learning approaches. The posterior probabilities of the NBC model are then trained with intrinsic structural features through neural network learning for the final contact map prediction. NeBcon was tested on 98 non-redundant proteins, where it improves the accuracy of the best co-evolution based meta-server predictor by 22%; the magnitude of the improvement increases to 45% for the hard targets that lack sequence and structural homologs in the databases. Detailed data analysis showed that the major contribution to the improvement is the optimized NBC combination of the complementary information from both co-evolution and machine learning predictions. The neural network training also helps to improve the coupling of the NBC posterior probability and the intrinsic structural features, which were found to be particularly important for the proteins that do not have a sufficient number of homologous sequences to derive reliable co-evolution profiles. On-line server and standalone package of the program are available at http://zhanglab.ccmb.med.umich.edu/NeBcon/ . zhng@umich.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For
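
    A schematic NumPy sketch of the naive Bayes combination idea (before the neural network stage): the per-predictor likelihood tables are assumed and purely illustrative, and the score discretization is a simplification rather than what NeBcon actually uses.

    ```python
    import numpy as np

    def nbc_combine(scores, lik_contact, lik_noncontact, prior=0.02):
        """Combine per-predictor contact scores for one residue pair under a naive
        Bayes assumption and return the posterior contact probability.

        scores         : array (n_predictors,) of raw scores in [0, 1]
        lik_contact    : P(score bin | contact), shape (n_predictors, 10)
        lik_noncontact : P(score bin | non-contact), same shape
        prior          : prior probability that the pair is in contact
        """
        bins = np.minimum((scores * 10).astype(int), 9)      # crude 10-bin discretization
        idx = np.arange(scores.size)
        log_odds = np.log(prior / (1 - prior))
        log_odds += np.sum(np.log(lik_contact[idx, bins]) - np.log(lik_noncontact[idx, bins]))
        return 1 / (1 + np.exp(-log_odds))

    # Illustrative likelihood tables for 3 predictors x 10 score bins (assumed values).
    rng = np.random.default_rng(0)
    lik_c = rng.dirichlet(np.ones(10), size=3)
    lik_n = rng.dirichlet(np.ones(10), size=3)
    print(nbc_combine(np.array([0.9, 0.7, 0.8]), lik_c, lik_n))
    ```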

  3. Influence of Acoustic Feedback on the Learning Strategies of Neural Network-Based Sound Classifiers in Digital Hearing Aids

    Directory of Open Access Journals (Sweden)

    Lorena Álvarez

    2009-01-01

    Full Text Available Sound classifiers embedded in digital hearing aids are usually designed by using sound databases that do not include the distortions associated with the feedback that often occurs when these devices have to work at high gain and low gain margin to oscillation. The consequence is that the classifier learns inappropriate sound patterns. In this paper we explore the feasibility of using different sound databases (generated according to 18 configurations of real patients) and a variety of learning strategies for neural networks in an effort to reduce the probability of erroneous classification. The experimental work basically points out that the proposed methods assist the neural network-based classifier in reducing its error probability by more than 18%. This helps enhance the elderly user's comfort: the hearing aid automatically selects, with a higher success probability, the program that is best adapted to the changing acoustic environment the user is facing.

  4. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers.

    Science.gov (United States)

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-04-15

    Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique in identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject "at rest"). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing "signal" (brain activity) can be distinguished from the "noise" components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX ("FMRIB's ICA-based X-noiseifier"), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the original

  5. A Novel HMM Distributed Classifier for the Detection of Gait Phases by Means of a Wearable Inertial Sensor Network

    Directory of Open Access Journals (Sweden)

    Juri Taborri

    2014-09-01

    Full Text Available In this work, we decided to apply a hierarchical weighted decision, proposed and used in other research fields, to the recognition of gait phases. The developed and validated novel distributed classifier is based on hierarchical weighted decisions from the outputs of scalar Hidden Markov Models (HMMs) applied to the angular velocities of the foot, shank, and thigh. The angular velocities of ten healthy subjects were acquired via three uni-axial gyroscopes embedded in inertial measurement units (IMUs) during one walking task, repeated three times, on a treadmill. After validating the novel distributed classifier and the scalar and vectorial classifiers already proposed in the literature with a cross-validation, the classifiers were compared for sensitivity, specificity, and computational load for all combinations of the three targeted anatomical segments. Moreover, the performance of the novel distributed classifier in the estimation of gait variability, in terms of mean time and coefficient of variation, was evaluated. The highest values of specificity and sensitivity (>0.98) for the three classifiers examined here were obtained when the angular velocity of the foot was processed. The distributed and vectorial classifiers reached acceptable values (>0.95) when the angular velocities of the shank and thigh were analyzed. The distributed and scalar classifiers showed values of computational load about 100 times lower than that obtained with the vectorial classifier. In addition, the distributed classifiers showed excellent reliability for the evaluation of mean time and good/excellent reliability for the coefficient of variation. In conclusion, due to its better performance and small computational load, the novel distributed classifier proposed here can be implemented in real-time applications of gait phase recognition, such as evaluating gait variability in patients or controlling active orthoses for the recovery of mobility of lower limb joints.

  6. Deep Convolutional Neural Network Analysis of Flow Imaging Microscopy Data to Classify Subvisible Particles in Protein Formulations.

    Science.gov (United States)

    Calderon, Christopher P; Daniels, Austin L; Randolph, Theodore W

    2018-04-01

    Flow-imaging microscopy (FIM) is commonly used to characterize subvisible particles in therapeutic protein formulations. Although pharmaceutical companies often collect large repositories of FIM images of protein therapeutic products, current state-of-the-art methods for analyzing these images rely on low-dimensional lists of "morphological features" to characterize particles that ignore much of the information encoded in the existing image databases. Deep convolutional neural networks (sometimes referred to as "CNNs" or "ConvNets") have demonstrated the ability to extract predictive information from raw macroscopic image data without requiring the selection or specification of "morphological features" in a variety of tasks. However, the inherent heterogeneity of protein therapeutics and optical phenomena associated with subvisible FIM particle measurements introduce new challenges regarding the application of ConvNets to FIM image analysis. We demonstrate a supervised learning technique leveraging ConvNets to extract information from raw images in order to predict the process conditions or stress states (freeze-thawing, mechanical shaking, etc.) that produced a variety of different protein particles. We demonstrate that our new classifier, in combination with a "data pooling" strategy, can nearly perfectly differentiate between protein formulations in a variety of scenarios of relevance to protein therapeutics quality control and process monitoring using as few as 20 particles imaged via FIM. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  7. Time-aware service-classified spectrum defragmentation algorithm for flex-grid optical networks

    Science.gov (United States)

    Qiu, Yang; Xu, Jing

    2018-01-01

    By employing sophisticated routing and spectrum assignment (RSA) algorithms together with a finer spectrum granularity (namely frequency slot) in resource allocation procedures, flex-grid optical networks can accommodate diverse kinds of services with high spectrum-allocation flexibility and resource-utilization efficiency. However, the continuity and the contiguity constraints in spectrum allocation procedures may always induce some isolated, small-sized, and unoccupied spectral blocks (known as spectrum fragments) in flex-grid optical networks. Although these spectrum fragments are left unoccupied, they can hardly be utilized by the subsequent service requests directly because of their spectral characteristics and the constraints in spectrum allocation. In this way, the existence of spectrum fragments may exhaust the available spectrum resources for a coming service request and thus worsens the networking performance. Therefore, many reactive defragmentation algorithms have been proposed to handle the fragmented spectrum resources via re-optimizing the routing paths and the spectrum resources for the existing services. But the routing-path and the spectrum-resource re-optimization in reactive defragmentation algorithms may possibly disrupt the traffic of the existing services and require extra components. By comparison, some proactive defragmentation algorithms (e.g. fragmentation-aware algorithms) were proposed to suppress spectrum fragments from their generation instead of handling the fragmented spectrum resources. Although these proactive defragmentation algorithms induced no traffic disruption and required no extra components, they always left the generated spectrum fragments unhandled, which greatly affected their efficiency in spectrum defragmentation. In this paper, by comprehensively considering the characteristics of both the reactive and the proactive defragmentation algorithms, we proposed a time-aware service-classified (TASC) spectrum

  8. COMPOSITE MATERIALS' CONDITION CLASSIFIER BASED ON NEURAL NETWORK OF ADAPTIVE RESONANCE THEORY

    Directory of Open Access Journals (Sweden)

    В. Єременко

    2012-04-01

    Full Text Available In this article it is proposed to use a modified Fuzzy-ART neural network for classification of the technical condition of composite materials. This neural network is used as a part of a nondestructive testing system to perform diagnosis of composite materials and provides cluster analysis and classification of the units under test. The advantage of the described neural network and the system in general is its flexible architecture, high performance and high reliability of data processing.

  9. Using Unsupervised Learning to Improve the Naive Bayes Classifier for Wireless Sensor Networks

    NARCIS (Netherlands)

    Zwartjes, G.J.; Havinga, Paul J.M.; Smit, Gerardus Johannes Maria; Hurink, Johann L.

    2012-01-01

    Online processing is essential for many sensor network applications. Sensor nodes can sample far more data than what can practically be transmitted using state of the art sensor network radios. Online processing, however, is complicated due to limited resources of individual nodes. The naive Bayes

  10. Classifying fibromyalgia patients according to severity: the combined index of severity in fibromyalgia.

    Science.gov (United States)

    Rivera, J; Vallejo, M A; Offenbächer, M

    2014-12-01

    The aim of this study was to establish the cutoff points in the Combined Index of Fibromyalgia Severity (ICAF) questionnaire which allow classification of patients by severity, and to evaluate its application in clinical practice. The cutoff points were calculated using the area under the ROC curve in two cohorts of patients. Three visits, at baseline, the fourth month and the 15th month, were considered. The external criterion for grading severity was the number of drugs consumed by the patient. Sequential changes were calculated and compared. Correlations with drug consumption and comparisons of severity between patients with different types of coping were also calculated. The correlation between the number of drugs and the ICAF total score was significant. Three cutoff points were established: absence of Fibromyalgia (FM), 50, with the following distribution of severity: absence in 0.4 %, mild in 18.7 %, moderate in 32.5 % and severe in 48.4 % of the patients. There were significant differences between groups. The treatment under daily clinical conditions showed a significant improvement of the patients which was maintained at the end of follow-up. There was a 17 % reduction in the severe category. The patients with a more passive coping factor showed the highest scores on the remaining scales and were more prevalent in the severe category. The patients with a predominance of the emotional factor showed a better response at the end of follow-up. The established cutoff points allow the classification of FM patients by severity, determination of the prognosis and prediction of the response to treatment.

  11. Graphic Symbol Recognition using Graph Based Signature and Bayesian Network Classifier

    OpenAIRE

    Luqman, Muhammad Muzzamil; Brouard, Thierry; Ramel, Jean-Yves

    2010-01-01

    We present a new approach for recognition of complex graphic symbols in technical documents. Graphic symbol recognition is a well known challenge in the field of document image analysis and is at heart of most graphic recognition systems. Our method uses structural approach for symbol representation and statistical classifier for symbol recognition. In our system we represent symbols by their graph based signatures: a graphic symbol is vectorized and is converted to an attributed relational g...

  12. Detecting Cyber-Attacks on Wireless Mobile Networks Using Multicriterion Fuzzy Classifier with Genetic Attribute Selection

    Directory of Open Access Journals (Sweden)

    El-Sayed M. El-Alfy

    2015-01-01

    Full Text Available With the proliferation of wireless and mobile network infrastructures and capabilities, a wide range of exploitable vulnerabilities emerges due to the use of multivendor and multidomain cross-network services for signaling and transport of Internet- and wireless-based data. Consequently, the rates and types of cyber-attacks have grown considerably and current security countermeasures for protecting information and communication may no longer be sufficient. In this paper, we investigate a novel methodology based on multicriterion decision making and fuzzy classification that can provide a viable second line of defense for mitigating cyber-attacks. The proposed approach has the advantage of dealing with various types and sizes of attributes related to network traffic such as basic packet headers, content, and time. To increase the effectiveness and construct optimal models, we augmented the proposed approach with a genetic attribute selection strategy. This allows efficient and simpler models which can be replicated at various network components to cooperatively detect and report malicious behaviors. Using three datasets covering a variety of network attacks, the performance enhancements due to the proposed approach are manifested in terms of detection errors and model construction times.

  13. A modular neural network classifier for the recognition of occluded characters in automatic license plate reading

    NARCIS (Netherlands)

    Nijhuis, JAG; Broersma, A; Spaanenburg, L; Ruan, D; Dhondt, P; Kerre, EE

    2002-01-01

    Occlusion is the most common reason for lowered recognition yield in free-flow license-plate reading systems. (Non-)occluded characters can readily be learned in separate neural networks but not together. Even a small proportion of occluded characters in the training set will already significantly

  14. An Estimation of QoS for Classified Based Approach and Nonclassified Based Approach of Wireless Agriculture Monitoring Network Using a Network Model

    Directory of Open Access Journals (Sweden)

    Ismail Ahmedy

    2017-01-01

    Full Text Available A Wireless Sensor Network (WSN) can facilitate the process of monitoring crops through an agriculture monitoring network. However, it is challenging to implement an agriculture monitoring network over a large and widely distributed area. Typically, a large and dense network in the form of a multihop network is used to establish communication between source and destination. Such a network continuously monitors the crops without sensitivity classification, which can lead to message collisions and packet drops. Retransmissions of dropped messages increase the energy consumption and the delay. Therefore, to ensure a high quality of service (QoS), we propose an agriculture monitoring network that monitors the crops based on their sensitivity conditions, wherein crops with higher sensitivity are monitored constantly, while less sensitive crops are monitored occasionally. This approach selects a set of nodes rather than utilizing all the nodes in the network, which reduces the power consumption of each node and the network delay. The QoS of the proposed classified-based approach is compared with the nonclassified approach in two scenarios; the backoff periods are changed in the first scenario, while the numbers of nodes are changed in the second scenario. The simulation results demonstrate that the proposed approach outperforms the nonclassified approach in different test scenarios.

  15. Multi-categorical deep learning neural network to classify retinal images: A pilot study employing small database.

    Science.gov (United States)

    Choi, Joon Yul; Yoo, Tae Keun; Seo, Jeong Gi; Kwak, Jiyong; Um, Terry Taewoong; Rim, Tyler Hyungtaek

    2017-01-01

    Deep learning is emerging as a powerful tool for analyzing medical images. Retinal disease detection using computer-aided diagnosis from fundus images has emerged as a new method. We applied a deep learning convolutional neural network, using MatConvNet, for automated detection of multiple retinal diseases with fundus photographs from the STructured Analysis of the REtina (STARE) database. The dataset was built by expanding data on 10 categories, including normal retina and nine retinal diseases. The optimal outcomes were acquired by using random forest transfer learning based on the VGG-19 architecture. The classification results depended greatly on the number of categories. As the number of categories increased, the performance of the deep learning models diminished. When all 10 categories were included, we obtained results with an accuracy of 30.5%, relative classifier information (RCI) of 0.052, and Cohen's kappa of 0.224. Considering the three integrated categories of normal, background diabetic retinopathy, and dry age-related macular degeneration, the multi-categorical classifier showed an accuracy of 72.8%, 0.283 RCI, and 0.577 kappa. In addition, several ensemble classifiers enhanced the multi-categorical classification performance. Transfer learning incorporated with an ensemble classifier using a clustering and voting approach presented the best performance, with an accuracy of 36.7%, 0.053 RCI, and 0.225 kappa in the 10-retinal-disease classification problem. First, due to the small size of the datasets, the deep learning techniques in this study were too ineffective to be applied in clinics, where numerous patients suffering from various types of retinal disorders visit for diagnosis and treatment. Second, we found that transfer learning incorporated with ensemble classifiers can improve the classification performance in order to detect multi-categorical retinal diseases. Further studies should confirm the effectiveness of the algorithms with large datasets obtained from hospitals.
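
    A hedged sketch of the general recipe (pretrained deep features plus an ensemble classifier), assuming TensorFlow/Keras for an ImageNet-pretrained VGG-19 and scikit-learn for the voting ensemble; the images and labels are synthetic stand-ins, and loading the pretrained weights requires network access the first time.

    ```python
    import numpy as np
    import tensorflow as tf
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression

    # Feature extractor: ImageNet-pretrained VGG-19 without its classification head.
    base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                       pooling="avg", input_shape=(224, 224, 3))

    def extract_features(images):
        x = tf.keras.applications.vgg19.preprocess_input(images.astype("float32"))
        return base.predict(x, verbose=0)

    # Synthetic stand-in for fundus photographs and their disease-category labels.
    rng = np.random.default_rng(0)
    images = rng.integers(0, 255, size=(40, 224, 224, 3))
    labels = rng.integers(0, 3, size=40)

    features = extract_features(images)
    ensemble = VotingClassifier([("lr", LogisticRegression(max_iter=1000)),
                                 ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
                                voting="soft")
    ensemble.fit(features, labels)
    print("training accuracy (toy data):", ensemble.score(features, labels))
    ```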

  16. Analysis of the Earth's magnetosphere states using the algorithm of adaptive construction of hierarchical neural network classifiers

    Science.gov (United States)

    Dolenko, Sergey; Svetlov, Vsevolod; Isaev, Igor; Myagkova, Irina

    2017-10-01

    This paper presents an analysis of the results of clustering the array of increases in the flux of relativistic electrons in the outer radiation belt of the Earth by two clustering algorithms. One of them is the algorithm for adaptive construction of hierarchical neural network classifiers developed by the authors, applied in clustering mode; the other one is the well-known k-means clustering algorithm. The obtained clusters are analysed from the point of view of their possible matching to characteristic types of events, and the partitions obtained by both methods are compared with each other.

  17. Picasso: A Modular Framework for Visualizing the Learning Process of Neural Network Image Classifiers

    Directory of Open Access Journals (Sweden)

    Ryan Henderson

    2017-09-01

    Full Text Available Picasso is a free open-source (Eclipse Public License) web application written in Python for rendering standard visualizations useful for analyzing convolutional neural networks. Picasso ships with occlusion maps and saliency maps, two visualizations which help reveal issues that evaluation metrics like loss and accuracy might hide: for example, learning a proxy classification task. Picasso works with the Tensorflow deep learning framework, and Keras (when the model can be loaded into the Tensorflow backend). Picasso can be used with minimal configuration by deep learning researchers and engineers alike across various neural network architectures. Adding new visualizations is simple: the user can specify their visualization code and HTML template separately from the application code.

  18. Crystal surface analysis using matrix textural features classified by a Probabilistic Neural Network

    International Nuclear Information System (INIS)

    Sawyer, C.R.; Quach, V.T.; Nason, D.; van den Berg, L.

    1991-01-01

    A system is under development in which the surface quality of a growing bulk mercuric iodide crystal is monitored by video camera at regular intervals for early detection of growth irregularities. Mercuric iodide single crystals are employed in radiation detectors. A microcomputer system is used for image capture and processing. The digitized image is divided into multiple overlapping subimages and features are extracted from each subimage based on statistical measures of the gray-tone distribution, according to the method of Haralick [1]. Twenty parameters are derived from each subimage and presented to a Probabilistic Neural Network (PNN) [2] for classification. This number of parameters was found to be optimal for the system. The PNN is a hierarchical, feed-forward network that can be rapidly reconfigured as additional training data become available. Training data are gathered by reviewing digital images of many crystals during their growth cycle and compiling two sets of images, those with and without irregularities. 6 refs., 4 figs
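
    A rough sketch of the feature-extraction idea using gray-tone co-occurrence statistics, assuming scikit-image and scikit-learn; a distance-weighted k-NN stands in for the PNN, and the subimages and labels are synthetic.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops   # 'greycomatrix' in older scikit-image
    from sklearn.neighbors import KNeighborsClassifier

    def texture_features(subimage, levels=32):
        """Haralick-style statistics of the gray-tone co-occurrence matrix of a subimage."""
        q = (subimage // (256 // levels)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2], levels=levels,
                            symmetric=True, normed=True)
        props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # Toy "good surface" vs "irregular surface" subimages (random stand-ins).
    rng = np.random.default_rng(0)
    smooth = [rng.integers(100, 120, (32, 32)) for _ in range(20)]
    rough = [rng.integers(0, 255, (32, 32)) for _ in range(20)]
    X = np.array([texture_features(img) for img in smooth + rough])
    y = np.array([0] * 20 + [1] * 20)

    # A PNN is essentially a kernel-density classifier; a distance-weighted k-NN is a
    # crude stand-in here.
    clf = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```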

  19. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    Science.gov (United States)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  20. Improving computer-aided diagnosis of interstitial disease in chest radiographs by combining one-class and two-class classifiers.

    NARCIS (Netherlands)

    Arzhaeva, Y.; Tax, D.; Van Ginneken, B.

    2006-01-01

    In this paper we compare and combine two distinct pattern classification approaches to the automated detection of regions with interstitial abnormalities in frontal chest radiographs. Standard two-class classifiers and recently developed one-class classifiers are considered. The one-class problem is

  1. Robust Template Decomposition without Weight Restriction for Cellular Neural Networks Implementing Arbitrary Boolean Functions Using Support Vector Classifiers

    Directory of Open Access Journals (Sweden)

    Yih-Lon Lin

    2013-01-01

    Full Text Available If the given Boolean function is linearly separable, a robust uncoupled cellular neural network can be designed as a maximal margin classifier. On the other hand, if the given Boolean function is linearly separable but has a small geometric margin, or it is not linearly separable, a popular approach is to find a sequence of robust uncoupled cellular neural networks implementing the given Boolean function. In past research works using this approach, the control template parameters and thresholds were restricted to take values only from a given finite set of integers, which is certainly unnecessary for the template design. In this study, we try to remove this restriction. Minterm- and maxterm-based decomposition algorithms utilizing soft margin and maximal margin support vector classifiers are proposed to design a sequence of robust templates implementing an arbitrary Boolean function. Several illustrative examples are simulated to demonstrate the efficiency of the proposed method by comparing our results with those produced by other decomposition methods with restricted weights.
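
    For the linearly separable case, a small scikit-learn sketch showing how a (near) hard-margin linear SVM recovers maximal-margin weights for a Boolean function; the 2-input AND function and the ±1 coding are chosen only for illustration.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Truth table of a linearly separable Boolean function (here, 2-input AND)
    # with inputs and outputs coded as -1/+1.
    X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
    y = np.array([-1, -1, -1, 1])

    # A linear SVM with a very large C approximates the hard-margin, maximal-margin
    # separator; its weights and bias can be read off as candidate template parameters.
    svm = SVC(kernel="linear", C=1e6).fit(X, y)
    w, b = svm.coef_[0], svm.intercept_[0]
    print("weights:", w, "bias:", b)
    print("geometric margin:", 2 / np.linalg.norm(w))
    print("reproduces AND:", np.array_equal(np.sign(X @ w + b), y))
    ```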

  2. Generalized Network Psychometrics : Combining Network and Latent Variable Models

    NARCIS (Netherlands)

    Epskamp, S.; Rhemtulla, M.; Borsboom, D.

    2017-01-01

    We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between

  3. A Neural Network Classifier Model for Forecasting Safety Behavior at Workplaces

    Directory of Open Access Journals (Sweden)

    Fakhradin Ghasemi

    2017-07-01

    Full Text Available The construction industry is notorious for having an unacceptable rate of fatal accidents. Unsafe behavior has been recognized as the main cause of most accidents occurring at workplaces, particularly construction sites. Having a predictive model of safety behavior can be helpful in preventing construction accidents. The aim of the present study was to build a predictive model of unsafe behavior using the artificial neural network approach. A brief literature review was conducted on factors affecting safe behavior at workplaces and nine factors were selected to be included in the study. Data were gathered using a validated questionnaire from several construction sites. A multilayer perceptron approach was utilized for constructing the desired neural network. Several models with various architectures were tested to find the best one. Sensitivity analysis was conducted to find the most influential factors. The model with one hidden layer containing fourteen hidden neurons demonstrated the best performance (Sum of Squared Errors = 6.73). The error rate of the model was approximately 21 percent. The results of the sensitivity analysis showed that safety attitude, safety knowledge, supportive environment, and management commitment had the highest effects on safety behavior, while the effects of resource allocation and perceived work pressure were lower than those of the other factors. The complex nature of human behavior at workplaces and the presence of many influential factors make it difficult to achieve a model with perfect performance.

  4. Learning sparse models for a dynamic Bayesian network classifier of protein secondary structure

    Directory of Open Access Journals (Sweden)

    Bilmes Jeff

    2011-05-01

    Full Text Available Abstract Background Protein secondary structure prediction provides insight into protein function and is a valuable preliminary step for predicting the 3D structure of a protein. Dynamic Bayesian networks (DBNs) and support vector machines (SVMs) have been shown to provide state-of-the-art performance in secondary structure prediction. As the size of the protein database grows, it becomes feasible to use a richer model in an effort to capture subtle correlations among the amino acids and the predicted labels. In this context, it is beneficial to derive sparse models that discourage over-fitting and provide biological insight. Results In this paper, we first show that we are able to obtain accurate secondary structure predictions. Our per-residue accuracy on a well established and difficult benchmark (CB513) is 80.3%, which is comparable to the state-of-the-art evaluated on this dataset. We then introduce an algorithm for sparsifying the parameters of a DBN. Using this algorithm, we can automatically remove up to 70-95% of the parameters of a DBN while maintaining the same level of predictive accuracy on the SD576 set. At 90% sparsity, we are able to compute predictions three times faster than a fully dense model evaluated on the SD576 set. We also demonstrate, using simulated data, that the algorithm is able to recover true sparse structures with high accuracy, and using real data, that the sparse model identifies known correlation structure (local and non-local) related to different classes of secondary structure elements. Conclusions We present a secondary structure prediction method that employs dynamic Bayesian networks and support vector machines. We also introduce an algorithm for sparsifying the parameters of the dynamic Bayesian network. The sparsification approach yields a significant speed-up in generating predictions, and we demonstrate that the amino acid correlations identified by the algorithm correspond to several known features of

  5. Muon Neutrino Disappearance in NOvA with a Deep Convolutional Neural Network Classifier

    Energy Technology Data Exchange (ETDEWEB)

    Rocco, Dominick Rosario [Minnesota U.

    2016-03-01

    The NuMI Off-axis Neutrino Appearance Experiment (NOvA) is designed to study neutrino oscillation in the NuMI (Neutrinos at the Main Injector) beam. NOvA observes neutrino oscillation using two detectors separated by a baseline of 810 km; a 14 kt Far Detector in Ash River, MN and a functionally identical 0.3 kt Near Detector at Fermilab. The experiment aims to provide new measurements of Δm² and θ₂₃ and has the potential to determine the neutrino mass hierarchy as well as observe CP violation in the neutrino sector. Essential to these analyses is the classification of neutrino interaction events in the NOvA detectors. Raw detector output from NOvA is interpretable as a pair of images which provide orthogonal views of particle interactions. A recent advance in the field of computer vision is the advent of convolutional neural networks, which have delivered top results in the latest image recognition contests. This work presents an approach novel to particle physics analysis in which a convolutional neural network is used for classification of particle interactions. The approach has been demonstrated to improve the signal efficiency and purity of the event selection, and thus the physics sensitivity. Early NOvA data have been analyzed (2.74 × 10²⁰ POT, 14 kt equivalent) to provide new best-fit measurements of sin²(θ₂₃) = 0.43 (with a statistically degenerate complement near 0.60) and |Δm²| = 2.48 × 10⁻³ eV².

  6. Classifying U.S. Army Military Occupational Specialties using the Occupational Information Network.

    Science.gov (United States)

    Gadermann, Anne M; Heeringa, Steven G; Stein, Murray B; Ursano, Robert J; Colpe, Lisa J; Fullerton, Carol S; Gilman, Stephen E; Gruber, Michael J; Nock, Matthew K; Rosellini, Anthony J; Sampson, Nancy A; Schoenbaum, Michael; Zaslavsky, Alan M; Kessler, Ronald C

    2014-07-01

    To derive job condition scales for future studies of the effects of job conditions on soldier health and job functioning across Army Military Occupation Specialties (MOSs) and Areas of Concentration (AOCs) using Department of Labor (DoL) Occupational Information Network (O*NET) ratings. A consolidated administrative dataset was created for the "Army Study to Assess Risk and Resilience in Servicemembers" (Army STARRS) containing all soldiers on active duty between 2004 and 2009. A crosswalk between civilian occupations and MOS/AOCs (created by DoL and the Defense Manpower Data Center) was augmented to assign scores on all 246 O*NET dimensions to each soldier in the dataset. Principal components analysis was used to summarize these dimensions. Three correlated components explained the majority of O*NET dimension variance: "physical demands" (20.9% of variance), "interpersonal complexity" (17.5%), and "substantive complexity" (15.0%). Although broadly consistent with civilian studies, several discrepancies were found with civilian results reflecting potentially important differences in the structure of job conditions in the Army versus the civilian labor force. Principal components scores for these scales provide a parsimonious characterization of key job conditions that can be used in future studies of the effects of MOS/AOC job conditions on diverse outcomes. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.

  7. Classifying and profiling Social Networking Site users: a latent segmentation approach.

    Science.gov (United States)

    Alarcón-del-Amo, María-del-Carmen; Lorenzo-Romero, Carlota; Gómez-Borja, Miguel-Ángel

    2011-09-01

    Social Networking Sites (SNSs) have shown exponential growth in the last years. The first step for an efficient use of SNSs stems from an understanding of the individuals' behaviors within these sites. In this research, we have obtained a typology of SNS users through a latent segmentation approach, based on the frequency with which users perform different activities within the SNSs, sociodemographic variables, experience in SNSs, and dimensions related to their interaction patterns. Four different segments have been obtained. The "introvert" and "novel" users are the more occasional ones. They use SNSs mainly to communicate with friends, although "introverts" are more passive users. The "versatile" user performs different activities, although occasionally. Finally, the "expert-communicator" performs a greater variety of activities with a higher frequency. They tend to perform some marketing-related activities such as commenting on ads or gathering information about products and brands. Companies can take advantage of these segmentation schemes in different ways: first, by tracking and monitoring information interchange between users regarding their products and brands. Second, they should match the SNS users' profiles with their market targets to use SNSs as marketing tools. Finally, for most businesses, the expert users could be interesting opinion leaders and potential brand influencers.

  8. A neural network model which combines unsupervised and supervised learning.

    Science.gov (United States)

    Hsieh, K R; Chen, W T

    1993-01-01

    A neural network that combines unsupervised and supervised learning for pattern recognition is proposed. The network is a hierarchical self-organization map, which is trained by unsupervised learning at first. When the network fails to recognize similar patterns, supervised learning is applied to teach the network to give different scaling factors for different features so as to discriminate similar patterns. Simulation results show that the model obtains good generalization capability as well as sharp discrimination between similar patterns.

  9. Combination of the Manifold Dimensionality Reduction Methods with Least Squares Support vector machines for Classifying the Species of Sorghum Seeds

    Science.gov (United States)

    Chen, Y. M.; Lin, P.; He, J. Q.; He, Y.; Li, X.L.

    2016-01-01

    This study was carried out for rapid and noninvasive determination of the class of sorghum species by using manifold dimensionality reduction (MDR) methods and the nonlinear regression method of least squares support vector machines (LS-SVM) combined with mid-infrared spectroscopy (MIRS) techniques. The methods of the Durbin and Run tests of the augmented partial residual plot (APaRP) were performed to diagnose the nonlinearity of the raw spectral data. The nonlinear MDR methods of isometric feature mapping (ISOMAP), local linear embedding, laplacian eigenmaps and local tangent space alignment, as well as the linear MDR methods of principal component analysis and metric multidimensional scaling, were employed to extract the feature variables. The extracted characteristic variables were utilized as the input of the LS-SVM and established the relationship between the spectra and the target attributes. The mean average precision (MAP) scores and prediction accuracy were respectively used to evaluate the performance of the models. The prediction results showed that the ISOMAP-LS-SVM model obtained the best classification performance, where the MAP scores and prediction accuracy were 0.947 and 92.86%, respectively. It can be concluded that the ISOMAP-LS-SVM model combined with the MIRS technique has the potential to classify the species of sorghum with reasonable accuracy. PMID:26817580
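
    A hedged scikit-learn sketch of the ISOMAP-plus-kernel-machine pipeline on synthetic "spectra"; an RBF-kernel SVC is used as a stand-in because scikit-learn has no LS-SVM, and the dimensions and hyperparameters are assumptions.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.manifold import Isomap
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for mid-infrared spectra of sorghum seeds (rows: samples,
    # columns: wavenumber absorbances) with species labels.
    X, y = make_classification(n_samples=200, n_features=100, n_informative=20,
                               n_classes=3, n_clusters_per_class=1, random_state=0)

    # ISOMAP compresses the spectra onto a low-dimensional manifold; the RBF-kernel SVC
    # plays the role of the LS-SVM classification step of the paper.
    model = make_pipeline(StandardScaler(),
                          Isomap(n_components=10, n_neighbors=12),
                          SVC(kernel="rbf", C=10, gamma="scale"))
    print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
    ```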

  10. Combination of the Manifold Dimensionality Reduction Methods with Least Squares Support vector machines for Classifying the Species of Sorghum Seeds.

    Science.gov (United States)

    Chen, Y M; Lin, P; He, J Q; He, Y; Li, X L

    2016-01-28

    This study was carried out for rapid and noninvasive determination of the class of sorghum species by using manifold dimensionality reduction (MDR) methods and the nonlinear regression method of least squares support vector machines (LS-SVM) combined with mid-infrared spectroscopy (MIRS) techniques. The Durbin and run tests of the augmented partial residual plot (APaRP) were performed to diagnose the nonlinearity of the raw spectral data. The nonlinear MDR methods of isometric feature mapping (ISOMAP), locally linear embedding, Laplacian eigenmaps and local tangent space alignment, as well as the linear MDR methods of principal component analysis and metric multidimensional scaling, were employed to extract the feature variables. The extracted characteristic variables were used as the input of LS-SVM to establish the relationship between the spectra and the target attributes. The mean average precision (MAP) scores and prediction accuracy were respectively used to evaluate the performance of the models. The prediction results showed that the ISOMAP-LS-SVM model obtained the best classification performance, with MAP scores and prediction accuracy of 0.947 and 92.86%, respectively. It can be concluded that the ISOMAP-LS-SVM model combined with the MIRS technique has the potential to classify the species of sorghum with reasonable accuracy.
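
    As a rough illustration of the pipeline described in the two records above, the sketch below chains a nonlinear manifold reduction step with a kernel classifier in scikit-learn. It is an assumption-laden stand-in: Isomap plays the MDR role, an RBF-kernel SVC substitutes for LS-SVM (which scikit-learn does not provide), and the spectra and labels are random placeholders.

```python
# Hypothetical sketch: manifold dimensionality reduction followed by a kernel classifier.
# scikit-learn's SVC (RBF kernel) stands in for LS-SVM, which it does not implement.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 400))        # placeholder for MIR spectra (samples x wavenumbers)
y = rng.integers(0, 3, size=120)       # placeholder sorghum species labels

model = make_pipeline(StandardScaler(),
                      Isomap(n_neighbors=10, n_components=8),  # nonlinear MDR step
                      SVC(kernel="rbf", C=10.0, gamma="scale"))
print(cross_val_score(model, X, y, cv=5).mean())
```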

  11. Convolutional Neural Networks with Batch Normalization for Classifying Hi-hat, Snare, and Bass Percussion Sound Samples

    DEFF Research Database (Denmark)

    Gajhede, Nicolai; Beck, Oliver; Purwins, Hendrik

    2016-01-01

    After having revolutionized image and speech processing, convolutional neural networks (CNN) are now starting to become more and more successful in music information retrieval as well. We compare four CNN types for classifying a dataset of more than 3000 acoustic and synthesized samples of the most prominent drum set instruments (bass, snare, hi-hat). We use the Mel scale log magnitudes (MLS) as a representation for the input of the CNN. We compare the classification results of 1) a CNN (3 conv/max-pool layers and 2 fully connected layers) without drop-out and batch normalization vs. three variants, 2) with drop-out, 3) with batch normalization (BN), and 4) with both drop-out and BN. The CNNs with BN yield the best classification results (97% accuracy).
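
    A minimal PyTorch sketch of the fourth variant (3 conv/max-pool blocks, 2 fully connected layers, with both batch normalization and drop-out) is given below. Channel counts, the Mel-spectrogram size and the dense-layer width are assumptions, not values from the paper.

```python
# Sketch of a "conv/max-pool x3 + 2 fully connected" CNN with batch normalization
# and drop-out; layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DrumCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                                 nn.BatchNorm2d(c_out), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.features = nn.Sequential(block(1, 16), block(16, 32), block(32, 64))
        self.classifier = nn.Sequential(nn.Flatten(),
                                        nn.Dropout(0.5),
                                        nn.LazyLinear(128), nn.ReLU(),
                                        nn.Dropout(0.5),
                                        nn.Linear(128, n_classes))

    def forward(self, x):          # x: (batch, 1, mel_bins, frames) MLS input
        return self.classifier(self.features(x))

logits = DrumCNN()(torch.randn(8, 1, 64, 96))   # e.g. 64 Mel bins x 96 frames
print(logits.shape)                              # torch.Size([8, 3])
```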

  12. Combining neural networks for protein secondary structure prediction

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1995-01-01

    In this paper structured neural networks are applied to the problem of predicting the secondary structure of proteins. A hierarchical approach is used where specialized neural networks are designed for each structural class and then combined using another neural network. The submodels are designed by using a priori knowledge of the mapping between protein building blocks and the secondary structure and by using weight sharing. Since none of the individual networks have more than 600 adjustable weights, over-fitting is avoided. When ensembles of specialized experts are combined, the performance is better than that of most secondary structure prediction methods based on single sequences, even though this model contains far fewer parameters.

  13. Bayesian networks: a combined tuning heuristic

    NARCIS (Netherlands)

    Bolt, J.H.

    2016-01-01

    One of the issues in tuning an output probability of a Bayesian network by changing multiple parameters is the relative amount of the individual parameter changes. In an existing heuristic parameters are tied such that their changes induce locally a maximal change of the tuned probability. This

  14. COMBINED AND STORM SEWER NETWORK MONITORING

    Directory of Open Access Journals (Sweden)

    Justyna Synowiecka

    2014-10-01

    Full Text Available Monitoring of drainage networks is an extremely important tool for understanding the phenomena occurring in them. In an era of urbanization and increased run-off at the expense of natural retention in the catchment, it helps to minimize the risk of local flooding and pollution. Its scope includes measuring the amount of rainfall with rain gauges and measuring flows and channel filling in the sewer network with flow meters. An indispensable part of this step is their proper calibration. In addition to ongoing monitoring of the sewer system, periodic inspections by qualified employees of the water and sewage company should be carried out. The following article reviews measurement devices, their calibration methods, as well as the phenomena that occur during operation in the sewer network. It provides a solution for monitoring and control based on the experience of the Municipal Water and Sewage Company in Wroclaw, describing common operational problems, their causes, prevention methods and a network operation walkthrough aimed at improving the performance indicators KPI (Key Performance Indicators) according to the ECB (European Benchmarking Co-operation).

  15. Mango: combining and analyzing heterogeneous biological networks.

    Science.gov (United States)

    Chang, Jennifer; Cho, Hyejin; Chou, Hui-Hsien

    2016-01-01

    Heterogeneous biological data such as sequence matches, gene expression correlations, protein-protein interactions, and biochemical pathways can be merged and analyzed via graphs, or networks. Existing software for network analysis has limited scalability to large data sets or is only accessible to software developers as libraries. In addition, the polymorphic nature of the data sets requires a more standardized method for integration and exploration. Mango facilitates large network analyses with its Graph Exploration Language, automatic graph attribute handling, and real-time 3-dimensional visualization. On a personal computer Mango can load, merge, and analyze networks with millions of links and can connect to online databases to fetch and merge biological pathways. Mango is written in C++ and runs on Mac OS, Windows, and Linux. The stand-alone distributions, including the Graph Exploration Language integrated development environment, are freely available for download from http://www.complex.iastate.edu/download/Mango. The Mango User Guide listing all features can be found at http://www.gitbook.com/book/j23414/mango-user-guide.

  16. Classification of epileptic seizures using wavelet packet log energy and norm entropies with recurrent Elman neural network classifier.

    Science.gov (United States)

    Raghu, S; Sriraam, N; Kumar, G Pradeep

    2017-02-01

    Electroencephalogram, shortly termed EEG, is considered the fundamental segment for the assessment of neural activities in the brain. In the cognitive neuroscience domain, EEG-based assessment is found to be superior due to its non-invasive ability to detect deep brain structure while exhibiting superior spatial resolution. Especially for studying the neurodynamic behavior of epileptic seizures, EEG recordings reflect the neuronal activity of the brain and thus provide the clinical diagnostic information required by the neurologist. This proposed study makes use of wavelet packet based log and norm entropies with a recurrent Elman neural network (REN) for the automated detection of epileptic seizures. Three conditions, normal, pre-ictal and epileptic EEG recordings, were considered for the proposed study. An adaptive Wiener filter was initially applied to remove the 50 Hz power line noise from the raw EEG recordings. Raw EEGs were segmented into 1 s patterns to ensure stationarity of the signal. Then a wavelet packet decomposition using the Haar wavelet with five levels was introduced, and two entropies, log and norm, were estimated and applied to the REN classifier to perform binary classification. The non-linear Wilcoxon statistical test was applied to observe the variation in the features under these conditions. The effect of log energy entropy (without wavelets) was also studied. It was found from the simulation results that the wavelet packet log entropy with the REN classifier yielded a classification accuracy of 99.70% for normal-pre-ictal, 99.70% for normal-epileptic and 99.85% for pre-ictal-epileptic.
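
    A sketch of the feature-extraction step only is shown below: a 5-level Haar wavelet packet decomposition of a 1 s EEG segment, with the log-energy and norm entropies computed per terminal node. The sampling rate, the norm power p and the use of PyWavelets are assumptions; the Elman classifier stage is not reproduced.

```python
# Wavelet packet log-energy and norm entropies for a 1 s EEG segment (PyWavelets assumed).
import numpy as np
import pywt

def wavelet_packet_entropies(segment, level=5, p=2.0):
    wp = pywt.WaveletPacket(segment, wavelet="haar", maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="natural"):
        c = np.asarray(node.data, dtype=float)
        log_energy = np.sum(np.log(c**2 + 1e-12))      # log-energy entropy
        norm_entropy = np.sum(np.abs(c) ** p)          # norm entropy with power p
        feats.extend([log_energy, norm_entropy])
    return np.array(feats)

eeg_1s = np.random.randn(256)          # placeholder 1 s EEG pattern at an assumed 256 Hz
print(wavelet_packet_entropies(eeg_1s).shape)
```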

  17. Novel method to classify hemodynamic response obtained using multi-channel fNIRS measurements into two groups: Exploring the combinations of channels

    Directory of Open Access Journals (Sweden)

    Hiroko eIchikawa

    2014-07-01

    Full Text Available Near-infrared spectroscopy (NIRS) in psychiatric studies has widely demonstrated that cerebral hemodynamics differs among psychiatric patients. Recently we found that children with attention-deficit/hyperactivity disorder (ADHD) and children with autism spectrum disorders (ASD) showed different hemodynamic responses to their own mother's face. Based on this finding, we may be able to classify their hemodynamic data into those two groups and predict which diagnostic group an unknown participant belongs to. In the present study, we proposed a novel statistical method for classifying the hemodynamic data of these two groups. By applying a support vector machine (SVM), we searched for the combination of measurement channels at which the hemodynamic response differed between the two groups, ADHD and ASD. The SVM found the optimal subset of channels in each data set and successfully classified the ADHD data from the ASD data. For the 24-dimensional hemodynamic data, two optimal subsets classified the hemodynamic data with 84% classification accuracy, while the subset containing all 24 channels classified with 62% classification accuracy. These results indicate the potential application of our novel method for classifying hemodynamic data into two groups and revealing the combinations of channels that efficiently differentiate the two groups.
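
    The brute-force flavour of that channel search can be sketched as below: every small combination of channels is scored with a cross-validated SVM and the best subset is kept. The subset sizes, the linear kernel and the synthetic 24-channel data are assumptions for illustration; the paper's actual search strategy may differ.

```python
# Exhaustive search over small fNIRS channel subsets scored by a cross-validated SVM.
import itertools
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 24))                  # placeholder: 40 children x 24 channels
y = np.array([0] * 20 + [1] * 20)              # 0 = ADHD, 1 = ASD (placeholder labels)

best_score, best_subset = 0.0, None
for k in (2, 3):                               # try all pairs and triples of channels
    for subset in itertools.combinations(range(24), k):
        score = cross_val_score(SVC(kernel="linear"), X[:, subset], y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset
print(best_subset, round(best_score, 3))
```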

  18. Classifying spaces and classifying topoi

    CERN Document Server

    Moerdijk, Izak

    1995-01-01

    This monograph presents a new, systematic treatment of the relation between classifying topoi and classifying spaces of topological categories. Using a new generalized geometric realization which applies to topoi, a weak homotopy equivalence is constructed between the classifying space and the classifying topos of any small (topological) category. Topos theory is then applied to give an answer to the question of what structures are classified by "classifying" spaces. The monograph should be accessible to anyone with basic knowledge of algebraic topology, sheaf theory, and a little topos theory.

  19. A cross-sectional evaluation of meditation experience on electroencephalography data by artificial neural network and support vector machine classifiers.

    Science.gov (United States)

    Lee, Yu-Hao; Hsieh, Ya-Ju; Shiah, Yung-Jong; Lin, Yu-Huei; Chen, Chiao-Yun; Tyan, Yu-Chang; GengQiu, JiaCheng; Hsu, Chung-Yao; Chen, Sharon Chia-Ju

    2017-04-01

    Quantifying the meditation experience is a subjective and complex issue because it is confounded by many factors such as emotional state, method of meditation, and personal physical condition. In this study, we propose a strategy with a cross-sectional analysis to evaluate the meditation experience with 2 artificial intelligence techniques: artificial neural network and support vector machine. Within this analysis system, 3 features of the electroencephalography alpha spectrum and variant normalizing scaling are manipulated as the evaluating variables for the detection accuracy. Thereafter, by modulating the sliding window (the period of the analyzed data) and the shifting interval of the window (the time interval by which the analyzed data are shifted), the effect of immediate analysis for the 2 methods is compared. This analysis system is performed on 3 meditation groups, categorizing their meditation experiences in 10-year intervals from novice to junior to senior. After an exhaustive calculation and cross-validation across all variables, a high accuracy rate (>98%) is achievable under the criterion of a 0.5-minute sliding window and a 2-second shifting interval for both methods. In short, the minimum analyzable data length is 0.5 minute and the minimum recognizable temporal resolution is 2 seconds in the decision of meditative classification. Our proposed classifier of the meditation experience provides a rapid evaluation system to distinguish meditation experience and a beneficial utilization of artificial intelligence techniques for big-data analysis.

  20. Classifying Microorganisms

    DEFF Research Database (Denmark)

    Sommerlund, Julie

    2006-01-01

    This paper describes the coexistence of two systems for classifying organisms and species: a dominant genetic system and an older naturalist system. The former classifies species and traces their evolution on the basis of genetic characteristics, while the latter employs physiological characteristics... of Denmark. It is thus a 'real time' and material study of scientific paradigms and discourses.

  1. DropConnected neural networks trained on time-frequency and inter-beat features for classifying heart sounds.

    Science.gov (United States)

    Kay, Edmund; Agarwal, Anurag

    2017-07-31

    Automatic heart sound analysis has the potential to improve the diagnosis of valvular heart diseases in the primary care phase, as well as in countries where there is neither the expertise nor the equipment to perform echocardiograms. An algorithm has been trained, on the PhysioNet open-access heart sounds database, to classify heart sounds as normal or abnormal. The heart sounds are segmented using an open-source algorithm based on a hidden semi-Markov model. Following this, the time-frequency behaviour of a single heartbeat is characterized by using a novel implementation of the continuous wavelet transform, mel-frequency cepstral coefficients, and certain complexity measures. These features help detect the presence of any murmurs. A number of other features are also extracted to characterise the inter-beat behaviour of the heart sounds, which helps to recognize diseases such as arrhythmia. The extracted features are normalized and their dimensionality is reduced using principal component analysis. They are then used as the input to a fully-connected, two-hidden-layer neural network, trained by error backpropagation, and regularized with DropConnect. This algorithm achieved an accuracy of 85.2% on the test data, which placed third in the PhysioNet/Computing in Cardiology Challenge (first place scored 86.0%). However, this is not representative of real-world performance, as the test data contained a dataset (dataset-e) in which normal and abnormal heart sounds were recorded with different stethoscopes. A 10-fold cross-validation study on the training data (excluding dataset-e) gives a mean score of 74.8%, which is a more realistic estimate of accuracy. With dataset-e excluded from training, the algorithm scored only 58.1% on the test data.
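
    A compressed sketch of the final classification stage (normalization, PCA, then a two-hidden-layer network) is given below. scikit-learn's MLPClassifier stands in for the DropConnect-regularized network, which it does not implement, and the feature matrix is a random placeholder.

```python
# Normalization -> PCA -> two-hidden-layer neural network (MLPClassifier as a stand-in
# for the DropConnect-regularized network described in the record above).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 120))                # placeholder per-recording feature vectors
y = rng.integers(0, 2, size=300)               # 0 = normal, 1 = abnormal heart sounds

model = make_pipeline(StandardScaler(),
                      PCA(n_components=30),
                      MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                                    random_state=0))
model.fit(X, y)
print(model.score(X, y))
```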

  2. Classification of mass and normal breast tissue: A convolution neural network classifier with spatial domain and texture images

    International Nuclear Information System (INIS)

    Sahiner, B.; Chan, H.P.; Petrick, N.; Helvie, M.A.; Adler, D.D.; Goodsitt, M.M.; Wei, D.

    1996-01-01

    The authors investigated the classification of regions of interest (ROIs) on mammograms as either mass or normal tissue using a convolution neural network (CNN). A CNN is a back-propagation neural network with two-dimensional (2-D) weight kernels that operate on images. A generalized, fast and stable implementation of the CNN was developed. The input images to the CNN were obtained from the ROIs using two techniques. The first technique employed averaging and subsampling. The second technique employed texture feature extraction methods applied to small subregions inside the ROI. Features computed over different subregions were arranged as texture images, which were subsequently used as CNN inputs. The effects of CNN architecture and texture feature parameters on classification accuracy were studied. Receiver operating characteristic (ROC) methodology was used to evaluate the classification accuracy. A data set consisting of 168 ROIs containing biopsy-proven masses and 504 ROIs containing normal breast tissue was extracted from 168 mammograms by radiologists experienced in mammography. This data set was used for training and testing the CNN. With the best combination of CNN architecture and texture feature parameters, the area under the test ROC curve reached 0.87, which corresponded to a true-positive fraction of 90% at a false-positive fraction of 31%. The results demonstrate the feasibility of using a CNN for classification of masses and normal tissue on mammograms.

  3. Classifying injury narratives of large administrative databases for surveillance-A practical approach combining machine learning ensembles and human review.

    Science.gov (United States)

    Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R

    2017-01-01

    Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes single-word and bi-gram models, Support Vector Machine and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) of the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble requiring agreement of the single-word Naïve Bayes, bi-gram Naïve Bayes and SVM predictions had very high performance (0.93 overall sensitivity/positive predictive value and high accuracy, i.e. high sensitivity and positive predictive values, across both large and small categories), leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporation of methods based on human-machine pairings such as
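
    The "filter the weakest predictions for manual review" idea can be sketched as below: a logistic regression text classifier is trained, and the narratives whose maximum predicted probability falls in the bottom 30% are routed to human coders. The toy narratives, labels and TF-IDF features are placeholders, not the study's data.

```python
# Logistic regression narrative classifier; least confident 30% of cases go to humans.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

narratives = ["worker slipped on wet floor", "hand caught in press",
              "fell from ladder", "struck by forklift"] * 25
labels = np.array([0, 1, 0, 2] * 25)            # placeholder event codes

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(narratives, labels)

proba = clf.predict_proba(narratives)
strength = proba.max(axis=1)                    # prediction strength per narrative
cutoff = np.quantile(strength, 0.30)            # weakest 30% go to manual review
auto_coded = strength >= cutoff
print(f"auto-coded: {auto_coded.sum()}, manual review: {(~auto_coded).sum()}")
```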

  4. Context-Aware Mobile Service Adaptation via a Co-Evolution eXtended Classifier System in Mobile Network Environments

    OpenAIRE

    Shangguang Wang; Zibin Zheng; Zhengping Wu; Qibo Sun; Hua Zou; Fangchun Yang

    2014-01-01

    With the popularity of mobile services, effective context-aware mobile service adaptation is becoming more and more important for operators. In this paper, we propose a Co-evolution eXtended Classifier System (CXCS) to perform context-aware mobile service adaptation. Our key idea is to learn the user context, match adaptation rules, and provide the most suitable mobile services for users. Different from previous adaptation schemes, our proposed CXCS can produce a new user's initial classifier p...

  5. A simple network agreement-based approach for combining evidences in a heterogeneous sensor network

    Directory of Open Access Journals (Sweden)

    Raúl Eusebio-Grande

    2015-12-01

    Full Text Available In this research we investigate how the evidence provided by both static and mobile nodes that are part of a heterogeneous sensor network can be combined to obtain trustworthy results. A solution relying on a network agreement-based approach was implemented and tested.

  6. Propagation of New Innovations: An Approach to Classify Human Behavior and Movement from Available Social Network Data

    Science.gov (United States)

    Mahmud, Faisal; Samiul, Hasan

    2010-01-01

    It is interesting to observe new innovations, products, or ideas propagating into society. One important factor in this propagation is the individual's social network; another factor is the individual's activities. In this paper, an approach is presented to analyze the propagation of different ideas in a popular social network. Individuals' responses to different activities in the network are analyzed. The properties of the network are also investigated with respect to the successful propagation of innovations.

  7. Combine harvester monitor system based on wireless sensor network

    Science.gov (United States)

    A measurement method based on a Wireless Sensor Network (WSN) was developed to monitor the working condition of a combine harvester for remote application. Three JN5139 modules were chosen for sensor data acquisition and another two as a router and a coordinator, which could create a tree topology network...

  8. A combined video and synchronous VSAT data network

    Science.gov (United States)

    Rowse, William

    Private Satellite Network currently operates Business Television networks for Fortune 500 companies. Several of these satellite-based networks, using VSAT technology, are combining the transmission of video with the broadcast of one-way data. This is made possible by use of the PSN Business Television Terminal which incorporates Scientific Atlanta's B-MAC system. In addition to providing high quality video, B-MAC can provide six channels of 204.5 kbs audio. Four of the six channels may be used to directly carry up to 19.2 kbs of asynchronous data or up to 56 kbs of synchronous data using circuitry jointly developed by PSN and Scientific Atlanta. The approach PSN has taken to provide one network customer in the financial industry with both video and broadcast data is described herein.

  9. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    Science.gov (United States)

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
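
    A rough sketch of that two-phase idea is given below: cluster centres placed by a competitive procedure, then a linear read-out on the RBF activations. KMeans stands in here for the simple competitive learning network, ridge regression for the RBF output layer, and the descriptors and activities are random placeholders.

```python
# Phase 1: place RBF centres (KMeans as a stand-in for simple competitive learning).
# Phase 2: fit a linear read-out on the RBF activations to predict activities.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def rbf_features(X, centres, gamma=1.0):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))                 # placeholder molecular descriptors
y = rng.normal(size=200)                       # placeholder biological activities

centres = KMeans(n_clusters=15, n_init=10, random_state=1).fit(X).cluster_centers_
model = Ridge(alpha=1e-3).fit(rbf_features(X, centres), y)
print(model.score(rbf_features(X, centres), y))
```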

  10. COMBINING PCA ANALYSIS AND ARTIFICIAL NEURAL NETWORKS IN MODELLING ENTREPRENEURIAL INTENTIONS OF STUDENTS

    Directory of Open Access Journals (Sweden)

    Marijana Zekić-Sušac

    2013-02-01

    Full Text Available Despite increased interest in the entrepreneurial intentions and career choices of young adults, reliable prediction models are yet to be developed. Two nonparametric methods were used in this paper to model entrepreneurial intentions: principal component analysis (PCA) and artificial neural networks (ANNs). PCA was used to perform feature extraction in the first stage of modelling, while artificial neural networks were used to classify students according to their entrepreneurial intentions in the second stage. Four modelling strategies were tested in order to find the most efficient model. The dataset was collected in an international survey on entrepreneurship self-efficacy and identity. Variables describe students' demographics, education, attitudes, social and cultural norms, self-efficacy and other characteristics. The research reveals benefits from the combination of PCA and ANNs in modelling entrepreneurial intentions, and provides some ideas for further research.

  11. New neural network classifier of fall-risk based on the Mahalanobis distance and kinematic parameters assessed by a wearable device

    International Nuclear Information System (INIS)

    Giansanti, Daniele; Macellari, Velio; Maccioni, Giovanni

    2008-01-01

    Fall prevention lacks easy, quantitative and wearable methods for the classification of fall-risk (FR). Efforts must thus be devoted to the choice of an ad hoc classifier, both to reduce the size of the sample used to train the classifier and to improve performance. A new methodology that uses a neural network (NN) and a wearable device is hereby proposed for this purpose. The NN uses kinematic parameters assessed by a wearable device with accelerometers and rate gyroscopes during a posturography protocol. The training of the NN was based on the Mahalanobis distance and was carried out on two groups of 30 elderly subjects with varying fall-risk Tinetti scores. The validation was done on two groups of 100 subjects with different fall-risk Tinetti scores and showed that, both in terms of specificity and sensitivity, the NN performed better than other classifiers (naive Bayes, Bayes net, multilayer perceptron, support vector machines, statistical classifiers). In particular, (i) the proposed NN methodology improved the specificity and sensitivity by a mean of 3% when compared to the statistical classifier based on the Mahalanobis distance (SCMD) described in Giansanti (2006 Physiol. Meas. 27 1081–90); (ii) the assessed specificity was 97%, the assessed sensitivity was 98% and the area under the receiver operating characteristic curve was 0.965. (note)

  12. OCSANA: optimal combinations of interventions from network analysis.

    Science.gov (United States)

    Vera-Licona, Paola; Bonnet, Eric; Barillot, Emmanuel; Zinovyev, Andrei

    2013-06-15

    Targeted therapies interfering with one specific protein activity are promising strategies in the treatment of diseases like cancer. However, accumulated empirical experience has shown that targeting multiple proteins in signaling networks involved in the disease is often necessary. Thus, one important problem in biomedical research is the design and prioritization of optimal combinations of interventions to repress a pathological behavior, while minimizing side-effects. OCSANA (optimal combinations of interventions from network analysis) is a new software tool designed to identify and prioritize optimal and minimal combinations of interventions to disrupt the paths between source nodes and target nodes. When specified by the user, OCSANA seeks to additionally minimize the side effects that a combination of interventions can cause on specified off-target nodes. With the crucial ability to cope with very large networks, OCSANA includes an exact solution and a novel selective enumeration approach for the combinatorial interventions problem. The latest version of OCSANA, implemented as a plugin for Cytoscape and distributed under the LGPL license, is available together with source code at http://bioinfo.curie.fr/projects/ocsana.

  13. Discriminant analysis in the presence of interferences: combined application of target factor analysis and a Bayesian soft-classifier.

    Science.gov (United States)

    Rinke, Caitlin N; Williams, Mary R; Brown, Christopher; Baudelet, Matthieu; Richardson, Martin; Sigman, Michael E

    2012-11-13

    A method is described for performing discriminant analysis in the presence of interfering background signal. The method is based on performing target factor analysis on a data set comprised of contributions from analyte(s) and interfering components. A library of data from representative analyte classes is tested for possible contributing factors by performing oblique rotations of the principal factors to obtain the best match, in a least-squares sense, between test and predicted vectors. The degree of match between the test and predicted vectors is measured by the Pearson correlation coefficient, r, and the distribution of r for each class is determined. A Bayesian soft classifier is used to calculate the posterior probability based on the distributions of r for each class, which assist the analyst in assessing the presence of one or more analytes. The method is demonstrated by analyses performed on spectra obtained by laser induced breakdown spectroscopy (LIBS). Single and multiple bullet jacketing transfers to steel and porcelain substrates were analyzed to identify the jacketing materials. Additionally, the metal surrounding bullet holes was analyzed to identify the class of bullet jacketing that passed through a stainless steel plate. Of 36 single sample transfers, the copper jacketed (CJ) and non-jacketed (NJ) class on porcelain had an average posterior probability of the metal deposited on the substrate of 1.0. Metal jacketed (MJ) bullet transfers to steel and porcelain were not detected as successfully. Multiple transfers of CJ/NJ and CJ/MJ on the two substrates resulted in posterior probabilities that reflected the presence of both jacketing materials. The MJ/NJ transfers gave posterior probabilities that reflected the presence of the NJ material, but the MJ component was mistaken for CJ on steel, while non-zero probabilities were obtained for both CJ and MJ on porcelain. Jacketing transfer from a bullet to steel as the projectile passed through the steel

  14. Sustainability of Hydrogen Supply Chain. Part II: Prioritizing and Classifying the Sustainability of Hydrogen Supply Chains based on the Combination of Extension Theory and AHP

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Manzardo, Alessandro; Toniolo, Sara

    2013-01-01

    The purpose of this study is to develop a method for prioritizing and classifying the sustainability of hydrogen supply chains and assist decision-making for the stakeholders/decision-makers. Multiple criteria for sustainability assessment of hydrogen supply chains are considered and multiple decision-makers are allowed to participate in the decision-making using linguistic terms. In this study, extension theory and analytic hierarchy process are combined to rate the sustainability of hydrogen supply chains. The sustainability of hydrogen supply chains could be identified according...

  15. Selection of discriminant mid-infrared wavenumbers by combining a naïve Bayesian classifier and a genetic algorithm: Application to the evaluation of lignocellulosic biomass biodegradation.

    Science.gov (United States)

    Rammal, Abbas; Perrin, Eric; Vrabie, Valeriu; Assaf, Rabih; Fenniri, Hassan

    2017-07-01

    Infrared spectroscopy provides useful information on the molecular compositions of biological systems related to molecular vibrations, overtones, and combinations of fundamental vibrations. Mid-infrared (MIR) spectroscopy is sensitive to organic and mineral components and has attracted growing interest in the development of biomarkers related to intrinsic characteristics of lignocellulose biomass. However, not all spectral information is valuable for biomarker construction or for applying analysis methods such as classification. Better processing and interpretation can be achieved by identifying discriminating wavenumbers. The selection of wavenumbers has been addressed through several variable- or feature-selection methods. Some of them have not been adapted for use in large data sets or are difficult to tune, and others require additional information, such as concentrations. This paper proposes a new approach by combining a naïve Bayesian classifier with a genetic algorithm to identify discriminating spectral wavenumbers. The genetic algorithm uses a linear combination of an a posteriori probability and the Bayes error rate as the fitness function for optimization. Such a function allows the improvement of both the compactness and the separation of classes. This approach was tested to classify a small set of maize roots in soil according to their biodegradation process based on their MIR spectra. The results show that this optimization method allows better discrimination of the biodegradation process, compared with using the information of the entire MIR spectrum, the use of the spectral information at wavenumbers selected by a genetic algorithm based on a classical validity index or the use of the spectral information selected by combining a genetic algorithm with other methods, such as Linear Discriminant Analysis. The proposed method selects wavenumbers that correspond to principal vibrations of chemical functional groups of compounds that undergo degradation
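
    A toy sketch of the wavenumber-selection loop is given below: candidate subsets of wavenumbers are encoded as binary masks and evolved, with each mask scored by a naive Bayes classifier. The fitness here is plain cross-validated accuracy and the loop is mutation-only, which simplifies the paper's posterior-probability / Bayes-error fitness and full genetic algorithm; the spectra are random placeholders.

```python
# Evolutionary selection of discriminant wavenumbers scored by a naive Bayes classifier.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                 # placeholder MIR spectra (samples x wavenumbers)
y = rng.integers(0, 2, size=60)                # placeholder biodegradation classes

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.1       # initial population of wavenumber masks
for generation in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]    # keep the 10 fittest masks
    children = parents[rng.integers(0, 10, 10)].copy()
    flips = rng.random(children.shape) < 0.01  # mutation: flip ~1% of the bits
    children[flips] = ~children[flips]
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected wavenumbers:", int(best.sum()))
```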

  16. Combined techniques for network measurements at accelerator facilities

    International Nuclear Information System (INIS)

    Pschorn, I.

    1999-01-01

    Usually network measurements at GSI (Gesellschaft für Schwerionenforschung) are carried out by employing the Leica tachymeter TC2002K, etc. Due to time constraints and the fact that GSI possesses only one of these high-precision total stations, it suddenly became necessary to consider employing a laser tracker as the major instrument for a reference network measurement. The idea was to compare the different instruments and to prove whether it is possible at all to carry out a precise network measurement using a laser tracker. In the end the SMX Tracker4500 combined with the Leica NA3000 was applied for network measurements at GSI, Darmstadt and at BESSY II, Berlin (both located in Germany). A few results are shown in the following chapters. A new technology in 3D metrology has emerged. Some ideas for applying these new tools in the field of accelerator measurements are given. Finally, aspects of calibration and checking the performance of the employed high-precision instrument are pointed out in this paper. (author)

  17. Combining morphometric features and convolutional networks fusion for glaucoma diagnosis

    Science.gov (United States)

    Perdomo, Oscar; Arevalo, John; González, Fabio A.

    2017-11-01

    Glaucoma is an eye condition that leads to loss of vision and blindness. The ophthalmoscopy exam evaluates the shape, color and proportion between the optic disc and the physiologic cup, but the lack of agreement among experts is still the main diagnostic problem. The application of deep convolutional neural networks combined with the automatic extraction of features such as the cup-to-disc distance in the four quadrants, the perimeter, area, eccentricity, the major radius and the minor radius of the optic disc and cup, in addition to all the ratios among the previous parameters, may help with a better automatic grading of glaucoma. This paper presents a strategy to merge morphological features and deep convolutional neural networks as a novel methodology to support glaucoma diagnosis in eye fundus images.

  18. Comparison of two neural network classifiers in the differential diagnosis of essential tremor and Parkinson's disease by 123I-FP-CIT brain SPECT

    International Nuclear Information System (INIS)

    Palumbo, Barbara; Fravolini, Mario Luca; Nuvoli, Susanna; Spanu, Angela; Madeddu, Giuseppe; Paulus, Kai Stephan; Schillaci, Orazio

    2010-01-01

    To contribute to the differentiation of Parkinson's disease (PD) and essential tremor (ET), we compared two different artificial neural network classifiers using 123I-FP-CIT SPECT data: a probabilistic neural network (PNN) and a classification tree (ClT). 123I-FP-CIT brain SPECT with semiquantitative analysis was performed in 216 patients: 89 with ET, 64 with PD with a Hoehn and Yahr (H and Y) score of ≤2 (early PD), and 63 with PD with a H and Y score of ≥2.5 (advanced PD). For each of the 1,000 experiments carried out, 108 patients were randomly selected as the PNN training set, while the remaining 108 validated the trained PNN, and the percentage of the validation data correctly classified into the three groups of patients was computed. The expected performance of an "average performance PNN" was evaluated. Analogously, for ClT 1,000 classification trees with similar structures were generated. For PNN, the probability of correct classification in patients with early PD was 81.9±8.1% (mean±SD), in patients with advanced PD 78.9±8.1%, and in ET patients 96.6±2.6%. For ClT, the first decision rule gave a mean value for the putamen of 5.99, which resulted in a probability of correct classification of 93.5±3.4%. This means that patients with putamen values >5.99 were classified as having ET, while patients with putamen values <5.99 were classified as having PD. Furthermore, if the caudate nucleus value was higher than 6.97 patients were classified as having early PD (probability 69.8±5.3%), and if the value was <6.97 patients were classified as having advanced PD (probability 88.1±8.8%). These results confirm that PNN achieved valid classification results. Furthermore, ClT provided reliable cut-off values able to differentiate ET and PD of different severities. (orig.)
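
    The cut-off structure described for the ClT can be illustrated with a depth-2 decision tree on the putamen and caudate uptake values, as in the sketch below. The data are synthetic placeholders, so the learned thresholds will not reproduce the reported 5.99 and 6.97 cut-offs.

```python
# Depth-2 decision tree reproducing the "threshold on putamen, then on caudate" structure.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(6)
putamen = np.concatenate([rng.normal(7.5, 1.0, 80),   # ET: higher uptake (synthetic)
                          rng.normal(5.0, 1.0, 60),   # early PD (synthetic)
                          rng.normal(3.5, 1.0, 60)])  # advanced PD (synthetic)
caudate = np.concatenate([rng.normal(8.0, 1.0, 80),
                          rng.normal(7.5, 1.0, 60),
                          rng.normal(5.5, 1.0, 60)])
X = np.column_stack([putamen, caudate])
y = np.array([0] * 80 + [1] * 60 + [2] * 60)          # 0=ET, 1=early PD, 2=advanced PD

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["putamen", "caudate"]))
```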

  19. Combination of Bayesian Network and Overlay Model in User Modeling

    Directory of Open Access Journals (Sweden)

    Loc Nguyen

    2009-12-01

    Full Text Available The core of an adaptive system is the user model, containing personal information such as knowledge, learning styles, goals…, which is requisite for the personalized learning process. There are many modeling approaches, for example stereotype, overlay, plan recognition…, but they do not provide a solid method for reasoning from the user model. This paper introduces a statistical method that combines Bayesian networks and overlay modeling so that it is able to infer the user's knowledge from evidence collected during the user's learning process.

  20. Applying Data Mining to Classify Age by Intestinal Microbiota in 92 Healthy Men Using a Combination of Several Restriction Enzymes for T-RFLP Experiments.

    Science.gov (United States)

    Kobayashi, Toshio; Osaki, Takako; Oikawa, Shinya

    2014-01-01

    The composition of the intestinal microbiota was measured following consumption of identical meals for 3 days in 92 Japanese men, and terminal restriction fragment length polymorphism (T-RFLP) was used to analyze their feces. The obtained operational taxonomic units (OTUs) and the subjects' ages were classified by using data mining (DM) software, which compared these data both as continuous data and in 5 partitions for age divided at 5-year intervals between the ages of 30 and 50. The DM provided decision trees in which the selected OTUs were closely related to the ages of the subjects. DM was also used to compare the OTUs from the T-RFLP data with seven restriction enzymes (two enzymes of 516f-BslI and 516f-HaeIII, two enzymes of 27f-MspI and 27f-AluI, three enzymes of 35f-HhaI, 35f-MspI and 35f-AluI) and their various combinations. The OTUs derived from the five enzyme-digested partitions were analyzed to classify their age clusters. For use in future DM processing, we discussed the enzymes that were effective for accurate classification. We selected two OTUs (HA624 and HA995) that were useful for classifying the subjects' ages. Based on the 16S rRNA sequences of the OTUs, Ruminococcus obeum clones 1-4 were present in 18 of 36 bacterial candidates in the older age group-related OTU (HA624). On the other hand, Ruminococcus obeum clones 1-33 were present in 65 of 269 candidates in the younger age group-related OTU (HA995).

  1. Synergy Maps: exploring compound combinations using network-based visualization.

    Science.gov (United States)

    Lewis, Richard; Guha, Rajarshi; Korcsmaros, Tamás; Bender, Andreas

    2015-01-01

    The phenomenon of super-additivity of biological response to compounds applied jointly, termed synergy, has the potential to provide many therapeutic benefits. Therefore, high throughput screening of compound combinations has recently received a great deal of attention. Large compound libraries and the feasibility of all-pairs screening can easily generate large, information-rich datasets. Previously, these datasets have been visualized using either a heat-map or a network approach; however, these visualizations only partially represent the information encoded in the dataset. A new visualization technique for pairwise combination screening data, termed "Synergy Maps", is presented. In a Synergy Map, information about the synergistic interactions of compounds is integrated with information about their properties (chemical structure, physicochemical properties, bioactivity profiles) to produce a single visualization. As a result the relationships between compound and combination properties may be investigated simultaneously, and thus may afford insight into the synergy observed in the screen. An interactive web app implementation, available at http://richlewis42.github.io/synergy-maps, has been developed for public use, which may find use in navigating and filtering larger scale combination datasets. This tool is applied to a recent all-pairs dataset of anti-malarials, tested against Plasmodium falciparum, and a preliminary analysis is given as an example, illustrating the disproportionate synergism of histone deacetylase inhibitors previously described in the literature, as well as suggesting new hypotheses for future investigation. Synergy Maps improve the state of the art in compound combination visualization by simultaneously representing individual compound properties and their interactions. The web-based tool allows straightforward exploration of combination data, and easier identification of correlations between compound properties and interactions.

  2. A Bayesian classifier for symbol recognition

    OpenAIRE

    Barrat , Sabine; Tabbone , Salvatore; Nourrissier , Patrick

    2007-01-01

    We present in this paper an original adaptation of Bayesian networks to the symbol recognition problem. More precisely, a descriptor combination method, which significantly improves the recognition rate compared to the rates obtained by each descriptor alone, is presented. In this perspective, we use a simple Bayesian classifier, called naive Bayes. In fact, probabilistic graphical models, more spec...

  3. Combining morphological analysis and Bayesian networks for strategic decision support

    Directory of Open Access Journals (Sweden)

    A de Waal

    2007-12-01

    Full Text Available Morphological analysis (MA) and Bayesian networks (BN) are two closely related modelling methods, each of which has its advantages and disadvantages for strategic decision support modelling. MA is a method for defining, linking and evaluating problem spaces. BNs are graphical models which consist of a qualitative and a quantitative part. The qualitative part is a cause-and-effect, or causal, graph. The quantitative part depicts the strength of the causal relationships between variables. Combining MA and BN, as two phases in a modelling process, allows us to gain the benefits of both of these methods. The strength of MA lies in defining, linking and internally evaluating the parameters of problem spaces, while BN modelling allows for the definition and quantification of causal relationships between variables. Short summaries of MA and BN are provided in this paper, followed by discussions of how these two computer-aided methods may be combined to better facilitate modelling procedures. A simple example is presented, concerning a recent application in the field of environmental decision support.

  4. Comparison of Two Classifiers; K-Nearest Neighbor and Artificial Neural Network, for Fault Diagnosis on a Main Engine Journal-Bearing

    Directory of Open Access Journals (Sweden)

    A. Moosavian

    2013-01-01

    Full Text Available Vibration analysis is an accepted method in condition monitoring of machines, since it can provide useful and reliable information about machine working condition. This paper presents a new scheme for fault diagnosis of main journal-bearings of an internal combustion (IC) engine based on the power spectral density (PSD) technique and two classifiers, namely, K-nearest neighbor (KNN) and artificial neural network (ANN). Vibration signals for three different conditions of the journal-bearing (normal, oil starvation condition and extreme wear fault) were acquired from an IC engine. PSD was applied to process the vibration signals. Thirty features were extracted from the PSD values of the signals as a feature source for fault diagnosis. KNN and ANN were trained on the training data set and then used as diagnostic classifiers. Variable K values and hidden neuron counts (N) were used in the range of 1 to 20, with a step size of 1, for KNN and ANN to obtain the best classification results. The roles of the PSD, KNN and ANN techniques were studied. The results show that the performance of ANN is better than that of KNN. The experimental results demonstrate that the proposed diagnostic method can reliably separate different fault conditions in the main journal-bearings of an IC engine.
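
    The diagnostic chain can be sketched as below: Welch PSD estimates are reduced to 30 band-averaged features per vibration segment and fed to a K-nearest-neighbour classifier. The sampling rate, segment length, band averaging and K value are illustrative assumptions, and the signals are random placeholders.

```python
# Welch PSD features from vibration segments followed by a KNN classifier.
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
segments = rng.normal(size=(90, 4096))          # placeholder vibration segments
labels = np.repeat([0, 1, 2], 30)               # normal / oil starvation / extreme wear

def psd_features(sig, fs=20_000, n_feats=30):
    _, pxx = welch(sig, fs=fs, nperseg=1024)
    bands = np.array_split(pxx, n_feats)        # 30 band-averaged PSD values as features
    return np.array([b.mean() for b in bands])

X = np.array([psd_features(s) for s in segments])
knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
print(knn.score(X, labels))
```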

  5. Cropping Pattern Detection and Change Analysis in Central Luzon, Philippines Using Multi-Temporal MODIS Imagery and Artificial Neural Network Classifier

    Science.gov (United States)

    dela Torre, D. M.; Perez, G. J. P.

    2016-12-01

    Cropping practices in the Philippines have been intensifying with greater demand for food and agricultural supplies in view of an increasing population and advanced farming technologies. This has not been monitored regularly using traditional methods, while alternative methods using remote sensing are promising yet underutilized. This study employed multi-temporal data from MODIS and a neural network classifier to map annual land use in agricultural areas from 2001-2014 in Central Luzon, the primary rice growing area of the Philippines. Land use statistics derived from these maps were compared with historical El Niño events to examine how land area is affected by drought events. Fourteen maps of agricultural land use were produced, with the primary classes being single-cropping, double-cropping and perennial crops and secondary classes of forests, urban, bare, water and other classes. Primary classes were produced with the neural network classifier while secondary classes were derived from NDVI threshold masks. The overall accuracy for the 2014 map was 62.05% with a kappa statistic of 0.45. A 155.56% increase in single-cropping systems from 2001 to 2014 was observed while double-cropping systems decreased by 14.83%. Perennials increased by 76.21% while built-up areas decreased by 12.22% within the 14-year interval. There are several sources of error, including mixed pixels, scale-conversion problems and limited ground reference data. An analysis including the El Niño events in 2004 and 2010 demonstrated that marginally irrigated areas that are usually planted twice a year resorted to single cropping, indicating that scarcity of water limited the intensification allowable in the area. Findings from this study can be used to predict future use of agricultural land in the country and also examine how farmlands have responded to climatic factors and stressors.

  6. Natural and Unnatural Oil Layers on the Surface of the Gulf of Mexico Detected and Quantified in Synthetic Aperture RADAR Images with Texture Classifying Neural Network Algorithms

    Science.gov (United States)

    MacDonald, I. R.; Garcia-Pineda, O. G.; Morey, S. L.; Huffer, F.

    2011-12-01

    Effervescent hydrocarbons rise naturally from hydrocarbon seeps in the Gulf of Mexico and reach the ocean surface. This oil forms thin (~0.1 μm) layers that enhance specular reflectivity and have been widely used to quantify the abundance and distribution of natural seeps using synthetic aperture radar (SAR). An analogous process occurred at a vastly greater scale for oil and gas discharged from BP's Macondo well blowout. SAR data allow direct comparison of the areas of the ocean surface covered by oil from natural sources and the discharge. We used a texture classifying neural network algorithm to quantify the areas of naturally occurring oil-covered water in 176 SAR image collections from the Gulf of Mexico obtained between May 1997 and November 2007, prior to the blowout. Separately we also analyzed 36 SAR image collections obtained between 26 April and 30 July, 2010 while the discharged oil was visible in the Gulf of Mexico. For the naturally occurring oil, we removed pollution events and transient oceanographic effects by including only the reflectance anomalies that recurred in the same locality over multiple images. We measured the area of oil layers in a grid of 10x10 km cells covering the entire Gulf of Mexico. Floating oil layers were observed in only a fraction of the total Gulf area amounting to 1.22x10^5 km^2. In a bootstrap sample of 2000 replications, the combined average area of these layers was 7.80x10^2 km^2 (sd 86.03). For a regional comparison, we divided the Gulf of Mexico into four quadrates along 90° W longitude, and 25° N latitude. The NE quadrate, where the BP discharge occurred, received on average 7.0% of the total natural seepage in the Gulf of Mexico (5.24x10^2 km^2, sd 21.99); the NW quadrate received on average 68.0% of this total (5.30x10^2 km^2, sd 69.67). The BP blowout occurred in the NE quadrate of the Gulf of Mexico; discharged oil that reached the surface drifted over a large area north of 25° N. Performing a

  7. Mapping Robinia Pseudoacacia Forest Health Conditions by Using Combined Spectral, Spatial, and Textural Information Extracted from IKONOS Imagery and Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2015-07-01

    Full Text Available The textural and spatial information extracted from very high resolution (VHR) remote sensing imagery provides complementary information for applications in which the spectral information is not sufficient for identification of spectrally similar landscape features. In this study grey-level co-occurrence matrix (GLCM) textures and a local statistical analysis Getis statistic (Gi), computed from IKONOS multispectral (MS) imagery acquired from the Yellow River Delta in China, along with a random forest (RF) classifier, were used to discriminate Robinia pseudoacacia tree health levels. Specifically, eight GLCM texture features (mean, variance, homogeneity, dissimilarity, contrast, entropy, angular second moment, and correlation) were first calculated from the IKONOS NIR band (Band 4) to determine an optimal window size (13 × 13) and an optimal direction (45°). Then, the optimal window size and direction were applied to the three other IKONOS MS bands (blue, green, and red) for calculating the eight GLCM textures. Next, an optimal distance value (5) and an optimal neighborhood rule (Queen's case) were determined for calculating the four Gi features from the four IKONOS MS bands. Finally, different RF classification results of the three forest health conditions were created: (1) an overall accuracy (OA) of 79.5% produced using the four MS band reflectances only; (2) an OA of 97.1% created with the eight GLCM features calculated from IKONOS Band 4 with the optimal window size of 13 × 13 and direction 45°; (3) an OA of 93.3% created with all 32 GLCM features calculated from the four IKONOS MS bands with a window size of 13 × 13 and direction of 45°; (4) an OA of 94.0% created using the four Gi features calculated from the four IKONOS MS bands with the optimal distance value of 5 and Queen's neighborhood rule; and (5) an OA of 96.9% created with the combined 16 spectral (four), spatial (four), and textural (eight) features. The most important feature ranked by RF
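
    A sketch of the GLCM step of that workflow is shown below: co-occurrence properties are computed for 13 × 13 windows at a 45° offset and fed to a random forest. Only the GLCM properties built into scikit-image are used (the mean, variance and entropy features would need to be added separately), and the image windows and health labels are random placeholders.

```python
# GLCM texture features (13x13 windows, 45-degree offset) feeding a random forest.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

PROPS = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]

def glcm_features(window):
    # window: 13 x 13 block of an 8-bit NIR band, offset of 1 pixel at 45 degrees
    glcm = graycomatrix(window, distances=[1], angles=[np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0] for p in PROPS])

rng = np.random.default_rng(3)
windows = rng.integers(0, 256, size=(150, 13, 13), dtype=np.uint8)  # placeholder blocks
health = rng.integers(0, 3, size=150)                               # three health levels

X = np.array([glcm_features(w) for w in windows])
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, health)
print(rf.score(X, health))
```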

  8. A framework for mapping tree species combining hyperspectral and LiDAR data: Role of selected classifiers and sensor across three spatial scales

    Science.gov (United States)

    Ghosh, Aniruddha; Fassnacht, Fabian Ewald; Joshi, P. K.; Koch, Barbara

    2014-02-01

    Knowledge of tree species distribution is important worldwide for sustainable forest management and resource evaluation. The accuracy and information content of species maps produced using remote sensing images vary with scale, sensor (optical, microwave, LiDAR), classification algorithm, verification design and natural conditions like tree age, forest structure and density. Imaging spectroscopy reduces these inaccuracies by making use of the detailed spectral response. However, the scale effect still has a strong influence and cannot be neglected. This study aims to bridge the knowledge gap in understanding the scale effect in imaging spectroscopy when moving from 4 to 30 m pixel size for tree species mapping, keeping in mind that most current and future hyperspectral satellite-based sensors work with spatial resolutions of around 30 m or more. Two airborne (HyMAP) and one spaceborne (Hyperion) imaging spectroscopy datasets with pixel sizes of 4, 8 and 30 m, respectively, were available to examine the effect of scale over a central European forest. The forest under examination is a typical managed forest with relatively homogeneous stands featuring mostly two canopy layers. A normalized digital surface model (nDSM) derived from LiDAR data was used additionally to examine the effect of height information in tree species mapping. Six different sets of predictor variables (reflectance values of all bands, selected components of a Minimum Noise Fraction (MNF), Vegetation Indices (VI), and each of these sets combined with LiDAR-derived height) were explored at each scale. Supervised kernel-based (Support Vector Machines) and ensemble-based (Random Forest) machine learning algorithms were applied to the dataset to investigate the effect of the classifier. Iterative bootstrap validation with 100 iterations was performed for classification model building and testing for all the trials. For scale, analysis of overall classification accuracy and kappa values indicated that 8 m spatial

  9. LCC: Light Curves Classifier

    Science.gov (United States)

    Vo, Martin

    2017-08-01

    Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering to visualize the natural separation of the sample. The package can also be used for automatic tuning of the parameters of the methods used (for example, the number of hidden neurons or the binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information about queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and the command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifiers and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

  10. Combining complex networks and data mining: why and how

    OpenAIRE

    Zanin, M.; Papo, D.; Sousa, P. A.; Menasalvas, E.; Nicchi, A.; Kubik, E.; Boccaletti, S.

    2016-01-01

    The increasing power of computer technology does not dispense with the need to extract meaningful in- formation out of data sets of ever growing size, and indeed typically exacerbates the complexity of this task. To tackle this general problem, two methods have emerged, at chronologically different times, that are now commonly used in the scientific community: data mining and complex network theory. Not only do complex network analysis and data mining share the same general goal, that of extr...

  11. Disease named entity recognition by combining conditional random fields and bidirectional recurrent neural networks.

    Science.gov (United States)

    Wei, Qikang; Chen, Tao; Xu, Ruifeng; He, Yulan; Gui, Lin

    2016-01-01

    The recognition of disease and chemical named entities in scientific articles is a very important subtask of information extraction in the biomedical domain. Due to the diversity and complexity of disease names, the recognition of disease named entities is considerably more difficult than that of chemical names. Although some remarkable chemical named entity recognition systems are available online, such as ChemSpot and tmChem, publicly available recognition systems for disease named entities are rare. This article presents a system for disease named entity recognition (DNER) and normalization. First, two separate DNER models are developed. One is based on a conditional random fields model with a rule-based post-processing module. The other is based on bidirectional recurrent neural networks. Then the named entities recognized by each DNER model are fed into a support vector machine classifier to combine the results. Finally, each recognized disease named entity is normalized to a Medical Subject Headings (MeSH) disease name using a vector space model based method. Experimental results show that, using 1000 PubMed abstracts for training, our proposed system achieves an F1-measure of 0.8428 at the mention level and 0.7804 at the concept level on the testing data of the chemical-disease relation task in BioCreative V. Database URL: http://219.223.252.210:8080/SS/cdr.html. © The Author(s) 2016. Published by Oxford University Press.

  12. [Rapid Identification of Epicarpium Citri Grandis via Infrared Spectroscopy and Fluorescence Spectrum Imaging Technology Combined with Neural Network].

    Science.gov (United States)

    Pan, Sha-sha; Huang, Fu-rong; Xiao, Chi; Xian, Rui-yi; Ma, Zhi-guo

    2015-10-01

    To explore rapid and reliable methods for the detection of Epicarpium Citri Grandis (ECG), Fourier Transform Attenuated Total Reflection Infrared Spectroscopy (FTIR/ATR) and fluorescence spectrum imaging technology were each combined with Multilayer Perceptron (MLP) neural network pattern recognition to identify ECG, and the two methods were compared. Infrared spectra and fluorescence spectral images of 118 samples, 81 ECG and 37 of other kinds, were collected. According to the differences in the spectra, the spectral data in the 550-1 800 cm(-1) wavenumber range and the 400-720 nm wavelength range were taken as the objects of the discriminant analysis. Principal component analysis (PCA) was then applied to reduce the dimension of the spectroscopic data of ECG, and an MLP neural network was used in combination to classify them. The effects of different data preprocessing methods on the model were compared: multiplicative scatter correction (MSC), standard normal variate correction (SNV), first-order derivative (FD), second-order derivative (SD) and Savitzky-Golay (SG) smoothing. The results showed that, after Savitzky-Golay (SG) pretreatment of the infrared spectra, the MLP neural network with a sigmoid hidden-layer function gave the best discrimination of ECG, with correct classification rates of 100% for both the training and testing sets. For fluorescence spectral imaging, multiplicative scatter correction (MSC) was the most suitable pretreatment; after preprocessing, a three-layer MLP neural network with a sigmoid hidden-layer function achieved 100% correct classification on the training set and 96.7% on the testing set. It was shown that FTIR/ATR and fluorescence spectral imaging combined with an MLP neural network can be used for the identification of ECG and offer a rapid and reliable approach.
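
    An illustrative sketch of the spectral processing chain described above (not the authors' code): Savitzky-Golay pretreatment, PCA dimension reduction, and an MLP with a logistic (sigmoid) hidden-layer activation. The spectra below are synthetic stand-ins, and the window length, component count and split are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(118, 626))             # synthetic stand-in for 118 FTIR/ATR spectra
y = np.array([1] * 81 + [0] * 37)           # 1 = ECG, 0 = other kinds

model = Pipeline([
    # Savitzky-Golay smoothing of each spectrum (SG pretreatment).
    ("sg", FunctionTransformer(lambda s: savgol_filter(s, window_length=11, polyorder=2, axis=1))),
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=10)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                          max_iter=3000, random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model.fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))
print("testing accuracy:", model.score(X_test, y_test))
```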

  13. Combining complex networks and data mining: Why and how

    Science.gov (United States)

    Zanin, M.; Papo, D.; Sousa, P. A.; Menasalvas, E.; Nicchi, A.; Kubik, E.; Boccaletti, S.

    2016-05-01

    The increasing power of computer technology does not dispense with the need to extract meaningful information out of data sets of ever growing size, and indeed typically exacerbates the complexity of this task. To tackle this general problem, two methods have emerged, at chronologically different times, that are now commonly used in the scientific community: data mining and complex network theory. Not only do complex network analysis and data mining share the same general goal, that of extracting information from complex systems to ultimately create a new compact quantifiable representation, but they also often address similar problems too. In the face of that, a surprisingly low number of researchers turn out to resort to both methodologies. One may then be tempted to conclude that these two fields are either largely redundant or totally antithetic. The starting point of this review is that this state of affairs should be put down to contingent rather than conceptual differences, and that these two fields can in fact advantageously be used in a synergistic manner. An overview of both fields is first provided, some fundamental concepts of which are illustrated. A variety of contexts in which complex network theory and data mining have been used in a synergistic manner are then presented. Contexts in which the appropriate integration of complex network metrics can lead to improved classification rates with respect to classical data mining algorithms and, conversely, contexts in which data mining can be used to tackle important issues in complex network theory applications are illustrated. Finally, ways to achieve a tighter integration between complex networks and data mining, and open lines of research are discussed.

  14. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier.

    Science.gov (United States)

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research proposes a fully automatic algorithm for distinguishing three-dimensional (3-D) optical coherence tomography (OCT) scans of patients suffering from abnormal macula from those of normal candidates. The proposed method does not require any denoising, segmentation, or retinal alignment processes to assess the intraretinal layers, abnormalities, or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, which consists of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets were used for evaluation of the algorithm, based on an unbiased fivefold cross-validation (CV) approach. The first set comprises 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured with a Topcon device. The second, publicly available, set consists of 45 subjects with 15 patients in each of the age-related macular degeneration, DME, and normal classes, from a Heidelberg device. Applying the algorithm to the overall OCT volumes with 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset 1 as a two-class classification problem and 98.67% on dataset 2 as a three-class classification task. © 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).
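
    A minimal sketch of the second (diagnostic-scoring) stage only, assuming the wavelet-CNN representative codes per OCT volume have already been extracted into a feature matrix; the feature dimensions, class sizes and classifier choice below are illustrative assumptions, with a random forest standing in for the scoring model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
cnn_codes = rng.normal(size=(60, 128))      # 60 volumes x 128-d cumulative CNN features (invented)
labels = np.array([0] * 30 + [1] * 30)      # 0 = normal, 1 = DME

# 10 repetitions of stratified fivefold cross-validation, as in the evaluation protocol.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                         cnn_codes, labels, cv=cv, scoring="precision")
print("mean precision over 10 x 5-fold CV:", scores.mean())
```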

  15. A Combined Network Architecture Using Art2 and Back Propagation for Adaptive Estimation of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    Einar Sørheim

    1990-10-01

    Full Text Available A neural network architecture called ART2/BP is proposed. The goal has been to construct an artificial neural network that incrementally learns an unknown mapping; the work is motivated by the instability found in back propagation (BP) networks: after first learning pattern A and then pattern B, a BP network has often completely 'forgotten' pattern A. A network using both supervised and unsupervised training is proposed, consisting of a combination of ART2 and BP. ART2 is used to build and focus a supervised back propagation network consisting of many small subnetworks, each specialized in a particular domain of the input space. The ART2/BP network has the advantage of being able to dynamically expand itself in response to input patterns containing new information. Simulation results show that the ART2/BP network outperforms a classical maximum likelihood method for the estimation of a discrete dynamic and nonlinear transfer function.
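
    A rough sketch of the general idea under simplifying assumptions: an unsupervised stage partitions the input space and a small supervised network is trained per partition. KMeans stands in for ART2 here, and scikit-learn MLPs stand in for the back-propagation subnetworks; the data and sizes are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.normal(size=500)   # unknown mapping to be learned

# Unsupervised stage: focus the input space into domains (ART2 replaced by KMeans).
clusterer = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Supervised stage: one small BP subnetwork specialised on each domain.
subnets = {}
for k in range(4):
    mask = clusterer.labels_ == k
    subnets[k] = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                              random_state=0).fit(X[mask], y[mask])

def predict(x_new):
    """Route each sample to the subnetwork of its domain and predict."""
    k = clusterer.predict(x_new)
    return np.array([subnets[ki].predict(xi.reshape(1, -1))[0] for ki, xi in zip(k, x_new)])

print(predict(np.array([[0.5], [-1.5]])))
```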

  16. Improving predictions of protein-protein interfaces by combining amino acid-specific classifiers based on structural and physicochemical descriptors with their weighted neighbor averages.

    Science.gov (United States)

    de Moraes, Fábio R; Neshich, Izabella A P; Mazoni, Ivan; Yano, Inácio H; Pereira, José G C; Salim, José A; Jardine, José G; Neshich, Goran

    2014-01-01

    Protein-protein interactions are involved in nearly all regulatory processes in the cell and are considered one of the most important issues in molecular biology and pharmaceutical sciences but are still not fully understood. Structural and computational biology contributed greatly to the elucidation of the mechanism of protein interactions. In this paper, we present a collection of the physicochemical and structural characteristics that distinguish interface-forming residues (IFR) from free surface residues (FSR). We formulated a linear discriminative analysis (LDA) classifier to assess whether chosen descriptors from the BlueStar STING database (http://www.cbi.cnptia.embrapa.br/SMS/) are suitable for such a task. Receiver operating characteristic (ROC) analysis indicates that the particular physicochemical and structural descriptors used for building the linear classifier perform much better than a random classifier and in fact, successfully outperform some of the previously published procedures, whose performance indicators were recently compared by other research groups. The results presented here show that the selected set of descriptors can be utilized to predict IFRs, even when homologue proteins are missing (particularly important for orphan proteins where no homologue is available for comparative analysis/indication) or, when certain conformational changes accompany interface formation. The development of amino acid type specific classifiers is shown to increase IFR classification performance. Also, we found that the addition of an amino acid conservation attribute did not improve the classification prediction. This result indicates that the increase in predictive power associated with amino acid conservation is exhausted by adequate use of an extensive list of independent physicochemical and structural parameters that, by themselves, fully describe the nano-environment at protein-protein interfaces. The IFR classifier developed in this study is now

  17. Improving predictions of protein-protein interfaces by combining amino acid-specific classifiers based on structural and physicochemical descriptors with their weighted neighbor averages.

    Directory of Open Access Journals (Sweden)

    Fábio R de Moraes

    Full Text Available Protein-protein interactions are involved in nearly all regulatory processes in the cell and are considered one of the most important issues in molecular biology and pharmaceutical sciences but are still not fully understood. Structural and computational biology contributed greatly to the elucidation of the mechanism of protein interactions. In this paper, we present a collection of the physicochemical and structural characteristics that distinguish interface-forming residues (IFR from free surface residues (FSR. We formulated a linear discriminative analysis (LDA classifier to assess whether chosen descriptors from the BlueStar STING database (http://www.cbi.cnptia.embrapa.br/SMS/ are suitable for such a task. Receiver operating characteristic (ROC analysis indicates that the particular physicochemical and structural descriptors used for building the linear classifier perform much better than a random classifier and in fact, successfully outperform some of the previously published procedures, whose performance indicators were recently compared by other research groups. The results presented here show that the selected set of descriptors can be utilized to predict IFRs, even when homologue proteins are missing (particularly important for orphan proteins where no homologue is available for comparative analysis/indication or, when certain conformational changes accompany interface formation. The development of amino acid type specific classifiers is shown to increase IFR classification performance. Also, we found that the addition of an amino acid conservation attribute did not improve the classification prediction. This result indicates that the increase in predictive power associated with amino acid conservation is exhausted by adequate use of an extensive list of independent physicochemical and structural parameters that, by themselves, fully describe the nano-environment at protein-protein interfaces. The IFR classifier developed in this study

  18. Improving Predictions of Protein-Protein Interfaces by Combining Amino Acid-Specific Classifiers Based on Structural and Physicochemical Descriptors with Their Weighted Neighbor Averages

    Science.gov (United States)

    de Moraes, Fábio R.; Neshich, Izabella A. P.; Mazoni, Ivan; Yano, Inácio H.; Pereira, José G. C.; Salim, José A.; Jardine, José G.; Neshich, Goran

    2014-01-01

    Protein-protein interactions are involved in nearly all regulatory processes in the cell and are considered one of the most important issues in molecular biology and pharmaceutical sciences but are still not fully understood. Structural and computational biology contributed greatly to the elucidation of the mechanism of protein interactions. In this paper, we present a collection of the physicochemical and structural characteristics that distinguish interface-forming residues (IFR) from free surface residues (FSR). We formulated a linear discriminative analysis (LDA) classifier to assess whether chosen descriptors from the BlueStar STING database (http://www.cbi.cnptia.embrapa.br/SMS/) are suitable for such a task. Receiver operating characteristic (ROC) analysis indicates that the particular physicochemical and structural descriptors used for building the linear classifier perform much better than a random classifier and in fact, successfully outperform some of the previously published procedures, whose performance indicators were recently compared by other research groups. The results presented here show that the selected set of descriptors can be utilized to predict IFRs, even when homologue proteins are missing (particularly important for orphan proteins where no homologue is available for comparative analysis/indication) or, when certain conformational changes accompany interface formation. The development of amino acid type specific classifiers is shown to increase IFR classification performance. Also, we found that the addition of an amino acid conservation attribute did not improve the classification prediction. This result indicates that the increase in predictive power associated with amino acid conservation is exhausted by adequate use of an extensive list of independent physicochemical and structural parameters that, by themselves, fully describe the nano-environment at protein-protein interfaces. The IFR classifier developed in this study is now

  19. Disrupting Cocaine Trafficking Networks: Interdicting a Combined Social-Functional Network Model

    Science.gov (United States)

    2016-03-01

    ... direct sale. Using hypothetical data based on open-source material, we define a social network of three main categories of archetypical DTOs (with ... follows this structure: Chapter II is a review of existing literature in the field; Chapter III presents the network specifics, the attacker-defender ...

  20. Neural Network Combination by Fuzzy Integral for Robust Change Detection in Remotely Sensed Imagery

    Directory of Open Access Journals (Sweden)

    Youcef Chibani

    2005-08-01

    Full Text Available Combining multiple neural networks has been used to improve decision accuracy in many application fields, including pattern recognition and classification. In this paper, we investigate the potential of this approach for land cover change detection. In the first step, we perform many experiments in order to find the optimal individual networks in terms of architecture and training rule. In the second step, different neural network change detectors are combined using a method based on the notion of the fuzzy integral. This method combines objective evidence, in the form of network outputs, with subjective measures of their performance. Various forms of the fuzzy integral, namely the Choquet integral, the Sugeno integral, and two extensions of the Sugeno integral with ordered weighted averaging operators, are implemented. Experimental analysis using error matrices and Kappa analysis showed that the fuzzy integral outperforms the individual networks and constitutes an appropriate strategy for increasing the accuracy of change detection.
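
    A hedged sketch of one of the combination rules named above, the Choquet integral with respect to a Sugeno lambda-measure: each network contributes a support for the class "changed" and a fuzzy density reflecting its subjective worth. The densities and supports below are invented, not taken from the study.

```python
import numpy as np
from scipy.optimize import brentq

def sugeno_lambda(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the Sugeno lambda-measure parameter."""
    f = lambda lam: np.prod(1.0 + lam * densities) - (1.0 + lam)
    if np.isclose(densities.sum(), 1.0):
        return 0.0                                   # measure is already additive
    return brentq(f, -0.9999, -1e-6) if densities.sum() > 1 else brentq(f, 1e-6, 1e6)

def choquet(supports, densities):
    """Choquet integral of classifier supports w.r.t. a Sugeno lambda-measure."""
    lam = sugeno_lambda(densities)
    order = np.argsort(supports)                     # sort supports in ascending order
    h, g = supports[order], densities[order]
    # g(A_i) for tail sets A_i = {i, ..., n}, built from the largest support down
    g_tail = np.zeros_like(g)
    g_tail[-1] = g[-1]
    for i in range(len(g) - 2, -1, -1):
        g_tail[i] = g[i] + g_tail[i + 1] + lam * g[i] * g_tail[i + 1]
    h_prev = np.concatenate(([0.0], h[:-1]))
    return np.sum((h - h_prev) * g_tail)

# Three networks give their support for the class "changed" on one pixel.
supports = np.array([0.6, 0.8, 0.3])
densities = np.array([0.4, 0.3, 0.2])                # subjective worth of each network
print("combined support:", choquet(supports, densities))
```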

  1. Targeting Neuronal Networks with Combined Drug and Stimulation Paradigms Guided by Neuroimaging to Treat Brain Disorders.

    Science.gov (United States)

    Faingold, Carl L; Blumenfeld, Hal

    2015-10-01

    Improved therapy of brain disorders can be achieved by focusing on neuronal networks, utilizing combined pharmacological and stimulation paradigms guided by neuroimaging. Neuronal networks that mediate normal brain functions, such as hearing, interact with other networks, which is important but commonly neglected. Network interaction changes often underlie brain disorders, including epilepsy. "Conditional multireceptive" (CMR) brain areas (e.g., brainstem reticular formation and amygdala) are critical in mediating neuroplastic changes that facilitate network interactions. CMR neurons receive multiple inputs but exhibit extensive response variability due to milieu and behavioral state changes and are exquisitely sensitive to agents that increase or inhibit GABA-mediated inhibition. Enhanced CMR neuronal responsiveness leads to expression of emergent properties--nonlinear events--resulting from network self-organization. Determining brain disorder mechanisms requires animals that model behaviors and neuroanatomical substrates of human disorders identified by neuroimaging. However, not all sites activated during network operation are requisite for that operation. Other active sites are ancillary, because their blockade does not alter network function. Requisite network sites exhibit emergent properties that are critical targets for pharmacological and stimulation therapies. Improved treatment of brain disorders should involve combined pharmacological and stimulation therapies, guided by neuroimaging, to correct network malfunctions by targeting specific network neurons. © The Author(s) 2015.

  2. On the Potential of Interference Rejection Combining in B4G Networks

    DEFF Research Database (Denmark)

    Tavares, Fernando Menezes Leitão; Berardinelli, Gilberto; Mahmood, Nurul Huda

    2013-01-01

    Beyond 4th Generation (B4G) local area networks will be characterized by the dense uncoordinated deployment of small cells. This paper shows that inter-cell interference, which is a main limiting factor in such networks, can be effectively contained using Interference Rejection Combining (IRC) re...

  3. Machine Learning Classification Combining Multiple Features of A Hyper-Network of fMRI Data in Alzheimer's Disease.

    Science.gov (United States)

    Guo, Hao; Zhang, Fan; Chen, Junjie; Xu, Yong; Xiang, Jie

    2017-01-01

    Exploring functional interactions among various brain regions is helpful for understanding the pathological underpinnings of neurological disorders. Brain networks provide an important representation of those functional interactions, and thus are widely applied in the diagnosis and classification of neurodegenerative diseases. Many mental disorders involve a sharp decline in cognitive ability as a major symptom, which can be caused by abnormal connectivity patterns among several brain regions. However, conventional functional connectivity networks are usually constructed based on pairwise correlations among different brain regions. This approach ignores higher-order relationships, and cannot effectively characterize the high-order interactions of many brain regions working together. Recent neuroscience research suggests that higher-order relationships between brain regions are important for brain network analysis. Hyper-networks have been proposed that can effectively represent the interactions among brain regions. However, this method extracts the local properties of brain regions as features, but ignores the global topology information, which affects the evaluation of network topology and reduces the performance of the classifier. This problem can be compensated by a subgraph feature-based method, but it is not sensitive to change in a single brain region. Considering that both of these feature extraction methods result in the loss of information, we propose a novel machine learning classification method that combines multiple features of a hyper-network based on functional magnetic resonance imaging in Alzheimer's disease. The method combines the brain region features and subgraph features, and then uses a multi-kernel SVM for classification. This retains not only the global topological information, but also the sensitivity to change in a single brain region. To certify the proposed method, 28 normal control subjects and 38 Alzheimer's disease patients were
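
    A loose sketch of the multi-kernel combination step under stated assumptions: one RBF kernel is computed from brain-region (node) features and one from subgraph features, and their weighted sum is fed to an SVM with a precomputed kernel. The feature matrices, sizes and mixing weight are illustrative, not the study's data.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 66                                               # 28 controls + 38 patients
region_feats = rng.normal(size=(n, 90))              # local properties per brain region
subgraph_feats = rng.normal(size=(n, 50))            # subgraph-based indicator features
y = np.array([0] * 28 + [1] * 38)

idx_train, idx_test = train_test_split(np.arange(n), stratify=y, random_state=0)

def mixed_kernel(a_idx, b_idx, w=0.5):
    """Weighted sum of the two RBF kernels restricted to the given samples."""
    k1 = rbf_kernel(region_feats[a_idx], region_feats[b_idx])
    k2 = rbf_kernel(subgraph_feats[a_idx], subgraph_feats[b_idx])
    return w * k1 + (1 - w) * k2

svm = SVC(kernel="precomputed").fit(mixed_kernel(idx_train, idx_train), y[idx_train])
print("test accuracy:", svm.score(mixed_kernel(idx_test, idx_train), y[idx_test]))
```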

  4. Combining morphological analysis and Bayesian Networks for strategic decision support

    CSIR Research Space (South Africa)

    De Waal, AJ

    2007-12-01

    Full Text Available problem spaces. BNs are graphical models which consist of a qualitative and quantitative part. The qualitative part is a cause-and-effect, or causal graph. The quantitative part depicts the strength of the causal relationships between variables. Combining...

  5. Assessment of the predictive accuracy of five in silico prediction tools, alone or in combination, and two metaservers to classify long QT syndrome gene mutations.

    Science.gov (United States)

    Leong, Ivone U S; Stuckey, Alexander; Lai, Daniel; Skinner, Jonathan R; Love, Donald R

    2015-05-13

    Long QT syndrome (LQTS) is an autosomal dominant condition predisposing to sudden death from malignant arrhythmia. Genetic testing identifies many missense single nucleotide variants of uncertain pathogenicity. Establishing genetic pathogenicity is an essential prerequisite to family cascade screening. Many laboratories use in silico prediction tools, either alone or in combination, or metaservers, in order to predict pathogenicity; however, their accuracy in the context of LQTS is unknown. We evaluated the accuracy of five in silico programs and two metaservers in the analysis of LQTS 1-3 gene variants. The in silico tools SIFT, PolyPhen-2, PROVEAN, SNPs&GO and SNAP, either alone or in all possible combinations, and the metaservers Meta-SNP and PredictSNP, were tested on 312 KCNQ1, KCNH2 and SCN5A gene variants that have previously been characterised by either in vitro or co-segregation studies as either "pathogenic" (283) or "benign" (29). The accuracy, sensitivity, specificity and Matthews Correlation Coefficient (MCC) were calculated to determine the best combination of in silico tools for each LQTS gene, and when all genes are combined. The best combination of in silico tools for KCNQ1 is PROVEAN, SNPs&GO and SIFT (accuracy 92.7%, sensitivity 93.1%, specificity 100% and MCC 0.70). The best combination of in silico tools for KCNH2 is SIFT and PROVEAN or PROVEAN, SNPs&GO and SIFT. Both combinations have the same scores for accuracy (91.1%), sensitivity (91.5%), specificity (87.5%) and MCC (0.62). In the case of SCN5A, SNAP and PROVEAN provided the best combination (accuracy 81.4%, sensitivity 86.9%, specificity 50.0%, and MCC 0.32). When all three LQT genes are combined, SIFT, PROVEAN and SNAP is the combination with the best performance (accuracy 82.7%, sensitivity 83.0%, specificity 80.0%, and MCC 0.44). Both metaservers performed better than the single in silico tools; however, they did not perform better than the best performing combination of in silico
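
    An illustrative sketch of how such a comparison could be scored, not the authors' pipeline: each variant carries a known label and the binary calls of several tools, a combination is called "pathogenic" when the majority of its tools agree, and accuracy, sensitivity, specificity and MCC are computed. The tool calls below are simulated, not real predictions.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import confusion_matrix, matthews_corrcoef

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, 300)                                   # 1 = pathogenic, 0 = benign
tools = {name: np.where(rng.random(300) < 0.85, truth, 1 - truth)  # noisy agreement with truth
         for name in ["SIFT", "PolyPhen-2", "PROVEAN", "SNPs&GO", "SNAP"]}

def evaluate(pred, truth):
    tn, fp, fn, tp = confusion_matrix(truth, pred).ravel()
    return {"accuracy": (tp + tn) / len(truth),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "MCC": matthews_corrcoef(truth, pred)}

# Score every combination of three tools by majority vote.
for combo in combinations(tools, 3):
    votes = np.mean([tools[t] for t in combo], axis=0)
    print(combo, evaluate((votes >= 0.5).astype(int), truth))
```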

  6. Automatic tremor detection with a combined cross-correlation and neural network approach

    Science.gov (United States)

    Horstmann, T.; Harrington, R. M.; Cochran, E. S.

    2011-12-01

    Low-amplitude, long-duration, and ambiguous phase arrivals associated with crustal tremor make automatic detection difficult. We present a new detection method that combines cross-correlation with a neural network clustering algorithm. The approach is independent of any a priori assumptions regarding tremor event duration; instead, it examines frequency content, amplitude, and motion products of continuous data to distinguish tremor from earthquakes and background noise in an automated fashion. Because no assumptions regarding event duration are required, the clustering algorithm is therefore able to detect short, burst-like events which may be missed by many current methods. We detect roughly 130 seismic events occurring over 100 minutes, including earthquakes and tremor, in a three-week long test data set of waveforms recorded near Cholame, California. The detection has a success rate of over 90% when compared to visually selected events. We use continuous broadband data from 13 STS-2 seismometers deployed from May 2010 to July 2011 along the Cholame segment of the San Andreas Fault, as well as stations from the HRSN network. The large volume of waveforms requires first reducing the amount of data before applying the neural network algorithm. First, we filter the data between 2 Hz and 8 Hz, calculate envelopes, and decimate them to 0.2 Hz. We cross-correlate signals at each station with two master stations using a moving 520-second time window with a 5-sec time step. We calculate a mean cross-correlation coefficient value between all station pairs for each time window and each master station, and select the master station with the highest mean value. Time windows with mean coefficients exceeding 0.3 are used in the neural network approach, and windows separated by less than 300 seconds are grouped together. In the second step, we apply the neural network algorithm, i.e., Self Organized Map (SOM), to classify the reduced data set. We first calculate feature
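
    A sketch of the data-reduction stage only, under simplifying assumptions (synthetic traces instead of the Cholame deployment, and an assumed raw sampling rate): band-pass 2-8 Hz, envelope, decimation to 0.2 Hz, then the mean cross-correlation between a master station and the others in 520-second windows stepped every 5 seconds.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert, decimate

fs = 20.0                                             # Hz, assumed raw sampling rate
t = np.arange(0, 3600 * fs) / fs                      # one hour of data
rng = np.random.default_rng(0)
traces = [rng.normal(size=t.size) for _ in range(4)]  # 4 stations, noise stand-ins

sos = butter(4, [2, 8], btype="bandpass", fs=fs, output="sos")

def reduce(trace):
    """Filter 2-8 Hz, take the envelope, and decimate down to 0.2 Hz."""
    env = np.abs(hilbert(sosfiltfilt(sos, trace)))
    return decimate(decimate(env, 10), 10)            # 20 Hz -> 0.2 Hz in two stages

envelopes = np.array([reduce(tr) for tr in traces])

def mean_cc(envs, master=0, win=104, step=1):
    """Mean correlation of the master envelope with all others per 520-s window (5-s step)."""
    out = []
    for start in range(0, envs.shape[1] - win, step):
        w = envs[:, start:start + win]
        cc = [np.corrcoef(w[master], w[j])[0, 1] for j in range(len(envs)) if j != master]
        out.append(np.mean(cc))
    return np.array(out)

cc = mean_cc(envelopes)
print("windows exceeding 0.3:", np.where(cc > 0.3)[0])
```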

  7. Combining epidemiological and genetic networks signifies the importance of early treatment in HIV-1 transmission.

    Directory of Open Access Journals (Sweden)

    Narges Zarrabi

    Full Text Available Inferring disease transmission networks is important in epidemiology in order to understand and prevent the spread of infectious diseases. Reconstruction of the infection transmission networks requires insight into viral genome data as well as social interactions. For the HIV-1 epidemic, current research either uses genetic information of patients' virus to infer the past infection events or uses statistics of sexual interactions to model the network structure of viral spreading. Methods for a reliable reconstruction of HIV-1 transmission dynamics, taking into account both molecular and societal data are still lacking. The aim of this study is to combine information from both genetic and epidemiological scales to characterize and analyse a transmission network of the HIV-1 epidemic in central Italy.We introduce a novel filter-reduction method to build a network of HIV infected patients based on their social and treatment information. The network is then combined with a genetic network, to infer a hypothetical infection transmission network. We apply this method to a cohort study of HIV-1 infected patients in central Italy and find that patients who are highly connected in the network have longer untreated infection periods. We also find that the network structures for homosexual males and heterosexual populations are heterogeneous, consisting of a majority of 'peripheral nodes' that have only a few sexual interactions and a minority of 'hub nodes' that have many sexual interactions. Inferring HIV-1 transmission networks using this novel combined approach reveals remarkable correlations between high out-degree individuals and longer untreated infection periods. These findings signify the importance of early treatment and support the potential benefit of wide population screening, management of early diagnoses and anticipated antiretroviral treatment to prevent viral transmission and spread. The approach presented here for reconstructing HIV-1

  8. Combining epidemiological and genetic networks signifies the importance of early treatment in HIV-1 transmission.

    Science.gov (United States)

    Zarrabi, Narges; Prosperi, Mattia; Belleman, Robert G; Colafigli, Manuela; De Luca, Andrea; Sloot, Peter M A

    2012-01-01

    Inferring disease transmission networks is important in epidemiology in order to understand and prevent the spread of infectious diseases. Reconstruction of the infection transmission networks requires insight into viral genome data as well as social interactions. For the HIV-1 epidemic, current research either uses genetic information of patients' virus to infer the past infection events or uses statistics of sexual interactions to model the network structure of viral spreading. Methods for a reliable reconstruction of HIV-1 transmission dynamics, taking into account both molecular and societal data are still lacking. The aim of this study is to combine information from both genetic and epidemiological scales to characterize and analyse a transmission network of the HIV-1 epidemic in central Italy.We introduce a novel filter-reduction method to build a network of HIV infected patients based on their social and treatment information. The network is then combined with a genetic network, to infer a hypothetical infection transmission network. We apply this method to a cohort study of HIV-1 infected patients in central Italy and find that patients who are highly connected in the network have longer untreated infection periods. We also find that the network structures for homosexual males and heterosexual populations are heterogeneous, consisting of a majority of 'peripheral nodes' that have only a few sexual interactions and a minority of 'hub nodes' that have many sexual interactions. Inferring HIV-1 transmission networks using this novel combined approach reveals remarkable correlations between high out-degree individuals and longer untreated infection periods. These findings signify the importance of early treatment and support the potential benefit of wide population screening, management of early diagnoses and anticipated antiretroviral treatment to prevent viral transmission and spread. The approach presented here for reconstructing HIV-1 transmission networks

  9. Ensemble of Neural Classifiers for Scoring Knowledge Base Triples

    OpenAIRE

    Yamada, Ikuya; Sato, Motoki; Shindo, Hiroyuki

    2017-01-01

    This paper describes our approach for the triple scoring task at the WSDM Cup 2017. The task required participants to assign a relevance score for each pair of entities and their types in a knowledge base in order to enhance the ranking results in entity retrieval tasks. We propose an approach wherein the outputs of multiple neural network classifiers are combined using a supervised machine learning model. The experimental results showed that our proposed method achieved the best performance ...
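
    A generic sketch of the combination idea, not the WSDM Cup system itself: the outputs of several neural network classifiers become the input features of a supervised combiner model, here a logistic regression trained on out-of-fold predictions. The data and network sizes are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

base_learners = [(f"mlp{i}", MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=i))
                 for i, h in enumerate([16, 32, 64])]

# The meta-model is trained on out-of-fold predictions of the base networks.
ensemble = StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(), cv=5)
print("stacked CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```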

  10. Principal Component Analysis Coupled with Artificial Neural Networks—A Combined Technique Classifying Small Molecular Structures Using a Concatenated Spectral Database

    Directory of Open Access Journals (Sweden)

    Mihail Lucian Birsa

    2011-10-01

    Full Text Available In this paper we present several expert systems that predict the class identity of the modeled compounds, based on a preprocessed spectral database. The expert systems were built using Artificial Neural Networks (ANN and are designed to predict if an unknown compound has the toxicological activity of amphetamines (stimulant and hallucinogen, or whether it is a nonamphetamine. In attempts to circumvent the laws controlling drugs of abuse, new chemical structures are very frequently introduced on the black market. They are obtained by slightly modifying the controlled molecular structures by adding or changing substituents at various positions on the banned molecules. As a result, no substance similar to those forming a prohibited class may be used nowadays, even if it has not been specifically listed. Therefore, reliable, fast and accessible systems capable of modeling and then identifying similarities at molecular level, are highly needed for epidemiological, clinical, and forensic purposes. In order to obtain the expert systems, we have preprocessed a concatenated spectral database, representing the GC-FTIR (gas chromatography-Fourier transform infrared spectrometry and GC-MS (gas chromatography-mass spectrometry spectra of 103 forensic compounds. The database was used as input for a Principal Component Analysis (PCA. The scores of the forensic compounds on the main principal components (PCs were then used as inputs for the ANN systems. We have built eight PC-ANN systems (principal component analysis coupled with artificial neural network with a different number of input variables: 15 PCs, 16 PCs, 17 PCs, 18 PCs, 19 PCs, 20 PCs, 21 PCs and 22 PCs. The best expert system was found to be the ANN network built with 18 PCs, which accounts for an explained variance of 77%. This expert system has the best sensitivity (a rate of classification C = 100% and a rate of true positives TP = 100%, as well as a good selectivity (a rate of true negatives TN

  11. Combining region- and network-level brain-behavior relationships in a structural equation model.

    Science.gov (United States)

    Bolt, Taylor; Prince, Emily B; Nomi, Jason S; Messinger, Daniel; Llabre, Maria M; Uddin, Lucina Q

    2018-01-15

    Brain-behavior associations in fMRI studies are typically restricted to a single level of analysis: either a circumscribed brain region-of-interest (ROI) or a larger network of brain regions. However, this common practice may not always account for the interdependencies among ROIs of the same network or potentially unique information at the ROI-level, respectively. To account for both sources of information, we combined measurement and structural components of structural equation modeling (SEM) approaches to empirically derive networks from ROI activity, and to assess the association of both individual ROIs and their respective whole-brain activation networks with task performance using three large task-fMRI datasets and two separate brain parcellation schemes. The results for working memory and relational tasks revealed that well-known ROI-performance associations are either non-significant or reversed when accounting for the ROI's common association with its corresponding network, and that the network as a whole is instead robustly associated with task performance. The results for the arithmetic task revealed that in certain cases, an ROI can be robustly associated with task performance, even when accounting for its associated network. The SEM framework described in this study provides researchers additional flexibility in testing brain-behavior relationships, as well as a principled way to combine ROI- and network-levels of analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Quantum ensembles of quantum classifiers.

    Science.gov (United States)

    Schuld, Maria; Petruccione, Francesco

    2018-02-09

    Quantum machine learning witnesses an increasing amount of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which - similar to Bayesian learning - the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighed according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.

  13. A sequential Monte Carlo model of the combined GB gas and electricity network

    International Nuclear Information System (INIS)

    Chaudry, Modassar; Wu, Jianzhong; Jenkins, Nick

    2013-01-01

    A Monte Carlo model of the combined GB gas and electricity network was developed to determine the reliability of the energy infrastructure. The model integrates the gas and electricity network into a single sequential Monte Carlo simulation. The model minimises the combined costs of the gas and electricity network, these include gas supplies, gas storage operation and electricity generation. The Monte Carlo model calculates reliability indices such as loss of load probability and expected energy unserved for the combined gas and electricity network. The intention of this tool is to facilitate reliability analysis of integrated energy systems. Applications of this tool are demonstrated through a case study that quantifies the impact on the reliability of the GB gas and electricity network given uncertainties such as wind variability, gas supply availability and outages to energy infrastructure assets. Analysis is performed over a typical midwinter week on a hypothesised GB gas and electricity network in 2020 that meets European renewable energy targets. The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. The results highlight the value of greater gas storage facilities in enhancing the reliability of the GB energy system given various energy uncertainties. -- Highlights: •A Monte Carlo model of the combined GB gas and electricity network was developed. •Reliability indices are calculated for the combined GB gas and electricity system. •The efficacy of doubling GB gas storage capacity on reliability of the energy system is assessed. •Integrated reliability indices could be used to assess the impact of investment in energy assets
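
    A toy numeric sketch of the reliability indices only, with invented numbers: each simulated hour draws available generation (including wind) and a gas-supply cap against demand, and loss-of-load probability (LOLP) and expected energy unserved (EENS) are accumulated for the combined system. This is not the GB model described above.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 7 * 24                    # a typical midwinter week
n_trials = 2000                   # Monte Carlo trials

demand = 40 + 10 * rng.random((n_trials, hours))                    # GW, illustrative
thermal = np.where(rng.random((n_trials, hours)) < 0.95, 45, 30)    # random plant outages
wind = 10 * rng.random((n_trials, hours))                           # wind variability
gas_limit = np.where(rng.random((n_trials, hours)) < 0.98, 60, 40)  # gas supply availability

supply = np.minimum(thermal + wind, gas_limit)   # the gas network caps usable generation
shortfall = np.clip(demand - supply, 0, None)

lolp = np.mean(shortfall > 0)                    # loss of load probability
eens = shortfall.sum(axis=1).mean()              # expected energy unserved, GWh per week
print(f"LOLP = {lolp:.4f}, EENS = {eens:.1f} GWh per week")
```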

  14. Research on Large-Scale Road Network Partition and Route Search Method Combined with Traveler Preferences

    Directory of Open Access Journals (Sweden)

    De-Xin Yu

    2013-01-01

    Full Text Available Combined with improved Pallottino parallel algorithm, this paper proposes a large-scale route search method, which considers travelers’ route choice preferences. And urban road network is decomposed into multilayers effectively. Utilizing generalized travel time as road impedance function, the method builds a new multilayer and multitasking road network data storage structure with object-oriented class definition. Then, the proposed path search algorithm is verified by using the real road network of Guangzhou city as an example. By the sensitive experiments, we make a comparative analysis of the proposed path search method with the current advanced optimal path algorithms. The results demonstrate that the proposed method can increase the road network search efficiency by more than 16% under different search proportion requests, node numbers, and computing process numbers, respectively. Therefore, this method is a great breakthrough in the guidance field of urban road network.

  15. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    This paper introduces a mutually beneficial interplay between network coding and scalable video source coding in order to propose an energy-efficient video streaming approach accommodating multiple heterogeneous receivers, for which current solutions are either inefficient or insufficient. State...... support of multi-resolution video streaming....

  16. Neural Network for Combining Linear and Non-Linear Modelling of Dynamic Systems

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    1994-01-01

    The purpose of this paper is to develop a method to combine linear models with MLP networks. In other words to find a method to make a non-linear and multivariable model that performs at least as good as a linear model, when the training data lacks information.

  17. Modeling the future evolution of the virtual water trade network: A combination of network and gravity models

    Science.gov (United States)

    Sartori, Martina; Schiavo, Stefano; Fracasso, Andrea; Riccaboni, Massimo

    2017-12-01

    The paper investigates how the topological features of the virtual water (VW) network and the size of the associated VW flows are likely to change over time, under different socio-economic and climate scenarios. We combine two alternative models of network formation -a stochastic and a fitness model, used to describe the structure of VW flows- with a gravity model of trade to predict the intensity of each bilateral flow. This combined approach is superior to existing methodologies in its ability to replicate the observed features of VW trade. The insights from the models are used to forecast future VW flows in 2020 and 2050, under different climatic scenarios, and compare them with future water availability. Results suggest that the current trend of VW exports is not sustainable for all countries. Moreover, our approach highlights that some VW importers might be exposed to "imported water stress" as they rely heavily on imports from countries whose water use is unsustainable.
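
    A minimal sketch of the gravity-model component only: once a network model has decided which bilateral links exist, each flow's intensity is predicted from gravity-style covariates via a log-linear regression. All data and elasticities below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_links = 500
gdp_exporter = rng.lognormal(3, 1, n_links)
gdp_importer = rng.lognormal(3, 1, n_links)
distance = rng.lognormal(7, 0.5, n_links)

# Synthetic "observed" virtual water flows following a gravity law with noise.
flow = gdp_exporter ** 0.8 * gdp_importer ** 0.7 / distance ** 1.1 * rng.lognormal(0, 0.3, n_links)

X = np.log(np.column_stack([gdp_exporter, gdp_importer, distance]))
model = LinearRegression().fit(X, np.log(flow))
print("estimated elasticities (GDP_exp, GDP_imp, distance):", model.coef_)
```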

  18. Classifier fusion for VoIP attacks classification

    Science.gov (United States)

    Safarik, Jakub; Rezac, Filip

    2017-05-01

    SIP is one of the most successful protocols in the field of IP telephony communication; it establishes and manages VoIP calls. As the number of SIP implementations rises, we can expect a higher number of attacks on the communication system in the near future. This work aims at malicious SIP traffic classification. A number of machine learning algorithms have been developed for attack classification. The paper presents a comparison of current research and the use of a classifier fusion method leading to a potential decrease in the classification error rate. Using a combination of classifiers yields a more robust solution, avoiding difficulties that may affect single algorithms. Different voting schemes, combination rules, and classifiers are discussed to improve the overall performance. All classifiers have been trained on real malicious traffic. The concept of traffic monitoring relies on a network of honeypot nodes. These honeypots run in several networks spread across different locations. Separation of the honeypots allows us to gain independent and trustworthy attack information.
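
    A hedged example of one of the fusion options mentioned above, soft voting across heterogeneous classifiers; the traffic features here are synthetic placeholders, not the honeypot data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for labelled SIP traffic with three attack classes.
X, y = make_classification(n_samples=1000, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)

fusion = VotingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=0)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB())],
    voting="soft")                       # average the predicted class probabilities

for name, clf in [("SVM alone", SVC(random_state=0)), ("fusion", fusion)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```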

  19. Classification of protein fold classes by knot theory and prediction of folds by neural networks: A combined theoretical and experimental approach

    DEFF Research Database (Denmark)

    Ramnarayan, K.; Bohr, Henrik; Jalkanen, Karl J.

    2008-01-01

    We present different means of classifying protein structure. One is made rigorous by mathematical knot invariants that coincide reasonably well with ordinary graphical fold classification and another classification is by packing analysis. Furthermore, when constructing our mathematical fold classifications, we utilize standard neural network methods for predicting protein fold classes from amino acid sequences. We also make an analysis of the redundancy of the structural classifications in relation to function and ligand binding. Finally we advocate the use of combining the measurement of the VA...

  20. Combining sequence and Gene Ontology for protein module detection in the Weighted Network.

    Science.gov (United States)

    Yu, Yang; Liu, Jie; Feng, Nuan; Song, Bo; Zheng, Zeyu

    2017-01-07

    Studies of protein modules in a Protein-Protein Interaction (PPI) network contribute greatly to the understanding of biological mechanisms. With the development of computing science, computational approaches have played an important role in locating protein modules. In this paper, a new approach combining Gene Ontology and amino acid background frequency is introduced to detect protein modules in weighted PPI networks. The proposed approach mainly consists of three parts: feature extraction, weighted graph construction and protein complex detection. Firstly, topology-sequence information is utilized to represent the features of a protein complex. Secondly, six types of weighted graph are constructed by combining the PPI network with Gene Ontology information. Lastly, a protein complex detection algorithm is applied to the weighted graph, which locates clusters based on three conditions: density, network diameter and the included angle cosine. Experiments have been conducted on two yeast protein complex benchmark sets, and the results show that the approach is more effective than five typical algorithms in terms of F-measure and precision. The combination of the protein interaction network with sequence and Gene Ontology data helps to improve performance and provides an alternative method for protein module detection. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. A Soft-Hard Combination-Based Cooperative Spectrum Sensing Scheme for Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Nhu Tri Do

    2015-02-01

    Full Text Available In this paper we propose a soft-hard combination scheme, called SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster based network in which Likelihood Ratio Test (LRT-based soft combination is applied at each cluster, and weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of −15 dB. In addition, the computational complexity of the LRT is reduced since we derive the closed-form expression of the probability density function of LRT value. The SHC scheme also takes into account the different effects of large scale fading on different users in the wide area network. The simulation results show that the SHC scheme not only provides the better sensing performance compared to the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.

  2. A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.

    Science.gov (United States)

    Do, Nhu Tri; An, Beongku

    2015-02-13

    In this paper we propose a soft-hard combination scheme, called SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive the closed-form expression of the probability density function of LRT value. The SHC scheme also takes into account the different effects of large scale fading on different users in the wide area network. The simulation results show that the SHC scheme not only provides the better sensing performance compared to the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
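
    A simplified numeric sketch of the two-level idea under stated assumptions: within each cluster the members' energy statistics are summed into one soft test statistic compared to a threshold (a crude stand-in for the LRT-based soft combination), and the cluster decisions are fused at the fusion centre with a weighted vote. The SNR, thresholds and network sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, users_per_cluster, samples = 3, 4, 200
snr_linear = 10 ** (-15 / 10)                    # roughly -15 dB primary signal
primary_present = True

def cluster_energies():
    """Per-user energy statistic over `samples` noise (plus signal) samples."""
    noise = rng.normal(size=(users_per_cluster, samples))
    signal = np.sqrt(snr_linear) * rng.normal(size=(users_per_cluster, samples))
    x = noise + signal if primary_present else noise
    return (x ** 2).mean(axis=1)

cluster_decisions, cluster_weights = [], []
for _ in range(n_clusters):
    e = cluster_energies()
    soft_stat = e.sum()                              # soft combination within the cluster
    threshold = users_per_cluster * (1 + 0.5 * snr_linear)   # illustrative threshold
    cluster_decisions.append(int(soft_stat > threshold))
    cluster_weights.append(soft_stat)                # weight a cluster by its confidence

# Weighted decision fusion at the fusion centre.
weights = np.array(cluster_weights) / np.sum(cluster_weights)
global_decision = int(np.dot(weights, cluster_decisions) > 0.5)
print("cluster decisions:", cluster_decisions, "-> global decision:", global_decision)
```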

  3. Customising the therapeutic response of signalling networks to promote antitumor responses by drug combinations

    Directory of Open Access Journals (Sweden)

    Alexey eGoltsov

    2014-02-01

    Full Text Available Drug resistance, de novo and acquired, pervades cellular signalling networks from one signalling motif to another as a result of cancer progression and/or drug intervention. This resistance is one of the key determinants of efficacy in targeted anticancer drug therapy. Although poorly understood, drug resistance is already being addressed in combination therapy by selecting drug targets where sensitivity increases due to combination components or as a result of de novo or acquired mutations. Additionally, successive drug combinations have shown low resistance potency. To promote a rational, systematic development of combination therapies, it is necessary to establish the underlying mechanisms that drive the advantages of drug combinations and design methods to determine advanced targets for drug combination therapy. Based on a joint systems analysis of cellular signalling network (SN response and its sensitivity to drug action and oncogenic mutations, we describe an in silico method to analyse the targets of drug combinations. The method explores mechanisms of sensitizing the SN through combination of two drugs targeting vertical signalling pathways. We propose a paradigm of SN response customization by one drug to both maximize the effect of another drug in combination and promote a robust therapeutic response against oncogenic mutations. The method was applied to the customization of the response of the ErbB/PI3K/PTEN/AKT pathway by combination of drugs targeting HER2 receptors and proteins in the downstream pathway. The results of a computational experiment showed that the modification of the SN response from hyperbolic to smooth sigmoid response by manipulation of two drugs in combination leads to greater robustness in therapeutic response against oncogenic mutations determining cancer heterogeneity. The application of this method in drug combination co-development suggests a combined evaluation of inhibition effects along with the

  4. Combined principal component preprocessing and n-tuple neural networks for improved classification

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Linneberg, Christian

    2000-01-01

    We present a combined principal component analysis/neural network scheme for classification. The data used to illustrate the method consist of spectral fluorescence recordings from seven different production facilities, and the task is to relate an unknown sample to one of these seven factories. ...

  5. Neural Network for Combining Linear and Non-Linear Modelling of Dynamic Systems

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    1994-01-01

    The purpose of this paper is to develop a method to combine linear models with MLP networks. In other words to find a method to make a non-linear and multivariable model that performs at least as good as a linear model, when the training data lacks information....

  6. Material basis of Chinese herbal formulas explored by combining pharmacokinetics with network pharmacology.

    Directory of Open Access Journals (Sweden)

    Lixia Pei

    Full Text Available The clinical application of Traditional Chinese medicine (TCM, using several herbs in combination (called formulas, has a history of more than one thousand years. However, the bioactive compounds that account for their therapeutic effects remain unclear. We hypothesized that the material basis of a formula are those compounds with a high content in the decoction that are maintained at a certain level in the system circulation. Network pharmacology provides new methodological insights for complicated system studies. In this study, we propose combining pharmacokinetic (PK analysis with network pharmacology to explore the material basis of TCM formulas as exemplified by the Bushen Zhuanggu formula (BZ composed of Psoralea corylifolia L., Aconitum carmichaeli Debx., and Cnidium monnieri (L. Cuss. A sensitive and credible liquid chromatography tandem mass spectrometry (LC-MS/MS method was established for the simultaneous determination of 15 compounds present in the three herbs. The concentrations of these compounds in the BZ decoction and in rat plasma after oral BZ administration were determined. Up to 12 compounds were detected in the BZ decoction, but only 5 could be analyzed using PK parameters. Combined PK results, network pharmacology analysis revealed that 4 compounds might serve as the material basis for BZ. We concluded that a sensitive, reliable, and suitable LC-MS/MS method for both the composition and pharmacokinetic study of BZ has been established. The combination of PK with network pharmacology might be a potent method for exploring the material basis of TCM formulas.

  7. Combining Cloud Networks and Course Management Systems for Enhanced Analysis in Teaching Laboratories

    Science.gov (United States)

    Abrams, Neal M.

    2012-01-01

    A cloud network system is combined with standard computing applications and a course management system to provide a robust method for sharing data among students. This system provides a unique method to improve data analysis by easily increasing the amount of sampled data available for analysis. The data can be shared within one course as well as…

  8. Hybrid ANN optimized artificial fish swarm algorithm based classifier for classification of suspicious lesions in breast DCE-MRI

    Science.gov (United States)

    Janaki Sathya, D.; Geetha, K.

    2017-12-01

    Automatic mass or lesion classification systems are developed to aid in distinguishing between malignant and benign lesions present in breast DCE-MR images; to be successful for clinical use, such systems need to improve both the sensitivity and specificity of DCE-MR image interpretation. A new classifier (a set of features together with a classification method) based on artificial neural networks trained using the artificial fish swarm optimization (AFSO) algorithm is proposed in this paper. The basic idea behind the proposed classifier is to use the AFSO algorithm to search for the best combination of synaptic weights for the neural network. An optimal set of features based on statistical textural features is presented. The experimental results for the proposed suspicious lesion classifier confirm that the resulting classifier performs better than other such classifiers reported in the literature. This classifier therefore demonstrates that improvements in both sensitivity and specificity are possible through automated image analysis.
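
    A very simplified sketch of the idea of searching a small network's weights with a swarm-style method instead of back-propagation; the moves below only loosely mimic the "follow the best" and random "prey" behaviours of AFSO and are not the full algorithm, and the data and network shape are invented.

```python
import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

def forward(w, X):
    """Single-hidden-layer network; w packs both weight matrices."""
    W1, W2 = w[:8 * 6].reshape(8, 6), w[8 * 6:].reshape(6, 1)
    h = np.tanh(X @ W1)
    return (1 / (1 + np.exp(-(h @ W2)))).ravel()

def accuracy(w):
    return np.mean((forward(w, X) > 0.5) == y)

rng = np.random.default_rng(0)
dim = 8 * 6 + 6 * 1
swarm = rng.normal(size=(20, dim))                       # 20 "fish", each a weight vector

for step in range(200):
    scores = np.array([accuracy(f) for f in swarm])
    best = swarm[scores.argmax()]
    # Each fish either follows the best fish or explores randomly ("prey").
    follow = swarm + 0.3 * (best - swarm)
    prey = swarm + 0.3 * rng.normal(size=swarm.shape)
    candidates = np.where(rng.random((len(swarm), 1)) < 0.5, follow, prey)
    improved = np.array([accuracy(c) for c in candidates]) >= scores
    swarm[improved] = candidates[improved]               # keep moves that do not hurt accuracy

print("best training accuracy found:", max(accuracy(f) for f in swarm))
```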

  9. Inferring Regulatory Networks by Combining Perturbation Screens and Steady State Gene Expression Profiles

    Science.gov (United States)

    Michailidis, George

    2014-01-01

    Reconstructing transcriptional regulatory networks is an important task in functional genomics. Data obtained from experiments that perturb genes by knockouts or RNA interference contain useful information for addressing this reconstruction problem. However, such data can be limited in size and/or are expensive to acquire. On the other hand, observational data of the organism in steady state (e.g., wild-type) are more readily available, but their informational content is inadequate for the task at hand. We develop a computational approach to appropriately utilize both data sources for estimating a regulatory network. The proposed approach is based on a three-step algorithm to estimate the underlying directed but cyclic network, that uses as input both perturbation screens and steady state gene expression data. In the first step, the algorithm determines causal orderings of the genes that are consistent with the perturbation data, by combining an exhaustive search method with a fast heuristic that in turn couples a Monte Carlo technique with a fast search algorithm. In the second step, for each obtained causal ordering, a regulatory network is estimated using a penalized likelihood based method, while in the third step a consensus network is constructed from the highest scored ones. Extensive computational experiments show that the algorithm performs well in reconstructing the underlying network and clearly outperforms competing approaches that rely only on a single data source. Further, it is established that the algorithm produces a consistent estimate of the regulatory network. PMID:24586224

  10. Promotion of cooperation in the form C0C1D classified by 'degree grads' in a scale-free network

    International Nuclear Information System (INIS)

    Zhao, Li; Ye, Xiang-Jun; Huang, Zi-Gang; Sun, Jin-Tu; Yang, Lei; Wang, Ying-Hai; Do, Younghae

    2010-01-01

    In this paper, we revisit the issue of the public goods game (PGG) on a heterogeneous graph. By introducing a new effective topology parameter, 'degree grads' ψ, we clearly classify the agents into three kinds, namely, C0, C1, and D. The mechanism for the heterogeneous topology promoting cooperation is discussed in detail from the perspective of C0C1D, which reflects the fact that the unreasoning imitation behaviour of C1 agents, who are 'cheated' by the well-paid C0 agents inhabiting special positions, stabilizes the formation of the cooperation community. The analytical and simulation results for certain parameters are found to coincide well with each other. The C0C1D case provides a picture of the actual behaviours in real society and thus is potentially of interest

  11. Two critical brain networks for generation and combination of remote associations.

    Science.gov (United States)

    Bendetowicz, David; Urbanski, Marika; Garcin, Béatrice; Foulon, Chris; Levy, Richard; Bréchemier, Marie-Laure; Rosso, Charlotte; Thiebaut de Schotten, Michel; Volle, Emmanuelle

    2018-01-01

    Recent functional imaging findings in humans indicate that creativity relies on spontaneous and controlled processes, possibly supported by the default mode and the fronto-parietal control networks, respectively. Here, we examined the ability to generate and combine remote semantic associations, in relation to creative abilities, in patients with focal frontal lesions. Voxel-based lesion-deficit mapping, disconnection-deficit mapping and network-based lesion-deficit approaches revealed critical prefrontal nodes and connections for distinct mechanisms related to creative cognition. Damage to the right medial prefrontal region, or its potential disrupting effect on the default mode network, affected the ability to generate remote ideas, likely by altering the organization of semantic associations. Damage to the left rostrolateral prefrontal region and its connections, or its potential disrupting effect on the left fronto-parietal control network, spared the ability to generate remote ideas but impaired the ability to appropriately combine remote ideas. Hence, the current findings suggest that damage to specific nodes within the default mode and fronto-parietal control networks led to a critical loss of verbal creative abilities by altering distinct cognitive mechanisms. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Bioconductor's EnrichmentBrowser: seamless navigation through combined results of set- & network-based enrichment analysis.

    Science.gov (United States)

    Geistlinger, Ludwig; Csaba, Gergely; Zimmer, Ralf

    2016-01-20

    Enrichment analysis of gene expression data is essential to find functional groups of genes whose interplay can explain experimental observations. Numerous methods have been published that either ignore (set-based) or incorporate (network-based) known interactions between genes. However, the often subtle benefits and disadvantages of the individual methods are confusing for most biological end users and there is currently no convenient way to combine methods for an enhanced result interpretation. We present the EnrichmentBrowser package as an easily applicable software that enables (1) the application of the most frequently used set-based and network-based enrichment methods, (2) their straightforward combination, and (3) a detailed and interactive visualization and exploration of the results. The package is available from the Bioconductor repository and implements additional support for standardized expression data preprocessing, differential expression analysis, and definition of suitable input gene sets and networks. The EnrichmentBrowser package implements essential functionality for the enrichment analysis of gene expression data. It combines the advantages of set-based and network-based enrichment analysis in order to derive high-confidence gene sets and biological pathways that are differentially regulated in the expression data under investigation. Besides, the package facilitates the visualization and exploration of such sets and pathways.

  13. Classifying Returns as Extreme

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2014-01-01

    I consider extreme returns for the stock and bond markets of 14 EU countries using two classification schemes: One, the univariate classification scheme from the previous literature that classifies extreme returns for each market separately, and two, a novel multivariate classification scheme...... that classifies extreme returns for several markets jointly. The new classification scheme holds about the same information as the old one, while demanding a shorter sample period. The new classification scheme is useful....

  14. Combined cycle power plant with indirect dry cooling tower forecasting using artificial neural network

    Directory of Open Access Journals (Sweden)

    Asad Dehghani Samani

    2017-07-01

    Full Text Available Application of an Artificial Neural Network (ANN) in modeling a combined cycle power plant (CCPP) with a dry cooling tower (Heller tower) is investigated in this paper. Prediction of power plant output (megawatts) under different working conditions was made using a multi-layer feed-forward ANN, and training was performed on operational data using back-propagation. Two ANNs were constructed, one for the steam turbine (ST) and one for the main cooling system (MCS). Results indicate that the ANN model is effective in predicting the power plant output with good accuracy.
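
    A minimal sketch of the kind of model the abstract describes (a multi-layer feed-forward ANN trained by back-propagation to predict plant output); the operating variables and data below are invented placeholders, not the plant's operational records:

```python
# Hedged sketch: feed-forward ANN regression for power output prediction.
# Feature names and data are hypothetical stand-ins for operational data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical operating variables: ambient temperature, wind speed,
# steam flow, cooling-tower inlet temperature.
X = rng.uniform([0, 0, 100, 20], [40, 15, 400, 60], size=(1000, 4))
y = 0.8 * X[:, 2] - 1.5 * X[:, 0] + rng.normal(0, 5, 1000)  # synthetic MW output

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0),
)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```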

  15. Combining many interaction networks to predict gene function and analyze gene lists.

    Science.gov (United States)

    Mostafavi, Sara; Morris, Quaid

    2012-05-01

    In this article, we review how interaction networks can be used alone or in combination in an automated fashion to provide insight into gene and protein function. We describe the concept of a "gene-recommender system" that can be applied to any large collection of interaction networks to make predictions about gene or protein function based on a query list of proteins that share a function of interest. We discuss these systems in general and focus on one specific system, GeneMANIA, that has unique features and uses different algorithms from the majority of other systems. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Dynamic Response Genes in CD4+ T Cells Reveal a Network of Interactive Proteins that Classifies Disease Activity in Multiple Sclerosis

    Directory of Open Access Journals (Sweden)

    Sandra Hellberg

    2016-09-01

    Full Text Available Multiple sclerosis (MS is a chronic inflammatory disease of the CNS and has a varying disease course as well as variable response to treatment. Biomarkers may therefore aid personalized treatment. We tested whether in vitro activation of MS patient-derived CD4+ T cells could reveal potential biomarkers. The dynamic gene expression response to activation was dysregulated in patient-derived CD4+ T cells. By integrating our findings with genome-wide association studies, we constructed a highly connected MS gene module, disclosing cell activation and chemotaxis as central components. Changes in several module genes were associated with differences in protein levels, which were measurable in cerebrospinal fluid and were used to classify patients from control individuals. In addition, these measurements could predict disease activity after 2 years and distinguish low and high responders to treatment in two additional, independent cohorts. While further validation is needed in larger cohorts prior to clinical implementation, we have uncovered a set of potentially promising biomarkers.

  17. Classification of wheat varieties: Use of two-dimensional gel electrophoresis for varieties that can not be classified by matrix assisted laser desorption/ionization-time of flight-mass spectrometry and an artificial neural network

    DEFF Research Database (Denmark)

    Jacobsen, Susanne; Nesic, Ljiljana; Petersen, Marianne Kjerstine

    2001-01-01

    Analyzing a gliadin extract by matrix assisted laser desorption/ionization-time of flight-mass spectrometry (MALDI- TOF-MS) combined with an artificial neural network (ANN) is a suitable method for identification of wheat varieties. However, the ANN can not distinguish between all different wheat...

  18. Classification of wheat varieties: Use of two-dimensional gel electrophoresis for varieties that can not be classified by matrix assisted laser desorption/ionization-time of flight-mass spectrometry and an artificial neural network

    DEFF Research Database (Denmark)

    Jacobsen, Susanne; Nesic, Ljiljana; Petersen, Marianne Kjerstine

    2001-01-01

    Analyzing a gliadin extract by matrix assisted laser desorption/ionization-time of flight-mass spectrometry (MALDI- TOF-MS) combined with an artificial neural network (ANN) is a suitable method for identification of wheat varieties. However, the ANN can not distinguish between all different wheat...... not be separated by MALDI-TOF-MS and NN....

  19. Hierarchical mixtures of naive Bayes classifiers

    NARCIS (Netherlands)

    Wiering, M.A.

    2002-01-01

    Naive Bayes classifiers tend to perform very well on a large number of problem domains, although their representation power is quite limited compared to more sophisticated machine learning algorithms. In this paper we study combining multiple naive Bayes classifiers by using the hierarchical
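
    A simplified illustration of combining multiple naive Bayes classifiers (a flat probability-averaging ensemble, not the paper's hierarchical mixture):

```python
# Simplified sketch: average the predicted class probabilities of several
# naive Bayes classifiers, each trained on a bootstrap resample of the data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
members = []
for _ in range(10):
    idx = rng.integers(0, len(X_tr), len(X_tr))   # bootstrap sample
    members.append(GaussianNB().fit(X_tr[idx], y_tr[idx]))

# Combine by averaging posterior probabilities (soft voting).
avg_proba = np.mean([m.predict_proba(X_te) for m in members], axis=0)
y_pred = avg_proba.argmax(axis=1)
print("Ensemble accuracy:", (y_pred == y_te).mean())
```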

  20. Hybrid Neuro-Fuzzy Classifier Based On Nefclass Model

    Directory of Open Access Journals (Sweden)

    Bogdan Gliwa

    2011-01-01

    Full Text Available The paper presents a hybrid neuro-fuzzy classifier, based on a modified NEFCLASS model. The presented classifier was compared to popular classifiers – neural networks and k-nearest neighbours. The efficiency of the modifications in the classifier was compared with the methods used in the original NEFCLASS model (learning methods). The accuracy of the classifier was tested using 3 datasets from the UCI Machine Learning Repository: iris, wine and breast cancer Wisconsin. Moreover, the influence of ensemble classification methods on classification accuracy is presented.
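
    NEFCLASS itself is not available in common Python libraries, so the hedged sketch below only reproduces the comparison baseline from the abstract: a neural network and k-nearest neighbours evaluated on the same three UCI datasets (iris, wine, breast cancer Wisconsin):

```python
# Sketch of the comparison baseline only (NEFCLASS not included).
from sklearn.datasets import load_iris, load_wine, load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

datasets = {"iris": load_iris(), "wine": load_wine(),
            "breast cancer": load_breast_cancer()}
models = {
    "neural network": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                    random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for dname, data in datasets.items():
    for mname, model in models.items():
        pipe = make_pipeline(StandardScaler(), model)
        score = cross_val_score(pipe, data.data, data.target, cv=5).mean()
        print(f"{dname:>14} | {mname:>14}: {score:.3f}")
```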

  1. Method for Constructing Composite Response Surfaces by Combining Neural Networks with Polynomial Interpolation or Estimation Techniques

    Science.gov (United States)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2007-01-01

    A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials; and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate mode
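
    A hedged, much-simplified sketch of the composite-response-surface idea (one small neural network per sampled 'vertex' value of the simple parameter, with polynomial interpolation across vertices); the vertex values and the synthetic test function are assumptions for illustration, not the patented implementation:

```python
# Simplified sketch: per-vertex neural networks over the complex parameter c,
# interpolated across the simple parameter s with Lagrange shape functions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def truth(s, c):                       # synthetic ground truth used for the demo
    return (1.0 + 0.8 * s) * np.sin(3.0 * c) + 0.2 * c**2

rng = np.random.default_rng(0)
vertices = np.array([0.0, 0.5, 1.0])   # sampled values of the simple parameter s
nets = []
for sv in vertices:
    c_train = rng.uniform(-1, 1, (300, 1))
    y_train = truth(sv, c_train[:, 0]) + rng.normal(0, 0.02, 300)
    nets.append(MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                             max_iter=5000, random_state=0).fit(c_train, y_train))

def composite(s, c):
    """Lagrange interpolation across vertices of the per-vertex NN outputs."""
    values = np.array([net.predict(np.atleast_2d(c))[0] for net in nets])
    weights = [np.prod([(s - vertices[j]) / (vertices[i] - vertices[j])
                        for j in range(len(vertices)) if j != i])
               for i in range(len(vertices))]
    return float(np.dot(weights, values))

print("prediction:", composite(0.25, 0.3), " truth:", truth(0.25, 0.3))
```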

  2. Combined expert system/neural networks method for process fault diagnosis

    Science.gov (United States)

    Reifman, Jaques; Wei, Thomas Y. C.

    1995-01-01

    A two-level hierarchical approach for process fault diagnosis of an operating system employs a function-oriented approach at a first level and a component characteristic-oriented approach at a second level, where the decision-making procedure is structured in order of decreasing intelligence with increasing precision. At the first level, the diagnostic method is general and has knowledge of the overall process including a wide variety of plant transients and the functional behavior of the process components. An expert system classifies malfunctions by function to narrow the diagnostic focus to a particular set of possible faulty components that could be responsible for the detected functional misbehavior of the operating system. At the second level, the diagnostic method limits its scope to component malfunctions, using more detailed knowledge of component characteristics. Trained artificial neural networks are used to further narrow the diagnosis and to uniquely identify the faulty component by classifying the abnormal condition data as a failure of one of the hypothesized components through component characteristics. Once an anomaly is detected, the hierarchical structure is used to successively narrow the diagnostic focus from a function misbehavior, i.e., a function oriented approach, until the fault can be determined, i.e., a component characteristic-oriented approach.
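
    An illustrative toy version of the two-level idea (rule-based narrowing followed by an ANN over component fault signatures); the symptoms, components, and data are invented and do not come from the patented system:

```python
# Toy two-level diagnosis: rules narrow to candidate components, an ANN
# trained on synthetic fault signatures picks one of the candidates.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
components = ["pump", "valve", "heat_exchanger", "sensor"]
# Level 1: expert-system style rules mapping a functional symptom to suspects.
rules = {
    "low_flow":  ["pump", "valve"],
    "high_temp": ["heat_exchanger", "sensor"],
}

# Synthetic component fault signatures (4 process measurements per sample).
centers = {"pump": [1, 0, 0, 0], "valve": [0, 1, 0, 0],
           "heat_exchanger": [0, 0, 1, 0], "sensor": [0, 0, 0, 1]}
X = np.vstack([rng.normal(centers[c], 0.3, (200, 4)) for c in components])
y = np.repeat(np.arange(len(components)), 200)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

def diagnose(symptom, measurement):
    suspects = rules[symptom]                       # level 1: narrow the focus
    proba = net.predict_proba([measurement])[0]     # level 2: ANN over components
    scores = {c: proba[components.index(c)] for c in suspects}
    return max(scores, key=scores.get)

print(diagnose("low_flow", [0.9, 0.1, 0.0, 0.1]))   # expected: "pump"
```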

  3. Combined expert system/neural networks method for process fault diagnosis

    Science.gov (United States)

    Reifman, J.; Wei, T.Y.C.

    1995-08-15

    A two-level hierarchical approach for process fault diagnosis of an operating system employs a function-oriented approach at a first level and a component characteristic-oriented approach at a second level, where the decision-making procedure is structured in order of decreasing intelligence with increasing precision. At the first level, the diagnostic method is general and has knowledge of the overall process including a wide variety of plant transients and the functional behavior of the process components. An expert system classifies malfunctions by function to narrow the diagnostic focus to a particular set of possible faulty components that could be responsible for the detected functional misbehavior of the operating system. At the second level, the diagnostic method limits its scope to component malfunctions, using more detailed knowledge of component characteristics. Trained artificial neural networks are used to further narrow the diagnosis and to uniquely identify the faulty component by classifying the abnormal condition data as a failure of one of the hypothesized components through component characteristics. Once an anomaly is detected, the hierarchical structure is used to successively narrow the diagnostic focus from a function misbehavior, i.e., a function oriented approach, until the fault can be determined, i.e., a component characteristic-oriented approach. 9 figs.

  4. Inter-synaptic learning of combination rules in a cortical network model

    Directory of Open Access Journals (Sweden)

    Frédéric Lavigne

    2014-08-01

    Full Text Available Selecting responses in working memory while processing combinations of stimuli depends strongly on their relations stored in long-term memory. However, the learning of XOR-like combinations of stimuli and responses according to complex rules raises the issue of the non-linear separability of the responses within the space of stimuli. One proposed solution is to add neurons that perform a stage of non-linear processing between the stimuli and responses, at the cost of increasing the network size. Based on the non-linear integration of synaptic inputs within dendritic compartments, we propose here an inter-synaptic (IS) learning algorithm that determines the probability of potentiating/depressing each synapse as a function of the co-activity of the other synapses within the same dendrite. The IS learning is effective with random connectivity and without either a priori wiring or additional neurons. Our results show that IS learning generates efficacy values that are sufficient for the processing of XOR-like combinations, on the basis of the sole correlational structure of the stimuli and responses. We analyze the types of dendrites involved in terms of the number of synapses from pre-synaptic neurons coding for the stimuli and responses. The synaptic efficacy values obtained show that different dendrites specialize in the detection of different combinations of stimuli. The resulting behavior of the cortical network model is analyzed as a function of inter-synaptic vs. Hebbian learning. Combinatorial priming effects show that the retrospective activity of neurons coding for the stimuli triggers XOR-like combination-selective prospective activity of neurons coding for the expected response. The synergistic effects of inter-synaptic learning and of mixed-coding neurons are simulated. The results show that, although each mechanism is sufficient by itself, their combined effects improve the performance of the network.

  5. Right putamen and age are the most discriminant features to diagnose Parkinson's disease by using 123I-FP-CIT brain SPET data by using an artificial neural network classifier, a classification tree (ClT).

    Science.gov (United States)

    Cascianelli, S; Tranfaglia, C; Fravolini, M L; Bianconi, F; Minestrini, M; Nuvoli, S; Tambasco, N; Dottorini, M E; Palumbo, B

    2017-01-01

    The differential diagnosis of Parkinson's disease (PD) and other conditions, such as essential tremor and drug-induced parkinsonian syndrome or normal aging brain, represents a diagnostic challenge. 123I-FP-CIT brain SPET is able to contribute to the differential diagnosis. Semiquantitative analysis of radiopharmaceutical uptake in basal ganglia (caudate nuclei and putamina) is very useful to support the diagnostic process. An artificial neural network classifier using 123I-FP-CIT brain SPET data, a classification tree (CIT), was applied. CIT is an automatic classifier composed of a set of logical rules, organized as a decision tree, to produce an optimised threshold-based classification of data and provide discriminative cut-off values. We applied a CIT to 123I-FP-CIT brain SPET semiquantitative data to obtain cut-off values of radiopharmaceutical uptake ratios in caudate nuclei and putamina, with the aim of diagnosing PD versus other conditions. We retrospectively investigated 187 patients undergoing 123I-FP-CIT brain SPET (Millenium VG, G.E.M.S.) with semiquantitative analysis performed with Basal Ganglia (BasGan) V2 software according to EANM guidelines; among them, 113 were affected by PD (PD group) and 74 (N group) by other non-parkinsonian conditions, such as essential tremor and drug-induced PD. The PD group included 113 subjects (60M and 53F, age 60-81 yrs) with Hoehn and Yahr score (HY): 0.5-1.5 and Unified Parkinson Disease Rating Scale (UPDRS) score: 6-38; the N group included 74 subjects (36M and 38F, age 60-80 yrs). All subjects were clinically followed for at least 6-18 months to confirm the diagnosis. To examine the data obtained using the CIT, for each of the 1,000 experiments carried out, 10% of patients were randomly selected as the CIT training set, while the remaining 90% validated the trained CIT, and the percentage of the validation data correctly classified into the two groups of patients was computed. The expected performance of an "average
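
    A hedged sketch of the classification-tree step using made-up uptake ratios rather than real BasGan values; it only shows how threshold-based cut-off rules of this kind can be derived:

```python
# Sketch: decision tree over semiquantitative uptake ratios plus age.
# The synthetic cohort below is illustrative, not patient data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = ["right_caudate", "left_caudate", "right_putamen", "left_putamen", "age"]
n = 200
# Synthetic cohort: PD cases (label 1) with lower putamen uptake than controls (label 0).
uptake_pd  = rng.normal([2.4, 2.4, 1.6, 1.7], 0.3, (n, 4))
uptake_ctl = rng.normal([3.0, 3.0, 2.6, 2.6], 0.3, (n, 4))
age = rng.uniform(60, 81, (2 * n, 1))
X = np.hstack([np.vstack([uptake_pd, uptake_ctl]), age])
y = np.array([1] * n + [0] * n)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))   # human-readable cut-off rules
```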

  6. Combining the Performance Strengths of the Logistic Regression and Neural Network Models: A Medical Outcomes Approach

    Directory of Open Access Journals (Sweden)

    Wun Wong

    2003-01-01

    Full Text Available The assessment of medical outcomes is important in the effort to contain costs, streamline patient management, and codify medical practices. As such, it is necessary to develop predictive models that will make accurate predictions of these outcomes. The neural network methodology has often been shown to perform as well as, if not better than, the logistic regression methodology in terms of sample predictive performance. However, the logistic regression method is capable of providing an explanation regarding the relationship(s) between variables. This explanation is often crucial to understanding the clinical underpinnings of the disease process. Given the respective strengths of the methodologies in question, the combined use of a statistical (i.e., logistic regression) and machine learning (i.e., neural network) technology in the classification of medical outcomes is warranted under appropriate conditions. The study discusses these conditions and describes an approach for combining the strengths of the models.
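
    One simple way to realize such a combination (not necessarily the authors' scheme) is to keep the logistic regression for its interpretable coefficients while averaging its predicted probabilities with those of a neural network, as sketched below on a public dataset:

```python
# Sketch: interpretable logistic regression + neural network, combined by
# averaging posterior probabilities. Dataset is a stand-in, not medical outcomes data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lr = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)).fit(X_tr, y_tr)
nn = make_pipeline(StandardScaler(),
                   MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000,
                                 random_state=0)).fit(X_tr, y_tr)

# Interpretation comes from the logistic regression coefficients ...
print("LR coefficients (first five features):", lr[-1].coef_[0][:5])
# ... while prediction uses the averaged posterior probabilities of both models.
proba = (lr.predict_proba(X_te) + nn.predict_proba(X_te)) / 2
print("combined accuracy:", (proba.argmax(axis=1) == y_te).mean())
```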

  7. THERMODYNAMIC ANALYSIS AND SIMULATION OF A NEW COMBINED POWER AND REFRIGERATION CYCLE USING ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Hossein Rezvantalab

    2011-01-01

    Full Text Available In this study, a new combined power and refrigeration cycle is proposed, which combines the Rankine and absorption refrigeration cycles. Using a binary ammonia-water mixture as the working fluid, this combined cycle produces both power and refrigeration output simultaneously by employing only one external heat source. In order to achieve the highest possible exergy efficiency, a secondary turbine is inserted to expand the hot weak solution leaving the boiler. Moreover, an artificial neural network (ANN) is used to simulate the thermodynamic properties and the relationship between the input thermodynamic variables and the cycle performance. It is shown that turbine inlet pressure, as well as heat source and refrigeration temperatures, have significant effects on the net power output, refrigeration output and exergy efficiency of the combined cycle. In addition, the results of the ANN are in excellent agreement with the mathematical simulation and cover a wider range for evaluation of cycle performance.

  8. Classifying features in CT imagery: accuracy for some single- and multiple-species classifiers

    Science.gov (United States)

    Daniel L. Schmoldt; Jing He; A. Lynn Abbott

    1998-01-01

    Our current approach to automatically label features in CT images of hardwood logs classifies each pixel of an image individually. These feature classifiers use a back-propagation artificial neural network (ANN) and feature vectors that include a small, local neighborhood of pixels and the distance of the target pixel to the center of the log. Initially, this type of...

  9. Enhanced Stochastic Methodology for Combined Architecture of E-Commerce and Security Networks

    OpenAIRE

    Kim, Song-Kyoo

    2009-01-01

    This paper deals with a network architecture which is a combination of electronic commerce and security systems in typical Internet ecosystems. The e-commerce model, typically known as online shopping, can be considered as a multichannel queueing system. On the other hand, the stochastic security system is designed to improve the reliability and availability of the e-commerce system. The security system in this paper deals with a complex system that consists of main unrelia...

  10. High-risk plaque features can be detected in non-stenotic carotid plaques of patients with ischaemic stroke classified as cryptogenic using combined 18F-FDG PET/MR imaging

    International Nuclear Information System (INIS)

    Hyafil, Fabien; Schindler, Andreas; Obenhuber, Tilman; Saam, Tobias; Sepp, Dominik; Hoehn, Sabine; Poppert, Holger; Bayer-Karpinska, Anna; Boeckh-Behrens, Tobias; Hacker, Marcus; Nekolla, Stephan G.; Rominger, Axel; Dichgans, Martin; Schwaiger, Markus

    2016-01-01

    The aim of this study was to investigate in 18 patients with ischaemic stroke classified as cryptogenic and presenting non-stenotic carotid atherosclerotic plaques the morphological and biological aspects of these plaques with magnetic resonance imaging (MRI) and 18F-fluoro-deoxyglucose positron emission tomography (18F-FDG PET) imaging. Carotid arteries were imaged 150 min after injection of 18F-FDG with a combined PET/MRI system. American Heart Association (AHA) lesion type and plaque composition were determined on consecutive MRI axial sections (n = 460) in both carotid arteries. 18F-FDG uptake in carotid arteries was quantified using tissue to background ratio (TBR) on corresponding PET sections. The prevalence of complicated atherosclerotic plaques (AHA lesion type VI) detected with high-resolution MRI was significantly higher in the carotid artery ipsilateral to the ischaemic stroke as compared to the contralateral side (39 vs 0 %; p = 0.001). For all other AHA lesion types, no significant differences were found between ipsilateral and contralateral sides. In addition, atherosclerotic plaques classified as high-risk lesions with MRI (AHA lesion type VI) were associated with higher 18F-FDG uptake in comparison with other AHA lesions (TBR = 3.43 ± 1.13 vs 2.41 ± 0.84, respectively; p < 0.001). Furthermore, patients presenting at least one complicated lesion (AHA lesion type VI) with MRI showed significantly higher 18F-FDG uptake in both carotid arteries (ipsilateral and contralateral to the stroke) in comparison with carotid arteries of patients showing no complicated lesion with MRI (mean TBR = 3.18 ± 1.26 and 2.80 ± 0.94 vs 2.19 ± 0.57, respectively; p < 0.05) in favour of a diffuse inflammatory process along both carotid arteries associated with complicated plaques. Morphological and biological features of high-risk plaques can be detected with 18F-FDG PET/MRI in non-stenotic atherosclerotic plaques ipsilateral to the stroke, suggesting a causal

  11. High-risk plaque features can be detected in non-stenotic carotid plaques of patients with ischaemic stroke classified as cryptogenic using combined {sup 18}F-FDG PET/MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Hyafil, Fabien [Technische Universitaet Muenchen, Department of Nuclear Medicine, Klinikum rechts der Isar, Munich (Germany); Bichat University Hospital, Department of Nuclear Medicine, Paris (France); Schindler, Andreas; Obenhuber, Tilman; Saam, Tobias [Ludwig Maximilians University Hospital Munich, Institute for Clinical Radiology, Munich (Germany); Sepp, Dominik; Hoehn, Sabine; Poppert, Holger [Technische Universitaet Muenchen, Department of Neurology, Klinikum rechts der Isar, Munich (Germany); Bayer-Karpinska, Anna [Ludwig Maximilians University Hospital Munich, Institute for Stroke and Dementia Research, Munich (Germany); Boeckh-Behrens, Tobias [Technische Universitaet Muenchen, Department of Neuroradiology, Klinikum Rechts der Isar, Munich (Germany); Hacker, Marcus [Medical University of Vienna, Division of Nuclear Medicine, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria); Nekolla, Stephan G. [Technische Universitaet Muenchen, Department of Nuclear Medicine, Klinikum rechts der Isar, Munich (Germany); Partner Site Munich Heart Alliance, German Centre for Cardiovascular Research (DZHK), Munich (Germany); Rominger, Axel [Ludwig Maximilians University Hospital Munich, Department of Nuclear Medicine, Munich (Germany); Dichgans, Martin [Technische Universitaet Muenchen, Department of Neurology, Klinikum rechts der Isar, Munich (Germany); Munich Cluster of Systems Neurology (SyNergy), Munich (Germany); Schwaiger, Markus [Technische Universitaet Muenchen, Department of Nuclear Medicine, Klinikum rechts der Isar, Munich (Germany)

    2016-02-15

    The aim of this study was to investigate in 18 patients with ischaemic stroke classified as cryptogenic and presenting non-stenotic carotid atherosclerotic plaques the morphological and biological aspects of these plaques with magnetic resonance imaging (MRI) and 18F-fluoro-deoxyglucose positron emission tomography (18F-FDG PET) imaging. Carotid arteries were imaged 150 min after injection of 18F-FDG with a combined PET/MRI system. American Heart Association (AHA) lesion type and plaque composition were determined on consecutive MRI axial sections (n = 460) in both carotid arteries. 18F-FDG uptake in carotid arteries was quantified using tissue to background ratio (TBR) on corresponding PET sections. The prevalence of complicated atherosclerotic plaques (AHA lesion type VI) detected with high-resolution MRI was significantly higher in the carotid artery ipsilateral to the ischaemic stroke as compared to the contralateral side (39 vs 0 %; p = 0.001). For all other AHA lesion types, no significant differences were found between ipsilateral and contralateral sides. In addition, atherosclerotic plaques classified as high-risk lesions with MRI (AHA lesion type VI) were associated with higher 18F-FDG uptake in comparison with other AHA lesions (TBR = 3.43 ± 1.13 vs 2.41 ± 0.84, respectively; p < 0.001). Furthermore, patients presenting at least one complicated lesion (AHA lesion type VI) with MRI showed significantly higher 18F-FDG uptake in both carotid arteries (ipsilateral and contralateral to the stroke) in comparison with carotid arteries of patients showing no complicated lesion with MRI (mean TBR = 3.18 ± 1.26 and 2.80 ± 0.94 vs 2.19 ± 0.57, respectively; p < 0.05) in favour of a diffuse inflammatory process along both carotid arteries associated with complicated plaques. Morphological and biological features of high-risk plaques can be detected with 18F-FDG PET/MRI in non-stenotic atherosclerotic plaques ipsilateral

  12. Analysis and evolution of air quality monitoring networks using combined statistical information indexes

    Directory of Open Access Journals (Sweden)

    Axel Osses

    2013-10-01

    Full Text Available In this work, we present combined statistical indexes for evaluating air quality monitoring networks based on concepts derived from information theory and the Kullback–Leibler divergence. More precisely, we introduce: (1) the standard measure of complementary mutual information, or ‘specificity’ index; (2) a new measure of information gain, or ‘representativity’ index; (3) the information gaps associated with the evolution of a network; and (4) the normalised information distance used in clustering analysis. All these information concepts are illustrated by applying them to 14 yr of data collected by the air quality monitoring network in Santiago de Chile (33.5° S, 70.5° W, 500 m a.s.l.). We find that downtown stations, located in a relatively flat area of the Santiago basin, generally show high ‘representativity’ and low ‘specificity’, whereas the contrary is found for a station located in a canyon to the east of the basin, consistent with known emission and circulation patterns of Santiago. We also show interesting applications of information gain to the analysis of the evolution of a network, where the choice of background information is also discussed, and of mutual information distance to the classification of stations. Our analyses show that indexes such as those presented here should be used in a complementary way when addressing the analysis of an air quality network for planning and evaluation purposes.
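
    A minimal sketch of the information-theoretic ingredient underlying these indexes (the Kullback–Leibler divergence between two stations' concentration histograms); the data are synthetic placeholders, not Santiago measurements:

```python
# Sketch: KL divergence between two stations' (synthetic) concentration distributions.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)
station_a = rng.lognormal(mean=3.0, sigma=0.5, size=5000)   # e.g. a downtown station
station_b = rng.lognormal(mean=3.3, sigma=0.7, size=5000)   # e.g. a canyon station

bins = np.linspace(0, 120, 40)
p, _ = np.histogram(station_a, bins=bins, density=True)
q, _ = np.histogram(station_b, bins=bins, density=True)
p, q = p + 1e-12, q + 1e-12          # avoid zero bins before normalising
p, q = p / p.sum(), q / q.sum()

print("KL(p || q) =", entropy(p, q))  # scipy's entropy(p, q) is the KL divergence
```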

  13. Combining inferred regulatory and reconstructed metabolic networks enhances phenotype prediction in yeast.

    Science.gov (United States)

    Wang, Zhuo; Danziger, Samuel A; Heavner, Benjamin D; Ma, Shuyi; Smith, Jennifer J; Li, Song; Herricks, Thurston; Simeonidis, Evangelos; Baliga, Nitin S; Aitchison, John D; Price, Nathan D

    2017-05-01

    Gene regulatory and metabolic network models have been used successfully in many organisms, but inherent differences between them make networks difficult to integrate. Probabilistic Regulation Of Metabolism (PROM) provides a partial solution, but it does not incorporate network inference and underperforms in eukaryotes. We present an Integrated Deduced REgulation And Metabolism (IDREAM) method that combines statistically inferred Environment and Gene Regulatory Influence Network (EGRIN) models with the PROM framework to create enhanced metabolic-regulatory network models. We used IDREAM to predict phenotypes and genetic interactions between transcription factors and genes encoding metabolic activities in the eukaryote, Saccharomyces cerevisiae. IDREAM models contain many fewer interactions than PROM and yet produce significantly more accurate growth predictions. IDREAM consistently outperformed PROM using any of three popular yeast metabolic models and across three experimental growth conditions. Importantly, IDREAM's enhanced accuracy makes it possible to identify subtle synthetic growth defects. With experimental validation, these novel genetic interactions involving the pyruvate dehydrogenase complex suggested a new role for fatty acid-responsive factor Oaf1 in regulating acetyl-CoA production in glucose grown cells.

  14. Optimal Operation of Network-Connected Combined Heat and Powers for Customer Profit Maximization

    Directory of Open Access Journals (Sweden)

    Da Xie

    2016-06-01

    Full Text Available Network-connected combined heat and powers (CHPs), owned by a community, can export surplus heat and electricity to corresponding heat and electric networks after community loads are satisfied. This paper proposes a new optimization model for network-connected CHP operation. Both CHPs’ overall efficiency and heat to electricity ratio (HTER) are assumed to vary with loading levels. Based on different energy flow scenarios where heat and electricity are exported to the network from the community or imported, four profit models are established accordingly. They reflect the different relationships between CHP energy supply and community load demand across time. A discrete optimization model is then developed to maximize the profit for the community. The models are derived from the intervals determined by the daily operation modes of CHP and real-time buying and selling prices of heat, electricity and natural gas. By demonstrating the proposed models on a 1 MW network-connected CHP, results show that the community profits are maximized in energy markets. Thus, the proposed optimization approach can help customers to devise optimal CHP operating strategies for maximizing benefits.

  15. Combining in silico evolution and nonlinear dimensionality reduction to redesign responses of signaling networks.

    Science.gov (United States)

    Prescott, Aaron M; Abel, Steven M

    2017-01-13

    The rational design of network behavior is a central goal of synthetic biology. Here, we combine in silico evolution with nonlinear dimensionality reduction to redesign the responses of fixed-topology signaling networks and to characterize sets of kinetic parameters that underlie various input-output relations. We first consider the earliest part of the T cell receptor (TCR) signaling network and demonstrate that it can produce a variety of input-output relations (quantified as the level of TCR phosphorylation as a function of the characteristic TCR binding time). We utilize an evolutionary algorithm (EA) to identify sets of kinetic parameters that give rise to: (i) sigmoidal responses with the activation threshold varied over 6 orders of magnitude, (ii) a graded response, and (iii) an inverted response in which short TCR binding times lead to activation. We also consider a network with both positive and negative feedback and use the EA to evolve oscillatory responses with different periods in response to a change in input. For each targeted input-output relation, we conduct many independent runs of the EA and use nonlinear dimensionality reduction to embed the resulting data for each network in two dimensions. We then partition the results into groups and characterize constraints placed on the parameters by the different targeted response curves. Our approach provides a way (i) to guide the design of kinetic parameters of fixed-topology networks to generate novel input-output relations and (ii) to constrain ranges of biological parameters using experimental data. In the cases considered, the network topologies exhibit significant flexibility in generating alternative responses, with distinct patterns of kinetic rates emerging for different targeted responses.

  16. Combining in silico evolution and nonlinear dimensionality reduction to redesign responses of signaling networks

    Science.gov (United States)

    Prescott, Aaron M.; Abel, Steven M.

    2016-12-01

    The rational design of network behavior is a central goal of synthetic biology. Here, we combine in silico evolution with nonlinear dimensionality reduction to redesign the responses of fixed-topology signaling networks and to characterize sets of kinetic parameters that underlie various input-output relations. We first consider the earliest part of the T cell receptor (TCR) signaling network and demonstrate that it can produce a variety of input-output relations (quantified as the level of TCR phosphorylation as a function of the characteristic TCR binding time). We utilize an evolutionary algorithm (EA) to identify sets of kinetic parameters that give rise to: (i) sigmoidal responses with the activation threshold varied over 6 orders of magnitude, (ii) a graded response, and (iii) an inverted response in which short TCR binding times lead to activation. We also consider a network with both positive and negative feedback and use the EA to evolve oscillatory responses with different periods in response to a change in input. For each targeted input-output relation, we conduct many independent runs of the EA and use nonlinear dimensionality reduction to embed the resulting data for each network in two dimensions. We then partition the results into groups and characterize constraints placed on the parameters by the different targeted response curves. Our approach provides a way (i) to guide the design of kinetic parameters of fixed-topology networks to generate novel input-output relations and (ii) to constrain ranges of biological parameters using experimental data. In the cases considered, the network topologies exhibit significant flexibility in generating alternative responses, with distinct patterns of kinetic rates emerging for different targeted responses.

  17. Prediction of Increasing Production Activities using Combination of Query Aggregation on Complex Events Processing and Neural Network

    Directory of Open Access Journals (Sweden)

    Achmad Arwan

    2016-07-01

    Full Text Available Abstract (translated from Indonesian): Production, orders, sales, and shipments are a series of interrelated events in the manufacturing industry. The results of these events are recorded in an event log. Complex Event Processing is a method used to analyse whether certain combinations of event patterns (opportunities/threats) occur in a system, so that they can be handled quickly and appropriately. An artificial neural network is the method used to classify data on production increases. The recorded series of processes that lead to a production increase is used as training data to obtain the activation function of the artificial neural network. The aggregated event-log counts are fed into the neural network input to compute the activation value. When the activation value exceeds a specified threshold, the system emits a signal to increase production; otherwise, the system keeps monitoring events. Experimental results show that the accuracy of this method is 77% over 39 event streams. Keywords: complex event processing, event, artificial neural network, prediction of production increase, process. Abstract: Productions, orders, sales, and shipments are series of interrelated events within the manufacturing industry. These events are recorded in the event log. Complex event processing is a method used to analyze whether there are patterns of combinations of certain events (opportunities/threats) that occur in a system, so that they can be addressed quickly and appropriately. An artificial neural network is the method used to classify production-increase activities. The series of events that cause the increase in production is used as a dataset to train the weights of the neural network, which yield the activation value. An aggregated stream of events is inserted into the neural network input to compute the value of activation. When the value is over a certain threshold (the activation value results

  18. Classifying Cereal Data

    Science.gov (United States)

    The DSQ includes questions about cereal intake and allows respondents up to two responses on which cereals they consume. We classified each cereal reported first by hot or cold, and then along four dimensions: density of added sugars, whole grains, fiber, and calcium.

  19. What Combinations of Contents is Driving Popularity in IPTV-based Social Networks?

    Science.gov (United States)

    Bhatt, Rajen

    IPTV-based Social Networks are gaining popularity, with TV programs delivered over an IP connection and internet-like applications available on the home TV. One such application is rating TV programs over some predefined genres. In this paper, we suggest an approach for building a recommender system to be used by content distributors, publishers, and motion picture producers and directors to decide which combinations of content may drive popularity or unpopularity. This may then be used for creating a proper mixture of media content that can drive high popularity. It may also be used to cater customized content to groups of users whose tastes are similar, so that the combinations of content driving popularity within a given group are also similar. We use a novel approach for this formulation utilizing fuzzy decision trees. Computational experiments performed over a real-world program review database show that the proposed approach is very effective for understanding content combinations.

  20. Energetic materials identification by laser-induced breakdown spectroscopy combined with artificial neural network.

    Science.gov (United States)

    Farhadian, Amir Hossein; Tehrani, Masoud Kavosh; Keshavarz, Mohammad Hossein; Darbani, Seyyed Mohammad Reza

    2017-04-20

    In this study, for the first time to the best of our knowledge, a combination of the laser-induced breakdown spectroscopy (LIBS) technique and artificial neural network (ANN) analysis has been implemented for the identification of energetic materials, including TNT, RDX, black powder, and propellant. Aluminum, copper, inconel, and graphite have also been used for more accurate investigation and comparison. After the LIBS test and spectrum acquisition on all samples in both air and argon atmospheres, optimized neural networks were designed from the LIBS data. Based on the input data, three ANN configurations are proposed: the first is fed with the whole LIBS spectra in air (ANN1), the second with the principal component analysis (PCA) scores of each spectrum in air (ANN2), and the third with the PCA scores of each spectrum in argon (ANN3). According to the results, the network error is very low for ANN2 and ANN3, and the best identification and discrimination was obtained with ANN3. Finally, to validate and further investigate this combined method, we also used Al/RDX standard samples for analysis.
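
    A hedged sketch of the ANN2/ANN3 pipeline shape (PCA scores of each spectrum fed to a neural network classifier); the spectra below are synthetic stand-ins, not LIBS measurements of the materials studied:

```python
# Sketch: PCA scores of (synthetic) spectra fed to an MLP classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_classes, n_per_class, n_wavelengths = 4, 60, 500
# Each class gets a few characteristic emission "lines" plus noise.
X, y = [], []
for k in range(n_classes):
    template = np.zeros(n_wavelengths)
    template[rng.choice(n_wavelengths, 8, replace=False)] = rng.uniform(1, 5, 8)
    X.append(template + rng.normal(0, 0.3, (n_per_class, n_wavelengths)))
    y.append(np.full(n_per_class, k))
X, y = np.vstack(X), np.concatenate(y)

model = make_pipeline(PCA(n_components=10),
                      MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                    random_state=0))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```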

  1. A computational framework for gene regulatory network inference that combines multiple methods and datasets.

    Science.gov (United States)

    Gupta, Rita; Stincone, Anna; Antczak, Philipp; Durant, Sarah; Bicknell, Roy; Bikfalvi, Andreas; Falciani, Francesco

    2011-04-13

    Reverse engineering in systems biology entails inference of gene regulatory networks from observational data. These data typically include gene expression measurements of wild type and mutant cells in response to a given stimulus. It has been shown that when more than one type of experiment is used in the network inference process, the accuracy is higher. Therefore the development of generally applicable and effective methodologies that embed multiple sources of information in a single computational framework is a worthwhile objective. This paper presents a new method for network inference, which uses multi-objective optimisation (MOO) to integrate multiple inference methods and experiments. We illustrate the potential of the methodology by combining ODE and correlation-based network inference procedures as well as time course and gene inactivation experiments. Here we show that our methodology is effective for a wide spectrum of data sets and method integration strategies. The approach we present in this paper is flexible and can be used in any scenario that benefits from integration of multiple sources of information and modelling procedures in the inference process. Moreover, the application of this method to two case studies representative of bacteria and vertebrate systems has shown potential in identifying key regulators of important biological processes.

  2. Combining Topological Hardware and Topological Software: Color-Code Quantum Computing with Topological Superconductor Networks

    Science.gov (United States)

    Litinski, Daniel; Kesselring, Markus S.; Eisert, Jens; von Oppen, Felix

    2017-07-01

    We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall-superconductor hybrids.

  3. Exploring the Combination of Dempster-Shafer Theory and Neural Network for Predicting Trust and Distrust

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2016-01-01

    Full Text Available In social media, trust and distrust among users are important factors in helping users make decisions, dissect information, and receive recommendations. However, the sparsity and imbalance of social relations bring great difficulties and challenges to predicting trust and distrust. Meanwhile, there are numerous inducing factors that determine trust and distrust relations. The relationships among these inducing factors may be dependent, independent, or conflicting. Dempster-Shafer theory and neural networks are effective and efficient strategies to deal with these difficulties and challenges. In this paper, we study trust and distrust prediction based on the combination of Dempster-Shafer theory and a neural network. We first analyze the inducing factors of trust and distrust, namely homophily, status theory, and emotion tendency. Then, we quantify the inducing factors of trust and distrust, take these features as evidence, and construct evidence prototypes as input nodes of a multilayer neural network. Finally, we propose a framework for predicting trust and distrust which uses the multilayer neural network to model the implementation of Dempster-Shafer theory in different hidden layers, aiming to overcome the lack of an optimization method in Dempster-Shafer theory. Experimental results on a real-world dataset demonstrate the effectiveness of the proposed framework.
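
    For reference, a standalone illustration of Dempster's rule of combination, the Dempster-Shafer operation that the proposed framework models within the network's hidden layers; the mass values are arbitrary examples, not values learned from the dataset:

```python
# Dempster's rule of combination over a two-element frame {trust, distrust}.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions defined over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

T, D = frozenset({"trust"}), frozenset({"distrust"})
TD = T | D                                   # the whole frame (ignorance)
m_homophily = {T: 0.6, D: 0.1, TD: 0.3}      # example evidence from homophily features
m_status    = {T: 0.4, D: 0.3, TD: 0.3}      # example evidence from status-theory features
print(combine(m_homophily, m_status))
```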

  4. Intelligent Garbage Classifier

    Directory of Open Access Journals (Sweden)

    Ignacio Rodríguez Novelle

    2008-12-01

    Full Text Available IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  5. Classifying Linear Canonical Relations

    OpenAIRE

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  6. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    Science.gov (United States)

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  7. Combining network analysis with Cognitive Work Analysis: insights into social organisational and cooperation analysis.

    Science.gov (United States)

    Houghton, Robert J; Baber, Chris; Stanton, Neville A; Jenkins, Daniel P; Revell, Kirsten

    2015-01-01

    Cognitive Work Analysis (CWA) allows complex, sociotechnical systems to be explored in terms of their potential configurations. However, CWA does not explicitly analyse the manner in which person-to-person communication is performed in these configurations. Consequently, the combination of CWA with Social Network Analysis provides a means by which CWA output can be analysed to consider communication structure. The approach is illustrated through a case study of a military planning team. The case study shows how actor-to-actor and actor-to-function mapping can be analysed, in terms of centrality, to produce metrics of system structure under different operating conditions. In this paper, a technique for building social network diagrams from CWA is demonstrated. The approach allows analysts to appreciate the potential impact of organisational structure on a command system.
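
    A small sketch of the Social Network Analysis step only (not the CWA itself): build an actor-to-actor communication network and compute centrality metrics; the actors and links are invented placeholders:

```python
# Sketch: centrality metrics of an invented actor-to-actor network.
import networkx as nx

links = [("commander", "planner"), ("planner", "intel_officer"),
         ("planner", "logistics"), ("intel_officer", "analyst"),
         ("logistics", "analyst")]
G = nx.Graph(links)

print("degree centrality:     ", nx.degree_centrality(G))
print("betweenness centrality:", nx.betweenness_centrality(G))
```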

  8. Analysing collaboration among HIV agencies through combining network theory and relational coordination.

    Science.gov (United States)

    Khosla, Nidhi; Marsteller, Jill Ann; Hsu, Yea Jen; Elliott, David L

    2016-02-01

    Agencies with different foci (e.g. nutrition, social, medical, housing) serve people living with HIV (PLHIV). Serving needs of PLHIV comprehensively requires a high degree of coordination among agencies which often benefits from more frequent communication. We combined Social Network theory and Relational Coordination theory to study coordination among HIV agencies in Baltimore. Social Network theory implies that actors (e.g., HIV agencies) establish linkages amongst themselves in order to access resources (e.g., information). Relational Coordination theory suggests that high quality coordination among agencies or teams relies on the seven dimensions of frequency, timeliness and accuracy of communication, problem-solving communication, knowledge of agencies' work, mutual respect and shared goals. We collected data on frequency of contact from 57 agencies using a roster method. Response options were ordinal ranging from 'not at all' to 'daily'. We analyzed data using social network measures. Next, we selected agencies with which at least one-third of the sample reported monthly or more frequent interaction. This yielded 11 agencies whom we surveyed on seven relational coordination dimensions with questions scored on a Likert scale of 1-5. Network density, defined as the proportion of existing connections to all possible connections, was 20% when considering monthly or higher interaction. Relational coordination scores from individual agencies to others ranged between 1.17 and 5.00 (maximum possible score 5). The average scores for different dimensions across all agencies ranged between 3.30 and 4.00. Shared goals (4.00) and mutual respect (3.91) scores were highest, while scores such as knowledge of each other's work and problem-solving communication were relatively lower. Combining theoretically driven analyses in this manner offers an innovative way to provide a comprehensive picture of inter-agency coordination and the quality of exchange that underlies
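
    A toy illustration of the density figure quoted above (the proportion of reported frequent-contact ties out of all possible directed agency pairs); the tie list is invented, not the Baltimore survey data:

```python
# Toy network density calculation for a directed tie list.
agencies = ["A", "B", "C", "D", "E"]
ties = {("A", "B"), ("B", "A"), ("A", "C"), ("D", "E")}   # directed "frequent contact" ties

possible = len(agencies) * (len(agencies) - 1)            # ordered pairs, no self-ties
density = len(ties) / possible
print(f"density = {len(ties)}/{possible} = {density:.0%}")
```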

  9. Choice of implant combinations in total hip replacement: systematic review and network meta-analysis.

    Science.gov (United States)

    López-López, José A; Humphriss, Rachel L; Beswick, Andrew D; Thom, Howard H Z; Hunt, Linda P; Burston, Amanda; Fawsitt, Christopher G; Hollingworth, William; Higgins, Julian P T; Welton, Nicky J; Blom, Ashley W; Marques, Elsa M R

    2017-11-02

    Objective  To compare the survival of different implant combinations for primary total hip replacement (THR). Design  Systematic review and network meta-analysis. Data sources  Medline, Embase, The Cochrane Library, ClinicalTrials.gov, WHO International Clinical Trials Registry Platform, and the EU Clinical Trials Register. Review methods  Published randomised controlled trials comparing different implant combinations. Implant combinations were defined by bearing surface materials (metal-on-polyethylene, ceramic-on-polyethylene, ceramic-on-ceramic, or metal-on-metal), head size (large ≥36 mm or small meta-analysis for revision. There was no evidence that the risk of revision surgery was reduced by other implant combinations compared with the reference implant combination. Although estimates are imprecise, metal-on-metal, small head, cemented implants (hazard ratio 4.4, 95% credible interval 1.6 to 16.6) and resurfacing (12.1, 2.1 to 120.3) increase the risk of revision at 0-2 years after primary THR compared with the reference implant combination. Similar results were observed for the 2-10 years period. 31 studies (2888 patients) were included in the analysis of Harris hip score. No implant combination had a better score than the reference implant combination. Conclusions  Newer implant combinations were not found to be better than the reference implant combination (metal-on-polyethylene (not highly cross linked), small head, cemented) in terms of risk of revision surgery or Harris hip score. Metal-on-metal, small head, cemented implants and resurfacing increased the risk of revision surgery compared with the reference implant combination. The results were consistent with observational evidence and were replicated in sensitivity analysis but were limited by poor reporting across studies. Systematic review registration  PROSPERO CRD42015019435. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence

  10. Adaptive Linear and Normalized Combination of Radial Basis Function Networks for Function Approximation and Regression

    Directory of Open Access Journals (Sweden)

    Yunfeng Wu

    2014-01-01

    This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine component radial basis function networks (RBFNs) to implement better function approximation and regression tasks. The optimization of the fusion weights is obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform the selective ensemble of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of the experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve a higher accuracy (measured in terms of mean-squared error) and better fidelity (characterized by the normalized correlation coefficient) of approximation, compared with the popular simple average, weighted average, and Bagging methods.
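    As a rough sketch of the kind of constrained quadratic programme described above, the fusion weights of several component learners can be fitted to minimise the squared ensemble error; the non-negativity bound and the toy data here are assumptions, not the paper's exact formulation.

    ```python
    # Sketch of a linear combination of component learners whose fusion weights
    # come from a constrained quadratic programme (assumed constraints: weights
    # are non-negative and sum to one; the paper's constraints may differ).
    import numpy as np
    from scipy.optimize import minimize

    def fuse_weights(component_preds, y):
        """component_preds: (n_samples, n_models) predictions; y: targets."""
        P = np.asarray(component_preds)
        n_models = P.shape[1]

        def sse(w):                      # squared error of the weighted ensemble
            return np.sum((P @ w - y) ** 2)

        cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
        bounds = [(0.0, 1.0)] * n_models
        w0 = np.full(n_models, 1.0 / n_models)
        res = minimize(sse, w0, bounds=bounds, constraints=cons, method="SLSQP")
        return res.x

    # toy usage: three noisy component models approximating the same target
    rng = np.random.default_rng(0)
    y = np.sin(np.linspace(0, 3, 50))
    preds = np.column_stack([y + 0.1 * rng.normal(size=50) for _ in range(3)])
    print("fusion weights:", np.round(fuse_weights(preds, y), 3))
    ```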

  11. Classifying TDSS Stellar Variables

    Science.gov (United States)

    Amaro, Rachael Christina; Green, Paul J.; TDSS Collaboration

    2017-01-01

    The Time Domain Spectroscopic Survey (TDSS), a subprogram of SDSS-IV eBOSS, obtains classification/discovery spectra of point-source photometric variables selected from PanSTARRS and SDSS multi-color light curves regardless of object color or lightcurve shape. Tens of thousands of TDSS spectra are already available and have been spectroscopically classified both via pipeline and by visual inspection. About half of these spectra are quasars, half are stars. Our goal is to classify the stars with their correct variability types. We do this by acquiring public multi-epoch light curves for brighter stars (r < […]), including pulsating white dwarfs, and other exotic systems. The key difference between our catalog and others is that along with the light curves, we will be using TDSS spectra to help in the classification of variable type, as spectra are rich with information allowing estimation of physical parameters like temperature, metallicity, gravity, etc. This work was supported by the SDSS Research Experience for Undergraduates program, which is funded by a grant from the Sloan Foundation to the Astrophysical Research Consortium.

  12. Classifying basic research designs.

    Science.gov (United States)

    Burkett, G L

    1990-01-01

    Considerable confusion over terminology for classifying basic types of research design in family medicine stems from the rich variety of substantive topics studied by family medicine researchers, differences in research terminology among the disciplines that family medicine research draws from, and lack of uniform research design terminology within these disciplines themselves. Many research design textbooks themselves fail to specify the dimensions on which research designs are classified or the logic underlying the classification systems proposed. This paper describes a typology based on three dimensions that may be used to characterize the basic design qualities of any study. These dimensions are: 1) the nature of the research objective (exploratory, descriptive, or analytic); 2) the time frame under investigation (retrospective, cross-sectional, or prospective); and 3) whether the investigator intervenes in the events under study (observational or interventional). This three-dimensional typology may be helpful for teaching basic research design concepts, for contemplating research design decisions in planning a study, and as a basis for further consideration of a more detailed, uniform research design classification system.

  13. A combined neural network and decision trees model for prognosis of breast cancer relapse.

    Science.gov (United States)

    Jerez-Aragonés, José M; Gómez-Ruiz, José A; Ramos-Jiménez, Gonzalo; Muñoz-Pérez, José; Alba-Conejo, Emilio

    2003-01-01

    The prediction of clinical outcome of patients after breast cancer surgery plays an important role in medical tasks such as diagnosis and treatment planning. Different prognostic factors for breast cancer outcome appear to be significant predictors for overall survival, but probably form part of a bigger picture comprising many factors. Survival estimations are currently performed by clinicians using the statistical techniques of survival analysis. In this sense, artificial neural networks are shown to be a powerful tool for analysing datasets where there are complicated non-linear interactions between the input data and the information to be predicted. This paper presents a decision support tool for the prognosis of breast cancer relapse that combines a novel TDIDT-based algorithm (control of induction by sample division method, CIDIM), used to select the most relevant prognostic factors for the accurate prognosis of breast cancer, with a system composed of different neural network topologies that takes the selected variables as input in order to reach a good correct-classification probability. In addition, a new method for estimating the Bayes optimal error using the neural network paradigm is proposed. Clinical-pathological data were obtained from the Medical Oncology Service of the Hospital Clinico Universitario of Málaga, Spain. The results show that the proposed system is a useful tool for clinicians to search through large datasets seeking subtle patterns in prognostic factors, and that it may further assist the selection of appropriate adjuvant treatments for the individual patient.
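    The CIDIM algorithm itself is not reproduced here; the following sketch only illustrates the general pattern the abstract describes, tree-based selection of the most relevant factors feeding a neural network, using scikit-learn components and its bundled breast cancer dataset as stand-ins.

    ```python
    # Illustrative stand-in (not the paper's CIDIM algorithm): use decision-tree
    # feature importances to keep the most relevant prognostic factors, then
    # train a neural network on the reduced feature set.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    keep = np.argsort(tree.feature_importances_)[-8:]   # 8 most informative factors

    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    mlp.fit(X_tr[:, keep], y_tr)
    print("test accuracy:", round(mlp.score(X_te[:, keep], y_te), 3))
    ```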

  14. Application of artificial neural network model combined with four biomarkers in auxiliary diagnosis of lung cancer.

    Science.gov (United States)

    Duan, Xiaoran; Yang, Yongli; Tan, Shanjuan; Wang, Sihua; Feng, Xiaolei; Cui, Liuxin; Feng, Feifei; Yu, Songcheng; Wang, Wei; Wu, Yongjun

    2017-08-01

    The purpose of the study was to explore the application of an artificial neural network model in the auxiliary diagnosis of lung cancer and compare the effects of a back-propagation (BP) neural network with a Fisher discrimination model for lung cancer screening by the combined detection of four biomarkers: p16, RASSF1A and FHIT gene promoter methylation levels and the relative telomere length. Real-time quantitative methylation-specific PCR was used to detect the levels of three-gene promoter methylation, and a real-time PCR method was applied to determine the relative telomere length. A BP neural network and Fisher discrimination analysis were used to establish the discrimination diagnosis model. The levels of three-gene promoter methylation in patients with lung cancer were significantly higher than those of the normal controls. The values of Z (P) between the two groups were 2.641 (0.008), 2.075 (0.038) and 3.044 (0.002), respectively. The relative telomere lengths of patients with lung cancer (0.93 ± 0.32) were significantly lower than those of the normal controls (1.16 ± 0.57), t = 4.072, P < […]. The BP neural network model combined with the four biomarkers may serve as an auxiliary intelligent diagnosis tool for lung cancer.

  15. Combined metabolomic and correlation networks analyses reveal fumarase insufficiency altered amino acid metabolism.

    Science.gov (United States)

    Hou, Entai; Li, Xian; Liu, Zerong; Zhang, Fuchang; Tian, Zhongmin

    2018-04-01

    Fumarase catalyzes the interconversion of fumarate and l-malate in the tricarboxylic acid cycle. Fumarase insufficiencies were associated with increased levels of fumarate, decreased levels of malate and exacerbated salt-induced hypertension. To gain insights into the metabolic profiles induced by fumarase insufficiency and identify key regulatory metabolites, we applied a GC-MS based metabolomics platform coupled with a network approach to analyze fumarase-insufficient human umbilical vein endothelial cells (HUVECs) and negative controls. A total of 24 metabolites involved in seven metabolic pathways were identified as significantly altered, and enriched for the biological module of amino acid metabolism. In addition, Pearson correlation network analysis revealed that fumaric acid, l-malic acid, l-aspartic acid, glycine and l-glutamic acid were hub metabolites according to PageRank and three centrality indices. Alanine aminotransferase and glutamate dehydrogenase activities increased significantly in fumarase-insufficient HUVECs. These results confirmed that fumarase insufficiency altered amino acid metabolism. The combination of metabolomics and network methods provides another perspective for elucidating the molecular mechanism at the metabolomics level. Copyright © 2017 John Wiley & Sons, Ltd.
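    As an illustration of the correlation-network step described above (not the study's pipeline), a Pearson correlation network can be built from a metabolite abundance matrix and hub candidates ranked with networkx; the data, metabolite names and the 0.7 threshold are placeholders.

    ```python
    # Sketch of a Pearson correlation network for metabolite data: edges connect
    # metabolites whose |r| exceeds a threshold, and hubs are ranked by PageRank.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    data = rng.normal(size=(20, 6))              # 20 samples x 6 metabolites
    names = [f"metabolite_{i}" for i in range(6)]

    r = np.corrcoef(data, rowvar=False)          # Pearson correlation matrix
    G = nx.Graph()
    G.add_nodes_from(names)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(r[i, j]) > 0.7:
                G.add_edge(names[i], names[j], weight=abs(r[i, j]))

    hubs = sorted(nx.pagerank(G).items(), key=lambda kv: kv[1], reverse=True)
    print("top hub candidates:", hubs[:3])
    ```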

  16. A probabilistic approach to combining smart meter and electric vehicle charging data to investigate distribution network impacts

    International Nuclear Information System (INIS)

    Neaimeh, Myriam; Wardle, Robin; Jenkins, Andrew M.; Yi, Jialiang; Hill, Graeme; Lyons, Padraig F.; Hübner, Yvonne; Blythe, Phil T.; Taylor, Phil C.

    2015-01-01

    Highlights: • Working with unique datasets of EV charging and smart meter load demand. • Distribution networks are not a homogeneous group and have more capability to accommodate EVs than previously suggested. • Spatial and temporal diversity of EV charging demand alleviates the impacts on networks. • An extensive recharging infrastructure could enable connection of additional EVs on constrained distribution networks. • Electric utilities could increase the network capability to accommodate EVs by investing in recharging infrastructure. - Abstract: This work uses a probabilistic method to combine two unique datasets of real world electric vehicle charging profiles and residential smart meter load demand. The data were used to study the impact of the uptake of Electric Vehicles (EVs) on electricity distribution networks. Two real networks representing an urban and a rural area, and a generic network representative of a heavily loaded UK distribution network, were used. The findings show that distribution networks are not a homogeneous group: their capability to accommodate EVs varies and is greater than previous studies have suggested. Consideration of the spatial and temporal diversity of EV charging demand has been demonstrated to reduce the estimated impacts on the distribution networks. It is suggested that distribution network operators could collaborate with new market players, such as charging infrastructure operators, to support the roll out of an extensive charging infrastructure in a way that makes the network more robust; create more opportunities for demand side management; and reduce planning uncertainties associated with the stochastic nature of EV charging demand.

  17. Classifying network attack scenarios using an ontology

    CSIR Research Space (South Africa)

    Van Heerden, RP

    2012-03-01


  18. A Simple Neural Network Contextual Classifier

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Tidemann, J.

    1997-01-01

    I. Kanellopoulos, G.G. Wilkinson, F. Roli and J. Austin (Eds.), Proceedings of European Union Environment and Climate Programme Concerted Action COMPARES (COnnectionist Methods in Pre-processing and Analysis of REmote Sensing data).

  19. SHORT-TERM SOLAR RADIATION FORECASTING BY USING AN ITERATIVE COMBINATION OF WAVELET ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Julio Cesar Royer

    2016-03-01

    The information provided by accurate forecasts of solar energy time series is considered essential for performing an appropriate prediction of the electrical power that will be available in an electric system, as pointed out in Zhou et al. (2011). However, since the underlying data are highly non-stationary, producing accurate predictions is a very difficult task. In order to accomplish it, this paper proposes an iterative Combination of Wavelet Artificial Neural Networks (CWANN) aimed at producing short-term solar radiation time series forecasts. The CWANN method can be split into three stages: in the first, a decomposition of level r, defined in terms of a wavelet basis, of a given solar radiation time series is performed, generating r+1 Wavelet Components (WCs); in the second, these r+1 WCs are individually modeled by k different ANNs, where k > 5, and the 5 best forecasts of each WC are combined by means of another ANN, producing the combined forecast of each WC; in the third, the combined WC forecasts are simply added, generating the forecasts of the underlying solar radiation data. An iterative algorithm is proposed for searching for the optimal values of the CWANN parameters. In order to evaluate the method, ten real solar radiation time series from the Brazilian system were modeled. In all statistical results, the CWANN method achieved markedly better forecasting performance than a traditional ANN (described in Section 2.1).
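    A much-simplified sketch of the decompose-model-recombine idea follows; the wavelet, decomposition level, lag order and single-ANN-per-component setup are assumptions, and the paper's iterative parameter search and 5-best-forecast fusion are omitted.

    ```python
    # Simplified wavelet + ANN combination: decompose the series into wavelet
    # components, forecast each component with a small ANN on lagged values,
    # and add the component forecasts.
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPRegressor

    def wavelet_components(x, wavelet="db4", level=2):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        comps = []
        for k in range(len(coeffs)):
            keep = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
            comps.append(pywt.waverec(keep, wavelet)[: len(x)])
        return comps                      # components sum back to (almost) x

    def lagged(y, p=4):                   # lag-p autoregressive design matrix
        X = np.column_stack([y[i: len(y) - p + i] for i in range(p)])
        return X, y[p:]

    rng = np.random.default_rng(0)
    series = np.sin(np.linspace(0, 20, 256)) + 0.1 * rng.normal(size=256)

    forecast = 0.0
    for comp in wavelet_components(series):
        X, y = lagged(comp)
        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                             random_state=0).fit(X[:-1], y[:-1])
        forecast += model.predict(X[-1:])[0]   # one-step component forecast
    print("combined one-step forecast:", round(forecast, 3))
    ```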

  20. [Effect of microneedle combined with Lauromacrogol on skin capillary network: experimental study].

    Science.gov (United States)

    Xu, Sida; Wei, Qiang; Fan, Youfen; Chen, Shihai; Liu, Qingfeng; Yin, Guoqiang; Liao, Mingde; Sun, Yu

    2014-11-01

    To explore the effect of microneedling combined with Lauromacrogol on the skin capillary network, 24 male Leghorn chickens (1.5-2.0 kg in weight) were randomly divided into three groups: group A (microneedle combined with Lauromacrogol), group B (microneedle combined with physiological saline), and group C (control). The cockscombs were treated. Specimens were taken on the 7th, 14th, 21st, and 28th day postoperatively. HE staining, immunohistochemical staining and special staining were performed to study the number of capillaries, collagen I/III, and elastic fibers. The color of the cockscombs in group A became lighter after treatment. The number of capillaries decreased, as shown by HE staining. Collagen I and III in group B were significantly different from those in groups A and C (P < […]). Microneedling combined with Lauromacrogol could effectively reduce the capillaries in the cockscomb without any tissue fibrosis. Microneedling can stimulate the proliferation of elastic fibers, so as to improve the skin ageing process.

  1. Combined Ozone Retrieval From METOP Sensors Using META-Training Of Deep Neural Networks

    Science.gov (United States)

    Felder, Martin; Sehnke, Frank; Kaifel, Anton

    2013-12-01

    The newest installment of our well-proven Neural Network Ozone Retrieval System (NNORSY) combines the METOP sensors GOME-2 and IASI with cloud information from AVHRR. Through the use of advanced meta-learning techniques like automatic feature selection and automatic architecture search applied to a set of deep neural networks, having at least two or three hidden layers, we have been able to avoid many technical issues normally encountered during the construction of such a joint retrieval system. This has been made possible by harnessing the processing power of modern consumer graphics cards with high performance graphics processors (GPUs), which decreases training times by about two orders of magnitude. The system was trained on data from 2009 and 2010, including target ozone profiles from ozone sondes, ACE-FTS and MLS-AURA. To make maximum use of tropospheric information in the spectra, the data were partitioned into several sets of different cloud fraction ranges within the GOME-2 FOV, on which specialized retrieval networks are trained. For the final ozone retrieval processing the different specialized networks are combined. The resulting retrieval system is very stable and does not show any systematic dependence on solar zenith angle, scan angle or sensor degradation. We present several sensitivity studies with regard to cloud fraction and target sensor type, as well as the performance in several latitude bands and with respect to independent validation stations. A visual cross-comparison against high-resolution ozone profiles from the KNMI EUMETSAT Ozone SAF product has also been performed and shows some distinctive features which we will briefly discuss. Overall, we demonstrate that a complex retrieval system can now be constructed with a minimum of machine learning knowledge, using automated algorithms for many design decisions previously requiring expert knowledge. Provided sufficient training data and computation power of GPUs is available, the

  2. A fuzzy classifier system for process control

    Science.gov (United States)

    Karr, C. L.; Phillips, J. C.

    1994-01-01

    A fuzzy classifier system that discovers rules for controlling a mathematical model of a pH titration system was developed by researchers at the U.S. Bureau of Mines (USBM). Fuzzy classifier systems successfully combine the strengths of learning classifier systems and fuzzy logic controllers. Learning classifier systems resemble familiar production rule-based systems, but they represent their IF-THEN rules by strings of characters rather than in the traditional linguistic terms. Fuzzy logic is a tool that allows for the incorporation of abstract concepts into rule-based systems, thereby allowing the rules to resemble the familiar 'rules-of-thumb' commonly used by humans when solving difficult process control and reasoning problems. Like learning classifier systems, fuzzy classifier systems employ a genetic algorithm to explore and sample new rules for manipulating the problem environment. Like fuzzy logic controllers, fuzzy classifier systems encapsulate knowledge in the form of production rules. The results presented in this paper demonstrate the ability of fuzzy classifier systems to generate a fuzzy logic-based process control system.

  3. Classifying and segmenting microscopy images with deep multiple instance learning.

    Science.gov (United States)

    Kraus, Oren Z; Ba, Jimmy Lei; Frey, Brendan J

    2016-06-15

    High-content screening (HCS) technologies have enabled large scale imaging experiments for studying cell biology and for drug screening. These systems produce hundreds of thousands of microscopy images per day and their utility depends on automated image analysis. Recently, deep learning approaches that learn feature representations directly from pixel intensity values have dominated object recognition challenges. These tasks typically have a single centered object per image and existing models are not directly applicable to microscopy datasets. Here we develop an approach that combines deep convolutional neural networks (CNNs) with multiple instance learning (MIL) in order to classify and segment microscopy images using only whole image level annotations. We introduce a new neural network architecture that uses MIL to simultaneously classify and segment microscopy images with populations of cells. We base our approach on the similarity between the aggregation function used in MIL and pooling layers used in CNNs. To facilitate aggregating across large numbers of instances in CNN feature maps we present the Noisy-AND pooling function, a new MIL operator that is robust to outliers. Combining CNNs with MIL enables training CNNs using whole microscopy images with image level labels. We show that training end-to-end MIL CNNs outperforms several previous methods on both mammalian and yeast datasets without requiring any segmentation steps. Torch7 implementation available upon request. oren.kraus@mail.utoronto.ca. © The Author 2016. Published by Oxford University Press.
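    A small sketch of a Noisy-AND-style pooling function is shown below; the parameterisation follows the commonly cited soft-AND form and may differ in detail from the paper's definition.

    ```python
    # Noisy-AND-style MIL pooling: a soft AND over instance probabilities in
    # which b sets the activation point and a the slope, so the bag-level
    # probability rises only after enough instances look positive.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def noisy_and(instance_probs, a=10.0, b=0.1):
        """instance_probs: (n_instances,) per-instance class probabilities."""
        p_mean = np.mean(instance_probs)
        num = sigmoid(a * (p_mean - b)) - sigmoid(-a * b)
        den = sigmoid(a * (1.0 - b)) - sigmoid(-a * b)
        return num / den

    print(noisy_and(np.array([0.05, 0.9, 0.85, 0.1])))   # bag-level probability
    ```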

  4. Stack filter classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Hush, Don [Los Alamos National Laboratory

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing, and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.
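    As a small aside, the weighted median mentioned above, the order-statistic counterpart of the weighted average that Stack Filters generalise, can be computed directly; the stack-filter learning machinery itself is not sketched here.

    ```python
    # Weighted median: the smallest value whose cumulative weight reaches half
    # of the total weight (one common convention).
    import numpy as np

    def weighted_median(values, weights):
        order = np.argsort(values)
        v, w = np.asarray(values)[order], np.asarray(weights, float)[order]
        cum = np.cumsum(w)
        return v[np.searchsorted(cum, 0.5 * w.sum())]

    print(weighted_median([3, 1, 4, 1, 5], [1, 2, 1, 1, 3]))   # -> 3
    ```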

  5. Noise reduction in urban LRT networks by combining track based solutions.

    Science.gov (United States)

    Vogiatzis, Konstantinos; Vanhonacker, Patrick

    2016-10-15

    The overall objective of the Quiet-Track project is to provide step-changing track based noise mitigation and maintenance schemes for railway rolling noise in LRT (Light Rail Transit) networks. WP 4 in particular focuses on the combination of existing track based solutions to yield a global performance of at least 6dB(A). The validation was carried out using a track section in the network of Athens Metro Line 1 with an existing outside concrete slab track (RHEDA track) where high airborne rolling noise was observed. The procedure for the selection of mitigation measures is based on numerical simulations, combining WRNOISE and IMMI software tools for noise prediction with experimental determination of the required track and vehicle parameters (e.g., rail and wheel roughness). The availability of a detailed rolling noise calculation procedure allows for detailed designing of measures and of ranking individual measures. It achieves this by including the modelling of the wheel/rail source intensity and of the noise propagation with the ability to evaluate the effect of modifications at source level (e.g., grinding, rail dampers, wheel dampers, change in resiliency of wheels and/or rail fixation) and of modifications in the propagation path (absorption at the track base, noise barriers, screening). A relevant combination of existing solutions was selected in the function of the simulation results. Three distinct existing solutions were designed in detail aiming at a high rolling noise attenuation and not affecting the normal operation of the metro system: Action 1: implementation of sound absorbing precast elements (panel type) on the track bed, Action 2: implementation of an absorbing noise barrier with a height of 1.10-1.20m above rail level, and Action 3: installation of rail dampers. The selected solutions were implemented on site and the global performance was measured step by step for comparison with simulations. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Classifiers and Plurality: evidence from a deictic classifier language

    Directory of Open Access Journals (Sweden)

    Filomena Sandalo

    2016-12-01

    This paper investigates the semantic contribution of plural morphology and its interaction with classifiers in Kadiwéu. We show that Kadiwéu, a Waikurúan language spoken in South America, is a classifier language similar to Chinese but classifiers are an obligatory ingredient of all determiner-like elements, such as quantifiers, numerals, and wh-words for arguments. What all elements with classifiers have in common is that they contribute an atomized/individualized interpretation of the NP. Furthermore, this paper revisits the relationship between classifiers and number marking and challenges the common assumption that classifiers and plurals are mutually exclusive.

  7. Combining neural network models to predict spatial patterns of airborne pollutant accumulation in soils around an industrial point emission source.

    Science.gov (United States)

    Dimopoulos, Ioannis F; Tsiros, Ioannis X; Serelis, Konstantinos; Chronopoulou, Aikaterini

    2004-12-01

    Neural networks (NNs) have the ability to model a wide range of complex nonlinearities. A major disadvantage of NNs, however, is their instability, especially under conditions of sparse, noisy, and limited data sets. In this paper, different combining network methods are used to benefit from the existence of local minima and from the instabilities of NNs. A nonlinear k-fold cross-validation method is used to test the performance of the various networks and also to develop and select a set of networks that exhibits a low correlation of errors. The various NN models are applied to estimate the spatial patterns of atmospherically transported and deposited lead (Pb) in soils around a historical industrial air emission point source. The resulting ensemble networks consistently give superior predictions compared with the individual networks: for the ensemble networks, R2 values were found to be higher than 0.9, while for the contributing individual networks R2 values ranged between 0.35 and 0.85. It is concluded that combining networks can be adopted as an important component in the application of artificial NN techniques in applied air quality studies.
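    A bare-bones sketch of the underlying ensemble idea, one network per cross-validation fold with predictions averaged, is given below with synthetic data; the paper's selection of members with low error correlation is not reproduced.

    ```python
    # Train one small network per cross-validation fold and average their
    # predictions to stabilise an otherwise unstable single network.
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(200, 2))            # e.g. spatial coordinates
    y = np.exp(-np.sum(X**2, axis=1)) + 0.05 * rng.normal(size=200)  # toy field

    members = []
    for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
        members.append(net.fit(X[train_idx], y[train_idx]))

    X_new = np.array([[0.5, -0.5]])
    ensemble_pred = np.mean([m.predict(X_new)[0] for m in members])
    print("ensemble prediction:", round(ensemble_pred, 3))
    ```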

  8. Regional brain network organization distinguishes the combined and inattentive subtypes of Attention Deficit Hyperactivity Disorder.

    Science.gov (United States)

    Saad, Jacqueline F; Griffiths, Kristi R; Kohn, Michael R; Clarke, Simon; Williams, Leanne M; Korgaonkar, Mayuresh S

    2017-01-01

    Attention Deficit Hyperactivity Disorder (ADHD) is characterized clinically by hyperactive/impulsive and/or inattentive symptoms which determine diagnostic subtypes as Predominantly Hyperactive-Impulsive (ADHD-HI), Predominantly Inattentive (ADHD-I), and Combined (ADHD-C). Neuroanatomically though we do not yet know if these clinical subtypes reflect distinct aberrations in underlying brain organization. We imaged 34 ADHD participants defined using DSM-IV criteria as ADHD-I ( n  = 16) or as ADHD-C ( n  = 18) and 28 matched typically developing controls, aged 8-17 years, using high-resolution T1 MRI. To quantify neuroanatomical organization we used graph theoretical analysis to assess properties of structural covariance between ADHD subtypes and controls (global network measures: path length, clustering coefficient, and regional network measures: nodal degree). As a context for interpreting network organization differences, we also quantified gray matter volume using voxel-based morphometry. Each ADHD subtype was distinguished by a different organizational profile of the degree to which specific regions were anatomically connected with other regions (i.e., in "nodal degree"). For ADHD-I (compared to both ADHD-C and controls) the nodal degree was higher in the hippocampus. ADHD-I also had a higher nodal degree in the supramarginal gyrus, calcarine sulcus, and superior occipital cortex compared to ADHD-C and in the amygdala compared to controls. By contrast, the nodal degree was higher in the cerebellum for ADHD-C compared to ADHD-I and in the anterior cingulate, middle frontal gyrus and putamen compared to controls. ADHD-C also had reduced nodal degree in the rolandic operculum and middle temporal pole compared to controls. These regional profiles were observed in the context of no differences in gray matter volume or global network organization. Our results suggest that the clinical distinction between the Inattentive and Combined subtypes of ADHD may also be

  9. Feature Selection Combined with Neural Network Structure Optimization for HIV-1 Protease Cleavage Site Prediction

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2015-01-01

    It is crucial to understand the specificity of HIV-1 protease when designing HIV-1 protease inhibitors. In this paper, a new feature selection method combined with neural network structure optimization is proposed to analyze the specificity of HIV-1 protease and find the important positions in an octapeptide that determine its cleavability. Two kinds of newly proposed features based on the Amino Acid Index database, plus traditional orthogonal encoding features, are used in this paper, taking both physicochemical and sequence information into consideration. Results of feature selection show that p2, p1, p1′, and p2′ are the most important positions. Two feature fusion methods are used in this paper, combination fusion and decision fusion, aiming to obtain a comprehensive feature representation and improve prediction performance. Decision fusion of the feature subsets obtained after feature selection achieves excellent prediction performance, which proves that feature selection combined with decision fusion is an effective and useful method for the task of HIV-1 protease cleavage site prediction. The results and analysis in this paper can provide useful guidance and help in designing HIV-1 protease inhibitors in the future.

  10. A signal combining technique based on channel shortening for cooperative sensor networks

    KAUST Repository

    Hussain, Syed Imtiaz

    2010-06-01

    The cooperative relaying process needs proper coordination among the communicating and the relaying nodes. This coordination and the required capabilities may not be available in some wireless systems, e.g. wireless sensor networks where the nodes are equipped with very basic communication hardware. In this paper, we consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination can capture the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than that of the relaying nodes L. In this case, the receiver can combine N strongest signals out of L. The remaining signals will be lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening. This technique proposes a processing block before the Rake reception which compresses the energy of L signal components over N branches while keeping the noise level at its minimum. The proposed scheme saves the system resources and makes the received signal compatible to the available hardware. Simulation results show that it outperforms the selection combining scheme. ©2010 IEEE.
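    The following toy snippet only illustrates the constraint the paper starts from, namely that with N Rake fingers the receiver combines just the N strongest of L relayed copies and the rest is lost; it is not the proposed channel-shortening pre-filter, and the gains and sizes are placeholders.

    ```python
    # With only N Rake fingers, combine the N strongest of L relayed copies;
    # the energy of the remaining copies is unavailable to the receiver.
    import numpy as np

    rng = np.random.default_rng(0)
    L, N = 6, 3
    branch_gains = rng.rayleigh(scale=1.0, size=L)       # per-relay channel gains

    strongest = np.argsort(branch_gains)[-N:]             # indices of N best branches
    captured = np.sum(branch_gains[strongest] ** 2)
    lost = np.sum(branch_gains ** 2) - captured
    print(f"captured energy: {captured:.2f}, energy lost to finger limit: {lost:.2f}")
    ```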

  11. Predicting combined sewer overflows chamber depth using artificial neural networks with rainfall radar data.

    Science.gov (United States)

    Mounce, S R; Shepherd, W; Sailor, G; Shucksmith, J; Saul, A J

    2014-01-01

    Combined sewer overflows (CSOs) represent a common feature in combined urban drainage systems and are used to discharge excess water to the environment during heavy storms. To better understand the performance of CSOs, the UK water industry has installed a large number of monitoring systems that provide data for these assets. This paper presents research into the prediction of the hydraulic performance of CSOs using artificial neural networks (ANN) as an alternative to hydraulic models. Previous work has explored using an ANN model for the prediction of chamber depth using time series for depth and rain gauge data. Rainfall intensity data that can be provided by rainfall radar devices can be used to improve on this approach. Results are presented using real data from a CSO for a catchment in the North of England, UK. An ANN model trained with the pseudo-inverse rule was shown to be capable of predicting CSO depth with less than 5% error for predictions more than 1 hour ahead for unseen data. Such predictive approaches are important to the future management of combined sewer systems.

  12. Equal gain combining for cooperative spectrum sensing in cognitive radio networks

    KAUST Repository

    Hamza, Doha R.

    2014-08-01

    Sensing with equal gain combining (SEGC), a novel cooperative spectrum sensing technique for cognitive radio networks, is proposed. Cognitive radios simultaneously transmit their sensing results to the fusion center (FC) over multipath fading reporting channels. The cognitive radios estimate the phases of the reporting channels and use those estimates for coherent combining of the sensing results at the FC. A global decision is made at the FC by comparing the received signal with a threshold. We obtain the global detection probabilities and secondary throughput exactly through a moment generating function approach. We verify our solution via system simulation and demonstrate that the Chernoff bound and central limit theorem approximation are not tight. The cases of hard sensing and soft sensing are considered and we provide examples in which hard sensing is advantageous over soft sensing. We contrast the performance of SEGC with maximum ratio combining of the sensors' results and provide examples where the former is superior. Furthermore, we evaluate the performance of SEGC against existing orthogonal reporting techniques such as time division multiple access (TDMA). SEGC performance always dominates that of TDMA in terms of secondary throughput. We also study the impact of phase and synchronization errors and demonstrate the robustness of the SEGC technique against such imperfections. © 2002-2012 IEEE.
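    A toy sketch of the equal-gain-combining step is shown below: reports arriving over fading channels are co-phased using the estimated channel phases, summed and thresholded; the channel model, noise level and threshold are illustrative assumptions, not the paper's detection design.

    ```python
    # Equal gain combining at the fusion centre: rotate each report by the
    # estimated channel phase (no amplitude weighting), sum, and threshold.
    import numpy as np

    rng = np.random.default_rng(0)
    n_radios = 8
    local_decisions = rng.integers(0, 2, size=n_radios)      # 1 = "primary present"

    h = (rng.normal(size=n_radios) + 1j * rng.normal(size=n_radios)) / np.sqrt(2)
    noise = 0.1 * (rng.normal(size=n_radios) + 1j * rng.normal(size=n_radios))
    received = h * local_decisions + noise                    # reports at the FC

    cophased = received * np.exp(-1j * np.angle(h))           # phase-only correction
    statistic = np.real(np.sum(cophased))
    print("global decision:", int(statistic > n_radios / 4))  # placeholder threshold
    ```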

  13. Supervised and dynamic neuro-fuzzy systems to classify physiological responses in robot-assisted neurorehabilitation.

    Science.gov (United States)

    Lledó, Luis D; Badesa, Francisco J; Almonacid, Miguel; Cano-Izquierdo, José M; Sabater-Navarro, José M; Fernández, Eduardo; Garcia-Aracil, Nicolás

    2015-01-01

    This paper presents the application of an Adaptive Resonance Theory (ART) based neural network combined with Fuzzy Logic systems to classify physiological reactions of subjects performing robot-assisted rehabilitation therapies. First, the theoretical background of a neuro-fuzzy classifier called S-dFasArt is presented. Then, the methodology and experimental protocols used to perform a robot-assisted neurorehabilitation task are described. Our results show that the combination of the dynamic nature of the S-dFasArt classifier with a supervisory module is very robust, and suggest that this methodology could be very useful to take emotional states into account in robot-assisted environments and help to enhance and better understand human-robot interactions.

  14. Supervised and dynamic neuro-fuzzy systems to classify physiological responses in robot-assisted neurorehabilitation.

    Directory of Open Access Journals (Sweden)

    Luis D Lledó

    This paper presents the application of an Adaptive Resonance Theory (ART) based neural network combined with Fuzzy Logic systems to classify physiological reactions of subjects performing robot-assisted rehabilitation therapies. First, the theoretical background of a neuro-fuzzy classifier called S-dFasArt is presented. Then, the methodology and experimental protocols used to perform a robot-assisted neurorehabilitation task are described. Our results show that the combination of the dynamic nature of the S-dFasArt classifier with a supervisory module is very robust, and suggest that this methodology could be very useful to take emotional states into account in robot-assisted environments and help to enhance and better understand human-robot interactions.

  15. Optimal Seamline Detection for Orthoimage Mosaicking by Combining Deep Convolutional Neural Network and Graph Cuts

    Directory of Open Access Journals (Sweden)

    Li Li

    2017-07-01

    When mosaicking orthoimages, especially in urban areas with various obvious ground objects like buildings, roads, cars or trees, the detection of optimal seamlines is one of the key technologies for creating seamless and pleasant image mosaics. In this paper, we propose a new approach to detect optimal seamlines for orthoimage mosaicking with the use of a deep convolutional neural network (CNN) and graph cuts. Deep CNNs have been widely used in many fields of computer vision and photogrammetry in recent years, and graph cuts is one of the most widely used energy optimization frameworks. We first propose a deep CNN for land cover semantic segmentation in overlap regions between two adjacent images. Then, the energy cost of each pixel in the overlap regions is defined based on the classification probabilities of belonging to each of the specified classes. To find the optimal seamlines globally, we fuse the CNN-classified energy costs of all pixels into the graph cuts energy minimization framework. The main advantage of our proposed method is that the pixel similarity energy costs between two images are defined using the classification results of the CNN based semantic segmentation instead of using the image information of color, gradient or texture as traditional methods do. Another advantage of our proposed method is that the semantic information is fully used to guide the process of optimal seamline detection, which is more reasonable than only using hand-designed features to represent the image differences. Finally, the experimental results on several groups of challenging orthoimages show that the proposed method is capable of finding high-quality seamlines among urban and non-urban orthoimages, and outperforms the state-of-the-art algorithms and commercial software based on visual comparison, statistical evaluation and quantitative evaluation based on the structural similarity (SSIM) index.
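    As a toy illustration of the probability-based pixel cost described above (the actual energy terms and the graph-cuts solver are not reproduced), a disagreement cost can be derived from the two images' per-pixel class-probability vectors.

    ```python
    # Pixels where the two orthoimages receive similar class-probability vectors
    # from the segmentation network are cheap for the seamline to pass through.
    import numpy as np

    rng = np.random.default_rng(0)
    H, W, C = 4, 4, 3                                  # tiny overlap region, 3 classes
    probs_a = rng.dirichlet(np.ones(C), size=(H, W))   # class probabilities, image A
    probs_b = rng.dirichlet(np.ones(C), size=(H, W))   # class probabilities, image B

    # Cost is high where the two images disagree about the land-cover class.
    cost = 0.5 * np.sum(np.abs(probs_a - probs_b), axis=-1)   # L1 disagreement in [0, 1]
    print(cost.round(2))
    ```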

  16. Combining neural networks and signed particles to simulate quantum systems more efficiently

    Science.gov (United States)

    Sellier, Jean Michel

    2018-04-01

    Recently a new formulation of quantum mechanics has been suggested which describes systems by means of ensembles of classical particles provided with a sign. This novel approach mainly consists of two steps: the computation of the Wigner kernel, a multi-dimensional function describing the effects of the potential over the system, and the field-less evolution of the particles, which eventually create new signed particles in the process. Although this method has proved to be extremely advantageous in terms of computational resources - as a matter of fact it is able to simulate in a time-dependent fashion many-body systems on relatively small machines - the Wigner kernel can represent the bottleneck of simulations of certain systems. Moreover, storing the kernel can be another issue, as the amount of memory needed is cursed by the dimensionality of the system. In this work, we introduce a new technique which drastically reduces the computation time and memory requirement to simulate time-dependent quantum systems, based on the use of an appropriately tailored neural network combined with the signed particle formalism. In particular, the suggested neural network is able to compute the Wigner kernel efficiently and reliably without any training, as its entire set of weights and biases is specified by analytical formulas. As a consequence, the amount of memory for quantum simulations drops radically, since the kernel no longer needs to be stored: it is computed by the neural network itself, only on the cells of the (discretized) phase-space which are occupied by particles. As is clearly shown in the final part of this paper, this novel approach not only drastically reduces the computational time, it also remains accurate. The author believes this work opens the way towards effective design of quantum devices, with incredible practical implications.

  17. Combined D-optimal design and generalized regression neural network for modeling of plasma etching rate

    Directory of Open Access Journals (Sweden)

    You Hailong

    2014-01-01

    The plasma etching process plays a critical role in semiconductor manufacturing. Because the physical and chemical mechanisms involved in plasma etching are extremely complicated, models supporting process control are difficult to construct. This paper uses a 35-run D-optimal design to efficiently collect data, under well planned conditions, on important controllable variables such as power, pressure, electrode gap and the gas flows of Cl2 and He, and on the response, etching rate, for building an empirical underlying model. Since the relationship between the control and response variables could be highly nonlinear, a generalized regression neural network is used to select important model variables and their combination effects and to fit the model. Compared with the response surface methodology, the proposed method has better prediction performance on training and testing samples. A successful application of the model to control the plasma etching process demonstrates the effectiveness of the methods.
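    For reference, the core of a generalised regression neural network is a Gaussian-kernel-weighted average of the training responses; the sketch below uses a toy response surface and an arbitrary spread parameter, and omits the D-optimal design step.

    ```python
    # Minimal GRNN predictor: output = kernel-weighted average of training y.
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, spread=0.5):
        preds = []
        for x in np.atleast_2d(X_query):
            d2 = np.sum((X_train - x) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * spread ** 2))
            preds.append(np.sum(w * y_train) / np.sum(w))
        return np.array(preds)

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(35, 5))      # 35 runs, 5 process factors
    y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=35)   # toy etch-rate response
    print(grnn_predict(X, y, X[:2]))
    ```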

  18. A Neural Network Combined Inverse Controller for a Two-Rear-Wheel Independently Driven Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Duo Zhang

    2014-07-01

    Vehicle active safety control is attracting ever increasing attention in the attempt to improve the stability and the maneuverability of electric vehicles. In this paper, a neural network combined inverse (NNCI) controller is proposed, incorporating the merits of left-inversion and right-inversion: the left-inversion soft-sensor estimates the sideslip angle, while the right-inversion is utilized for decoupling control. The proposed NNCI controller not only linearizes and decouples the original nonlinear system, but also directly obtains immeasurable state feedback in constructing the right-inversion. Hence, the proposed controller is very practical in engineering applications. The proposed system is co-simulated based on the vehicle simulation package CarSim in connection with Matlab/Simulink. The results verify the effectiveness of the proposed control strategy.

  19. Extreme weather monitoring system with combination of micro-satellites and ground-based observation networks

    Science.gov (United States)

    Takahashi, Y.; Sato, M.; Castro, E. C.; Ishida, T.; Marciano, J. J.; Kubota, H.; Yamashita, K.

    2017-12-01

    Thunderstorms cause torrential rainfall and are the energy source of typhoons. In recent decades it has been revealed that lightning discharge is a very good proxy of thunderstorm activity. However, an operational and sustainable observation system that can provide sufficient information on lightning strokes has not been constructed in Asia. On the other hand, the 50-kg micro-satellite is now an operational tool for remote sensing that can also be fabricated by developing countries. An international project to promote the combination of micro-satellites and ground-based observation networks, supported by the SATREPS program of DOST and JST-JICA, the e-ASIA program of JST and other Asian agencies, and the Core-to-Core program of JSPS, is now proceeding under international agreement among Asian countries. We will establish a new way to obtain very detailed, semi-real-time information on thunderstorm and typhoon activities, using visible stereo and thermal infrared imaging by target pointing with a 50-kg micro-satellite, and ground-based networks consisting of lightning sensors, AWS and infrasound sensors, that cannot be achieved only with existing observation methods. Based on these new techniques, together with advanced radar systems and drop/radio sondes, we will try to construct a cutting-edge observation system to monitor the development of thunderstorms and typhoons, which may greatly contribute to the prediction of disasters and to public alerting systems.

  20. Combined Resource Allocation System for Device-to-Device Communication towards LTE Networks

    Directory of Open Access Journals (Sweden)

    Abbas Fakhar

    2016-01-01

    LTE networks are being developed to provide mobile broadband services in fourth generation (4G) systems and allow operators to use spectrum more efficiently. D2D communication is a promising technique to provide wireless services and enhance spectrum exploitation in LTE Heterogeneous Networks (HetNets). D2D communication in HetNets allows users to communicate with each other directly by reusing the resources used for communication via the base stations. But during the downlink period, both the D2D receiver and the heterogeneous user equipment (HUE) experience interference caused by resource allocation. In this article, we identify and analyze the interference problem in HetNets caused by the D2D transmitter during the downlink. We propose a combined resource allocation and resource reuse method for LTE HetNets, where resource allocation to HUEs is based on a comparative fair algorithm and resource reuse by D2D users is based on an acquisitive empirical algorithm. This approach evaluates whether D2D mode is suitable by path loss evaluation, and then decreases the interference to the HUE by selecting, each time, the minimum channel gain between the HUE and the D2D transmitter. Our simulation results show that the efficiency and throughput of HetNets are improved by using the proposed method.

  1. Uncertainty assessment in geodetic network adjustment by combining GUM and Monte-Carlo-simulations

    Science.gov (United States)

    Niemeier, Wolfgang; Tengen, Dieter

    2017-06-01

    In this article, first ideas are presented to extend the classical concept of geodetic network adjustment by introducing a new method for uncertainty assessment as a two-step analysis. In the first step, the raw data and possible influencing factors are analyzed using uncertainty modeling according to GUM (Guide to the Expression of Uncertainty in Measurement). This approach is well established in metrology but rarely adopted within geodesy. The second step consists of Monte-Carlo simulations (MC simulations) for the complete processing chain from raw input data and pre-processing to adjustment computations and quality assessment. To perform these simulations, possible realizations of the raw data and the influencing factors are generated, using probability distributions for all variables and the established concept of pseudo-random number generators. The final result is a point cloud which represents the uncertainty of the estimated coordinates; a confidence region can be assigned to these point clouds as well. This concept may replace the common concept of variance propagation and the quality assessment of adjustment parameters by means of their covariance matrix. It allows a new way of uncertainty assessment in accordance with the GUM concept for uncertainty modelling and propagation. As a practical example, the local tie network at the Metsähovi Fundamental Station, Finland, is used, where classical geodetic observations are combined with GNSS data.
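    A tiny illustration of the two-step idea follows: each raw observation is described by a probability distribution (a GUM-style uncertainty budget), many realisations are drawn, each is pushed through the processing chain, and the resulting point cloud is summarised. The processing here is a trivial polar-to-Cartesian conversion, not the Metsähovi network adjustment, and the uncertainty budget is assumed.

    ```python
    # Monte Carlo propagation of an assumed observation uncertainty budget
    # through a toy processing chain; summarise the resulting point cloud.
    import numpy as np

    rng = np.random.default_rng(0)
    n_draws = 100_000

    # Assumed budget: distance 100.000 m +/- 2 mm, direction 0.30000 rad +/- 10 microrad
    dist = rng.normal(100.000, 0.002, n_draws)
    angle = rng.normal(0.30000, 10e-6, n_draws)

    x, y = dist * np.cos(angle), dist * np.sin(angle)     # "processing chain"
    print("x mean/std [m]:", x.mean().round(4), x.std().round(4))
    print("y mean/std [m]:", y.mean().round(4), y.std().round(4))
    ```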

  2. Enhanced activation of motor execution networks using action observation combined with imagination of lower limb movements.

    Science.gov (United States)

    Villiger, Michael; Estévez, Natalia; Hepp-Reymond, Marie-Claude; Kiper, Daniel; Kollias, Spyros S; Eng, Kynan; Hotz-Boendermaker, Sabina

    2013-01-01

    The combination of first-person observation and motor imagery, i.e. first-person observation of limbs with online motor imagination, is commonly used in interactive 3D computer gaming and in some movie scenes. These scenarios are designed to induce a cognitive process in which a subject imagines himself/herself acting as the agent in the displayed movement situation. Despite the ubiquity of this type of interaction and its therapeutic potential, its relationship to passive observation and imitation during observation has not been directly studied using an interactive paradigm. In the present study we show activation resulting from observation, coupled with online imagination and with online imitation of a goal-directed lower limb movement using functional MRI (fMRI) in a mixed block/event-related design. Healthy volunteers viewed a video (first-person perspective) of a foot kicking a ball. They were instructed to observe-only the action (O), observe and simultaneously imagine performing the action (O-MI), or imitate the action (O-IMIT). We found that when O-MI was compared to O, activation was enhanced in the ventral premotor cortex bilaterally, left inferior parietal lobule and left insula. The O-MI and O-IMIT conditions shared many activation foci in motor relevant areas as confirmed by conjunction analysis. These results show that (i) combining observation with motor imagery (O-MI) enhances activation compared to observation-only (O) in the relevant foot motor network and in regions responsible for attention, for control of goal-directed movements and for the awareness of causing an action, and (ii) it is possible to extensively activate the motor execution network using O-MI, even in the absence of overt movement. Our results may have implications for the development of novel virtual reality interactions for neurorehabilitation interventions and other applications involving training of motor tasks.

  3. Common brain networks for distinct deficits in visual neglect. A combined structural and tractography MRI approach.

    Science.gov (United States)

    Toba, Monica N; Migliaccio, Raffaella; Batrancourt, Bénédicte; Bourlon, Clémence; Duret, Christophe; Pradat-Diehl, Pascale; Dubois, Bruno; Bartolomeo, Paolo

    2017-10-18

    Visual neglect is a heterogeneous, multi-component syndrome resulting from right hemisphere damage. Neglect patients do not pay attention to events occurring on their left side, and have a poor functional outcome. The intra-hemispheric location of lesions producing neglect is debated, because studies using different methods reported different locations in the grey matter and in the white matter of the right hemisphere. These reported locations show various patterns of overlapping with the fronto-parietal attention networks demonstrated by functional neuroimaging. We explored the anatomical correlates of neglect patients' performance on distinct tests of neglect. For the first time in neglect anatomy studies, we individually assessed 25 patients with subacute strokes in the right hemisphere, by using a combined structural and diffusion tensor deterministic tractography approach, with separate analyses for each neglect test. The results revealed that lesions in nodes of the ventral attention network (angular and supramarginal gyri) were selectively associated with deficits in performance on all the tests used; damage to other structures correlated with impaired performance on specific tests, such as the bells test (middle and inferior frontal gyri), or the reading test (temporal regions). Importantly, however, white matter damage proved crucial in producing neglect-related deficits. Voxel-based lesion-symptom mapping (VLSM) and tractography consistently revealed that damage to the ventral branch of the superior longitudinal fasciculus (SLF III) and to the inferior fronto-occipital fasciculus (IFOF) predicted pathological scores on line bisection/drawing copy and on the bells test, respectively. Moreover, damage to distinct sectors of SLF III, or combined SLF/IFOF damage, gave rise to different performance profiles. Our results indicate that both grey and white matter lesion analysis must be taken into account to determine the neural correlates of neglect

  4. Financial Time Series Modelling with Hybrid Model Based on Customized RBF Neural Network Combined With Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Lukas Falat

    2014-01-01

    In this paper, the authors apply a feed-forward artificial neural network (ANN) of RBF type to modelling and forecasting the future value of the USD/CAD time series. The authors test a customized version of the RBF network and add an evolutionary approach to it. They also combine the standard algorithm for adapting weights in the neural network with an unsupervised clustering algorithm called K-means. Finally, the authors suggest a new hybrid model as a combination of a standard ANN and a moving average for error modeling that is used to enhance the outputs of the network using the error part of the original RBF. Using high-frequency data, they examine the ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, the authors perform a comparative out-of-sample analysis of the suggested hybrid model against statistical models and the standard neural network.
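    A sketch of the K-means-plus-RBF building block mentioned above is given below; the genetic-algorithm search and the moving-average error model are omitted, and the data are synthetic rather than USD/CAD quotes.

    ```python
    # Place RBF centres with K-means, build Gaussian basis features and fit the
    # output-layer weights by least squares.
    import numpy as np
    from sklearn.cluster import KMeans

    def rbf_features(X, centres, width):
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(300, 3))           # e.g. lagged exchange-rate inputs
    y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + 0.05 * rng.normal(size=300)

    centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
    Phi = rbf_features(X, centres, width=0.5)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # output-layer weights
    print("in-sample RMSE:", round(np.sqrt(np.mean((Phi @ w - y) ** 2)), 4))
    ```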

  5. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques.

    Directory of Open Access Journals (Sweden)

    Hazlee Azil Illias

    It is important to predict the incipient fault in transformer oil accurately so that maintenance of the transformer oil can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method and ANN alone. Comparison of the results from the proposed methods with previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct identification of transformer fault type than the existing diagnosis method and previously reported works.

  6. Combined application of mixture experimental design and artificial neural networks in the solid dispersion development.

    Science.gov (United States)

    Medarević, Djordje P; Kleinebudde, Peter; Djuriš, Jelena; Djurić, Zorica; Ibrić, Svetlana

    2016-01-01

    This study for the first time demonstrates combined application of mixture experimental design and artificial neural networks (ANNs) in the solid dispersions (SDs) development. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs were prepared by solvent casting method to improve carbamazepine dissolution rate. The influence of the composition of prepared SDs on carbamazepine dissolution rate was evaluated using d-optimal mixture experimental design and multilayer perceptron ANNs. Physicochemical characterization proved the presence of the most stable carbamazepine polymorph III within the SD matrix. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs significantly improved carbamazepine dissolution rate compared to pure drug. Models developed by ANNs and mixture experimental design well described the relationship between proportions of SD components and percentage of carbamazepine released after 10 (Q10) and 20 (Q20) min, wherein ANN model exhibit better predictability on test data set. Proportions of carbamazepine and poloxamer 188 exhibited the highest influence on carbamazepine release rate. The highest carbamazepine release rate was observed for SDs with the lowest proportions of carbamazepine and the highest proportions of poloxamer 188. ANNs and mixture experimental design can be used as powerful data modeling tools in the systematic development of SDs. Taking into account advantages and disadvantages of both techniques, their combined application should be encouraged.

  7. Combining Bayesian Networks and Agent Based Modeling to develop a decision-support model in Vietnam

    Science.gov (United States)

    Nong, Bao Anh; Ertsen, Maurits; Schoups, Gerrit

    2016-04-01

    Complexity and uncertainty in natural resources management have been focal themes in recent years. Within these debates, with the aim of defining an approach feasible for water management practice, we are developing an integrated conceptual modeling framework for simulating the decision-making processes of citizens, in our case in the Day river area, Vietnam. The model combines Bayesian Networks (BNs) and Agent-Based Modeling (ABM). BNs are able to combine both qualitative data from consultants / experts / stakeholders, and quantitative data from observations of different phenomena or outcomes of other models. Further strengths of BNs are that the relationships between variables in the system are presented in a graphical interface, and that components of uncertainty are explicitly related to their probabilistic dependencies. A disadvantage is that BNs cannot easily capture the feedback of agents in the system once changes appear. Hence, ABM was adopted to represent the reactions among stakeholders under changes. The modeling framework is developed as an attempt to gain a better understanding of citizens' behavior and the factors influencing their decisions, in order to reduce uncertainty in the implementation of water management policy.

  8. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques

    Science.gov (United States)

    2015-01-01

    It is important to predict the incipient fault in transformer oil accurately so that the maintenance of transformer oil can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict the incipient fault in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict the transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method and ANN alone. Comparison of the results from the proposed methods with the previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct identification of transformer fault types than the existing diagnosis method and previously reported works. PMID:26103634

  9. Combination Adaptive Traffic Algorithm and Coordinated Sleeping in Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    M. Udin Harun Al Rasyid

    2014-12-01

    Full Text Available A wireless sensor network (WSN) uses a battery as its primary power source, so its operating lifetime is limited by battery capacity. A WSN should therefore be able to reduce its energy consumption in order to operate for a long time. WSNs have the potential to be the future of wireless communications solutions. WSN nodes are small but offer a variety of functions that can help human life. WSNs carry a wide variety of sensors and can communicate quickly, making it easier for people to obtain information accurately and rapidly. In this study, we combine an adaptive traffic algorithm and coordinated sleeping as a power-efficient WSN solution. We compared the performance of our proposed combination of the adaptive traffic and coordinated sleeping algorithms with a non-adaptive scheme. From the simulation results, our proposed idea achieves good-quality data transmission and is more efficient in energy consumption, but it has a higher delay than the non-adaptive scheme. Keywords: WSN, adaptive traffic, coordinated sleeping, beacon order, superframe order.

  10. Entropy based classifier for cross-domain opinion mining

    Directory of Open Access Journals (Sweden)

    Jyoti S. Deshmukh

    2018-01-01

    Full Text Available In recent years, the growth of social networks has increased people's interest in analyzing reviews and opinions for products before they buy them. Consequently, this has given rise to domain adaptation as a prominent area of research in sentiment analysis. A classifier trained on one domain often gives poor results on data from another domain, because the expression of sentiment is different in every domain. Labeling each domain separately is very costly as well as time consuming. Therefore, this study proposes an approach that extracts and classifies opinion words from one domain, called the source domain, and predicts opinion words of another domain, called the target domain, using a semi-supervised approach which combines modified maximum entropy and bipartite graph clustering. A comparison of opinion classification on reviews from four different product domains is presented. The results demonstrate that the proposed method performs relatively well in comparison to the other methods. Comparison of SentiWordNet for domain-specific and domain-independent words reveals that on average 72.6% and 88.4% of words, respectively, are correctly classified.
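
    The sketch below illustrates only the maximum-entropy core of such an approach, implemented as a logistic regression trained on a labelled source domain and applied to an unlabelled target domain; the modified entropy weighting and bipartite graph clustering of the proposed method are not reproduced, and the tiny review lists are hypothetical.

      # Illustrative sketch: a maximum-entropy (logistic regression) sentiment classifier
      # trained on a source domain and applied to a different target domain.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression

      source_reviews = ["battery life is great", "screen broke quickly",
                        "fast shipping and works well", "stopped working after a week"]
      source_labels = [1, 0, 1, 0]                    # 1 = positive, 0 = negative

      target_reviews = ["the plot was gripping", "boring and far too long"]  # another domain

      vec = TfidfVectorizer()
      X_src = vec.fit_transform(source_reviews)
      clf = LogisticRegression(max_iter=1000)
      clf.fit(X_src, source_labels)

      X_tgt = vec.transform(target_reviews)
      print(clf.predict(X_tgt))                       # labels predicted for the target domain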

  11. Adaptive predictions of the euro/złoty currency exchange rate using state space wavelet networks and forecast combinations

    Directory of Open Access Journals (Sweden)

    Brdyś Mietek A.

    2016-03-01

    Full Text Available The paper considers the forecasting of the euro/Polish złoty (EUR/PLN) spot exchange rate by applying state space wavelet network and econometric forecast combination models. Both prediction methods are applied to produce one-trading-day-ahead forecasts of the EUR/PLN exchange rate. The paper presents the general state space wavelet network and forecast combination models as well as their underlying principles. The state space wavelet network model is, in contrast to econometric forecast combinations, a non-parametric prediction technique which does not make any distributional assumptions regarding the underlying input variables. Both methods can be used as forecasting tools in portfolio investment management, asset valuation, IT security and integrated business risk intelligence in volatile market conditions.
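
    For readers unfamiliar with forecast combinations, the sketch below shows one standard econometric combination rule, inverse-MSE weighting of two individual forecasts. It is an illustration of the general idea only, not the paper's model; the exchange-rate values and the model labels are hypothetical.

      # Illustrative sketch: combining two one-day-ahead forecasts with inverse-MSE weights.
      import numpy as np

      realised = np.array([4.31, 4.29, 4.33, 4.35, 4.32])      # EUR/PLN, hypothetical
      forecast_a = np.array([4.30, 4.30, 4.31, 4.36, 4.33])    # model A (hypothetical)
      forecast_b = np.array([4.33, 4.28, 4.34, 4.33, 4.30])    # model B (hypothetical)

      mse_a = np.mean((forecast_a - realised) ** 2)
      mse_b = np.mean((forecast_b - realised) ** 2)

      w_a = (1 / mse_a) / (1 / mse_a + 1 / mse_b)               # better model gets more weight
      w_b = 1 - w_a

      next_a, next_b = 4.34, 4.31                               # next-day individual forecasts
      combined = w_a * next_a + w_b * next_b
      print(f"weights: {w_a:.2f}/{w_b:.2f}, combined forecast: {combined:.3f}")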

  12. Combining ground-based and airborne EM through Artificial Neural Networks for modelling glacial till under saline groundwater conditions

    DEFF Research Database (Denmark)

    Gunnink, J.L.; Bosch, A.; Siemon, B.

    2012-01-01

    Airborne electromagnetic (AEM) methods supply data over large areas in a cost-effective way. We used Artificial Neural Networks (ANN) to classify the geophysical signal into a meaningful geological parameter. By using examples of known relations between ground-based geophysical data (in this case ...... is acting as a layer that inhibits groundwater flow, due to its high clay-content, and is therefore an important layer in hydrogeological modelling and for predicting the effects of climate change on groundwater quantity and quality.

  13. Communication Behaviour-Based Big Data Application to Classify and Detect HTTP Automated Software

    Directory of Open Access Journals (Sweden)

    Manh Cong Tran

    2016-01-01

    Full Text Available HTTP is recognized as the most widely used protocol on the Internet as more and more applications are moved onto the web by developers. Due to increasingly complex computer systems, a diversity of HTTP automated software (autoware) thrives. Unfortunately, besides normal autoware, HTTP malware and greyware are also spreading rapidly in the web environment. Consequently, network communication is no longer rigorously controlled by user intention. This raises the demand for analyzing HTTP autoware communication behaviour to detect and classify malicious and normal activities via HTTP traffic. Hence, in this paper, based on many studies and analysis of autoware communication behaviour through an access graph, a new method to detect and classify HTTP autoware communication at the network level is presented. The proposed system combines Hadoop MapReduce and a MarkLogic NoSQL database along with XQuery to deal with the huge HTTP traffic generated each day in a large network. The method is examined with real outbound HTTP traffic data collected through a proxy server of a private network. Experimental results obtained for the proposed method are promising, since 95.1% of suspicious autoware is classified and detected. This finding may assist network and system administrators in the early inspection of internal threats caused by HTTP autoware.

  14. On the interpretation of number and classifiers

    NARCIS (Netherlands)

    Cheng, L.L.; Doetjes, J.S.; Sybesma, R.P.E.; Zamparelli, R.

    2012-01-01

    Mandarin and Cantonese, both of which are numeral classifier languages, present an interesting puzzle concerning a compositional account of number in the various forms of nominals. First, bare nouns are number neutral (or vague in number). Second, cl-noun combinations appear to have different

  15. Embedded feature ranking for ensemble MLP classifiers.

    Science.gov (United States)

    Windeatt, Terry; Duangsoithong, Rakkrit; Smith, Raymond

    2011-06-01

    A feature ranking scheme for multilayer perceptron (MLP) ensembles is proposed, along with a stopping criterion based upon the out-of-bootstrap estimate. To solve multi-class problems feature ranking is combined with modified error-correcting output coding. Experimental results on benchmark data demonstrate the versatility of the MLP base classifier in removing irrelevant features.

  16. Network bursting dynamics in excitatory cortical neuron cultures results from the combination of different adaptive mechanisms.

    Directory of Open Access Journals (Sweden)

    Timothée Masquelier

    Full Text Available In the brain, synchronization among cells of an assembly is a common phenomenon, and thought to be functionally relevant. Here we used an in vitro experimental model of cell assemblies, cortical cultures, combined with numerical simulations of a spiking neural network (SNN) to investigate how and why spontaneous synchronization occurs. In order to deal with excitation only, we pharmacologically blocked GABAAergic transmission using bicuculline. Synchronous events in cortical cultures tend to involve almost every cell and to display relatively constant durations. We have thus named these "network spikes" (NS). The inter-NS-intervals (INSIs) proved to be a more interesting phenomenon. In most cortical cultures NSs typically come in series or bursts ("bursts of NSs", BNS), with short (~1 s) INSIs and separated by long silent intervals (tens of s), which leads to bimodal INSI distributions. This suggests that a facilitating mechanism is at work, presumably short-term synaptic facilitation, as well as two fatigue mechanisms: one with a short timescale, presumably short-term synaptic depression, and another one with a longer timescale, presumably cellular adaptation. We thus incorporated these three mechanisms into the SNN, which, indeed, produced realistic BNSs. Next, we systematically varied the recurrent excitation for various adaptation timescales. Strong excitability led to frequent, quasi-periodic BNSs (CV~0), and weak excitability led to rare BNSs, approaching a Poisson process (CV~1). Experimental cultures appear to operate within an intermediate weakly-synchronized regime (CV~0.5), with an adaptation timescale in the 2-8 s range, and well described by a Poisson-with-refractory-period model. Taken together, our results demonstrate that the INSI statistics are indeed informative: they allowed us to infer the mechanisms at work, and many parameters that we cannot access experimentally.

  17. Detecting and classifying faults on transmission systems using a backpropagation neural network; Deteccion y clasificacion de fallas en sistemas de transmision empleando una red neuronal con retropropagacion del error

    Energy Technology Data Exchange (ETDEWEB)

    Rosas Ortiz, German

    2000-01-01

    Fault detection and diagnosis on transmission systems is an interesting area of investigation for Artificial Intelligence (AI) based systems. Neurocomputing is one of the fastest growing areas of research in the fields of AI and pattern recognition. This work explores the suitability of the pattern recognition approach of neural networks for fault detection and classification on power systems. The conventional detection techniques in modern relays are based on digital signal processing and need some time (around one cycle) to issue a tripping signal; they are also likely to make incorrect decisions if the signals are noisy. It is desirable to develop a fast, accurate and robust approach that performs reliably under changing system conditions (such as load variations and fault resistance). The aim of this work is to develop a novel technique based on Artificial Neural Networks (ANN) which explores the suitability of a pattern classification approach for fault detection and diagnosis. The suggested approach is based on the fact that when a fault occurs, a change in the system impedance takes place and, as a consequence, changes in the amplitude and phase of the line voltage and current signals occur. The ANN-based fault discriminator is trained to detect these changes as indicators of the instant of fault inception. This detector uses instantaneous values of these signals to make decisions. The suitability of neural networks as pattern classifiers for transmission system fault diagnosis is described in detail, and a neural network design and simulation environment for real-time operation is presented. Results showing the performance of this approach are presented and indicate that it is sufficiently fast, secure and accurate, and that it can be used in high-speed fault detection and classification schemes.

  18. Investigating The Fusion of Classifiers Designed Under Different Bayes Errors

    Directory of Open Access Journals (Sweden)

    Fuad M. Alkoot

    2004-12-01

    Full Text Available We investigate a number of parameters commonly affecting the design of a multiple classifier system in order to find when fusing is most beneficial. We extend our previous investigation to the case where unequal classifiers are combined. Results indicate that Sum is not affected by this parameter; however, Vote degrades when a weaker classifier is introduced into the combining system. This is more obvious when estimation error with a uniform distribution exists.
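
    The two fusion rules compared in the study can be illustrated with a few lines of code: the Sum rule averages posterior estimates across classifiers, while Vote takes a majority of their hard decisions. The posterior matrices below, including the deliberately weaker third classifier, are hypothetical.

      # Illustrative sketch: Sum-rule and majority-Vote fusion of three classifiers.
      import numpy as np

      # posteriors[c][i, k] = P(class k | sample i) estimated by classifier c (hypothetical)
      posteriors = [
          np.array([[0.7, 0.3], [0.4, 0.6], [0.8, 0.2]]),      # stronger classifier
          np.array([[0.6, 0.4], [0.3, 0.7], [0.7, 0.3]]),      # stronger classifier
          np.array([[0.45, 0.55], [0.55, 0.45], [0.4, 0.6]]),  # weaker classifier
      ]

      sum_rule = np.mean(posteriors, axis=0).argmax(axis=1)        # average then decide

      votes = np.stack([p.argmax(axis=1) for p in posteriors])     # hard decisions
      majority = (votes.sum(axis=0) > len(posteriors) / 2).astype(int)  # 2-class majority

      print("Sum rule decisions:", sum_rule)
      print("Majority vote:     ", majority)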

  19. Classifier Fusion With Contextual Reliability Evaluation.

    Science.gov (United States)

    Liu, Zhunga; Pan, Quan; Dezert, Jean; Han, Jun-Wei; He, You

    2018-05-01

    Classifier fusion is an efficient strategy to improve the classification performance for complex pattern recognition problems. In practice, the multiple classifiers to combine can have different reliabilities, and proper reliability evaluation plays an important role in the fusion process for obtaining the best classification performance. We propose a new method for classifier fusion with contextual reliability evaluation (CF-CRE) based on inner reliability and relative reliability concepts. The inner reliability, represented by a matrix, characterizes the probability of the object belonging to one class when it is classified to another class. The elements of this matrix are estimated from the K-nearest neighbors of the object. A cautious discounting rule is developed under the belief functions framework to revise the classification result according to the inner reliability. The relative reliability is evaluated based on a new incompatibility measure which allows the level of conflict between the classifiers to be reduced by applying the classical evidence discounting rule to each classifier before their combination. The inner reliability and relative reliability capture different aspects of the classification reliability. The discounted classification results are combined with Dempster-Shafer's rule for the final class decision making support. The performance of CF-CRE has been evaluated and compared with those of the main classical fusion methods using real data sets. The experimental results show that CF-CRE can produce substantially higher accuracy than other fusion methods in general. Moreover, CF-CRE is robust to changes in the number of nearest neighbors chosen for estimating the reliability matrix, which is appealing for applications.
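
    The belief-function machinery this method builds on can be sketched for a two-class frame as below: each classifier's output is discounted by a reliability factor and the discounted masses are combined with Dempster's rule. This is not the full CF-CRE method (no inner-reliability matrix, K-nearest-neighbor estimation or cautious discounting); the mass values and reliabilities are hypothetical.

      # Illustrative sketch: reliability discounting + Dempster's rule on the frame {a, b}.

      def discount(m, alpha):
          """Shafer discounting: scale committed mass by alpha, move the rest to ignorance."""
          return {"a": alpha * m["a"], "b": alpha * m["b"],
                  "ab": 1.0 - alpha + alpha * m["ab"]}

      def dempster(m1, m2):
          """Dempster's rule for masses on {a}, {b} and the ignorance set {a, b}."""
          conflict = m1["a"] * m2["b"] + m1["b"] * m2["a"]
          k = 1.0 - conflict
          return {
              "a": (m1["a"] * m2["a"] + m1["a"] * m2["ab"] + m1["ab"] * m2["a"]) / k,
              "b": (m1["b"] * m2["b"] + m1["b"] * m2["ab"] + m1["ab"] * m2["b"]) / k,
              "ab": (m1["ab"] * m2["ab"]) / k,
          }

      # Two classifiers' basic belief assignments for one object, and their reliabilities.
      m_clf1 = {"a": 0.7, "b": 0.2, "ab": 0.1}
      m_clf2 = {"a": 0.3, "b": 0.6, "ab": 0.1}
      fused = dempster(discount(m_clf1, 0.9), discount(m_clf2, 0.6))
      print({key: round(val, 3) for key, val in fused.items()})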

  20. Performance Improvement of Wireless Mesh Networks by Using a Combination of Channel-Bonding and Multi-Channel Techniques

    Science.gov (United States)

    Xu, Liang; Yamamoto, Koji; Murata, Hidekazu; Yoshida, Susumu

    In the present paper, the use of a combination of channel-bonding and multi-channel techniques is proposed to improve the performance of wireless mesh networks (WMNs). It is necessary to increase the network throughput by broadening the bandwidth, and two approaches to effectively utilize the broadened bandwidth can be considered. One is the multi-channel technique, in which multiple separate frequency channels are used simultaneously for information transmission. The other is the channel-bonding technique used in IEEE 802.11n, which joins multiple frequency channels into a single broader channel. The former can reduce the channel traffic to mitigate the effect of packet collision, while the latter can increase the transmission rate. In the present paper, these two approaches are compared and their respective advantages are clarified in terms of network throughput and delay performance, assuming the same total bandwidth and a CSMA protocol. Our numerical and simulation results indicate that under low-traffic conditions the channel-bonding technique can achieve low delay, while under traffic congestion the network performance can be improved by using the multi-channel technique. Based on this result, the use of a combination of these two techniques is proposed for a WMN, and it is shown that it is better to select the channel technique according to the network traffic conditions. The findings of the present study also contribute to improving the performance of a multimedia network, which carries different types of application traffic.

  1. An OFDM Receiver with Frequency Domain Diversity Combined Impulsive Noise Canceller for Underwater Network.

    Science.gov (United States)

    Saotome, Rie; Hai, Tran Minh; Matsuda, Yasuto; Suzuki, Taisaku; Wada, Tomohisa

    2015-01-01

    In order to explore marine natural resources using remote robotic sensors or to enable rapid information exchange between ROVs (remotely operated vehicles), AUVs (autonomous underwater vehicles), divers, and ships, ultrasonic underwater communication systems are used. However, if the communication system is applied in a marine environment rich in living creatures, such as a shallow sea, it suffers from impulsive noise, so-called shrimp noise, which is randomly generated in the time domain and seriously degrades communication performance in an underwater acoustic network. With the purpose of supporting high performance underwater communication, a robust digital communication method for impulsive noise environments is necessary. In this paper, we propose an OFDM ultrasonic communication system with a diversity receiver. The main feature of the receiver is a newly proposed Frequency Domain Diversity Combined Impulsive Noise Canceller. The OFDM receiver utilizes the 20-28 KHz ultrasonic channel and subcarrier spacings of 46.875 Hz (MODE3) and 93.750 Hz (MODE2) OFDM modulations. In addition, the paper shows impulsive noise distribution data measured at a fishing port in Okinawa and at a barge in Shizuoka prefectures, and then the proposed diversity OFDM transceiver architecture and experimental results are described. With the proposed Impulsive Noise Canceller, the frame bit error rate has been decreased by 20-30%.

  2. An OFDM Receiver with Frequency Domain Diversity Combined Impulsive Noise Canceller for Underwater Network

    Directory of Open Access Journals (Sweden)

    Rie Saotome

    2015-01-01

    Full Text Available In order to explore marine natural resources using remote robotic sensor or to enable rapid information exchange between ROV (remotely operated vehicles), AUV (autonomous underwater vehicle), divers, and ships, ultrasonic underwater communication systems are used. However, if the communication system is applied to rich living creature marine environment such as shallow sea, it suffers from generated Impulsive Noise so-called Shrimp Noise, which is randomly generated in time domain and seriously degrades communication performance in underwater acoustic network. With the purpose of supporting high performance underwater communication, a robust digital communication method for Impulsive Noise environments is necessary. In this paper, we propose OFDM ultrasonic communication system with diversity receiver. The main feature of the receiver is a newly proposed Frequency Domain Diversity Combined Impulsive Noise Canceller. The OFDM receiver utilizes 20–28 KHz ultrasonic channel and subcarrier spacing of 46.875 Hz (MODE3) and 93.750 Hz (MODE2) OFDM modulations. In addition, the paper shows Impulsive Noise distribution data measured at a fishing port in Okinawa and at a barge in Shizuoka prefectures and then proposed diversity OFDM transceivers architecture and experimental results are described. By the proposed Impulsive Noise Canceller, frame bit error rate has been decreased by 20–30%.

  3. Artificial neural network combined with principal component analysis for resolution of complex pharmaceutical formulations.

    Science.gov (United States)

    Ioele, Giuseppina; De Luca, Michele; Dinç, Erdal; Oliverio, Filomena; Ragno, Gaetano

    2011-01-01

    A chemometric approach based on the combined use of principal component analysis (PCA) and an artificial neural network (ANN) was developed for the multicomponent determination of caffeine (CAF), mepyramine (MEP), phenylpropanolamine (PPA) and pheniramine (PNA) in their pharmaceutical preparations without any chemical separation. The predictive ability of the ANN method was compared with the classical linear regression method Partial Least Squares 2 (PLS2). The UV spectral data between 220 and 300 nm of a training set of sixteen quaternary mixtures were processed by PCA to reduce the dimensions of the input data and eliminate the noise coming from instrumentation. Several spectral ranges and different numbers of principal components (PCs) were tested to find the PCA-ANN and PLS2 models giving the best determination results. A two-layer ANN, using the first four PCs, was used with a log-sigmoid transfer function in the first hidden layer and a linear transfer function in the output layer. The standard error of prediction (SEP) was adopted to assess the predictive accuracy of the models when subjected to external validation. PCA-ANN showed better prediction ability in the determination of PPA and PNA in synthetic samples with added excipients and in pharmaceutical formulations. Since both components are characterized by low absorptivity, the better performance of PCA-ANN was ascribed to its ability to capture non-linear information arising from noise or interfering excipients.
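
    A compact sketch of the PCA-ANN idea is given below: simulated mixture spectra are compressed to their first four principal components, which feed a small network that predicts the four analyte concentrations. The spectra come from a simple Beer-Lambert mixing model and all numbers are hypothetical, so this illustrates the workflow rather than the authors' calibration.

      # Illustrative sketch: PCA compression of UV spectra followed by an MLP regressor.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(2)

      n_mixtures, n_wavelengths, n_analytes = 16, 81, 4   # e.g. 220-300 nm in 1 nm steps
      C = rng.random((n_mixtures, n_analytes))            # training concentrations (hypothetical)
      S = rng.random((n_analytes, n_wavelengths))         # pure-component spectra (hypothetical)
      A = C @ S + rng.normal(0, 0.01, (n_mixtures, n_wavelengths))   # noisy mixture spectra

      model = make_pipeline(
          PCA(n_components=4),                             # keep the first four PCs, as in the study
          MLPRegressor(hidden_layer_sizes=(5,), activation="logistic",
                       max_iter=5000, random_state=0),
      )
      model.fit(A, C)

      # Predict concentrations for a new (hypothetical) mixture spectrum.
      new_spectrum = rng.random((1, n_analytes)) @ S
      print(model.predict(new_spectrum).round(2))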

  4. Combining data and meta-analysis to build Bayesian networks for clinical decision support.

    Science.gov (United States)

    Yet, Barbaros; Perkins, Zane B; Rasmussen, Todd E; Tai, Nigel R M; Marsh, D William R

    2014-12-01

    Complex clinical decisions require the decision maker to evaluate multiple factors that may interact with each other. Many clinical studies, however, report 'univariate' relations between a single factor and outcome. Such univariate statistics are often insufficient to provide useful support for complex clinical decisions even when they are pooled using meta-analysis. More useful decision support could be provided by evidence-based models that take the interaction between factors into account. In this paper, we propose a method of integrating the univariate results of a meta-analysis with a clinical dataset and expert knowledge to construct multivariate Bayesian network (BN) models. The technique reduces the size of the dataset needed to learn the parameters of a model of a given complexity. Supplementing the data with the meta-analysis results avoids the need to either simplify the model - ignoring some complexities of the problem - or to gather more data. The method is illustrated by a clinical case study into the prediction of the viability of severely injured lower extremities. The case study illustrates the advantages of integrating combined evidence into BN development: the BN developed using our method outperformed four different data-driven structure learning methods, and a well-known scoring model (MESS) in this domain. Copyright © 2014 Elsevier Inc. All rights reserved.
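
    A minimal sketch of the underlying idea, under the simplifying assumption that the BN parameter of interest is a single probability: the pooled estimate from a meta-analysis is expressed as a Beta prior and updated with a small local dataset, so the combined estimate relies less on scarce local data. The numbers are hypothetical and the sketch does not reproduce the paper's actual BN construction.

      # Illustrative sketch: blending a meta-analytic prior with local data (Beta-Binomial update).
      pooled_p, prior_weight = 0.30, 50          # meta-analytic estimate and effective sample size
      alpha0 = pooled_p * prior_weight           # Beta prior pseudo-counts
      beta0 = (1 - pooled_p) * prior_weight

      local_events, local_n = 4, 20              # hypothetical (small) local clinical dataset

      alpha = alpha0 + local_events
      beta = beta0 + (local_n - local_events)
      posterior_mean = alpha / (alpha + beta)    # combined parameter estimate

      data_only = local_events / local_n
      print(f"data-only estimate: {data_only:.2f}, combined estimate: {posterior_mean:.2f}")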

  5. Classifier Selection with Permutation Tests

    OpenAIRE

    Arias, Marta; Arratia, Argimiro; Duarte-Lopez, Ariel

    2017-01-01

    This work presents a content-based recommender system for machine learning classifier algorithms. Given a new data set, a recommendation of what classifier is likely to perform best is made based on classifier performance over similar known data sets. This similarity is measured according to a data set characterization that includes several state-of-the-art metrics taking into account physical structure, statistics, and information theory. A novelty with respect to prior work is the use of ...

  6. Voice Quality Estimation in Combined Radio-VoIP Networks for Dispatching Systems

    Directory of Open Access Journals (Sweden)

    Jiri Vodrazka

    2016-01-01

    Full Text Available The field of voice quality modelling, assessment and planning is deeply and widely mastered, both theoretically and practically, for common voice communication systems, especially for the public fixed and mobile telephone networks, including Next Generation Networks (NGN - internet protocol based networks). This article seeks to contribute to voice quality modelling, assessment and planning for dispatching communication systems based on the Internet Protocol (IP) and private radio networks. The network plan, corrections to the E-model calculation and default values for the model are presented and discussed.
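
    To make the E-model reference concrete, the sketch below maps a rating factor R to a MOS estimate using the commonly cited ITU-T G.107 conversion, after subtracting a few impairment terms from the default basic rating. The impairment values and advantage factor are hypothetical simplifications of a real network plan for a combined radio-VoIP dispatching system.

      # Illustrative sketch: E-model R factor to MOS, with hypothetical impairment values.
      def r_to_mos(r):
          """Commonly cited ITU-T G.107 mapping from rating factor R to MOS."""
          if r < 0:
              return 1.0
          if r > 100:
              return 4.5
          return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

      # Very simplified R computation: start from the default basic rating and subtract impairments.
      r0 = 93.2                 # default basic signal-to-noise ratio term
      i_delay = 12.0            # hypothetical delay impairment (radio hops + IP transport)
      i_codec = 11.0            # hypothetical equipment impairment for the chosen codec
      advantage = 5.0           # hypothetical access-advantage factor for mobile/dispatch users

      r = r0 - i_delay - i_codec + advantage
      print(f"R = {r:.1f}, estimated MOS = {r_to_mos(r):.2f}")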

  7. Public Management and the Metagovernance of Hierarchies, Networks and Markets: The feasibility of designing and managing governance style combinations

    NARCIS (Netherlands)

    A.A.M. Meuleman (Louis)

    2008-01-01

    What is modern governance? Is it the battle against "old-fashioned" hierarchy, or is it the restoration of key hierarchical values? Is it optimizing network management, or maximizing the benefits of market thinking in the public sector? This book argues that it is the combination of all

  8. Mitosis detection in breast cancer pathology images by combining handcrafted and convolutional neural network features

    Science.gov (United States)

    Wang, Haibo; Cruz-Roa, Angel; Basavanhally, Ajay; Gilmore, Hannah; Shih, Natalie; Feldman, Mike; Tomaszewski, John; Gonzalez, Fabio; Madabhushi, Anant

    2014-01-01

    Abstract. Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is the mitotic count, which involves quantifying the number of cells in the process of dividing (i.e., undergoing mitosis) at a specific point in time. Currently, mitosis counting is done manually by a pathologist looking at multiple high power fields (HPFs) on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical, or textural attributes of mitoses or features learned with convolutional neural networks (CNN). Although handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely supervised feature generation methods, there is an appeal in attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. We present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color, and texture features). By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing the

  9. TEXTURE BASED LAND COVER CLASSIFICATION ALGORITHM USING GABOR WAVELET AND ANFIS CLASSIFIER

    Directory of Open Access Journals (Sweden)

    S. Jenicka

    2016-05-01

    Full Text Available Texture features play a predominant role in land cover classification of remotely sensed images. In this study, Gabor wavelets have been used for extracting texture features from data-intensive remotely sensed images. The Gabor wavelet transform filters the frequency components of an image through decomposition and produces useful features. For classification of fuzzy land cover patterns in the remotely sensed image, an Adaptive Neuro Fuzzy Inference System (ANFIS) has been used. The strength of the ANFIS classifier is that it combines the merits of fuzzy logic and neural networks. Hence, in this article, land cover classification of a remotely sensed image has been performed using a Gabor wavelet and an ANFIS classifier. The classification accuracy of the classified image obtained is found to be 92.8%.
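
    The texture-extraction step can be sketched as below: a small Gabor filter bank is applied to an image and each response is summarised by the mean and variance of its magnitude. The ANFIS classification stage is not reproduced here, and the random image, frequencies and orientations are placeholders for the remotely sensed data.

      # Illustrative sketch: Gabor filter-bank texture features for a single image band.
      import numpy as np
      from skimage.filters import gabor

      rng = np.random.default_rng(3)
      image = rng.random((64, 64))                         # placeholder single-band image

      features = []
      for frequency in (0.1, 0.2, 0.4):
          for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
              real, imag = gabor(image, frequency=frequency, theta=theta)
              magnitude = np.hypot(real, imag)
              features.extend([magnitude.mean(), magnitude.var()])

      print(len(features), "texture features extracted for this image")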

  10. Plurality in a Classifier Language.

    Science.gov (United States)

    Li, Yen-Hui Audrey

    1999-01-01

    Argues that a classifier language can have a plural morpheme within a nominal expression, suggesting that -men in Mandarin Chinese is best analyzed as a plural morpheme, in contrast to a regular plural on an element in N, such as the English -s. The paper makes a prediction about the structures of nominal expressions in classifier and…

  11. Comparative efficacy of combination bronchodilator therapies in COPD: a network meta-analysis

    Directory of Open Access Journals (Sweden)

    Huisman EL

    2015-09-01

    Full Text Available Eline L Huisman,1 Sarah M Cockle,2 Afisi S Ismaila,3,4 Andreas Karabis,1 Yogesh Suresh Punekar2 1Mapi Group, Real World Strategy and Analytics and Strategic Market Access, Houten, the Netherlands; 2Value Evidence and Outcomes, GlaxoSmithKline, Uxbridge, UK; 3Value Evidence and Outcomes, GlaxoSmithKline R&D, Research Triangle Park, NC, USA; 4Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada Background: Several new fixed-dose combination bronchodilators have been recently launched, and assessing their efficacy relative to each other, and with open dual combinations is desirable. This network meta-analysis (NMA) assessed the efficacy of umeclidinium and vilanterol (UMEC/VI) with that of available dual bronchodilators in single/separate inhalers. Methods: A systematic literature review identified randomized controlled trials of ≥10 weeks among chronic obstructive pulmonary disease patients (≥40 years), assessing the efficacy of combination bronchodilators in single or separate inhalers. Comparative assessment was conducted on change from baseline in trough forced expiratory volume in 1 second (FEV1), St George’s Respiratory Questionnaire (SGRQ) total scores, transitional dyspnea index (TDI) focal scores, and rescue medication use at 12 weeks and 24 weeks using an NMA within a Bayesian framework. Results: A systematic literature review identified 77 articles of 26 trials comparing UMEC/VI, indacaterol/glycopyrronium (QVA149), formoterol plus tiotropium (TIO) 18 µg, salmeterol plus TIO, or indacaterol plus TIO, with TIO and placebo as common comparators at 12 weeks and approximately 24 weeks. The NMA showed that at 24 weeks, efficacy of UMEC/VI was not significantly different compared with QVA149 on trough FEV1 (14.1 mL [95% credible interval: -14.2, 42.3]), SGRQ total score (0.18 [-1.28, 1.63]), TDI focal score (-0.30 [-0.73, 0.13]), and rescue medication use (0.02 [-0.27, 0.32]); compared with salmeterol plus

  12. Altered temporal features of intrinsic connectivity networks in boys with combined type of attention deficit hyperactivity disorder

    International Nuclear Information System (INIS)

    Wang, Xun-Heng; Li, Lihua

    2015-01-01

    Highlights: • Temporal patterns within ICNs provide new way to investigate ADHD brains. • ADHD exhibits enhanced temporal activities within and between ICNs. • Network-wise ALFF influences functional connectivity between ICNs. • Univariate patterns within ICNs are correlated to behavior scores. - Abstract: Purpose: Investigating the altered temporal features within and between intrinsic connectivity networks (ICNs) for boys with attention-deficit/hyperactivity disorder (ADHD); and analyzing the relationships between altered temporal features within ICNs and behavior scores. Materials and methods: A cohort of boys with combined type of ADHD and a cohort of age-matched healthy boys were recruited from ADHD-200 Consortium. All resting-state fMRI datasets were preprocessed and normalized into standard brain space. Using general linear regression, 20 ICNs were taken as spatial templates to analyze the time-courses of ICNs for each subject. Amplitude of low frequency fluctuations (ALFFs) were computed as univariate temporal features within ICNs. Pearson correlation coefficients and node strengths were computed as bivariate temporal features between ICNs. Additional correlation analysis was performed between temporal features of ICNs and behavior scores. Results: ADHD exhibited more activated network-wise ALFF than normal controls in attention and default mode-related network. Enhanced functional connectivities between ICNs were found in ADHD. The network-wise ALFF within ICNs might influence the functional connectivity between ICNs. The temporal pattern within posterior default mode network (pDMN) was positively correlated to inattentive scores. The subcortical network, fusiform-related DMN and attention-related networks were negatively correlated to Intelligence Quotient (IQ) scores. Conclusion: The temporal low frequency oscillations of ICNs in boys with ADHD were more activated than normal controls during resting state; the temporal features within ICNs could

  13. Altered temporal features of intrinsic connectivity networks in boys with combined type of attention deficit hyperactivity disorder

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xun-Heng, E-mail: xhwang@hdu.edu.cn [College of Life Information Science and Instrument Engineering, Hangzhou Dianzi University, Hangzhou 310018 (China); School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096 (China); Li, Lihua [College of Life Information Science and Instrument Engineering, Hangzhou Dianzi University, Hangzhou 310018 (China)

    2015-05-15

    Highlights: • Temporal patterns within ICNs provide new way to investigate ADHD brains. • ADHD exhibits enhanced temporal activities within and between ICNs. • Network-wise ALFF influences functional connectivity between ICNs. • Univariate patterns within ICNs are correlated to behavior scores. - Abstract: Purpose: Investigating the altered temporal features within and between intrinsic connectivity networks (ICNs) for boys with attention-deficit/hyperactivity disorder (ADHD); and analyzing the relationships between altered temporal features within ICNs and behavior scores. Materials and methods: A cohort of boys with combined type of ADHD and a cohort of age-matched healthy boys were recruited from ADHD-200 Consortium. All resting-state fMRI datasets were preprocessed and normalized into standard brain space. Using general linear regression, 20 ICNs were taken as spatial templates to analyze the time-courses of ICNs for each subject. Amplitude of low frequency fluctuations (ALFFs) were computed as univariate temporal features within ICNs. Pearson correlation coefficients and node strengths were computed as bivariate temporal features between ICNs. Additional correlation analysis was performed between temporal features of ICNs and behavior scores. Results: ADHD exhibited more activated network-wise ALFF than normal controls in attention and default mode-related network. Enhanced functional connectivities between ICNs were found in ADHD. The network-wise ALFF within ICNs might influence the functional connectivity between ICNs. The temporal pattern within posterior default mode network (pDMN) was positively correlated to inattentive scores. The subcortical network, fusiform-related DMN and attention-related networks were negatively correlated to Intelligence Quotient (IQ) scores. Conclusion: The temporal low frequency oscillations of ICNs in boys with ADHD were more activated than normal controls during resting state; the temporal features within ICNs could

  14. Novel amphiphilic poly(dimethylsiloxane) based polyurethane networks tethered with carboxybetaine and their combined antibacterial and anti-adhesive property

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Jingxian; Fu, Yuchen; Zhang, Qinghua, E-mail: qhzhang@zju.edu.cn; Zhan, Xiaoli; Chen, Fengqiu

    2017-08-01

    Highlights: • An amphiphilic poly(dimethylsiloxane) (PDMS) based polyurethane (PU) network tethered with carboxybetaine is prepared. • The surface distribution of PDMS and zwitterionic segments produces an obvious amphiphilic heterogeneous surface. • This designed PDMS-based amphiphilic PU network exhibits combined antibacterial and anti-adhesive properties. - Abstract: The traditional nonfouling materials are powerless against bacterial cells attachment, while the hydrophobic bactericidal surfaces always suffer from nonspecific protein adsorption and dead bacterial cells accumulation. Here, amphiphilic polyurethane (PU) networks modified with poly(dimethylsiloxane) (PDMS) and cationic carboxybetaine diol through simple crosslinking reaction were developed, which had an antibacterial efficiency of 97.7%. Thereafter, the hydrolysis of carboxybetaine ester into zwitterionic groups brought about anti-adhesive properties against bacteria and proteins. The surface chemical composition and wettability performance of the PU network surfaces were investigated by attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR), X-ray photoelectron spectroscopy (XPS) and contact angle analysis. The surface distribution of PDMS and zwitterionic segments produced an obvious amphiphilic heterogeneous surface, which was demonstrated by atomic force microscopy (AFM). Enzyme-linked immunosorbent assays (ELISA) were used to test the nonspecific protein adsorption behaviors. With the advantages of the transition from excellent bactericidal performance to anti-adhesion and the combination of fouling resistance and fouling release property, the designed PDMS-based amphiphilic PU network shows great application potential in biomedical devices and marine facilities.

  15. Material discovery by combining stochastic surface walking global optimization with a neural network.

    Science.gov (United States)

    Huang, Si-Da; Shang, Cheng; Zhang, Xiao-Jie; Liu, Zhi-Pan

    2017-09-01

    While the underlying potential energy surface (PES) determines the structure and other properties of a material, it has been frustrating to predict new materials from theory even with the advent of supercomputing facilities. The accuracy of the PES and the efficiency of PES sampling are two major bottlenecks, not least because of the great complexity of the material PES. This work introduces a "Global-to-Global" approach for material discovery by combining for the first time a global optimization method with neural network (NN) techniques. The novel global optimization method, named the stochastic surface walking (SSW) method, is carried out massively in parallel for generating a global training data set, the fitting of which by the atom-centered NN produces a multi-dimensional global PES; the subsequent SSW exploration of large systems with the analytical NN PES can provide key information on the thermodynamics and kinetics stability of unknown phases identified from global PESs. We describe in detail the current implementation of the SSW-NN method with particular focuses on the size of the global data set and the simultaneous energy/force/stress NN training procedure. An important functional material, TiO2, is utilized as an example to demonstrate the automated global data set generation, the improved NN training procedure and the application in material discovery. Two new TiO2 porous crystal structures are identified, which have similar thermodynamics stability to the common TiO2 rutile phase and the kinetics stability for one of them is further proved from SSW pathway sampling. As a general tool for material simulation, the SSW-NN method provides an efficient and predictive platform for large-scale computational material screening.

  16. Combined Exposure to Simulated Microgravity and Acute or Chronic Radiation Reduces Neuronal Network Integrity and Survival.

    Science.gov (United States)

    Pani, Giuseppe; Verslegers, Mieke; Quintens, Roel; Samari, Nada; de Saint-Georges, Louis; van Oostveldt, Patrick; Baatout, Sarah; Benotmane, Mohammed Abderrafi

    2016-01-01

    During orbital or interplanetary space flights, astronauts are exposed to cosmic radiations and microgravity. However, most earth-based studies on the potential health risks of space conditions have investigated the effects of these two conditions separately. This study aimed at assessing the combined effect of radiation exposure and microgravity on neuronal morphology and survival in vitro. In particular, we investigated the effects of simulated microgravity after acute (X-rays) or during chronic (Californium-252) exposure to ionizing radiation using mouse mature neuron cultures. Acute exposure to low (0.1 Gy) doses of X-rays caused a delay in neurite outgrowth and a reduction in soma size, while only the high dose impaired neuronal survival. Of interest, the strongest effect on neuronal morphology and survival was evident in cells exposed to microgravity and in particular in cells exposed to both microgravity and radiation. Removal of neurons from simulated microgravity for a period of 24 h was not sufficient to recover neurite length, whereas the soma size showed a clear re-adaptation to normal ground conditions. Genome-wide gene expression analysis confirmed a modulation of genes involved in neurite extension, cell survival and synaptic communication, suggesting that these changes might be responsible for the observed morphological effects. In general, the observed synergistic changes in neuronal network integrity and cell survival induced by simulated space conditions might help to better evaluate the astronaut's health risks and underline the importance of investigating the central nervous system and long-term cognition during and after a space flight.

  17. Combined Exposure to Simulated Microgravity and Acute or Chronic Radiation Reduces Neuronal Network Integrity and Survival.

    Directory of Open Access Journals (Sweden)

    Giuseppe Pani

    Full Text Available During orbital or interplanetary space flights, astronauts are exposed to cosmic radiations and microgravity. However, most earth-based studies on the potential health risks of space conditions have investigated the effects of these two conditions separately. This study aimed at assessing the combined effect of radiation exposure and microgravity on neuronal morphology and survival in vitro. In particular, we investigated the effects of simulated microgravity after acute (X-rays) or during chronic (Californium-252) exposure to ionizing radiation using mouse mature neuron cultures. Acute exposure to low (0.1 Gy) doses of X-rays caused a delay in neurite outgrowth and a reduction in soma size, while only the high dose impaired neuronal survival. Of interest, the strongest effect on neuronal morphology and survival was evident in cells exposed to microgravity and in particular in cells exposed to both microgravity and radiation. Removal of neurons from simulated microgravity for a period of 24 h was not sufficient to recover neurite length, whereas the soma size showed a clear re-adaptation to normal ground conditions. Genome-wide gene expression analysis confirmed a modulation of genes involved in neurite extension, cell survival and synaptic communication, suggesting that these changes might be responsible for the observed morphological effects. In general, the observed synergistic changes in neuronal network integrity and cell survival induced by simulated space conditions might help to better evaluate the astronaut's health risks and underline the importance of investigating the central nervous system and long-term cognition during and after a space flight.

  18. Classified

    CERN Multimedia

    Computer Security Team

    2011-01-01

    In the last issue of the Bulletin, we have discussed recent implications for privacy on the Internet. But privacy of personal data is just one facet of data protection. Confidentiality is another one. However, confidentiality and data protection are often perceived as not relevant in the academic environment of CERN.   But think twice! At CERN, your personal data, e-mails, medical records, financial and contractual documents, MARS forms, group meeting minutes (and of course your password!) are all considered to be sensitive, restricted or even confidential. And this is not all. Physics results, in particular when being preliminary and pending scrutiny, are sensitive, too. Just recently, an ATLAS collaborator copy/pasted the abstract of an ATLAS note onto an external public blog, despite the fact that this document was clearly marked as an "Internal Note". Such an act was not only embarrassing to the ATLAS collaboration, and had negative impact on CERN’s reputation --- i...

  19. Combining social and genetic networks to study HIV transmission in mixing risk groups

    NARCIS (Netherlands)

    Zarrabi, N.; Prosperi, M.C.F.; Belleman, R.G.; Di Giambenedetto, S.; Fabbiani, M.; De Luca, A.; Sloot, P.M.A.

    2013-01-01

    Reconstruction of HIV transmission networks is important for understanding and preventing the spread of the virus and drug resistant variants. Mixing risk groups is important in network analysis of HIV in order to assess the role of transmission between risk groups in the HIV epidemic. Most of the

  20. Combining epidemiological and genetic networks signifies the importance of early treatment in HIV-1 transmission

    NARCIS (Netherlands)

    Zarrabi, N.; Prosperi, M.; Belleman, R.G.; Colafigli, M.; De Luca, A.; Sloot, P.M.A.

    2012-01-01

    Inferring disease transmission networks is important in epidemiology in order to understand and prevent the spread of infectious diseases. Reconstruction of the infection transmission networks requires insight into viral genome data as well as social interactions. For the HIV-1 epidemic, current

  1. Combining Host-based and network-based intrusion detection system

    African Journals Online (AJOL)

    These attacks were simulated using hping. The proposed system is implemented in Java. The results show that the proposed system is able to detect attacks both from within (host-based) and outside sources (network-based). Key Words: Intrusion Detection System (IDS), Host-based, Network-based, Signature, Security log.

  2. Combining Transcranial Magnetic Stimulation and fMRI to Examine the Default Mode Network

    OpenAIRE

    Halko, Mark A.; Eldaief, Mark C.; Horvath, Jared C.; Pascual-Leone, Alvaro

    2010-01-01

    The default mode network is a group of brain regions that are active when an individual is not focused on the outside world and the brain is at "wakeful rest."1,2,3 It is thought the default mode network corresponds to self-referential or "internal mentation".2,3

  3. Method for Constructing Composite Response Surfaces by Combining Neural Networks with other Interpolation or Estimation Techniques

    Science.gov (United States)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2003-01-01

    A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.
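
    One plausible reading of such a composite surface is sketched below: a low-order polynomial captures the global trend of the sampled response and a small neural network models the residual, so the combined surrogate keeps the polynomial's economy while gaining the network's flexibility. The objective function and samples are hypothetical, and the parameter-based partitioning of the design space described in the patent is not shown.

      # Illustrative sketch: composite response surface = low-order polynomial + NN on residuals.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(4)

      x = rng.uniform(-2, 2, 40)                       # sampled design parameter (hypothetical)
      y = x**2 + 0.3 * np.sin(5 * x)                   # hypothetical simulation response

      coeffs = np.polyfit(x, y, deg=2)                 # low-order polynomial trend
      poly = np.poly1d(coeffs)

      residual_net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
      residual_net.fit(x.reshape(-1, 1), y - poly(x))  # network learns what the polynomial misses

      def composite_surface(xq):
          xq = np.asarray(xq, dtype=float)
          return poly(xq) + residual_net.predict(xq.reshape(-1, 1))

      print(composite_surface([0.5, 1.5]).round(3))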

  4. Neural Classifier Construction using Regularization, Pruning

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Hansen, Lars Kai; Larsen, Jan

    1998-01-01

    In this paper we propose a method for construction of feed-forward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme, we derive a modified form of the entropic error measure and an algebraic estimate of the test error. In conjunction with optimal brain damage pruning, the test error estimate is used to select the network architecture. The scheme is evaluated on four classification problems.

  5. Classifying objects in LWIR imagery via CNNs

    Science.gov (United States)

    Rodger, Iain; Connor, Barry; Robertson, Neil M.

    2016-10-01

    The aim of the presented work is to demonstrate enhanced target recognition and improved false alarm rates for a mid to long range detection system, utilising a Long Wave Infrared (LWIR) sensor. By exploiting high quality thermal image data and recent techniques in machine learning, the system can provide automatic target recognition capabilities. A Convolutional Neural Network (CNN) is trained and the classifier achieves an overall accuracy of > 95% for 6 object classes related to land defence. While the highly accurate CNN struggles to recognise long range target classes, due to low signal quality, robust target discrimination is achieved for challenging candidates. The overall performance of the methodology presented is assessed using human ground truth information, generating classifier evaluation metrics for thermal image sequences.

  6. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification.

    Science.gov (United States)

    Jing, Yaqi; Meng, Qinghao; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen

    2014-05-01

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. Feature selection method used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of feature space was reduced to 12. Classification of Chinese liquors was performed by using back propagation artificial neural network (BP-ANN), linear discrimination analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than LDA and BP-ANN. Finally the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier and classification rate was 98.75% and 100%, respectively.
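
    The overall pipeline can be sketched as below, with scikit-learn components standing in for the paper's specific algorithms: a mutual-information selector replaces the eight information-theoretic selection methods, KernelPCA replaces kernel entropy component analysis, and a logistic regression replaces the multi-linear classifier. The sensor readings and labels are synthetic, so the printed accuracy only demonstrates the workflow.

      # Illustrative sketch: feature selection -> kernel-based reduction -> linear classifier.
      import numpy as np
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.decomposition import KernelPCA
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(5)
      X = rng.random((90, 120))                 # 90 liquor samples x 120 raw sensor features
      y = rng.integers(0, 3, 90)                # 3 liquor classes (hypothetical)

      pipeline = make_pipeline(
          SelectKBest(mutual_info_classif, k=41),     # keep 41 features, as in the paper
          KernelPCA(n_components=12, kernel="rbf"),   # reduce to 12 components
          LogisticRegression(max_iter=1000),          # simple linear classifier stand-in
      )
      pipeline.fit(X, y)
      print("training accuracy:", round(pipeline.score(X, y), 3))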

  7. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    International Nuclear Information System (INIS)

    Jing, Yaqi; Meng, Qinghao; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen

    2014-01-01

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. Feature selection method used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of feature space was reduced to 12. Classification of Chinese liquors was performed by using back propagation artificial neural network (BP-ANN), linear discrimination analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than LDA and BP-ANN. Finally the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier and classification rate was 98.75% and 100%, respectively

  8. Combined flatland ST radar and digital-barometer network observations of mesoscale processes

    Science.gov (United States)

    Clark, W. L.; Vanzandt, T. E.; Gage, K. S.; Einaudi, F. E.; Rottman, J. W.; Hollinger, S. E.

    1991-01-01

    The paper describes a six-station digital-barometer network centered on the Flatland ST radar to support observational studies of gravity waves and other mesoscale features at the Flatland Atmospheric Observatory in central Illinois. The network's current mode of operation is examined, and a preliminary example of an apparent group of waves evident throughout the network as well as throughout the troposphere is presented. Preliminary results demonstrate the capabilities of the current operational system to study wave convection, wave-front, and other coherent mesoscale interactions and processes throughout the troposphere. Unfiltered traces for the pressure and horizontal zonal wind, for days 351 to 353 UT, 1990, are illustrated.

  9. Network meta-analysis of Chinese herbal injections combined with the chemotherapy for the treatment of pancreatic cancer.

    Science.gov (United States)

    Zhang, Dan; Wu, Jiarui; Liu, Shi; Zhang, Xiaomeng; Zhang, Bing

    2017-05-01

    This study used a network meta-analysis to assess the effectiveness and safety of Chinese herbal injections (CHIs) combined with chemotherapy for the treatment of pancreatic cancer. Randomized controlled trials (RCTs) regarding CHIs to treat pancreatic cancer were searched in PubMed, the Cochrane Library, Embase, the China National Knowledge Infrastructure Database (CNKI), the Wan-Fang Database, the Chinese Scientific Journals Full-text Database (VIP), and the Chinese Biomedical Literature Database (SinoMed) up to November 2016. Quality assessment was conducted with the Cochrane risk of bias tool, and a network meta-analysis was performed to compare the effectiveness and safety of different CHIs combined with chemotherapy. Data were analyzed using STATA 12.0 and WinBUGS 1.4 software. A total of 278 records were retrieved, and 22 eligible RCTs involving 1329 patients and 9 CHIs were included. The results of the network meta-analysis demonstrated that, compared with chemotherapy alone, Compound Kushen, Kangai or Kanglaite injection combined with chemotherapy yielded a significantly higher probability of improving performance status. Aidi injection combined with chemotherapy was more effective in relieving leucopenia than chemotherapy alone, and these between-group differences were statistically significant. However, CHIs combined with chemotherapy did not achieve a better effect on the total clinical effect, nausea and vomiting. As for the cluster analysis of adverse reactions (ADRs), chemotherapy alone and Huachansu injection combined with chemotherapy were inferior in relieving ADRs compared with the other CHIs plus chemotherapy for patients with pancreatic cancer. The current evidence shows that using CHIs in combination with chemotherapy can be beneficial for patients with pancreatic cancer in improving performance status and reducing ADRs.

  10. IAEA safeguards and classified materials

    International Nuclear Information System (INIS)

    Pilat, J.F.; Eccleston, G.W.; Fearey, B.L.; Nicholas, N.J.; Tape, J.W.; Kratzer, M.

    1997-01-01

    The international community in the post-Cold War period has suggested that the International Atomic Energy Agency (IAEA) utilize its expertise in support of the arms control and disarmament process in unprecedented ways. The pledges of the US and Russian presidents to place excess defense materials, some of which are classified, under some type of international inspections raises the prospect of using IAEA safeguards approaches for monitoring classified materials. A traditional safeguards approach, based on nuclear material accountancy, would seem unavoidably to reveal classified information. However, further analysis of the IAEA's safeguards approaches is warranted in order to understand fully the scope and nature of any problems. The issues are complex and difficult, and it is expected that common technical understandings will be essential for their resolution. Accordingly, this paper examines and compares traditional safeguards item accounting of fuel at a nuclear power station (especially spent fuel) with the challenges presented by inspections of classified materials. This analysis is intended to delineate more clearly the problems as well as reveal possible approaches, techniques, and technologies that could allow the adaptation of safeguards to the unprecedented task of inspecting classified materials. It is also hoped that a discussion of these issues can advance ongoing political-technical debates on international inspections of excess classified materials

  11. Classifying transcription factor targets and discovering relevant biological features

    Directory of Open Access Journals (Sweden)

    DeLisi Charles

    2008-05-01

    Background: An important goal in post-genomic research is discovering the network of interactions between transcription factors (TFs) and the genes they regulate. We have previously reported the development of a supervised-learning approach to TF target identification, and used it to predict targets of 104 transcription factors in yeast. We now include a new sequence conservation measure, expand our predictions to include 59 new TFs, introduce a web-server, and implement an improved ranking method to reveal the biological features contributing to regulation. The classifiers combine 8 genomic datasets covering a broad range of measurements including sequence conservation, sequence overrepresentation, gene expression, and DNA structural properties. Principal Findings: (1) Application of the method yields an amplification of information about yeast regulators. The ratio of total targets to previously known targets is greater than 2 for 11 TFs, with several having larger gains: Ash1 (4), Ino2 (2.6), Yaf1 (2.4), and Yap6 (2.4). (2) Many predicted targets for TFs match well with the known biology of their regulators. As a case study we discuss the regulator Swi6, presenting evidence that it may be important in the DNA damage response, and that the previously uncharacterized gene YMR279C plays a role in DNA damage response and perhaps in cell-cycle progression. (3) A procedure based on recursive feature elimination is able to uncover from the large initial data sets those features that best distinguish targets for any TF, providing clues relevant to its biology. An analysis of Swi6 suggests a possible role in lipid metabolism, and more specifically in metabolism of ceramide, a bioactive lipid currently being investigated for anti-cancer properties. (4) An analysis of global network properties highlights the transcriptional network hubs; the factors which control the most genes and the genes which are bound by the largest set of regulators. Cell-cycle and
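
    The recursive-feature-elimination step mentioned in finding (3) can be illustrated roughly as follows; the data here are synthetic stand-ins for the per-gene genomic features, not the paper's datasets:

```python
# Minimal sketch (assumed setup, not the paper's pipeline): recursive feature
# elimination ranks which features best separate targets from non-targets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# Synthetic stand-in for per-gene features (conservation, expression, etc.).
X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)

selector = RFE(LinearSVC(C=1.0, max_iter=5000), n_features_to_select=8, step=1)
selector.fit(X, y)

print("Selected feature indices:", np.where(selector.support_)[0])
print("Feature ranking (1 = kept):", selector.ranking_)
```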

  12. Governing networks in the hollow state: contracting out, process management or a combination of the two?

    NARCIS (Netherlands)

    E-H. Klijn (Erik-Hans)

    2010-01-01

    The hollow state is characterised by governing through networks. In this article, we explore the nature of the hollow state and trace and illustrate three basic uncertainties in the decision making process which create complexity: knowledge uncertainty,

  13. Comparing Attentional Networks in fetal alcohol spectrum disorder and the inattentive and combined subtypes of attention deficit hyperactivity disorder.

    Science.gov (United States)

    Kooistra, Libbe; Crawford, Susan; Gibbard, Ben; Kaplan, Bonnie J; Fan, Jin

    2011-01-01

    The Attention Network Test (ANT) was used to examine alerting, orienting, and executive control in fetal alcohol spectrum disorder (FASD) versus attention deficit hyperactivity disorder (ADHD). Participants were 113 children aged 7 to 10 years (31 ADHD-Combined, 16 ADHD-Primarily Inattentive, 28 FASD, 38 controls). Incongruent flanker trials triggered slower responses in both the ADHD-Combined and the FASD groups. Abnormal conflict scores in these same two groups provided additional evidence for the presence of executive function deficits. The ADHD-Primarily Inattentive group was indistinguishable from the controls on all three ANT indices, which highlights the possibility that this group constitutes a pathologically distinct entity.

  14. A grey neural network and input-output combined forecasting model. Primary energy consumption forecasts in Spanish economic sectors

    International Nuclear Information System (INIS)

    Liu, Xiuli; Moreno, Blanca; García, Ana Salomé

    2016-01-01

    A combined forecast integrating the Grey forecasting method and a back-propagation neural network model, called the Grey Neural Network and Input-Output Combined Forecasting Model (GNF-IO model), is proposed. A real case of energy consumption forecasting is used to validate the effectiveness of the proposed model. The GNF-IO model predicts coal, crude oil, natural gas, renewable and nuclear primary energy consumption volumes for Spain's 36 sub-sectors from 2010 to 2015 according to three different GDP growth scenarios (optimistic, baseline and pessimistic). Model tests show that the proposed model has higher simulation and forecasting accuracy on energy consumption than the Grey models alone and other combination methods. The forecasts indicate that the primary energies coal, crude oil and natural gas will represent on average 83.6% of total primary energy consumption, raising concerns about security of supply and energy cost and adding risk for some industrial production processes. Thus, Spanish industry must speed up its transition to an energy-efficient economy, achieving a cost reduction and an increase in the level of self-supply. - Highlights: • A forecasting system using Grey models combined with Input-Output models is proposed. • Primary energy consumption in Spain is used to validate the model. • The grey-based combined model has good forecasting performance. • Natural gas will represent the majority of the total primary energy consumption. • Concerns about security of supply, energy cost and industry competitiveness are raised.
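
    One common way to couple a grey model with a neural network is to let GM(1,1) capture the trend and a small network model its residuals. The sketch below is purely illustrative: the series, the residual-correction scheme, and the network size are assumptions, not the GNF-IO formulation.

```python
# Minimal sketch (illustrative only): a GM(1,1) grey forecast whose residuals are
# corrected by a small neural network, one common grey/NN combination scheme.
import numpy as np
from sklearn.neural_network import MLPRegressor

def gm11_forecast(x0, n_ahead):
    """Classic GM(1,1): fit on series x0, return fitted values plus n_ahead forecasts."""
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

energy = np.array([100.0, 104.0, 110.0, 113.0, 119.0, 124.0, 131.0, 135.0])  # toy series
fitted = gm11_forecast(energy, n_ahead=0)
residuals = energy - fitted

# The neural network learns the residual as a function of the time index.
t = np.arange(len(energy)).reshape(-1, 1)
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(t, residuals)

combined = gm11_forecast(energy, n_ahead=3)[-3:] + nn.predict(np.arange(8, 11).reshape(-1, 1))
print("3-step combined forecast:", combined)
```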

  15. Assessing sensory versus optogenetic network activation by combining (o)fMRI with optical Ca2+ recordings

    Science.gov (United States)

    Schmid, Florian; Wachsmuth, Lydia; Schwalm, Miriam; Prouvot, Pierre-Hugues; Jubal, Eduardo Rosales; Fois, Consuelo; Pramanik, Gautam; Zimmer, Claus; Stroh, Albrecht

    2015-01-01

    Encoding of sensory inputs in the cortex is characterized by sparse neuronal network activation. Optogenetic stimulation has previously been combined with fMRI (ofMRI) to probe functional networks. However, for a quantitative optogenetic probing of sensory-driven sparse network activation, the level of similarity between sensory and optogenetic network activation needs to be explored. Here, we complement ofMRI with optic fiber-based population Ca2+ recordings for a region-specific readout of neuronal spiking activity in rat brain. Comparing Ca2+ responses to the blood oxygenation level-dependent signal upon sensory stimulation with increasing frequencies showed adaptation of Ca2+ transients contrasted by an increase of blood oxygenation level-dependent responses, indicating that the optical recordings convey complementary information on neuronal network activity to the corresponding hemodynamic response. To study the similarity of optogenetic and sensory activation, we quantified the density of cells expressing channelrhodopsin-2 and modeled light propagation in the tissue. We estimated the effectively illuminated volume and numbers of optogenetically stimulated neurons, being indicative of sparse activation. At the functional level, upon either sensory or optogenetic stimulation we detected single-peak short-latency primary Ca2+ responses with similar amplitudes and found that blood oxygenation level-dependent responses showed similar time courses. These data suggest that ofMRI can serve as a representative model for functional brain mapping. PMID:26661247

  16. Kriging-Based Parameter Estimation Algorithm for Metabolic Networks Combined with Single-Dimensional Optimization and Dynamic Coordinate Perturbation.

    Science.gov (United States)

    Wang, Hong; Wang, Xicheng; Li, Zheng; Li, Keqiu

    2016-01-01

    The metabolic network model allows for an in-depth insight into the molecular mechanism of a particular organism. Because most parameters of the metabolic network cannot be directly measured, they must be estimated by using optimization algorithms. However, three characteristics of the metabolic network model, i.e., high nonlinearity, a large number of parameters, and wide variation ranges of the parameters, restrict the application of many traditional optimization algorithms. As a result, there is a growing demand to develop efficient optimization approaches to address this complex problem. In this paper, a Kriging-based algorithm aimed at parameter estimation is presented for constructing metabolic networks. In the algorithm, a new infill sampling criterion, named expected improvement and mutual information (EI&MI), is adopted to improve the modeling accuracy by selecting multiple new sample points at each cycle, and a domain decomposition strategy based on principal component analysis is introduced to save computing time. Meanwhile, the convergence speed is accelerated by combining a single-dimensional optimization method with a dynamic coordinate perturbation strategy when determining the new sample points. Finally, the algorithm is applied to the arachidonic acid metabolic network to estimate its parameters. The obtained results demonstrate the effectiveness of the proposed algorithm in obtaining precise parameter values within a limited number of iterations.
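
    A stripped-down version of Kriging-based infill sampling looks like the sketch below. It uses a plain expected-improvement criterion on a toy one-dimensional objective; the paper's EI&MI criterion, domain decomposition, and coordinate perturbation are not reproduced.

```python
# Minimal sketch (simplified, not the paper's EI&MI criterion): a Kriging surrogate
# with a plain expected-improvement infill rule for choosing the next sample point.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):
    """Stand-in for an expensive simulation (e.g. a metabolic-network fit error)."""
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.1], [0.9], [1.7], [2.5]])
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
candidates = np.linspace(0.0, 3.0, 200).reshape(-1, 1)

for _ in range(5):                                   # sequential infill loop
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    imp = best - mu
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)     # expected improvement
    x_new = candidates[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_new])
    y = np.append(y, objective(x_new).ravel())

print("Best parameter found:", X[np.argmin(y)].item(), "objective:", y.min())
```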

  17. Assessing sensory versus optogenetic network activation by combining (o)fMRI with optical Ca2+ recordings.

    Science.gov (United States)

    Schmid, Florian; Wachsmuth, Lydia; Schwalm, Miriam; Prouvot, Pierre-Hugues; Jubal, Eduardo Rosales; Fois, Consuelo; Pramanik, Gautam; Zimmer, Claus; Faber, Cornelius; Stroh, Albrecht

    2016-11-01

    Encoding of sensory inputs in the cortex is characterized by sparse neuronal network activation. Optogenetic stimulation has previously been combined with fMRI (ofMRI) to probe functional networks. However, for a quantitative optogenetic probing of sensory-driven sparse network activation, the level of similarity between sensory and optogenetic network activation needs to be explored. Here, we complement ofMRI with optic fiber-based population Ca2+ recordings for a region-specific readout of neuronal spiking activity in rat brain. Comparing Ca2+ responses to the blood oxygenation level-dependent signal upon sensory stimulation with increasing frequencies showed adaptation of Ca2+ transients contrasted by an increase of blood oxygenation level-dependent responses, indicating that the optical recordings convey complementary information on neuronal network activity to the corresponding hemodynamic response. To study the similarity of optogenetic and sensory activation, we quantified the density of cells expressing channelrhodopsin-2 and modeled light propagation in the tissue. We estimated the effectively illuminated volume and numbers of optogenetically stimulated neurons, being indicative of sparse activation. At the functional level, upon either sensory or optogenetic stimulation we detected single-peak short-latency primary Ca2+ responses with similar amplitudes and found that blood oxygenation level-dependent responses showed similar time courses. These data suggest that ofMRI can serve as a representative model for functional brain mapping. © The Author(s) 2015.

  18. A Hybrid Combination Scheme for Cooperative Spectrum Sensing in Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Changhua Yao

    2014-01-01

    We propose a novel hybrid combination scheme in cooperative spectrum sensing (CSS), which utilizes the diversity of reporting channels to achieve better throughput performance. Secondary users (SUs) with good reporting channel quality transmit quantized local observation statistics to the fusion center (FC), while others report their local decisions. The FC makes the final decision by carrying out hybrid combination. We derive closed-form expressions of throughput and detection performance as a function of the number of SUs which report local observation statistics. The simulation and numerical results show that the hybrid combination scheme can achieve better throughput performance than the hard combination scheme and the soft combination scheme.
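
    A toy numerical model of such a hybrid fusion rule is sketched below; the energy-detector model, thresholds, and weighting are assumptions made only for illustration, not the paper's closed-form analysis:

```python
# Minimal sketch (toy model, not the paper's scheme): a fusion center combining
# soft energy statistics from some users with hard decisions from the rest.
import numpy as np

rng = np.random.default_rng(0)
n_soft, n_hard, n_samples = 3, 5, 50
signal_present = True

def energy_statistic(snr_db):
    """Energy-detector output for one secondary user over n_samples samples."""
    snr = 10 ** (snr_db / 10) if signal_present else 0.0
    samples = rng.normal(0, 1, n_samples) + rng.normal(0, np.sqrt(snr), n_samples)
    return np.mean(samples ** 2)

soft_stats = np.array([energy_statistic(-5.0) for _ in range(n_soft)])
hard_decisions = np.array([energy_statistic(-5.0) > 1.3 for _ in range(n_hard)])

# Hybrid rule: average the soft statistics, add a vote-based contribution from the
# hard reports, then compare against a global threshold (all values illustrative).
fused = soft_stats.mean() + 0.2 * hard_decisions.mean()
print("Fused statistic:", round(fused, 3), "-> decide", "present" if fused > 1.4 else "absent")
```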

  19. Protein secondary structure prediction using a small training set (compact model) combined with a Complex-valued neural network approach.

    Science.gov (United States)

    Rashid, Shamima; Saraswathi, Saras; Kloczkowski, Andrzej; Sundaram, Suresh; Kolinski, Andrzej

    2016-09-13

    Protein secondary structure prediction (SSP) has been an area of intense research interest. Despite advances in recent methods conducted on large datasets, the estimated upper limit accuracy is yet to be reached. Since the predictions of SSP methods are applied as input to higher-level structure prediction pipelines, even small errors may have large perturbations in final models. Previous works relied on cross validation as an estimate of classifier accuracy. However, training on large numbers of protein chains compromises the classifier ability to generalize to new sequences. This prompts a novel approach to training and an investigation into the possible structural factors that lead to poor predictions. Here, a small group of 55 proteins termed the compact model is selected from the CB513 dataset using a heuristics-based approach. In a prior work, all sequences were represented as probability matrices of residues adopting each of Helix, Sheet and Coil states, based on energy calculations using the C-Alpha, C-Beta, Side-chain (CABS) algorithm. The functional relationship between the conformational energies computed with CABS force-field and residue states is approximated using a classifier termed the Fully Complex-valued Relaxation Network (FCRN). The FCRN is trained with the compact model proteins. The performance of the compact model is compared with traditional cross-validated accuracies and blind-tested on a dataset of G Switch proteins, obtaining accuracies of ∼81 %. The model demonstrates better results when compared to several techniques in the literature. A comparative case study of the worst performing chain identifies hydrogen bond contacts that lead to Coil ⇔ Sheet misclassifications. Overall, mispredicted Coil residues have a higher propensity to participate in backbone hydrogen bonding than correctly predicted Coils. The implications of these findings are: (i) the choice of training proteins is important in preserving the generalization of a

  20. Performance of classification confidence measures in dynamic classifier systems

    Czech Academy of Sciences Publication Activity Database

    Štefka, D.; Holeňa, Martin

    2013-01-01

    Roč. 23, č. 4 (2013), s. 299-319 ISSN 1210-0552 R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : classifier combining * dynamic classifier systems * classification confidence Subject RIV: IN - Informatics, Computer Science Impact factor: 0.412, year: 2013

  1. The XCNN flow meter - a combined cross-correlation and neural network model

    International Nuclear Information System (INIS)

    Roverso, Davide

    2004-05-01

    In this report we propose the XCNN flow meter model, which consists of an integration of a cross-correlator (XC) of pressure measurements and an ensemble of neural network (NN) estimators. Since pressure information does not only travel with the fluid, like for example particles, bubbles, eddies and, to a large extent, temperature, but also through the fluid, the transit time of a pressure disturbance estimated by cross-correlation needs to be corrected to take into account the propagation velocity of pressure differentials in the fluid. This correction is performed by the neural network models, which in this case are simple single-input single-output three-layer feed-forward neural networks. Instead of a single neural network, an ensemble is used to reduce the variance of the estimate. The proposed method involves several stages where pressure transmitter data is first filtered, then fed to the cross-correlator, whose result is interpolated and filtered again before being fed to the ensemble of neural networks, which produce the final flow estimate. An average accuracy of 0.29% (with a standard deviation of 0.18) relative to a reference ultrasonic meter has been obtained on experimental measurements performed at Tecnatom s.a. This report marks the conclusion of the Virtual Sensors for Feedwater Flow Measurement project at the HRP, which ran in the 2001-2003 period. (Author)
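
    The cross-correlation stage of such a pipeline can be illustrated as below; the sampling rate, delay, and noise levels are invented for the example, and the neural-network correction stage is omitted:

```python
# Minimal sketch (illustrative, not the XCNN implementation): estimating the transit
# time between two pressure sensors by cross-correlation, the quantity the neural
# networks would then correct for pressure-wave propagation effects.
import numpy as np

fs = 1000.0                         # sampling rate in Hz (assumed)
true_delay = 0.045                  # seconds between upstream and downstream sensors
t = np.arange(0, 2.0, 1 / fs)

rng = np.random.default_rng(1)
disturbance = rng.normal(0, 1, t.size)
upstream = disturbance + 0.1 * rng.normal(0, 1, t.size)
downstream = np.roll(disturbance, int(true_delay * fs)) + 0.1 * rng.normal(0, 1, t.size)

# Peak of the cross-correlation gives the lag (in samples) between the two signals.
xcorr = np.correlate(downstream - downstream.mean(), upstream - upstream.mean(), mode="full")
lag = np.argmax(xcorr) - (t.size - 1)
print("Estimated transit time: %.3f s (true %.3f s)" % (lag / fs, true_delay))
```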

  2. Is Congenital Amusia a Disconnection Syndrome? A Study Combining Tract- and Network-Based Analysis

    Directory of Open Access Journals (Sweden)

    Jieqiong Wang

    2017-09-01

    Previous studies on congenital amusia mainly focused on the impaired fronto-temporal pathway. It is possible that neural pathways of amusia patients on a larger scale are affected. In this study, we investigated changes in structural connections by applying both tract-based and network-based analysis to DTI data of 12 subjects with congenital amusia and 20 demographic-matched normal controls. TBSS (tract-based spatial statistics) was used to detect microstructural changes. The results showed that amusics had higher diffusivity indices in the corpus callosum, the right inferior/superior longitudinal fasciculus, and the right inferior frontal-occipital fasciculus (IFOF). The axial diffusivity values of the right IFOF were negatively correlated with musical scores in the amusia group. Network-based analysis showed that the efficiency of the brain network was reduced in amusics. The impairments of WM tracts were also found to be correlated with reduced network efficiency in amusics. This suggests that impaired WM tracts may lead to the reduced network efficiency seen in amusics. Our findings suggest that congenital amusia is a disconnection syndrome.

  3. Is Congenital Amusia a Disconnection Syndrome? A Study Combining Tract- and Network-Based Analysis.

    Science.gov (United States)

    Wang, Jieqiong; Zhang, Caicai; Wan, Shibiao; Peng, Gang

    2017-01-01

    Previous studies on congenital amusia mainly focused on the impaired fronto-temporal pathway. It is possible that neural pathways of amusia patients on a larger scale are affected. In this study, we investigated changes in structural connections by applying both tract-based and network-based analysis to DTI data of 12 subjects with congenital amusia and 20 demographic-matched normal controls. TBSS (tract-based spatial statistics) was used to detect microstructural changes. The results showed that amusics had higher diffusivity indices in the corpus callosum, the right inferior/superior longitudinal fasciculus, and the right inferior frontal-occipital fasciculus (IFOF). The axial diffusivity values of the right IFOF were negatively correlated with musical scores in the amusia group. Network-based analysis showed that the efficiency of the brain network was reduced in amusics. The impairments of WM tracts were also found to be correlated with reduced network efficiency in amusics. This suggests that impaired WM tracts may lead to the reduced network efficiency seen in amusics. Our findings suggest that congenital amusia is a disconnection syndrome.

  4. Combining SDM-Based Circuit Switching with Packet Switching in a Router for On-Chip Networks

    Directory of Open Access Journals (Sweden)

    Angelo Kuti Lusala

    2012-01-01

    A hybrid router architecture for Networks-on-Chip “NoC” is presented; it combines Spatial Division Multiplexing “SDM” based circuit switching and packet switching in order to efficiently and separately handle both streaming and best-effort traffic generated in real-time applications. Furthermore, the SDM technique is combined with the Time Division Multiplexing “TDM” technique in the circuit-switching part in order to increase path diversity, thus improving throughput while sharing communication resources among multiple connections. Combining these two techniques allows mitigating the poor resource usage inherent to circuit switching. In this way, Quality of Service “QoS” is easily provided for the streaming traffic through the circuit-switched sub-router while the packet-switched sub-router handles best-effort traffic. The proposed hybrid router architectures were synthesized, placed and routed on an FPGA. Results show that a practicable Network-on-Chip “NoC” can be built using the proposed router architectures. 7 × 7 mesh NoCs were simulated in SystemC. Simulation results show that the probability of establishing paths through the NoC increases with the number of sub-channels and has its highest value when combining SDM with TDM, thereby significantly reducing contention in the NoC.

  5. Neural Networks for Predicting Conditional Probability Densities: Improved Training Scheme Combining EM and RVFL.

    Science.gov (United States)

    Taylor, John G.; Husmeier, Dirk

    1998-01-01

    Predicting conditional probability densities with neural networks requires complex (at least two-hidden-layer) architectures, which normally leads to rather long training times. By adopting the RVFL concept and constraining a subset of the parameters to randomly chosen initial values (such that the EM-algorithm can be applied), the training process can be accelerated by about two orders of magnitude. This allows training of a whole ensemble of networks at the same computational costs as would be required otherwise for training a single model. The simulations performed suggest that in this way a significant improvement of the generalization performance can be achieved. Copyright 1997 Elsevier Science Ltd.
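
    The RVFL idea of freezing randomly chosen hidden parameters and fitting only the output layer can be sketched as follows; this is a generic regression toy, not the paper's conditional-density model or its EM training scheme:

```python
# Minimal sketch (generic RVFL idea, not the paper's EM scheme): hidden weights are
# fixed at random values and only the output layer is fitted, here by ridge-
# regularized least squares.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=400)

n_hidden = 100
W = rng.normal(0, 1, size=(X.shape[1], n_hidden))   # random hidden weights, never trained
b = rng.normal(0, 1, size=n_hidden)
H = np.tanh(X @ W + b)

# RVFL also feeds the raw inputs directly to the output layer (direct links).
D = np.hstack([H, X, np.ones((X.shape[0], 1))])
beta = np.linalg.solve(D.T @ D + 1e-3 * np.eye(D.shape[1]), D.T @ y)

y_hat = D @ beta
print("Training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```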

  6. Fuzzy Integral and Cuckoo Search Based Classifier Fusion for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    AYDIN, I.

    2018-02-01

    Human activity recognition is an important issue for sports analysis and health monitoring. The early recognition of human actions is used in areas such as detection of criminal activities, fall detection, and action recognition in rehabilitation centers. In particular, the detection of falls in elderly people is very important for rapid intervention. Mobile phones can be used for action recognition with their built-in accelerometer sensor. In this study, a new combined method based on the fuzzy integral and cuckoo search is proposed for classifying human actions. The signals are acquired from the three axes of a mobile phone's acceleration sensor, and features are extracted by applying signal processing methods. Our approach utilizes linear discriminant analysis (LDA), support vector machine (SVM), and neural network (NN) techniques and aggregates their outputs by using the fuzzy integral. The cuckoo search method adjusts the parameters for assignment of optimal confidence levels of the classifiers. The experimental results show that our model provides better performance than the individual classifiers. In addition, appropriate selection of the confidence levels improves the performance of the combined classifiers.
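
    A rough stand-in for this kind of decision-level fusion is shown below: three classifiers' probability outputs are combined with fixed confidence weights. This is a plain weighted sum rather than the paper's fuzzy integral, with hand-set weights in place of cuckoo-search tuning.

```python
# Minimal sketch (simplified fusion, not the paper's fuzzy-integral method): LDA, SVM
# and an MLP are combined by weighting their class-probability outputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = [LinearDiscriminantAnalysis(),
          SVC(probability=True, random_state=0),
          MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)]
weights = np.array([0.3, 0.4, 0.3])                  # confidence levels (illustrative)

probas = [m.fit(X_tr, y_tr).predict_proba(X_te) for m in models]
fused = sum(w * p for w, p in zip(weights, probas))
print("Fused accuracy:", accuracy_score(y_te, fused.argmax(axis=1)))
```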

  7. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  8. Identification of Linkages between EDCs in Personal Care Products and Breast Cancer through Data Integration Combined with Gene Network Analysis.

    Science.gov (United States)

    Jeong, Hyeri; Kim, Jongwoon; Kim, Youngjun

    2017-09-30

    Approximately 1000 chemicals have been reported to possibly have endocrine disrupting effects, some of which are used in consumer products, such as personal care products (PCPs) and cosmetics. We conducted data integration combined with gene network analysis to: (i) identify causal molecular mechanisms between endocrine disrupting chemicals (EDCs) used in PCPs and breast cancer; and (ii) screen candidate EDCs associated with breast cancer. Among EDCs used in PCPs, four EDCs having correlation with breast cancer were selected, and we curated 27 common interacting genes between those EDCs and breast cancer to perform the gene network analysis. Based on the gene network analysis, ESR1, TP53, NCOA1, AKT1, and BCL6 were found to be key genes to demonstrate the molecular mechanisms of EDCs in the development of breast cancer. Using GeneMANIA, we additionally predicted 20 genes which could interact with the 27 common genes. In total, 47 genes combining the common and predicted genes were functionally grouped with the gene ontology and KEGG pathway terms. With those genes, we finally screened candidate EDCs for their potential to increase breast cancer risk. This study highlights that our approach can provide insights to understand mechanisms of breast cancer and identify potential EDCs which are in association with breast cancer.

  9. Energy Efficiency of Ultra-Low-Power Bicycle Wireless Sensor Networks Based on a Combination of Power Reduction Techniques

    Directory of Open Access Journals (Sweden)

    Sadik Kamel Gharghan

    2016-01-01

    In most wireless sensor network (WSN) applications, the sensor nodes (SNs) are battery powered, and the amount of energy consumed by the nodes in the network determines the network lifespan. For future Internet of Things (IoT) applications, reducing the energy consumption of SNs has become mandatory. In this paper, an ultra-low-power nRF24L01 wireless protocol is considered for a bicycle WSN. The power consumption of the mobile node on the cycle track was modified by combining an adjustable data rate, sleep/wake scheduling, and transmission power control (TPC) based on two algorithms. The first algorithm was a TPC scheme based on distance estimation, which adopted a novel hybrid particle swarm optimization-artificial neural network (PSO-ANN) using the received signal strength indicator (RSSI), while the second was a novel TPC scheme based on an accelerometer measuring the inclination angle of the bicycle on the cycle track. Based on the second algorithm, the power consumption of the mobile and master nodes can be improved compared with the first algorithm and with a constant transmitted power level. In addition, an analytical model is derived to correlate the power consumption and data rate of the mobile node. The results indicate that the power savings based on the two algorithms outperformed the conventional operation (i.e., without a power reduction algorithm) by 78%.
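
    The RSSI-to-distance step behind the first algorithm can be illustrated with the textbook log-distance path-loss model; the reference RSSI, path-loss exponent, and per-level ranges below are assumptions for the example, not values from the paper:

```python
# Minimal sketch (textbook log-distance model, not the paper's PSO-ANN estimator):
# the distance implied by an RSSI reading, which a TPC loop could then use to pick
# the lowest transmit power that still reaches the master node.

def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exponent=2.7):
    """Invert the log-distance path loss model: RSSI = RSSI(1 m) - 10 n log10(d)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))

def choose_tx_power(distance_m, levels_dbm=(-18, -12, -6, 0)):
    """Pick the smallest nRF24L01 power level whose assumed range covers the distance."""
    ranges_m = {-18: 5, -12: 12, -6: 25, 0: 60}      # assumed per-level ranges (meters)
    for level in levels_dbm:
        if ranges_m[level] >= distance_m:
            return level
    return levels_dbm[-1]

for rssi in (-55.0, -70.0, -85.0):
    d = rssi_to_distance(rssi)
    print(f"RSSI {rssi} dBm -> ~{d:.1f} m -> TX power {choose_tx_power(d)} dBm")
```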

  10. Combined neural network/Phillips–Tikhonov approach to aerosol retrievals over land from the NASA Research Scanning Polarimeter

    Directory of Open Access Journals (Sweden)

    A. Di Noia

    2017-11-01

    In this paper, an algorithm for the retrieval of aerosol and land surface properties from airborne spectropolarimetric measurements – combining neural networks and an iterative scheme based on Phillips–Tikhonov regularization – is described. The algorithm – which is an extension of a scheme previously designed for ground-based retrievals – is applied to measurements from the Research Scanning Polarimeter (RSP) on board the NASA ER-2 aircraft. A neural network, trained on a large data set of synthetic measurements, is applied to perform aerosol retrievals from real RSP data, and the neural network retrievals are subsequently used as a first guess for the Phillips–Tikhonov retrieval. The resulting algorithm appears capable of accurately retrieving aerosol optical thickness, fine-mode effective radius and aerosol layer height from RSP data. Among the advantages of using a neural network as initial guess for an iterative algorithm are a decrease in processing time and an increase in the number of converging retrievals.
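
    The idea of regularizing an inversion towards a neural-network first guess can be sketched on a linear toy problem as below; the Jacobian, noise level, and regularization strength are invented for the example, and the scheme is far simpler than the RSP algorithm:

```python
# Minimal sketch (linearized toy problem, not the RSP retrieval): a Phillips-Tikhonov
# step that regularizes towards a first guess supplied by an assumed NN retrieval.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_meas = 5, 40
K = rng.normal(size=(n_meas, n_state))               # Jacobian of the forward model
x_true = np.array([0.2, 0.15, 1.0, 2.5, 0.8])        # e.g. AOT, radius, layer height, ...
y = K @ x_true + 0.01 * rng.normal(size=n_meas)      # synthetic measurement

x_nn = x_true + 0.2 * rng.normal(size=n_state)       # stand-in for the NN first guess
gamma = 0.1                                           # regularization strength (assumed)

# Minimize ||y - K x||^2 + gamma^2 ||x - x_nn||^2, i.e. regularize towards the guess.
A = K.T @ K + gamma ** 2 * np.eye(n_state)
x_hat = np.linalg.solve(A, K.T @ y + gamma ** 2 * x_nn)

print(f"NN first guess error: {np.linalg.norm(x_nn - x_true):.3f}")
print(f"Regularized retrieval error: {np.linalg.norm(x_hat - x_true):.3f}")
```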

  11. NEpiC: a network-assisted algorithm for epigenetic studies using mean and variance combined signals.

    Science.gov (United States)

    Ruan, Peifeng; Shen, Jing; Santella, Regina M; Zhou, Shuigeng; Wang, Shuang

    2016-09-19

    DNA methylation plays an important role in many biological processes. Existing epigenome-wide association studies (EWAS) have successfully identified aberrantly methylated genes in many diseases and disorders, with most studies focusing on analysing methylation sites one at a time. Incorporating prior biological information such as biological networks has proven powerful in identifying disease-associated genes in both gene expression studies and genome-wide association studies (GWAS), but has been understudied in EWAS. Although recent studies have noted differences in methylation variation between groups, only a few existing methods consider variance signals in DNA methylation studies. Here, we present a network-assisted algorithm, NEpiC, that combines both mean and variance signals in searching for differentially methylated sub-networks using the protein-protein interaction (PPI) network. In simulation studies, we demonstrate the power gain from using both the prior biological information and variance signals compared to using either alone or neither. Applications to several DNA methylation datasets from the Cancer Genome Atlas (TCGA) project and DNA methylation data on hepatocellular carcinoma (HCC) from the Columbia University Medical Center (CUMC) suggest that the proposed NEpiC algorithm identifies more cancer-related genes and generates better replication results. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Classifying Cereal Data (Earlier Methods)

    Science.gov (United States)

    The DSQ includes questions about cereal intake and allows respondents up to two responses on which cereals they consume. We classified each cereal reported first by hot or cold, and then along four dimensions: density of added sugars, whole grains, fiber, and calcium.

  13. Knowledge Uncertainty and Composed Classifier

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana; Ocelíková, E.

    2007-01-01

    Roč. 1, č. 2 (2007), s. 101-105 ISSN 1998-0140 Institutional research plan: CEZ:AV0Z10750506 Keywords : Boosting architecture * contextual modelling * composed classifier * knowledge management * knowledge * uncertainty Subject RIV: IN - Informatics, Computer Science

  14. Classifying polynomials and identity testing

    Indian Academy of Sciences (India)

    hard to compute [3,4]! Therefore, the solution to. PIT problem has a key role in our attempt to com- putationally classify polynomials. In this article, we will focus on this connection between PIT and polynomial classification. We now formally define arithmetic circuits and the identity testing problem. 1.1 Problem definition.

  15. Correlation Dimension-Based Classifier

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    2014-01-01

    Roč. 44, č. 12 (2014), s. 2253-2263 ISSN 2168-2267 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords : classifier * multidimensional data * correlation dimension * scaling exponent * polynomial expansion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014

  16. ANALYSE THE PERFORMANCE OF ENSEMBLE CLASSIFIERS USING SAMPLING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    M. Balamurugan

    2016-07-01

    In ensemble classification, the combination of multiple prediction models is important for making progress on a variety of difficult prediction problems. Ensembles of classifiers have proved capable of achieving higher accuracy than a single classifier. Even so, there is still a need to improve their performance. There are many possible ways to increase the performance of ensemble classifiers. One of them is sampling, which plays a major role in improving the quality of an ensemble classifier, since it helps reduce the bias in the ensemble's input data set. Sampling is the process of extracting a subset of samples from the original dataset. In this research work, an analysis of sampling techniques for ensemble classifiers is carried out. In ensemble classifiers, one of the probability-based sampling techniques is typically used: samples are gathered in a process that gives all individuals in the population an equal chance of selection, so that sampling bias is removed. In this paper, we analyse the performance of ensemble classifiers using various sampling techniques and list their drawbacks.
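
    One of the standard probability-based sampling schemes, bootstrap resampling for bagging, can be sketched as follows (synthetic data and an arbitrary ensemble size, for illustration only):

```python
# Minimal sketch (standard bootstrap bagging, one of the sampling schemes surveyed):
# each base classifier is trained on a resampled version of the training data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=800, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
votes = np.zeros((X_te.shape[0], 2))
for _ in range(25):                                        # 25 bootstrap replicates
    idx = rng.integers(0, X_tr.shape[0], X_tr.shape[0])    # sample with replacement
    tree = DecisionTreeClassifier(random_state=0).fit(X_tr[idx], y_tr[idx])
    preds = tree.predict(X_te)
    votes[np.arange(X_te.shape[0]), preds] += 1            # majority voting

print("Bagged accuracy:", accuracy_score(y_te, votes.argmax(axis=1)))
```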

  17. A Combination of Central Pattern Generator-based and Reflex-based Neural Networks for Dynamic, Adaptive, Robust Bipedal Locomotion

    DEFF Research Database (Denmark)

    Di Canio, Giuliano; Larsen, Jørgen Christian; Wörgötter, Florentin

    2016-01-01

    Robotic systems inspired by humans have long sparked the curiosity of engineers and scientists. Among many challenges, human locomotion is a particularly difficult one, where a number of different systems need to interact in order to generate a correct and balanced pattern. To simulate the interaction of these systems, implementations with reflex-based or central pattern generator (CPG)-based controllers have been tested on bipedal robot systems. In this paper we combine the two controller types into a controller that works with both reflex and CPG signals. We use a reflex-based neural network to generate basic walking patterns of a dynamic bipedal walking robot (DACBOT) and then a CPG-based neural network to ensure robust walking behavior.

  18. Identification of T1D susceptibility genes within the MHC region by combining protein interaction networks and SNP genotyping data

    DEFF Research Database (Denmark)

    Brorsson, C.; Hansen, Niclas Tue; Hansen, Kasper Lage

    2009-01-01

    To develop novel methods for identifying new genes that contribute to the risk of developing type 1 diabetes within the Major Histocompatibility Complex (MHC) region on chromosome 6, independently of the known linkage disequilibrium (LD) between human leucocyte antigen (HLA)-DRB1, -DQA1, -DQB1 genes. We have developed a novel method that combines single nucleotide polymorphism (SNP) genotyping data with protein-protein interaction (ppi) networks to identify disease-associated network modules enriched for proteins encoded from the MHC region. Approximately 2500 SNPs located in the 4 Mb MHC ... are well known in the pathogenesis of T1D, but the modules also contain additional candidates that have been implicated in beta-cell development and diabetic complications. The extensive LD within the MHC region makes it important to develop new methods for analysing genotyping data for identification

  19. Combined exposure to simulated microgravity and acute or chronic radiation reduces neuronal network integrity and cell survival

    Science.gov (United States)

    Benotmane, Rafi

    During orbital or interplanetary space flights, astronauts are exposed to cosmic radiations and microgravity. This study aimed at assessing the effect of these combined conditions on neuronal network density, cell morphology and survival, using well-connected mouse cortical neuron cultures. To this end, neurons were exposed to acute low and high doses of low LET (X-rays) radiation or to chronic low dose-rate of high LET neutron irradiation (Californium-252), under the simulated microgravity generated by the Random Positioning Machine (RPM, Dutch space). High content image analysis of cortical neurons positive for the neuronal marker βIII-tubulin unveiled a reduced neuronal network integrity and connectivity, and an altered cell morphology after exposure to acute/chronic radiation or to simulated microgravity. Additionally, in both conditions, a defect in DNA-repair efficiency was revealed by an increased number of γH2AX-positive foci, as well as an increased number of Annexin V-positive apoptotic neurons. Of interest, when combining both simulated space conditions, we noted a synergistic effect on neuronal network density, neuronal morphology, cell survival and DNA repair. Furthermore, these observations are in agreement with preliminary gene expression data, revealing modulations in cytoskeletal and apoptosis-related genes after exposure to simulated microgravity. In conclusion, the observed in vitro changes in neuronal network integrity and cell survival induced by space simulated conditions provide us with mechanistic understanding to evaluate health risks and the development of countermeasures to prevent neurological disorders in astronauts over long-term space travels. Acknowledgements: This work is supported partly by the EU-FP7 projects CEREBRAD (n° 295552)

  20. Optical alignment procedure utilizing neural networks combined with Shack-Hartmann wavefront sensor

    Science.gov (United States)

    Adil, Fatime Zehra; Konukseven, Erhan İlhan; Balkan, Tuna; Adil, Ömer Faruk

    2017-05-01

    In the design of pilot helmets with night vision capability, to not limit or block the sight of the pilot, a transparent visor is used. The reflected image from the coated part of the visor must coincide with the physical human sight image seen through the nonreflecting regions of the visor. This makes the alignment of the visor halves critical. In essence, this is an alignment problem of two optical parts that are assembled together during the manufacturing process. A Shack-Hartmann wavefront sensor is commonly used for the determination of the misalignments through wavefront measurements, which are quantified in terms of the Zernike polynomials. Although the Zernike polynomials provide very useful feedback about the misalignments, the corrective actions are basically ad hoc. This stems from the fact that there exists no easy inverse relation between the misalignment measurements and the physical causes of the misalignments. This study aims to construct this inverse relation by making use of the expressive power of neural networks in such complex relations. For this purpose, a neural network is designed and trained in MATLAB® regarding which types of misalignments result in which wavefront measurements, quantitatively given by Zernike polynomials. This way, manual and iterative alignment processes relying on trial and error will be replaced by the trained guesses of a neural network, so the alignment process is reduced to applying the counteractions based on the misalignment causes. Such a training requires data containing misalignment and measurement sets in fine detail, which is hard to obtain manually on a physical setup. For that reason, the optical setup is completely modeled in Zemax® software, and Zernike polynomials are generated for misalignments applied in small steps. The performance of the neural network was tested on the actual physical setup and found to be promising.
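
    The inverse mapping the study trains can be mimicked on synthetic data as below; the forward sensitivity matrix, noise model, and network size are invented stand-ins for the Zemax-generated training set:

```python
# Minimal sketch (synthetic stand-in for the simulated training data): a small
# regression network learns the inverse map from Zernike coefficients back to the
# misalignment parameters that produced them.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_misalign, n_zernike = 2000, 3, 10

misalignments = rng.uniform(-1, 1, size=(n_samples, n_misalign))   # e.g. tilt x, tilt y, decenter
M = rng.normal(size=(n_misalign, n_zernike))                        # assumed forward sensitivity
zernike = (misalignments @ M
           + 0.05 * misalignments ** 2 @ abs(M)                     # mild nonlinearity
           + 0.01 * rng.normal(size=(n_samples, n_zernike)))        # measurement noise

Z_tr, Z_te, m_tr, m_te = train_test_split(zernike, misalignments, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0).fit(Z_tr, m_tr)

print(f"Mean absolute misalignment error: {np.abs(net.predict(Z_te) - m_te).mean():.4f}")
```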

  1. Combining evolutionary game theory and network theory to analyze human cooperation patterns

    International Nuclear Information System (INIS)

    Scatà, Marialisa; Di Stefano, Alessandro; La Corte, Aurelio; Liò, Pietro; Catania, Emanuele; Guardo, Ermanno; Pagano, Salvatore

    2016-01-01

    Highlights: • We investigate the evolutionary dynamics of human cooperation in a social network. • We introduce the concepts of “Critical Mass”, centrality measure and homophily. • The emergence of cooperation is affected by the spatial choice of the “Critical Mass”. • Our findings show that homophily speeds up the convergence towards cooperation. • Centrality and “Critical Mass” spatial choice partially offset the impact of homophily. - Abstract: As natural systems continuously evolve, the human cooperation dilemma represents an increasingly challenging question. Humans cooperate in natural and social systems, but how this happens and which mechanisms govern the emergence of cooperation remain an open and fascinating issue. In this work, we investigate the evolution of cooperation through the analysis of the evolutionary dynamics of behaviours within the social network, where nodes can choose to cooperate or defect following the classical social dilemmas represented by the Prisoner’s Dilemma and Snowdrift games. To this aim, we introduce a sociological concept and statistical estimator, “Critical Mass”, to detect the minimum initial seed of cooperators able to trigger the diffusion process, and a centrality measure to select it within the social network. Selecting different spatial configurations of the Critical Mass nodes, we highlight how the emergence of cooperation can be influenced by this spatial choice of the initial core in the network. Moreover, we aim to shed light on how the concept of homophily, a social shaping factor by which “birds of a feather flock together”, can affect the evolutionary process. Our findings show that homophily speeds up the diffusion process and makes the convergence towards human cooperation quicker, while the centrality measure, and thus the Critical Mass selection, plays a key role in the evolution, showing how the spatial configurations can create some hidden patterns, partially

  2. TraitMap: an XML-based genetic-map database combining multigenic loci and biomolecular networks.

    Science.gov (United States)

    Heida, Naohiko; Hasegawa, Yoshikazu; Mochizuki, Yoshiki; Hirosawa, Katsura; Konagaya, Akihiko; Toyoda, Tetsuro

    2004-08-04

    Most ordinary traits are well described by multiple measurable parameters. Thus, in the course of elucidating the genes responsible for a given trait, it is necessary to conduct and integrate the genetic mapping of each parameter. However, the integration of multiple mapping results from different publications is prevented by the fact that they are conventionally published and accumulated in printed forms or graphics which are difficult for computers to reuse for further analyses. We have defined an XML-based schema as a container of genetic mapping results, and created a database named TraitMap containing curator-checked data records based on published papers of mapping results in Homo sapiens, Mus musculus, and Arabidopsis thaliana. TraitMap is the first database of mapping charts in genetics, and is integrated in a web-based retrieval framework termed the Genome Phenome Superhighway (GPS) system, where it is possible to combine and visualize multiple mapping records in a two-dimensional display. Since most traits are regulated by multiple genes, the system associates every combination of genetic loci to biomolecular networks, and thus helps us to estimate molecular-level candidate networks responsible for a given trait. It is demonstrated that a combined analysis of two diabetes-related traits (susceptibility to insulin resistance and non-HDL cholesterol level) suggests that molecular-level relationships, such as the interaction among leptin receptor (Lepr), peroxisome proliferator-activated receptor-gamma (Pparg) and insulin receptor substrate 1 (Irs1), are candidate causal networks affecting the traits in a multigenic manner. The TraitMap database and GPS are accessible at http://omicspace.riken.jp/gps/

  3. An expert-based approach to forest road network planning by combining Delphi and spatial multi-criteria evaluation.

    Science.gov (United States)

    Hayati, Elyas; Majnounian, Baris; Abdi, Ehsan; Sessions, John; Makhdoum, Majid

    2013-02-01

    Changes in forest landscapes resulting from road construction have increased remarkably in the last few years. On the other hand, the sustainable management of forest resources can only be achieved through a well-organized road network. In order to minimize the environmental impacts of forest roads, forest road managers must design the road network both efficiently and in an environmentally sound manner. Efficient planning methodologies can assist forest road managers in considering the technical, economic, and environmental factors that affect forest road planning. This paper describes a three-stage methodology using the Delphi method for selecting the important criteria, the Analytic Hierarchy Process for obtaining the relative importance of the criteria, and finally, a spatial multi-criteria evaluation in a geographic information system (GIS) environment for identifying the lowest-impact road network alternative. Results of the Delphi method revealed that ground slope, lithology, distance from stream network, distance from faults, landslide susceptibility, erosion susceptibility, geology, and soil texture are the most important criteria for forest road planning in the study area. The suitability map for road planning was then obtained by combining the fuzzy map layers of these criteria with respect to their weights. Nine road network alternatives were designed using PEGGER, an ArcView GIS extension, and finally, their values were extracted from the suitability map. Results showed that the methodology was useful for identifying roads that met environmental and cost considerations. Based on this work, we suggest that future forest road planning using multi-criteria evaluation and decision making be considered in other regions, and that the road planning criteria identified in this study may be useful there.

  4. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles J.; Faraj, Daniel A.; Inglett, Todd A.; Ratterman, Joseph D.

    2018-01-30

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.

  5. Downregulation of GNA13-ERK network in prefrontal cortex of schizophrenia brain identified by combined focused and targeted quantitative proteomics.

    Science.gov (United States)

    Hirayama-Kurogi, Mio; Takizawa, Yohei; Kunii, Yasuto; Matsumoto, Junya; Wada, Akira; Hino, Mizuki; Akatsu, Hiroyasu; Hashizume, Yoshio; Yamamoto, Sakon; Kondo, Takeshi; Ito, Shingo; Tachikawa, Masanori; Niwa, Shin-Ichi; Yabe, Hirooki; Terasaki, Tetsuya; Setou, Mitsutoshi; Ohtsuki, Sumio

    2017-03-31

    Schizophrenia is a disabling mental illness associated with dysfunction of the prefrontal cortex, which affects cognition and emotion. The purpose of the present study was to identify altered molecular networks in the prefrontal cortex of schizophrenia patients by comparing protein expression levels in autopsied brains of patients and controls, using a combination of targeted and focused quantitative proteomics. We selected 125 molecules possibly related to schizophrenia for quantification by knowledge-based targeted proteomics. Among the quantified molecules, GRIK4 and MAO-B were significantly decreased in plasma membrane and cytosolic fractions, respectively, of prefrontal cortex. Focused quantitative proteomics identified 15 increased and 39 decreased proteins. Network analysis identified "GNA13-ERK1-eIF4G2 signaling" as a downregulated network, and proteins involved in this network were significantly decreased. Furthermore, searching downstream of eIF4G2 revealed that eIF4A1/2 and CYFIP1 were decreased, suggesting that downregulation of the network suppresses expression of CYFIP1, which regulates actin remodeling and is involved in axon outgrowth and spine formation. Downregulation of this signaling seems likely to impair axon formation and synapse plasticity of neuronal cells, and could be associated with development of cognitive impairment in the pathology of schizophrenia. The present study compared the proteome of the prefrontal cortex between schizophrenia patients and healthy controls by means of targeted proteomics and global quantitative proteomics. Targeted proteomics revealed that GRIK4 and MAOB were significantly decreased among 125 putatively schizophrenia-related proteins in prefrontal cortex of schizophrenia patients. Global quantitative proteomics identified 54 differentially expressed proteins in schizophrenia brains. The protein profile indicates attenuation of "GNA13-ERK signaling" in schizophrenia brain. In particular, EIF4G2 and CYFIP1

  6. Predicting targeted drug combinations based on Pareto optimal patterns of coexpression network connectivity.

    Science.gov (United States)

    Penrod, Nadia M; Greene, Casey S; Moore, Jason H

    2014-01-01

    Molecularly targeted drugs promise a safer and more effective treatment modality than conventional chemotherapy for cancer patients. However, tumors are dynamic systems that readily adapt to these agents activating alternative survival pathways as they evolve resistant phenotypes. Combination therapies can overcome resistance but finding the optimal combinations efficiently presents a formidable challenge. Here we introduce a new paradigm for the design of combination therapy treatment strategies that exploits the tumor adaptive process to identify context-dependent essential genes as druggable targets. We have developed a framework to mine high-throughput transcriptomic data, based on differential coexpression and Pareto optimization, to investigate drug-induced tumor adaptation. We use this approach to identify tumor-essential genes as druggable candidates. We apply our method to a set of ER(+) breast tumor samples, collected before (n = 58) and after (n = 60) neoadjuvant treatment with the aromatase inhibitor letrozole, to prioritize genes as targets for combination therapy with letrozole treatment. We validate letrozole-induced tumor adaptation through coexpression and pathway analyses in an independent data set (n = 18). We find pervasive differential coexpression between the untreated and letrozole-treated tumor samples as evidence of letrozole-induced tumor adaptation. Based on patterns of coexpression, we identify ten genes as potential candidates for combination therapy with letrozole including EPCAM, a letrozole-induced essential gene and a target to which drugs have already been developed as cancer therapeutics. Through replication, we validate six letrozole-induced coexpression relationships and confirm the epithelial-to-mesenchymal transition as a process that is upregulated in the residual tumor samples following letrozole treatment. To derive the greatest benefit from molecularly targeted drugs it is critical to design combination

  7. Combined Geometric and Neural Network Approach to Generic Fault Diagnosis in Satellite Reaction Wheels

    DEFF Research Database (Denmark)

    Baldi, P.; Blanke, Mogens; Castaldi, P.

    2015-01-01

    This paper suggests a novel diagnosis scheme for detection, isolation and estimation of faults affecting satellite reaction wheels. Both spin rate measurements and actuation torque defects are dealt with. The proposed system consists of a fault detection and isolation module composed of a bank of residual filters organized in a generalized scheme, followed by a fault estimation module consisting of a bank of adaptive estimation filters. The residuals are decoupled from aerodynamic disturbances thanks to the Nonlinear Geometric Approach. The use of Radial Basis Function Neural Networks is shown ...

  8. Combining nonlinear dimensionality reduction with wavelet network to solve EEG inverse problem.

    Science.gov (United States)

    Wu, Qing; Shi, Lukui; Wu, Youxi; Xu, Guizhi; Li, Ying; Yan, Weili

    2006-01-01

    An integrated multi-method system to analyze the neuroelectric source parameters of the electroencephalography (EEG) signal is presented. In order to handle the large-scale, high-dimensional data efficiently and provide a real-time localizer for the EEG inverse problem, an improved isometric mapping algorithm is used to find low-dimensional manifolds from the high-dimensional recorded EEG. Then, based on the reduced-dimension data, a single-scaling radial-basis wavelet network module is employed to determine the parameters of different types of EEG source models. In our simulation experiments, satisfactory results are obtained.
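
    The reduce-then-regress structure of such a system can be sketched as follows; the synthetic "EEG", the kernel-ridge regressor (standing in for the wavelet network), and all parameters are illustrative assumptions:

```python
# Minimal sketch (generic pipeline, not the paper's improved isometric mapping or
# wavelet network): Isomap reduces the recordings to a low-dimensional manifold and
# a kernel-ridge regressor maps the embedding to a source parameter.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n_epochs, n_channels = 300, 64
source_depth = rng.uniform(2.0, 8.0, n_epochs)                 # hypothetical source parameter

# Synthetic "EEG" whose channel pattern depends nonlinearly on source depth.
lead = np.exp(-np.linspace(0, 3, n_channels)[None, :] * source_depth[:, None] / 5.0)
eeg = lead + 0.05 * rng.normal(size=(n_epochs, n_channels))

embedding = Isomap(n_neighbors=10, n_components=3).fit_transform(eeg)
model = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5).fit(embedding, source_depth)

pred = model.predict(embedding)
print(f"In-sample RMSE on source depth: {np.sqrt(np.mean((pred - source_depth) ** 2)):.3f}")
```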

  9. Combining Wired and Wireless Networks for a QoS-Aware Broadband Infrastructure

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Riaz, Muhammad Tahir; Knudsen, Thomas Phillip

    2004-01-01

    We show how integrated planning of wired and wireless infrastructures can be used to build a QoS-aware broadband infrastructure. The outset is a case study of the municipality of Hals, a rural community in Denmark, where the objective is to establish a broadband infrastructure reaching all households and businesses. Two factors are considered particularly important when designing an infrastructure: the chosen technology must allow for sufficient bandwidth, and the physical network structures must allow for independent paths between any pair of nodes. A final solution is obtained by using Fiber ...

  10. The spatial decision-supporting system combination of RBR & CBR based on artificial neural network and association rules

    Science.gov (United States)

    Tian, Yangge; Bian, Fuling

    2007-06-01

    Artificial intelligence technology should be incorporated into the geographic information system to build a spatial decision-supporting system (SDSS). The paper discusses the structure of the SDSS and, after comparing the characteristics of RBR and CBR, proposes the framework of a spatial decision system that combines RBR and CBR, drawing on the advantages of both. The paper also discusses CBR in agricultural spatial decisions, the application of artificial neural networks (ANN) in CBR, and the enrichment of the inference rule base with association rules. Finally, the design of the system is tested and verified with examples of crop adaptability evaluation.

  11. Comparison of strategies for combining dynamic linear models with artificial neural networks for detecting diarrhea in slaughter pigs

    DEFF Research Database (Denmark)

    Jensen, Dan Børge; Kristensen, Anders Ringgaard

    2016-01-01

    The drinking behavior of healthy pigs is known to follow predictable diurnal patterns, and these patterns are further known to change in relation to undesired events such as diarrhea. We therefore expect that automatic monitoring of slaughter pig drinking behavior, combined with machine learning, can provide early and automatic detection of diarrhea. To determine the best approach to achieve this goal, we compared 36 different strategies for combining a multivariate dynamic linear model (DLM) with an artificial neural network (ANN). We used data collected in 16 pens between November 2013 and December 2014 at a commercial Danish pig farm. The pen level water flow (liters/hour/pig) and drinking bouts frequency (bouts/hour/pig) were monitored. Staff registrations of diarrhea were the events of interest. Mean water flow and drinking bouts frequency were each modeled using three harmonic waves

  12. Evaluation of a Real-Time Control System for Combined Sewer Networks

    OpenAIRE

    WADA, Yasuhiko; OZAKI, Taira; MURAOKA, Motoi

    2007-01-01

    In this study, we evaluated the reduction of the combined sewer overflow (CSO) load achieved using real-time control (RTC) for a combined sewer system region where a storage basin had been constructed. Reduction of the load is especially high when the amount of rainfall is 10 mm. Moreover, based on an annual analysis, the BOD load was reduced by 18-26% and the overflow frequency by 14-29% using the RTC system. In addition, it was clarified that the effect of the reduction in cost of the...

  13. Combined, Independent Small Molecule Release and Shape Memory via Nanogel-Coated Thiourethane Polymer Networks.

    Science.gov (United States)

    Dailing, Eric A; Nair, Devatha P; Setterberg, Whitney K; Kyburz, Kyle A; Yang, Chun; D'Ovidio, Tyler; Anseth, Kristi S; Stansbury, Jeffrey W

    2016-01-28

    Drug releasing shape memory polymers (SMPs) were prepared from poly(thiourethane) networks that were coated with drug loaded nanogels through a UV initiated, surface mediated crosslinking reaction. Multifunctional thiol and isocyanate monomers were crosslinked through a step-growth mechanism to produce polymers with a homogeneous network structure that exhibited a sharp glass transition with 97% strain recovery and 96% shape fixity. Incorporating a small stoichiometric excess of thiol groups left pendant functionality for a surface coating reaction. Nanogels with diameter of approximately 10 nm bearing allyl and methacrylate groups were prepared separately via solution free radical polymerization. Coatings with thickness of 10-30 μm were formed via dip-coating and subsequent UV-initiated thiol-ene crosslinking between the SMP surface and the nanogel, and through inter-nanogel methacrylate homopolymerization. No significant change in mechanical properties or shape memory behavior was observed after the coating process, indicating that functional coatings can be integrated into an SMP without altering its original performance. Drug bioactivity was confirmed via in vitro culturing of human mesenchymal stem cells with SMPs coated with dexamethasone-loaded nanogels. This article offers a new strategy to independently tune multiple functions on a single polymeric device, and has broad application toward implantable, minimally invasive medical devices such as vascular stents and ocular shunts, where local drug release can greatly prolong device function.

  14. The development of artificial neural networks to predict virological response to combination HIV therapy

    NARCIS (Netherlands)

    Larder, Brendan; Wang, Dechao; Revell, Andrew; Montaner, Julio; Harrigan, Richard; de Wolf, Frank; Lange, Joep; Wegner, Scott; Ruiz, Lidia; Pérez-Elías, Maria Jésus; Emery, Sean; Gatell, Jose; D'Arminio Monforte, Antonella; Torti, Carlo; Zazzi, Maurizio; Lane, Clifford

    2007-01-01

    When used in combination, antiretroviral drugs are highly effective for suppressing HIV replication. Nevertheless, treatment failure commonly occurs and is generally associated with viral drug resistance. The choice of an alternative regimen may be guided by a drug-resistance test. However,

  15. 76 FR 34761 - Classified National Security Information

    Science.gov (United States)

    2011-06-14

    ... MARINE MAMMAL COMMISSION Classified National Security Information [Directive 11-01] AGENCY: Marine... Commission's (MMC) policy on classified information, as directed by Information Security Oversight Office... of Executive Order 13526, ``Classified National Security Information,'' and 32 CFR part 2001...

  16. Exploring the Impact of Food on the Gut Ecosystem Based on the Combination of Machine Learning and Network Visualization.

    Science.gov (United States)

    Shima, Hideaki; Masuda, Shizuka; Date, Yasuhiro; Shino, Amiu; Tsuboi, Yuuri; Kajikawa, Mizuho; Inoue, Yoshihiro; Kanamoto, Taisei; Kikuchi, Jun

    2017-12-01

    Prebiotics and probiotics strongly impact the gut ecosystem by changing the composition and/or metabolism of the microbiota to improve the health of the host. However, the composition of the microbiota constantly changes due to the intake of daily diet. This shift in the microbiota composition has a considerable impact; however, non-pre/probiotic foods that have a low impact are ignored because of the lack of a highly sensitive evaluation method. We performed comprehensive acquisition of data using existing measurements (nuclear magnetic resonance, next-generation DNA sequencing, and inductively coupled plasma-optical emission spectroscopy) and analyses based on a combination of machine learning and network visualization, which extracted important factors by the Random Forest approach, and applied these factors to a network module. We used two pteridophytes, Pteridium aquilinum and Matteuccia struthiopteris , for the representative daily diet. This novel analytical method could detect the impact of a small but significant shift associated with Matteuccia struthiopteris but not Pteridium aquilinum intake, using the functional network module. In this study, we proposed a novel method that is useful to explore a new valuable food to improve the health of the host as pre/probiotics.
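
    The record does not include code, but the core of the described pipeline (a Random Forest ranking important factors, followed by a network view of how they relate) can be sketched roughly as below; the synthetic data, feature names, correlation-based edges and thresholds are all assumptions standing in for the study's omics measurements and functional network module.

        # Sketch: rank omics features with a Random Forest, keep the top-scoring ones,
        # and visualise how they co-vary as a small network module (correlation is used
        # here as a simple stand-in for the functional network of the study).
        import numpy as np
        import networkx as nx
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)
        X = rng.standard_normal((60, 200))          # 60 samples x 200 features (synthetic)
        y = rng.integers(0, 2, 60)                  # diet A vs diet B (synthetic labels)
        names = [f"feat_{i}" for i in range(X.shape[1])]

        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
        top = np.argsort(rf.feature_importances_)[::-1][:15]   # important factors

        G = nx.Graph()
        corr = np.corrcoef(X[:, top].T)
        for a in range(len(top)):
            for b in range(a + 1, len(top)):
                if abs(corr[a, b]) > 0.3:                      # arbitrary edge threshold
                    G.add_edge(names[top[a]], names[top[b]], weight=corr[a, b])
        print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges in the module")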

  17. Exploring the Impact of Food on the Gut Ecosystem Based on the Combination of Machine Learning and Network Visualization

    Directory of Open Access Journals (Sweden)

    Hideaki Shima

    2017-12-01

    Full Text Available Prebiotics and probiotics strongly impact the gut ecosystem by changing the composition and/or metabolism of the microbiota to improve the health of the host. However, the composition of the microbiota constantly changes due to the intake of daily diet. This shift in the microbiota composition has a considerable impact; however, non-pre/probiotic foods that have a low impact are ignored because of the lack of a highly sensitive evaluation method. We performed comprehensive acquisition of data using existing measurements (nuclear magnetic resonance, next-generation DNA sequencing, and inductively coupled plasma-optical emission spectroscopy) and analyses based on a combination of machine learning and network visualization, which extracted important factors by the Random Forest approach, and applied these factors to a network module. We used two pteridophytes, Pteridium aquilinum and Matteuccia struthiopteris, for the representative daily diet. This novel analytical method could detect the impact of a small but significant shift associated with Matteuccia struthiopteris but not Pteridium aquilinum intake, using the functional network module. In this study, we proposed a novel method that is useful to explore a new valuable food to improve the health of the host as pre/probiotics.

  18. Combination of Markov state models and kinetic networks for the analysis of molecular dynamics simulations of peptide folding.

    Science.gov (United States)

    Radford, Isolde H; Fersht, Alan R; Settanni, Giovanni

    2011-06-09

    Atomistic molecular dynamics simulations of the TZ1 beta-hairpin peptide have been carried out using an implicit model for the solvent. The trajectories have been analyzed using a Markov state model defined on the projections along two significant observables and a kinetic network approach. The Markov state model allowed for an unbiased identification of the metastable states of the system, and provided the basis for commitment probability calculations performed on the kinetic network. The kinetic network analysis served to extract the main transition state for folding of the peptide and to validate the results from the Markov state analysis. The combination of the two techniques allowed for a consistent and concise characterization of the dynamics of the peptide. The slowest relaxation process identified is the exchange between variably folded and denatured species, and the second slowest process is the exchange between two different subsets of the denatured state which could not be otherwise identified by simple inspection of the projected trajectory. The third slowest process is the exchange between a fully native and a partially folded intermediate state characterized by a native turn with a proximal backbone H-bond, and frayed side-chain packing and termini. The transition state for the main folding reaction is similar to the intermediate state, although a more native like side-chain packing is observed.
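
    As a rough illustration of the Markov state model part of this combination (not the TZ1 analysis itself), the following sketch estimates a transition matrix from an already discretised trajectory, reads off the slowest relaxation processes, and builds a small kinetic network; the toy trajectory, lag time and edge threshold are assumptions.

        # Sketch: build a Markov state model from a trajectory that has already been
        # discretised into states, estimate the transition matrix, and read off the
        # slowest relaxation processes; the states here are synthetic, not TZ1 data.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(2)
        traj = rng.integers(0, 4, 10000)            # toy discrete trajectory over 4 states
        lag = 10                                    # lag time in frames (assumption)

        n = traj.max() + 1
        C = np.zeros((n, n))
        for i, j in zip(traj[:-lag], traj[lag:]):   # transition counts at the chosen lag
            C[i, j] += 1
        T = C / C.sum(axis=1, keepdims=True)        # row-stochastic transition matrix

        evals = np.sort(np.linalg.eigvals(T).real)[::-1]
        timescales = -lag / np.log(np.clip(evals[1:], 1e-12, 1 - 1e-12))
        print("implied timescales (frames):", timescales)

        # Kinetic network: nodes are states, edges carry transition probabilities.
        G = nx.DiGraph()
        for i in range(n):
            for j in range(n):
                if T[i, j] > 0.01:
                    G.add_edge(i, j, weight=T[i, j])
        print("kinetic network edges:", G.number_of_edges())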

  19. Novel amphiphilic poly(dimethylsiloxane) based polyurethane networks tethered with carboxybetaine and their combined antibacterial and anti-adhesive property

    Science.gov (United States)

    Jiang, Jingxian; Fu, Yuchen; Zhang, Qinghua; Zhan, Xiaoli; Chen, Fengqiu

    2017-08-01

    Traditional nonfouling materials are powerless against bacterial cell attachment, while hydrophobic bactericidal surfaces always suffer from nonspecific protein adsorption and the accumulation of dead bacterial cells. Here, amphiphilic polyurethane (PU) networks modified with poly(dimethylsiloxane) (PDMS) and cationic carboxybetaine diol through a simple crosslinking reaction were developed, which had an antibacterial efficiency of 97.7%. Thereafter, the hydrolysis of carboxybetaine ester into zwitterionic groups brought about anti-adhesive properties against bacteria and proteins. The surface chemical composition and wettability of the PU network surfaces were investigated by attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR), X-ray photoelectron spectroscopy (XPS) and contact angle analysis. The surface distribution of PDMS and zwitterionic segments produced an obvious amphiphilic heterogeneous surface, which was demonstrated by atomic force microscopy (AFM). Enzyme-linked immunosorbent assays (ELISA) were used to test the nonspecific protein adsorption behaviors. With the advantages of the transition from excellent bactericidal performance to anti-adhesion and the combination of fouling resistance and fouling release properties, the designed PDMS-based amphiphilic PU network shows great application potential in biomedical devices and marine facilities.

  20. Combining wireless sensor networks and semantic middleware for an Internet of Things-based sportsman/woman monitoring application.

    Science.gov (United States)

    Rodríguez-Molina, Jesús; Martínez, José-Fernán; Castillejo, Pedro; López, Lourdes

    2013-01-31

    Wireless Sensor Networks (WSNs) are spearheading the efforts taken to build and deploy systems aiming to accomplish the ultimate objectives of the Internet of Things. Due to the sensors WSNs nodes are provided with, and to their ubiquity and pervasive capabilities, these networks become extremely suitable for many applications that so-called conventional cabled or wireless networks are unable to handle. One of these still underdeveloped applications is monitoring physical parameters on a person. This is an especially interesting application regarding their age or activity, for any detected hazardous parameter can be notified not only to the monitored person as a warning, but also to any third party that may be helpful under critical circumstances, such as relatives or healthcare centers. We propose a system built to monitor a sportsman/woman during a workout session or performing a sport-related indoor activity. Sensors have been deployed by means of several nodes acting as the nodes of a WSN, along with a semantic middleware development used for hardware complexity abstraction purposes. The data extracted from the environment, combined with the information obtained from the user, will compose the basis of the services that can be obtained.

  1. Development of a platform to combine sensor networks and home robots to improve fall detection in the home environment.

    Science.gov (United States)

    Della Toffola, Luca; Patel, Shyamal; Chen, Bor-rong; Ozsecen, Yalgin M; Puiatti, Alessandro; Bonato, Paolo

    2011-01-01

    Over the last decade, significant progress has been made in the development of wearable sensor systems for continuous health monitoring in the home and community settings. One of the main areas of application for these wearable sensor systems is in detecting emergency events such as falls. Wearable sensors like accelerometers are increasingly being used to monitor daily activities of individuals at risk of falls, detect emergency events and send alerts to caregivers. However, such systems tend to have a high rate of false alarms, which leads to low compliance levels. Home robots can provide caregivers with the ability to quickly make an assessment and intervene if an emergency event is detected. This can provide an additional layer for detecting false positives, which can lead to improved compliance. In this paper, we present preliminary work on the development of a fall detection system based on a combination of sensor networks and home robots. The sensor network architecture comprises body-worn sensors and ambient sensors distributed in the environment. We present the software architecture and the conceptual design of the home robotic platform. We also perform preliminary characterization of the sensor network in terms of latencies and battery lifetime.

  2. Combining Wireless Sensor Networks and Semantic Middleware for an Internet of Things-Based Sportsman/Woman Monitoring Application

    Directory of Open Access Journals (Sweden)

    Lourdes López

    2013-01-01

    Full Text Available Wireless Sensor Networks (WSNs are spearheading the efforts taken to build and deploy systems aiming to accomplish the ultimate objectives of the Internet of Things. Due to the sensors WSNs nodes are provided with, and to their ubiquity and pervasive capabilities, these networks become extremely suitable for many applications that so-called conventional cabled or wireless networks are unable to handle. One of these still underdeveloped applications is monitoring physical parameters on a person. This is an especially interesting application regarding their age or activity, for any detected hazardous parameter can be notified not only to the monitored person as a warning, but also to any third party that may be helpful under critical circumstances, such as relatives or healthcare centers. We propose a system built to monitor a sportsman/woman during a workout session or performing a sport-related indoor activity. Sensors have been deployed by means of several nodes acting as the nodes of a WSN, along with a semantic middleware development used for hardware complexity abstraction purposes. The data extracted from the environment, combined with the information obtained from the user, will compose the basis of the services that can be obtained.

  3. The combined geodetic network adjusted on the reference ellipsoid – a comparison of three functional models for GNSS observations

    Directory of Open Access Journals (Sweden)

    Kadaj Roman

    2016-12-01

    Full Text Available The adjustment problem of the so-called combined (hybrid, integrated) network created with GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. The network adjustment in various mathematical spaces was considered: in the Cartesian geocentric system, on a reference ellipsoid and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case, the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis of the adjustment of a combined network on the ellipsoid shows that the optimal functional approach in relation to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications, we use the linearized forms of the observational equations with explicitly specified coefficients). While retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example the vector of the geodesic parameters. The problem is theoretically developed and numerically tested. An example of the adjustment of a subnet loaded from the database of reference stations of the ASG-EUPOS system was considered for the preferred functional

  4. Application of 1 D Finite Element Method in Combination with Laminar Solution Method for Pipe Network Analysis

    Science.gov (United States)

    Dudar, O. I.; Dudar, E. S.

    2017-11-01

    The features of application of the 1D finite element method (FEM) in combination with the laminar solutions method (LSM) for the calculation of underground ventilating networks are considered. In this case the processes of heat and mass transfer change the properties of a fluid (binary vapour-air mix). Under the action of gravitational forces this leads to such phenomena as natural draft, local circulation, etc. The FEM relations considering the action of gravity, the mass conservation law, and the dependence of vapour-air mix properties on the thermodynamic parameters are derived so that they allow one to model the mentioned phenomena. The analogy of the elastic and plastic rod deformation processes to the processes of laminar and turbulent flow in a pipe is described. Owing to this analogy, the guaranteed convergence of the elastic solutions method for materials of plastic type means the guaranteed convergence of the LSM for any regime of turbulent flow in a rough pipe. By means of numerical experiments the convergence rate of the FEM - LSM is investigated. This convergence rate proved to be much higher than that of the Cross - Andriyashev method. Data from other authors on the convergence rate comparison for the finite element method, the Newton method and the gradient method are provided. These data allow one to conclude that the FEM in combination with the LSM is one of the most effective methods for the calculation of hydraulic and ventilating networks. The FEM - LSM has been used for the creation of the research application programme package “MineClimate”, allowing one to calculate the microclimate parameters in underground ventilating networks.

  5. Two channel EEG thought pattern classifier.

    Science.gov (United States)

    Craig, D A; Nguyen, H T; Burchey, H A

    2006-01-01

    This paper presents a real-time electro-encephalogram (EEG) identification system with the goal of achieving hands free control. With two EEG electrodes placed on the scalp of the user, EEG signals are amplified and digitised directly using a ProComp+ encoder and transferred to the host computer through the RS232 interface. Using a real-time multilayer neural network, the actual classification for the control of a powered wheelchair has a very fast response. It can detect changes in the user's thought pattern in 1 second. Using only two EEG electrodes at positions O(1) and C(4) the system can classify three mental commands (forward, left and right) with an accuracy of more than 79 %
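
    Purely as an illustration of this kind of classifier (the record gives no code), a small multilayer network separating three mental commands from two-channel features might be sketched as below; the band-power features, dataset and network size are synthetic assumptions, not the ProComp+ recordings of the study.

        # Sketch: classify three mental commands from two EEG channels with a small
        # multilayer network; features and labels below are synthetic placeholders.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        X = rng.standard_normal((300, 8))           # e.g. 4 band powers per channel (O1, C4)
        y = rng.integers(0, 3, 300)                 # 0=forward, 1=left, 2=right (synthetic)

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        clf.fit(Xtr, ytr)
        print("accuracy:", clf.score(Xte, yte))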

  6. Semiautomated tremor detection using a combined cross-correlation and neural network approach

    Science.gov (United States)

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2013-01-01

    Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low-amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross-correlation technique, followed by a Self-Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as being “semiautomated”. We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3-week-long test data set and compare to the SOM output and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal-to-noise ratios and number of available stations. We find detection completeness of 96% for tremor events with signal-to-noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13-month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.

  7. Growing adaptive machines combining development and learning in artificial neural networks

    CERN Document Server

    Bredeche, Nicolas; Doursat, René

    2014-01-01

    The pursuit of artificial intelligence has been a highly active domain of research for decades, yielding exciting scientific insights and productive new technologies. In terms of generating intelligence, however, this pursuit has yielded only limited success. This book explores the hypothesis that adaptive growth is a means of moving forward. By emulating the biological process of development, we can incorporate desirable characteristics of natural neural systems into engineered designs, and thus move closer towards the creation of brain-like systems. The particular focus is on how to design artificial neural networks for engineering tasks. The book consists of contributions from 18 researchers, ranging from detailed reviews of recent domains by senior scientists, to exciting new contributions representing the state of the art in machine learning research. The book begins with broad overviews of artificial neurogenesis and bio-inspired machine learning, suitable both as an introduction to the domains and as a...

  8. Combining convolutional neural networks and Hough Transform for classification of images containing lines

    Science.gov (United States)

    Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy

    2017-03-01

    In this paper, we propose an expansion of convolutional neural network (CNN) input features based on the Hough Transform. We perform morphological contrasting of the source image followed by the Hough Transform, and then use the result as input for some of the convolutional filters. Thus, the CNN's computational complexity and the number of units are not affected. Morphological contrasting and the Hough Transform are the only additional computational expenses of the introduced CNN input feature expansion. The proposed approach was demonstrated on the example of a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset with symbols taken from Russian passports. Our approach allowed us to reach a noticeable accuracy improvement without much computational effort, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, like pressure ridge analysis and classification.
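
    The feature-expansion step could be sketched roughly as below, with scikit-image standing in for the paper's pipeline; the simple edge map replaces the morphological contrasting, the resizing of the Hough accumulator is an assumption about how the extra channel is shaped, and the CNN itself is not shown.

        # Sketch: expand a CNN's input by adding a channel derived from the Hough
        # transform of the image; the CNN that consumes the stacked input is omitted.
        import numpy as np
        from skimage.filters import sobel
        from skimage.transform import hough_line, resize

        image = np.zeros((64, 64), dtype=float)
        image[32, 10:54] = 1.0                       # toy image containing a line

        edges = sobel(image) > 0.1                   # stand-in for morphological contrasting
        accumulator, angles, dists = hough_line(edges)
        hough_channel = resize(accumulator.astype(float), image.shape, anti_aliasing=True)
        hough_channel /= hough_channel.max() + 1e-12

        # Stack the original image and the Hough-derived map as a 2-channel input.
        cnn_input = np.stack([image, hough_channel], axis=0)   # shape (2, 64, 64)
        print(cnn_input.shape)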

  9. Focused crawler based on Bayesian classifier

    Directory of Open Access Journals (Sweden)

    JIA Haijun

    2013-12-01

    Full Text Available With the rapid development of the network, its information resources are increasingly large; faced with a huge amount of information, search engines play an important role. Focused crawling, as the core of a search engine, is used to calculate the relationship between search results and search topics, which is called correlation. Normally, the focused crawling method only calculates the correlation between web content and the search topics. In this paper, the focused crawling method computes the importance of links from link content and anchor text; then a Bayesian classifier is used to classify the links, and finally a cosine similarity function is used to calculate the relevance of web pages. If the correlation value is greater than the threshold, the page is considered to be associated with the predetermined topics; otherwise it is not relevant. Experimental results show that a high accuracy can be obtained by using the proposed crawling approach.
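
    A minimal sketch of this decision step (not the paper's implementation) is shown below: a naive Bayes model classifies candidate links from their anchor or link text, and a fetched page is kept when its cosine similarity to the topic exceeds a threshold. The training snippets, topic string and threshold are illustrative assumptions.

        # Sketch: focused-crawler link classification plus page relevance check.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.metrics.pairwise import cosine_similarity

        topic = "machine learning classifiers for web page classification"
        link_texts = ["intro to classifiers", "football scores", "naive bayes tutorial", "holiday photos"]
        link_labels = [1, 0, 1, 0]                       # 1 = relevant to the topic

        vec = TfidfVectorizer()
        X_links = vec.fit_transform(link_texts + [topic])
        nb = MultinomialNB().fit(X_links[:-1], link_labels)

        candidate = ["bayesian text classification explained"]
        follow = nb.predict(vec.transform(candidate))[0] == 1   # decide whether to crawl the link

        page_text = ["this page explains bayesian classification of web pages"]
        relevance = cosine_similarity(vec.transform(page_text), X_links[-1])[0, 0]
        print("follow link:", follow, "| page relevance:", relevance, "| keep page:", relevance > 0.2)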

  10. Design and Performance Investigation for the Optical Combinational Networks at High Data Rate

    Science.gov (United States)

    Tripathi, Devendra Kr.

    2017-05-01

    This article presents a performance study of optical combinational designs based on the nonlinear characteristics of the semiconductor optical amplifier (SOA). Two configurations of an optical half-adder with a non-return-to-zero modulation pattern, together with a Mach-Zehnder modulator and interferometer at a 50-Gbps data rate, have been successfully realized. Accordingly, the SUM and CARRY outputs have been concurrently executed and their output waveforms verified. Numerical simulations varying the data rate and key design parameters have been carried out, yielding optimum performance. The investigations indicate overall good performance of the design in terms of the extinction factor. They also suggest that all-optical realization based on the SOA is a competent scheme, as it circumvents costly optoelectronic conversion. This could well support building larger, more complex optical combinational circuits.

  11. Network-assisted investigation of combined causal signals from genome-wide association studies in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Peilin Jia

    Full Text Available With the recent success of genome-wide association studies (GWAS), a wealth of association data has been accumulated for more than 200 complex diseases/traits, creating a strong demand for data integration and interpretation. A combinatory analysis of multiple GWAS datasets, or an integrative analysis of GWAS data and other high-throughput data, has been particularly promising. In this study, we proposed an integrative analysis framework of multiple GWAS datasets by overlaying association signals onto the protein-protein interaction network, and demonstrated it using schizophrenia datasets. Building on a dense module search algorithm, we first searched for significantly enriched subnetworks for schizophrenia in each single GWAS dataset and then implemented a discovery-evaluation strategy to identify module genes with consistent association signals. We validated the module genes in an independent dataset, and also examined them through meta-analysis of the related SNPs using multiple GWAS datasets. As a result, we identified 205 module genes with a joint effect significantly associated with schizophrenia; these module genes included a number of well-studied candidate genes such as DISC1, GNA12, GNA13, GNAI1, GPR17, and GRIN2B. Further functional analysis suggested these genes are involved in neuronal related processes. Additionally, meta-analysis found that 18 SNPs in 9 module genes had P(meta) < 1 × 10⁻⁴, including the gene HLA-DQA1 located in the MHC region on chromosome 6, which was reported in previous studies using the largest cohort of schizophrenia patients to date. These results demonstrated our bi-directional network-based strategy is efficient for identifying disease-associated genes with modest signals in GWAS datasets. This approach can be applied to any other complex diseases/traits where multiple GWAS datasets are available.

  12. Classifier-guided sampling for discrete variable, discontinuous design space exploration: Convergence and computational performance

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shahan, David W. [HRL Labs., LLC, Malibu, CA (United States); Seepersad, Carolyn Conner [Univ. of Texas, Austin, TX (United States)

    2014-04-22

    A classifier-guided sampling (CGS) method is introduced for solving engineering design optimization problems with discrete and/or continuous variables and continuous and/or discontinuous responses. The method merges concepts from metamodel-guided sampling and population-based optimization algorithms. The CGS method uses a Bayesian network classifier for predicting the performance of new designs based on a set of known observations or training points. Unlike most metamodeling techniques, however, the classifier assigns a categorical class label to a new design, rather than predicting the resulting response in continuous space, and thereby accommodates nondifferentiable and discontinuous functions of discrete or categorical variables. The CGS method uses these classifiers to guide a population-based sampling process towards combinations of discrete and/or continuous variable values with a high probability of yielding preferred performance. Accordingly, the CGS method is appropriate for discrete/discontinuous design problems that are ill-suited for conventional metamodeling techniques and too computationally expensive to be solved by population-based algorithms alone. In addition, the rates of convergence and computational properties of the CGS method are investigated when applied to a set of discrete variable optimization problems. Results show that the CGS method significantly improves the rate of convergence towards known global optima, on average, when compared to genetic algorithms.
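
    One iteration of this idea can be sketched as below; Gaussian naive Bayes stands in for the Bayesian network classifier of the paper, and the objective function, labelling rule and candidate pool are toy assumptions used only to show the filter-then-evaluate loop.

        # Sketch of one classifier-guided sampling iteration: evaluated designs are
        # labelled "preferred" or not, a classifier is trained on those labels, and a
        # large pool of random candidates is filtered so only promising ones are
        # evaluated next.
        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(4)

        def objective(x):                     # toy, possibly discontinuous, objective
            return np.where(x[:, 0] > 0, (x ** 2).sum(axis=1), 10.0)

        X_known = rng.uniform(-2, 2, size=(40, 3))
        f_known = objective(X_known)
        labels = (f_known < np.median(f_known)).astype(int)   # 1 = preferred designs

        clf = GaussianNB().fit(X_known, labels)

        candidates = rng.uniform(-2, 2, size=(1000, 3))
        promising = candidates[clf.predict(candidates) == 1]   # evaluate only these next
        print(f"{len(promising)} of 1000 candidates selected for evaluation")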

  13. Regional brain network organization distinguishes the combined and inattentive subtypes of Attention Deficit Hyperactivity Disorder

    OpenAIRE

    Jacqueline F. Saad; Kristi R. Griffiths; Michael R. Kohn; Simon Clarke; Leanne M. Williams; Mayuresh S. Korgaonkar

    2017-01-01

    Attention Deficit Hyperactivity Disorder (ADHD) is characterized clinically by hyperactive/impulsive and/or inattentive symptoms which determine diagnostic subtypes as Predominantly Hyperactive-Impulsive (ADHD-HI), Predominantly Inattentive (ADHD-I), and Combined (ADHD-C). Neuroanatomically though we do not yet know if these clinical subtypes reflect distinct aberrations in underlying brain organization. We imaged 34 ADHD participants defined using DSM-IV criteria as ADHD-I (n = 16) or as ADH...

  14. Different combined oral contraceptives and the risk of venous thrombosis: systematic review and network meta-analysis

    Science.gov (United States)

    Stegeman, Bernardine H; de Bastos, Marcos; Rosendaal, Frits R; van Hylckama Vlieg, A; Helmerhorst, Frans M; Stijnen, Theo

    2013-01-01

    Objective To provide a comprehensive overview of the risk of venous thrombosis in women using different combined oral contraceptives. Design Systematic review and network meta-analysis. Data sources PubMed, Embase, Web of Science, Cochrane, Cumulative Index to Nursing and Allied Health Literature, Academic Search Premier, and ScienceDirect up to 22 April 2013. Review methods Observational studies that assessed the effect of combined oral contraceptives on venous thrombosis in healthy women. The primary outcome of interest was a fatal or non-fatal first event of venous thrombosis with the main focus on deep venous thrombosis or pulmonary embolism. Publications with at least 10 events in total were eligible. The network meta-analysis was performed using an extension of frequentist random effects models for mixed multiple treatment comparisons. Unadjusted relative risks with 95% confidence intervals were reported. The requirement for crude numbers did not allow adjustment for potential confounding variables. Results 3110 publications were retrieved through a search strategy; 25 publications reporting on 26 studies were included. Incidence of venous thrombosis in non-users from two included cohorts was 1.9 and 3.7 per 10 000 woman years, in line with previously reported incidences of 1-6 per 10 000 woman years. Use of combined oral contraceptives increased the risk of venous thrombosis compared with non-use (relative risk 3.5, 95% confidence interval 2.9 to 4.3). The relative risk of venous thrombosis for combined oral contraceptives with 30-35 µg ethinylestradiol and gestodene, desogestrel, cyproterone acetate, or drospirenone were similar and about 50-80% higher than for combined oral contraceptives with levonorgestrel. A dose related effect of ethinylestradiol was observed for gestodene, desogestrel, and levonorgestrel, with higher doses being associated with higher thrombosis risk. Conclusion All combined oral contraceptives investigated in this analysis were

  15. A diversity compression and combining technique based on channel shortening for cooperative networks

    KAUST Repository

    Hussain, Syed Imtiaz

    2012-02-01

    The cooperative relaying process with multiple relays needs proper coordination among the communicating and the relaying nodes. This coordination and the required capabilities may not be available in some wireless systems where the nodes are equipped with very basic communication hardware. We consider a scenario where the source node transmits its signal to the destination through multiple relays in an uncoordinated fashion. The destination captures the multiple copies of the transmitted signal through a Rake receiver. We analyze a situation where the number of Rake fingers N is less than that of the relaying nodes L. In this case, the receiver can combine N strongest signals out of L. The remaining signals will be lost and act as interference to the desired signal components. To tackle this problem, we develop a novel signal combining technique based on channel shortening principles. This technique proposes a processing block before the Rake reception which compresses the energy of L signal components over N branches while keeping the noise level at its minimum. The proposed scheme saves the system resources and makes the received signal compatible to the available hardware. Simulation results show that it outperforms the selection combining scheme. © 2012 IEEE.
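
    The compression step can be illustrated with the eigen-based sketch below, which concentrates the energy of L branches onto N outputs with an orthonormal matrix so the noise stays white; this is a simple stand-in for the channel-shortening design of the paper, and the channel gains, noise level and symbol count are synthetic.

        # Sketch: compress L relayed signal branches onto N receiver branches with a
        # linear pre-combining matrix built from the dominant eigenvectors of the
        # branch covariance, then apply maximal-ratio combining over the N outputs.
        import numpy as np

        rng = np.random.default_rng(5)
        L, N, K = 8, 3, 2000                     # relays, available branches, symbols

        h = rng.normal(size=L) + 1j * rng.normal(size=L)      # per-relay channel gains
        s = rng.choice([-1.0, 1.0], size=K)                   # BPSK symbols
        r = np.outer(h, s) + 0.3 * (rng.normal(size=(L, K)) + 1j * rng.normal(size=(L, K)))

        # Dominant eigenvectors of the branch covariance define the compression matrix W.
        R = (r @ r.conj().T) / K
        eigvals, eigvecs = np.linalg.eigh(R)
        W = eigvecs[:, -N:]                      # L x N, orthonormal columns keep noise white

        z = W.conj().T @ r                       # N compressed branches
        h_eff = W.conj().T @ h
        combined = (h_eff.conj() @ z).real       # maximal-ratio combining over N branches
        print("bit error rate:", np.mean(np.sign(combined) != s))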

  16. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    International Nuclear Information System (INIS)

    Blanco, A; Rodriguez, R; Martinez-Maranon, I

    2014-01-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on the extracted gonads of Mackerel females and males (fresh and defrozen) to find differences between the sexes. Several linear and non-linear classifiers such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA) can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify different sets of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity

  17. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    Science.gov (United States)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on the extracted gonads of Mackerel females and males (fresh and defrozen) to find differences between the sexes. Several linear and non-linear classifiers such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA) can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify different sets of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.
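
    The ensemble idea can be sketched as below: nearest-neighbour classifiers that differ only in the dissimilarity they use are combined by majority vote; the colour features, the particular dissimilarities and the two-class data are assumptions standing in for the gonad colour measurements of the study.

        # Sketch: an ensemble of k-NN classifiers, one per dissimilarity, combined by
        # majority vote over their predictions.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(6)
        X = np.vstack([rng.normal(0.0, 1.0, (60, 5)), rng.normal(1.0, 1.2, (60, 5))])
        y = np.array([0] * 60 + [1] * 60)        # 0 = female, 1 = male (synthetic)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

        metrics = ["euclidean", "manhattan", "chebyshev", "canberra", "braycurtis"]
        members = [KNeighborsClassifier(n_neighbors=5, metric=m).fit(Xtr, ytr) for m in metrics]

        votes = np.stack([m.predict(Xte) for m in members])   # one row of votes per dissimilarity
        ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int) # majority vote (odd ensemble size)
        print("ensemble accuracy:", np.mean(ensemble_pred == yte))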

  18. Using multiple classifiers for predicting the risk of endovascular aortic aneurysm repair re-intervention through hybrid feature selection.

    Science.gov (United States)

    Attallah, Omneya; Karthikesalingam, Alan; Holt, Peter Je; Thompson, Matthew M; Sayers, Rob; Bown, Matthew J; Choke, Eddie C; Ma, Xianghong

    2017-11-01

    Feature selection is essential in the medical area; however, its process becomes complicated in the presence of censoring, which is the unique characteristic of survival analysis. Most survival feature selection methods are based on Cox's proportional hazard model, though machine learning classifiers are preferred. They are less employed in survival analysis due to censoring, which prevents them from being applied directly to survival data. Among the few works that employed machine learning classifiers, the partial logistic artificial neural network with auto-relevance determination is a well-known method that deals with censoring and performs feature selection for survival data. However, it depends on data replication to handle censoring, which leads to unbalanced and biased prediction results, especially for highly censored data. Other methods cannot deal with high censoring. Therefore, in this article, a new hybrid feature selection method is proposed which presents a solution to high-level censoring. It combines support vector machine, neural network, and K-nearest neighbor classifiers using simple majority voting and a new weighted majority voting method based on a survival metric to construct a multiple classifier system. The new hybrid feature selection process uses the multiple classifier system as a wrapper method and merges it with an iterated feature ranking filter method to further reduce the features. Two endovascular aortic repair datasets containing 91% censored patients, collected from two centers, were used to construct a multicenter study to evaluate the performance of the proposed approach. The results showed the proposed technique outperformed individual classifiers and variable selection methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in terms of p values of the log-rank test, sensitivity, and concordance index. This indicates that the proposed classifier is more powerful in correctly predicting the risk of
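
    The voting part of such a multiple classifier system can be sketched as below; validation accuracy is used as the weight here, purely as a stand-in for the survival-metric weights of the paper, and the synthetic data replace the EVAR records.

        # Sketch: combine SVM, neural network and k-NN predictions by simple and by
        # weighted majority voting.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.neural_network import MLPClassifier
        from sklearn.neighbors import KNeighborsClassifier

        X, y = make_classification(n_samples=400, n_features=20, random_state=0)
        Xtr, Xval, ytr, yval = train_test_split(X, y, test_size=0.3, random_state=0)

        members = [SVC(),
                   MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
                   KNeighborsClassifier(n_neighbors=7)]
        members = [m.fit(Xtr, ytr) for m in members]
        weights = np.array([m.score(Xval, yval) for m in members])   # stand-in weights

        votes = np.stack([m.predict(Xval) for m in members])          # shape (3, n_val)
        simple_vote = (votes.mean(axis=0) > 0.5).astype(int)
        weighted_vote = ((weights[:, None] * votes).sum(axis=0) / weights.sum() > 0.5).astype(int)
        print("simple vote acc:", np.mean(simple_vote == yval),
              "| weighted vote acc:", np.mean(weighted_vote == yval))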

  19. The Closing of the Classified Catalog at Boston University

    Science.gov (United States)

    Hazen, Margaret Hindle

    1974-01-01

    Although the classified catalog at Boston University libraries has been a useful research tool, it has proven too expensive to keep current. The library has converted to a traditional alphabetic subject catalog and will receive catalog cards from the Ohio College Library Center through the New England Library Network. (Author/LS)

  20. Diagnosis of Broiler Livers by Classifying Image Patches

    DEFF Research Database (Denmark)

    Jørgensen, Anders; Fagertun, Jens; Moeslund, Thomas B.

    2017-01-01

    Manual health inspection is becoming the bottleneck at poultry processing plants. We present a computer vision method for the automatic diagnosis of broiler livers. The non-rigid livers, of varying shapes and sizes, are classified in patches by a convolutional neural network, outputting maps...

  1. Dimensionality Reduction Through Classifier Ensembles

    Science.gov (United States)

    Oza, Nikunj C.; Tumer, Kagan; Norwig, Peter (Technical Monitor)

    1999-01-01

    In data mining, one often needs to analyze datasets with a very large number of attributes. Performing machine learning directly on such data sets is often impractical because of extensive run times, excessive complexity of the fitted model (often leading to overfitting), and the well-known "curse of dimensionality." In practice, to avoid such problems, feature selection and/or extraction are often used to reduce data dimensionality prior to the learning step. However, existing feature selection/extraction algorithms either evaluate features by their effectiveness across the entire data set or simply disregard class information altogether (e.g., principal component analysis). Furthermore, feature extraction algorithms such as principal components analysis create new features that are often meaningless to human users. In this article, we present input decimation, a method that provides "feature subsets" that are selected for their ability to discriminate among the classes. These features are subsequently used in ensembles of classifiers, yielding results superior to single classifiers, ensembles that use the full set of features, and ensembles based on principal component analysis on both real and synthetic datasets.
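
    A compact sketch of the input-decimation idea described above is given below: for each class, the features most correlated with that class's one-vs-rest indicator are kept, one classifier is trained per subset, and the ensemble's class probabilities are averaged. The dataset, base classifier and subset size are illustrative choices, not those of the article.

        # Sketch: input decimation, one class-specific feature subset per ensemble member.
        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = load_digits(return_X_y=True)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        n_keep = 20                                             # features kept per class

        probas = []
        for c in np.unique(y):
            target = (ytr == c).astype(float)
            corr = np.abs([np.corrcoef(Xtr[:, j], target)[0, 1] if Xtr[:, j].std() > 0 else 0.0
                           for j in range(Xtr.shape[1])])
            subset = np.argsort(corr)[::-1][:n_keep]            # class-specific feature subset
            clf = LogisticRegression(max_iter=5000).fit(Xtr[:, subset], ytr)
            probas.append(clf.predict_proba(Xte[:, subset]))

        ensemble = np.mean(probas, axis=0)                       # average the ensemble outputs
        print("input-decimation ensemble accuracy:", np.mean(ensemble.argmax(axis=1) == yte))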

  2. The interventional effect of new drugs combined with the Stupp protocol on glioblastoma: A network meta-analysis.

    Science.gov (United States)

    Li, Mei; Song, Xiangqi; Zhu, Jun; Fu, Aijun; Li, Jianmin; Chen, Tong

    2017-08-01

    New therapeutic agents in combination with the standard Stupp protocol (a protocol combining temozolomide with radiotherapy for glioblastoma, described by Stupp R. in 2005) were assessed to evaluate whether they were superior to the Stupp protocol alone, to determine the optimum treatment regimen for patients with newly diagnosed glioblastoma. We implemented a search strategy to identify studies in the following databases: PubMed, Cochrane Library, EMBASE, CNKI, CBM, Wanfang, and VIP, and assessed the quality of the data extracted from the included trials. Statistical software was used to perform the network meta-analysis. The novel therapeutic agents in combination with the Stupp protocol were all shown to be superior to the Stupp protocol alone for the treatment of newly diagnosed glioblastoma, ranked as follows: cilengitide 2000mg/5/week, bevacizumab in combination with irinotecan, nimotuzumab, bevacizumab, cilengitide 2000mg/2/week, cytokine-induced killer cell immunotherapy, and the Stupp protocol. In terms of serious adverse effects, the intervention group showed a 29% increase in the incidence of adverse events compared with the control group (patients treated only with the Stupp protocol), with a statistically significant difference (RR=1.29; 95%CI 1.17-1.43; P<0.001). The most common adverse events were thrombocytopenia, lymphopenia, neutropenia, pneumonia, nausea, and vomiting, none of which differed significantly between the groups except for neutropenia, pneumonia, and embolism. All intervention drugs evaluated in our study were superior to the Stupp protocol alone when used in combination with it. However, we could not conclusively confirm whether cilengitide 2000mg/5/week was the optimum regimen, as only one trial using this protocol was included in our study. Copyright © 2017. Published by Elsevier B.V.

  3. Compression and Combining Based on Channel Shortening and Rank Reduction Technique for Cooperative Wireless Sensor Networks

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-12-18

    This paper investigates and compares the performance of wireless sensor networks where sensors operate on the principles of cooperative communications. We consider a scenario where the source transmits signals to the destination with the help of L sensors. As the destination has the capacity of processing only U out of these L signals, the strongest U signals are selected while the remaining (L-U) signals are suppressed. A preprocessing block similar to channel-shortening is proposed in this contribution. However, this preprocessing block employs a rank-reduction technique instead of channel-shortening. By employing this preprocessing, we are able to decrease the computational complexity of the system without affecting the bit error rate (BER) performance. From our simulations, it can be shown that these schemes outperform the channel-shortening schemes in terms of computational complexity. In addition, the proposed schemes have a superior BER performance as compared to channel-shortening schemes when sensors employ fixed gain amplification. However, for sensors which employ variable gain amplification, a tradeoff exists in terms of BER performance between the channel-shortening and these schemes. These schemes outperform the channel-shortening scheme for lower signal-to-noise ratios.

  4. A hybrid accident simulation methodology for nuclear power plant by combining thermal-hydraulic program and artificial neural networks

    International Nuclear Information System (INIS)

    Choi, Young Joon

    2004-02-01

    Compact simulators for nuclear power plants can be used as cost-effective training or analysis tools; generally, they demonstrate overall responses of transients or accidents in real time or faster. In the thermal-hydraulic models of compact simulators, governing equations are simplified with reasonable assumptions and empirical correlations, and approximate solutions are obtained by using appropriate numerical schemes. Moreover, many physical control volumes in plant modeling are lumped to reduce the computing time. The simplification of equations and reduction of control volume numbers usually degrade the accuracy of solutions. A hybrid accident simulation methodology is proposed to enhance the capabilities of a compact simulator by introducing artificial neural networks. A simplified thermal-hydraulic program, playing the role of compact simulator, is designed to calculate the overall responses of transients and accidents. Two neural networks are designed and trained with the target values obtained from the analyses of detailed computer codes and trained results are combined with the simplified thermal-hydraulic program to perform the following roles: (I) compensation for inaccuracy of a simplified thermal-hydraulic program occurring from simplified governing equation and small number of physical control volumes: the auto-associative neural network (AANN), trained with the target values obtained from RELAP5/MOD3 code analyses, improves the calculated results of the simplified thermal-hydraulic program, and (II) prediction of the critical parameter usually calculated from the sophisticated computer code: the back propagation neural network (BPN), trained with the target values obtained from COBRA-IV code analyses, predicts the minimum departure from nucleate boiling ratio (DNBR) which is not calculated in simplified thermal-hydraulic program. Simulations for the several accidents are carried out to verify the applicability of the proposed methodology. The

  5. Evolving fuzzy rules in a learning classifier system

    Science.gov (United States)

    Valenzuela-Rendon, Manuel

    1993-01-01

    The fuzzy classifier system (FCS) combines the ideas of fuzzy logic controllers (FLC's) and learning classifier systems (LCS's). It brings together the expressive powers of fuzzy logic as it has been applied in fuzzy controllers to express relations between continuous variables, and the ability of LCS's to evolve co-adapted sets of rules. The goal of the FCS is to develop a rule-based system capable of learning in a reinforcement regime, and that can potentially be used for process control.

  6. Risk assessment of 170 kV GIS connected to combined cable/OHL network

    DEFF Research Database (Denmark)

    Bak, Claus Leth; Kessel, Jakob; Atlason, Vidir

    2009-01-01

    This paper concerns different investigations of lightning simulation of a combined 170 kV overhead line/cable connected GIS. This is interesting due to the increasing amount of underground cables and GIS in the Danish transmission system. This creates a different system with respect to lightning performance, compared to a system consisting solely of AIS connected through overhead lines. The main purpose is to investigate whether overvoltage protection is necessary at the GIS busbar. The analysis is conducted by implementing a simulation model in PSCAD/EMTDC. Simulations are conducted for both SF... and BFO. Overvoltages are evaluated for varying front times of the lightning surge, different soil resistivities at the surge arrester grounding in the overhead line/cable transition point and a varying length of the connection cable between the transformer and the GIS busbar with a SA implemented...

  7. Combining CFD simulations with blockoriented heatflow-network model for prediction of photovoltaic energy-production

    International Nuclear Information System (INIS)

    Haber, I E; Farkas, I

    2011-01-01

    The exterior factors influencing the working conditions of photovoltaic modules are the irradiation, the optical air layer (Air Mass - AM), the irradiation angle, the environmental temperature and the cooling effect of the wind. The efficiency of photovoltaic (PV) devices is inversely proportional to the cell temperature, and therefore the mounting of the PV modules can have a big effect on the cooling, due to wind flow-around and natural convection. The construction of the modules can be described by a heatflow-network model, which defines the equation that determines the cell temperature. Such an equation can be solved as a block-oriented model with a hybrid-analogue simulator such as Matlab-Simulink. From the flow field and the heat transfer, which were calculated numerically, the heat transfer coefficients can be determined. Five inflow rates were set up for both the pitched- and flat-roof cases to establish the trend of the heat transfer coefficient, and these functions can be used in the Matlab/Simulink model. To model the free convection flows, the Boussinesq approximation was used, integrated into the Navier-Stokes equations and the energy equation. It has been found that, under a constant solar heat gain, the air velocity around the modules and behind the pitched-roof mounted module increases in proportion to the wind velocity, and as a result the heat transfer coefficient increases linearly and can be described by a function in both cases. The meteorological parameters and the results of the CFD simulations, as single functions, were attached to the block-based model. The final aim was to make a model that could be used for planning photovoltaic systems and defining their performance accurately for better sizing of an array of modules.

  8. PRODIAG: Combined expert system/neural network for process fault diagnosis. Volume 1, Theory

    Energy Technology Data Exchange (ETDEWEB)

    Reifman, J.; Wei, T.Y.C.; Vitela, J.E.

    1995-09-01

    The function of the PRODIAG code is to diagnose on-line the root cause of a thermal-hydraulic (T-H) system transient, with trace back to the identification of the malfunctioning component, using the T-H instrumentation signals exclusively. The code methodology is based on the AI techniques of automated reasoning/expert systems (ES) and artificial neural networks (ANN). The research and development objective is to develop a generic code methodology which would be plant- and T-H-system-independent. For the ES part the only plant- or T-H-system-specific code requirements would be implemented through input only, and at that only through a Piping and Instrumentation Diagram (PID) database. For the ANN part the only plant- or T-H-system-specific code requirements would be the ANN training data for normal component characteristics and the same PID database information. PRODIAG would, therefore, be generic and portable from T-H system to T-H system and from plant to plant without requiring any code-related modifications except for the PID database and the ANN training with the normal component characteristics. This would give PRODIAG the generic feature which numerical simulation plant codes such as TRAC or RELAP5 have. As the code is applied to different plants and different T-H systems, only the connectivity information, the operating conditions and the normal component characteristics are changed, and the changes are made entirely through input. Verification and validation of PRODIAG would be T-H-system-independent and would be performed only "once".

  9. Combining affinity propagation clustering and mutual information network to investigate key genes in fibroid.

    Science.gov (United States)

    Chen, Qian-Song; Wang, Dan; Liu, Bao-Lian; Gao, Shu-Feng; Gao, Dan-Li; Li, Gui-Rong

    2017-07-01

    The aim of the present study was to investigate key genes in fibroids based on the multiple affinity propagation-Krzanowski and Lai (mAP-KL) method, which included the maxT multiple hypothesis test, the Krzanowski and Lai (KL) cluster quality index, the affinity propagation (AP) clustering algorithm, and a mutual information network (MIN) constructed by the context likelihood of relatedness (CLR) algorithm. In order to achieve this goal, mAP-KL was initially implemented to investigate exemplars in fibroids: the maxT function was employed to rank the genes of the training and test sets, and the top 200 genes were retained for further study. In addition, the KL cluster index was applied to determine the number of clusters, and the AP clustering algorithm was run to identify the clusters and their exemplars. Subsequently, a support vector machine (SVM) model was used to evaluate the classification performance of mAP-KL. Finally, topological properties (degree, closeness, betweenness and transitivity) of the exemplars in the MIN constructed according to the CLR algorithm were assessed to investigate key genes in fibroids. The SVM model validated that the classification between normal controls and fibroid patients by mAP-KL had a good performance. A total of 9 clusters and exemplars were identified based on mAP-KL, comprising CALCOCO2, COL4A2, COPS8, SNCG, PA2G4, C17orf70, MARK3, BTNL3 and TBC1D13. By assessing the topological analysis of the exemplars in the MIN, SNCG and COL4A2 were identified as the two most significant genes across the four measures, and they were denoted as key genes in the progression of fibroids. In conclusion, two key genes (SNCG and COL4A2) and 9 exemplars were successfully investigated, and these may be potential biomarkers for the detection and treatment of fibroids.
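
    A rough sketch of this pipeline is shown below: genes are ranked with a simple two-sample statistic (a stand-in for maxT), the top genes are clustered with affinity propagation, and the exemplars are connected in a mutual-information network whose degree, closeness and betweenness centralities are then scored. The expression data are synthetic, the MI threshold is arbitrary, and the CLR background correction is not reproduced.

        # Sketch: mAP-KL-style gene ranking, AP clustering, and MIN centralities.
        import numpy as np
        import networkx as nx
        from sklearn.cluster import AffinityPropagation
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(7)
        expr = rng.standard_normal((40, 500))            # 40 samples x 500 genes (synthetic)
        labels = np.array([0] * 20 + [1] * 20)           # control vs fibroid (synthetic)

        # Rank genes by a |t|-like statistic and keep the top 200.
        diff = expr[labels == 1].mean(0) - expr[labels == 0].mean(0)
        score = np.abs(diff / (expr.std(0) + 1e-9))
        top = np.argsort(score)[::-1][:200]

        ap = AffinityPropagation(random_state=0).fit(expr[:, top].T)   # cluster genes, not samples
        exemplars = top[ap.cluster_centers_indices_]

        # Mutual-information network over the exemplars, then topological properties.
        G = nx.Graph()
        for a, ga in enumerate(exemplars):
            mi = mutual_info_regression(expr[:, exemplars], expr[:, ga], random_state=0)
            for b, gb in enumerate(exemplars):
                if b != a and mi[b] > 0.05:              # arbitrary MI threshold
                    G.add_edge(int(ga), int(gb), weight=float(mi[b]))
        for name, scores in [("degree", nx.degree_centrality(G)),
                             ("closeness", nx.closeness_centrality(G)),
                             ("betweenness", nx.betweenness_centrality(G))]:
            print(name, dict(sorted(scores.items(), key=lambda kv: -kv[1])[:3]))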

  10. Combining advanced networked technology and pedagogical methods to improve collaborative distance learning.

    Science.gov (United States)

    Staccini, Pascal; Dufour, Jean-Charles; Raps, Hervé; Fieschi, Marius

    2005-01-01

    Making educational material available on a network cannot be reduced to merely implementing hypermedia and interactive resources on a server. A pedagogical schema has to be defined to guide students in their learning and to provide teachers with guidelines to prepare valuable and upgradeable resources. Components of a learning environment, as well as interactions between students and other roles such as author, tutor and manager, can be deduced from cognitive foundations of learning, such as the constructivist approach. Scripting the way a student will navigate among information nodes and interact with tools to build his/her own knowledge can be a good way of deducing the features of the graphic interface related to the management of the objects. We defined a typology of pedagogical resources, their data model and their logic of use. We implemented a generic, web-based authoring and publishing platform (called J@LON, for Join And Learn On the Net) within an object-oriented and open-source programming environment (called Zope) embedding a content management system (called Plone). Workflow features have been used to mark the progress of students and to trace the life cycle of resources shared by the teaching staff. The platform integrates advanced online authoring features to create interactive exercises and to support the delivery of live courses. The platform engine has been generalized to the whole curriculum of medical studies in our faculty; it also supports an international master of risk management in health care and will be extended to all other continuing education diplomas.

  11. GANN: Genetic algorithm neural networks for the detection of conserved combinations of features in DNA

    Directory of Open Access Journals (Sweden)

    Beiko Robert G

    2005-02-01

    Full Text Available Abstract Background The multitude of motif detection algorithms developed to date have largely focused on the detection of patterns in primary sequence. Since sequence-dependent DNA structure and flexibility may also play a role in protein-DNA interactions, the simultaneous exploration of sequence- and structure-based hypotheses about the composition of binding sites and the ordering of features in a regulatory region should be considered as well. The consideration of structural features requires the development of new detection tools that can deal with data types other than primary sequence. Results GANN (available at http://bioinformatics.org.au/gann) is a machine learning tool for the detection of conserved features in DNA. The software suite contains programs to extract different regions of genomic DNA from flat files and convert these sequences to indices that reflect sequence and structural composition or the presence of specific protein binding sites. The machine learning component allows the classification of different types of sequences based on subsamples of these indices, and can identify the best combinations of indices and machine learning architecture for sequence discrimination. Another key feature of GANN is the replicated splitting of data into training and test sets, and the implementation of negative controls. In validation experiments, GANN successfully merged important sequence and structural features to yield good predictive models for synthetic and real regulatory regions. Conclusion GANN is a flexible tool that can search through large sets of sequence and structural feature combinations to identify those that best characterize a set of sequences.

  12. Combining Multivariate Analysis and Pollen Count to Classify Honey Samples Accordingly to Different Botanical Origins Clasificación del Origen Botánico de la Miel Mediante la Combinación de Análisis Multivariado y Recuento de Polen

    Directory of Open Access Journals (Sweden)

    Eduardo Corbella

    2008-03-01

    Full Text Available This study reports the combination of multivariate techniques and pollen count analysis to classify honey samples according to botanical source, in samples from Uruguay. Honey samples from different botanical origins, namely Eucalyptus spp. (n = 10), Lotus spp. (n = 12), Salix spp. (n = 5), “mil flores” (Myrtaceae spp.) (n = 12) and coronilla (Scutia buxifolia Reissek) (n = 10), were analysed using melissopalynology (pollen identification). Principal component analysis (PCA) and linear discriminant analysis (LDA) were used to classify the honey samples according to their botanical origin based on the pollen count. Honey samples with a higher percentage (> 70%) of Eucalyptus, Lotus and Scutia pollen were 100% correctly classified, whilst samples from Myrtaceae spp. and Salix were 80 and 66% correctly classified, respectively. The use of PCA and LDA combined with pollen identification proved useful in characterizing honey samples from different botanical origins.
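
    As a hedged illustration of the PCA-plus-LDA step described above (not the study's code or data), the following scikit-learn sketch builds a pipeline over toy pollen-count percentages and reports a cross-validated confusion matrix per botanical origin.

```python
# Illustrative sketch (not the study's code) of classifying honey samples from
# pollen-count profiles with PCA followed by linear discriminant analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.poisson(lam=5.0, size=(49, 30)).astype(float)   # 49 samples x 30 pollen types (toy counts)
y = rng.integers(0, 5, size=49)                          # 5 botanical origins (toy labels)

# Normalise counts to percentages per sample, since pollen data are compositional.
X = 100.0 * X / X.sum(axis=1, keepdims=True)

model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
pred = cross_val_predict(model, X, y, cv=5)
print(confusion_matrix(y, pred))   # per-origin correct-classification counts
```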

  13. Combining genetic algorithm and Levenberg-Marquardt algorithm in training neural network for hypoglycemia detection using EEG signals.

    Science.gov (United States)

    Nguyen, Lien B; Nguyen, Anh V; Ling, Sai Ho; Nguyen, Hung T

    2013-01-01

    Hypoglycemia is the most common but highly feared complication induced by the intensive insulin therapy in patients with type 1 diabetes mellitus (T1DM). Nocturnal hypoglycemia is dangerous because sleep obscures early symptoms and potentially leads to severe episodes which can cause seizure, coma, or even death. It is shown that the hypoglycemia onset induces early changes in electroencephalography (EEG) signals which can be detected non-invasively. In our research, EEG signals from five T1DM patients during an overnight clamp study were measured and analyzed. By applying a method of feature extraction using Fast Fourier Transform (FFT) and classification using neural networks, we establish that hypoglycemia can be detected efficiently using EEG signals from only two channels. This paper demonstrates that by implementing a training process of combining genetic algorithm and Levenberg-Marquardt algorithm, the classification results are improved markedly up to 75% sensitivity and 60% specificity on a separate testing set.
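
    A minimal sketch of the kind of pipeline described above follows, using synthetic two-channel segments, FFT band-power features and a scikit-learn neural network in place of the authors' GA/Levenberg-Marquardt-trained model; sensitivity and specificity are computed from the confusion matrix.

```python
# Sketch only: FFT band-power features from two EEG channels feeding a neural
# network classifier, with sensitivity and specificity on a held-out set.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
fs = 128                                        # assumed sampling rate (Hz)
segments = rng.normal(size=(300, 2, fs * 4))    # 300 four-second, two-channel segments (toy)
labels = rng.integers(0, 2, size=300)           # 1 = hypoglycaemic, 0 = normal (toy)

def band_power(channel, lo, hi):
    """Mean power of one channel in the [lo, hi] Hz band."""
    spec = np.abs(np.fft.rfft(channel)) ** 2
    freqs = np.fft.rfftfreq(channel.size, d=1.0 / fs)
    return spec[(freqs >= lo) & (freqs < hi)].mean()

bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]   # delta, theta, alpha, beta
X = np.array([[band_power(ch, lo, hi) for ch in seg for lo, hi in bands]
              for seg in segments])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```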

  14. Impact of dam failure-induced flood on road network using combined remote sensing and geospatial approach

    Science.gov (United States)

    Foumelis, Michael

    2017-01-01

    The applicability of the normalized difference water index (NDWI) to the delineation of dam failure-induced floods is demonstrated for the case of the Sparmos dam (Larissa, Central Greece). The approach followed was based on the differentiation of NDWI maps to accurately define the extent of the inundated area over different time spans using multimission Earth observation optical data. Besides using Landsat data, for which the index was initially designed, higher spatial resolution data from Sentinel-2 mission were also successfully exploited. A geospatial analysis approach was then introduced to rapidly identify potentially affected segments of the road network. This allowed for further correlation to actual damages in the following damage assessment and remediation activities. The proposed combination of geographic information systems and remote sensing techniques can be easily implemented by local authorities and civil protection agencies for mapping and monitoring flood events.
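
    The core of the approach is the differencing of NDWI maps. A minimal numpy sketch follows, with toy arrays standing in for co-registered green and near-infrared bands; the 0.3 change threshold is an assumption, not a value from the study.

```python
# Sketch of the NDWI-differencing idea (assumed band arrays, not the study's
# data): compute NDWI before and after the event and flag newly inundated pixels.
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): (G - NIR) / (G + NIR)."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + 1e-9)

# Toy reflectance rasters; in practice these would be co-registered
# Landsat or Sentinel-2 green and near-infrared bands.
rng = np.random.default_rng(0)
green_pre, nir_pre = rng.random((2, 100, 100))
green_post, nir_post = rng.random((2, 100, 100))

delta = ndwi(green_post, nir_post) - ndwi(green_pre, nir_pre)
flooded = delta > 0.3        # assumed threshold; tuned per scene in practice
print("flooded pixels:", int(flooded.sum()))
```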

  15. Study of Aided Diagnosis of Hepatic Carcinoma Based on Artificial Neural Network Combined with Tumor Marker Group

    Science.gov (United States)

    Tan, Shanjuan; Feng, Feifei; Wu, Yongjun; Wu, Yiming

    To develop a computer-aided diagnostic scheme by using an artificial neural network (ANN) combined with tumor markers for diagnosis of hepatic carcinoma (HCC) as a clinical assistant method. 140 serum samples (50 malignant, 40 benign and 50 normal) were analyzed for α-fetoprotein (AFP), carbohydrate antigen 125 (CA125), carcinoembryonic antigen (CEA), sialic acid (SA) and calcium (Ca). The five tumor marker values were then used as ANN input data. The result of the ANN was compared with that of discriminant analysis by area under the receiver operating characteristic (ROC) curve (AUC) analysis. The diagnostic accuracy of the ANN and discriminant analysis among all samples of the test group was 95.5% and 79.3%, respectively. Analysis of multiple tumor markers based on an ANN may be a better choice than the traditional statistical methods for differentiating HCC from benign or normal samples.
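
    As a hedged sketch of the comparison reported above (synthetic marker values and scikit-learn models rather than the authors' ANN and discriminant analysis), the snippet below trains both models on five-marker inputs and compares them by ROC AUC.

```python
# Toy comparison of an MLP versus linear discriminant analysis on five
# tumour-marker inputs, evaluated by ROC AUC. All values are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 5))            # AFP, CA125, CEA, SA, Ca (toy values)
y = rng.integers(0, 2, size=140)         # 1 = HCC, 0 = benign/normal (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

print("ANN AUC:", roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1]))
print("LDA AUC:", roc_auc_score(y_te, lda.predict_proba(X_te)[:, 1]))
```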

  16. Combined use of BP neural network and computational integral imaging reconstruction for optical multiple-image security

    Science.gov (United States)

    Li, Xiao Wei; Cho, Sung Jin; Kim, Seok Tae

    2014-03-01

    Integral imaging can provide a feasible and efficient technique for a multiple-image encoding system. The computational integral imaging reconstruction (CIIR) technique reconstructs a set of plane images along the output plane, but the resolution of the reconstructed images degrades due to the partial occlusion of other reconstructed images. Moreover, CIIR is a pixel-overlapping reconstruction method, in which the superimposition causes undesirable interference. To overcome these problems, we first utilize the block matching algorithm to eliminate the occlusion disturbance and introduce the back-propagation neural network algorithm to compensate for the low-resolution image. In the encryption, a computational integral imaging pickup technique is employed to record the multiple images simultaneously to form an elemental image array (EIA). The EIA is then encrypted by combining the use of maximum length cellular automata (CA) and the double random phase encoding algorithm. Numerical simulations have been made to demonstrate the performance of this encryption algorithm.

  17. A Practical Application Combining Wireless Sensor Networks and Internet of Things: Safety Management System for Tower Crane Groups

    Directory of Open Access Journals (Sweden)

    Dexing Zhong

    2014-07-01

    Full Text Available The so-called Internet of Things (IoT) has attracted increasing attention in the field of computer and information science. In this paper, a specific application of IoT, named Safety Management System for Tower Crane Groups (SMS-TC), is proposed for use in the construction industry field. The operating status of each tower crane was detected by a set of customized sensors, including horizontal and vertical position sensors for the trolley, angle sensors for the jib and load, and tilt and wind speed sensors for the tower body. The sensor data is collected and processed by the Tower Crane Safety Terminal Equipment (TC-STE) installed in the driver’s operating room. Wireless communication between each TC-STE and the Local Monitoring Terminal (LMT) at the ground worksite was fulfilled through a Zigbee wireless network. The LMT can share the status information of the whole group with each TC-STE, while the LMT records the real-time data and reports it to the Remote Supervision Platform (RSP) through General Packet Radio Service (GPRS). Based on the global status data of the whole group, an anti-collision algorithm was executed in each TC-STE to ensure the safety of each tower crane during construction. Remote supervision can be fulfilled using our client software installed on a personal computer (PC) or smartphone. SMS-TC could be considered as a promising practical application that combines a Wireless Sensor Network with the Internet of Things.

  18. Combining Image and Non-Image Data for Automatic Detection of Retina Disease in a Telemedicine Network

    Energy Technology Data Exchange (ETDEWEB)

    Aykac, Deniz [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK); Fox, Karen [Delta Health Alliance; Garg, Seema [University of North Carolina; Giancardo, Luca [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Nichols, Trent L [ORNL; Tobin Jr, Kenneth William [ORNL

    2011-01-01

    A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening for diabetic retinopathy (DR) and other eye diseases. In the process of a routine eye-screening examination, other non-image data is often available which may be useful in automated diagnosis of disease. In this work, we report on the results of combining this non-image data with image data, using the protocol and processing steps of a prototype system for automated disease diagnosis of retina examinations from a telemedicine network. The system includes quality assessments, automated physiology detection, and automated lesion detection to create an archive of known cases. Non-image data such as diabetes onset date and hemoglobin A1c (HgA1c) for each patient examination are included as well, and the system is used to create a content-based image retrieval engine capable of automated diagnosis of disease into 'normal' and 'abnormal' categories. The system achieves a sensitivity and specificity of 91.2% and 71.6% using hold-one-out validation testing.

  19. A New Method for the Determination of Potassium Sorbate Combining Fluorescence Spectra Method with PSO-BP Neural Network.

    Science.gov (United States)

    Wang, Shu-tao; Chen, Dong-ying; Wang, Xing-long; Wei, Meng; Wang, Zhi-fang

    2015-12-01

    In this paper, the fluorescence spectral properties of potassium sorbate in aqueous solution and in orange juice are studied. The results show that the fluorescence spectra of potassium sorbate differ considerably between the two solutions, but in both the fluorescence characteristic peak exists at λ(ex)/λ(em) = 375/490 nm. It can be seen from the two-dimensional fluorescence spectra that the relationship between the fluorescence intensity and the concentration of potassium sorbate is very complex, and there is no linear relationship between them. To determine the concentration of potassium sorbate in orange juice, a new method combining the Particle Swarm Optimization (PSO) algorithm with a Back Propagation (BP) neural network is proposed. The relative errors of two predicted concentrations are 1.83% and 1.53%, respectively, which indicates that the method is feasible. The PSO-BP neural network can accurately measure the concentration of potassium sorbate in orange juice in the range of 0.1-2.0 g · L⁻¹.
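
    The following numpy sketch illustrates the general PSO-trained-network idea on synthetic data; it is not the paper's PSO-BP implementation (which couples PSO with back-propagation fine-tuning), and all features, network sizes and PSO constants are assumptions.

```python
# Rough sketch: particle swarm optimisation searches the weights of a small
# one-hidden-layer network mapping spectral features to concentration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((60, 4))                        # 60 samples x 4 spectral features (toy)
y_true = X @ np.array([0.5, 1.0, -0.3, 0.8])   # toy concentrations

n_hidden = 5
n_w = 4 * n_hidden + n_hidden                  # input->hidden plus hidden->output weights

def predict(w, X):
    W1 = w[:4 * n_hidden].reshape(4, n_hidden)
    W2 = w[4 * n_hidden:]
    return np.tanh(X @ W1) @ W2

def mse(w):
    return np.mean((predict(w, X) - y_true) ** 2)

# Standard global-best PSO over the flattened weight vector.
n_particles, iters = 30, 200
pos = rng.normal(size=(n_particles, n_w))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_w))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([mse(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("final training MSE:", mse(gbest))
```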

  20. Assessment of the service performance of drainage system and transformation of pipeline network based on urban combined sewer system model.

    Science.gov (United States)

    Peng, Hai-Qin; Liu, Yan; Wang, Hong-Wu; Ma, Lu-Ming

    2015-10-01

    In recent years, due to global climate change and rapid urbanization, extreme weather events affect cities with increasing frequency. Waterlogging is common because of heavy rains. In these conditions, the urban drainage system can no longer meet its original design requirements, resulting in traffic jams or even paralysis, and posing a threat to urban safety. Therefore, accurately assessing the capacity of the drainage system and correctly simulating the transport behaviour of the drainage network and the carrying capacity of drainage facilities provide a necessary foundation for urban drainage planning and design. This study adopts InfoWorks Integrated Catchment Management (ICM) to model the two combined sewer drainage systems in Yangpu District, Shanghai (China). The model can assist the design of the drainage system. Model calibration is performed based on historical rainfall events. The calibrated model is used for the assessment of the outlet drainage and pipe loads for storm scenarios currently existing or possibly occurring in the future. The study found that the simulation and analysis results of the drainage system model were reliable. They could fully reflect the service performance of the drainage system in the study area and provide decision-making support for regional flood control and transformation of the pipeline network.

  1. A practical application combining wireless sensor networks and Internet of Things: Safety Management System for Tower Crane Groups.

    Science.gov (United States)

    Zhong, Dexing; Lv, Hongqiang; Han, Jiuqiang; Wei, Quanrui

    2014-07-30

    The so-called Internet of Things (IoT) has attracted increasing attention in the field of computer and information science. In this paper, a specific application of IoT, named Safety Management System for Tower Crane Groups (SMS-TC), is proposed for use in the construction industry field. The operating status of each tower crane was detected by a set of customized sensors, including horizontal and vertical position sensors for the trolley, angle sensors for the jib and load, tilt and wind speed sensors for the tower body. The sensor data is collected and processed by the Tower Crane Safety Terminal Equipment (TC-STE) installed in the driver's operating room. Wireless communication between each TC-STE and the Local Monitoring Terminal (LMT) at the ground worksite were fulfilled through a Zigbee wireless network. LMT can share the status information of the whole group with each TC-STE, while the LMT records the real-time data and reports it to the Remote Supervision Platform (RSP) through General Packet Radio Service (GPRS). Based on the global status data of the whole group, an anti-collision algorithm was executed in each TC-STE to ensure the safety of each tower crane during construction. Remote supervision can be fulfilled using our client software installed on a personal computer (PC) or smartphone. SMS-TC could be considered as a promising practical application that combines a Wireless Sensor Network with the Internet of Things.

  2. On Singularities and Black Holes in Combination-Driven Models of Technological Innovation Networks.

    Science.gov (United States)

    Solé, Ricard; Amor, Daniel R; Valverde, Sergi

    2016-01-01

    It has been suggested that innovations occur mainly by combination: the more inventions accumulate, the higher the probability that new inventions are obtained from previous designs. Additionally, it has been conjectured that the combinatorial nature of innovations naturally leads to a singularity: at some finite time, the number of innovations should diverge. Although these ideas are certainly appealing, no general models have been yet developed to test the conditions under which combinatorial technology should become explosive. Here we present a generalised model of technological evolution that takes into account two major properties: the number of previous technologies needed to create a novel one and how rapidly technology ages. Two different models of combinatorial growth are considered, involving different forms of ageing. When long-range memory is used and thus old inventions are available for novel innovations, singularities can emerge under some conditions with two phases separated by a critical boundary. If the ageing has a characteristic time scale, it is shown that no singularities will be observed. Instead, a "black hole" of old innovations appears and expands in time, making the rate of invention creation slow down into a linear regime.

  3. On Singularities and Black Holes in Combination-Driven Models of Technological Innovation Networks.

    Directory of Open Access Journals (Sweden)

    Ricard Solé

    Full Text Available It has been suggested that innovations occur mainly by combination: the more inventions accumulate, the higher the probability that new inventions are obtained from previous designs. Additionally, it has been conjectured that the combinatorial nature of innovations naturally leads to a singularity: at some finite time, the number of innovations should diverge. Although these ideas are certainly appealing, no general models have been yet developed to test the conditions under which combinatorial technology should become explosive. Here we present a generalised model of technological evolution that takes into account two major properties: the number of previous technologies needed to create a novel one and how rapidly technology ages. Two different models of combinatorial growth are considered, involving different forms of ageing. When long-range memory is used and thus old inventions are available for novel innovations, singularities can emerge under some conditions with two phases separated by a critical boundary. If the ageing has a characteristic time scale, it is shown that no singularities will be observed. Instead, a "black hole" of old innovations appears and expands in time, making the rate of invention creation slow down into a linear regime.

  4. Placement of Combined Heat, Power and Hydrogen Production Fuel Cell Power Plants in a Distribution Network

    Directory of Open Access Journals (Sweden)

    Bahman Bahmanifirouzi

    2012-03-01

    Full Text Available This paper presents a new Fuzzy Adaptive Modified Particle Swarm Optimization algorithm (FAMPSO) for the placement of Fuel Cell Power Plants (FCPPs) in distribution systems. FCPPs, as Distributed Generation (DG) units, can be considered as Combined sources of Heat, Power, and Hydrogen (CHPH). CHPH operation of FCPPs can improve overall system efficiency, as well as produce hydrogen which can be stored for the future use of FCPPs or can be sold for profit. The objective functions investigated are minimizing the operating costs of electrical energy generation of distribution substations and FCPPs, minimizing the voltage deviation and minimizing the total emission. In this regard, this paper considers only the placement of CHPH FCPPs, while the investment cost of devices is not considered. Considering the fact that the objectives are different, non-commensurable and nonlinear, it is difficult to solve the problem using conventional approaches that may optimize a single objective. Moreover, the placement of FCPPs in distribution systems is a mixed integer problem. Therefore, this paper uses the FAMPSO algorithm to overcome these problems. For solving the proposed multi-objective problem, this paper utilizes the Pareto optimality idea to obtain a set of solutions to the multi-objective problem instead of only one. Also, a fuzzy system is used to tune parameters of the FAMPSO algorithm, such as the inertia weight. The efficacy of the proposed approach is validated on a 69-bus distribution system.

  5. Integrative analysis of kinase networks in TRAIL-induced apoptosis provides a source of potential targets for combination therapy

    DEFF Research Database (Denmark)

    So, Jonathan; Pasculescu, Adrian; Dai, Anna Y.

    2015-01-01

    -induced apoptosis in the colon adenocarcinoma cell line DLD-1. We classified the kinases as sensitizers or resistors or modulators, depending on the effect that knockdown and overexpression had on TRAIL-induced apoptosis. Two of these kinases that were classified as resistors were PX domain-containing serine...

  6. A method to detect single and multiple delamination problems using a combined neural network technique and genetic algorithm optimization

    Science.gov (United States)

    Le, Hieu The

    This thesis develops a new method to detect delaminations in composite laminates using a combination of the finite element method, artificial neural networks, and genetic algorithms. This newly developed method is then applied to successfully solve delamination detection problems. Delaminations in a composite laminate with various sizes and locations are considered in the present studies. The improved layerwise shear deformation theory is implemented into the finite element method and used to calculate responses of laminates with single and multiple delaminations. Mappings between the natural frequencies and delamination characteristics are first determined from the developed models. These data are then used to train multilayer perceptron artificial neural networks using back-propagation. These trained artificial neural networks are in turn used as an approximate tool to calculate the responses of the delaminated laminates and to feed the data to the delamination detection process. Two different approaches for handling the neural network models are applied in the work and are presented for comparison. The delamination detection problem is formulated as an optimization problem with mixed-type design variables. A genetic algorithm, which is a guided probabilistic search technique based on the simulation of Darwin's principle of evolution and natural selection, is developed to solve this optimization problem. Single through-the-width delamination, single internal delamination, and multiple through-the-width delaminations are separately considered for detection study. Finally, the application is extended to the most challenging problem, which is the detection of general delamination. Various factors affecting the detection process, such as the finite element convergence factor and the laminate geometry factor, are also examined. Case studies are made and the findings are summarized in detail in each chapter of the dissertation. It is found that the newly developed

  7. Deep learning approach for classifying, detecting and predicting photometric redshifts of quasars in the Sloan Digital Sky Survey stripe 82

    Science.gov (United States)

    Pasquet-Itam, J.; Pasquet, J.

    2018-04-01

    We have applied a convolutional neural network (CNN) to classify and detect quasars in the Sloan Digital Sky Survey Stripe 82 and also to predict the photometric redshifts of quasars. The network takes the variability of objects into account by converting light curves into images. The width of the images, denoted w, corresponds to the five magnitudes ugriz and the height of the images, denoted h, represents the date of the observation. The CNN provides good results since its precision is 0.988 for a recall of 0.90, compared to a precision of 0.985 for the same recall with a random forest classifier. Moreover, 175 new quasar candidates are found with the CNN considering a fixed recall of 0.97. The combination of probabilities given by the CNN and the random forest makes this good performance even better, with a precision of 0.99 for a recall of 0.90. For the redshift predictions, the CNN presents excellent results which are higher than those obtained with a feature extraction step and different classifiers (a K-nearest-neighbors, a support vector machine, a random forest and a Gaussian process classifier). Indeed, the accuracy of the CNN within |Δz| http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A97
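
    The probability-combination step lends itself to a very small sketch. The snippet below averages the class probabilities of two independently trained scikit-learn models standing in for the paper's CNN and random forest, then thresholds the fused probability; data and threshold are synthetic assumptions.

```python
# Sketch of fusing two classifiers by averaging their predicted probabilities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)   # toy quasar/non-quasar labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X_tr, y_tr)

# Average the positive-class probabilities of the two models, then threshold.
p_combined = 0.5 * (rf.predict_proba(X_te)[:, 1] + nn.predict_proba(X_te)[:, 1])
pred = (p_combined >= 0.5).astype(int)
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))
```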

  8. A comparative study of pattern recognition classifiers to predict physical activities using smartphones and wearable body sensors.

    Science.gov (United States)

    Kouris, Ioannis; Koutsouris, Dimitris

    2012-01-01

    This paper presents a wireless body area network platform that performs physical activities recognition using accelerometers, biosignals and smartphones. Multiple classifiers and sensor combinations were examined to identify the classifier with the best recognition performance for the static and dynamic activities. The Functional Trees classifier proved to provide the best results among the classifiers evaluated (Naive Bayes, Bayesian Networks, Support Vector Machines and Decision Trees [C4.5, Random Forest]) and was used to train the model which was implemented for the real time activity recognition on the smartphone. The identified patterns of daily physical activities were used to examine conformance with medical advice, regarding physical activity guidelines. An algorithm based on Skip Chain Conditional Random Fields, received as inputs the recognized activities and data retrieved from the GPS receiver of the smartphone to develop dynamic daily patterns that enhance prediction results. The presented platform can be extended to be used in the prevention of short-term complications of metabolic diseases such as diabetes.
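
    A hedged illustration of the classifier comparison described above: the scikit-learn snippet below cross-validates several of the named classifiers on toy per-window accelerometer features (the Functional Trees classifier has no standard scikit-learn counterpart and is omitted).

```python
# Toy cross-validated comparison of several activity-recognition classifiers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))                 # e.g. per-window accelerometer statistics (toy)
y = rng.integers(0, 4, size=600)               # four activity classes (toy labels)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "Decision tree (C4.5-like)": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```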

  9. High-performance combination method of electric network frequency and phase for audio forgery detection in battery-powered devices.

    Science.gov (United States)

    Savari, Maryam; Abdul Wahab, Ainuddin Wahid; Anuar, Nor Badrul

    2016-09-01

    Audio forgery is any act of tampering with, illegally copying or faking the quality of audio in a criminal way. In the last decade, there has been increasing attention to audio forgery detection due to a significant increase in the number of forgeries in different types of audio. There are a number of methods for forgery detection, of which electric network frequency (ENF) is one of the most powerful in terms of accuracy. In spite of the suitable accuracy of ENF in the majority of plug-in powered devices, the weak accuracy of ENF in audio forgery detection for battery-powered devices, especially laptops and mobile phones, can be considered one of its main obstacles. To solve the ENF accuracy problem in battery-powered devices, a combination method of ENF and a phase feature is proposed. In the experiments conducted, ENF alone gives 50% and 60% accuracy for forgery detection in mobile phones and laptops respectively, while the proposed method shows 88% and 92% accuracy respectively, for forgery detection in battery-powered devices. The results show that combining ENF with the phase feature leads to higher accuracy for forgery detection. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Classifying Transition Behaviour in Postural Activity Monitoring

    Directory of Open Access Journals (Sweden)

    James BRUSEY

    2009-10-01

    Full Text Available A few accelerometers positioned on different parts of the body can be used to accurately classify steady state behaviour, such as walking, running, or sitting. Such systems are usually built using supervised learning approaches. Transitions between postures are, however, difficult to deal with using the posture classification systems proposed to date, since there is no label set for intermediary postures and also the exact point at which the transition occurs can sometimes be hard to pinpoint. The usual workaround when using supervised learning to train such systems is to discard a section of the dataset around each transition. This leads to poorer classification performance when the systems are deployed out of the laboratory and used on-line, particularly if the regimes monitored involve fast paced activity changes. Time-based filtering that takes advantage of sequential patterns is a potential mechanism to improve posture classification accuracy in such real-life applications. Also, such filtering should reduce the number of event messages needed to be sent across a wireless network to track posture remotely, hence extending the system’s life. To support time-based filtering, understanding transitions, which are the major event generators in a classification system, is key. This work examines three approaches to post-process the output of a posture classifier using time-based filtering: a naïve voting scheme, an exponentially weighted voting scheme, and a Bayes filter. Best performance is obtained from the exponentially weighted voting scheme, although it is suspected that a more sophisticated treatment of the Bayes filter might yield better results.
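
    The exponentially weighted voting scheme can be sketched in a few lines. The snippet below is an assumed formulation (the decay factor and toy label stream are illustrative, not taken from the paper): each new classifier output adds a vote whose influence decays exponentially, and the filtered label is the class with the highest accumulated score.

```python
# Sketch of an exponentially weighted voting post-filter over a label stream.
import numpy as np

def exp_weighted_filter(labels, n_classes, alpha=0.3):
    """Smooth a stream of hard labels; alpha controls how fast old votes decay."""
    scores = np.zeros(n_classes)
    smoothed = []
    for lab in labels:
        scores *= (1.0 - alpha)          # decay all previous votes
        scores[lab] += alpha             # add the newest vote
        smoothed.append(int(scores.argmax()))
    return smoothed

# Toy stream: sitting (0) -> walking (1) with a few misclassified frames.
raw = [0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1]
print(exp_weighted_filter(raw, n_classes=2))
```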

  11. Embedding global barrier and collective in torus network with each node combining input from receivers according to class map for output to senders

    Science.gov (United States)

    Chen, Dong; Coteus, Paul W; Eisley, Noel A; Gara, Alan; Heidelberger, Philip; Senger, Robert M; Salapura, Valentina; Steinmacher-Burow, Burkhard; Sugawara, Yutaka; Takken, Todd E

    2013-08-27

    Embodiments of the invention provide a method, system and computer program product for embedding a global barrier and global interrupt network in a parallel computer system organized as a torus network. The computer system includes a multitude of nodes. In one embodiment, the method comprises taking inputs from a set of receivers of the nodes, dividing the inputs from the receivers into a plurality of classes, combining the inputs of each of the classes to obtain a result, and sending said result to a set of senders of the nodes. Embodiments of the invention provide a method, system and computer program product for embedding a collective network in a parallel computer system organized as a torus network. In one embodiment, the method comprises adding to a torus network a central collective logic to route messages among at least a group of nodes in a tree structure.

  12. Passive and Active Analysis in DSR-Based Ad Hoc Networks

    Science.gov (United States)

    Dempsey, Tae; Sahin, Gokhan; Morton, Y. T. (Jade)

    Security and vulnerabilities in wireless ad hoc networks have been considered at different layers, and many attack strategies have been proposed, including denial of service (DoS) through the intelligent jamming of the most critical packet types of flows in a network. This paper investigates the effectiveness of intelligent jamming in wireless ad hoc networks using the Dynamic Source Routing (DSR) and TCP protocols and introduces an intelligent classifier to facilitate the jamming of such networks. Assuming encrypted packet headers and contents, our classifier is based solely on the observable characteristics of size, inter-arrival timing, and direction and classifies packets with up to 99.4% accuracy in our experiments. Furthermore, we investigate active analysis, which is the combination of a classifier and intelligent jammer to invoke specific responses from a victim network.

  13. Networking

    OpenAIRE

    Rauno Lindholm, Daniel; Boisen Devantier, Lykke; Nyborg, Karoline Lykke; Høgsbro, Andreas; Fries, de; Skovlund, Louise

    2016-01-01

    The purpose of this project was to examine which influencing factors have had an impact on the presumed increase in the use of networking among academics on the labour market, and how this is expressed. On the basis of the influence of globalization on the labour market, it can be concluded that globalization has transformed the labour market into a market based on the organization of networks. In this new organization there is a greater emphasis on employees having social qualificati...

  14. Malignancy and Abnormality Detection of Mammograms using Classifier Ensembling

    Directory of Open Access Journals (Sweden)

    Nawazish Naveed

    2011-07-01

    Full Text Available Breast cancer detection and diagnosis is a critical and complex procedure that demands a high degree of accuracy. In computer aided diagnostic systems, breast cancer detection is a two stage procedure: first, malignant and benign mammograms are classified, while in the second stage the type of abnormality is detected. In this paper, we have developed a novel architecture to enhance the classification of malignant and benign mammograms using multi-classification of malignant mammograms into six abnormality classes. DWT (Discrete Wavelet Transform) features are extracted from preprocessed images and passed through different classifiers. To improve accuracy, the results generated by the various classifiers are ensembled. A genetic algorithm is used to find optimal weights rather than assigning weights to the results of classifiers on the basis of heuristics. The mammograms declared as malignant by the ensemble classifiers are divided into six classes. The ensemble classifiers are further used for multiclassification using the one-against-all technique. The output of all ensemble classifiers is combined by the product, median and mean rules. It has been observed that the accuracy of classification of abnormalities is more than 97% in the case of the mean rule. The Mammographic Image Analysis Society dataset is used for experimentation.
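
    As an illustration of the fixed fusion rules mentioned above (mean, median and product), the following scikit-learn sketch stacks the class probabilities of three generic classifiers on synthetic data and fuses them with each rule; it does not reproduce the paper's DWT features, GA-weighted ensembling or mammogram data.

```python
# Sketch of combining classifier probability outputs by mean, median and product rules.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))
y = rng.integers(0, 6, size=500)         # six abnormality classes (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
members = [LogisticRegression(max_iter=1000),
           GaussianNB(),
           RandomForestClassifier(random_state=0)]
probas = np.stack([m.fit(X_tr, y_tr).predict_proba(X_te) for m in members])

rules = {
    "mean": probas.mean(axis=0),
    "median": np.median(probas, axis=0),
    "product": probas.prod(axis=0),
}
for name, fused in rules.items():
    print(name, "accuracy:", accuracy_score(y_te, fused.argmax(axis=1)))
```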

  15. Frog sound identification using extended k-nearest neighbor classifier

    Science.gov (United States)

    Mukahar, Nordiana; Affendi Rosdi, Bakhtiar; Athiar Ramli, Dzati; Jaafar, Haryati

    2017-09-01

    Frog sound identification based on vocalization is important for biological research and environmental monitoring. As a result, different types of feature extraction and classifiers have been employed to evaluate the accuracy of frog sound identification. This paper presents frog sound identification with an Extended k-Nearest Neighbor (EKNN) classifier. The EKNN classifier integrates the nearest neighbors and mutual sharing of neighborhood concepts, with the aim of improving the classification performance. It makes a prediction based on which samples are the nearest neighbors of the testing sample and which samples consider the testing sample as their nearest neighbor. In order to evaluate the classification performance in frog sound identification, the EKNN classifier is compared with competing classifiers, k-Nearest Neighbor (KNN), Fuzzy k-Nearest Neighbor (FKNN), k-General Nearest Neighbor (KGNN) and Mutual k-Nearest Neighbor (MKNN), on the recorded sounds of 15 frog species obtained in Malaysian forest. The recorded sounds have been segmented using Short Time Energy and Short Time Average Zero Crossing Rate (STE+STAZCR), sinusoidal modeling (SM), manual segmentation, and the combination of Energy (E) and Zero Crossing Rate (ZCR) (E+ZCR), while the features are extracted by Mel Frequency Cepstrum Coefficients (MFCC). The experimental results show that the EKNN classifier exhibits the best performance in terms of accuracy compared to the competing classifiers KNN, FKNN, KGNN and MKNN for all cases.

  16. Classifying publications from the clinical and translational science award program along the translational research spectrum: a machine learning approach.

    Science.gov (United States)

    Surkis, Alisa; Hogle, Janice A; DiazGranados, Deborah; Hunt, Joe D; Mazmanian, Paul E; Connors, Emily; Westaby, Kate; Whipple, Elizabeth C; Adamus, Trisha; Mueller, Meridith; Aphinyanaphongs, Yindalon

    2016-08-05

    Classifier performance was measured by the area under the receiver operating characteristic curves (AUC), with an AUC of 0.94 for the T0 classifier, 0.84 for T1/T2, and 0.92 for T3/T4. The combination of definitions agreed upon by five CTSA hubs, a checklist that facilitates more uniform definition interpretation, and algorithms that perform well in classifying publications along the translational spectrum provide a basis for establishing and applying uniform definitions of translational research categories. The classification algorithms allow publication analyses that would not be feasible with manual classification, such as assessing the distribution and trends of publications across the CTSA network and comparing the categories of publications and their citations to assess knowledge transfer across the translational research spectrum.

  17. Combining Observations of a Digital Camera Network, Satellite Remote Sensing, and Micrometeorology for Improved Understanding of Forest Phenology

    Science.gov (United States)

    Braswell, B. H.; Richardson, A. D.; Ollinger, S. V.; Friedl, M. A.; Hollinger, D. Y.

    2009-04-01

    The observed phenological behavior of terrestrial ecosystems is a result of the seasonality of climatic forcing superposed with physical and biological responses of the plant-soil system. Biogeochemical models that represent rapid time scale phenomena well tend to simulate interannual variability and trends in productivity more accurately when phenology is prescribed, suggesting a gap in our understanding of the underlying processes or a generic means to represent their emergent behavior. Specifically, questions surround environmental triggers of leaf turnover, the relative importance of internal nutrient cycling, and the potential for generalization across broadly defined biome types. Satellite observations provide a spatially comprehensive record of the seasonality of land vegetation characteristics, but are most valuable when combined with direct measurements of ecosystem state. Time series of meteorology and fluxes (e.g. from eddy covariance tower sites) are one such data source, providing a valuable means to estimate productivity, but not a view of the state of the vegetation canopy. We have begun to assemble a network of digital cameras ('webcams') by deploying camera systems at existing research sites, and by harvesting imagery from collaborating sites and institutions. There are currently 80 cameras in the network, 17 of which are 'core' locations that are located at flux towers or field stations. We process and analyze the camera imagery as remote sensing data, utilizing the red, green, and blue channels as a means to stratify the scenes and quantify relative vegetation 'greenness'. Our initial analyses have shown that these images do yield hourly-to-daily information about the seasonal cycle of vegetation state as compared both to fluxes and satellite indices. This presentation will summarize the current findings of the project, specifically focusing on (a) insights into controls on interannual variability at sites with long records (2000-present), and
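
    The per-image greenness computation is typically a simple chromatic-coordinate calculation. The snippet below is an assumed formulation (gcc = G / (R + G + B)) over a synthetic RGB region of interest, not the project's processing code.

```python
# Minimal sketch of a relative-greenness (green chromatic coordinate) computation.
import numpy as np

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(480, 640, 3)).astype(float)   # toy RGB region of interest

r, g, b = roi[..., 0], roi[..., 1], roi[..., 2]
gcc = g / (r + g + b + 1e-9)          # per-pixel greenness fraction
print("mean canopy greenness for this image:", gcc.mean())
```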

  18. Aggregation Operator Based Fuzzy Pattern Classifier Design

    DEFF Research Database (Denmark)

    Mönks, Uwe; Larsen, Henrik Legind; Lohweg, Volker

    2009-01-01

    This paper presents a novel modular fuzzy pattern classifier design framework for intelligent automation systems, developed on the base of the established Modified Fuzzy Pattern Classifier (MFPC) and allows designing novel classifier models which are hardware-efficiently implementable. The perfor...

  19. 15 CFR 4.8 - Classified Information.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily the...

  20. A native Bayesian classifier based routing protocol for VANETS

    Science.gov (United States)

    Bao, Zhenshan; Zhou, Keqin; Zhang, Wenbo; Gong, Xiaolei

    2016-12-01

    Geographic routing protocols are one of the most active research areas in VANETs (Vehicular Ad-hoc Networks). However, few routing protocols can take both transmission efficiency and usage ratio into account. As we have noticed, different messages in a VANET may require different qualities of service. We therefore propose a Naive Bayesian Classifier based routing protocol (Naive Bayesian Classifier-Greedy, NBC-Greedy), which can classify and transmit different messages according to their degree of urgency. As a result, we can balance transmission efficiency and usage ratio with this protocol. Based on Matlab simulation, we can draw the conclusion that NBC-Greedy is more efficient and stable than LR-Greedy and GPSR.
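
    A toy sketch of the classification component follows: a Gaussian naive Bayes model assigns an urgency class to a message from a few observable features, which a greedy forwarding layer could then use for prioritisation. The feature set and class labels are assumptions for illustration, not taken from the paper.

```python
# Sketch: Gaussian naive Bayes assigning an urgency class to a message.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Assumed per-message features: [size_bytes, sender_speed_mps, hops_so_far]
X = rng.normal(loc=[500, 15, 3], scale=[200, 5, 2], size=(400, 3))
y = rng.integers(0, 3, size=400)        # 0 = routine, 1 = important, 2 = emergency (toy)

nb = GaussianNB().fit(X, y)
new_msg = np.array([[120.0, 25.0, 1.0]])
print("predicted urgency class:", int(nb.predict(new_msg)[0]))
print("class probabilities:", nb.predict_proba(new_msg).round(3))
```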

  1. Hybrid feature vector extraction in unsupervised learning neural classifier.

    Science.gov (United States)

    Kostka, P S; Tkacz, E J; Komorowski, D

    2005-01-01

    A feature extraction and selection method is presented as a preliminary stage of an unsupervised learning neural classifier for heart rate variability (HRV) signals. A new multi-domain, mixed feature vector is created from time, frequency and time-frequency parameters of HRV analysis. The optimal feature set for a given classification task was chosen as a result of feature ranking, obtained after computing a class separability measure for every independent feature. This new signal representation in the reduced feature space is the input to a neural classifier based on the Adaptive Resonance Theory (ART2) structure introduced by Grossberg. Tests of the proposed method, carried out on 62 patients with coronary artery disease divided into learning and verification sets, allowed choosing the features that gave the best results. The classifier performance measures obtained for the unsupervised learning ART2 neural network were comparable with those reached for multilayer perceptron structures.

  2. Hallucination- and speech-specific hypercoupling in frontotemporal auditory and language networks in schizophrenia using combined task-based fMRI data: An fBIRN study.

    Science.gov (United States)

    Lavigne, Katie M; Woodward, Todd S

    2017-12-21

    Hypercoupling of activity in speech-perception-specific brain networks has been proposed to play a role in the generation of auditory-verbal hallucinations (AVHs) in schizophrenia; however, it is unclear whether this hypercoupling extends to nonverbal auditory perception. We investigated this by comparing schizophrenia patients with and without AVHs, and healthy controls, on task-based functional magnetic resonance imaging (fMRI) data combining verbal speech perception (SP), inner verbal thought generation (VTG), and nonverbal auditory oddball detection (AO). Data from two previously published fMRI studies were simultaneously analyzed using group constrained principal component analysis for fMRI (group fMRI-CPCA), which allowed for comparison of task-related functional brain networks across groups and tasks while holding the brain networks under study constant, leading to determination of the degree to which networks are common to verbal and nonverbal perception conditions, and which show coordinated hyperactivity in hallucinations. Three functional brain networks emerged: (a) auditory-motor, (b) language processing, and (c) default-mode (DMN) networks. Combining the AO and sentence tasks allowed the auditory-motor and language networks to separately emerge, whereas they were aggregated when individual tasks were analyzed. AVH patients showed greater coordinated activity (deactivity for DMN regions) than non-AVH patients during SP in all networks, but this did not extend to VTG or AO. This suggests that the hypercoupling in AVH patients in speech-perception-related brain networks is specific to perceived speech, and does not extend to perceived nonspeech or inner verbal thought generation. © 2017 Wiley Periodicals, Inc.

  3. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    from data rather than having a predefined feature set. We explore deep learning approach of convolutional neural network (CNN) for segmenting three dimensional medical images. We propose a novel system integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D...... image, respectively and this system is referred as triplanar convolutional neural network in the thesis. We applied the triplanar CNN for segmenting articular cartilage in knee MRI and compared its performance with the same state-of-the-art method which was used as a benchmark for cascaded classifier...... contextualized convolutional neural network (SCCNN) which incorporates the labels of the neighbouring pixels/voxels while training the network. We demonstrate its application for the 2D problem of segmenting horses from the Weizmann horses database using 2D CNN and our 3D problem of segmenting tibial cartilage...

  4. Economic competitiveness of underground coal gasification combined with carbon capture and storage in the Bulgarian energy network

    Energy Technology Data Exchange (ETDEWEB)

    Nakaten, Natalie Christine

    2014-11-15

    Underground coal gasification (UCG) allows for exploitation of deep-seated coal seams not economically exploitable by conventional coal mining. The aim of the present study is to examine UCG economics based on coal conversion into a synthesis gas to fuel a combined cycle gas turbine power plant (CCGT) with CO2 capture and storage (CCS). To this end, a techno-economic model is developed for determining UCG-CCGT-CCS costs of electricity (COE), which, considering site-specific data of a selected target area in Bulgaria, sum up to 72 Euro/MWh in total. To quantify the impact of model constraints on COE, sensitivity analyses are undertaken, revealing that varying geological model constraints impact COE by 0.4% to 4%, chemical by 13%, technical by 8% to 17% and market-dependent by 2% to 25%. Besides site-specific boundary conditions, UCG-CCGT-CCS economics depend on resource availability and infrastructural characteristics of the overall energy system. Assessing a model-based implementation of UCG-CCGT-CCS and CCS power plants into the Bulgarian energy network revealed that both technologies provide essential and economically competitive options to achieve the EU environmental targets and a complete substitution of gas imports by UCG synthesis gas production.

  5. A robust observer based on H∞ filtering with parameter uncertainties combined with Neural Networks for estimation of vehicle roll angle

    Science.gov (United States)

    Boada, Beatriz L.; Boada, Maria Jesus L.; Vargas-Melendez, Leandro; Diaz, Vicente

    2018-01-01

    Nowadays, one of the main objectives in road transport is to decrease the number of accident victims. Rollover accidents caused nearly 33% of all deaths from passenger vehicle crashes. Roll Stability Control (RSC) systems prevent vehicles from untripped rollover accidents. The lateral load transfer is the main parameter which is taken into account in the RSC systems. This parameter is related to the roll angle, which can be directly measured from a dual-antenna GPS. Nevertheless, this is a costly technique. For this reason, roll angle has to be estimated. In this paper, a novel observer based on H∞ filtering in combination with a neural network (NN) for the vehicle roll angle estimation is proposed. The design of this observer is based on four main criteria: to use a simplified vehicle model, to use signals of sensors which are installed onboard in current vehicles, to consider the inaccuracy in the system model and to attenuate the effect of the external disturbances. Experimental results show the effectiveness of the proposed observer.

  6. Economic competitiveness of underground coal gasification combined with carbon capture and storage in the Bulgarian energy network

    International Nuclear Information System (INIS)

    Nakaten, Natalie Christine

    2014-01-01

    Underground coal gasification (UCG) allows for exploitation of deep-seated coal seams not economically exploitable by conventional coal mining. The aim of the present study is to examine UCG economics based on coal conversion into a synthesis gas to fuel a combined cycle gas turbine power plant (CCGT) with CO2 capture and storage (CCS). To this end, a techno-economic model is developed for determining UCG-CCGT-CCS costs of electricity (COE), which, considering site-specific data of a selected target area in Bulgaria, sum up to 72 Euro/MWh in total. To quantify the impact of model constraints on COE, sensitivity analyses are undertaken, revealing that varying geological model constraints impact COE by 0.4% to 4%, chemical by 13%, technical by 8% to 17% and market-dependent by 2% to 25%. Besides site-specific boundary conditions, UCG-CCGT-CCS economics depend on resource availability and infrastructural characteristics of the overall energy system. Assessing a model-based implementation of UCG-CCGT-CCS and CCS power plants into the Bulgarian energy network revealed that both technologies provide essential and economically competitive options to achieve the EU environmental targets and a complete substitution of gas imports by UCG synthesis gas production.

  7. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    Science.gov (United States)

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests compared with the existing methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
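
    A simplified sketch of subgroup-wise normalisation in the spirit of SAN is shown below using pandas: it z-scores each laboratory value within its site/gender/age-band subgroup so that subgroup means and standard deviations match across sites. The column names, the simulated assay offset and the use of plain z-scores are assumptions, not the published algorithm.

```python
# Sketch: normalise a lab value within each site/gender/age-band subgroup.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "site": rng.choice(["A", "B"], size=1000),
    "gender": rng.choice(["F", "M"], size=1000),
    "age_band": rng.choice(["<40", "40-65", ">65"], size=1000),
    "creatinine": rng.normal(1.0, 0.3, size=1000),
})
# Simulate a site-specific assay offset that normalisation should remove.
df.loc[df["site"] == "B", "creatinine"] += 0.2

grp = df.groupby(["site", "gender", "age_band"])["creatinine"]
df["creatinine_norm"] = (df["creatinine"] - grp.transform("mean")) / grp.transform("std")

# After normalisation, per-site distributions share mean 0 and unit spread.
print(df.groupby("site")["creatinine_norm"].agg(["mean", "std"]).round(3))
```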

  8. Developing and Multi-Objective Optimization of a Combined Energy Absorber Structure Using Polynomial Neural Networks and Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Amir Najibi

    Full Text Available Abstract In this study a newly developed thin-walled structure with the combination of circular and square sections is investigated in terms of crashworthiness. The results of experimental tests are utilized to validate the Abaqus/Explicit finite element simulations and the analysis of the crushing phenomenon. Three polynomial meta-models based on evolved group method of data handling (GMDH) neural networks are employed to represent the specific energy absorption (SEA), the initial peak crushing load (P1) and the secondary peak crushing load (P2) with respect to the geometrical variables. The training and testing data are extracted from the finite element analysis. The modified genetic algorithm NSGA-II is used in multi-objective optimisation of the specific energy absorption and the primary and secondary peak crushing loads with respect to the geometrical variables. Finally, in each optimisation process, the optimal section energy absorptions are compared with the results of the finite element analysis. The nearest-to-ideal-point and TOPSIS optimisation methods are applied to choose the optimal points.

  9. An improved predictive association rule based classifier using gain ...

    Indian Academy of Sciences (India)

    In recent times, Associative Classification (AC), the combination of association rule mining and classification, has received ... This paper synthesizes the existing work carried out in AC, and also discusses the factors that influence the per- .... Classifier performance depends on the ability of the learning algorithm which exploits ...

  10. The Relationship Between Diversity and Accuracy in Multiple Classifier Systems

    Science.gov (United States)

    2012-03-22

    for majority voting fusion, and thus is expected to show a relationship with majority voting error [38]. Shipp and Kuncheva consider a large number... Shipp, Catherine A. and Ludmila Kuncheva. "Relationships between combination methods and measures of diversity in combining classifiers." Information

  11. The Human Kinome Targeted by FDA Approved Multi-Target Drugs and Combination Products: A Comparative Study from the Drug-Target Interaction Network Perspective.

    Science.gov (United States)

    Li, Ying Hong; Wang, Pan Pan; Li, Xiao Xu; Yu, Chun Yan; Yang, Hong; Zhou, Jin; Xue, Wei Wei; Tan, Jun; Zhu, Feng

    2016-01-01

    The human kinome is one of the most productive classes of drug target, and there is emerging necessity for treating complex diseases by means of polypharmacology (multi-target drugs and combination products). However, the advantages of the multi-target drugs and the combination products are still under debate. A comparative analysis between FDA approved multi-target drugs and combination products, targeting the human kinome, was conducted by mapping targets onto the phylogenetic tree of the human kinome. The approach of network medicine illustrating the drug-target interactions was applied to identify popular targets of multi-target drugs and combination products. As identified, the multi-target drugs tended to inhibit target pairs in the human kinome, especially the receptor tyrosine kinase family, while the combination products were able to against targets of distant homology relationship. This finding asked for choosing the combination products as a better solution for designing drugs aiming at targets of distant homology relationship. Moreover, sub-networks of drug-target interactions in specific disease were generated, and mechanisms shared by multi-target drugs and combination products were identified. In conclusion, this study performed an analysis between approved multi-target drugs and combination products against the human kinome, which could assist the discovery of next generation polypharmacology.

  12. Molecular Determinants Underlying Binding Specificities of the ABL Kinase Inhibitors: Combining Alanine Scanning of Binding Hot Spots with Network Analysis of Residue Interactions and Coevolution

    Science.gov (United States)

    Tse, Amanda; Verkhivker, Gennady M.

    2015-01-01

    Quantifying binding specificity and drug resistance of protein kinase inhibitors is of fundamental importance and remains highly challenging due to complex interplay of structural and thermodynamic factors. In this work, molecular simulations and computational alanine scanning are combined with the network-based approaches to characterize molecular determinants underlying binding specificities of the ABL kinase inhibitors. The proposed theoretical framework unveiled a relationship between ligand binding and inhibitor-mediated changes in the residue interaction networks. By using topological parameters, we have described the organization of the residue interaction networks and networks of coevolving residues in the ABL kinase structures. This analysis has shown that functionally critical regulatory residues can simultaneously embody strong coevolutionary signal and high network centrality with a propensity to be energetic hot spots for drug binding. We have found that selective (Nilotinib) and promiscuous (Bosutinib, Dasatinib) kinase inhibitors can use their energetic hot spots to differentially modulate stability of the residue interaction networks, thus inhibiting or promoting conformational equilibrium between inactive and active states. According to our results, Nilotinib binding may induce a significant network-bridging effect and enhance centrality of the hot spot residues that stabilize structural environment favored by the specific kinase form. In contrast, Bosutinib and Dasatinib can incur modest changes in the residue interaction network in which ligand binding is primarily coupled only with the identity of the gate-keeper residue. These factors may promote structural adaptability of the active kinase states in binding with these promiscuous inhibitors. Our results have related ligand-induced changes in the residue interaction networks with drug resistance effects, showing that network robustness may be compromised by targeted mutations of key mediating

  13. Combination of DTI and fMRI reveals the white matter changes correlating with the decline of default-mode network activity in Alzheimer's disease

    Science.gov (United States)

    Wu, Xianjun; Di, Qian; Li, Yao; Zhao, Xiaojie

    2009-02-01

    Recently, evidence from fMRI studies has shown decreased activity in the default-mode network in Alzheimer's disease (AD), and DTI research has also demonstrated that demyelination exists in the white matter of AD patients. Therefore, combining these two MRI methods may help to reveal the relationship between white matter damage and alterations of the resting state functional connectivity network. In the present study, we tried to address this issue by means of correlation analysis between DTI and resting state fMRI images. The default-mode networks of the AD and normal control groups were compared to first find the areas with significantly declined activity. Then, the white matter regions whose fractional anisotropy (FA) value correlated with this decline were located through multiple regressions between the FA values and the BOLD response of the default networks. Among these correlating white matter regions, those whose FA values also declined were found by a group comparison between AD patients and healthy elderly control subjects. Our results showed that the areas with decreased activity in the default-mode network included the left posterior cingulate cortex (PCC) and the left medial temporal gyrus, among others. The damaged white matter areas correlated with the default-mode network alterations were located around the left sub-gyral temporal lobe. These changes may relate to the decreased connectivity between the PCC and the medial temporal lobe (MTL), and thus correlate with the deficiency of default-mode network activity.

  14. Optimizing Observation Networks Combining Ships of Opportunity, Gliders, Moored Buoys and FerryBox in the Bay of Biscay and English Channel

    Science.gov (United States)

    Charria, G.; Lamouroux, J.; De Mey, P. J.; Raynaud, S.; Heyraud, C.; Craneguy, P.; Dumas, F.; Le Henaff, M.

    2016-02-01

    Designing optimal observation networks in coastal oceans remains one of the major challenges towards the implementation of future Integrated Ocean Observing Systems to monitor the coastal environment. In the Bay of Biscay and the English Channel, the diversity of involved processes requires adapting observing systems to the specific targeted environments. Also important is the requirement for those systems to sustain coastal applications. An efficient way to measure the hydrological content of the water column over the continental shelf is to consider ships of opportunity. In the French observation strategy, the RECOPESCA program, as a component of the High frequency Observation network for the environment in coastal SEAs (HOSEA), aims to collect environmental observations from sensors attached to fishing nets. In the present study, we assess the performance of that network using the ArM method (Le Hénaff et al., 2009). A reference network, based on fishing vessel observations in 2008, is assessed using that method. Moreover, three scenarios, based on the reference network, a denser network in 2010 and a fictive network aggregated from a pluri-annual collection of profiles, are also analyzed. Two other observational network design experiments have been implemented for the spring season in two regions: 1) the Loire River plume (northern part of the Bay of Biscay) to explore different possible glider endurance lines combined with a fixed mooring to monitor temperature and salinity and 2) the Western English Channel using a glider below FerryBox measurements. These experiments, combining existing and future observing systems as well as numerical ensemble simulations, highlight the key issue of monitoring the whole water column in and close to river plumes (e.g. using gliders), the efficiency of surface high frequency sampling from FerryBoxes in macrotidal regions and the importance of sampling key regions instead of increasing the number of Voluntary Observing Ships.

  15. Improved Diagnostic Accuracy of Alzheimer's Disease by Combining Regional Cortical Thickness and Default Mode Network Functional Connectivity: Validated in the Alzheimer's Disease Neuroimaging Initiative Set.

    Science.gov (United States)

    Park, Ji Eun; Park, Bumwoo; Kim, Sang Joon; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Chai; Oh, Joo Young; Lee, Jae-Hong; Roh, Jee Hoon; Shim, Woo Hyun

    2017-01-01

    To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity and to validate this model's diagnostic accuracy in a validation set. Data from 98 subjects was retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network was extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal ( p Default mode network functional connectivity combined with the CThk of those two regions were more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Combining functional information with CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease.
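
    The classification step described here (feeding combined structural and functional measures to a support vector machine) follows a standard pattern that can be sketched as below. The feature arrays and labels are random placeholders standing in for cortical thickness and DMN connectivity values; only the general recipe, not the study's data or parameters, is shown.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects = 63
cthk = rng.normal(2.5, 0.3, size=(n_subjects, 2))    # placeholder cortical thickness (mm) of two ROIs
fc = rng.normal(0.4, 0.1, size=(n_subjects, 1))      # placeholder DMN functional connectivity
y = rng.integers(0, 2, size=n_subjects)              # placeholder labels: 0 = control, 1 = AD

X = np.hstack([cthk, fc])                             # combine structural and functional features
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```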

  16. Error minimizing algorithms for nearest neighbor classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory]; Hush, Don [Los Alamos National Laboratory]; Zimmer, G. Beate [Texas A&M]

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.

  17. Structural and functional abnormalities of default mode network in minimal hepatic encephalopathy: a study combining DTI and fMRI.

    Directory of Open Access Journals (Sweden)

    Rongfeng Qi

    Full Text Available BACKGROUND AND PURPOSE: Liver failure can cause brain edema and aberrant brain function in cirrhotic patients. In particular, decreased functional connectivity within the brain default-mode network (DMN) has recently been reported in overt hepatic encephalopathy (HE) patients. However, so far, little is known about the connectivity within the DMN in minimal HE (MHE), the mildest form of HE. Here, we combined diffusion tensor imaging (DTI) and resting-state functional MRI (rs-fMRI) to test our hypothesis that both structural and functional connectivity within the DMN were disturbed in MHE. MATERIALS AND METHODS: Twenty MHE patients and 20 healthy controls participated in the study. We explored the changes of structural (path length, tract count, fractional anisotropy [FA] and mean diffusivity [MD], derived from DTI tractography) and functional (temporal correlation coefficient, derived from rs-fMRI) connectivity of the DMN in MHE patients. Pearson correlation analysis was performed between the structural/functional indices and the venous blood ammonia levels/neuropsychological test scores of patients. All thresholds were set at P<0.05, Bonferroni corrected. RESULTS: Compared to the healthy controls, MHE patients showed both decreased FA and increased MD in the tract connecting the posterior cingulate cortex/precuneus (PCC/PCUN) to the left parahippocampal gyrus (PHG), and decreased functional connectivity between the PCC/PCUN and the left PHG and medial prefrontal cortex (MPFC). MD values of the tract connecting the PCC/PCUN to the left PHG positively correlated with the ammonia levels, and the temporal correlation coefficients between the PCC/PCUN and the MPFC showed positive correlation with the digit symbol test scores of patients. CONCLUSION: MHE patients have both disturbed structural and functional connectivity within the DMN. The decreased functional connectivity was also detected between some regions without abnormal structural connectivity, suggesting that the

  18. Comparative efficacy of inhaled corticosteroid and long-acting beta agonist combinations in preventing COPD exacerbations: a Bayesian network meta-analysis.

    Science.gov (United States)

    Oba, Yuji; Lone, Nazir A

    2014-01-01

    A combination therapy with inhaled corticosteroid (ICS) and a long-acting beta agonist (LABA) is recommended in severe chronic obstructive pulmonary disease (COPD) patients experiencing frequent exacerbations. Currently, there are five ICS/LABA combination products available on the market. The purpose of this study was to systematically review the efficacy of various ICS/LABA combinations with a network meta-analysis. Several databases and manufacturer's websites were searched for relevant clinical trials. Randomized control trials, at least 12 weeks duration, comparing an ICS/LABA combination with active control or placebo were included. Moderate and severe exacerbations were chosen as the outcome assessment criteria. The primary analyses were conducted with a Bayesian Markov chain Monte Carlo method. Most of the ICS/LABA combinations reduced moderate-to-severe exacerbations as compared with placebo and LABA, but none of them reduced severe exacerbations. However, many studies excluded patients receiving long-term oxygen therapy. Moderate-dose ICS was as effective as high-dose ICS in reducing exacerbations when combined with LABA. ICS/LABA combinations had a class effect with regard to the prevention of COPD exacerbations. Moderate-dose ICS/LABA combination therapy would be sufficient for COPD patients when indicated. The efficacy of ICS/LABA combination therapy appeared modest and had no impact in reducing severe exacerbations. Further studies are needed to evaluate the efficacy of ICS/LABA combination therapy in severely affected COPD patients requiring long-term oxygen therapy.

  19. Constructing and Classifying Email Networks from Raw Forensic Images

    Science.gov (United States)

    2016-09-01

    set of categories an observation belongs. Two major approaches to classification problems are supervised and unsupervised learning. In supervised ... there is no prior knowledge [26], [27]. Unsupervised learning, in contrast to supervised learning, draws inferences about datasets without having... Support Vector Machines. Support vector machines (SVM) are another type of supervised learning model. Given a set of training examples, with the knowledge

  20. Layered recognition networks that pre-process, classify, and describe

    Science.gov (United States)

    Uhr, L.

    1971-01-01

    A brief overview is presented of six types of pattern recognition programs that: (1) preprocess, then characterize; (2) preprocess and characterize together; (3) preprocess and characterize into a recognition cone; (4) describe as well as name; (5) compose interrelated descriptions; and (6) converse. A computer program (of types 3 through 6) is presented that transforms and characterizes the input scene through the successive layers of a recognition cone, and then engages in a stylized conversation to describe the scene.

  1. Progressive Email Classifier (PEC) for Ingress Enterprise Network Traffic Analysis

    Science.gov (United States)

    2010-09-21

    different packet types of TCP sessions (SYN, ACK, FIN, RST, etc.), so that one can measure and monitor the congestion control behaviors of a TCP ... Streams", to be submitted 2. Shengya Lin, Jyh-Charn Liu, "On Classification of TCP Flows in the Middle of End-to-End Path", to be submitted 3. Hao Wang, Jyh... flooding of spam at the gateway, so that they can be intercepted or quarantined before reaching end users. Despite the rich collection of signatures

  2. A Topic Model Approach to Representing and Classifying Football Plays

    KAUST Repository

    Varadarajan, Jagannadan

    2013-09-09

    We address the problem of modeling and classifying American Football offense teams' plays in video, a challenging example of group activity analysis. Automatic play classification will allow coaches to infer patterns and tendencies of opponents more efficiently, resulting in better strategy planning in a game. We define a football play as a unique combination of player trajectories. To this end, we develop a framework that uses player trajectories as inputs to MedLDA, a supervised topic model. The joint maximization of both likelihood and inter-class margins of MedLDA in learning the topics allows us to learn semantically meaningful play type templates, as well as classify different play types with 70% average accuracy. Furthermore, this method is extended to analyze individual player roles in classifying each play type. We validate our method on a large dataset comprising 271 play clips from real-world football games, which will be made publicly available for future comparisons.

  3. Data characteristics that determine classifier performance

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2006-11-01

    Full Text Available The relationship between the distribution of data, on the one hand, and classifier performance, on the other, for non-parametric classifiers has been studied. It is shown that predictable factors such as the available amount of training data...

  4. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  5. Detection of microaneurysms in retinal images using an ensemble classifier

    Directory of Open Access Journals (Sweden)

    M.M. Habib

    2017-01-01

    Full Text Available This paper introduces, and reports on the performance of, a novel combination of algorithms for automated microaneurysm (MA) detection in retinal images. The presence of MAs in retinal images is a pathognomonic sign of Diabetic Retinopathy (DR), which is one of the leading causes of blindness amongst the working age population. An extensive survey of the literature is presented and current techniques in the field are summarised. The proposed technique first detects an initial set of candidates using a Gaussian Matched Filter and then classifies this set to reduce the number of false positives. A Tree Ensemble classifier is used with a set of 70 features (the most common features in the literature). A new set of 32 MA groundtruth images (with a total of 256 labelled MAs), based on images from the MESSIDOR dataset, is introduced as a public dataset for benchmarking MA detection algorithms. We evaluate our algorithm on this dataset as well as another public dataset (DIARETDB1 v2.1) and compare it against the best available alternative. Results show that the proposed classifier is superior in terms of eliminating false positive MA detection from the initial set of candidates. The proposed method achieves an ROC score of 0.415 compared to 0.2636 achieved by the best available technique. Furthermore, results show that the classifier model maintains consistent performance across datasets, illustrating the generalisability of the classifier and that overfitting does not occur.
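
    The two-stage pipeline described here (candidate detection with a Gaussian matched filter, then a tree ensemble to prune false positives) can be sketched roughly as below. The synthetic image, the 3-sigma threshold, and the placeholder feature vectors and labels are assumptions for illustration; they do not reproduce the paper's 70 features or its datasets.

```python
import numpy as np
from scipy.signal import fftconvolve
from sklearn.ensemble import ExtraTreesClassifier

def gaussian_kernel(sigma, size=11):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

# Synthetic fundus green channel; microaneurysms appear as small dark blobs.
rng = np.random.default_rng(1)
green = rng.normal(0.5, 0.05, size=(128, 128))
green[60:63, 60:63] -= 0.3                                     # synthetic dark lesion

# Stage 1: matched-filter response on the inverted image, thresholded to get candidates.
response = fftconvolve(1.0 - green, gaussian_kernel(sigma=1.5), mode="same")
inner = response[6:-6, 6:-6]                                   # ignore border effects of the convolution
candidates = np.argwhere(inner > inner.mean() + 3 * inner.std())

# Stage 2 (illustrative only): a tree ensemble trained on per-candidate feature vectors.
features = rng.normal(size=(len(candidates), 70))              # placeholder for the 70 descriptors
labels = np.arange(len(candidates)) % 2                        # placeholder MA / non-MA labels
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(features, labels)
print(len(candidates), "candidates detected")
```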

  6. Identification of GRB2 and GAB1 coexpression as an unfavorable prognostic factor for hepatocellular carcinoma by a combination of expression profile and network analysis.

    Directory of Open Access Journals (Sweden)

    Yanqiong Zhang

    Full Text Available AIM: To screen novel markers for hepatocellular carcinoma (HCC) by a combination of expression profile, interaction network analysis and clinical validation. METHODS: HCC significant molecules which are differentially expressed or have genetic variations in HCC tissues were obtained from five existing HCC related databases (OncoDB.HCC, HCC.net, dbHCCvar, EHCO and Liverome). Then, the protein-protein interaction (PPI) network of these molecules was constructed. Three topological features of the network ('Degree', 'Betweenness', and 'Closeness') and the k-core algorithm were used to screen candidate HCC markers which play crucial roles in tumorigenesis of HCC. Furthermore, the clinical significance of two candidate HCC markers, growth factor receptor-bound 2 (GRB2) and GRB2-associated-binding protein 1 (GAB1), was validated. RESULTS: In total, 6179 HCC significant genes and 977 HCC significant proteins were collected from existing HCC related databases. After network analysis, 331 candidate HCC markers were identified. In particular, GAB1 has the highest k-coreness, suggesting its central localization in the HCC related network, and the interaction between GRB2 and GAB1 has the largest edge-betweenness, implying it may be biologically important to the function of the HCC related network. As the results of clinical validation, the expression levels of both GRB2 and GAB1 proteins were significantly higher in HCC tissues than those in their adjacent nonneoplastic tissues. More importantly, the combined GRB2 and GAB1 protein expression was significantly associated with aggressive tumor progression and poor prognosis in patients with HCC. CONCLUSION: This study provided an integrative analysis by combining expression profile and interaction network analysis to identify a list of biologically significant HCC related markers and pathways. Further experimental validation indicated that the aberrant expression of GRB2 and GAB1 proteins may be strongly related to tumor
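
    The three topological features and the k-core screening mentioned here are straightforward to compute with networkx; a toy sketch follows. The protein names and edges are illustrative only and are not the study's HCC interaction network.

```python
import networkx as nx

# Toy PPI graph (edges are illustrative, not taken from the HCC network described above).
g = nx.Graph([("GRB2", "GAB1"), ("GRB2", "EGFR"), ("GAB1", "PIK3R1"),
              ("EGFR", "SHC1"), ("SHC1", "GRB2"), ("PIK3R1", "AKT1")])

degree = dict(g.degree())
betweenness = nx.betweenness_centrality(g)
closeness = nx.closeness_centrality(g)
coreness = nx.core_number(g)                     # k-coreness of each protein

# A simple illustrative ranking that combines the three centralities.
score = {n: degree[n] + betweenness[n] + closeness[n] for n in g}
print("top candidates:", sorted(score, key=score.get, reverse=True)[:3])
print("k-coreness:", coreness)
```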

  7. Advanced Ring-Shaped Microelectrode Assay Combined with Small Rectangular Electrode for Quasi-In vivo Measurement of Cell-to-Cell Conductance in Cardiomyocyte Network

    Science.gov (United States)

    Nomura, Fumimasa; Kaneko, Tomoyuki; Hamada, Tomoyo; Hattori, Akihiro; Yasuda, Kenji

    2013-06-01

    To predict the risk of fatal arrhythmia induced by cardiotoxicity in the highly complex human heart system, we have developed a novel quasi-in vivo electrophysiological measurement assay, which combines a ring-shaped human cardiomyocyte network and a set of two electrodes that form a large single ring-shaped electrode for the direct measurement of irregular cell-to-cell conductance occurrence in a cardiomyocyte network, and a small rectangular microelectrode for forced pacing of cardiomyocyte beating and for acquiring the field potential waveforms of cardiomyocytes. The advantages of this assay are as follows. The electrophysiological signals of cardiomyocytes in the ring-shaped network are superimposed directly on a single loop-shaped electrode, in which the information of asynchronous behavior of cell-to-cell conductance are included, without requiring a set of huge numbers of microelectrode arrays, a set of fast data conversion circuits, or a complex analysis in a computer. Another advantage is that the small rectangular electrode can control the position and timing of forced beating in a ring-shaped human induced pluripotent stem cell (hiPS)-derived cardiomyocyte network and can also acquire the field potentials of cardiomyocytes. First, we constructed the human iPS-derived cardiomyocyte ring-shaped network on the set of two electrodes, and acquired the field potential signals of particular cardiomyocytes in the ring-shaped cardiomyocyte network during simultaneous acquisition of the superimposed signals of whole-cardiomyocyte networks representing cell-to-cell conduction. Using the small rectangular electrode, we have also evaluated the response of the cell network to electrical stimulation. The mean and SD of the minimum stimulation voltage required for pacing (VMin) at the small rectangular electrode was 166+/-74 mV, which is the same as the magnitude of amplitude for the pacing using the ring-shaped electrode (179+/-33 mV). The results showed that the

  8. Network structure and travel time perception.

    Science.gov (United States)

    Parthasarathi, Pavithra; Levinson, David; Hochmair, Hartwig

    2013-01-01

    The purpose of this research is to test the systematic variation in the perception of travel time among travelers and relate the variation to the underlying street network structure. Travel survey data from the Twin Cities metropolitan area (which includes the cities of Minneapolis and St. Paul) is used for the analysis. Travelers are classified into two groups based on the ratio of perceived and estimated commute travel time. The measures of network structure are estimated using the street network along the identified commute route. T-test comparisons are conducted to identify statistically significant differences in estimated network measures between the two traveler groups. The combined effect of these estimated network measures on travel time is then analyzed using regression models. The results from the t-test and regression analyses confirm the influence of the underlying network structure on the perception of travel time.
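
    The analysis pipeline summarised here (split travelers by the perceived/estimated time ratio, compare network measures with t-tests, then regress travel time on the measures) can be mimicked with standard tools. The sketch uses invented variables (route circuity and intersection count) and random data purely to show the shape of the analysis.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 200
ratio = rng.lognormal(mean=0.0, sigma=0.2, size=n)   # perceived / estimated travel-time ratio
circuity = rng.normal(1.2, 0.1, size=n)              # illustrative network measure along the route
intersections = rng.poisson(15, size=n)              # illustrative network measure along the route

# Group comparison: travelers who over-perceive vs. under-perceive their commute time.
over = ratio > 1.0
t, p = ttest_ind(circuity[over], circuity[~over], equal_var=False)
print(f"circuity difference between groups: t = {t:.2f}, p = {p:.3f}")

# Combined effect of the network measures on the perception ratio (toy regression).
X = np.column_stack([circuity, intersections])
print("regression coefficients:", LinearRegression().fit(X, ratio).coef_)
```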

  9. Frequency of victimization experiences and well-being among online, offline and combined victims on social online network sites of German children and adolescents

    Directory of Open Access Journals (Sweden)

    Michael eGlüer

    2015-12-01

    Full Text Available Victimization is associated with negative developmental outcomes in childhood and adolescence. However, previous studies have provided mixed results regarding the association between offline and online victimization and indicators of social, psychological, and somatic well-being. In this study, we investigated 1,906 German children and adolescents (grades 5 to 10, mean age = 13.9; SD = 2.1 with and without offline or online victimization experiences who participated in a social online network (SNS. Online questionnaires were used to assess previous victimization (offline, online, combined, and without, somatic and psychological symptoms, self-esteem, and social self-concept (social competence, resistance to peer influence, esteem by others. In total, 1,362 (71.4% children and adolescents reported being a member of at least one social online network, and 377 students (28.8% reported previous victimization. Most children and adolescents had offline victimization experiences (17.5%, whereas 2.7% reported online victimization, and 8.6% reported combined experiences. Girls reported more online and combined victimization, and boys reported more offline victimization. The type of victimization (offline, online, combined was associated with increased reports of psychological and somatic symptoms, lower self-esteem and esteem by others, and lower resistance to peer influences. The effects were comparable for the groups with offline and online victimization. They were, however, increased in the combined group in comparison to victims with offline experiences alone.

  10. Exploring patterns of alteration in Alzheimer’s disease brain networks: a combined structural and functional connectomics analysis

    Directory of Open Access Journals (Sweden)

    Fulvia Palesi

    2016-09-01

    Full Text Available Alzheimer's disease (AD) is a neurodegenerative disorder characterized by a severe derangement of cognitive functions, primarily memory, in elderly subjects. As far as the functional impairment is concerned, growing evidence supports the disconnection syndrome hypothesis. Recent investigations using fMRI have revealed a generalized alteration of resting state networks in patients affected by AD and mild cognitive impairment (MCI). However, it was unclear whether the changes in functional connectivity were accompanied by corresponding structural network changes. In this work, we have developed a novel structural/functional connectomic approach: resting state fMRI was used to identify the functional cortical network nodes and diffusion MRI to reconstruct the fiber tracts to give a weight to internodal subcortical connections. Then, local and global efficiency were determined for different networks, exploring specific alterations of integration and segregation patterns in AD and MCI patients compared to healthy controls (HC). In the default mode network (DMN), which was the most affected, axonal loss and reduced axonal integrity appeared to compromise both local and global efficiency along posterior-anterior connections. In the basal ganglia network (BGN), disruption of white matter integrity implied that the main alterations occurred in local microstructure. In the anterior insular network (AIN), neuronal loss probably subtended a compromised communication with the insular cortex. Cognitive performance, evaluated by neuropsychological examinations, revealed a dependency on the integration and segregation of brain networks. These findings are indicative of the fact that cognitive deficits in AD could be associated not only with cortical alterations (revealed by fMRI) but also with subcortical alterations (revealed by diffusion MRI) that extend beyond the areas primarily damaged by neurodegeneration, towards the support of an emerging concept of AD as a
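
    The integration and segregation measures used in this record map onto networkx's efficiency functions; a minimal sketch on a toy, unweighted graph follows. Node names and edges are invented, and a real connectome would fold tract-count or FA weights into the path lengths before computing efficiency.

```python
import networkx as nx

# Toy connectome with made-up region labels; edges are illustrative only.
g = nx.Graph([("PCC", "mPFC"), ("PCC", "lTPJ"), ("mPFC", "lTPJ"),
              ("PCC", "HIP"), ("HIP", "mPFC"), ("lTPJ", "ANG")])

print("global efficiency (integration):", round(nx.global_efficiency(g), 3))
print("local efficiency (segregation): ", round(nx.local_efficiency(g), 3))
```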

  11. Precise deformation measurement of prestressed concrete beam during a strain test using the combination of intersection photogrammetry and micro-network measurement

    Science.gov (United States)

    Urban, Rudolf; Braun, Jaroslav; Štroner, Martin

    2015-05-01

    The prestressed thin-walled concrete elements enable bridges with a relatively large span. These structures are advantageous in economic and environmental terms due to their small thickness and lower consumption of materials. The bending moments can be effectively influenced by using the pre-stress. The experiment was carried out to monitor deformation of the beam under load. During the experiment, discrete points were monitored. To determine a large number of points, intersection photogrammetry combined with a precise micro-network was chosen.

  12. Histopathological Image Classification With Color Pattern Random Binary Hashing-Based PCANet and Matrix-Form Classifier.

    Science.gov (United States)

    Shi, Jun; Wu, Jinjie; Li, Yan; Zhang, Qi; Ying, Shihui

    2017-09-01

    The computer-aided diagnosis for histopathological images has attracted considerable attention. Principal component analysis network (PCANet) is a novel deep learning algorithm for feature learning with the simple network architecture and parameters. In this study, a color pattern random binary hashing-based PCANet (C-RBH-PCANet) algorithm is proposed to learn an effective feature representation from color histopathological images. The color norm pattern and angular pattern are extracted from the principal component images of R, G, and B color channels after cascaded PCA networks. The random binary encoding is then performed on both color norm pattern images and angular pattern images to generate multiple binary images. Moreover, we rearrange the pooled local histogram features by spatial pyramid pooling to a matrix-form for reducing the dimension of feature and preserving spatial information. Therefore, a C-RBH-PCANet and matrix-form classifier-based feature learning and classification framework is proposed for diagnosis of color histopathological images. The experimental results on three color histopathological image datasets show that the proposed C-RBH-PCANet algorithm is superior to the original PCANet and other conventional unsupervised deep learning algorithms, while the best performance is achieved by the proposed feature learning and classification framework that combines C-RBH-PCANet and matrix-form classifier.

  13. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    from data rather than having a predefined feature set. We explore a deep learning approach, the convolutional neural network (CNN), for segmenting three-dimensional medical images. We propose a novel system integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D......This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge...... amount of training data to cover sufficient biological variability. Learning methods scaling badly with the number of training data points cannot be used in such scenarios. This may restrict the usage of many powerful classifiers having excellent generalization ability. We propose a cascaded classifier which...

  14. Facial expression recognition based on improved deep belief networks

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method of facial expression recognition based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. This method uses LBP to extract features, and then uses the improved deep belief networks as the detector and classifier on the LBP features. The combination of LBP and improved deep belief networks is realized in facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate is improved significantly.
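
    The LBP feature-extraction stage described here can be sketched with scikit-image; the uniform-pattern histogram below would then be fed to the (improved) DBN classifier, which is not reproduced. The image is a random placeholder rather than a JAFFE face crop.

```python
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(5)
face = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)    # placeholder grey-level face crop

P, R = 8, 1                                                    # 8 neighbours on a radius-1 circle
lbp = local_binary_pattern(face, P, R, method="uniform")
hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)  # P + 2 uniform pattern bins
print("LBP feature vector:", np.round(hist, 3))
```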

  15. Combining Community Engagement and Scientific Approaches in Next-Generation Monitor Siting: The Case of the Imperial County Community Air Network

    Directory of Open Access Journals (Sweden)

    Michelle Wong

    2018-03-01

    Full Text Available Air pollution continues to be a global public health threat, and the expanding availability of small, low-cost air sensors has led to increased interest in both personal and crowd-sourced air monitoring. However, to date, few low-cost air monitoring networks have been developed with the scientific rigor or continuity needed to conduct public health surveillance and inform policy. In Imperial County, California, near the U.S./Mexico border, we used a collaborative, community-engaged process to develop a community air monitoring network that attains the scientific rigor required for research, while also achieving community priorities. By engaging community residents in the project design, monitor siting processes, data dissemination, and other key activities, the resulting air monitoring network data are relevant, trusted, understandable, and used by community residents. Integration of spatial analysis and air monitoring best practices into the network development process ensures that the data are reliable and appropriate for use in research activities. This combined approach results in a community air monitoring network that is better able to inform community residents, support research activities, guide public policy, and improve public health. Here we detail the monitor siting process and outline the advantages and challenges of this approach.

  16. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

    We consider the binary classification problem in the imbalanced case where the number of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads...... to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...

  17. A CLASSIFIER SYSTEM USING SMOOTH GRAPH COLORING

    Directory of Open Access Journals (Sweden)

    JORGE FLORES CRUZ

    2017-01-01

    Full Text Available Unsupervised classifiers allow clustering methods with little or no human intervention. Therefore it is desirable to group the set of items with less data processing. This paper proposes an unsupervised classifier system using the model of soft graph coloring. This method was tested with some classic instances from the literature and the results obtained were compared with classifications made with human intervention, yielding as good or better results than supervised classifiers, sometimes providing alternative classifications that consider additional information that humans did not consider.

  18. Security Enrichment in Intrusion Detection System Using Classifier Ensemble

    Directory of Open Access Journals (Sweden)

    Uma R. Salunkhe

    2017-01-01

    Full Text Available In the era of Internet and with increasing number of people as its end users, a large number of attack categories are introduced daily. Hence, effective detection of various attacks with the help of Intrusion Detection Systems is an emerging trend in research these days. Existing studies show effectiveness of machine learning approaches in handling Intrusion Detection Systems. In this work, we aim to enhance detection rate of Intrusion Detection System by using machine learning technique. We propose a novel classifier ensemble based IDS that is constructed using hybrid approach which combines data level and feature level approach. Classifier ensembles combine the opinions of different experts and improve the intrusion detection rate. Experimental results show the improved detection rates of our system compared to reference technique.

  19. 76 FR 19707 - Classified Information: Classification/Declassification/Access; Authority To Classify Information

    Science.gov (United States)

    2011-04-08

    ... Office of the Secretary of Transportation 49 CFR Part 8 RIN 9991-AA58 Classified Information: Classification/Declassification/Access; Authority To Classify Information AGENCY: Office of the Secretary of... originally classify information as SECRET or CONFIDENTIAL to the Administrator of the Federal Aviation...

  20. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  1. Robust C-Loss Kernel Classifiers.

    Science.gov (United States)

    Xu, Guibiao; Hu, Bao-Gang; Principe, Jose C

    2018-03-01

    The correntropy-induced loss (C-loss) function has the nice property of being robust to outliers. In this paper, we study the C-loss kernel classifier with the Tikhonov regularization term, which is used to avoid overfitting. After using the half-quadratic optimization algorithm, which converges much faster than the gradient optimization algorithm, we find out that the resulting C-loss kernel classifier is equivalent to an iterative weighted least square support vector machine (LS-SVM). This relationship helps explain the robustness of iterative weighted LS-SVM from the correntropy and density estimation perspectives. On the large-scale data sets which have low-rank Gram matrices, we suggest to use incomplete Cholesky decomposition to speed up the training process. Moreover, we use the representer theorem to improve the sparseness of the resulting C-loss kernel classifier. Experimental results confirm that our methods are more robust to outliers than the existing common classifiers.

  2. Using Discriminative Dimensionality Reduction to Visualize Classifiers

    OpenAIRE

    Schulz, Alexander; Gisbrecht, Andrej; Hammer, Barbara

    2015-01-01

    Albeit automated classifiers offer a standard tool in many application areas, there exists hardly a generic possibility to directly inspect their behavior, which goes beyond the mere classification of (sets of) data points. In this contribution, we propose a general framework how to visualize a given classifier and its behavior as concerns a given data set in two dimensions. More specifically, we use modern nonlinear dimensionality reduction (DR) techniques to project a given set of data poin...

  3. Novel Approach to Classify Plants Based on Metabolite-Content Similarity

    Directory of Open Access Journals (Sweden)

    Kang Liu

    2017-01-01

    Full Text Available Secondary metabolites are bioactive substances with diverse chemical structures. Depending on the ecological environment within which they are living, higher plants use different combinations of secondary metabolites for adaptation (e.g., defense against attacks by herbivores or pathogenic microbes). This suggests that similarity in metabolite content is applicable to assessing the phylogenetic similarity of higher plants. However, such a chemical taxonomic approach has limitations due to incomplete metabolomics data. We propose an approach for successfully classifying 216 plants based on their known, incomplete metabolite content. Structurally similar metabolites have been clustered using the network clustering algorithm DPClus. Plants have been represented as binary vectors, implying relations with structurally similar metabolite groups, and classified using Ward's method of hierarchical clustering. Despite incomplete data, the resulting plant clusters are consistent with the known evolutionary relations of plants. This finding reveals the significance of metabolite content as a taxonomic marker. We also discuss the predictive power of metabolite content in exploring nutritional and medicinal properties in plants. As a byproduct of our analysis, we could predict some currently unknown species-metabolite relations.
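
    The classification step described here (plants as binary vectors over metabolite clusters, grouped with Ward's method) is easy to reproduce in outline with SciPy. The four plants and the 5-column profile matrix below are toy placeholders, not the 216-plant dataset.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

plants = ["plant_A", "plant_B", "plant_C", "plant_D"]
profiles = np.array([[1, 0, 1, 1, 0],       # rows: plants, columns: metabolite clusters (toy data)
                     [1, 0, 1, 0, 0],
                     [0, 1, 0, 1, 1],
                     [0, 1, 0, 1, 0]], dtype=float)

Z = linkage(profiles, method="ward")         # Ward's hierarchical clustering on the binary vectors
groups = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(plants, groups)))
```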

  4. Stacking machine learning classifiers to identify Higgs bosons at the LHC

    International Nuclear Information System (INIS)

    Alves, A.

    2017-01-01

    Machine learning (ML) algorithms have been employed in the problem of classifying signal and background events with high accuracy in particle physics. In this paper, we compare the performance of a widespread ML technique, namely stacked generalization, against the results of two state-of-the-art algorithms: (1) a deep neural network (DNN) in the task of discovering a new neutral Higgs boson and (2) a scalable machine learning system for tree boosting, in the Standard Model Higgs to tau leptons channel, both at the 8 TeV LHC. In a cut-and-count analysis, stacking three algorithms performed around 16% worse than the DNN but demanded far less computational effort; however, the same stacking outperforms boosted decision trees. Using the stacked classifiers in a multivariate statistical analysis (MVA), on the other hand, significantly enhances the statistical significance compared to cut-and-count in both Higgs processes, suggesting that combining an ensemble of simpler and faster ML algorithms with MVA tools is a better approach than building a complex state-of-the-art algorithm for cut-and-count.
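
    Stacked generalization itself is straightforward to set up with scikit-learn: out-of-fold predictions from a few base learners feed a meta-learner. The sketch below uses synthetic data and three arbitrary base models in place of the paper's kinematic features and specific ensemble, so it illustrates the technique rather than the analysis.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for signal/background events described by kinematic features.
X, y = make_classification(n_samples=1000, n_features=15, n_informative=8, random_state=0)

stack = StackingClassifier(
    estimators=[("svc", SVC(probability=True)),
                ("gbt", GradientBoostingClassifier()),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(),      # meta-learner on out-of-fold predictions
    cv=5)
print("stacked AUC:", cross_val_score(stack, X, y, cv=3, scoring="roc_auc").mean())
```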

  5. Credit scoring using ensemble of various classifiers on reduced feature set

    Directory of Open Access Journals (Sweden)

    Dahiya Shashi

    2015-01-01

    Full Text Available Credit scoring methods are widely used for evaluating loan applications in financial and banking institutions. A credit score identifies whether applicant customers belong to a good-risk or a bad-risk applicant group. These decisions are based on the demographic data of the customers, the overall business of the customer with the bank, and the loan payment history of the loan applicants. The advantages of using credit scoring models include reducing the cost of credit analysis, enabling faster credit decisions and diminishing possible risk. Many statistical and machine learning techniques such as Logistic Regression, Support Vector Machines, Neural Networks and Decision tree algorithms have been used independently and as hybrid credit scoring models. This paper proposes an ensemble based technique combining seven individual models to increase the classification accuracy. Feature selection has also been used for selecting important attributes for classification. Cross classification was conducted using three data partitions. The German credit dataset, having 1000 instances and 21 attributes, is used in the present study. The results of the experiments revealed that the ensemble model yielded very good accuracy when compared to individual models. In all three different partitions, the ensemble model was able to classify more than 80% of the loan customers as good creditors correctly. Also, for the 70:30 partition there was a good impact of feature selection on the accuracy of the classifiers. The results were improved for almost all individual models including the ensemble model.

  6. Wreck finding and classifying with a sonar filter

    Science.gov (United States)

    Agehed, Kenneth I.; Padgett, Mary Lou; Becanovic, Vlatko; Bornich, C.; Eide, Age J.; Engman, Per; Globoden, O.; Lindblad, Thomas; Lodgberg, K.; Waldemark, Karina E.

    1999-03-01

    Sonar detection and classification of sunken wrecks and other objects is of keen interest to many. This paper describes the use of neural networks (NN) for locating, classifying and determining the alignment of objects on a lakebed in Sweden. A complex program for data preprocessing and visualization was developed. Part of this program, The Sonar Viewer, facilitates training and testing of the NN using (1) the MATLAB Neural Networks Toolbox for multilayer perceptrons with backpropagation (BP) and (2) the neural network O-Algorithm (OA) developed by Age Eide and Thomas Lindblad. Comparison of the performance of the two neural network approaches indicates that, for this data, BP generalizes better than OA, but use of OA eliminates the need for training on non-target (lake bed) images. The OA algorithm does not work well with the smaller ships. Increasing the resolution to counteract this problem would slow down processing and require interpolation to suggest data values between the actual sonar measurements. In general, good results were obtained for recognizing large wrecks and determining their alignment. The programs developed provide a useful tool for further study of sonar signals in many environments. Recent developments in pulse coupled neural network techniques provide an opportunity to extend the use in real-world applications where experimental data is difficult, expensive or time consuming to obtain.

  7. A combination of gene expression ranking and co-expression network analysis increases discovery rate in large-scale mutant screens for novel Arabidopsis thaliana abiotic stress genes.

    Science.gov (United States)

    Ransbotyn, Vanessa; Yeger-Lotem, Esti; Basha, Omer; Acuna, Tania; Verduyn, Christoph; Gordon, Michal; Chalifa-Caspi, Vered; Hannah, Matthew A; Barak, Simon

    2015-05-01

    As challenges to food security increase, the demand for lead genes for improving crop production is growing. However, genetic screens of plant mutants typically yield very low frequencies of desired phenotypes. Here, we present a powerful computational approach for selecting candidate genes for screening insertion mutants. We combined ranking of Arabidopsis thaliana regulatory genes according to their expression in response to multiple abiotic stresses (Multiple Stress [MST] score), with stress-responsive RNA co-expression network analysis to select candidate multiple stress regulatory (MSTR) genes. Screening of 62 T-DNA insertion mutants defective in candidate MSTR genes, for abiotic stress germination phenotypes yielded a remarkable hit rate of up to 62%; this gene discovery rate is 48-fold greater than that of other large-scale insertional mutant screens. Moreover, the MST score of these genes could be used to prioritize them for screening. To evaluate the contribution of the co-expression analysis, we screened 64 additional mutant lines of MST-scored genes that did not appear in the RNA co-expression network. The screening of these MST-scored genes yielded a gene discovery rate of 36%, which is much higher than that of classic mutant screens but not as high as when picking candidate genes from the co-expression network. The MSTR co-expression network that we created, AraSTressRegNet is publicly available at http://netbio.bgu.ac.il/arnet. This systems biology-based screening approach combining gene ranking and network analysis could be generally applicable to enhancing identification of genes regulating additional processes in plants and other organisms provided that suitable transcriptome data are available. © 2014 Society for Experimental Biology, Association of Applied Biologists and John Wiley & Sons Ltd.

  8. Modified feed-forward neural network structures and combined-function-derivative approximations incorporating exchange symmetry for potential energy surface fitting.

    Science.gov (United States)

    Nguyen, Hieu T T; Le, Hung M

    2012-05-10

    The classical interchange (permutation) of atoms of similar identity does not have an effect on the overall potential energy. In this study, we present feed-forward neural network structures that provide permutation symmetry to the potential energy surfaces of molecules. The new feed-forward neural network structures are employed to fit the potential energy surfaces for two illustrative molecules, H(2)O and ClOOCl. Modifications are made to describe the symmetric interchange (permutation) of atoms of similar identity (or, mathematically, the permutation of symmetric input parameters). The combined-function-derivative approximation algorithm (J. Chem. Phys. 2009, 130, 134101) is also implemented to fit the neural-network potential energy surfaces accurately. The combination of our symmetric neural networks and the function-derivative fitting effectively produces PES fits using fewer training data points. For H(2)O, only 282 configurations are employed as the training set; the testing root-mean-squared and mean-absolute energy errors are respectively reported as 0.0103 eV (0.236 kcal/mol) and 0.0078 eV (0.179 kcal/mol). In the ClOOCl case, 1693 configurations are required to construct the training set; the root-mean-squared and mean-absolute energy errors for the ClOOCl testing set are 0.0409 eV (0.943 kcal/mol) and 0.0269 eV (0.620 kcal/mol), respectively. Overall, we find good agreement between ab initio and NN predictions in terms of energy and gradient errors, and conclude that the new feed-forward neural-network models advantageously describe the molecules with excellent accuracy.
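
    One common way to build the permutation symmetry described here into a feed-forward fit (not necessarily the paper's construction) is to feed the network symmetric functions of the equivalent coordinates, so that swapping the two hydrogens of H2O leaves the input unchanged. The sketch below uses a toy analytic energy and an off-the-shelf MLP purely to show the idea.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
r1 = rng.uniform(0.85, 1.15, size=500)      # O-H1 distance (angstrom), synthetic configurations
r2 = rng.uniform(0.85, 1.15, size=500)      # O-H2 distance
rhh = rng.uniform(1.40, 1.70, size=500)     # H-H distance

X = np.column_stack([r1 + r2, r1 * r2, rhh])                              # invariant under r1 <-> r2
energy = (r1 - 0.96) ** 2 + (r2 - 0.96) ** 2 + 0.3 * (rhh - 1.52) ** 2    # toy PES, not ab initio data

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0).fit(X, energy)
# Exchanging the two hydrogens maps to the same network input, hence the same predicted energy.
swapped = np.column_stack([r2 + r1, r2 * r1, rhh])
print(np.allclose(model.predict(X), model.predict(swapped)))
```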

  9. Combined bio-inspired/evolutionary computational methods in cross-layer protocol optimization for wireless ad hoc sensor networks

    Science.gov (United States)

    Hortos, William S.

    2011-06-01

    Published studies have focused on the application of one bio-inspired or evolutionary computational method to the functions of a single protocol layer in a wireless ad hoc sensor network (WSN). For example, swarm intelligence in the form of ant colony optimization (ACO), has been repeatedly considered for the routing of data/information among nodes, a network-layer function, while genetic algorithms (GAs) have been used to select transmission frequencies and power levels, physical-layer functions. Similarly, artificial immune systems (AISs) as well as trust models of quantized data reputation have been invoked for detection of network intrusions that cause anomalies in data and information; these act on the application and presentation layers. Most recently, a self-organizing scheduling scheme inspired by frog-calling behavior for reliable data transmission in wireless sensor networks, termed anti-phase synchronization, has been applied to realize collision-free transmissions between neighboring nodes, a function of the MAC layer. In a novel departure from previous work, the cross-layer approach to WSN protocol design suggests applying more than one evolutionary computational method to the functions of the appropriate layers to improve the QoS performance of the cross-layer design beyond that of one method applied to a single layer's functions. A baseline WSN protocol design, embedding GAs, anti-phase synchronization, ACO, and a trust model based on quantized data reputation at the physical, MAC, network, and application layers, respectively, is constructed. Simulation results demonstrate the synergies among the bioinspired/ evolutionary methods of the proposed baseline design improve the overall QoS performance of networks over that of a single computational method.

  10. Pixel Classification of SAR ice images using ANFIS-PSO Classifier

    Directory of Open Access Journals (Sweden)

    G. Vasumathi

    2016-12-01

    Full Text Available Synthetic Aperture Radar (SAR) is playing a vital role in taking extremely high resolution radar images. It is widely used to monitor ice-covered ocean regions. Sea monitoring is important for various purposes, which include global climate systems and ship navigation. Classification of the ice-infested area gives important features which will be further useful for various monitoring processes around the ice regions. The main objective of this paper is to classify the SAR ice image, which helps in identifying the regions around the ice-infested areas. In this paper three stages are considered in the classification of SAR ice images. It starts with preprocessing, in which the speckled SAR ice images are denoised using various speckle removal filters; a comparison is made of all these filters to find the best filter for speckle removal. The second stage includes segmentation, in which different regions are segmented using K-means and watershed segmentation algorithms; a comparison is made between these two algorithms to find the best at segmenting SAR ice images. The last stage includes pixel based classification, which identifies and classifies the segmented regions using various supervised learning classifiers. The algorithms include back-propagation neural networks (BPN), a fuzzy classifier, an Adaptive Neuro Fuzzy Inference (ANFIS) classifier and the proposed ANFIS with Particle Swarm Optimization (PSO) classifier; a comparison is made of all these classifiers to propose which classifier is best suited for classifying the SAR ice image. Various evaluation metrics are computed separately at all these three stages.
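
    Two of the stages described here (speckle suppression and K-means segmentation) can be sketched with standard tools. The gamma-distributed synthetic image, the median filter, and the two-cluster setting are assumptions for illustration; the ANFIS-PSO classification stage is not reproduced.

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
sar = rng.gamma(shape=4.0, scale=0.25, size=(128, 128))   # speckle-like background ("water")
sar[40:90, 30:100] *= 2.5                                  # brighter block standing in for ice

despeckled = median_filter(sar, size=5)                    # simple speckle-removal stage
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    despeckled.reshape(-1, 1)).reshape(sar.shape)          # per-pixel intensity clustering
print("segment sizes:", np.bincount(labels.ravel()))
```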

  11. Effects of the distribution density of a biomass combined heat and power plant network on heat utilisation efficiency in village-town systems.

    Science.gov (United States)

    Zhang, Yifei; Kang, Jian

    2017-11-01

    The building of biomass combined heat and power (CHP) plants is an effective means of developing biomass energy because they can satisfy demands for winter heating and electricity consumption. The purpose of this study was to analyse the effect of the distribution density of a biomass CHP plant network on heat utilisation efficiency in a village-town system. The distribution density is determined based on the heat transmission threshold, and the heat utilisation efficiency is determined based on the heat demand distribution, heat output efficiency, and heat transmission loss. The objective of this study was to ascertain the optimal value for the heat transmission threshold using a multi-scheme comparison based on an analysis of these factors. To this end, a model of a biomass CHP plant network was built using geographic information system tools to simulate and generate three planning schemes with different heat transmission thresholds (6, 8, and 10 km) according to the heat demand distribution. The heat utilisation efficiencies of these planning schemes were then compared by calculating the gross power, heat output efficiency, and heat transmission loss of the biomass CHP plant for each scenario. This multi-scheme comparison yielded the following results: when the heat transmission threshold was low, the distribution density of the biomass CHP plant network was high and the biomass CHP plants tended to be relatively small. In contrast, when the heat transmission threshold was high, the distribution density of the network was low and the biomass CHP plants tended to be relatively large. When the heat transmission threshold was 8 km, the distribution density of the biomass CHP plant network was optimised for efficient heat utilisation. To promote the development of renewable energy sources, a planning scheme for a biomass CHP plant network that maximises heat utilisation efficiency can be obtained using the optimal heat transmission threshold and the nonlinearity

  12. Combining Amplitude Spectrum Area with Previous Shock Information Using Neural Networks Improves Prediction Performance of Defibrillation Outcome for Subsequent Shocks in Out-Of-Hospital Cardiac Arrest Patients.

    Directory of Open Access Journals (Sweden)

    Mi He

    Full Text Available Quantitative ventricular fibrillation (VF) waveform analysis is a potentially powerful tool to optimize defibrillation. However, whether combining VF features with additional attributes related to the previous shock could enhance the prediction performance for subsequent shocks is still uncertain. A total of 528 defibrillation shocks from 199 patients who experienced out-of-hospital cardiac arrest were analyzed in this study. The VF waveform was quantified using the amplitude spectrum area (AMSA) from the defibrillator's ECG recordings prior to each shock. Combinations of AMSA with the previous shock index (PSI) and/or the change of AMSA (ΔAMSA) between successive shocks were exercised with neural networks on a training dataset of 255 shocks from 99 patients. The performance of the combination methods was compared with AMSA-based single-feature prediction by the area under the receiver operating characteristic curve (AUC), sensitivity, positive predictive value (PPV), negative predictive value (NPV), and prediction accuracy (PA) on a validation dataset consisting of 273 shocks from 100 patients. A total of 61 (61.0%) patients required subsequent shocks (N = 173) in the validation dataset. Combining AMSA with PSI and ΔAMSA obtained the highest AUC (0.904 vs. 0.819, p<0.001) among the different combination approaches for subsequent shocks. Sensitivity (76.5% vs. 35.3%, p<0.001), NPV (90.2% vs. 76.9%, p = 0.007), and PA (86.1% vs. 74.0%, p = 0.005) were greatly improved compared with AMSA-based single-feature prediction at a threshold of 90% specificity. In this retrospective study, combining AMSA with previous shock information using neural networks greatly improved the prediction performance of defibrillation outcome for subsequent shocks.
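
    As a rough illustration of the waveform feature itself, AMSA is conventionally computed as the sum over a fixed VF frequency band of each spectral amplitude multiplied by its frequency. The sketch below uses NumPy's FFT with an assumed 4-48 Hz band and sampling rate; neither value comes from this abstract.

```python
# Minimal sketch of the amplitude spectrum area (AMSA) feature:
# AMSA = sum over the analysis band of |A(f)| * f.
# Sampling rate and the 4-48 Hz band are illustrative assumptions.
import numpy as np

def amsa(ecg_segment: np.ndarray, fs: float = 250.0,
         f_lo: float = 4.0, f_hi: float = 48.0) -> float:
    windowed = ecg_segment * np.hanning(len(ecg_segment))  # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(ecg_segment), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(spectrum[band] * freqs[band]))

# Usage: amsa_value = amsa(pre_shock_ecg)  # a short pre-shock VF segment
```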

  13. Hybrid dynamic modeling of Escherichia coli central metabolic network combining Michaelis–Menten and approximate kinetic equations

    DEFF Research Database (Denmark)

    Costa, Rafael S.; Machado, Daniel; Rocha, Isabel

    2010-01-01

    The construction of dynamic metabolic models at reaction network level requires the use of mechanistic enzymatic rate equations that comprise a large number of parameters. The lack of knowledge on these equations and the difficulty in the experimental identification of their associated parameters...
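
    For readers unfamiliar with the kinetic form named in the title, a single irreversible Michaelis-Menten rate is v = Vmax * [S] / (Km + [S]). The sketch below places such a rate inside a toy one-reaction ODE purely as a generic illustration; the parameter values and the single-reaction network are assumptions, not the E. coli model of the paper.

```python
# Generic illustration of a Michaelis-Menten rate in a tiny dynamic model.
# Parameter values and the one-reaction network are arbitrary assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def mm_rate(s: float, vmax: float, km: float) -> float:
    """Irreversible Michaelis-Menten rate v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def dsdt(t, y, vmax=1.0, km=0.5):
    s = y[0]
    return [-mm_rate(s, vmax, km)]   # substrate consumed by one enzymatic step

sol = solve_ivp(dsdt, t_span=(0.0, 10.0), y0=[2.0], t_eval=np.linspace(0, 10, 50))
print(sol.y[0][-1])   # remaining substrate concentration at t = 10
```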

  14. Improving the measurement of semantic similarity by combining gene ontology and co-functional network: a random walk based approach.

    Science.gov (United States)

    Peng, Jiajie; Zhang, Xuanshuo; Hui, Weiwei; Lu, Junya; Li, Qianqian; Liu, Shuhui; Shang, Xuequn

    2018-03-19

    Gene Ontology (GO) is one of the most popular bioinformatics resources. In the past decade, Gene Ontology-based gene semantic similarity has been effectively used to model gene-to-gene interactions in multiple research areas. However, most existing semantic similarity approaches rely only on GO annotations and structure, or incorporate only local interactions in the co-functional network. This may lead to inaccurate GO-based similarity resulting from the incomplete GO topology structure and gene annotations. We present NETSIM2, a new network-based method that allows researchers to measure GO-based gene functional similarities by considering the global structure of the co-functional network with a random walk with restart (RWR)-based method, and by selecting the significant term pairs to reduce noise. Based on the Enzyme Commission (EC) number-based groups of yeast and Arabidopsis, evaluation tests show that NETSIM2 can enhance the accuracy of Gene Ontology-based gene functional similarity. Using NETSIM2 as an example, we found that the accuracy of semantic similarities can be significantly improved by effectively incorporating the global gene-to-gene interactions in the co-functional network, especially for species whose gene annotations in GO are far from complete.
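
    The random walk with restart used to capture global network structure follows the standard iteration p_{t+1} = (1 - r) * W * p_t + r * p_0 on a column-normalized adjacency matrix. The sketch below is a generic RWR implementation in NumPy with an arbitrary restart probability, not the NETSIM2 code itself.

```python
# Generic random walk with restart (RWR) on a co-functional network.
# The restart probability and convergence tolerance are illustrative choices.
import numpy as np

def rwr(adj: np.ndarray, seed_idx: int, restart: float = 0.5,
        tol: float = 1e-8, max_iter: int = 1000) -> np.ndarray:
    col_sums = adj.sum(axis=0)
    W = adj / np.where(col_sums == 0, 1.0, col_sums)   # column-normalize
    p0 = np.zeros(adj.shape[0])
    p0[seed_idx] = 1.0
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1.0 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p

# Usage: proximity = rwr(network_adjacency, seed_idx=gene_index)
# The resulting vector scores how strongly every gene is connected to the seed gene.
```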

  15. Resilience to climate change in a cross-scale tourism governance context: a combined quantitative-qualitative network analysis

    Directory of Open Access Journals (Sweden)

    Tobias Luthe

    2016-03-01

    Full Text Available Social systems in mountain regions are exposed to a number of disturbances, such as climate change. Calls for conceptual and practical approaches on how to address climate change have been taken up in the literature. The resilience concept as a comprehensive theory-driven approach to address climate change has only recently increased in importance. Limited research has been undertaken concerning tourism and resilience from a network governance point of view. We analyze tourism supply chain networks with regard to resilience to climate change at the municipal governance scale of three Alpine villages. We compare these with a planned destination management organization (DMO as a governance entity of the same three municipalities on the regional scale. Network measures are analyzed via a quantitative social network analysis (SNA) focusing on resilience from a tourism governance point of view. Results indicate higher resilience of the regional DMO because of a more flexible and diverse governance structure, more centralized steering of fast collective action, and improved innovative capacity, because of higher modularity and better core-periphery integration. Interpretations of quantitative results have been qualitatively validated by interviews and a workshop. We conclude that adaptation of tourism-dependent municipalities to gradual climate change should be dealt with at a regional governance scale and adaptation to sudden changes at a municipal scale. Overall, DMO building at a regional scale may enhance the resilience of tourism destinations, if the municipalities are well integrated.

  16. Application of Bayesian classifier for the diagnosis of dental pain.

    Science.gov (United States)

    Chattopadhyay, Subhagata; Davis, Rima M; Menezes, Daphne D; Singh, Gautam; Acharya, Rajendra U; Tamura, Toshio

    2012-06-01

    Toothache is the most common symptom encountered in dental practice. It is subjective, and hence there is a possibility of under- or over-diagnosis of oral pathologies where patients present with only toothache. Addressing this issue, the paper proposes a methodology to develop a Bayesian classifier for diagnosing some common dental diseases (D = 10) using a set of 14 pain parameters (P = 14). A questionnaire is developed using these variables and filled in by ten dentists (n = 10) with various levels of expertise. Each questionnaire consists of 40 real-world cases. A total of 14*10*10 combinations of data are hence collected. The reliability of the data (P and D sets) has been tested by measuring Cronbach's alpha. One-way ANOVA has been used to note the intra- and intergroup mean differences. Multiple linear regressions are used for extracting the significant predictors among the P and D sets as well as assessing the goodness of fit of the model. A naïve Bayesian classifier (NBC) is first designed to predict the presence or absence of diseases given a set of pain parameters. The most informative and highest-quality datasheet is used for training the NBC, and the remaining sheets are used for testing the performance of the classifier. A hill-climbing algorithm is used to design a learned Bayes classifier (LBC), which learns the conditional probability table (CPT) entries optimally. The developed LBC showed an average accuracy of 72%, which is clinically encouraging to the dentists.
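
    A generic naive Bayes classifier over binary pain parameters can be sketched as below. It uses scikit-learn's BernoulliNB on fabricated symptom vectors and only illustrates the classifier family; it is not the questionnaire data or the hill-climbing CPT learning described in the abstract.

```python
# Minimal sketch: a naive Bayes classifier over binary pain parameters.
# The feature vectors and labels are fabricated placeholders; the study itself
# uses 14 dentist-scored pain parameters and D = 10 dental diseases.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(200, 14))   # 200 cases x 14 binary pain parameters
y_train = rng.integers(0, 2, size=200)         # 1 = disease present, 0 = absent

nbc = BernoulliNB()
nbc.fit(X_train, y_train)

new_case = rng.integers(0, 2, size=(1, 14))
print(nbc.predict(new_case), nbc.predict_proba(new_case))
```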

  17. An ensemble self-training protein interaction article classifier.

    Science.gov (United States)

    Chen, Yifei; Hou, Ping; Manderick, Bernard

    2014-01-01

    Protein-protein interaction (PPI) is essential to understanding the fundamental processes governing cell biology. The mining and curation of PPI knowledge are critical for analyzing proteomics data. Hence it is desirable to automatically classify articles as PPI-related or not. In order to build interaction article classification systems, an annotated corpus is needed. However, it is usually the case that only a small number of labeled articles can be obtained manually, while a large number of unlabeled articles are available. By combining ensemble learning and semi-supervised self-training, an ensemble self-training interaction classifier called EST_IACer is designed to classify PPI-related articles based on a small number of labeled articles and a large number of unlabeled articles. A biological-background-based feature weighting strategy is extended using the category information from both labeled and unlabeled data. Moreover, a heuristic constraint is put forward to select optimal instances from the unlabeled data to further improve performance. Experimental results show that EST_IACer can classify PPI-related articles effectively and efficiently.
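
    The semi-supervised self-training idea described above (train on the small labeled set, pseudo-label the most confident unlabeled articles, retrain) can be sketched generically. The base classifier, confidence threshold, and feature representation below are assumptions, not the EST_IACer implementation, which additionally uses an ensemble of classifiers.

```python
# Generic self-training loop: iteratively add the most confident pseudo-labels.
# The threshold, round count, and logistic-regression base learner are
# illustrative choices.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=10):
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(max_rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        conf = proba.max(axis=1)
        confident = conf >= threshold
        if not confident.any():
            break
        # Move confidently pseudo-labeled articles into the labeled pool.
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
        X_unlab = X_unlab[~confident]
    return clf
```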

  18. Combined effect of CVR and penetration of DG on the voltage profile and losses of low-voltage secondary distribution networks

    Science.gov (United States)

    Bokhari, Abdullah

    Demarcations between traditional distribution power systems and distributed generation (DG) architectures are increasingly evolving as higher DG penetration is introduced into the system. The need for existing electric power systems (EPSs) to accommodate less restrictive interconnection policies while maintaining the reliability and performance of power delivery has been the major challenge for DG growth. This dissertation studies power quality, energy savings, and losses in a low-voltage distribution network under various DG penetration cases. A simulation platform that includes electric power system, distributed generation, and ZIP load models is implemented to determine the impact of DGs on power system steady-state performance and on the voltage profile of the customers/loads in the network under voltage reduction events. The investigation is designed to test the DG impact on the power system, starting with one type of DG and then moving on to multiple DG types distributed in a random case and a realistic/balanced case. The functionality of the proposed DG interconnection is designed to meet the basic requirements imposed by the various interconnection standards, most notably IEEE 1547, the public service commission, and local utility regulations. It is found that implementing DGs on the low-voltage secondary network would improve customers' voltage profiles and system losses, and would provide significant energy savings and economic benefits for utilities. In a network populated with DGs, the utility would have a uniform voltage profile at the customers' end, as the voltage profile becomes more concentrated around the targeted voltage level. The study further reinforced the concept that DG in a distribution network would improve voltage regulation, as a certain percentage reduction on the utility side would ensure a uniform percentage reduction seen by all customers and reduce the number of voltage violations.
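
    The ZIP load model mentioned above expresses load power as a polynomial in per-unit voltage, with constant-impedance (Z), constant-current (I), and constant-power (P) fractions. The sketch below is the standard textbook form with made-up coefficients, not the dissertation's simulation platform.

```python
# Standard ZIP load model: P(V) = P0 * (Zp*(V/V0)**2 + Ip*(V/V0) + Pp),
# where Zp + Ip + Pp = 1. Coefficients here are made-up examples.
def zip_load_power(v_pu: float, p0: float,
                   zp: float = 0.4, ip: float = 0.3, pp: float = 0.3) -> float:
    """Active power drawn by a ZIP load at per-unit voltage v_pu."""
    assert abs(zp + ip + pp - 1.0) < 1e-9, "ZIP fractions must sum to 1"
    return p0 * (zp * v_pu**2 + ip * v_pu + pp)

# Example of conservation voltage reduction (CVR): lowering voltage by 2.5%
# reduces the power drawn by the voltage-dependent part of the load.
print(zip_load_power(1.000, p0=100.0))   # nominal voltage
print(zip_load_power(0.975, p0=100.0))   # reduced voltage
```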

  19. Assessment of the impact of dimensionality reduction methods on information classes and classifiers for hyperspectral image classification by multiple classifier system

    Science.gov (United States)

    Damodaran, Bharath Bhushan; Nidamanuri, Rama Rao

    2014-06-01

    Identification of the appropriate combination of classifier and dimensionality reduction method has been a recurring task in various hyperspectral image classification scenarios. Image classification by a multiple classifier system (MCS) has been evolving as a promising method for enhancing the accuracy and reliability of image classification. Because of the diversity in the generalization capabilities of various dimensionality reduction methods, the classifier optimal for the problem, and hence the accuracy of image classification, varies considerably. This study assesses the impact of including multiple dimensionality reduction methods in the MCS architecture for the supervised classification of hyperspectral images for land cover classification. Multi-source airborne hyperspectral images acquired over five different sites covering a range of land cover categories have been classified by a multiple classifier system and compared against the classification results obtained from support vector machines (SVM). The MCS offers acceptable classification results across the images and sites when multiple dimensionality reduction methods are included in addition to different classifiers. Apart from offering acceptable classification results, the MCS shows about a 5% increase in overall accuracy compared to the SVM classifier across the hyperspectral images and sites. Results indicate the presence of dimensionality-reduction-method-specific empirical preferences by land cover category for certain classifiers, thereby demanding an MCS design that supports adaptive selection of classifiers and dimensionality reduction methods for hyperspectral image classification.
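
    One generic way to realize a multiple classifier system with several dimensionality reduction methods is to pair each reduction step with a base classifier and combine the members by majority voting. The scikit-learn pipeline below sketches that idea with arbitrary component counts and classifiers; it is not the architecture evaluated in the study.

```python
# Sketch: a small multiple classifier system where each member pairs a
# dimensionality reduction step with a classifier; outputs are combined by
# majority vote. Component counts and classifiers are illustrative choices.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

members = [
    ("pca10_svm", make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))),
    ("pca20_rf",  make_pipeline(PCA(n_components=20), RandomForestClassifier(n_estimators=200))),
    ("pca30_knn", make_pipeline(PCA(n_components=30), KNeighborsClassifier(n_neighbors=5))),
]
mcs = VotingClassifier(estimators=members, voting="hard")

# Usage (X: pixel spectra as rows, y: land cover labels):
# mcs.fit(X_train, y_train); y_pred = mcs.predict(X_test)
```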

  20. Pilot plant trial of the reflux classifier

    Energy Technology Data Exchange (ETDEWEB)

    Galvin, K.P.; Doroodchi, E.; Callen, A.M.; Lambert, N.; Pratten, S.J. [University of Newcastle, Callaghan, NSW (Australia). Dept. of Chemical Engineers

    2002-01-01

    The Ludowici LMPE Reflux Classifier is a new device designed for classifying and separating particles on the basis of size or density. This work presents a series of experimental results obtained from the first pilot-scale study of the reflux classifier (RC). The main focus of the investigation was to assess the particle gravity separation and throughput performance of the device. In this study, the classifier was used to separate coal and mineral matter less than 2 mm in size. The experimental results were then compared with performance data for a teetered bed separator (TBS). It was concluded that the classifier could offer excellent gravity separation at a remarkably high solids throughput of 47 t/m²h, more than 3 times higher than that of a TBS. The separation performance of the RC was also better, with significantly less variation in the D50 with particle size. A simple theoretical model providing an explanation of the separation performance is also presented.