WorldWideScience

Sample records for neural ensemble based

  1. Village Building Identification Based on Ensemble Convolutional Neural Networks

    Science.gov (United States)

    Guo, Zhiling; Chen, Qi; Xu, Yongwei; Shibasaki, Ryosuke; Shao, Xiaowei

    2017-01-01

In this study, we present the Ensemble Convolutional Neural Network (ECNN), an elaborate CNN framework formulated by ensembling state-of-the-art CNN models, to identify village buildings from open high-resolution remote sensing (HRRS) images. First, to optimize and mine the capability of CNN for village mapping and to ensure compatibility with our classification targets, a few state-of-the-art models were carefully optimized and enhanced based on a series of rigorous analyses and evaluations. Second, rather than directly implementing building identification by using these models, we exploited most of their advantages by ensembling their feature extractor parts into a stronger model called ECNN based on the multiscale feature learning method. Finally, the generated ECNN was applied to a pixel-level classification framework to implement object identification. The proposed method can serve as a viable tool for village building identification with high accuracy and efficiency. The experimental results obtained from the test area in Savannakhet province, Laos, prove that the proposed ECNN model significantly outperforms existing methods, improving overall accuracy from 96.64% to 99.26%, and kappa from 0.57 to 0.86. PMID:29084154

  2. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    Science.gov (United States)

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

To improve prediction accuracy, a Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on fuzzy clustering analysis, which ensures the diversity as well as the accuracy of the individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of the individual NNs, a GA is used to optimize the cluster centers. Empirical results in predicting the carbon flux of Duke Forest reveal that GA-ANNE can predict carbon flux more accurately than a Radial Basis Function Neural Network (RBFNN), a Bagging NN ensemble, and ANNE. © 2007 IEEE.
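The core idea above — letting a genetic algorithm tune how much each member network contributes to the ensemble — can be sketched with a toy evolutionary search over ensemble weights. This is a minimal illustration, not the paper's GA-ANNE: the "member networks" are made-up prediction lists, and the search is a bare-bones elitist GA.

```python
import random

random.seed(0)

# Toy setting: three "member networks" predict the same four targets.
targets = [1.0, 2.0, 3.0, 4.0]
member_preds = [
    [1.1, 2.2, 2.9, 4.3],   # member 0
    [0.8, 1.9, 3.2, 3.8],   # member 1
    [1.5, 2.5, 3.5, 4.5],   # member 2 (biased high)
]

def ensemble_error(weights):
    """Mean squared error of the weight-normalized ensemble prediction."""
    total = sum(weights)
    w = [x / total for x in weights]
    err = 0.0
    for i, t in enumerate(targets):
        pred = sum(w[m] * member_preds[m][i] for m in range(len(w)))
        err += (pred - t) ** 2
    return err / len(targets)

# Elitist evolutionary search over non-negative ensemble weights.
# Seed the population with uniform weights so the final result can
# never be worse than simple averaging.
population = [[1.0, 1.0, 1.0]] + [
    [random.random() for _ in range(3)] for _ in range(19)
]
for generation in range(50):
    population.sort(key=ensemble_error)
    parents = population[:5]                                  # keep the fittest
    children = []
    while len(children) < 15:
        a, b = random.sample(parents, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]           # crossover
        child = [max(1e-6, x + random.gauss(0, 0.1)) for x in child]  # mutation
        children.append(child)
    population = parents + children

best = min(population, key=ensemble_error)
best_mse = ensemble_error(best)
uniform_mse = ensemble_error([1.0, 1.0, 1.0])
```

Because the uniform-weight individual is in the initial population and selection is elitist, the optimized ensemble is guaranteed to do at least as well as plain averaging on the training targets.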

  3. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

We propose several means for improving the performance and training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar networks.

  4. Forecasting crude oil price with an EMD-based neural network ensemble learning paradigm

    International Nuclear Information System (INIS)

    Yu, Lean; Wang, Shouyang; Lai, Kin Keung

    2008-01-01

In this study, an empirical mode decomposition (EMD) based neural network ensemble learning paradigm is proposed for world crude oil spot price forecasting. For this purpose, the original crude oil spot price series were first decomposed into a finite, and often small, number of intrinsic mode functions (IMFs). Then a three-layer feed-forward neural network (FNN) model was used to model each of the extracted IMFs, so that the tendencies of these IMFs could be accurately predicted. Finally, the prediction results of all IMFs are combined with an adaptive linear neural network (ALNN), to formulate an ensemble output for the original crude oil price series. For verification and testing, two main crude oil price series, West Texas Intermediate (WTI) crude oil spot price and Brent crude oil spot price, are used to test the effectiveness of the proposed EMD-based neural network ensemble learning methodology. Empirical results obtained demonstrate the attractiveness of the proposed EMD-based neural network ensemble learning paradigm. (author)
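The decompose-model-recombine workflow above can be sketched in a few lines. This is only a structural illustration: a moving-average trend/residual split stands in for the EMD sifting that produces IMFs, and last-value persistence stands in for the per-component FNNs and the ALNN combiner.

```python
import math

# Toy series: slow trend plus oscillation, mimicking a price series.
series = [0.05 * t + math.sin(0.5 * t) for t in range(40)]

def moving_average(xs, k):
    """Crude trend extractor: window average over +/- k neighbours."""
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - k), min(len(xs), i + k + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

# Stand-in "decomposition" into two additive components (real EMD
# would yield several IMFs plus a residue).
trend = moving_average(series, 5)
residual = [x - t for x, t in zip(series, trend)]

# Key property exploited by the paradigm: components sum back to the
# original series, so per-component forecasts can simply be recombined.
reconstruction = [a + b for a, b in zip(trend, residual)]

# "Model" each component separately (persistence stands in for an FNN),
# then combine the component forecasts into the ensemble output.
trend_forecast = trend[-1]
residual_forecast = residual[-1]
ensemble_forecast = trend_forecast + residual_forecast
```

The additivity check is the point: whatever model forecasts each component, summing the component forecasts yields a forecast for the original series.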

  5. Decoding of Human Movements Based on Deep Brain Local Field Potentials Using Ensemble Neural Networks

    Directory of Open Access Journals (Sweden)

    Mohammad S. Islam

    2017-01-01

Decoding neural activities related to voluntary and involuntary movements is fundamental to understanding human brain motor circuits and neuromotor disorders, and can lead to the development of neuromotor prosthetic devices for neurorehabilitation. This study explores using recorded deep brain local field potentials (LFPs) for robust movement decoding in Parkinson’s disease (PD) and Dystonia patients. The LFP data from voluntary movement activities, such as left and right hand index finger clicking, were recorded from patients who underwent surgery for implantation of deep brain stimulation electrodes. Movement-related LFP signal features were extracted by computing instantaneous power related to motor response in different neural frequency bands. An innovative neural network ensemble classifier has been proposed and developed for accurate prediction of finger movement and its forthcoming laterality. The ensemble classifier contains three base neural network classifiers, namely, feedforward, radial basis, and probabilistic neural networks. The majority voting rule is used to fuse the decisions of the three base classifiers into the final decision of the ensemble classifier. The overall decoding performance reaches a level of agreement (kappa value) of about 0.729±0.16 for decoding movement from the resting state and about 0.671±0.14 for decoding left and right visually cued movements.
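The fusion step described above is a plain majority vote over the three base classifiers' decisions. A minimal sketch, with made-up decision lists standing in for the trained feedforward, radial basis, and probabilistic networks:

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse base-classifier decisions by majority vote."""
    return Counter(decisions).most_common(1)[0][0]

# Per-trial decisions of the three base classifiers (illustrative values).
ffnn_out = ["left",  "right", "left", "rest"]
rbf_out  = ["left",  "left",  "left", "rest"]
pnn_out  = ["right", "right", "left", "left"]

# Ensemble decision for each trial.
fused = [majority_vote(votes) for votes in zip(ffnn_out, rbf_out, pnn_out)]
```

With three voters and a binary-or-ternary label set, at least two classifiers must agree for a label to win, which is what gives the ensemble its robustness to a single classifier's error.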

  6. A comparative study of breast cancer diagnosis based on neural network ensemble via improved training algorithms.

    Science.gov (United States)

    Azami, Hamed; Escudero, Javier

    2015-08-01

Breast cancer is one of the most common types of cancer in women all over the world. Early diagnosis of this kind of cancer can significantly increase the chances of long-term survival. Since diagnosis of breast cancer is a complex problem, neural network (NN) approaches have been used as a promising solution. Considering the low speed of the back-propagation (BP) algorithm to train a feed-forward NN, we consider a number of improved NN trainings for the Wisconsin breast cancer dataset: BP with momentum, BP with adaptive learning rate, BP with adaptive learning rate and momentum, Polak-Ribière conjugate gradient algorithm (CGA), Fletcher-Reeves CGA, Powell-Beale CGA, scaled CGA, resilient BP (RBP), one-step secant and quasi-Newton methods. An NN ensemble, which is a learning paradigm to combine a number of NN outputs, is used to improve the accuracy of the classification task. Results demonstrate that NN ensemble-based classification methods have better performance than NN-based algorithms. The highest overall average accuracy is 97.68%, obtained by the NN ensemble trained with RBP under the 50%-50% training-test evaluation method.
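The first of the improved trainings listed, BP with momentum, modifies plain gradient descent by accumulating a velocity term. A minimal one-parameter illustration on a toy quadratic loss (not the Wisconsin-dataset setup; learning rate and momentum values are arbitrary):

```python
# Toy loss f(w) = (w - 3)^2, so the optimum is w = 3.
def grad(w):
    return 2.0 * (w - 3.0)

w, velocity = 0.0, 0.0
lr, momentum = 0.1, 0.9
for _ in range(200):
    # Momentum update: the velocity accumulates past gradients, which
    # damps oscillation and speeds convergence over plain BP steps.
    velocity = momentum * velocity - lr * grad(w)
    w += velocity
```

In a real network the same two-line update is applied element-wise to every weight, with `grad` supplied by back-propagation.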

  7. Finger language recognition based on ensemble artificial neural network learning using armband EMG sensors.

    Science.gov (United States)

    Kim, Seongjung; Kim, Jongman; Ahn, Soonjae; Kim, Youngho

    2018-04-18

Deaf people use sign or finger languages for communication, but these methods of communication are very specialized. For this reason, the deaf can suffer from social inequalities and financial losses due to their communication restrictions. In this study, we developed a finger language recognition algorithm based on an ensemble artificial neural network (E-ANN) using an armband system with 8-channel electromyography (EMG) sensors. The developed algorithm was composed of signal acquisition, filtering, segmentation, feature extraction and an E-ANN based classifier, and was evaluated on the Korean finger language (14 consonants, 17 vowels and 7 numbers) in 17 subjects. The E-ANN was categorized according to the number of classifiers (1 to 10) and the size of the training data (50 to 1500). The accuracy of the E-ANN-based classifier was obtained by 5-fold cross validation and compared with an artificial neural network (ANN)-based classifier. As the number of classifiers (1 to 8) and the size of the training data (50 to 300) increased, the average accuracy of the E-ANN-based classifier increased and the standard deviation decreased. The optimal E-ANN comprised eight classifiers with a training-data size of 300, and its accuracy was significantly higher than that of the general ANN.

  8. Neural network ensemble based supplier evaluation model in line with nuclear safety conditions

    International Nuclear Information System (INIS)

    Wang Yonggang; Chang Baosheng

    2006-01-01

Nuclear safety is the most critical target for nuclear power plant operation. Besides the rigid operation procedures established, evaluation of suppliers working with plants can be another important aspect. Selection and evaluation of suppliers can be approached through qualitative analysis and quantitative management. The indicators involved are coupled with each other in a very complicated manner, and therefore the relevant data show strongly non-linear characteristics. The article is based on research and analysis of the real conditions of Daya Bay nuclear power plant operation management. Through study and analysis of information from home and abroad, and with reference to neural network ensemble technology, the supplier evaluation system and model are established as illustrated within the paper, thus heightening the objectivity of supplier selection. (authors)

  9. Intelligent and robust prediction of short term wind power using genetic programming based ensemble of neural networks

    International Nuclear Information System (INIS)

    Zameer, Aneela; Arshad, Junaid; Khan, Asifullah; Raja, Muhammad Asif Zahoor

    2017-01-01

Highlights: • Genetic programming based ensemble of neural networks is employed for short term wind power prediction. • Proposed predictor shows resilience against abrupt changes in weather. • Genetic programming evolves nonlinear mapping between meteorological measures and wind-power. • Proposed approach gives mathematical expressions of wind power to its independent variables. • Proposed model shows relatively accurate and steady wind-power prediction performance. - Abstract: The inherent instability of wind power production leads to critical problems for smooth power generation from wind turbines, which then requires an accurate forecast of wind power. In this study, an effective short term wind power prediction methodology is presented, which uses an intelligent ensemble regressor that comprises Artificial Neural Networks and Genetic Programming. In contrast to existing series based combinations of wind power predictors, whereby the error or variation in the leading predictor is propagated downstream to the next predictors, the proposed intelligent ensemble predictor avoids this shortcoming by introducing a Genetic Programming based semi-stochastic combination of neural networks. It is observed that the decisions of the individual base regressors may vary due to the frequent and inherent fluctuations in the atmospheric conditions and thus meteorological properties. The novelty of the reported work lies in creating an ensemble to generate an intelligent, collective and robust decision space and thereby avoiding large errors due to the sensitivity of the individual wind predictors. The proposed ensemble based regressor, Genetic Programming based ensemble of Artificial Neural Networks, has been implemented and tested on data taken from five different wind farms located in Europe. Obtained numerical results of the proposed model in terms of various error measures are compared with the recent artificial intelligence based strategies to demonstrate the

  10. Neural Network Ensemble Based Approach for 2D-Interval Prediction of Solar Photovoltaic Power

    Directory of Open Access Journals (Sweden)

    Mashud Rana

    2016-10-01

Solar energy generated from PhotoVoltaic (PV) systems is one of the most promising types of renewable energy. However, it is highly variable as it depends on the solar irradiance and other meteorological factors. This variability creates difficulties for the large-scale integration of PV power in the electricity grid and requires accurate forecasting of the electricity generated by PV systems. In this paper we consider 2D-interval forecasts, where the goal is to predict summary statistics for the distribution of the PV power values in a future time interval. 2D-interval forecasts have been recently introduced, and they are more suitable than point forecasts for applications where the predicted variable has a high variability. We propose a method called NNE2D that combines variable selection based on mutual information and an ensemble of neural networks, to compute 2D-interval forecasts, where the two interval boundaries are expressed in terms of percentiles. NNE2D was evaluated for univariate prediction of Australian solar PV power data for two years. The results show that it is a promising method, outperforming persistence baselines and other methods used for comparison in terms of accuracy and coverage probability.
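The interval boundaries described above come from percentiles of the ensemble members' predictions. A minimal sketch, with made-up member forecasts and a hand-rolled linear-interpolation percentile (the percentile levels 10/90 are illustrative, not the paper's choice):

```python
def percentile(values, q):
    """Linear-interpolation percentile, q in [0, 100]."""
    xs = sorted(values)
    pos = (len(xs) - 1) * q / 100.0
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (pos - lo)

# Each ensemble member predicts total PV power (kW) for the same
# future interval; values here are illustrative only.
member_forecasts = [4.8, 5.1, 5.0, 4.6, 5.4, 5.2, 4.9, 5.0, 5.3, 4.7]

# The 2D-interval forecast reports a lower and an upper percentile
# of the member predictions as the interval boundaries.
lower = percentile(member_forecasts, 10)
upper = percentile(member_forecasts, 90)
```

Spread among the ensemble members directly widens the reported interval, which is what lets the interval reflect forecast uncertainty for a highly variable quantity.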

  11. Neural network ensemble based CAD system for focal liver lesions from B-mode ultrasound.

    Science.gov (United States)

    Virmani, Jitendra; Kumar, Vinod; Kalra, Naveen; Khandelwal, Niranjan

    2014-08-01

A neural network ensemble (NNE) based computer-aided diagnostic (CAD) system to assist radiologists in differential diagnosis between focal liver lesions (FLLs), including (1) typical and atypical cases of Cyst, hemangioma (HEM) and metastatic carcinoma (MET) lesions, (2) small and large hepatocellular carcinoma (HCC) lesions, along with (3) normal (NOR) liver tissue, is proposed in the present work. Expert radiologists visualize the textural characteristics of regions inside and outside the lesions to differentiate between different FLLs; accordingly, texture features computed from inside-lesion regions of interest (IROIs) and texture ratio features computed from IROIs and surrounding-lesion regions of interest (SROIs) are taken as input. Principal component analysis (PCA) is used for reducing the dimensionality of the feature space before classifier design. The first step of the classification module consists of a five-class PCA-NN based primary classifier which yields probability outputs for five liver image classes. The second step of the classification module consists of ten binary PCA-NN based secondary classifiers for NOR/Cyst, NOR/HEM, NOR/HCC, NOR/MET, Cyst/HEM, Cyst/HCC, Cyst/MET, HEM/HCC, HEM/MET and HCC/MET classes. The probability outputs of the five-class PCA-NN based primary classifier are used to determine the first two most probable classes for a test instance, based on which it is directed to the corresponding binary PCA-NN based secondary classifier for crisp classification between two classes. By including the second step of the classification module, classification accuracy increases from 88.7 % to 95 %. The promising results obtained by the proposed system indicate its usefulness to assist radiologists in differential diagnosis of FLLs.
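The two-step routing logic described above is simple to sketch: the primary classifier's probabilities pick the two most likely classes, and that pair selects a dedicated binary classifier for the crisp decision. Everything below is hypothetical — the probabilities, the feature vector, and the stubbed binary classifiers are stand-ins for the trained PCA-NN models.

```python
# Step 1: primary classifier's probability outputs for one test instance
# (illustrative values for the five liver image classes).
primary_probs = {"NOR": 0.05, "Cyst": 0.40, "HEM": 0.35, "HCC": 0.15, "MET": 0.05}

# Pick the two most probable classes from the primary stage.
top2 = sorted(primary_probs, key=primary_probs.get, reverse=True)[:2]
pair = tuple(sorted(top2))

# Step 2: pool of pairwise secondary classifiers. In the real system each
# is a trained binary PCA-NN; here they are stubbed with constant outputs.
binary_classifiers = {
    ("Cyst", "HEM"): lambda features: "Cyst",
    ("HCC", "MET"): lambda features: "MET",
    # ... one entry per class pair, ten in total in the paper ...
}

# Route the instance (a dummy feature vector) to the selected classifier.
final_label = binary_classifiers[pair]([0.1, 0.2, 0.3])
```

The design point is that each secondary classifier only ever has to separate two classes, a much easier boundary than the full five-class problem.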

  12. [Computer aided diagnosis model for lung tumor based on ensemble convolutional neural network].

    Science.gov (United States)

    Wang, Yuanyuan; Zhou, Tao; Lu, Huiling; Wu, Cuiying; Yang, Pengfei

    2017-08-01

The convolutional neural network (CNN) can be used for computer-aided diagnosis of lung tumors with positron emission tomography (PET)/computed tomography (CT), which can provide accurate quantitative analysis to compensate for visual inertia and defects in gray-scale sensitivity, and help doctors diagnose accurately. First, a parameter migration method is used to build three CNNs (CT-CNN, PET-CNN, and PET/CT-CNN) for lung tumor recognition in CT, PET, and PET/CT images, respectively. Then, taking CT-CNN as the example, appropriate model parameters for CNN training are obtained through analysis of the influence of model parameters such as epochs, batch size and image scale on recognition rate and training time. Finally, the three single CNNs are used to construct an ensemble CNN, lung tumor PET/CT recognition is completed through a relative majority vote method, and the performance of the ensemble CNN and the single CNNs is compared. The experiment results show that the ensemble CNN is better than a single CNN for computer-aided diagnosis of lung tumors.

  13. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    Science.gov (United States)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows as compared to conceptual or physics based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that ANN point predictions lack reliability since the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to a neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived
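Objectives (ii) and (iii) above reduce to two easily computed scores for any candidate ensemble: the fraction of observations the interval covers, and the interval's average width. A minimal sketch with made-up bounds and observations:

```python
# Per-time-step observations and ensemble-derived prediction bounds
# (illustrative values, not the Indian-basin case study data).
observed = [10.0, 12.5, 11.0, 14.0, 13.0]
lower_b  = [ 9.0, 11.0, 10.5, 14.5, 12.0]
upper_b  = [11.0, 13.0, 12.0, 15.5, 14.0]

# Objective (ii): percentage of observations inside the interval.
covered = sum(lo <= y <= hi for y, lo, hi in zip(observed, lower_b, upper_b))
coverage_pct = 100.0 * covered / len(observed)

# Objective (iii): average interval width.
avg_width = sum(hi - lo for lo, hi in zip(lower_b, upper_b)) / len(observed)
```

The two scores pull in opposite directions — widening the interval raises coverage but worsens width — which is why the calibration is posed as a multi-objective optimization.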

  14. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous ... process by training a number of neural networks. .... Matlab® version 6.1 was employed for building principal component ... provide a fair simulation of calibration data set with some degree.

  15. Pattern Recognition of Momentary Mental Workload Based on Multi-Channel Electrophysiological Data and Ensemble Convolutional Neural Networks.

    Science.gov (United States)

    Zhang, Jianhua; Li, Sunan; Wang, Rubin

    2017-01-01

In this paper, we deal with the Mental Workload (MWL) classification problem based on measured physiological data. First, we discussed the optimal depth (i.e., the number of hidden layers) and parameter optimization algorithms for the Convolutional Neural Networks (CNN). The base CNNs designed were tested according to five classification performance indices, namely Accuracy, Precision, F-measure, G-mean, and required training time. Then we developed an Ensemble Convolutional Neural Network (ECNN) to enhance the accuracy and robustness of the individual CNN model. For the ECNN design, three model aggregation approaches (weighted averaging, majority voting and stacking) were examined and a resampling strategy was used to enhance the diversity of the individual CNN models. The results of the MWL classification performance comparison indicated that the proposed ECNN framework can effectively improve MWL classification performance and features entirely automatic feature extraction and MWL classification, when compared with traditional machine learning methods.
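Two of the three aggregation approaches named above can be sketched directly on the models' class-probability outputs (stacking additionally needs a trained meta-learner, so it is omitted here). The probability vectors and model weights below are illustrative only.

```python
from collections import Counter

# Class-probability outputs of three base CNNs for one sample,
# over three workload classes (illustrative values).
probs = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.5, 0.4, 0.1],
]
model_weights = [0.5, 0.2, 0.3]  # e.g. proportional to validation accuracy

# Weighted averaging: combine probability vectors, then take the argmax.
avg = [sum(w * p[c] for w, p in zip(model_weights, probs)) for c in range(3)]
weighted_label = avg.index(max(avg))

# Majority voting: each model votes with its own argmax.
votes = [p.index(max(p)) for p in probs]
voted_label = Counter(votes).most_common(1)[0][0]
```

Weighted averaging uses the full probability vectors (so a confident minority model can sway the result), while majority voting discards confidence and counts only hard decisions; stacking instead feeds the base outputs into a second-level learner.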

  16. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    Science.gov (United States)

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  17. Detecting Malware with an Ensemble Method Based on Deep Neural Network

    Directory of Open Access Journals (Sweden)

    Jinpei Yan

    2018-01-01

Malware detection plays a crucial role in computer security. Recent research mainly uses machine learning based methods that rely heavily on domain knowledge for manually extracting malicious features. In this paper, we propose MalNet, a novel malware detection method that learns features automatically from the raw data. Concretely, we first generate a grayscale image from the malware file, meanwhile extracting its opcode sequences with the decompilation tool IDA. Then MalNet uses CNN and LSTM networks to learn from the grayscale image and opcode sequence, respectively, and takes a stacking ensemble for malware classification. We perform experiments on more than 40,000 samples including 20,650 benign files collected from online software providers and 21,736 malware samples provided by Microsoft. The evaluation result shows that MalNet achieves 99.88% validation accuracy for malware detection. In addition, we also perform a malware family classification experiment on 9 malware families to compare MalNet with other related works, in which MalNet outperforms most related works with 99.36% detection accuracy and achieves a considerable speed-up in detection efficiency compared with two state-of-the-art results on the Microsoft malware dataset.
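The "grayscale image from the malware file" step is a straightforward byte-to-pixel mapping: each byte becomes one intensity in 0-255, laid out in fixed-width rows. A minimal sketch (the row width and the sample bytes are arbitrary; real pipelines typically choose the width from the file size):

```python
def bytes_to_grayscale(data, width):
    """Render a byte stream as rows of `width` pixel intensities (0-255),
    zero-padding the final row."""
    pixels = list(data)
    pad = (-len(pixels)) % width
    pixels += [0] * pad
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

sample = bytes(range(10))        # stand-in for a malware file's raw bytes
image = bytes_to_grayscale(sample, 4)
```

The resulting 2-D array can be fed to a CNN exactly like any grayscale image, which is what lets image-classification architectures operate on raw binaries.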

  18. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2015-10-01

In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, to overcome the disadvantages of large size, contact measurement and low identification rate of traditional detectors. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is conducted on the sound. End-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, which is calculated from the correlation of the first IMF with the others, is introduced to select essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features and PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
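The IMF-selection step above can be sketched as: correlate the first IMF with each of the others, average those correlations, and keep the IMFs whose correlation meets that average. Everything below is illustrative — the synthetic "IMFs" and the keep-above-the-mean rule are stand-ins for the paper's actual signals and threshold.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

t = [i * 0.2 for i in range(50)]
imfs = [
    [math.sin(3 * x) for x in t],                       # IMF 1 (reference)
    [math.sin(3 * x) + 0.1 * math.cos(x) for x in t],   # strongly related
    [math.cos(7 * x) for x in t],                       # weakly related
]

# Correlation of the first IMF with each of the others.
corrs = [abs(pearson(imfs[0], imf)) for imf in imfs[1:]]

# Average correlation as the selection threshold; keep IMFs at or above it.
threshold = sum(corrs) / len(corrs)
selected = [i + 1 for i, c in enumerate(corrs) if c >= threshold]
```

Here `selected` holds the (1-based, after the reference) indices of the essential IMFs; only the component that tracks the reference oscillation survives.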

  19. One-day-ahead streamflow forecasting via super-ensembles of several neural network architectures based on the Multi-Level Diversity Model

    Science.gov (United States)

    Brochero, Darwin; Hajji, Islem; Pina, Jasson; Plana, Queralt; Sylvain, Jean-Daniel; Vergeynst, Jenna; Anctil, Francois

    2015-04-01

Theories about generalization error with ensembles are mainly based on the diversity concept, which promotes resorting to many members of different properties to support mutually agreeable decisions. Kuncheva (2004) proposed the Multi Level Diversity Model (MLDM) to promote diversity in model ensembles, combining different data subsets, input subsets, models, and parameters, and including a combiner level in order to optimize the final ensemble. This work tests the hypothesis about the minimisation of the generalization error with ensembles of Neural Network (NN) structures. We used the MLDM to evaluate two different scenarios: (i) ensembles from a same NN architecture, and (ii) a super-ensemble built by a combination of sub-ensembles of many NN architectures. The time series used correspond to the 12 basins of the MOdel Parameter Estimation eXperiment (MOPEX) project that were used by Duan et al. (2006) and Vos (2013) as benchmark. Six architectures are evaluated: FeedForward NN (FFNN) trained with the Levenberg Marquardt algorithm (Hagan et al., 1996), FFNN trained with SCE (Duan et al., 1993), Recurrent NN trained with a complex method (Weins et al., 2008), Dynamic NARX NN (Leontaritis and Billings, 1985), Echo State Network (ESN), and leak integrator neuron (L-ESN) (Lukosevicius and Jaeger, 2009). Each architecture separately performs Input Variable Selection (IVS) according to a forward stepwise selection (Anctil et al., 2009) using mean square error as the objective function. Post-processing by Predictor Stepwise Selection (PSS) of the super-ensemble has been done following the method proposed by Brochero et al. (2011). IVS results showed that the lagged stream flow, lagged precipitation, and Standardized Precipitation Index (SPI) (McKee et al., 1993) were the most relevant variables. They were respectively selected as one of the first three selected variables in 66, 45, and 28 of the 72 scenarios. A relationship between aridity index (Arora, 2002) and NN

  20. Finding diversity for building one-day ahead Hydrological Ensemble Prediction System based on artificial neural network stacks

    Science.gov (United States)

    Brochero, Darwin; Anctil, Francois; Gagné, Christian; López, Karol

    2013-04-01

In this study, we addressed the application of Artificial Neural Networks (ANN) in the context of Hydrological Ensemble Prediction Systems (HEPS). Such systems have become popular in recent years as a tool to include the forecast uncertainty in the decision-making process. HEPS fundamentally considers the uncertainty cascade model [4] for uncertainty representation. Analogously, the machine learning community has proposed models of multiple classifier systems that take into account the variability in datasets, input space, model structures, and parametric configuration [3]. This approach is based primarily on the well-known "no free lunch theorem" [1]. Consequently, we propose a framework based on two separate but complementary topics: data stratification and input variable selection (IVS). Thus, we promote an ANN prediction stack in which each predictor is trained on input spaces defined by applying IVS to different stratified sub-samples. All this, added to the inherent variability of classical ANN optimization, leads us to our ultimate goal: diversity in the prediction, defined as the complementarity of the individual predictors. The stratification application on the 12 basins used in this study, which originate from the second and third workshops of the MOPEX project [2], shows that the informativeness of the data is far more important than the quantity used for ANN training. Additionally, the input space variability leads to ANN stacks that outperform an ANN stack model trained with 100% of the available information but with a random selection of the dataset used in the early stopping method (scenario R100P). The results show that, from a deterministic view, the main advantage lies in the efficient selection of the training information, which is an equally important concept for the calibration of conceptual hydrological models. On the other hand, the diversity achieved is reflected in a substantial improvement in the scores that define the

  1. An Ensemble of Neural Networks for Stock Trading Decision Making

    Science.gov (United States)

    Chang, Pei-Chann; Liu, Chen-Hao; Fan, Chin-Yuan; Lin, Jun-Lin; Lai, Chih-Ming

Detection of stock turning signals is a very interesting subject arising in numerous financial and economic planning problems. In this paper, an Ensemble Neural Network system with Intelligent Piecewise Linear Representation for stock turning point detection is presented. The Intelligent Piecewise Linear Representation method is able to generate numerous stock turning signals from the historical database; the Ensemble Neural Network system is then applied to learn these patterns and retrieve similar stock price patterns from historical data for training. These turning signals represent short-term and long-term trading signals for selling or buying stocks in the market, and are applied to forecast future turning points from the set of test data. Experimental results demonstrate that the hybrid system can make a significant and consistent amount of profit when compared with other approaches using stock data available in the market.

  2. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    Marquardt algorithm by varying conditions such as inputs, hidden neurons, initialization, training sets and random Gaussian noise injection to ... Several such ensembles formed the population which was evolved to generate the fittest ensemble.

  3. Social behaviour shapes hypothalamic neural ensemble representations of conspecific sex

    Science.gov (United States)

    Remedios, Ryan; Kennedy, Ann; Zelikowsky, Moriel; Grewe, Benjamin F.; Schnitzer, Mark J.; Anderson, David J.

    2017-10-01

All animals possess a repertoire of innate (or instinctive) behaviours, which can be performed without training. Whether such behaviours are mediated by anatomically distinct and/or genetically specified neural pathways remains unknown. Here we report that neural representations within the mouse hypothalamus that underlie innate social behaviours are shaped by social experience. Oestrogen receptor 1-expressing (Esr1+) neurons in the ventrolateral subdivision of the ventromedial hypothalamus (VMHvl) control mating and fighting in rodents. We used microendoscopy to image Esr1+ neuronal activity in the VMHvl of male mice engaged in these social behaviours. In sexually and socially experienced adult males, divergent and characteristic neural ensembles represented male versus female conspecifics. However, in inexperienced adult males, male and female intruders activated overlapping neuronal populations. Sex-specific neuronal ensembles gradually separated as the mice acquired social and sexual experience. In mice permitted to investigate but not to mount or attack conspecifics, ensemble divergence did not occur. However, 30 minutes of sexual experience with a female was sufficient to promote the separation of male and female ensembles and to induce an attack response 24 h later. These observations uncover an unexpected social experience-dependent component to the formation of hypothalamic neural assemblies controlling innate social behaviours. More generally, they reveal plasticity and dynamic coding in an evolutionarily ancient deep subcortical structure that is traditionally viewed as a ‘hard-wired’ system.

  4. Bayesian model ensembling using meta-trained recurrent neural networks

    NARCIS (Netherlands)

    Ambrogioni, L.; Berezutskaya, Y.; Güçlü, U.; Borne, E.W.P. van den; Güçlütürk, Y.; Gerven, M.A.J. van; Maris, E.G.G.

    2017-01-01

    In this paper we demonstrate that a recurrent neural network meta-trained on an ensemble of arbitrary classification tasks can be used as an approximation of the Bayes optimal classifier. This result is obtained by relying on the framework of e-free approximate Bayesian inference, where the Bayesian

  5. Data Pre-Analysis and Ensemble of Various Artificial Neural Networks for Monthly Streamflow Forecasting

    Directory of Open Access Journals (Sweden)

    Jianzhong Zhou

    2018-05-01

    This paper introduces three artificial neural network (ANN) architectures for monthly streamflow forecasting: a radial basis function network, an extreme learning machine, and the Elman network. Three ensemble techniques, a simple average ensemble, a weighted average ensemble, and an ANN-based ensemble, were used to combine the outputs of the individual ANN models. The objective was to highlight the performance of the general regression neural network-based ensemble technique (GNE) through an improvement of monthly streamflow forecasting accuracy. Before the construction of an ANN model, data preanalysis techniques, such as the empirical wavelet transform (EWT), were exploited to eliminate the oscillations of the streamflow series. Additionally, a theory of chaos phase space reconstruction was used to select the most relevant and important input variables for forecasting. The proposed GNE ensemble model has been applied to the mean monthly streamflow observation data from the Wudongde hydrological station in the Jinsha River Basin, China. Comparisons and analysis in this study have demonstrated that the denoised streamflow time series was less disordered and unsystematic than was suggested by the original time series according to chaos theory. Thus, EWT can be adopted as an effective data preanalysis technique for the prediction of monthly streamflow. Concurrently, the GNE performed better when compared with other ensemble techniques.
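    As a concrete illustration of the two baseline combiners this record names, a simple average ensemble and a weighted average ensemble, here is a minimal sketch. The member forecasts and the error-based weights are made-up numbers, not values from the paper.

```python
# Hedged sketch: simple and inverse-error weighted averaging of member forecasts.

def simple_average(forecasts):
    """Combine member forecasts by an unweighted mean at each time step."""
    n = len(forecasts)
    return [sum(step) / n for step in zip(*forecasts)]

def weighted_average(forecasts, errors):
    """Weight each member inversely to its (hypothetical) historical error."""
    inv = [1.0 / e for e in errors]
    total = sum(inv)
    weights = [w / total for w in inv]
    return [sum(w * f for w, f in zip(weights, step)) for step in zip(*forecasts)]

# Three hypothetical two-month forecasts, standing in for the RBF, ELM and Elman nets:
members = [[100.0, 120.0], [110.0, 130.0], [90.0, 110.0]]
print(simple_average(members))                       # [100.0, 120.0]
print(weighted_average(members, [1.0, 1.0, 2.0]))    # [102.0, 122.0]
```

    An ANN-based combiner (the GNE of the abstract) would replace the fixed weights with a trained regression network over the member outputs.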

  6. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Directory of Open Access Journals (Sweden)

    Yoonsik Shim

    2016-10-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  7. Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP.

    Science.gov (United States)

    Shim, Yoonsik; Philippides, Andrew; Staras, Kevin; Husbands, Phil

    2016-10-01

    We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.

  8. A Comparison Study on Rule Extraction from Neural Network Ensembles, Boosted Shallow Trees, and SVMs

    OpenAIRE

    Bologna, Guido; Hayashi, Yoichi

    2018-01-01

    One way to make the knowledge stored in an artificial neural network more intelligible is to extract symbolic rules. However, producing rules from Multilayer Perceptrons (MLPs) is an NP-hard problem. Many techniques have been introduced to generate rules from single neural networks, but very few were proposed for ensembles. Moreover, experiments were rarely assessed by 10-fold cross-validation trials. In this work, based on the Discretized Interpretable Multilayer Perceptron (DIMLP), experime...

  9. An Ensemble of 2D Convolutional Neural Networks for Tumor Segmentation

    DEFF Research Database (Denmark)

    Lyksborg, Mark; Puonti, Oula; Agn, Mikael

    2015-01-01

    Accurate tumor segmentation plays an important role in radiosurgery planning and the assessment of radiotherapy treatment efficacy. In this paper we propose a method combining an ensemble of 2D convolutional neural networks for volumetric segmentation of magnetic resonance images. The segmentation is done in three steps: first, the full tumor region is segmented from the background by a voxel-wise merging of the decisions of three networks learned from three orthogonal planes; next, the segmentation is refined using a cellular automaton-based seed growing method known as growcut; finally, within-tumor sub-regions are segmented using an additional ensemble of networks trained for the task. We demonstrate the method on the MICCAI Brain Tumor Segmentation Challenge dataset of 2014, and show improved segmentation accuracy compared to an axially trained 2D network and an ensemble segmentation...

  10. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Xike Zhang

    2018-05-01

    Daily land surface temperature (LST) forecasting is of great significance for application in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for achieving these purposes because they can reduce the difficulty of modeling, require less history data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient novel hybrid data-driven model called the EEMD Long Short-Term Memory (LSTM) neural network, namely EEMD-LSTM, is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijaing stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016 is used as a case study. The EEMD is firstly employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM and Empirical Mode Decomposition (EMD) coupled with RNN, EMD-LSTM and EEMD-RNN models, and their comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other five models.

  11. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition.

    Science.gov (United States)

    Zhang, Xike; Zhang, Qiuwen; Zhang, Gui; Nie, Zhiping; Gui, Zifan; Que, Huafei

    2018-05-21

    Daily land surface temperature (LST) forecasting is of great significance for application in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for achieving these purposes because they can reduce the difficulty of modeling, require less history data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient novel hybrid data-driven model called the EEMD Long Short-Term Memory (LSTM) neural network, namely EEMD-LSTM, is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijaing stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016 is used as a case study. The EEMD is firstly employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM and Empirical Mode Decomposition (EMD) coupled with RNN, EMD-LSTM and EEMD-RNN models, and their comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other five models.
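    The six evaluation metrics named in this abstract (MSE, MAE, MAPE, RMSE, CC, NSCE) can be sketched in a few lines; the observed and predicted series below are illustrative numbers, not the Dongting Lake LST data.

```python
import math

def metrics(obs, pred):
    """Compute MSE, MAE, MAPE (%), RMSE, Pearson CC and Nash-Sutcliffe NSCE."""
    n = len(obs)
    errs = [p - o for o, p in zip(obs, pred)]
    mse = sum(e * e for e in errs) / n
    mae = sum(abs(e) for e in errs) / n
    mape = 100.0 * sum(abs(e) / abs(o) for o, e in zip(obs, errs)) / n
    rmse = math.sqrt(mse)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    cc = cov / math.sqrt(sum((o - mo) ** 2 for o in obs)
                         * sum((p - mp) ** 2 for p in pred))
    # NSCE compares squared error against the variance of the observations
    nsce = 1.0 - sum(e * e for e in errs) / sum((o - mo) ** 2 for o in obs)
    return {"MSE": mse, "MAE": mae, "MAPE": mape,
            "RMSE": rmse, "CC": cc, "NSCE": nsce}

print(metrics([10.0, 12.0, 14.0, 16.0], [10.5, 11.5, 14.5, 15.5]))
```

    NSCE equal to 1 indicates a perfect fit, while values at or below 0 mean the model predicts no better than the observed mean.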

  12. Modeling of steam generator in nuclear power plant using neural network ensemble

    International Nuclear Information System (INIS)

    Lee, S. K.; Lee, E. C.; Jang, J. W.

    2003-01-01

    Modeling the steam generator is known to be difficult due to its reverse dynamics, and neural networks are now being used for this task. However, neural networks are prone to overfitting. This paper investigates the use of neural network combining methods to model the steam generator water level and compares them with a single neural network. The results show that a neural network ensemble is an effective tool that can offer improved generalization, lower dependence on the training set, and reduced training time.

  13. An artificial neural network ensemble model for estimating global solar radiation from Meteosat satellite images

    International Nuclear Information System (INIS)

    Linares-Rodriguez, Alvaro; Ruiz-Arias, José Antonio; Pozo-Vazquez, David; Tovar-Pescador, Joaquin

    2013-01-01

    An optimized artificial neural network ensemble model is built to estimate daily global solar radiation over large areas. The model uses clear-sky estimates and satellite images as input variables. Unlike most studies using satellite imagery based on visible channels, our model also exploits all information within infrared channels of the Meteosat 9 satellite. A genetic algorithm is used to optimize selection of model inputs, for which twelve are selected – eleven 3-km Meteosat 9 channels and one clear-sky term. The model is validated in Andalusia (Spain) from January 2008 through December 2008. Measured data from 83 stations across the region are used, 65 for training and 18 independent ones for testing the model. At the latter stations, the ensemble model yields an overall root mean square error of 6.74% and a correlation coefficient of 99%; the generated estimates are relatively accurate and the errors spatially uniform. The model yields reliable results even on cloudy days, improving on current models based on satellite imagery. - Highlights: • Daily solar radiation data are generated using an artificial neural network ensemble. • Eleven Meteosat channel observations and a clear-sky term are used as model inputs. • The model exploits all information within infrared Meteosat channels. • Measured data for a year from 83 ground stations are used. • The proposed approach has better performance than existing models on a daily basis.

  14. A Comparison Study on Rule Extraction from Neural Network Ensembles, Boosted Shallow Trees, and SVMs

    Directory of Open Access Journals (Sweden)

    Guido Bologna

    2018-01-01

    One way to make the knowledge stored in an artificial neural network more intelligible is to extract symbolic rules. However, producing rules from Multilayer Perceptrons (MLPs) is an NP-hard problem. Many techniques have been introduced to generate rules from single neural networks, but very few were proposed for ensembles. Moreover, experiments were rarely assessed by 10-fold cross-validation trials. In this work, based on the Discretized Interpretable Multilayer Perceptron (DIMLP), experiments were performed on 10 repetitions of stratified 10-fold cross-validation trials over 25 binary classification problems. The DIMLP architecture allowed us to produce rules from DIMLP ensembles, boosted shallow trees (BSTs), and Support Vector Machines (SVMs). The complexity of rulesets was measured by the average number of generated rules and the average number of antecedents per rule. Of the 25 classification problems used, the most complex rulesets were generated from BSTs trained by “gentle boosting” and “real boosting.” Moreover, we clearly observed that the less complex the rules were, the better their fidelity was. In fact, rules generated from decision stumps trained by modest boosting were, for almost all 25 datasets, the simplest with the highest fidelity. Finally, in terms of average predictive accuracy and average ruleset complexity, the comparison of some of our results to those reported in the literature proved to be competitive.

  15. Efficient Pruning Method for Ensemble Self-Generating Neural Networks

    Directory of Open Access Journals (Sweden)

    Hirotaka Inoue

    2003-12-01

    Recently, multiple classifier systems (MCS) have been used in practical applications to improve classification accuracy. Self-generating neural networks (SGNN) are among the suitable base classifiers for MCS because of their simple setup and fast learning. However, the computation cost of the MCS increases in proportion to the number of SGNNs. In this paper, we propose an efficient pruning method for the structure of the SGNN in the MCS. We compare the pruned MCS with two sampling methods. Experiments have been conducted to compare the pruned MCS with an unpruned MCS, an MCS based on C4.5, and the k-nearest neighbor method. The results show that the pruned MCS can improve its classification accuracy while reducing the computation cost.

  16. MSEBAG: a dynamic classifier ensemble generation based on 'minimum-sufficient ensemble' and bagging

    Science.gov (United States)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
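    The 'over-generation and selection' strategy this record describes can be illustrated with a toy backward stepwise pass: starting from a pool of classifiers, repeatedly drop the member whose removal hurts in-sample majority-vote fitness least, recording the ensemble at each size. The classifiers and data below are hypothetical, and this sketch omits MSEBAG's aggregation stage (EAA).

```python
# Hedged sketch of backward stepwise ensemble generation (not the authors' code).

def vote_accuracy(members, X, y):
    """In-sample fitness of a strict-majority-vote ensemble (ties predict 0)."""
    correct = 0
    for xi, yi in zip(X, y):
        ones = sum(m(xi) for m in members)
        pred = int(2 * ones > len(members))
        correct += int(pred == yi)
    return correct / len(y)

def backward_prune(members, X, y, min_size=1):
    """Return a collection of ensembles of descending size."""
    members = list(members)
    collection = [list(members)]
    while len(members) > min_size:
        # drop the member whose removal degrades fitness least
        best = max(range(len(members)),
                   key=lambda i: vote_accuracy(members[:i] + members[i + 1:], X, y))
        members.pop(best)
        collection.append(list(members))
    return collection

# Three toy threshold classifiers on 1-D inputs:
clfs = [lambda x: int(x > 0.5), lambda x: int(x > 0.4), lambda x: int(x > 0.9)]
X, y = [0.1, 0.45, 0.6, 0.95], [0, 0, 1, 1]
ensembles = backward_prune(clfs, X, y)
print([len(e) for e in ensembles])  # [3, 2, 1]
```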

  17. Ensemble of classifiers based network intrusion detection system performance bound

    CSIR Research Space (South Africa)

    Mkuzangwe, Nenekazi NP

    2017-11-01

    This paper provides a performance bound of a network intrusion detection system (NIDS) that uses an ensemble of classifiers. Currently, researchers rely on implementing the ensemble of classifiers based NIDS before they can determine the performance...

  18. Backpropagation Neural Ensemble for Localizing and Recognizing Non-Standardized Malaysia’s Car Plates

    OpenAIRE

    Chin Kim On; Teo Kein Yau; Rayner Alfred; Jason Teo; Patricia Anthony; Wang Cheng

    2016-01-01

    In this paper, we describe a research project that autonomously localizes and recognizes non-standardized Malaysian car plates using the conventional Backpropagation algorithm (BPP) in combination with an Ensemble Neural Network (ENN). We compared the results with those obtained using a simple Feed-Forward Neural Network (FFNN). This research aims to solve four main issues: (1) localization of car plates that have the same colour as the vehicle, (2) detection and recognition of car pla...

  19. Ensemble-based Kalman Filters in Strongly Nonlinear Dynamics

    Institute of Scientific and Technical Information of China (English)

    Zhaoxia PU; Joshua HACKER

    2009-01-01

    This study examines the effectiveness of ensemble Kalman filters in data assimilation with the strongly nonlinear dynamics of the Lorenz-63 model, and in particular their use in predicting the regime transition that occurs when the model jumps from one basin of attraction to the other. Four configurations of the ensemble-based Kalman filtering data assimilation techniques, including the ensemble Kalman filter, ensemble adjustment Kalman filter, ensemble square root filter and ensemble transform Kalman filter, are evaluated with their ability in predicting the regime transition (also called phase transition) and also are compared in terms of their sensitivity to both observational and sampling errors. The sensitivity of each ensemble-based filter to the size of the ensemble is also examined.
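    Of the four filter configurations this record compares, the stochastic (perturbed-observation) ensemble Kalman filter has the simplest analysis step, and it can be sketched as follows. The prior ensemble, observation operator, and observation value below are a toy setup, not the Lorenz-63 experiment.

```python
import numpy as np

def enkf_analysis(ensemble, H, y, obs_var, rng):
    """Perturbed-observation EnKF analysis step.

    ensemble: (n_members, n_state); H: (n_obs, n_state); y: (n_obs,).
    """
    n, _ = ensemble.shape
    A = ensemble - ensemble.mean(axis=0)        # state anomalies
    HX = ensemble @ H.T                         # ensemble in observation space
    HA = HX - HX.mean(axis=0)
    P_HT = A.T @ HA / (n - 1)                   # sample cross-covariance
    HPH = HA.T @ HA / (n - 1)                   # obs-space covariance
    R = obs_var * np.eye(len(y))
    K = P_HT @ np.linalg.inv(HPH + R)           # Kalman gain
    # each member assimilates its own perturbed copy of the observation
    perturbed = y + rng.normal(0.0, np.sqrt(obs_var), size=(n, len(y)))
    return ensemble + (perturbed - HX) @ K.T

rng = np.random.default_rng(0)
ens = rng.normal(0.0, 1.0, size=(200, 2))       # prior ensemble centred on 0
H = np.array([[1.0, 0.0]])                      # observe the first component only
post = enkf_analysis(ens, H, np.array([2.0]), obs_var=0.1, rng=rng)
print(post[:, 0].mean())                        # pulled toward the observation 2.0
```

    The deterministic variants compared in the study (adjustment, square root, transform) avoid the observation perturbations by updating the anomalies directly.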

  20. An Effective and Novel Neural Network Ensemble for Shift Pattern Detection in Control Charts

    Directory of Open Access Journals (Sweden)

    Mahmoud Barghash

    2015-01-01

    Pattern recognition in control charts is critical to strike a balance between discovering faults as early as possible and reducing the number of false alarms. This work is devoted to designing a multistage neural network ensemble that achieves this balance, reducing rework and scrap without reducing productivity. The ensemble under focus is composed of a series of neural network stages and a series of decision points. Initially, this work compared multiple decision points against a single decision point on the performance of the ANN, which showed that multiple decision points are highly preferable. This work also tested the effect of population percentages on the ANN and used this to optimize the ANN’s performance. It also used optimized and nonoptimized ANNs in an ensemble and showed that using nonoptimized ANNs may reduce the performance of the ensemble. The ensemble that used only optimized ANNs improved performance over individual ANNs and the three-sigma level rule. In that respect, the designed ensemble can help reduce the number of false stops and increase productivity. It can also be used to discover even small shifts in the mean as early as possible.
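    The three-sigma rule the ensemble is benchmarked against is simple enough to sketch directly; the in-control mean, sigma, and readings below are synthetic values for illustration.

```python
# Hedged sketch of the classic three-sigma control-chart rule.

def three_sigma_alarms(samples, mu, sigma):
    """Return indices of samples falling outside mu +/- 3*sigma."""
    return [i for i, x in enumerate(samples)
            if abs(x - mu) > 3 * sigma]

readings = [0.1, -0.4, 0.2, 3.5, -0.1, 0.3]   # shift injected at index 3
print(three_sigma_alarms(readings, mu=0.0, sigma=1.0))  # [3]
```

    A neural ensemble aims to flag smaller, sustained shifts that this rule misses, without raising more false alarms on in-control points.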

  1. Improving a Deep Learning based RGB-D Object Recognition Model by Ensemble Learning

    DEFF Research Database (Denmark)

    Aakerberg, Andreas; Nasrollahi, Kamal; Heder, Thomas

    2018-01-01

    Augmenting RGB images with depth information is a well-known method to significantly improve the recognition accuracy of object recognition models. Another method to improve the performance of visual recognition models is ensemble learning. However, this method has not been widely explored in combination with deep convolutional neural network based RGB-D object recognition models. Hence, in this paper, we form different ensembles of complementary deep convolutional neural network models, and show that this can be used to increase the recognition performance beyond existing limits. Experiments...

  2. A class of energy-based ensembles in Tsallis statistics

    International Nuclear Information System (INIS)

    Chandrashekar, R; Naina Mohammed, S S

    2011-01-01

    A comprehensive investigation is carried out on the class of energy-based ensembles. The eight ensembles are divided into two main classes. In the isothermal class of ensembles the individual members are at the same temperature. A unified framework is developed to describe the four isothermal ensembles using the currently accepted third-constraint formalism. The isothermal–isobaric, grand canonical and generalized ensembles are illustrated through a study of the classical nonrelativistic and extreme relativistic ideal gas models. An exact calculation is possible only in the case of the isothermal–isobaric ensemble. The study of the ideal gas models in the grand canonical and the generalized ensembles has been carried out using a perturbative procedure with the nonextensivity parameter (1 − q) as the expansion parameter. Though all the thermodynamic quantities have been computed up to a particular order in (1 − q), the procedure can be extended to any arbitrary order in the expansion parameter. In the adiabatic class of ensembles the individual members of the ensemble have the same value of the heat function, and a unified formulation to describe all four ensembles is given. The nonrelativistic and the extreme relativistic ideal gases are studied in the isoenthalpic–isobaric ensemble, the adiabatic ensemble with number fluctuations, and the adiabatic ensemble with number and particle fluctuations.

  3. An Ensemble of Neural Networks for Online Electron Filtering at the ATLAS Experiment.

    CERN Document Server

    Da Fonseca Pinto, Joao Victor; The ATLAS collaboration

    2018-01-01

    In 2017 the ATLAS experiment implemented an ensemble of neural networks (NeuralRinger algorithm) dedicated to improving the performance of filtering events containing electrons in the high-input rate online environment of the Large Hadron Collider at CERN, Geneva. The ensemble employs a concept of calorimetry rings. The training procedure and final structure of the ensemble are used to minimize fluctuations from detector response, according to the particle energy and position of incidence. A detailed study was carried out to assess profile distortions in crucial offline quantities through the usage of statistical tests and residual analysis. These details and the online performance of this algorithm during the 2017 data-taking will be presented.

  4. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This can improve the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional algorithms based on a single type of feature or on multiple types of features, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method also proves to be very competitive in comparison with other state-of-the-art segmentation algorithms.

  5. Online cross-validation-based ensemble learning.

    Science.gov (United States)

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
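    The online cross-validation idea this record describes can be sketched in miniature: as each observation arrives, score every candidate online learner on it *before* any of them updates, and select the candidate with the lowest running loss. The candidate learners here are toy running-mean predictors, standing in for the authors' library of online estimators.

```python
# Hedged sketch of online cross-validation-based selection (not the authors' code).

class RunningMean:
    """Toy online learner: exponentially weighted running mean."""
    def __init__(self, lr):
        self.lr, self.est = lr, 0.0
    def predict(self):
        return self.est
    def update(self, y):
        self.est += self.lr * (y - self.est)

def online_cv(stream, candidates):
    losses = [0.0] * len(candidates)
    picks = []
    for y in stream:
        # honest scoring: evaluate on y before any candidate has seen it
        for i, c in enumerate(candidates):
            losses[i] += (c.predict() - y) ** 2
        picks.append(min(range(len(candidates)), key=losses.__getitem__))
        for c in candidates:
            c.update(y)
    return picks

stream = [1.0] * 30
picks = online_cv(stream, [RunningMean(0.01), RunningMean(0.5)])
print(picks[-1])  # 1: the faster learner wins on this stationary stream
```

    The ensemble extension mentioned in the abstract would replace the discrete selection with a weighted combination of the candidates.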

  6. Sequential ensemble-based optimal design for parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Li, Weixuan [Pacific Northwest National Laboratory, Richland, Washington, USA]; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside, California, USA]

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can equally be applied to other hydrological problems.

  7. Multi-model ensemble hydrological simulation using a BP Neural Network for the upper Yalongjiang River Basin, China

    Science.gov (United States)

    Li, Zhanjie; Yu, Jingshan; Xu, Xinyi; Sun, Wenchao; Pang, Bo; Yue, Jiajia

    2018-06-01

    Hydrological models are important and effective tools for detecting complex hydrological processes. Different models have different strengths when capturing the various aspects of hydrological processes. Relying on a single model usually leads to simulation uncertainties. Ensemble approaches, based on multi-model hydrological simulations, can improve application performance over single models. In this study, the upper Yalongjiang River Basin was selected for a case study. Three commonly used hydrological models (SWAT, VIC, and BTOPMC) were selected and used for independent simulations with the same input and initial values. Then, the BP neural network method was employed to combine the results from the three models. The results show that the accuracy of BP ensemble simulation is better than that of the single models.

  8. Ensemble-Based Data Assimilation in Reservoir Characterization: A Review

    Directory of Open Access Journals (Sweden)

    Seungpil Jung

    2018-02-01

Full Text Available This paper presents a review of ensemble-based data assimilation for strongly nonlinear problems in the characterization of heterogeneous reservoirs with different production histories. It concentrates on the ensemble Kalman filter (EnKF) and ensemble smoother (ES) as representative frameworks, discusses their pros and cons, and investigates recent progress to overcome their drawbacks. The typical weaknesses of ensemble-based methods are non-Gaussian parameters, improper prior ensembles and finite population size. Three categories of approaches to mitigate these limitations are reviewed alongside recent accomplishments: improvement of Kalman gains, addition of transformation functions, and independent evaluation of observed data. Data assimilation in heterogeneous reservoirs using the improved ensemble methods is then discussed with respect to predicting unknown dynamic data in reservoir characterization.

  9. Comparison of standard resampling methods for performance estimation of artificial neural network ensembles

    OpenAIRE

    Green, Michael; Ohlsson, Mattias

    2007-01-01

Estimation of the generalization performance for classification within the medical applications domain is always an important task. In this study we focus on artificial neural network ensembles as the machine learning technique. We present a numerical comparison between five common resampling techniques: k-fold cross validation (CV), holdout using three cutoffs, and bootstrap, on five different data sets. The results show that CV together with holdout 0.25 and 0.50 are the best resampl...
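The three resampling estimators can be sketched as follows. The nearest-centroid classifier and the one-dimensional synthetic data are cheap stand-ins for the paper's ANN ensembles and medical data sets; fold count, cutoff and repetition numbers are illustrative:

```python
import random, statistics

random.seed(0)
# Synthetic binary classification data: one feature, class means 0 and 2
data = [(random.gauss(mu, 1.0), label)
        for label, mu in ((0, 0.0), (1, 2.0)) for _ in range(100)]
random.shuffle(data)

def error_rate(train, test):
    """Nearest-centroid classifier (a cheap stand-in for an ANN ensemble)."""
    c0 = statistics.mean(x for x, y in train if y == 0)
    c1 = statistics.mean(x for x, y in train if y == 1)
    return sum(int(abs(x - c1) < abs(x - c0)) != y for x, y in test) / len(test)

# k-fold cross validation (CV)
k = 5
folds = [data[i::k] for i in range(k)]
cv_est = statistics.mean(
    error_rate([d for j, f in enumerate(folds) if j != i for d in f], folds[i])
    for i in range(k))

# Holdout with cutoff 0.25 (last 25% of the data held out for testing)
cut = int(0.75 * len(data))
holdout_est = error_rate(data[:cut], data[cut:])

# Bootstrap: train on a resample, test on the out-of-bag points
def one_bootstrap():
    sample = [random.choice(data) for _ in data]
    oob = [d for d in data if d not in sample]
    return error_rate(sample, oob)

bootstrap_est = statistics.mean(one_bootstrap() for _ in range(30))
```

All three estimators target the same generalization error; they differ in bias, variance and computational cost, which is exactly what the paper compares.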

  10. An Ensemble Deep Convolutional Neural Network Model with Improved D-S Evidence Fusion for Bearing Fault Diagnosis.

    Science.gov (United States)

    Li, Shaobo; Liu, Guokai; Tang, Xianghong; Lu, Jianguang; Hu, Jianjun

    2017-07-28

Intelligent machine health monitoring and fault diagnosis are becoming increasingly important for modern manufacturing industries. Current fault diagnosis approaches mostly depend on expert-designed features for building prediction models. In this paper, we propose IDSCNN, a novel bearing fault diagnosis algorithm based on ensemble deep convolutional neural networks and an improved Dempster-Shafer theory based evidence fusion. The convolutional neural networks take the root mean square (RMS) maps of the fast Fourier transform (FFT) features of the vibration signals from two sensors as inputs. The improved D-S evidence theory is implemented via a distance matrix computed from the evidences and a modified Gini index. Extensive evaluations of IDSCNN on the Case Western Reserve Dataset show that our algorithm achieves better fault diagnosis performance than existing machine learning methods by fusing complementary or conflicting evidences from different models and sensors and adapting to different load conditions.
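The classical (unimproved) Dempster rule of combination that the paper builds on can be sketched for singleton fault hypotheses. The two mass functions below are invented examples, not outputs of the paper's CNNs:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions over singleton
    hypotheses (masses sum to 1; with singletons, the only agreements are
    identical hypotheses, everything else is conflict)."""
    hypotheses = set(m1) | set(m2)
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    if conflict >= 1.0:
        raise ValueError("total conflict: evidences cannot be combined")
    return {h: m1.get(h, 0.0) * m2.get(h, 0.0) / (1.0 - conflict)
            for h in hypotheses}

# Two classifiers' beliefs over three hypothetical bearing states
cnn_a = {"normal": 0.1, "inner_race": 0.7, "outer_race": 0.2}
cnn_b = {"normal": 0.2, "inner_race": 0.6, "outer_race": 0.2}
fused = dempster_combine(cnn_a, cnn_b)
```

Agreement between the two evidences is reinforced after normalization, so the fused belief in "inner_race" exceeds either individual belief; the paper's improvement replaces this plain rule with a distance-matrix and Gini-index weighted variant to handle conflicting evidences more robustly.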

  11. Stress affects the neural ensemble for integrating new information and prior knowledge.

    Science.gov (United States)

    Vogel, Susanne; Kluen, Lisa Marieke; Fernández, Guillén; Schwabe, Lars

    2018-06-01

Prior knowledge, represented as a schema, facilitates memory encoding. This schema-related learning is assumed to rely on the medial prefrontal cortex (mPFC) that rapidly integrates new information into the schema, whereas schema-incongruent or novel information is encoded by the hippocampus. Stress is a powerful modulator of prefrontal and hippocampal functioning, and initial studies suggest a stress-induced deficit of schema-related learning. However, the underlying neural mechanism is currently unknown. To investigate the neural basis of a stress-induced schema-related learning impairment, participants first acquired a schema. One day later, they underwent a stress induction or a control procedure before learning schema-related and novel information in the MRI scanner. In line with previous studies, learning schema-related compared to novel information activated the mPFC, angular gyrus, and precuneus. Stress, however, affected the neural ensemble activated during learning. Whereas the control group distinguished between sets of brain regions for related and novel information, stressed individuals engaged the hippocampus even when a relevant schema was present. Additionally, stressed participants displayed aberrant functional connectivity between brain regions involved in schema processing when encoding novel information. The failure to segregate functional connectivity patterns depending on the presence of prior knowledge was linked to impaired performance after stress. Our results show that stress affects the neural ensemble underlying the efficient use of schemas during learning. These findings may have relevant implications for clinical and educational settings.

  12. Ensemble Nonlinear Autoregressive Exogenous Artificial Neural Networks for Short-Term Wind Speed and Power Forecasting.

    Science.gov (United States)

    Men, Zhongxian; Yee, Eugene; Lien, Fue-Sang; Yang, Zhiling; Liu, Yongqian

    2014-01-01

    Short-term wind speed and wind power forecasts (for a 72 h period) are obtained using a nonlinear autoregressive exogenous artificial neural network (ANN) methodology which incorporates either numerical weather prediction or high-resolution computational fluid dynamics wind field information as an exogenous input. An ensemble approach is used to combine the predictions from many candidate ANNs in order to provide improved forecasts for wind speed and power, along with the associated uncertainties in these forecasts. More specifically, the ensemble ANN is used to quantify the uncertainties arising from the network weight initialization and from the unknown structure of the ANN. All members forming the ensemble of neural networks were trained using an efficient particle swarm optimization algorithm. The results of the proposed methodology are validated using wind speed and wind power data obtained from an operational wind farm located in Northern China. The assessment demonstrates that this methodology for wind speed and power forecasting generally provides an improvement in predictive skills when compared to the practice of using an "optimal" weight vector from a single ANN while providing additional information in the form of prediction uncertainty bounds.
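Combining member forecasts into a mean and an uncertainty band can be sketched as follows. The synthetic member predictions stand in for the outputs of the trained ANNs (which in the paper differ in weight initialization and structure), and the mean ± 2·spread band is an illustrative choice of interval:

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, horizon = 20, 72
base = 8.0 + 2.0 * np.sin(np.linspace(0, 4 * np.pi, horizon))  # "true" wind speed
# Hypothetical 72 h forecasts from 20 ANNs differing in initialization
members = base + rng.normal(0.0, 0.8, size=(n_members, horizon))

ensemble_mean = members.mean(axis=0)                 # point forecast
spread = members.std(axis=0, ddof=1)                 # member disagreement
lower, upper = ensemble_mean - 2 * spread, ensemble_mean + 2 * spread
coverage = float(np.mean((base >= lower) & (base <= upper)))
```

The spread quantifies the uncertainty contributed by network initialization and structure; wider bands at a given lead time flag hours where the ensemble members disagree.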

  13. Application of Entropy Ensemble Filter in Neural Network Forecasts of Tropical Pacific Sea Surface Temperatures

    Directory of Open Access Journals (Sweden)

    Hossein Foroozand

    2018-03-01

Full Text Available Recently, the Entropy Ensemble Filter (EEF) method was proposed to mitigate the computational cost of the Bootstrap AGGregatING (bagging) method. This method uses the most informative training data sets in the model ensemble rather than all ensemble members created by conventional bagging. In this study, we evaluate, for the first time, the application of the EEF method in Neural Network (NN) modeling of the El Niño-Southern Oscillation. Specifically, we forecast the first five principal components (PCs) of sea surface temperature monthly anomaly fields over the tropical Pacific, at different lead times (from 3 to 15 months, with a three-month increment) for the period 1979–2017. We apply the EEF method to a multiple-linear regression (MLR) model and two NN models, one trained with Bayesian regularization and one with the Levenberg-Marquardt algorithm, and evaluate their performance and computational efficiency relative to the same models with conventional bagging. All models perform equally well at lead times of 3 and 6 months, while at longer lead times the MLR model's skill deteriorates faster than that of the nonlinear models. The neural network models with both bagging methods produce equally successful forecasts with the same computational efficiency. It remains to be shown whether this finding is sensitive to the dataset size.
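The core selection idea of the EEF, ranking bootstrap training sets by Shannon entropy and keeping only the most informative ones, can be sketched as follows. The histogram-based entropy estimate, the ensemble sizes and the keep fraction are illustrative assumptions:

```python
import numpy as np

def shannon_entropy(sample, bins=10):
    """Histogram-based Shannon entropy of a bootstrap sample."""
    counts, _ = np.histogram(sample, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
data = rng.normal(size=500)          # stand-in for an SST principal-component series

# Conventional bagging would train one model on each of the B bootstrap sets
B, keep = 40, 10
boots = [rng.choice(data, size=data.size, replace=True) for _ in range(B)]

# EEF idea (sketch): rank the bootstrap sets by entropy and train
# models only on the `keep` most informative ones
entropies = [shannon_entropy(b) for b in boots]
selected = sorted(range(B), key=lambda i: entropies[i], reverse=True)[:keep]
```

Training only on the selected subsets is where the computational saving over full bagging comes from: the model-fitting cost drops by roughly the ratio keep/B.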

  14. Stochastic resonance in an ensemble of single-electron neuromorphic devices and its application to competitive neural networks

    International Nuclear Information System (INIS)

    Oya, Takahide; Asai, Tetsuya; Amemiya, Yoshihito

    2007-01-01

    Neuromorphic computing based on single-electron circuit technology is gaining prominence because of its massively increased computational efficiency and the increasing relevance of computer technology and nanotechnology [Likharev K, Mayr A, Muckra I, Tuerel O. CrossNets: High-performance neuromorphic architectures for CMOL circuits. Molec Electron III: Ann NY Acad Sci 1006;2003:146-63; Oya T, Schmid A, Asai T, Leblebici Y, Amemiya Y. On the fault tolerance of a clustered single-electron neural network for differential enhancement. IEICE Electron Expr 2;2005:76-80]. The maximum impact of these technologies will be strongly felt when single-electron circuits based on fault- and noise-tolerant neural structures can operate at room temperature. In this paper, inspired by stochastic resonance (SR) in an ensemble of spiking neurons [Collins JJ, Chow CC, Imhoff TT. Stochastic resonance without tuning. Nature 1995;376:236-8], we propose our design of a basic single-electron neural component and report how we examined its statistical results on a network

  15. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    Science.gov (United States)

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  16. Flood Forecasting Based on TIGGE Precipitation Ensemble Forecast

    Directory of Open Access Journals (Sweden)

    Jinyin Ye

    2016-01-01

Full Text Available TIGGE (THORPEX International Grand Global Ensemble) was a major part of THORPEX (The Observing System Research and Predictability Experiment). It integrates ensemble precipitation products from all the major forecast centers in the world and provides systematic evaluation of the multimodel ensemble prediction system. Developing a meteorologic-hydrologic coupled flood forecasting model and early warning model based on the TIGGE precipitation ensemble forecast can provide probabilistic flood forecasts, extend the lead time of the flood forecast, and gain more time for decision-makers to make the right decision. In this study, precipitation ensemble forecast products from ECMWF, NCEP, and CMA are used to drive the distributed hydrologic model TOPX. We focus on the Yi River catchment and aim to build a flood forecast and early warning system. The results show that the meteorologic-hydrologic coupled model can satisfactorily predict the flow processes of four flood events, and the predicted occurrence times of the peak discharges are close to the observations. However, the magnitudes of the peak discharges differ significantly due to the varying performance of the ensemble prediction systems. Based on the probability distributions of peak time and peak discharge, the coupled forecasting model can accurately predict the occurrence time of the peak and the corresponding risk probability, providing decision-makers with valuable information and a promising new approach to flood warning.

  17. Estimation of soil saturated hydraulic conductivity by artificial neural networks ensemble in smectitic soils

    Science.gov (United States)

    Sedaghat, A.; Bayat, H.; Safari Sinegani, A. A.

    2016-03-01

The saturated hydraulic conductivity (Ks) of the soil is one of the main soil physical properties. Indirect estimation of this parameter using pedo-transfer functions (PTFs) has received considerable attention. The purpose of this study was to improve the estimation of Ks using fractal parameters of particle and micro-aggregate size distributions in smectitic soils. In this study 260 disturbed and undisturbed soil samples were collected from Guilan province, in the north of Iran. The fractal model of Bird and Perrier was used to compute the fractal parameters of particle and micro-aggregate size distributions. The PTFs were developed by an artificial neural network (ANN) ensemble to estimate Ks from available soil data and the fractal parameters. Significant correlations were found between Ks and the fractal parameters of particles and micro-aggregates. Estimation of Ks was improved significantly by using fractal parameters of soil micro-aggregates as predictors, whereas using the geometric mean and geometric standard deviation of particle diameter did not improve Ks estimates significantly. Using the fractal parameters of particles and micro-aggregates simultaneously had the greatest effect on the estimation of Ks. Generally, fractal parameters can be successfully used as input parameters to improve the estimation of Ks in PTFs for smectitic soils. As a result, the ANN ensemble successfully related the fractal parameters of particles and micro-aggregates to Ks.

  18. Ensemble of Neural Network Conditional Random Fields for Self-Paced Brain Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Hossein Bashashati

    2017-07-01

Full Text Available Classification of EEG signals in self-paced Brain Computer Interfaces (BCIs) is an extremely challenging task. The main difficulty stems from the fact that the start time of a control task is not defined. Therefore it is imperative to exploit the characteristics of the EEG data to the extent possible. In sensory motor self-paced BCIs, while performing the mental task, the user's brain goes through several well-defined internal state changes. Applying appropriate classifiers that can capture these state changes and exploit the temporal correlation in EEG data can enhance the performance of the BCI. In this paper, we propose an ensemble learning approach for self-paced BCIs. We use Bayesian optimization to train several different classifiers on different parts of the BCI hyperparameter space. We call each of these classifiers a Neural Network Conditional Random Field (NNCRF). An NNCRF is a combination of a neural network and a conditional random field (CRF). As in the standard CRF, the NNCRF is able to model the correlation between adjacent EEG samples. However, the NNCRF can also model the nonlinear dependencies between the input and the output, which makes it more powerful than the standard CRF. We compare the performance of our algorithm to those of three popular sequence labeling algorithms (Hidden Markov Models, Hidden Markov Support Vector Machines and CRF), and to two classical classifiers (Logistic Regression and Support Vector Machines). The classifiers are compared for two cases: when the ensemble learning approach is not used and when it is. The data used in our studies are those from BCI competition IV and the SM2 dataset. We show that our algorithm is considerably superior to the other approaches in terms of the Area Under the Curve (AUC) of the BCI system.

  19. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    KAUST Repository

    Liu, Bo

    2015-11-11

We consider the Bayesian filtering problem for data assimilation following the kernel-based ensemble Gaussian-mixture filtering (EnGMF) approach introduced by Anderson and Anderson (1999). In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian-mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution. We then focus on two aspects: (i) the efficient implementation of EnGMF with (relatively) small ensembles, where we propose a new deterministic resampling strategy preserving the first two moments of the posterior GM to limit the sampling error; and (ii) the analysis of the effect of the bandwidth parameter on the contributions of the KF and PF updates and on the weights variance. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling provides improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.
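The PF-like weight-update half of EnGMF can be sketched for a scalar state. Silverman's rule for the kernel bandwidth, the direct Gaussian observation model and all numbers are illustrative assumptions; the KF-like member update and the resampling step discussed in the abstract are omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
ensemble = rng.normal(1.0, 2.0, size=100)          # forecast ensemble (scalar state)

# Prior GM: one Gaussian kernel per member, bandwidth from Silverman's rule
n = ensemble.size
bandwidth = 1.06 * ensemble.std(ddof=1) * n ** (-1 / 5)

# PF-like weight update for a direct observation y = x + noise:
# each kernel's likelihood is N(y; member, bandwidth^2 + obs_var)
y_obs, obs_var = 3.0, 0.25
lik = np.exp(-0.5 * (y_obs - ensemble) ** 2 / (bandwidth ** 2 + obs_var))
weights = lik / lik.sum()

posterior_mean = float(np.sum(weights * ensemble))
```

A larger bandwidth flattens the kernel likelihoods and so shifts influence from the PF-like weight update toward the KF-like member update, which is the trade-off the abstract analyzes.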

  20. A Prediction Method of Airport Noise Based on Hybrid Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Tao XU

    2014-05-01

Full Text Available Using historical monitoring data to build and train a prediction model for airport noise has been a common approach in recent years. However, single models built in different ways vary in storage requirements, efficiency and accuracy. In order to predict the noise accurately in the complex environment around an airport, this paper presents a prediction method based on hybrid ensemble learning. The proposed method ensembles three algorithms: an artificial neural network as an active learner, nearest neighbor as a passive learner and nonlinear regression as a synthesized learner. The experimental results show that the three learners can meet forecast demands in online, near-line and offline settings, respectively, and that the accuracy of prediction is improved by integrating the three learners' results.

  1. Dynamic neuronal ensembles: Issues in representing structure change in object-oriented, biologically-based brain models

    Energy Technology Data Exchange (ETDEWEB)

    Vahie, S.; Zeigler, B.P.; Cho, H. [Univ. of Arizona, Tucson, AZ (United States)

    1996-12-31

This paper describes the structure of dynamic neuronal ensembles (DNEs). DNEs represent a new paradigm for learning, based on biological neural networks that use variable structures. We present a computational neural element that demonstrates biological neuron functionality such as neurotransmitter feedback, absolute refractory period and multiple output potentials. More specifically, we develop a network of neural elements that have the ability to dynamically strengthen, weaken, add and remove interconnections. We demonstrate that the DNE is capable of performing dynamic modifications to neuron connections and exhibiting biological neuron functionality. In addition to its applications for learning, DNEs provide an excellent environment for testing and analysis of biological neural systems. An example of habituation and hyper-sensitization in biological systems, using a neural circuit from a snail, is presented and discussed. This paper provides insight into the DNE paradigm using models developed and simulated in DEVS.

  2. Prediction of drug synergy in cancer using ensemble-based machine learning techniques

    Science.gov (United States)

    Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder

    2018-04-01

Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic success. Different drug-drug interactions can be examined via their drug synergy scores, which calls for efficient regression-based machine learning approaches that minimize the prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to meet this requirement. However, these techniques individually do not provide significant accuracy in predicting drug synergy scores. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques were implemented on the drug synergy data. Based on the accuracy of each model, the four techniques with the highest accuracy were selected to develop the ensemble-based machine learning model: Random Forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning (GFS.GCCL), the Adaptive-Network-Based Fuzzy Inference System (ANFIS) and the Dynamic Evolving Neural-Fuzzy Inference System (DENFIS). Ensembling is achieved by a biased weighted aggregation of the predictions of the selected models, i.e., adding more weight to models with higher prediction scores. The proposed and existing machine learning techniques were evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms the others in terms of accuracy, root mean square error and coefficient of correlation.
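The biased weighted aggregation step can be sketched as follows. The four prediction vectors and the accuracy scores are invented stand-ins for the outputs of the selected Random Forest, GFS.GCCL, ANFIS and DENFIS models:

```python
def weighted_ensemble(predictions, accuracies):
    """Biased weighted aggregation: each model's prediction contributes in
    proportion to its validation accuracy (higher score -> more weight)."""
    total = sum(accuracies)
    weights = [a / total for a in accuracies]
    return [sum(w * p for w, p in zip(weights, col))
            for col in zip(*predictions)]

# Hypothetical synergy-score predictions from the four selected models
preds = [
    [1.2, 0.8, 2.1],   # model with validation accuracy 0.90
    [1.0, 0.9, 2.0],   # 0.85
    [1.4, 0.7, 2.3],   # 0.80
    [0.9, 1.1, 1.8],   # 0.70
]
scores = weighted_ensemble(preds, [0.90, 0.85, 0.80, 0.70])
```

Each aggregated score is a convex combination of the models' predictions, so it always lies between the lowest and highest individual prediction for that sample.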

  3. Human resource recommendation algorithm based on ensemble learning and Spark

    Science.gov (United States)

    Cong, Zihan; Zhang, Xingming; Wang, Haoxiang; Xu, Hongjie

    2017-08-01

Aiming at the problem of “information overload” in the human resources industry, this paper proposes a human resource recommendation algorithm based on ensemble learning. The algorithm considers the characteristics and behaviours of both job seekers and jobs in real business circumstances. First, the algorithm uses two ensemble learning methods, bagging and boosting. The outputs from both learning methods are then merged to form a user interest model, from which job recommendations are generated for users. The algorithm is implemented as a parallelized recommendation system on Spark. A set of experiments has been conducted and analysed. The proposed algorithm achieves significant improvements in accuracy, recall rate and coverage, compared with recommendation algorithms such as UserCF and ItemCF.

  4. Harmony Search Based Parameter Ensemble Adaptation for Differential Evolution

    Directory of Open Access Journals (Sweden)

    Rammohan Mallipeddi

    2013-01-01

Full Text Available In the differential evolution (DE) algorithm, depending on the characteristics of the problem at hand and the available computational resources, different strategies combined with different sets of parameters may be effective. In addition, a single, well-tuned combination of strategies and parameters may not guarantee optimal performance because different strategies combined with different parameter settings can be appropriate during different stages of the evolution. Therefore, various adaptive/self-adaptive techniques have been proposed to adapt the DE strategies and parameters during the course of evolution. In this paper, we propose a new parameter adaptation technique for DE based on an ensemble approach and the harmony search (HS) algorithm. In the proposed method, an ensemble of parameters is randomly sampled to form the initial harmony memory. The parameter ensemble evolves during the course of the optimization process through the HS algorithm. Each parameter combination in the harmony memory is evaluated by testing it on the DE population. The performance of the proposed adaptation method is evaluated using two recently proposed strategies (DE/current-to-pbest/bin and DE/current-to-gr_best/bin) as basic DE frameworks. Numerical results demonstrate the effectiveness of the proposed adaptation technique compared to state-of-the-art DE-based algorithms on a set of challenging test problems (CEC 2005).

  5. Comprehensive Study on Lexicon-based Ensemble Classification Sentiment Analysis

    Directory of Open Access Journals (Sweden)

    Łukasz Augustyniak

    2015-12-01

Full Text Available We propose a novel method for counting sentiment orientation that outperforms supervised learning approaches in time and memory complexity and is not statistically significantly different from them in accuracy. Our method consists of a novel approach to generating unigram, bigram and trigram lexicons. The proposed method, called frequentiment, is based on calculating the frequency of features (words) in the document and averaging their impact on the sentiment score as opposed to documents that do not contain these features. Afterwards, we use ensemble classification to improve the overall accuracy of the method. Importantly, the frequentiment-based lexicons with sentiment threshold selection outperform other popular lexicons and some supervised learners, while being 3–5 times faster than the supervised approach. We compare 37 methods (lexicons, ensembles with lexicons' predictions as input, and supervised learners) applied to 10 Amazon review data sets and provide the first statistical comparison of sentiment annotation methods that includes ensemble approaches. It is one of the most comprehensive comparisons of domain sentiment analysis in the literature.
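A minimal unigram-only sketch of the frequentiment idea: a word is scored by how its presence shifts the average document sentiment relative to documents that do not contain it. The four toy reviews and the plain word-average document scorer are illustrative simplifications of the paper's method:

```python
def build_lexicon(docs):
    """Frequency-based unigram lexicon (sketch of the frequentiment idea):
    a word's score is the mean rating of documents containing it minus
    the mean rating of documents that do not."""
    lexicon = {}
    vocab = {w for text, _ in docs for w in text.split()}
    for w in vocab:
        with_w = [r for text, r in docs if w in text.split()]
        without = [r for text, r in docs if w not in text.split()]
        if with_w and without:
            lexicon[w] = sum(with_w) / len(with_w) - sum(without) / len(without)
    return lexicon

def score(text, lexicon):
    """Score a document as the average lexicon score of its words."""
    words = text.split()
    return sum(lexicon.get(w, 0.0) for w in words) / max(len(words), 1)

reviews = [
    ("great product works great", 1),
    ("terrible waste of money", -1),
    ("works as expected great value", 1),
    ("broke after a week terrible", -1),
]
lex = build_lexicon(reviews)
```

A sentiment threshold on the document score (as in the paper's threshold selection) then turns these continuous scores into class labels.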

  6. An artificial neural network ensemble method for fault diagnosis of proton exchange membrane fuel cell system

    International Nuclear Information System (INIS)

    Shao, Meng; Zhu, Xin-Jian; Cao, Hong-Fei; Shen, Hai-Feng

    2014-01-01

The commercial viability of PEMFC (proton exchange membrane fuel cell) systems depends on using effective fault diagnosis technologies. However, many researchers have experimentally studied PEMFC systems without considering certain fault conditions. In this paper, an ANN (artificial neural network) ensemble method is presented that improves the stability and reliability of PEMFC systems. First, a transient PEMFC model is built and simulated using MATLAB, giving the approach flexibility in application to some exceptional conditions. Second, using this model and experiments, the mechanisms of four different faults in PEMFC systems are analyzed in detail. Third, the ANN ensemble for fault diagnosis is built and modeled, then trained and tested on the data. The test results show that, compared with previous methods for fault diagnosis of PEMFC systems, the proposed method has a higher diagnostic rate and better generalization ability. Moreover, the partial structure of this method can be altered easily along with changes to the PEMFC systems. In general, this method for diagnosis of PEMFC has value for certain applications. - Highlights: • We analyze the principles and mechanisms of four faults in a PEMFC (proton exchange membrane fuel cell) system. • We design and model an ANN (artificial neural network) ensemble method for the fault diagnosis of a PEMFC system. • This method has a high diagnostic rate and strong generalization ability

  7. A Novel Multiscale Ensemble Carbon Price Prediction Model Integrating Empirical Mode Decomposition, Genetic Algorithm and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Bangzhu Zhu

    2012-02-01

Full Text Available Due to the movement and complexity of the carbon market, traditional monoscale forecasting approaches often fail to capture its nonstationary and nonlinear properties and to accurately describe its moving tendencies. In this study, a multiscale ensemble forecasting model integrating empirical mode decomposition (EMD), genetic algorithm (GA) and artificial neural network (ANN) is proposed to forecast carbon prices. First, the proposed model uses EMD to decompose carbon price data into several intrinsic mode functions (IMFs) and one residue. Then, the IMFs and residue are composed into a high-frequency component, a low-frequency component and a trend component, which have similar frequency characteristics, simple structure and strong regularity, using the fine-to-coarse reconstruction algorithm. Finally, these three components are predicted using an ANN trained by GA, i.e., a GAANN model, and the final forecast is obtained as the sum of the three components' forecasts. For verification and testing, two main carbon futures prices with different maturities on the European Climate Exchange (ECX) are used to test the effectiveness of the proposed multiscale ensemble forecasting model. Empirical results demonstrate that the proposed model outperforms the single random walk (RW), ARIMA, ANN and GAANN models without EMD preprocessing, as well as the ensemble ARIMA model with EMD preprocessing.

  8. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks.

    Science.gov (United States)

    Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun

    2016-10-13

The development of intrusion detection systems (IDS) that allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. The experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and in the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks.

  9. The Use of Artificial-Intelligence-Based Ensembles for Intrusion Detection: A Review

    Directory of Open Access Journals (Sweden)

    Gulshan Kumar

    2012-01-01

Full Text Available In supervised learning-based classification, ensembles have been successfully employed in different application domains. In the literature, many researchers have proposed different ensembles by considering different combination methods, training datasets, base classifiers, and many other factors. Artificial-intelligence (AI)-based techniques play a prominent role in the development of ensembles for intrusion detection (ID) and have many benefits over other techniques. However, there is no comprehensive review of ensembles in general, and of AI-based ensembles for ID in particular, that examines their current research status. Here, an updated review of ensembles and their taxonomies is presented. The paper also reviews various AI-based ensembles for ID (in particular, from the last decade). The related studies of AI-based ensembles are compared by a set of evaluation metrics covering (1) the architecture and approach followed; (2) the different methods utilized in different phases of ensemble learning; and (3) other measures used to evaluate the classification performance of the ensembles. The paper also provides future directions for research in this area, and will help the reader better understand the different directions in which research on ensembles has been done in general, and specifically in the field of intrusion detection systems (IDSs).

  10. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
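
    The mechanics can be illustrated on a toy multi-armed bandit; in this sketch (the arm means, ensemble size, and perturbation scale are invented here) each of M members keeps independently perturbed running means, and sampling a member uniformly per round approximates drawing a posterior sample:

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.1, 0.5, 0.9])        # hypothetical 3-armed Gaussian bandit
K, M, T, sigma = len(true_means), 10, 2000, 0.5

# Each ensemble member keeps its own perturbed running mean per arm; one random
# pseudo-observation per arm seeds the diversity between members.
sums = rng.normal(0.5, 1.0, size=(M, K))
counts = np.ones((M, K))
pulls = np.zeros(K)

for t in range(T):
    m = rng.integers(M)                        # sample a member uniformly ...
    a = int(np.argmax(sums[m] / counts[m]))    # ... and act greedily w.r.t. it
    reward = true_means[a] + sigma * rng.normal()
    # Every member sees the reward plus its own independent perturbation.
    sums[:, a] += reward + sigma * rng.normal(size=M)
    counts[:, a] += 1
    pulls[a] += 1

best_arm_share = pulls[np.argmax(true_means)] / T
```

    Over time the perturbed means concentrate around the true arm means, so play concentrates on the best arm while early disagreement between members drives exploration.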

  11. Improving ECG classification accuracy using an ensemble of neural network modules.

    Directory of Open Access Journals (Sweden)

    Mehrdad Javadi

    Full Text Available This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, to perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization.
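
    In scikit-learn terms, the "modified" idea corresponds to a stacking combiner that also receives the raw input pattern (`passthrough=True`). The base models and synthetic data below are stand-ins for illustration, not the paper's ECG networks:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=800,
                              random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0))]

# Conventional stacking: the combiner sees only the base classifiers' outputs.
plain = StackingClassifier(base, final_estimator=LogisticRegression(),
                           passthrough=False)
# "Modified" variant: the input pattern is appended to the base outputs.
modified = StackingClassifier(base, final_estimator=LogisticRegression(),
                              passthrough=True)

acc_plain = plain.fit(X_tr, y_tr).score(X_te, y_te)
acc_modified = modified.fit(X_tr, y_tr).score(X_te, y_te)
```

    Whether the passthrough variant wins depends on the data; the paper's claim is that the extra input-space knowledge helps on the ECG task.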

  12. Ensemble-based forecasting at Horns Rev: Ensemble conversion and kernel dressing

    DEFF Research Database (Denmark)

    Pinson, Pierre; Madsen, Henrik

    . The obtained ensemble forecasts of wind power are then converted into predictive distributions with an original adaptive kernel dressing method. The shape of the kernels is driven by a mean-variance model, the parameters of which are recursively estimated in order to maximize the overall skill of obtained...

  13. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  14. Utilising Tree-Based Ensemble Learning for Speaker Segmentation

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Tan, Zheng-Hua; Christensen, Mads Græsbøll

    2014-01-01

    In audio and speech processing, accurate detection of the changing points between multiple speakers in speech segments is an important stage for several applications such as speaker identification and tracking. Bayesian Information Criteria (BIC)-based approaches are the most traditionally used...... for a certain condition, the model becomes biased to the data used for training limiting the model’s generalisation ability. In this paper, we propose a BIC-based tuning-free approach for speaker segmentation through the use of ensemble-based learning. A forest of segmentation trees is constructed in which each...... tree is trained using a sampled version of the speech segment. During the tree construction process, a set of randomly selected points in the input sequence is examined as potential segmentation points. The point that yields the highest ΔBIC is chosen and the same process is repeated for the resultant...

  15. Cluster-based analysis of multi-model climate ensembles

    Science.gov (United States)

    Hyde, Richard; Hossaini, Ryan; Leeson, Amber A.

    2018-06-01

    Clustering - the automated grouping of similar data - can provide powerful and unique insight into large and complex data sets, in a fast and computationally efficient manner. While clustering has been used in a variety of fields (from medical image processing to economics), its application within atmospheric science has been fairly limited to date, and the potential benefits of the application of advanced clustering techniques to climate data (both model output and observations) have yet to be fully realised. In this paper, we explore the specific application of clustering to a multi-model climate ensemble. We hypothesise that clustering techniques can provide (a) a flexible, data-driven method of testing model-observation agreement and (b) a mechanism with which to identify model development priorities. We focus our analysis on chemistry-climate model (CCM) output of tropospheric ozone - an important greenhouse gas - from the recent Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP). Tropospheric column ozone from the ACCMIP ensemble was clustered using the Data Density based Clustering (DDC) algorithm. We find that a multi-model mean (MMM) calculated using members of the most-populous cluster identified at each location offers a reduction of up to ˜ 20 % in the global absolute mean bias between the MMM and an observed satellite-based tropospheric ozone climatology, with respect to a simple, all-model MMM. On a spatial basis, the bias is reduced at ˜ 62 % of all locations, with the largest bias reductions occurring in the Northern Hemisphere - where ozone concentrations are relatively large. However, the bias is unchanged at 9 % of all locations and increases at 29 %, particularly in the Southern Hemisphere. The latter demonstrates that although cluster-based subsampling acts to remove outlier model data, such data may in fact be closer to observed values in some locations. We further demonstrate that clustering can provide a viable and
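
    The cluster-subsampled multi-model mean can be illustrated at a single grid cell. The numbers below are invented for illustration (a majority of models near the observation plus a small biased cluster), and KMeans stands in for the DDC algorithm used in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
obs = 30.0                                          # "observed" column ozone, DU
models = np.concatenate([rng.normal(31.0, 1.0, 8),  # majority cluster near obs
                         rng.normal(45.0, 1.0, 3)]) # biased outlier cluster

# Cluster the ensemble members at this location, then average only the
# members of the most-populous cluster.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    models.reshape(-1, 1))
majority = labels == np.bincount(labels).argmax()

mmm_all = models.mean()                 # simple all-model multi-model mean
mmm_cluster = models[majority].mean()   # cluster-subsampled multi-model mean
```

    In this constructed case the cluster-based MMM lands closer to the observation than the all-model MMM, mirroring the paper's result; as the abstract notes, the opposite can happen where the "outlier" models are in fact closer to observations.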

  16. Uncertainty visualization in HARDI based on ensembles of ODFs

    KAUST Repository

    Jiao, Fangxiang

    2012-02-01

    In this paper, we propose a new and accurate technique for uncertainty analysis and uncertainty visualization based on fiber orientation distribution function (ODF) glyphs, associated with high angular resolution diffusion imaging (HARDI). Our visualization applies volume rendering techniques to an ensemble of 3D ODF glyphs, which we call SIP functions of diffusion shapes, to capture their variability due to underlying uncertainty. This rendering elucidates the complex heteroscedastic structural variation in these shapes. Furthermore, we quantify the extent of this variation by measuring the fraction of the volume of these shapes, which is consistent across all noise levels, the certain volume ratio. Our uncertainty analysis and visualization framework is then applied to synthetic data, as well as to HARDI human-brain data, to study the impact of various image acquisition parameters and background noise levels on the diffusion shapes. © 2012 IEEE.

  17. Uncertainty visualization in HARDI based on ensembles of ODFs

    KAUST Repository

    Jiao, Fangxiang; Phillips, Jeff M.; Gur, Yaniv; Johnson, Chris R.

    2012-01-01

    In this paper, we propose a new and accurate technique for uncertainty analysis and uncertainty visualization based on fiber orientation distribution function (ODF) glyphs, associated with high angular resolution diffusion imaging (HARDI). Our visualization applies volume rendering techniques to an ensemble of 3D ODF glyphs, which we call SIP functions of diffusion shapes, to capture their variability due to underlying uncertainty. This rendering elucidates the complex heteroscedastic structural variation in these shapes. Furthermore, we quantify the extent of this variation by measuring the fraction of the volume of these shapes, which is consistent across all noise levels, the certain volume ratio. Our uncertainty analysis and visualization framework is then applied to synthetic data, as well as to HARDI human-brain data, to study the impact of various image acquisition parameters and background noise levels on the diffusion shapes. © 2012 IEEE.

  18. Ensemble-based Probabilistic Forecasting at Horns Rev

    DEFF Research Database (Denmark)

    Pinson, Pierre; Madsen, Henrik

    2009-01-01

    forecasting methodology. In a first stage, ensemble forecasts of meteorological variables are converted to power through a suitable power curve model. This model employs local polynomial regression, and is adaptively estimated with an orthogonal fitting method. The obtained ensemble forecasts of wind power

  19. Ensemble-based prediction of RNA secondary structures.

    Science.gov (United States)

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

    Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between

  20. An Adjoint-Based Adaptive Ensemble Kalman Filter

    KAUST Repository

    Song, Hajoon

    2013-10-01

    A new hybrid ensemble Kalman filter/four-dimensional variational data assimilation (EnKF/4D-VAR) approach is introduced to mitigate background covariance limitations in the EnKF. The work is based on the adaptive EnKF (AEnKF) method, which bears a strong resemblance to the hybrid EnKF/three-dimensional variational data assimilation (3D-VAR) method. In the AEnKF, the representativeness of the EnKF ensemble is regularly enhanced with new members generated after back projection of the EnKF analysis residuals to state space using a 3D-VAR [or optimal interpolation (OI)] scheme with a preselected background covariance matrix. The idea here is to reformulate the transformation of the residuals as a 4D-VAR problem, constraining the new member with model dynamics and the previous observations. This should provide more information for the estimation of the new member and reduce dependence of the AEnKF on the assumed stationary background covariance matrix. This is done by integrating the analysis residuals backward in time with the adjoint model. Numerical experiments are performed with the Lorenz-96 model under different scenarios to test the new approach and to evaluate its performance with respect to the EnKF and the hybrid EnKF/3D-VAR. The new method leads to the least root-mean-square estimation errors as long as the linear assumption guaranteeing the stability of the adjoint model holds. It is also found to be less sensitive to choices of the assimilation system inputs and parameters.

  1. An Adjoint-Based Adaptive Ensemble Kalman Filter

    KAUST Repository

    Song, Hajoon; Hoteit, Ibrahim; Cornuelle, Bruce D.; Luo, Xiaodong; Subramanian, Aneesh C.

    2013-01-01

    A new hybrid ensemble Kalman filter/four-dimensional variational data assimilation (EnKF/4D-VAR) approach is introduced to mitigate background covariance limitations in the EnKF. The work is based on the adaptive EnKF (AEnKF) method, which bears a strong resemblance to the hybrid EnKF/three-dimensional variational data assimilation (3D-VAR) method. In the AEnKF, the representativeness of the EnKF ensemble is regularly enhanced with new members generated after back projection of the EnKF analysis residuals to state space using a 3D-VAR [or optimal interpolation (OI)] scheme with a preselected background covariance matrix. The idea here is to reformulate the transformation of the residuals as a 4D-VAR problem, constraining the new member with model dynamics and the previous observations. This should provide more information for the estimation of the new member and reduce dependence of the AEnKF on the assumed stationary background covariance matrix. This is done by integrating the analysis residuals backward in time with the adjoint model. Numerical experiments are performed with the Lorenz-96 model under different scenarios to test the new approach and to evaluate its performance with respect to the EnKF and the hybrid EnKF/3D-VAR. The new method leads to the least root-mean-square estimation errors as long as the linear assumption guaranteeing the stability of the adjoint model holds. It is also found to be less sensitive to choices of the assimilation system inputs and parameters.
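
    For reference, the building block that the AEnKF and the hybrid schemes above extend is the EnKF analysis step. The following is a minimal stochastic (perturbed-observation) EnKF update on a toy three-dimensional state; it is a generic sketch of that building block, not the adjoint-based scheme of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(E, y, H, R):
    """One stochastic EnKF analysis step.

    E: (n, N) ensemble of states, y: (m,) observation,
    H: (m, n) observation operator, R: (m, m) observation-error covariance."""
    n, N = E.shape
    A = E - E.mean(axis=1, keepdims=True)            # ensemble anomalies
    Pf = A @ A.T / (N - 1)                           # sample background covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # Perturb the observation for each member so the analysis spread
    # remains statistically consistent.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return E + K @ (Y - H @ E)

# Toy example: estimate a 3-dim state from a noisy observation of component 0.
truth = np.array([1.0, -2.0, 0.5])
E = truth[:, None] + rng.normal(0.0, 1.0, size=(3, 40))   # prior ensemble
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.05]])
y = H @ truth + rng.multivariate_normal([0.0], R)
Ea = enkf_analysis(E, y, H, R)
```

    The observed component's analysis mean moves toward the observation and its spread shrinks; the AEnKF augments exactly this ensemble with new members derived from the analysis residuals.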

  2. On evaluation of ensemble precipitation forecasts with observation-based ensembles

    Directory of Open Access Journals (Sweden)

    S. Jaun

    2007-04-01

    Full Text Available Spatial interpolation of precipitation data is uncertain. How important is this uncertainty and how can it be considered in the evaluation of high-resolution probabilistic precipitation forecasts? These questions are discussed by experimental evaluation of the COSMO consortium's limited-area ensemble prediction system COSMO-LEPS. The applied performance measure is the often-used Brier skill score (BSS). The observational references in the evaluation are (a) analyzed rain gauge data by ordinary Kriging and (b) ensembles of interpolated rain gauge data by stochastic simulation. This permits the consideration of either a deterministic reference (the event is observed or not with 100% certainty) or a probabilistic reference that makes allowance for uncertainties in spatial averaging. The evaluation experiments show that the evaluation uncertainties are substantial even for the large area (41 300 km2) of Switzerland with a mean rain gauge distance as good as 7 km: the one- to three-day precipitation forecasts have skill decreasing with forecast lead time, but the one- and two-day forecast performances do not differ significantly.
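
    For concreteness, the Brier score and Brier skill score used in such an evaluation can be computed as below; the synthetic forecasts and the climatological reference here are invented for illustration:

```python
import numpy as np

def brier_score(p, o):
    """Mean squared difference between forecast probabilities p and outcomes o."""
    p, o = np.asarray(p, dtype=float), np.asarray(o, dtype=float)
    return float(np.mean((p - o) ** 2))

def brier_skill_score(p, o, p_ref):
    """BSS = 1 - BS/BS_ref: 1 is perfect, 0 matches the reference, <0 is worse."""
    return 1.0 - brier_score(p, o) / brier_score(p_ref, o)

rng = np.random.default_rng(1)
o = (rng.random(1000) < 0.3).astype(float)      # synthetic events, base rate 0.3
p_clim = np.full_like(o, 0.3)                   # climatological reference forecast
# A forecast that is informative about the outcome, with some noise.
p_fcst = np.clip(0.3 + 0.5 * (o - 0.3) + 0.1 * rng.normal(size=o.size), 0.0, 1.0)

bss = brier_skill_score(p_fcst, o, p_clim)
```

    Replacing the deterministic outcomes `o` with event probabilities derived from an observation ensemble (as in the paper's stochastic-simulation reference) changes only the inputs, not the score formulas.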

  3. Developing an Ensemble Prediction System based on COSMO-DE

    Science.gov (United States)

    Theis, S.; Gebhardt, C.; Buchhold, M.; Ben Bouallègue, Z.; Ohl, R.; Paulat, M.; Peralta, C.

    2010-09-01

    The numerical weather prediction model COSMO-DE is a configuration of the COSMO model with a horizontal grid size of 2.8 km. It has been running operationally at DWD since 2007, it covers the area of Germany and produces forecasts with a lead time of 0-21 hours. The model COSMO-DE is convection-permitting, which means that it does without a parametrisation of deep convection and simulates deep convection explicitly. One aim is an improved forecast of convective heavy rain events. Convection-permitting models are in operational use at several weather services, but currently not in ensemble mode. It is expected that an ensemble system could reveal the advantages of a convection-permitting model even better. The probabilistic approach is necessary, because the explicit simulation of convective processes for more than a few hours cannot be viewed as a deterministic forecast anymore. This is due to the chaotic behaviour and short life cycle of the processes which are simulated explicitly now. In the framework of the project COSMO-DE-EPS, DWD is developing and implementing an ensemble prediction system (EPS) for the model COSMO-DE. The project COSMO-DE-EPS comprises the generation of ensemble members, as well as the verification and visualization of the ensemble forecasts and also statistical postprocessing. A pre-operational mode of the EPS with 20 ensemble members is foreseen to start in 2010. Operational use is envisaged to start in 2012, after an upgrade to 40 members and inclusion of statistical postprocessing. The presentation introduces the project COSMO-DE-EPS and describes the design of the ensemble as it is planned for the pre-operational mode. In particular, the currently implemented method for the generation of ensemble members will be explained and discussed. The method includes variations of initial conditions, lateral boundary conditions, and model physics. At present, pragmatic methods are applied which resemble the basic ideas of a multi-model approach.

  4. Visualizing Confidence in Cluster-Based Ensemble Weather Forecast Analyses.

    Science.gov (United States)

    Kumpf, Alexander; Tost, Bianca; Baumgart, Marlene; Riemer, Michael; Westermann, Rudiger; Rautenhaus, Marc

    2018-01-01

    In meteorology, cluster analysis is frequently used to determine representative trends in ensemble weather predictions in a selected spatio-temporal region, e.g., to reduce a set of ensemble members to simplify and improve their analysis. Identified clusters (i.e., groups of similar members), however, can be very sensitive to small changes of the selected region, so that clustering results can be misleading and bias subsequent analyses. In this article, we, a team of visualization scientists and meteorologists, deliver visual analytics solutions to analyze the sensitivity of clustering results with respect to changes of a selected region. We propose an interactive visual interface that enables simultaneous visualization of a) the variation in composition of identified clusters (i.e., their robustness), b) the variability in cluster membership for individual ensemble members, and c) the uncertainty in the spatial locations of identified trends. We demonstrate that our solution shows meteorologists how representative a clustering result is, and with respect to which changes in the selected region it becomes unstable. Furthermore, our solution helps to identify those ensemble members which stably belong to a given cluster and can thus be considered similar. In a real-world application case we show how our approach is used to analyze the clustering behavior of different regions in a forecast of "Tropical Cyclone Karl", guiding the user towards the cluster robustness information required for subsequent ensemble analysis.

  5. Constructing Support Vector Machine Ensembles for Cancer Classification Based on Proteomic Profiling

    Institute of Scientific and Technical Information of China (English)

    Yong Mao; Xiao-Bo Zhou; Dao-Ying Pi; You-Xian Sun

    2005-01-01

    In this study, we present a constructive algorithm for training cooperative support vector machine ensembles (CSVMEs). CSVME combines ensemble architecture design with cooperative training for individual SVMs in ensembles. Unlike most previous studies on training ensembles, CSVME puts emphasis on both accuracy and collaboration among individual SVMs in an ensemble. A group of SVMs selected on the basis of recursive classifier elimination is used in CSVME, and the number of individual SVMs selected to construct CSVME is determined by 10-fold cross-validation. This kind of SVME has been tested on two ovarian cancer datasets previously obtained by proteomic mass spectrometry. By combining several individual SVMs, the proposed method achieves better performance than an SVME composed of all base SVMs.

  6. Polyphony: superposition independent methods for ensemble-based drug discovery.

    Science.gov (United States)

    Pitt, William R; Montalvão, Rinaldo W; Blundell, Tom L

    2014-09-30

    Structure-based drug design is an iterative process, following cycles of structural biology, computer-aided design, synthetic chemistry and bioassay. In favorable circumstances, this process can lead to the structures of hundreds of protein-ligand crystal structures. In addition, molecular dynamics simulations are increasingly being used to further explore the conformational landscape of these complexes. Currently, methods capable of the analysis of ensembles of crystal structures and MD trajectories are limited and usually rely upon least squares superposition of coordinates. Novel methodologies are described for the analysis of multiple structures of a protein. Statistical approaches that rely upon residue equivalence, but not superposition, are developed. Tasks that can be performed include the identification of hinge regions, allosteric conformational changes and transient binding sites. The approaches are tested on crystal structures of CDK2 and other CMGC protein kinases and a simulation of p38α. Known interaction-conformational change relationships are highlighted, but new ones are also revealed. A transient but druggable allosteric pocket in CDK2 is predicted to occur under the CMGC insert. Furthermore, an evolutionarily-conserved conformational link from the location of this pocket, via the αEF-αF loop, to phosphorylation sites on the activation loop is discovered. New methodologies are described and validated for the superimposition-independent conformational analysis of large collections of structures or simulation snapshots of the same protein. The methodologies are encoded in a Python package called Polyphony, which is released as open source to accompany this paper [http://wrpitt.bitbucket.org/polyphony/].

  7. Cascaded ensemble of convolutional neural networks and handcrafted features for mitosis detection

    Science.gov (United States)

    Wang, Haibo; Cruz-Roa, Angel; Basavanhally, Ajay; Gilmore, Hannah; Shih, Natalie; Feldman, Mike; Tomaszewski, John; Gonzalez, Fabio; Madabhushi, Anant

    2014-03-01

    Breast cancer (BCa) grading plays an important role in predicting disease aggressiveness and patient outcome. A key component of BCa grade is mitotic count, which involves quantifying the number of cells in the process of dividing (i.e. undergoing mitosis) at a specific point in time. Currently mitosis counting is done manually by a pathologist looking at multiple high power fields on a glass slide under a microscope, an extremely laborious and time consuming process. The development of computerized systems for automated detection of mitotic nuclei, while highly desirable, is confounded by the highly variable shape and appearance of mitoses. Existing methods use either handcrafted features that capture certain morphological, statistical or textural attributes of mitoses or features learned with convolutional neural networks (CNN). While handcrafted features are inspired by the domain and the particular application, the data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any of the handcrafted features. On the other hand, CNN is computationally more complex and needs a large number of labeled training instances. Since handcrafted features attempt to model domain pertinent attributes and CNN approaches are largely unsupervised feature generation methods, there is an appeal to attempting to combine these two distinct classes of feature generation strategies to create an integrated set of attributes that can potentially outperform either class of feature extraction strategies individually. In this paper, we present a cascaded approach for mitosis detection that intelligently combines a CNN model and handcrafted features (morphology, color and texture features). By employing a light CNN model, the proposed approach is far less demanding computationally, and the cascaded strategy of combining handcrafted features and CNN-derived features enables the possibility of maximizing performance by
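
    The cascade idea, a cheap classifier first with only low-confidence cases escalated to the heavier model, can be sketched generically. Both stages and the confidence band below are stand-ins (a linear model for the handcrafted-feature stage, a random forest for the CNN stage), not the paper's mitosis detectors:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: a light model over (stand-in) handcrafted features.
stage1 = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Stage 2: a heavier model, applied only to uncertain cases.
stage2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

proba = stage1.predict_proba(X_te)[:, 1]
uncertain = np.abs(proba - 0.5) < 0.3      # low-confidence band goes to stage 2
pred = (proba > 0.5).astype(int)
pred[uncertain] = stage2.predict(X_te[uncertain])

acc = (pred == y_te).mean()
frac_escalated = uncertain.mean()
```

    Only the escalated fraction pays the cost of the heavy model, which is the computational appeal of the cascade.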

  8. Development of multimodel ensemble based district level medium ...

    Indian Academy of Sciences (India)

    tively by computing the anomaly correlation coefficient between the predicted rainfall and observed rainfall. High resolution (lat./long.) gridded data ..... particularly in the prediction of intensity and mesoscale rainfall features causing inland flooding. During recent years, Ensemble Prediction System (EPS) has emerged as ...

  9. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    KAUST Repository

    Liu, Bo; Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2015-01-01

    (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence

  10. Tweet-based Target Market Classification Using Ensemble Method

    Directory of Open Access Journals (Sweden)

    Muhammad Adi Khairul Anshary

    2016-09-01

    Full Text Available Target market classification is aimed at focusing marketing activities on the right targets. Classification of target markets can be done through data mining and by utilizing data from social media, e.g. Twitter. The end result of data mining is a set of learning models that can classify new data. Ensemble methods can improve the accuracy of the models and therefore provide better results. In this study, classification of target markets was conducted on a dataset of 3000 tweets in order to extract features. Classification models were constructed to manipulate the training data using two ensemble methods (bagging and boosting). To investigate the effectiveness of the ensemble methods, this study used the CART (classification and regression tree) algorithm for comparison. Three categories of consumer goods (computers, mobile phones and cameras) and three categories of sentiments (positive, negative and neutral) were classified towards three target-market categories. Machine learning was performed using Weka 3.6.9. The results on the test data showed that the bagging method improved the accuracy of CART by 1.9% (to 85.20%). On the other hand, for sentiment classification, the ensemble methods were not successful in increasing the accuracy of CART. The results of this study may be taken into consideration by companies who approach their customers through social media, especially Twitter.
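
    A minimal version of the bagging-versus-CART comparison can be reproduced on synthetic data; the feature matrix here is a stand-in for the paper's tweet features, and Python/scikit-learn stands in for Weka:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in feature matrix (the paper uses features extracted from tweets).
X, y = make_classification(n_samples=1500, n_features=30, n_informative=10,
                           random_state=0)

# A single CART-style decision tree versus a bagged ensemble of the same tree.
cart = DecisionTreeClassifier(random_state=0)
bagged = BaggingClassifier(DecisionTreeClassifier(random_state=0),
                           n_estimators=50, random_state=0)

acc_cart = cross_val_score(cart, X, y, cv=5).mean()
acc_bag = cross_val_score(bagged, X, y, cv=5).mean()
```

    Bagging reduces the variance of the unstable tree learner, which is the usual source of the accuracy gain the paper reports for product classification.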

  11. Predictor-Year Subspace Clustering Based Ensemble Prediction of Indian Summer Monsoon

    Directory of Open Access Journals (Sweden)

    Moumita Saha

    2016-01-01

    Full Text Available Forecasting the Indian summer monsoon is a challenging task due to its complex and nonlinear behavior. A large number of global climatic variables with varying interaction patterns over the years influence the monsoon. Various statistical and neural prediction models have been proposed for forecasting the monsoon, but many of them fail to capture its variability over the years. The skill of the predictor variables of the monsoon also evolves over time. In this article, we propose a joint clustering of monsoon years and predictors for understanding and predicting the monsoon. This is achieved by a subspace clustering algorithm. It groups the years based on the prevailing global climatic conditions using a statistical clustering technique, and subsequently, for each such group, it identifies significant climatic predictor variables which assist in better prediction. A prediction model is designed for each cluster using a random forest of regression trees. Prediction of both aggregate and regional monsoon is attempted. A mean absolute error of 5.2% is obtained for forecasting the aggregate Indian summer monsoon. Errors in predicting the regional monsoons are also comparable, given the high variation of regional precipitation. The proposed joint-clustering-based ensemble model is observed to be superior to existing monsoon prediction models, and it also surpasses general nonclustering-based prediction models.

  12. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Full Text Available Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve the predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase in the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to, and evaluate its performance on, a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  13. Current path in light emitting diodes based on nanowire ensembles

    International Nuclear Information System (INIS)

    Limbach, F; Hauswald, C; Lähnemann, J; Wölz, M; Brandt, O; Trampert, A; Hanke, M; Jahn, U; Calarco, R; Geelhaar, L; Riechert, H

    2012-01-01

    Light emitting diodes (LEDs) have been fabricated using ensembles of free-standing (In, Ga)N/GaN nanowires (NWs) grown on Si substrates in the self-induced growth mode by molecular beam epitaxy. Electron-beam-induced current analysis, cathodoluminescence as well as biased μ-photoluminescence spectroscopy, transmission electron microscopy, and electrical measurements indicate that the electroluminescence of such LEDs is governed by the differences in the individual current densities of the single-NW LEDs operated in parallel, i.e. by the inhomogeneity of the current path in the ensemble LED. In addition, the optoelectronic characterization leads to the conclusion that these NWs exhibit N-polarity and that the (In, Ga)N quantum well states in the NWs are subject to a non-vanishing quantum confined Stark effect. (paper)

  14. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  15. Ensemble based system for whole-slide prostate cancer probability mapping using color texture features.

    LENUS (Irish Health Repository)

    DiFranco, Matthew D

    2011-01-01

    We present a tile-based approach for producing clinically relevant probability maps of prostatic carcinoma in histological sections from radical prostatectomy. Our methodology incorporates ensemble learning for feature selection and classification on expert-annotated images. Random forest feature selection performed over varying training sets provides a subset of generalized CIEL*a*b* co-occurrence texture features, while sample selection strategies with minimal constraints reduce training data requirements to achieve reliable results. Ensembles of classifiers are built using expert-annotated tiles from training images, and scores for the probability of cancer presence are calculated from the responses of each classifier in the ensemble. Spatial filtering of tile-based texture features prior to classification results in increased heat-map coherence as well as AUC values of 95% using ensembles of either random forests or support vector machines. Our approach is designed for adaptation to different imaging modalities, image features, and histological decision domains.

  16. Memristor-based neural networks

    International Nuclear Information System (INIS)

    Thomas, Andy

    2013-01-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (topical review)

  17. Force Sensor Based Tool Condition Monitoring Using a Heterogeneous Ensemble Learning Model

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2014-11-01

    Full Text Available Tool condition monitoring (TCM) plays an important role in improving machining efficiency and guaranteeing workpiece quality. In order to realize reliable recognition of the tool condition, a robust classifier needs to be constructed to depict the relationship between tool wear states and sensory information. However, because of the complexity of the machining process and the uncertainty of the tool wear evolution, it is hard for a single classifier to fit all the collected samples without sacrificing generalization ability. In this paper, heterogeneous ensemble learning is proposed to realize tool condition monitoring in which the support vector machine (SVM), hidden Markov model (HMM) and radial basis function (RBF) are selected as base classifiers and a stacking ensemble strategy is further used to reflect the relationship between the outputs of these base classifiers and tool wear states. Based on the heterogeneous ensemble learning classifier, an online monitoring system is constructed in which the harmonic features are extracted from force signals and a minimal redundancy and maximal relevance (mRMR) algorithm is utilized to select the most prominent features. To verify the effectiveness of the proposed method, a titanium alloy milling experiment was carried out and samples with different tool wear states were collected to build the proposed heterogeneous ensemble learning classifier. Moreover, the homogeneous ensemble learning model and majority voting strategy are also adopted to make a comparison. The analysis and comparison results show that the proposed heterogeneous ensemble learning classifier performs better in both classification accuracy and stability.
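The stacking strategy described in this record can be sketched in a few lines: base classifiers produce predictions, and a meta-learner is fitted on those predictions. This is a minimal illustration with synthetic data and simple threshold classifiers standing in for the paper's SVM, HMM and RBF bases; the least-squares meta-learner is likewise an assumed stand-in, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "tool wear" data: a 1-D feature and a binary worn/not-worn label.
X = rng.normal(size=400)
y = (X + 0.3 * rng.normal(size=400) > 0).astype(float)
X_train, y_train = X[:300], y[:300]
X_test, y_test = X[300:], y[300:]

# Three stand-in base classifiers (the paper uses SVM, HMM and RBF).
def clf_mid(x):    # decision stump at 0
    return (x > 0).astype(float)

def clf_low(x):    # stump with a shifted threshold
    return (x > -0.5).astype(float)

def clf_high(x):
    return (x > 0.5).astype(float)

base = [clf_mid, clf_low, clf_high]

# Stacking: fit a linear meta-learner on the base outputs (least squares).
Z_train = np.column_stack([c(X_train) for c in base] + [np.ones_like(X_train)])
w, *_ = np.linalg.lstsq(Z_train, y_train, rcond=None)

Z_test = np.column_stack([c(X_test) for c in base] + [np.ones_like(X_test)])
stack_pred = (Z_test @ w > 0.5).astype(float)
accuracy = (stack_pred == y_test).mean()
print(f"stacked accuracy: {accuracy:.2f}")
```

The meta-learner learns how much to trust each base classifier, which is the advantage stacking has over plain majority voting when base learners differ in quality.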

  18. A target recognition method for maritime surveillance radars based on hybrid ensemble selection

    Science.gov (United States)

    Fan, Xueman; Hu, Shengliang; He, Jingbo

    2017-11-01

    In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.
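The optimisation phase above searches for ensembles on a Pareto front trading classification error against diversity. The core operation, extracting the non-dominated set, can be sketched as follows; the candidate scores are illustrative only, and this brute-force extraction is a stand-in for the full NSGA-II machinery.

```python
# Each candidate ensemble is scored on two objectives to be minimised:
# classification error and (1 - diversity). Values are made up.
candidates = {
    "E1": (0.10, 0.60),
    "E2": (0.12, 0.40),
    "E3": (0.15, 0.35),
    "E4": (0.12, 0.55),   # dominated by E2
    "E5": (0.20, 0.50),   # dominated by E2
}

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# The Pareto front: candidates dominated by no other candidate.
pareto = [n for n, s in candidates.items()
          if not any(dominates(t, s) for m, t in candidates.items() if m != n)]
print(sorted(pareto))  # ['E1', 'E2', 'E3']
```

NSGA-II repeatedly applies this non-dominated sort to rank a whole population, then uses crowding distance to keep the front well spread.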

  19. Ensembl 2017

    OpenAIRE

    Aken, Bronwen L.; Achuthan, Premanand; Akanni, Wasiu; Amode, M. Ridwan; Bernsdorff, Friederike; Bhai, Jyothish; Billis, Konstantinos; Carvalho-Silva, Denise; Cummins, Carla; Clapham, Peter; Gil, Laurent; Girón, Carlos García; Gordon, Leo; Hourlier, Thibaut; Hunt, Sarah E.

    2016-01-01

    Ensembl (www.ensembl.org) is a database and genome browser for enabling research on vertebrate genomes. We import, analyse, curate and integrate a diverse collection of large-scale reference data to create a more comprehensive view of genome biology than would be possible from any individual dataset. Our extensive data resources include evidence-based gene and regulatory region annotation, genome variation and gene trees. An accompanying suite of tools, infrastructure and programmatic access ...

  20. An ensemble deep learning based approach for red lesion detection in fundus images.

    Science.gov (United States)

    Orlando, José Ignacio; Prokofyeva, Elena; Del Fresno, Mariana; Blaschko, Matthew B

    2018-01-01

    Diabetic retinopathy (DR) is one of the leading causes of preventable blindness in the world. Its earliest signs are red lesions, a general term that groups both microaneurysms (MAs) and hemorrhages (HEs). In daily clinical practice, these lesions are manually detected by physicians using fundus photographs. However, this task is tedious and time consuming, and requires an intensive effort due to the small size of the lesions and their lack of contrast. Computer-assisted diagnosis of DR based on red lesion detection is being actively explored due to its improvement effects in both clinicians' consistency and accuracy. Moreover, it provides comprehensive feedback that is easy to assess by the physicians. Several methods for detecting red lesions have been proposed in the literature, most of them based on characterizing lesion candidates using hand crafted features, and classifying them into true or false positive detections. Deep learning based approaches, by contrast, are scarce in this domain due to the high expense of annotating the lesions manually. In this paper we propose a novel method for red lesion detection based on combining deep learned features and domain knowledge. Features learned by a convolutional neural network (CNN) are augmented by incorporating hand crafted features. Such ensemble vector of descriptors is used afterwards to identify true lesion candidates using a Random Forest classifier. We empirically observed that combining both sources of information significantly improves results with respect to using each approach separately. Furthermore, our method reported the highest performance on a per-lesion basis on DIARETDB1 and e-ophtha, and for screening and need for referral on MESSIDOR compared to a second human expert. Results highlight the fact that integrating manually engineered approaches with deep learned features is relevant to improve results when the networks are trained from lesion-level annotated data. 
An open source implementation of our

  1. Evaluation of medium-range ensemble flood forecasting based on calibration strategies and ensemble methods in Lanjiang Basin, Southeast China

    Science.gov (United States)

    Liu, Li; Gao, Chao; Xuan, Weidong; Xu, Yue-Ping

    2017-11-01

    Ensemble flood forecasts by hydrological models using numerical weather prediction products as forcing data are becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system comprised of an automatically calibrated Variable Infiltration Capacity model and quantitative precipitation forecasts from the TIGGE dataset is constructed for Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by the parallel programmed ε-NSGA II multi-objective algorithm. According to the solutions by ε-NSGA II, two differently parameterized models are determined to simulate daily flows and peak flows at each of the three hydrological stations. Then a simple yet effective modular approach is proposed to combine these daily and peak flows at the same station into one composite series. Five ensemble methods and various evaluation metrics are adopted. The results show that ε-NSGA II can provide an objective determination on parameter estimation, and the parallel program permits a more efficient simulation. It is also demonstrated that the forecasts from ECMWF have more favorable skill scores than other Ensemble Prediction Systems. The multimodel ensembles have advantages over all the single model ensembles, and the multimodel methods weighted on members and skill scores outperform other methods. Furthermore, the overall performance at the three stations can be satisfactory up to ten days; however, the hydrological errors can degrade the skill score by approximately 2 days, and the influence persists until a lead time of 10 days with a weakening trend. With respect to peak flows selected by the Peaks Over Threshold approach, the ensemble means from single models or multimodels are generally underestimated, indicating that the ensemble mean can bring overall improvement in forecasting of flows. For

  2. A convection-allowing ensemble forecast based on the breeding growth mode and associated optimization of precipitation forecast

    Science.gov (United States)

    Li, Xiang; He, Hongrang; Chen, Chaohui; Miao, Ziqing; Bai, Shigang

    2017-10-01

    A convection-allowing ensemble forecast experiment on a squall line was conducted based on the breeding growth mode (BGM). Meanwhile, the probability matched mean (PMM) and neighborhood ensemble probability (NEP) methods were used to optimize the associated precipitation forecast. The ensemble forecast predicted the precipitation tendency accurately, which was closer to the observation than in the control forecast. For heavy rainfall, the precipitation center produced by the ensemble forecast was also better. The Fractions Skill Score (FSS) results indicated that the ensemble mean was skillful in light rainfall, while the PMM produced better probability distribution of precipitation for heavy rainfall. Preliminary results demonstrated that convection-allowing ensemble forecast could improve precipitation forecast skill through providing valuable probability forecasts. It is necessary to employ new methods, such as the PMM and NEP, to generate precipitation probability forecasts. Nonetheless, the lack of spread and the overprediction of precipitation by the ensemble members are still problems that need to be solved.
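The probability matched mean (PMM) used above to optimize the precipitation forecast can be sketched generically: keep the spatial pattern of the ensemble mean, but replace its (smoothed) amplitude distribution with that of the pooled member values. This is a standard PMM construction on synthetic gamma-distributed rain fields, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic precipitation forecasts: 5 ensemble members on a 20x20 grid.
n_members, ny, nx = 5, 20, 20
members = rng.gamma(shape=1.5, scale=2.0, size=(n_members, ny, nx))

def probability_matched_mean(ens):
    """PMM: the spatial ranks of the ensemble mean, with amplitudes drawn
    from the pooled distribution of all member values."""
    mean = ens.mean(axis=0)
    # Pool all member values, sort descending, and subsample every n-th
    # so the count matches the number of grid points.
    pooled = np.sort(ens.ravel())[::-1][::ens.shape[0]]
    pmm = np.empty_like(mean)
    order = np.argsort(mean.ravel())[::-1]   # grid points by mean rank
    pmm.ravel()[order] = pooled              # largest pooled value -> top rank
    return pmm

pmm = probability_matched_mean(members)
mean = members.mean(axis=0)
```

Because ensemble averaging smooths out heavy-rain maxima, the PMM restores realistic peak amplitudes while preserving where the ensemble agrees rain will fall.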

  3. A Matrix-Free Posterior Ensemble Kalman Filter Implementation Based on a Modified Cholesky Decomposition

    Directory of Open Access Journals (Sweden)

    Elias D. Nino-Ruiz

    2017-07-01

    Full Text Available In this paper, a matrix-free posterior ensemble Kalman filter implementation based on a modified Cholesky decomposition is proposed. The method works as follows: the precision matrix of the background error distribution is estimated based on a modified Cholesky decomposition. The resulting estimator can be expressed in terms of Cholesky factors which can be updated based on a series of rank-one matrices in order to approximate the precision matrix of the analysis distribution. By using this matrix, the posterior ensemble can be built by either sampling from the posterior distribution or using synthetic observations. Furthermore, the computational effort of the proposed method is linear with regard to the model dimension and the number of observed components from the model domain. Experimental tests are performed making use of the Lorenz-96 model. The results reveal that the accuracy of the proposed implementation in terms of root-mean-square-error is similar, and in some cases superior, to that of a well-known ensemble Kalman filter (EnKF) implementation: the local ensemble transform Kalman filter. In addition, the results are comparable to those obtained by the EnKF with large ensemble sizes.
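For readers unfamiliar with the baseline being improved on, the standard stochastic EnKF analysis step (perturbed observations) looks as follows. This is the textbook update, not the paper's matrix-free modified-Cholesky variant; dimensions and values are a toy example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: state dimension 3, one observed component, 50 members.
n, m, N = 3, 1, 50
H = np.array([[1.0, 0.0, 0.0]])   # observation operator: first component
R = np.array([[0.1]])             # observation-error covariance
y = np.array([2.0])               # the observation

# Background ensemble drawn around a prior mean of zero.
Xb = rng.normal(0.0, 1.0, size=(n, N))

# Stochastic EnKF analysis step.
Xm = Xb.mean(axis=1, keepdims=True)
A = Xb - Xm                                      # anomalies
Pb = A @ A.T / (N - 1)                           # sample background covariance
K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)   # Kalman gain
Y = y[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), size=(m, N))  # perturbed obs
Xa = Xb + K @ (Y - H @ Xb)                       # analysis ensemble
```

The paper's contribution is to avoid forming and inverting these dense matrices explicitly, making the cost linear in the model dimension.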

  4. Neural Network Based Load Frequency Control for Restructuring ...

    African Journals Online (AJOL)

    Neural Network Based Load Frequency Control for Restructuring Power Industry. ... an artificial neural network (ANN) application of load frequency control (LFC) of a Multi-Area power system by using a neural network controller is presented.

  5. Comparative Visualization of Vector Field Ensembles Based on Longest Common Subsequence

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Richen; Guo, Hanqi; Zhang, Jiang; Yuan, Xiaoru

    2016-04-19

    We propose a longest common subsequence (LCS) based approach to compute the distance among vector field ensembles. By measuring how many common blocks the ensemble pathlines pass through, the LCS distance defines the similarity among vector field ensembles by counting the number of shared domain data blocks. Compared to the traditional methods (e.g. point-wise Euclidean distance or dynamic time warping distance), the proposed approach is robust to outliers, missing data, and the sampling rate of pathline timesteps. Taking advantage of smaller and reusable intermediate output, visualization based on the proposed LCS approach reveals temporal trends in the data at low storage cost and avoids tracing pathlines repeatedly. Finally, we evaluate our method on both synthetic data and simulation data, which demonstrates the robustness of the proposed approach.
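The block-sequence LCS distance described here can be sketched directly: encode each pathline as the ordered list of domain blocks it traverses, compute the LCS length by dynamic programming, and normalise. The block IDs below are invented for illustration.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two block-ID sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, z in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == z else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def lcs_distance(a, b):
    """Dissimilarity of two pathlines from the blocks they share, in order."""
    return 1.0 - lcs_length(a, b) / max(len(a), len(b))

# Pathlines encoded as the sequence of domain blocks they pass through.
p1 = ["B1", "B2", "B3", "B7", "B8"]
p2 = ["B1", "B2", "B4", "B7", "B8"]   # diverges at one block
p3 = ["B9", "B10"]                    # shares nothing with p1

print(lcs_distance(p1, p2))  # ~0.2 (LCS = 4 of 5 blocks)
print(lcs_distance(p1, p3))  # 1.0
```

Because only block membership matters, a pathline sampled at a different timestep rate still visits the same blocks in the same order, which is the robustness the abstract claims.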

  6. R-FCN Object Detection Ensemble based on Object Resolution and Image Quality

    DEFF Research Database (Denmark)

    Rasmussen, Christoffer Bøgelund; Nasrollahi, Kamal; Moeslund, Thomas B.

    2017-01-01

    Object detection can be difficult due to challenges such as variations in objects both inter- and intra-class. Additionally, variations can also be present between images. Based on this, research was conducted into creating an ensemble of Region-based Fully Convolutional Networks (R-FCN) object d...

  7. Automated detection of pulmonary nodules in PET/CT images: Ensemble false-positive reduction using a convolutional neural network technique

    Energy Technology Data Exchange (ETDEWEB)

    Teramoto, Atsushi, E-mail: teramoto@fujita-hu.ac.jp [Faculty of Radiological Technology, School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake, Toyoake, Aichi 470-1192 (Japan); Fujita, Hiroshi [Department of Intelligent Image Information, Division of Regeneration and Advanced Medical Sciences, Graduate School of Medicine, Gifu University, 1-1 Yanagido, Gifu 501-1194 (Japan); Yamamuro, Osamu; Tamaki, Tsuneo [East Nagoya Imaging Diagnosis Center, 3-4-26 Jiyugaoka, Chikusa-ku, Nagoya, Aichi 464-0044 (Japan)

    2016-06-15

    Purpose: Automated detection of solitary pulmonary nodules using positron emission tomography (PET) and computed tomography (CT) images shows good sensitivity; however, it is difficult to detect nodules in contact with normal organs, and additional efforts are needed so that the number of false positives (FPs) can be further reduced. In this paper, the authors propose an improved FP-reduction method for the detection of pulmonary nodules in PET/CT images by means of convolutional neural networks (CNNs). Methods: The overall scheme detects pulmonary nodules using both CT and PET images. In the CT images, a massive region is first detected using an active contour filter, which is a type of contrast enhancement filter that has a deformable kernel shape. Subsequently, high-uptake regions detected by the PET images are merged with the regions detected by the CT images. FP candidates are eliminated using an ensemble method; it consists of two feature extractions, one by shape/metabolic feature analysis and the other by a CNN, followed by a two-step classifier, one step being rule based and the other being based on support vector machines. Results: The authors evaluated the detection performance using 104 PET/CT images collected by a cancer-screening program. The sensitivity in detecting candidates at an initial stage was 97.2%, with 72.8 FPs/case. After performing the proposed FP-reduction method, the sensitivity of detection was 90.1%, with 4.9 FPs/case; the proposed method eliminated approximately half the FPs existing in the previous study. Conclusions: An improved FP-reduction scheme using CNN technique has been developed for the detection of pulmonary nodules in PET/CT images. The authors’ ensemble FP-reduction method eliminated 93% of the FPs; their proposed method using CNN technique eliminates approximately half the FPs existing in the previous study. These results indicate that their method may be useful in the computer-aided detection of pulmonary nodules

  8. Automated detection of pulmonary nodules in PET/CT images: Ensemble false-positive reduction using a convolutional neural network technique

    International Nuclear Information System (INIS)

    Teramoto, Atsushi; Fujita, Hiroshi; Yamamuro, Osamu; Tamaki, Tsuneo

    2016-01-01

    Purpose: Automated detection of solitary pulmonary nodules using positron emission tomography (PET) and computed tomography (CT) images shows good sensitivity; however, it is difficult to detect nodules in contact with normal organs, and additional efforts are needed so that the number of false positives (FPs) can be further reduced. In this paper, the authors propose an improved FP-reduction method for the detection of pulmonary nodules in PET/CT images by means of convolutional neural networks (CNNs). Methods: The overall scheme detects pulmonary nodules using both CT and PET images. In the CT images, a massive region is first detected using an active contour filter, which is a type of contrast enhancement filter that has a deformable kernel shape. Subsequently, high-uptake regions detected by the PET images are merged with the regions detected by the CT images. FP candidates are eliminated using an ensemble method; it consists of two feature extractions, one by shape/metabolic feature analysis and the other by a CNN, followed by a two-step classifier, one step being rule based and the other being based on support vector machines. Results: The authors evaluated the detection performance using 104 PET/CT images collected by a cancer-screening program. The sensitivity in detecting candidates at an initial stage was 97.2%, with 72.8 FPs/case. After performing the proposed FP-reduction method, the sensitivity of detection was 90.1%, with 4.9 FPs/case; the proposed method eliminated approximately half the FPs existing in the previous study. Conclusions: An improved FP-reduction scheme using CNN technique has been developed for the detection of pulmonary nodules in PET/CT images. The authors’ ensemble FP-reduction method eliminated 93% of the FPs; their proposed method using CNN technique eliminates approximately half the FPs existing in the previous study. These results indicate that their method may be useful in the computer-aided detection of pulmonary nodules

  9. Neural ensemble communities: Open-source approaches to hardware for large-scale electrophysiology

    Science.gov (United States)

    Siegle, Joshua H.; Hale, Gregory J.; Newman, Jonathan P.; Voigts, Jakob

    2014-01-01

    One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is “open” or “closed”: that is, whether or not the system’s schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. PMID:25528614

  10. Ensemble-based Regional Climate Prediction: Political Impacts

    Science.gov (United States)

    Miguel, E.; Dykema, J.; Satyanath, S.; Anderson, J. G.

    2008-12-01

    Accurate forecasts of regional climate, including temperature and precipitation, have significant implications for human activities, not just economically but socially. Sub Saharan Africa is a region that has displayed an exceptional propensity for devastating civil wars. Recent research in political economy has revealed a strong statistical relationship between year to year fluctuations in precipitation and civil conflict in this region in the 1980s and 1990s. To investigate how climate change may modify the regional risk of civil conflict in the future requires a probabilistic regional forecast that explicitly accounts for the community's uncertainty in the evolution of rainfall under anthropogenic forcing. We approach the regional climate prediction aspect of this question through the application of a recently demonstrated method called generalized scalar prediction (Leroy et al. 2009), which predicts arbitrary scalar quantities of the climate system. This prediction method can predict change in any variable or linear combination of variables of the climate system averaged over a wide range of spatial scales, from regional to hemispheric to global. Generalized scalar prediction utilizes an ensemble of model predictions to represent the community's uncertainty range in climate modeling, in combination with a timeseries of any type of observational data that exhibits sensitivity to the scalar of interest. It is not necessary to prioritize models in deriving the final prediction. We present the results of the application of generalized scalar prediction for regional forecasts of temperature and precipitation in Sub Saharan Africa. We utilize the climate predictions along with the established statistical relationship between year-to-year rainfall variability and civil conflict in Sub Saharan Africa to investigate the potential impact of climate change on civil conflict within that region.

  11. Wang-Landau Reaction Ensemble Method: Simulation of Weak Polyelectrolytes and General Acid-Base Reactions.

    Science.gov (United States)

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-02-14

    We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
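The reaction ensemble half of this method can be illustrated with the simplest possible case: an ideal (non-interacting) weak acid, where protonation-state moves are accepted with the reaction ensemble criterion and the simulated degree of dissociation should reproduce Henderson-Hasselbalch. This toy omits the Wang-Landau sampling entirely and assumes the ideal-system acceptance probabilities reduce to min(1, 10^(pH-pKa)) and its inverse.

```python
import random

random.seed(3)

# Ideal weak-acid titration by reaction-ensemble Monte Carlo.
pKa, pH = 4.75, 4.75
n_sites, n_steps = 500, 50000

deprotonated = [False] * n_sites
for _ in range(n_steps):
    i = random.randrange(n_sites)
    if deprotonated[i]:
        # protonation attempt
        if random.random() < min(1.0, 10 ** (pKa - pH)):
            deprotonated[i] = False
    else:
        # deprotonation attempt
        if random.random() < min(1.0, 10 ** (pH - pKa)):
            deprotonated[i] = True

alpha = sum(deprotonated) / n_sites
# Henderson-Hasselbalch: alpha = 1 / (1 + 10**(pKa - pH)), i.e. 0.5 at pH = pKa.
print(f"degree of dissociation: {alpha:.2f}")
```

The paper's contribution is to replace this direct sampling with Wang-Landau estimates of the density of states, from which titration curves and observables such as the heat capacity follow at much lower cost.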

  12. A user credit assessment model based on clustering ensemble for broadband network new media service supervision

    Science.gov (United States)

    Liu, Fang; Cao, San-xing; Lu, Rui

    2012-04-01

    This paper proposes a user credit assessment model based on clustering ensemble, aiming to solve the problem that users illegally spread pirated and pornographic media contents within user self-service oriented broadband network new media platforms. Its idea is to assess new media user credit by establishing an indices system based on user credit behaviors, so that illegal users can be found according to the credit assessment results, thus curbing the bad videos and audios transmitted on the network. The proposed model integrates the advantages of swarm intelligence clustering, which is well suited to user credit behavior analysis, with those of K-means clustering, which can eliminate the scattered users left in the swarm intelligence clustering result, thus classifying all users' credit automatically. Verification experiments are carried out based on a standard credit application dataset from the UCI machine learning repository, and the statistical results of a comparative experiment with a single model of swarm intelligence clustering indicate that this clustering ensemble model has a stronger creditworthiness distinguishing ability, especially in predicting the user clusters with the best and worst credit, which will facilitate the operators to take incentive or punitive measures accurately. Besides, compared with the experimental results of a Logistic regression based model under the same conditions, this clustering ensemble model is robust and has better prediction accuracy.

  13. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    Science.gov (United States)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
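The key efficiency idea above, replacing most expensive model runs with a Gaussian process surrogate trained at a few base points, can be sketched as follows. This is a generic GP (RBF-kernel) interpolant with an invented one-dimensional forward model, not the GPIES algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for an expensive forward model (e.g. a subsurface flow solver).
def forward_model(k):
    return np.sin(k) + 0.5 * k

# A handful of "base points" where the real model is actually run...
K_base = np.linspace(-3, 3, 9)
Y_base = forward_model(K_base)

# ...and a GP surrogate (squared-exponential kernel) used everywhere else.
def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

jitter = 1e-8   # small diagonal term for numerical stability
Kbb = rbf(K_base, K_base) + jitter * np.eye(len(K_base))
coef = np.linalg.solve(Kbb, Y_base)

def surrogate(k):
    return rbf(np.atleast_1d(k), K_base) @ coef

# Cheap predictions for a large ensemble of parameter realizations.
ensemble = rng.uniform(-3, 3, size=1000)
preds = surrogate(ensemble)
err = np.max(np.abs(preds - forward_model(ensemble)))
print(f"max surrogate error: {err:.3f}")
```

Only the 9 base-point evaluations touch the real model; the 1000 ensemble evaluations cost a matrix-vector product each, which is where the reported order-of-magnitude speed-up comes from.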

  14. An Ensemble Learning Based Framework for Traditional Chinese Medicine Data Analysis with ICD-10 Labels

    Directory of Open Access Journals (Sweden)

    Gang Zhang

    2015-01-01

    Full Text Available Objective. This study aims to establish a model to analyze clinical experience of TCM veteran doctors. We propose an ensemble learning based framework to analyze clinical records with ICD-10 label information for effective diagnosis and acupoints recommendation. Methods. We propose an ensemble learning framework for the analysis task. A set of base learners composed of decision tree (DT) and support vector machine (SVM) classifiers are trained by bootstrapping the training dataset. The base learners are sorted by accuracy and diversity through a nondominated sort (NDS) algorithm and combined through a deep ensemble learning strategy. Results. We evaluate the proposed method with comparison to two currently successful methods on a clinical diagnosis dataset with manually labeled ICD-10 information. ICD-10 label annotation and acupoints recommendation are evaluated for the three methods. The proposed method achieves an accuracy rate of 88.2% ± 2.8% measured by zero-one loss for the first evaluation session and 79.6% ± 3.6% measured by Hamming loss, which are superior to the other two methods. Conclusion. The proposed ensemble model can effectively model the implied knowledge and experience in historic clinical data records. The computational cost of training a set of base learners is relatively low.

  15. A deep learning-based multi-model ensemble method for cancer prediction.

    Science.gov (United States)

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.

    Science.gov (United States)

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When using ARPSO to select the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining the RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFLs is obtained. Moreover, theoretical analysis and justification on how to prune the base classifiers for classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms that built by single optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy as well as reduces the complexity of the ensemble system.
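The minimum-norm least-squares initialization of the ensemble weights mentioned above is easy to illustrate. In this sketch the base RVFL outputs are simulated as noisy copies of a toy regression target (everything here is synthetic; the ARPSO refinement step is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical outputs of four base RVFL networks on 50 training samples
# of a toy regression task (the target is a sine wave); each base model
# is simulated as the target plus independent noise.
x = np.linspace(0.0, np.pi, 50)
target = np.sin(x)
base_out = np.stack([target + rng.normal(0.0, 0.1, 50) for _ in range(4)],
                    axis=1)

# Minimum-norm least-squares ensemble weights via the pseudo-inverse --
# the initialization step that ARPSO would then refine.
w = np.linalg.pinv(base_out) @ target
ensemble = base_out @ w

mse_ens = np.mean((ensemble - target) ** 2)
mse_avg = np.mean((base_out.mean(axis=1) - target) ** 2)
```

Because the simple average is itself one linear combination, the least-squares weights can never do worse than it on the training data, which is why they make a sensible starting point for the swarm optimization.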

  17. Estimating Uncertainty of Point-Cloud Based Single-Tree Segmentation with Ensemble Based Filtering

    Directory of Open Access Journals (Sweden)

    Matthew Parkan

    2018-02-01

    Full Text Available Individual tree crown segmentation from Airborne Laser Scanning data is a central problem in forest remote sensing. Focusing on single-layered spruce and fir dominated coniferous forests, this article addresses the problem of directly estimating 3D segment shape uncertainty (i.e., without field/reference surveys) using a probabilistic approach. First, a coarse segmentation (marker-controlled watershed) is applied. Then, the 3D alpha hull and several descriptors are computed for each segment. Based on these descriptors, the alpha hulls are grouped to form ensembles (i.e., groups of similar tree shapes). By examining how frequently regions of a shape occur within an ensemble, it is possible to assign a shape probability to each point within a segment. The shape probability can subsequently be thresholded to obtain improved (filtered) tree segments. Results indicate this approach can be used to produce segmentation reliability maps. A comparison to manually segmented tree crowns also indicates that the approach is able to produce more reliable tree shapes than the initial (unfiltered) segmentation.

  18. Reducing false-positive incidental findings with ensemble genotyping and logistic regression based variant filtering methods.

    Science.gov (United States)

    Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won

    2014-08-01

    As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR or ensemble genotyping based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to the filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false-positive DNM candidates. © 2014 WILEY PERIODICALS, INC.
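The ensemble-genotyping intuition behind this record — that concordance between platforms suppresses false positives far more than it costs in sensitivity — can be demonstrated with a toy simulation. The error rates and call model below are invented for illustration and do not come from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated genotype calls (0/1/2 alternate-allele counts) at 10,000 sites
# from two hypothetical sequencing platforms; the truth is known here only
# because this is a simulation.
truth = rng.integers(0, 3, 10_000)

def noisy_calls(err):
    calls = truth.copy()
    flip = rng.random(truth.size) < err          # sites with a calling error
    calls[flip] = (calls[flip] + rng.integers(1, 3, flip.sum())) % 3
    return calls

platform_a = noisy_calls(0.02)
platform_b = noisy_calls(0.02)

# Ensemble genotyping: keep only sites where both platforms agree.
concordant = platform_a == platform_b
fp_single = np.mean(platform_a != truth)
fp_ensemble = np.mean(platform_a[concordant] != truth[concordant])
```

With roughly independent errors, a concordant wrong call requires both platforms to fail in the same way, so the residual error rate drops by orders of magnitude while most sites are retained.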

  19. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray level circles, in an attempt to make the neural network learn inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference from existing approaches is that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimum solution to the problem. The multiscale operation is simulated by considering different window sizes around a pixel. In the generalization phase the neural network is exposed to indoor, outdoor, and satellite degraded images following the same steps used for the artificial circle images.
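The multiscale-window construction described above can be sketched on a 1-D toy problem. This is not the paper's method: a linear least-squares predictor stands in for the multilayer perceptron, and the degradation model (blur kernel, noise level) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# A 1-D "image" (square wave) degraded by low-pass blurring plus noise;
# a linear least-squares predictor stands in for the multilayer perceptron
# to keep the sketch short.
clean = (np.sin(np.linspace(0.0, 8.0 * np.pi, 400)) > 0).astype(float)
kernel = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(clean, kernel, mode="same") + rng.normal(0.0, 0.05, 400)

def windows(sig, half):
    # The multiscale operation: a window of radius `half` around each pixel.
    return np.stack([sig[i - half:i + half + 1]
                     for i in range(half, sig.size - half)])

# Concatenate two scales (radius 1 and radius 3) as the predictor's input.
X = np.hstack([windows(blurred, 1)[2:-2], windows(blurred, 3)])
y = clean[3:-3]

# "Training": least-squares fit from degraded windows to the clean pixel,
# i.e. an inverse operation in the least-squares sense.
Xb = np.hstack([X, np.ones((len(X), 1))])
w = np.linalg.lstsq(Xb, y, rcond=None)[0]
restored = Xb @ w

mse_restored = np.mean((restored - y) ** 2)
mse_degraded = np.mean((blurred[3:-3] - y) ** 2)
```

Concatenating the two window scales gives the predictor both fine and coarse context around each pixel, which is the essence of the multiscale input described in the abstract.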

  20. Three-dimensional theory of quantum memories based on Λ-type atomic ensembles

    International Nuclear Information System (INIS)

    Zeuthen, Emil; Grodecka-Grad, Anna; Soerensen, Anders S.

    2011-01-01

    We develop a three-dimensional theory for quantum memories based on light storage in ensembles of Λ-type atoms, where two long-lived atomic ground states are employed. We consider light storage in an ensemble of finite spatial extent and we show that within the paraxial approximation the Fresnel number of the atomic ensemble and the optical depth are the only important physical parameters determining the quality of the quantum memory. We analyze the influence of these parameters on the storage of light followed by either forward or backward read-out from the quantum memory. We show that for small Fresnel numbers the forward memory provides higher efficiencies, whereas for large Fresnel numbers the backward memory is advantageous. The optimal light modes to store in the memory are presented together with the corresponding spin waves and outcoming light modes. We show that for high optical depths such Λ-type atomic ensembles allow for highly efficient backward and forward memories even for small Fresnel numbers F ≳ 0.1.

  1. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    International Nuclear Information System (INIS)

    Blanco, A; Rodriguez, R; Martinez-Maranon, I

    2014-01-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and thawed) to find differences between the sexes. Several linear and non-linear classifiers such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA) can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.

  2. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    Science.gov (United States)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and thawed) to find differences between the sexes. Several linear and non-linear classifiers such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA) can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.
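The kind of dissimilarity ensemble these two records describe can be illustrated with a toy nearest-neighbour experiment. Nothing below comes from the actual gonad colour data; the two Gaussian classes, the three dissimilarities, and the 1-NN base classifier are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical colour measurements (3 features) for two classes (the
# numbers are invented; nothing here comes from the actual gonad data).
n = 60
X = np.vstack([rng.normal(0.0, 1.0, (n, 3)), rng.normal(1.5, 1.0, (n, 3))])
y = np.array([0] * n + [1] * n)
Xte = np.vstack([rng.normal(0.0, 1.0, (20, 3)), rng.normal(1.5, 1.0, (20, 3))])
yte = np.array([0] * 20 + [1] * 20)

# Three complementary dissimilarities induce diversity in the ensemble.
def d_euclid(a, B):
    return np.sqrt(((B - a) ** 2).sum(axis=1))

def d_manhattan(a, B):
    return np.abs(B - a).sum(axis=1)

def d_cosine(a, B):
    return 1.0 - (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-12)

def nn_predict(dis):
    # 1-NN classifier under the given dissimilarity.
    return np.array([y[np.argmin(dis(a, X))] for a in Xte])

votes = np.stack([nn_predict(d) for d in (d_euclid, d_manhattan, d_cosine)])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote
acc = np.mean(ensemble_pred == yte)
```

Because each dissimilarity misclassifies a different set of patterns, the majority vote can correct errors that any single measure makes alone, which is exactly the diversity argument in the abstract.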

  3. Making decisions based on an imperfect ensemble of climate simulators: strategies and future directions

    Science.gov (United States)

    Sanderson, B. M.

    2017-12-01

    The CMIP ensembles represent the most comprehensive source of information available to decision-makers for climate adaptation, yet it is clear that there are fundamental limitations in our ability to treat the ensemble as an unbiased sample of possible future climate trajectories. There is considerable evidence that models are not independent, and increasing complexity and resolution combined with computational constraints prevent a thorough exploration of parametric uncertainty or internal variability. Although more data than ever is available for calibration, the optimization of each model is influenced by institutional priorities, historical precedent and available resources. The resulting ensemble thus represents a miscellany of climate simulators which defy traditional statistical interpretation. Models are in some cases interdependent, but are sufficiently complex that the degree of interdependency is conditional on the application. Configurations have been updated using available observations to some degree, but not in a consistent or easily identifiable fashion. This means that the ensemble cannot be viewed as a true posterior distribution updated by available data, but nor can observational data alone be used to assess individual model likelihood. We assess recent literature for combining projections from an imperfect ensemble of climate simulators. Beginning with our published methodology for addressing model interdependency and skill in the weighting scheme for the 4th US National Climate Assessment, we consider strategies for incorporating process-based constraints on future response, perturbed parameter experiments and multi-model output into an integrated framework. We focus on a number of guiding questions: Is the traditional framework of confidence in projections inferred from model agreement leading to biased or misleading conclusions? 
Can the benefits of upweighting skillful models be reconciled with the increased risk of truth lying outside the

  4. Ensemble method: Community detection based on game theory

    Science.gov (United States)

    Zhang, Xia; Xia, Zhengyou; Xu, Shengwu; Wang, J. D.

    2014-08-01

    Timely and cost-effective analytics over social networks has emerged as a key ingredient for success in many businesses and government endeavors. Community detection is an active research area relevant to the analysis of online social networks. The problem of selecting a particular community detection algorithm is crucial if the aim is to unveil the community structure of a network. The choice of a given methodology could affect the outcome of the experiments because different algorithms have different advantages and depend on tuning specific parameters. In this paper, we propose a community division model based on the notion of game theory, which can effectively combine the advantages of previous algorithms to get a better community classification result. Experiments on standard datasets verify that our community detection model based on game theory is valid and performs better.

  5. AUC-based biomarker ensemble with an application on gene scores predicting low bone mineral density.

    Science.gov (United States)

    Zhao, X G; Dai, W; Li, Y; Tian, L

    2011-11-01

    The area under the receiver operating characteristic (ROC) curve (AUC), long regarded as a 'golden' measure for the predictiveness of a continuous score, has propelled the need to develop AUC-based predictors. However, AUC-based ensemble methods are rather scant, largely due to the fact that the associated objective function is neither continuous nor concave. Indeed, there is no reliable numerical algorithm for identifying the optimal combination of a set of biomarkers to maximize the AUC, especially when the number of biomarkers is large. We have proposed a novel AUC-based statistical ensemble method for combining multiple biomarkers to differentiate a binary response of interest. Specifically, we propose to replace the non-continuous and non-convex AUC objective function by a convex surrogate loss function, whose minimizer can be efficiently identified. Within the established framework, the lasso and other regularization techniques enable feature selection. Extensive simulations have demonstrated the superiority of the new methods to the existing methods. The proposal has been applied to a gene expression dataset to construct gene expression scores to differentiate elderly women with low bone mineral density (BMD) from those with normal BMD. The AUCs of the resulting scores in the independent test dataset have been satisfactory. Aiming to directly maximize the AUC, the proposed AUC-based ensemble method provides an efficient means of generating a stable combination of multiple biomarkers, which is especially useful under high-dimensional settings. lutian@stanford.edu. Supplementary data are available at Bioinformatics online.
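The convex-surrogate idea in this record can be sketched as follows. The empirical AUC counts positive-negative pairs ranked correctly, so a standard trick (used here as an illustration, not as the paper's exact formulation) is to minimize a logistic loss over all pairwise score differences; the biomarker data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)

# A hypothetical panel of 5 biomarkers and a binary outcome (e.g. low BMD);
# only the first, second, and fifth markers carry signal.
n = 150
y = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, 5)) + y[:, None] * np.array([0.8, 0.5, 0.0, 0.0, 0.3])

pos, neg = X[y == 1], X[y == 0]
# The AUC counts how often score(positive) > score(negative), so we work
# with all positive-negative feature differences.
diff = (pos[:, None, :] - neg[None, :, :]).reshape(-1, 5)

# Convex logistic surrogate for the non-continuous, non-concave AUC
# objective, minimized by plain gradient descent.
w = np.zeros(5)
for _ in range(300):
    z = np.clip(diff @ w, -30.0, 30.0)
    grad = -(diff * (1.0 / (1.0 + np.exp(z)))[:, None]).mean(axis=0)
    w -= 0.5 * grad

# Empirical AUC of the learned linear combination.
auc = np.mean((pos @ w)[:, None] > (neg @ w)[None, :])
```

Replacing the 0-1 pairwise comparison with a smooth convex loss is what makes the optimization tractable; an l1 penalty on `w` would add the lasso-style feature selection the abstract mentions.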

  6. Identifying Different Transportation Modes from Trajectory Data Using Tree-Based Ensemble Classifiers

    Directory of Open Access Journals (Sweden)

    Zhibin Xiao

    2017-02-01

    Full Text Available Recognition of transportation modes can be used in different applications including human behavior research, transport management and traffic control. Previous work on transportation mode recognition has often relied on using multiple sensors or matching Geographic Information System (GIS) information, which is not possible in many cases. In this paper, an approach based on ensemble learning is proposed to infer hybrid transportation modes using only Global Positioning System (GPS) data. First, in order to distinguish between different transportation modes, we used a statistical method to generate global features and extract several local features from sub-trajectories after trajectory segmentation, before these features were combined in the classification stage. Second, to obtain better performance, we used tree-based ensemble models (Random Forest, Gradient Boosting Decision Tree, and XGBoost) instead of traditional methods (K-Nearest Neighbor, Decision Tree, and Support Vector Machines) to classify the different transportation modes. The experimental results have shown the efficacy of our proposed approach: among the ensemble models, XGBoost produced the best performance with a classification accuracy of 90.77% on the GEOLIFE dataset, and we used a tree-based ensemble method to ensure accurate feature selection and to reduce model complexity.
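The pipeline above — global speed features per segment, then a tree-based ensemble — can be imitated with a self-contained miniature. Bootstrap-trained decision stumps stand in for Random Forest/XGBoost, and the walking-vs-driving speed statistics are invented, not taken from GEOLIFE:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical global features per trajectory segment: mean speed, max
# speed, and speed variance (m/s units), for walking vs. driving segments.
n = 100
walk = np.column_stack([rng.normal(1.4, 0.3, n), rng.normal(2.5, 0.5, n),
                        rng.normal(0.2, 0.05, n)])
drive = np.column_stack([rng.normal(12.0, 3.0, n), rng.normal(25.0, 5.0, n),
                         rng.normal(15.0, 4.0, n)])
X = np.vstack([walk, drive])
y = np.array([0] * n + [1] * n)

# A miniature tree-based ensemble: bootstrap-trained decision stumps.
stumps = []
for _ in range(25):
    idx = rng.integers(0, len(X), len(X))   # bootstrap sample
    f = rng.integers(0, X.shape[1])         # random feature, as in a forest
    thr = np.median(X[idx, f])
    left_labels = y[idx][X[idx, f] <= thr]
    pred_left = int(np.round(left_labels.mean()))  # majority class on the left
    stumps.append((f, thr, pred_left))

def predict(A):
    votes = np.stack([np.where(A[:, f] <= thr, p, 1 - p)
                      for f, thr, p in stumps])
    return (votes.mean(axis=0) > 0.5).astype(int)

acc = np.mean(predict(X) == y)
```

The two ingredients shown — bootstrap resampling and random feature choice — are the same diversity mechanisms that give the full tree ensembles in the paper their robustness.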

  7. Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction

    Science.gov (United States)

    Mons, Vincent; Wang, Qi; Zaki, Tamer

    2017-11-01

    Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging due to several aspects. Firstly, the numerical estimation of the scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in actual practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180 . This approach combines the components of variational data assimilation and ensemble Kalman filtering, and inherits the robustness from the former and the ease of implementation from the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the condition of the inverse problem, which enhances the performances of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
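The ensemble ingredient of the hybrid scheme above can be shown in its simplest form: an ensemble Kalman-style update of a scalar source strength from one noisy sensor. All numbers are illustrative, and the full variational machinery of the record is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy version of the ensemble component: an ensemble Kalman-style update of
# a scalar source strength from one noisy sensor (all numbers illustrative).
n_ens = 200
prior = rng.normal(5.0, 2.0, n_ens)   # prior ensemble of source strength
H = 0.8                               # linear observation operator
obs_err = 0.5                         # observation error std
y_obs = H * 3.0 + 0.1                 # "measured" concentration (truth = 3)

# Kalman gain built from the ensemble statistics (flow-dependent covariance).
Pf = np.var(prior, ddof=1)
K = Pf * H / (H * Pf * H + obs_err ** 2)

# Perturbed-observation update of each ensemble member.
perturbed = y_obs + rng.normal(0.0, obs_err, n_ens)
analysis = prior + K * (perturbed - H * prior)
```

The analysis ensemble moves toward the observation and contracts, and it is this ensemble-derived gain (here a scalar, in practice a covariance) that an ensemble-variational scheme inherits.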

  8. Semi-Supervised Multi-View Ensemble Learning Based On Extracting Cross-View Correlation

    Directory of Open Access Journals (Sweden)

    ZALL, R.

    2016-05-01

    Full Text Available Correlated information between different views is useful for learning in multi-view data. Canonical correlation analysis (CCA) plays an important role in extracting this information. However, CCA only extracts the correlated information between paired data and cannot preserve correlated information between within-class samples. In this paper, we propose a two-view semi-supervised learning method called semi-supervised random correlation ensemble based on spectral clustering (SS_RCE). SS_RCE uses a multi-view method based on spectral clustering which takes advantage of discriminative information in multiple views to estimate labeling information of unlabeled samples. In order to enhance the discriminative power of CCA features, we incorporate the labeling information of both unlabeled and labeled samples into CCA. Then, we use random correlation between within-class samples across views to extract diverse correlated features for training component classifiers. Furthermore, we extend a general model, namely SSMV_RCE, to construct an ensemble method to tackle semi-supervised learning in the presence of multiple views. Finally, we compare the proposed methods with existing multi-view feature extraction methods using multi-view semi-supervised ensembles. Experimental results on various multi-view data sets are presented to demonstrate the effectiveness of the proposed methods.

  9. Extracting the Neural Representation of Tone Onsets for Separate Voices of Ensemble Music Using Multivariate EEG Analysis

    DEFF Research Database (Denmark)

    Sturm, Irene; Treder, Matthias S.; Miklody, Daniel

    2015-01-01

    responses to tone onsets, such as N1/P2 ERP components. Music clips (resembling minimalistic electro-pop) were presented to 11 subjects, either in an ensemble version (drums, bass, keyboard) or in the corresponding three solo versions. For each instrument we train a spatio-temporal regression filter...... at the level of early auditory ERPs parallels the perceptual segregation of multi-voiced music....

  10. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    Science.gov (United States)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz, maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of
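The ensemble-surrogate idea in this record can be reduced to its essentials. In the sketch below, quadratic least-squares fits stand in for the genetic-programming surrogates, and a one-variable "simulator" replaces FEMWATER; the bootstrap resampling and the ensemble spread are the parts carried over from the abstract:

```python
import numpy as np

rng = np.random.default_rng(11)

# The ensemble-surrogate idea in miniature: bootstrap resamples of
# simulator input-output data train several cheap surrogates (quadratic
# least-squares fits standing in for genetic programming models).
pumping = rng.uniform(0.0, 1.0, 80)                    # decision variable
response = 1.5 * pumping - pumping ** 2 + rng.normal(0.0, 0.05, 80)  # "simulator"

def fit_surrogate(x, y):
    A = np.column_stack([np.ones_like(x), x, x ** 2])
    return np.linalg.lstsq(A, y, rcond=None)[0]

coeffs = []
for _ in range(20):
    idx = rng.integers(0, 80, 80)                      # bootstrap sample
    coeffs.append(fit_surrogate(pumping[idx], response[idx]))

# Ensemble prediction and spread on a query grid; the spread can feed a
# multiple-realization optimization as a surrogate-uncertainty measure.
xq = np.linspace(0.0, 1.0, 11)
Aq = np.column_stack([np.ones_like(xq), xq, xq ** 2])
preds = np.stack([Aq @ c for c in coeffs])
mean_pred = preds.mean(axis=0)
spread = preds.std(axis=0)
```

The spread across bootstrap surrogates is what allows the downstream multi-objective optimizer to account for surrogate uncertainty instead of trusting a single approximation.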

  11. Ensemble Deep Learning for Biomedical Time Series Classification

    Directory of Open Access Journals (Sweden)

    Lin-peng Jin

    2016-01-01

    Full Text Available Ensemble learning has been proven, in both theory and practice, to effectively improve generalization ability. In this paper, we first briefly outline the current status of research on it. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.
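Two of the ingredients named above, subview prediction and Simple Average, can be shown in isolation. The per-window "classifier" below is a toy spectral score, not the paper's deep network, and the signal is synthetic:

```python
import numpy as np

rng = np.random.default_rng(8)

# Subview prediction plus Simple Average, two ingredients of the proposed
# ensemble, in miniature: a 10 s "recording" is split into overlapping
# windows, a toy per-window score is computed, and the scores are averaged.
fs = 100
t = np.arange(0.0, 10.0, 1.0 / fs)
signal = np.sin(2.0 * np.pi * 1.2 * t) + rng.normal(0.0, 0.8, t.size)

def window_score(seg):
    # Toy per-window "classifier": fraction of spectral power near 1.2 Hz.
    spec = np.abs(np.fft.rfft(seg)) ** 2
    freqs = np.fft.rfftfreq(seg.size, 1.0 / fs)
    band = (freqs >= 1.0) & (freqs <= 1.4)
    return spec[band].sum() / spec.sum()

wins = [signal[i:i + 300] for i in range(0, signal.size - 299, 100)]
scores = np.array([window_score(w) for w in wins])
fused = scores.mean()   # Simple Average over the subviews
```

Averaging over overlapping subviews smooths out window-local noise, which is the same stabilizing effect the paper exploits when fusing its view-specific networks.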

  12. The role of ensemble-based statistics in variational assimilation of cloud-affected observations from infrared imagers

    Science.gov (United States)

    Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris

    2017-04-01

    Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and operational variational data assimilation system, to provide a basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions, and allow more observations to be assimilated without the need for strict background checks that eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. 
In this presentation we provide details that explain the
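The Huber norm mentioned in this record has a simple closed form: quadratic for small residuals, linear beyond a transition point, so large innovations are down-weighted rather than rejected. A minimal sketch (the transition point 1.345 is a conventional choice, not one taken from the record):

```python
import numpy as np

# The Huber norm: quadratic for small residuals, linear beyond a transition
# point delta, which tempers the influence of fat-tailed innovations.
def huber(r, delta=1.345):
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

innovations = np.array([0.1, 0.5, 1.0, 3.0, 10.0])
quadratic = 0.5 * innovations ** 2
robust = huber(innovations)
```

For the small innovations the two penalties coincide, while a 10-sigma innovation costs far less under the Huber norm than under the quadratic, which is why it lets more observations survive quality control.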

  13. Prediction based chaos control via a new neural network

    International Nuclear Information System (INIS)

    Shen Liqun; Wang Mao; Liu Wanyu; Sun Guanghui

    2008-01-01

    In this Letter, a new chaos control scheme based on chaos prediction is proposed. To perform chaos prediction, a new neural network architecture for complex nonlinear approximation is proposed, which also reduces the difficulty of building and training the neural network. Simulation results for the Logistic map and the Lorenz system show the effectiveness of the proposed chaos control scheme and the proposed neural network.
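The prediction step on the Logistic map is easy to reproduce in miniature. The sketch below substitutes a least-squares fit with quadratic features for the Letter's neural network — this particular map is exactly quadratic, so the stand-in can learn it perfectly, which would not hold for a general system:

```python
import numpy as np

# The Logistic map x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime.
r = 3.9
x = np.empty(500)
x[0] = 0.2
for i in range(499):
    x[i + 1] = r * x[i] * (1.0 - x[i])

# Stand-in for the Letter's neural predictor: a least-squares fit with
# quadratic features, which can represent this particular map exactly.
X = np.column_stack([x[:-1], x[:-1] ** 2, np.ones(499)])
w = np.linalg.lstsq(X, x[1:], rcond=None)[0]
pred = X @ w
err = np.max(np.abs(pred - x[1:]))
```

Once the next state can be predicted, a control law can act pre-emptively on the predicted deviation from a target orbit, which is the core of the prediction-based control scheme.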

  14. Automated Grading of Gliomas using Deep Learning in Digital Pathology Images: A modular approach with ensemble of convolutional neural networks.

    Science.gov (United States)

    Ertosun, Mehmet Günhan; Rubin, Daniel L

    2015-01-01

    Brain glioma is the most common primary malignant brain tumor in adults, with different pathologic subtypes: Lower Grade Glioma (LGG) Grade II, Lower Grade Glioma (LGG) Grade III, and Glioblastoma Multiforme (GBM) Grade IV. Survival and treatment options are highly dependent on the glioma grade. We propose a deep learning-based, modular classification pipeline for automated grading of gliomas using digital pathology images. Whole tissue digitized images of pathology slides obtained from The Cancer Genome Atlas (TCGA) were used to train our deep learning modules. Our modular pipeline provides diagnostic quality statistics, such as precision, sensitivity and specificity, of the individual deep learning modules, and (1) facilitates training given the limited data in this domain, (2) enables exploration of different deep learning structures for each module, (3) leads to developing less complex modules that are simpler to analyze, and (4) provides flexibility, permitting use of single modules within the framework or use of other modeling or machine learning applications, such as probabilistic graphical models or support vector machines. Our modular approach helps us meet the requirements of minimum accuracy levels that are demanded by the context of different decision points within a multi-class classification scheme. Convolutional Neural Networks were trained for each module and sub-task with more than 90% classification accuracy on the validation data set, and achieved a classification accuracy of 96% for the task of GBM vs LGG classification and 71% for further identifying the grade of LGG as Grade II or Grade III on an independent data set from new patients from the multi-institutional repository.

  15. The Drag-based Ensemble Model (DBEM) for Coronal Mass Ejection Propagation

    Science.gov (United States)

    Dumbović, Mateja; Čalogović, Jaša; Vršnak, Bojan; Temmer, Manuela; Mays, M. Leila; Veronig, Astrid; Piantschitsch, Isabell

    2018-02-01

    The drag-based model for heliospheric propagation of coronal mass ejections (CMEs) is a widely used analytical model that can predict CME arrival time and speed at a given heliospheric location. It is based on the assumption that the propagation of CMEs in interplanetary space is solely under the influence of magnetohydrodynamic drag, so that CME propagation is determined by the CME's initial properties as well as the properties of the ambient solar wind. We present an upgraded version, the drag-based ensemble model (DBEM), that adds ensemble modeling to produce a distribution of possible ICME arrival times and speeds. Multiple runs using uncertainty ranges for the input values can be performed in almost real-time, within a few minutes. This allows us to define the most likely ICME arrival times and speeds, quantify prediction uncertainties, and determine forecast confidence. The performance of the DBEM is evaluated and compared to that of the ensemble WSA-ENLIL+Cone model (ENLIL) using the same sample of events. It is found that the mean error is ME = ‑9.7 hr, mean absolute error MAE = 14.3 hr, and root mean square error RMSE = 16.7 hr, which is somewhat higher than, but comparable to ENLIL errors (ME = ‑6.1 hr, MAE = 12.8 hr and RMSE = 14.4 hr). Overall, DBEM and ENLIL show a similar performance. Furthermore, we find that in both models fast CMEs are predicted to arrive earlier than observed, most likely owing to the physical limitations of models, but possibly also related to an overestimation of the CME initial speed for fast CMEs.
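The drag-based equation of motion underlying the DBEM, dv/dt = -γ(v - w)|v - w|, can be integrated numerically for an ensemble of uncertain initial speeds to produce an arrival-time distribution. Every number below (drag parameter, wind speed, starting radius, speed uncertainty) is an illustrative order-of-magnitude choice, not a DBEM default:

```python
import numpy as np

rng = np.random.default_rng(9)

# Drag-based equation of motion, dv/dt = -gamma * (v - w) * |v - w|,
# integrated for an ensemble of initial CME speeds to build an arrival-time
# distribution at 1 AU. All numbers are illustrative, not DBEM defaults.
AU = 1.496e11          # m
w = 400e3              # ambient solar wind speed, m/s
gamma = 2e-11          # drag parameter, 1/m (~0.2e-7 km^-1)
dt = 60.0              # integration step, s

def arrival_time(v0, r0=20 * 6.957e8):   # start at 20 solar radii
    r, v, t = r0, v0, 0.0
    while r < AU:
        v -= gamma * (v - w) * abs(v - w) * dt
        r += v * dt
        t += dt
    return t / 3600.0   # hours

# Ensemble modeling: sample the uncertain initial speed (1000 +/- 100 km/s).
v0_samples = rng.normal(1000e3, 100e3, 50)
times = np.array([arrival_time(v0) for v0 in v0_samples])
t_mean, t_spread = times.mean(), times.std()
```

The spread of the resulting arrival times is the quantity the DBEM reports as forecast confidence; because each run is a cheap one-dimensional integration, many ensemble members can be evaluated in near real-time.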

  16. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    KAUST Repository

    Fsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2013-01-01

    the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives

  17. Skill prediction of local weather forecasts based on the ECMWF ensemble

    Directory of Open Access Journals (Sweden)

    C. Ziehmann

    2001-01-01

    Full Text Available Ensemble prediction has become an essential part of numerical weather forecasting. In this paper we investigate the ability of ensemble forecasts to provide an a priori estimate of the expected forecast skill. Several quantities derived from the local ensemble distribution are investigated for a two-year data set of European Centre for Medium-Range Weather Forecasts (ECMWF) temperature and wind speed ensemble forecasts at 30 German stations. The results indicate that the population of the ensemble mode provides useful information about the uncertainty in temperature forecasts. The ensemble entropy is a similarly good measure. This is not true for the spread if it is simply calculated as the variance of the ensemble members with respect to the ensemble mean. The number of clusters in the C regions is almost unrelated to the local skill. For wind forecasts, the results are less promising.
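The two skill predictors this record favors, mode population and ensemble entropy, are simple functions of the binned ensemble distribution. A sketch with two invented 50-member temperature ensembles (a sharply peaked one and a flat one; the 1-degree classes are an assumption, not the paper's binning):

```python
import numpy as np

rng = np.random.default_rng(10)

# Two hypothetical 50-member temperature ensembles: a sharply peaked one
# and a flat one; compare mode population and entropy as skill predictors.
confident = rng.normal(10.0, 0.5, 50)
uncertain = rng.uniform(5.0, 15.0, 50)

bins = np.arange(4.0, 16.1, 1.0)   # 1-degree temperature classes

def mode_population(members):
    # Fraction of members falling into the most populated class.
    counts, _ = np.histogram(members, bins)
    return counts.max() / members.size

def entropy(members):
    # Shannon entropy of the binned ensemble distribution.
    p, _ = np.histogram(members, bins)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

mp_conf, mp_unc = mode_population(confident), mode_population(uncertain)
h_conf, h_unc = entropy(confident), entropy(uncertain)
```

A peaked ensemble yields a high mode population and low entropy, signaling a confident (and, per the paper, typically more skillful) temperature forecast; a flat ensemble gives the opposite signature.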

  18. Distributed HUC-based modeling with SUMMA for ensemble streamflow forecasting over large regional domains.

    Science.gov (United States)

    Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.

    2017-12-01

    Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards 'over-the-loop' (completely automated) large-domain simulations, yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but often comes at the substantial cost of forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short- to medium-range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ real-time 1/16th degree daily ensemble model forcings as well as downscaled Global Ensemble Forecasting System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs.
The system produces not only streamflow forecasts (using the Mizu

  19. Ensemble-based flash-flood modelling: Taking into account hydrodynamic parameters and initial soil moisture uncertainties

    Science.gov (United States)

    Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique

    2018-05-01

    Intense precipitation events in the Mediterranean often lead to devastating flash floods (FF). FF modelling is affected by several kinds of uncertainties, and Hydrological Ensemble Prediction Systems (HEPS) are designed to take those uncertainties into account. The major source of uncertainty comes from rainfall forcing, and convective-scale meteorological ensemble prediction systems can manage it for forecasting purposes. But other sources are related to the hydrological modelling part of the HEPS. This study focuses on the uncertainties arising from the hydrological model parameters and initial soil moisture, with the aim of designing an ensemble-based version of a hydrological model dedicated to simulations of fast-responding Mediterranean rivers, the ISBA-TOP coupled system. The first step consists in identifying the parameters that have the strongest influence on FF simulations by assuming perfect precipitation. A sensitivity study is carried out, first using a synthetic framework and then for several real events and several catchments. Perturbation methods varying the most sensitive parameters as well as initial soil moisture allow designing an ensemble-based version of ISBA-TOP. The first results of this system on some real events are presented. The direct perspective of this work will be to drive this ensemble-based version with the members of a convective-scale meteorological ensemble prediction system to design a complete HEPS for FF forecasting.

  20. A new strategy for snow-cover mapping using remote sensing data and ensemble based systems techniques

    Science.gov (United States)

    Roberge, S.; Chokmani, K.; De Sève, D.

    2012-04-01

    The snow cover plays an important role in the hydrological cycle of Quebec (Eastern Canada). Consequently, evaluating its spatial extent interests the authorities responsible for the management of water resources, especially hydropower companies. The main objective of this study is the development of a snow-cover mapping strategy using remote sensing data and ensemble-based systems techniques. Planned to be tested in a near real-time operational mode, this snow-cover mapping strategy has the advantage of providing the probability that a pixel is snow-covered, along with its uncertainty. Ensemble systems are made of two key components. First, a method is needed to build an ensemble of classifiers that is as diverse as possible. Second, an approach is required to combine the outputs of individual classifiers that make up the ensemble in such a way that correct decisions are amplified and incorrect ones are cancelled out. In this study, we demonstrate the potential of ensemble systems for snow-cover mapping using remote sensing data. The chosen classifier is a sequential-thresholds algorithm using NOAA-AVHRR data adapted to conditions over Eastern Canada. Its special feature is the use of a combination of six sequential thresholds varying according to the day in the winter season. Two versions of the snow-cover mapping algorithm have been developed: one is specific to autumn (from October 1st to December 31st) and the other to spring (from March 16th to May 31st). In order to build the ensemble-based system, different versions of the algorithm are created by randomly varying its parameters. One hundred versions are included in the ensemble. The probability that a pixel is snow, no-snow or cloud covered corresponds to the fraction of classifiers that voted for that class. The overall performance of ensemble-based mapping is compared to the overall performance of the chosen classifier, and also with ground observations at meteorological
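    The vote-fraction probability described above can be sketched with a toy ensemble of randomly perturbed threshold classifiers. The single 'reflectance' feature, the base thresholds, and the jitter range are illustrative stand-ins for the six NOAA-AVHRR sequential thresholds, not the study's actual algorithm:

```python
import random

def make_classifier(rng, base_thresholds=(0.4, 0.7), jitter=0.05):
    """One ensemble member: a threshold classifier whose cut-points are
    randomly perturbed around the base values (all values hypothetical)."""
    t_snow, t_cloud = (t + rng.uniform(-jitter, jitter) for t in base_thresholds)
    def classify(reflectance):
        if reflectance >= t_cloud:
            return "cloud"
        if reflectance >= t_snow:
            return "snow"
        return "no-snow"
    return classify

rng = random.Random(0)
ensemble = [make_classifier(rng) for _ in range(100)]  # one hundred versions

def pixel_probabilities(reflectance):
    """Class probability = fraction of the 100 members voting for that class."""
    votes = [clf(reflectance) for clf in ensemble]
    return {c: votes.count(c) / len(votes) for c in ("snow", "no-snow", "cloud")}

print(pixel_probabilities(0.42))  # near the snow threshold: split vote
print(pixel_probabilities(0.55))  # comfortably above it: unanimous 'snow'
```

    Pixels near a decision boundary receive intermediate probabilities, which is exactly the per-pixel uncertainty the strategy is designed to expose.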

  1. Ensemble empirical mode decomposition based fluorescence spectral noise reduction for low concentration PAHs

    Science.gov (United States)

    Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian

    2017-11-01

    A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection of fluorescence spectra. Polycyclic aromatic hydrocarbon (PAH) pollutants, an important class of current environmental pollution sources, are highly oncogenic. PAH pollutants can be detected by the fluorescence spectroscopy method; however, the instrument produces noise during the experiment, and weak fluorescence signals can be corrupted by it, so we propose a denoising method to improve detection. First, we use a fluorescence spectrometer to detect PAHs and obtain fluorescence spectra. Subsequently, noise is reduced by the EEMD algorithm. Finally, the experimental results show that the proposed method is feasible.
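    The noise-assisted ensemble averaging at the heart of EEMD can be illustrated without a full sifting implementation. In the sketch below a moving average stands in for the slow modes that EMD sifting would recover, so this is a conceptual toy on a synthetic spectrum, not the paper's algorithm:

```python
import math
import random

def moving_average(x, k=9):
    """Crude low-pass filter, used here as a stand-in for the slow IMFs
    a full EMD sifting procedure would extract."""
    half = k // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def eemd_style_denoise(signal, n_trials=50, noise_std=0.1, seed=1):
    """Noise-assisted averaging in the spirit of EEMD: add independent white
    noise to each trial, extract the slow component, and average over trials
    so the added noise cancels out."""
    rng = random.Random(seed)
    acc = [0.0] * len(signal)
    for _ in range(n_trials):
        noisy = [s + rng.gauss(0.0, noise_std) for s in signal]
        acc = [a + s for a, s in zip(acc, moving_average(noisy))]
    return [a / n_trials for a in acc]

# A synthetic 'fluorescence spectrum': a smooth peak plus measurement noise.
rng = random.Random(7)
clean = [math.exp(-((i - 50) / 12.0) ** 2) for i in range(101)]
measured = [c + rng.gauss(0.0, 0.08) for c in clean]
denoised = eemd_style_denoise(measured)
rmse_before = math.sqrt(sum((m - c) ** 2 for m, c in zip(measured, clean)) / 101)
rmse_after = math.sqrt(sum((d - c) ** 2 for d, c in zip(denoised, clean)) / 101)
print(f"RMSE before: {rmse_before:.3f}, after: {rmse_after:.3f}")
```

    Averaging over many noise-perturbed decompositions is what distinguishes EEMD from plain EMD: the deliberately added white noise populates all scales and then averages out.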

  2. Ensemble-based evaluation of extreme water levels for the eastern Baltic Sea

    Science.gov (United States)

    Eelsalu, Maris; Soomere, Tarmo

    2016-04-01

    The risks and damages associated with coastal flooding, which naturally grow with the magnitude of extreme storm surges, are one of the largest concerns of countries with extensive low-lying nearshore areas. The relevant risks are even more pronounced for semi-enclosed water bodies such as the Baltic Sea, where subtidal (weekly-scale) variations in the water volume of the sea substantially contribute to the water level and lead to a large spread in projections of future extreme water levels. We explore the options for using large ensembles of projections to more reliably evaluate return periods of extreme water levels. Single projections of the ensemble are constructed by fitting several sets of block maxima with various extreme value distributions. The ensemble is based on two simulated data sets produced at the Swedish Meteorological and Hydrological Institute: a hindcast by the Rossby Centre Ocean model sampled with a resolution of 6 h, and a similar hindcast by the circulation model NEMO with a resolution of 1 h. As the annual maxima of water levels in the Baltic Sea are not always uncorrelated, we employ maxima for calendar years and for stormy seasons. As the shape parameter of the Generalised Extreme Value distribution changes its sign and substantially varies in magnitude along the eastern coast of the Baltic Sea, the use of a single distribution for the entire coast is inappropriate. The ensemble involves projections based on the Generalised Extreme Value, Gumbel and Weibull distributions. The parameters of these distributions are evaluated in three different ways: the maximum likelihood method and the method of moments based on both biased and unbiased estimates. The total number of projections in the ensemble is 40. As some of the resulting estimates contain limited additional information, the members of pairs of projections that are highly correlated are assigned weights of 0.6. A comparison of the ensemble-based projection of

  3. An ensemble prediction approach to weekly Dengue cases forecasting based on climatic and terrain conditions

    Directory of Open Access Journals (Sweden)

    Sougata Deb

    2017-11-01

    Full Text Available Introduction: Dengue fever has been one of the most concerning endemic diseases of recent times. Every year, 50-100 million people are infected by the dengue virus across the world. Historically, it has been most prevalent in Southeast Asia and the Pacific Islands. In recent years, frequent dengue epidemics have started occurring in Latin America as well. This study focused on assessing the impact of different short- and long-term lagged climatic predictors on dengue cases. Additionally, it assessed the impact of building an ensemble model using multiple time series and regression models on improving prediction accuracy. Materials and Methods: Experimental data were based on two Latin American cities, viz. San Juan (Puerto Rico) and Iquitos (Peru). Due to weather and geographic differences, San Juan recorded higher dengue incidence than Iquitos. Using lagged cross-correlations, this study confirmed the impact of temperature and vegetation on the number of dengue cases for both cities, though to varied degrees and at different time lags. An ensemble of multiple predictive models using an elaborate set of derived predictors was built and validated. Results: The proposed ensemble prediction achieved a mean absolute error of 21.55, 4.26 points lower than the 25.81 obtained by a standard negative binomial model. Changes in climatic conditions and urbanization were found to be strong predictors, as established empirically in other research. Some of the predictors were new and informative, and have not been explored in any other relevant studies yet. Discussion and Conclusions: Two original contributions were made in this research. Firstly, a focused and extensive feature engineering aligned with the mosquito lifecycle. Secondly, a novel covariate pattern-matching based prediction approach using the past time series trend of the predictor variables.
Increased accuracy of the proposed model over the benchmark model proved the appropriateness of the analytical approach
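    Combining member forecasts and comparing mean absolute errors, as in the Results above, can be sketched with a simple mean ensemble. The weekly counts and the three member models below are made-up illustrations, not the study's data:

```python
def mae(pred, obs):
    """Mean absolute error between predictions and observations."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

def ensemble_mean(*model_preds):
    """Combine member forecasts by simple averaging, week by week."""
    return [sum(vals) / len(vals) for vals in zip(*model_preds)]

# Illustrative weekly case counts and three members with different biases.
observed = [12, 18, 25, 40, 31, 22, 15]
model_a = [10, 20, 22, 45, 28, 25, 12]   # e.g. a time-series model
model_b = [15, 15, 30, 35, 35, 20, 18]   # e.g. a regression on climate lags
model_c = [14, 19, 27, 38, 30, 24, 16]   # e.g. a pattern-matching approach
combined = ensemble_mean(model_a, model_b, model_c)
for name, pred in [("A", model_a), ("B", model_b),
                   ("C", model_c), ("ensemble", combined)]:
    print(name, round(mae(pred, observed), 2))
```

    When the members' errors are not perfectly correlated, averaging cancels part of them, which is why the ensemble's MAE can undercut every individual member, mirroring the 21.55 versus 25.81 result reported above.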

  4. Hybrid nanomembrane-based capacitors for the determination of the dielectric constant of semiconducting molecular ensembles

    Science.gov (United States)

    Petrini, Paula A.; Silva, Ricardo M. L.; de Oliveira, Rafael F.; Merces, Leandro; Bof Bufon, Carlos C.

    2018-06-01

    Considerable advances in the field of molecular electronics have been achieved over the recent years. One persistent challenge, however, is the exploitation of the electronic properties of molecules fully integrated into devices. Typically, the molecular electronic properties are investigated using sophisticated techniques incompatible with a practical device technology, such as scanning tunneling microscopy. The incorporation of molecular materials in devices is not a trivial task as the typical dimensions of electrical contacts are much larger than the molecular ones. To tackle this issue, we report on hybrid capacitors using mechanically-compliant nanomembranes to encapsulate ultrathin molecular ensembles for the investigation of molecular dielectric properties. As the prototype material, copper (II) phthalocyanine (CuPc) has been chosen as information on its dielectric constant (kCuPc) at the molecular scale is missing. Here, hybrid nanomembrane-based capacitors containing metallic nanomembranes, insulating Al2O3 layers, and the CuPc molecular ensembles have been fabricated and evaluated. The Al2O3 is used to prevent short circuits through the capacitor plates as the molecular layer is considerably thin (<30 nm). From the electrical measurements of devices with molecular layers of different thicknesses, the CuPc dielectric constant has been reliably determined (kCuPc = 4.5 ± 0.5). These values suggest a mild contribution of the molecular orientation on the CuPc dielectric properties. The reported nanomembrane-based capacitor is a viable strategy for the dielectric characterization of ultrathin molecular ensembles integrated into a practical, real device technology.

  5. Hybrid nanomembrane-based capacitors for the determination of the dielectric constant of semiconducting molecular ensembles.

    Science.gov (United States)

    Petrini, Paula Andreia; Lopes da Silva, Ricardo Magno; de Oliveira, Rafael Furlan; Merces, Leandro; Bufon, Carlos César Bof

    2018-04-06

    Considerable advances in the field of molecular electronics have been achieved over the recent years. One persistent challenge, however, is the exploitation of the electronic properties of molecules fully integrated into devices. Typically, the molecular electronic properties are investigated using sophisticated techniques incompatible with a practical device technology, such as the scanning tunneling microscope (STM). The incorporation of molecular materials in devices is not a trivial task since the typical dimensions of electrical contacts are much larger than the molecular ones. To tackle this issue, we report on hybrid capacitors using mechanically-compliant nanomembranes to encapsulate ultrathin molecular ensembles for the investigation of molecular dielectric properties. As the prototype material, copper (II) phthalocyanine (CuPc) has been chosen as information on its dielectric constant (kCuPc) at the molecular scale is missing. Here, hybrid nanomembrane-based capacitors containing metallic nanomembranes, insulating Al2O3 layers, and the CuPc molecular ensemble have been fabricated and evaluated. The Al2O3 is used to prevent short circuits through the capacitor plates as the molecular layer is considerably thin (< 30 nm). From the electrical measurements of devices with molecular layers of different thicknesses, the CuPc dielectric constant has been reliably determined (kCuPc = 4.5 ± 0.5). These values suggest a mild contribution of molecular orientation in the CuPc dielectric properties. The reported nanomembrane-based capacitor is a viable strategy for the dielectric characterization of ultrathin molecular ensembles integrated into a practical, real device technology. © 2018 IOP Publishing Ltd.
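    The thickness-series analysis described in both records follows from treating the Al2O3 layer and the molecular ensemble as capacitors in series, 1/C = 1/C_ox + d_mol/(ε0·k·A), so the slope of 1/C versus molecular thickness yields k. A sketch with synthetic data; the device area, oxide capacitance, and thickness values are illustrative assumptions, not the paper's measurements:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def dielectric_constant(thicknesses_m, capacitances_F, area_m2):
    """Molecular layer and oxide act as capacitors in series:
       1/C = 1/C_ox + d_mol / (eps0 * k * A),
    so the slope of 1/C versus d_mol gives k directly."""
    inv_c = [1.0 / c for c in capacitances_F]
    slope, _ = fit_line(thicknesses_m, inv_c)
    return 1.0 / (slope * EPS0 * area_m2)

# Synthetic data generated with k = 4.5, a hypothetical area and oxide term.
area = 1e-8   # m^2
c_ox = 5e-12  # F, fixed Al2O3 contribution
d = [5e-9, 10e-9, 15e-9, 20e-9, 25e-9]
caps = [1.0 / (1.0 / c_ox + di / (4.5 * EPS0 * area)) for di in d]
print(round(dielectric_constant(d, caps, area), 2))  # recovers ~4.5
```

    Fitting the slope rather than a single device cancels the unknown, thickness-independent oxide term, which is what makes the multi-thickness series essential.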

  6. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  7. An Ensemble Approach to Knowledge-Based Intensity-Modulated Radiation Therapy Planning

    Directory of Open Access Journals (Sweden)

    Jiahan Zhang

    2018-03-01

    Full Text Available Knowledge-based planning (KBP) utilizes experienced planners’ knowledge embedded in prior plans to estimate the optimal achievable dose volume histogram (DVH) of new cases. In the regression-based KBP framework, previously planned patients’ anatomical features and DVHs are extracted, and prior knowledge is summarized as the regression coefficients that transform features to organ-at-risk DVH predictions. In our study, we find that different regression methods work better in different settings. To improve the robustness of KBP models, we propose an ensemble method that combines the strengths of various linear regression models, including stepwise, lasso, elastic net, and ridge regression. In the ensemble approach, we first obtain individual model prediction metadata using in-training-set leave-one-out cross validation. A constrained optimization is subsequently performed to decide the individual model weights. The metadata are also used to filter out impactful training set outliers. We evaluate our method on a fresh set of retrospectively retrieved anonymized prostate intensity-modulated radiation therapy (IMRT) cases and head and neck IMRT cases. The proposed approach is more robust against small training set size, wrongly labeled cases, and dosimetrically inferior plans, compared with the individual models. In summary, we believe the improved robustness makes the proposed method more suitable for clinical settings than individual models.
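    The weight-deciding step can be illustrated as a constrained optimization over the probability simplex (non-negative weights summing to one). The coarse grid search below is a simple stand-in for the paper's optimizer, and the leave-one-out predictions are made-up numbers:

```python
from itertools import product

def simplex_weights(preds, target, step=0.05):
    """Pick non-negative weights summing to 1 that minimize the squared error
    of the weighted combination. A coarse grid search over the simplex,
    standing in for a proper constrained optimizer."""
    m, n = len(preds), len(target)
    best, best_err = None, float("inf")
    ticks = [i * step for i in range(int(round(1 / step)) + 1)]
    for w in product(ticks, repeat=m - 1):
        last = 1.0 - sum(w)          # remaining mass for the last model
        if last < -1e-9:
            continue
        ws = list(w) + [last]
        err = sum((sum(ws[j] * preds[j][i] for j in range(m)) - target[i]) ** 2
                  for i in range(n))
        if err < best_err:
            best, best_err = ws, err
    return best, best_err

# Hypothetical leave-one-out predictions of three regression models
# (e.g. lasso, ridge, elastic net) against the achieved DVH metric.
target = [1.0, 2.0, 3.0, 4.0, 5.0]
lasso = [1.1, 2.2, 2.9, 4.1, 5.2]
ridge = [0.8, 1.9, 3.1, 3.8, 4.9]
enet = [1.4, 2.4, 3.4, 4.4, 5.5]
weights, err = simplex_weights([lasso, ridge, enet], target)
print([round(w, 2) for w in weights], round(err, 4))
```

    Because the grid includes the simplex corners, the combined error can never be worse than the best single model on the metadata used to fit the weights.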

  8. Combining Rosetta with molecular dynamics (MD): A benchmark of the MD-based ensemble protein design.

    Science.gov (United States)

    Ludwiczak, Jan; Jarmula, Adam; Dunin-Horkawicz, Stanislaw

    2018-07-01

    Computational protein design is a set of procedures for computing amino acid sequences that will fold into a specified structure. Rosetta Design, a commonly used software for protein design, allows for the effective identification of sequences compatible with a given backbone structure, while molecular dynamics (MD) simulations can thoroughly sample near-native conformations. We benchmarked a procedure in which Rosetta Design is started on MD-derived structural ensembles and showed that such a combined approach generates 20-30% more diverse sequences than currently available methods with only a slight increase in computation time. Importantly, the increase in diversity is achieved without a loss in the quality of the designed sequences, assessed by their resemblance to natural sequences. We demonstrate that the MD-based procedure is also applicable to de novo design tasks started from backbone structures without any sequence information. In addition, we implemented a protocol that can be used to assess the stability of designed models and to select the best candidates for experimental validation. In sum, our results demonstrate that the MD ensemble-based flexible backbone design can be a viable method for protein design, especially for tasks that require a large pool of diverse sequences. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Extending Correlation Filter-Based Visual Tracking by Tree-Structured Ensemble and Spatial Windowing.

    Science.gov (United States)

    Gundogdu, Erhan; Ozkan, Huseyin; Alatan, A Aydin

    2017-11-01

    Correlation filters have been successfully used in visual tracking due to their modeling power and computational efficiency. However, the state-of-the-art correlation filter-based (CFB) tracking algorithms tend to quickly discard the previous poses of the target, since they consider only a single filter in their models. On the contrary, our approach is to register multiple CFB trackers for previous poses and exploit the registered knowledge when an appearance change occurs. To this end, we propose a novel tracking algorithm (of complexity O(D)) based on a large ensemble of CFB trackers. The ensemble (of size O(2^D)) is organized over a binary tree (depth D), and learns the target appearance subspaces such that each constituent tracker becomes an expert of a certain appearance. During tracking, the proposed algorithm combines only the appearance-aware relevant experts to produce boosted tracking decisions. Additionally, we propose a versatile spatial windowing technique to enhance the individual expert trackers. For this purpose, spatial windows are learned for target objects as well as the correlation filters and then the windowed regions are processed for more robust correlations. In our extensive experiments on benchmark datasets, we achieve a substantial performance increase by using the proposed tracking algorithm together with the spatial windowing.

  10. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Science.gov (United States)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided by the method can be given a physically meaningful interpretation.
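    The defining trait of Extra-Trees, a fully random choice of both the split feature and the cut-point, fits in a short sketch. This is a simplified toy regressor on synthetic data (no variable-importance computation, no bootstrap), not the implementation evaluated in the paper:

```python
import random
import statistics

def build_tree(X, y, rng, max_depth=6, min_samples=3):
    """One extremely randomized tree: both the split feature and the
    cut-point are drawn at random, with no split optimization at all."""
    if max_depth == 0 or len(y) < min_samples or len(set(y)) == 1:
        return ("leaf", statistics.fmean(y))
    f = rng.randrange(len(X[0]))
    lo, hi = min(x[f] for x in X), max(x[f] for x in X)
    if lo == hi:
        return ("leaf", statistics.fmean(y))
    cut = rng.uniform(lo, hi)
    left = [(x, t) for x, t in zip(X, y) if x[f] < cut]
    right = [(x, t) for x, t in zip(X, y) if x[f] >= cut]
    if not left or not right:
        return ("leaf", statistics.fmean(y))
    return ("node", f, cut,
            build_tree(*zip(*left), rng, max_depth - 1, min_samples),
            build_tree(*zip(*right), rng, max_depth - 1, min_samples))

def predict(tree, x):
    while tree[0] == "node":
        _, f, cut, lt, rt = tree
        tree = lt if x[f] < cut else rt
    return tree[1]

def extra_trees(X, y, n_trees=50, seed=0):
    rng = random.Random(seed)
    return [build_tree(X, y, rng) for _ in range(n_trees)]

def ensemble_predict(trees, x):
    return statistics.fmean(predict(t, x) for t in trees)

# Toy 'streamflow' response: flow grows with rainfall x[0] and is noisy.
rng = random.Random(3)
X = [[rng.uniform(0, 10), rng.uniform(0, 1)] for _ in range(200)]
y = [3.0 * x[0] + rng.gauss(0, 1.0) for x in X]
trees = extra_trees(X, y)
print(round(ensemble_predict(trees, [2.0, 0.5]), 1))
print(round(ensemble_predict(trees, [8.0, 0.5]), 1))
```

    Each individual tree is a weak, high-variance predictor, but averaging many fully randomized trees recovers the underlying trend, which is the trade-off the abstract describes between standalone CART and the Extra-Trees ensemble.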

  11. Ensemble ANNs-PSO-GA Approach for Day-ahead Stock E-exchange Prices Forecasting

    Directory of Open Access Journals (Sweden)

    Yi Xiao

    2013-02-01

    Full Text Available Stock e-exchange price forecasting is an important financial problem that is receiving increasing attention. This study proposes a novel three-stage nonlinear ensemble model. In the proposed model, three different types of neural-network based models, i.e. an Elman network, a generalized regression neural network (GRNN) and a wavelet neural network (WNN), are constructed on three non-overlapping training sets and are further optimized by improved particle swarm optimization (IPSO). Finally, a neural-network-based nonlinear meta-model is generated by learning the three neural-network based models through a support vector machine (SVM) neural network. The superiority of the proposed approach lies in its flexibility to account for potentially complex nonlinear relationships. Three daily stock index time series are used for validating the forecasting model. Empirical results suggest the ensemble ANNs-PSO-GA approach can significantly improve the prediction performance over the individual models and linear combination models listed in this study.

  12. Ensemble Classification of Data Streams Based on Attribute Reduction and a Sliding Window

    Directory of Open Access Journals (Sweden)

    Yingchun Chen

    2018-04-01

    Full Text Available With the current increasing volume and dimensionality of data, traditional data classification algorithms are unable to satisfy the demands of practical classification applications of data streams. To deal with noise and concept drift in data streams, we propose an ensemble classification algorithm based on attribute reduction and a sliding window in this paper. Using mutual information, an approximate attribute reduction algorithm based on rough sets is used to reduce data dimensionality and increase the diversity of reduced results in the algorithm. A double-threshold concept drift detection method and a three-stage sliding window control strategy are introduced to improve the performance of the algorithm when dealing with both noise and concept drift. The classification precision is further improved by updating the base classifiers and their nonlinear weights. Experiments on synthetic datasets and actual datasets demonstrate the performance of the algorithm in terms of classification precision, memory use, and time efficiency.
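    The double-threshold drift detection mentioned above can be sketched as a sliding window of recent 0/1 prediction errors checked against a warning threshold and a drift threshold. The window size, threshold values, and the idealized error stream below are illustrative assumptions, not the paper's configuration:

```python
from collections import deque

def detect_drift(error_stream, window=30, warn=0.30, drift=0.50):
    """Double-threshold drift detection on a sliding window of 0/1
    prediction errors: exceeding `warn` flags a possible drift,
    exceeding `drift` confirms it."""
    buf = deque(maxlen=window)   # the sliding window of recent errors
    states = []
    for err in error_stream:
        buf.append(err)
        rate = sum(buf) / len(buf)
        if rate >= drift:
            states.append("drift")
        elif rate >= warn:
            states.append("warning")
        else:
            states.append("stable")
    return states

# Idealized stream: no errors for 100 steps, then the concept changes
# and every prediction is wrong.
stream = [0] * 100 + [1] * 100
states = detect_drift(stream)
print(states[99], states[108], states[120])  # stable warning drift
```

    In an ensemble classifier, the "warning" state is a natural point to start training a candidate base classifier, and the "drift" state the point to replace the worst-performing member and update the weights.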

  13. An empirical study of ensemble-based semi-supervised learning approaches for imbalanced splice site datasets.

    Science.gov (United States)

    Stanescu, Ana; Caragea, Doina

    2015-01-01

    Recent biochemical advances have led to inexpensive, time-efficient production of massive volumes of raw genomic data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data. The process of labeling data can be expensive, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on the problem of predicting splice sites in a genome using semi-supervised learning approaches. This is a challenging problem, due to the highly imbalanced distribution of the data, i.e., small number of splice sites as compared to the number of non-splice sites. To address this challenge, we propose to use ensembles of semi-supervised classifiers, specifically self-training and co-training classifiers. Our experiments on five highly imbalanced splice site datasets, with positive to negative ratios of 1-to-99, showed that the ensemble-based semi-supervised approaches represent a good choice, even when the amount of labeled data consists of less than 1% of all training data. In particular, we found that ensembles of co-training and self-training classifiers that dynamically balance the set of labeled instances during the semi-supervised iterations show improvements over the corresponding supervised ensemble baselines. In the presence of limited amounts of labeled data, ensemble-based semi-supervised approaches can successfully leverage the unlabeled data to enhance supervised ensembles learned from highly imbalanced data distributions. Given that such distributions are common for many biological sequence classification problems, our work can be seen as a stepping stone towards more sophisticated ensemble-based approaches to biological sequence annotation in a semi-supervised framework.

  14. Ensemble Classifiers for Predicting HIV-1 Resistance from Three Rule-Based Genotypic Resistance Interpretation Systems.

    Science.gov (United States)

    Raposo, Letícia M; Nobre, Flavio F

    2017-08-30

    Resistance to antiretrovirals (ARVs) is a major problem faced by HIV-infected individuals. Different rule-based algorithms were developed to infer HIV-1 susceptibility to antiretrovirals from genotypic data. However, there is discordance between them, resulting in difficulties for clinical decisions about which treatment to use. Here, we developed ensemble classifiers integrating three interpretation algorithms: Agence Nationale de Recherche sur le SIDA (ANRS), Rega, and the genotypic resistance interpretation system from the Stanford HIV Drug Resistance Database (HIVdb). Three approaches were applied to develop a classifier with a single resistance profile: stacked generalization, a simple plurality vote scheme, and the selection of the interpretation system with the best performance. The strategies were compared using Friedman's test, and the performance of the classifiers was evaluated using the F-measure, sensitivity and specificity values. We found that the three strategies had similar performances for the selected antiretrovirals. For some cases, the stacking technique with naïve Bayes as the learning algorithm showed a statistically superior F-measure. This study demonstrates that ensemble classifiers can be an alternative tool for clinical decision-making since they provide a single resistance profile from the most commonly used resistance interpretation systems.
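    The simple plurality vote scheme among the three interpretation systems can be sketched as follows. The tie-breaking rule, falling back to the most conservative call, is an illustrative assumption rather than necessarily the paper's choice:

```python
from collections import Counter

def plurality_vote(calls):
    """Combine the three rule-based calls (e.g. ANRS, Rega, HIVdb) for one
    drug into a single resistance profile; ties fall back to the more
    conservative call ('R' over 'I' over 'S') as an assumed policy."""
    severity = {"S": 0, "I": 1, "R": 2}  # susceptible / intermediate / resistant
    counts = Counter(calls)
    top = max(counts.values())
    tied = [c for c, n in counts.items() if n == top]
    return max(tied, key=lambda c: severity[c])

print(plurality_vote(["S", "S", "R"]))  # -> 'S'
print(plurality_vote(["S", "I", "R"]))  # three-way tie -> conservative 'R'
```

    Stacked generalization differs from this in that the three calls become input features to a trained meta-learner (naïve Bayes in the best-performing variant) instead of being counted directly.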

  15. Various multistage ensembles for prediction of heating energy consumption

    Directory of Open Access Journals (Sweden)

    Radisa Jovanovic

    2015-04-01

    Full Text Available Feedforward neural network models are created for the prediction of daily heating energy consumption of the NTNU university campus Gloshaugen, using actual measured data for training and testing. An improvement of prediction accuracy is proposed by using a neural network ensemble. Previously trained feedforward neural networks are first separated into clusters using the k-means algorithm, and then the best network of each cluster is chosen as a member of the ensemble. Two conventional averaging methods for obtaining the ensemble output are applied: simple and weighted. In order to achieve better prediction results, a multistage ensemble is investigated. At the second level, adaptive neuro-fuzzy inference systems with various clustering and membership functions are used to aggregate the selected ensemble members. A feedforward neural network in the second stage is also analyzed. It is shown that an ensemble of neural networks can predict heating energy consumption with better accuracy than the best trained single neural network, while the best results are achieved with the multistage ensemble.
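    The two conventional combination rules mentioned above, simple and weighted averaging, can be sketched directly. Weighting members by inverse validation error is one common choice and an assumption here, as are all the numbers:

```python
def simple_average(preds):
    """Unweighted mean of the member predictions, day by day."""
    return [sum(col) / len(col) for col in zip(*preds)]

def weighted_average(preds, val_errors):
    """Weight each member by the inverse of its validation error
    (one common scheme; the paper's second stage instead trains
    a model such as ANFIS on the member outputs)."""
    inv = [1.0 / e for e in val_errors]
    total = sum(inv)
    ws = [i / total for i in inv]
    return [sum(w * p for w, p in zip(ws, col)) for col in zip(*preds)]

# Daily heating-energy predictions (kWh, illustrative) from three members,
# e.g. the best network of each k-means cluster.
member_preds = [
    [410.0, 395.0, 430.0],   # cluster-1 best network
    [405.0, 400.0, 425.0],   # cluster-2 best network
    [390.0, 380.0, 410.0],   # cluster-3 best network (larger validation error)
]
val_errors = [8.0, 9.0, 20.0]
print(simple_average(member_preds))
print(weighted_average(member_preds, val_errors))
```

    The weighted output leans toward the two more accurate members; the multistage variant replaces these fixed rules with a trained second-level aggregator.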

  16. Use of deep neural network ensembles to identify embryonic-fetal transition markers: repression of COX7A1 in embryonic and cancer cells.

    Science.gov (United States)

    West, Michael D; Labat, Ivan; Sternberg, Hal; Larocca, Dana; Nasonkin, Igor; Chapman, Karen B; Singh, Ratnesh; Makarev, Eugene; Aliper, Alex; Kazennov, Andrey; Alekseenko, Andrey; Shuvalov, Nikolai; Cheskidova, Evgenia; Alekseev, Aleksandr; Artemov, Artem; Putin, Evgeny; Mamoshina, Polina; Pryanichnikov, Nikita; Larocca, Jacob; Copeland, Karen; Izumchenko, Evgeny; Korzinkin, Mikhail; Zhavoronkov, Alex

    2018-01-30

    Here we present the application of deep neural network (DNN) ensembles trained on transcriptomic data to identify novel markers associated with the mammalian embryonic-fetal transition (EFT). Molecular markers of this process could provide important insights into regulatory mechanisms of normal development, epimorphic tissue regeneration and cancer. Subsequent analysis of the most significant genes behind the DNN classifier on an independent dataset of adult-derived and human embryonic stem cell (hESC)-derived progenitor cell lines led to the identification of the COX7A1 gene as a potential EFT marker. COX7A1, encoding a cytochrome C oxidase subunit, was up-regulated in post-EFT murine and human cells including adult stem cells, but was not expressed in pre-EFT pluripotent embryonic stem cells or their in vitro-derived progeny. COX7A1 expression was observed to be undetectable or low in multiple sarcoma and carcinoma cell lines as compared to normal controls. The knockout of the gene in mice led to a marked glycolytic shift reminiscent of the Warburg effect that occurs in cancer cells. The DNN approach facilitated the elucidation of a potentially new biomarker of cancer and pre-EFT cells, the embryo-onco phenotype, which may potentially be used as a target for controlling the embryonic-fetal transition.

  17. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelisation of rainfall-runoff model forecasts

    Science.gov (United States)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Électricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards, and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasts, which is essential to synthesize available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where a large amount of human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called the empirical approach) is based on a statistical modelisation of the empirical error of perfect forecasts, using streamflow sub-samples of quantile class and lead time. The second method (called the dynamical approach) is based on streamflow sub-samples of quantile class, streamflow variation, and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure a good post-processing of the hydrological ensemble, allowing a good improvement of reliability, skill and sharpness of ensemble forecasts. 
The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological
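    The empirical approach described here—dressing a deterministic forecast with past errors pooled by streamflow quantile class—can be sketched as follows. This is a toy illustration with synthetic data and a single lead time, not EDF's operational code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic archive of "perfect" (deterministic) forecasts and observations
# for one lead time; multiplicative errors grow with flow magnitude.
forecasts = rng.gamma(shape=2.0, scale=50.0, size=5000)
observations = forecasts * rng.lognormal(mean=0.0, sigma=0.2, size=5000)

# Partition past errors into quantile classes of the forecast value.
n_classes = 5
edges = np.quantile(forecasts, np.linspace(0, 1, n_classes + 1))
classes = np.clip(np.digitize(forecasts, edges[1:-1]), 0, n_classes - 1)

# Empirical multiplicative errors per quantile class.
ratios = observations / forecasts
error_samples = [ratios[classes == c] for c in range(n_classes)]

def dress(forecast_value, n_members=50):
    """Dress a new deterministic forecast with empirical error samples
    drawn from its quantile class, yielding an ensemble."""
    c = int(np.clip(np.digitize(forecast_value, edges[1:-1]), 0, n_classes - 1))
    return forecast_value * rng.choice(error_samples[c], size=n_members)

ensemble = dress(120.0)
print("dressed ensemble mean:", round(float(ensemble.mean()), 1))
```

    The dynamical approach would additionally condition the error sub-samples on recent streamflow variation.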

  18. Symmetric minimally entangled typical thermal states, grand-canonical ensembles, and the influence of the collapse bases

    Science.gov (United States)

    Binder, Moritz; Barthel, Thomas

    Based on DMRG, strongly correlated quantum many-body systems at finite temperatures can be simulated by sampling over a certain class of pure matrix product states (MPS) called minimally entangled typical thermal states (METTS). Here, we show how symmetries of the system can be exploited to considerably reduce computation costs in the METTS algorithm. While this is straightforward for the canonical ensemble, we introduce a modification of the algorithm to efficiently simulate the grand-canonical ensemble while utilizing symmetries. In addition, we construct novel symmetry-conserving collapse bases for the transitions in the Markov chain of METTS that improve the speed of convergence of the algorithm by reducing autocorrelations.

  19. A Neural Network-Based Interval Pattern Matcher

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2015-07-01

    Full Text Available One of the most important tasks in the machine learning area is classification, and neural networks are important classifiers. However, traditional neural networks cannot identify intervals, let alone classify them. To improve their identification ability, we propose a neural network-based interval pattern matcher in this paper. After summarizing the theoretical construction of the model, we carry out a simple experiment and a practical weather forecasting experiment, which show that the recognizer's accuracy reaches 100%, a promising result.

  20. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Roč. 10, č. 3 (2000), s. 417-428 ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant - others:MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords: neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  1. Unsupervised ensemble ranking of terms in electronic health record notes based on their importance to patients.

    Science.gov (United States)

    Chen, Jinying; Yu, Hong

    2017-04-01

    Allowing patients to access their own electronic health record (EHR) notes through online patient portals has the potential to improve patient-centered care. However, EHR notes contain abundant medical jargon that can be difficult for patients to comprehend. One way to help patients is to reduce information overload and help them focus on medical terms that matter most to them. Targeted education can then be developed to improve patient EHR comprehension and the quality of care. The aim of this work was to develop FIT (Finding Important Terms for patients), an unsupervised natural language processing (NLP) system that ranks medical terms in EHR notes based on their importance to patients. We built FIT on a new unsupervised ensemble ranking model derived from the biased random walk algorithm to combine heterogeneous information resources for ranking candidate terms from each EHR note. Specifically, FIT integrates four single views (rankers) for term importance: patient use of medical concepts, document-level term salience, word co-occurrence based term relatedness, and topic coherence. It also incorporates partial information of term importance as conveyed by terms' unfamiliarity levels and semantic types. We evaluated FIT on 90 expert-annotated EHR notes and used the four single-view rankers as baselines. In addition, we implemented three benchmark unsupervised ensemble ranking methods as strong baselines. FIT achieved 0.885 AUC-ROC for ranking candidate terms from EHR notes to identify important terms. When including term identification, the performance of FIT for identifying important terms from EHR notes was 0.813 AUC-ROC. Both performance scores significantly exceeded the corresponding scores from the four single rankers (P<0.001). FIT also outperformed the three ensemble rankers for most metrics. Its performance is relatively insensitive to its parameter. FIT can automatically identify EHR terms important to patients. 
It may help develop future interventions
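    The biased random walk underlying FIT's ensemble ranker resembles personalized PageRank: a walk on a term-relatedness graph whose restart distribution is biased by the single-view importance scores. A toy numpy sketch (graph and view scores invented, not the authors' implementation):

```python
import numpy as np

# Toy term-relatedness graph over 5 candidate terms (row-stochastic).
W = np.array([
    [0.0, 0.5, 0.5, 0.0, 0.0],
    [0.3, 0.0, 0.3, 0.4, 0.0],
    [0.3, 0.3, 0.0, 0.2, 0.2],
    [0.0, 0.5, 0.3, 0.0, 0.2],
    [0.0, 0.0, 0.5, 0.5, 0.0],
])

# Two single-view importance scores (e.g. term salience, patient usage),
# combined into one normalized restart (bias) vector.
views = np.array([
    [0.1, 0.4, 0.2, 0.2, 0.1],
    [0.2, 0.3, 0.3, 0.1, 0.1],
])
bias = views.mean(axis=0)
bias /= bias.sum()

# Biased random walk: r = alpha * W^T r + (1 - alpha) * bias, to a fixed point.
alpha = 0.85
r = np.full(5, 0.2)
for _ in range(100):
    r = alpha * W.T @ r + (1 - alpha) * bias
r /= r.sum()
print("term ranking (most important first):", np.argsort(-r))
```

    FIT combines four such views; here two suffice to show how heterogeneous rankers feed one stationary importance vector.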

  2. Observation-based Quantitative Uncertainty Estimation for Realtime Tsunami Inundation Forecast using ABIC and Ensemble Simulation

    Science.gov (United States)

    Takagawa, T.

    2016-12-01

    An ensemble forecasting scheme for tsunami inundation is presented. The scheme consists of three elemental methods. The first is a hierarchical Bayesian inversion using Akaike's Bayesian Information Criterion (ABIC). The second is Monte Carlo sampling from the probability density function of a multidimensional normal distribution. The third is ensemble analysis of tsunami inundation simulations with multiple tsunami sources. Simulation-based validation of the model was conducted. A tsunami scenario of an M9.1 Nankai earthquake was chosen as the target of validation. Tsunami inundation around Nagoya Port was estimated by using synthetic tsunami waveforms at offshore GPS buoys. The error in the estimated tsunami inundation area was about 10% even when only ten minutes of observation data were used. The estimation accuracy of waveforms on/off land and the spatial distribution of maximum tsunami inundation depth are demonstrated.
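    The Monte Carlo step—drawing an ensemble of tsunami sources from a multidimensional normal posterior—can be sketched with numpy. The mean and covariance below are invented stand-ins for the output of the ABIC-based inversion:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior mean and covariance of slip on three subfaults [m],
# as would come out of the hierarchical Bayesian inversion.
mean = np.array([1.2, 0.8, 1.5])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

# Draw an ensemble of tsunami sources; in the full scheme each member
# would drive one inundation simulation.
members = rng.multivariate_normal(mean, cov, size=100)

# Ensemble statistics give a quantitative uncertainty estimate.
print("ensemble mean slip:", members.mean(axis=0).round(2))
print("ensemble std  slip:", members.std(axis=0).round(2))
```

    Spread across the resulting inundation simulations then translates the source uncertainty into inundation-depth uncertainty.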

  3. 3-D visualization of ensemble weather forecasts - Part 2: Forecasting warm conveyor belt situations for aircraft-based field campaigns

    Science.gov (United States)

    Rautenhaus, M.; Grams, C. M.; Schäfler, A.; Westermann, R.

    2015-02-01

    We present the application of interactive 3-D visualization of ensemble weather predictions to forecasting warm conveyor belt (WCB) situations during aircraft-based atmospheric research campaigns. Motivated by forecast requirements of the T-NAWDEX-Falcon 2012 campaign, a method to predict 3-D probabilities of the spatial occurrence of WCBs has been developed. Probabilities are derived from Lagrangian particle trajectories computed on the forecast wind fields of the ECMWF ensemble prediction system. Integration of the method into the 3-D ensemble visualization tool Met.3D, introduced in the first part of this study, facilitates interactive visualization of WCB features and derived probabilities in the context of the ECMWF ensemble forecast. We investigate the sensitivity of the method with respect to trajectory seeding and forecast wind field resolution. Furthermore, we propose a visual analysis method to quantitatively analyse the contribution of ensemble members to a probability region and, thus, to assist the forecaster in interpreting the obtained probabilities. A case study, revisiting a forecast case from T-NAWDEX-Falcon, illustrates the practical application of Met.3D and demonstrates the use of 3-D and uncertainty visualization for weather forecasting and for planning flight routes in the medium forecast range (three to seven days before take-off).

  4. Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure

    Directory of Open Access Journals (Sweden)

    Xiaodong Zeng

    2014-01-01

    Full Text Available This paper presents weighted accuracy and diversity (WAD), a novel measure for evaluating the quality of a classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis: a robust classifier ensemble should consist of members that are not only accurate but also different from one another. In fact, accuracy and diversity are mutually restraining factors: an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is obtained by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and to two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms, a genetic algorithm and a forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases.
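    A minimal sketch of a WAD-style score—the weighted harmonic mean of ensemble accuracy and diversity—is given below. Diversity is measured here as mean pairwise disagreement, a common proxy; the paper's exact definitions and weighting may differ:

```python
import numpy as np

def majority_vote(preds):
    """Majority vote over (n_classifiers, n_samples) integer labels."""
    return np.array([np.bincount(col).argmax() for col in preds.T])

def pairwise_disagreement(preds):
    """Mean fraction of samples on which pairs of classifiers disagree."""
    n = preds.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean([(preds[i] != preds[j]).mean() for i, j in pairs]))

def wad_score(preds, y_true, w_acc=0.5, w_div=0.5):
    """Weighted harmonic mean of ensemble accuracy and diversity."""
    acc = (majority_vote(preds) == y_true).mean()
    div = pairwise_disagreement(preds)
    if acc == 0 or div == 0:
        return 0.0
    return (w_acc + w_div) / (w_acc / acc + w_div / div)

# Three classifiers' label predictions on six samples.
preds = np.array([
    [0, 1, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 0],
])
y_true = np.array([0, 1, 1, 0, 1, 0])
print("WAD =", wad_score(preds, y_true))
```

    By hand, this toy ensemble has majority-vote accuracy 1.0 and mean pairwise disagreement 1/3, so WAD = 1 / (0.5/1 + 0.5/(1/3)) = 0.5.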

  5. Ensemble Methods

    Science.gov (United States)

    Re, Matteo; Valentini, Giorgio

    2012-03-01

    Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall "ensemble" decision is rooted in our culture at least since the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is represented by the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of "votes" (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers in this area [14,62,85,149,173]. Several theories have been
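    The majority-vote ensemble described above fits in a few lines (a generic sketch with invented labels):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the class labels predicted by several learning machines;
    the class receiving the most 'votes' is the ensemble's prediction."""
    return Counter(predictions).most_common(1)[0][0]

# Three "experts" vote on one sample; two say "cat", one says "dog".
print(majority_vote(["cat", "dog", "cat"]))  # -> cat
```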

  6. Optical-Correlator Neural Network Based On Neocognitron

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    A multichannel optical correlator implements a shift-invariant, high-discrimination pattern-recognizing neural network based on the paradigm of the neocognitron. The optical correlator was selected as the basic building block of this neural network because invariance under shifts is an inherent advantage of the Fourier optics included in optical correlators in general. The neocognitron is a conceptual electronic neural-network model for the recognition of visual patterns. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. The neural network is trained using characteristic features extracted from target images. The multichannel implementation enables parallel processing of a large number of selected features.

  7. Effectiveness of firefly algorithm based neural network in time series ...

    African Journals Online (AJOL)

    Effectiveness of firefly algorithm based neural network in time series forecasting. ... In the experiments, three well known time series were used to evaluate the performance. Results obtained were compared with ... Keywords: Time series, Artificial Neural Network, Firefly Algorithm, Particle Swarm Optimization, Overfitting ...

  8. NYYD Ensemble

    Index Scriptorium Estoniae

    2002-01-01

    On the NYYD Ensemble duo Traksmann - Lukk performing E.-S. Tüür's work "Symbiosis", which has also been recorded on the recently released NYYD Ensemble CD. Concerts on 2 March in the small hall of the Rakvere Theatre and on 3 March at the Rotermann Salt Storage; the programme includes Tüür, Kaumann, Berio, Reich, Yun, Hauta-aho, Buckinx

  9. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Full Text Available Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  10. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
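    The tailored 0-1 knapsack both abstracts mention selects base classifiers whose total diversity "cost" fits a budget while maximizing an accuracy "value". A generic dynamic-programming 0-1 knapsack over hypothetical scores (the paper's tailored variant is not reproduced):

```python
def knapsack_select(values, weights, capacity):
    """Standard 0-1 knapsack by dynamic programming.
    values[i]: accuracy score of classifier i (scaled to integers),
    weights[i]: integer diversity cost, capacity: diversity budget.
    Returns the indices of the selected classifiers."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]
            if weights[i - 1] <= c:
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - weights[i - 1]] + values[i - 1])
    # Backtrack to recover the chosen subset.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return sorted(chosen)

# Five base classifiers: accuracy (x100) and a diversity cost each.
acc = [82, 79, 88, 75, 90]
cost = [3, 2, 4, 1, 5]
print(knapsack_select(acc, cost, capacity=8))  # -> [0, 2, 3]
```

    With budget 8, the best subset is classifiers 0, 2 and 3 (total value 245 at total cost 8).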

  11. Application of NARR-based NLDAS Ensemble Simulations to Continental-Scale Drought Monitoring

    Science.gov (United States)

    Alonge, C. J.; Cosgrove, B. A.

    2008-05-01

    Government estimates indicate that droughts cause billions of dollars of damage to agricultural interests each year. More effective identification of droughts would directly benefit decision makers, and would allow for the more efficient allocation of resources that might mitigate the event. Land data assimilation systems, with their high quality representations of soil moisture, present an ideal platform for drought monitoring, and offer many advantages over traditional modeling systems. The recently released North American Regional Reanalysis (NARR) covers the NLDAS domain and provides all fields necessary to force the NLDAS for 27 years. This presents an ideal opportunity to combine NARR and NLDAS resources into an effective real-time drought monitor. Toward this end, our project seeks to validate and explore the NARR's suitability as a base for drought monitoring applications - both in terms of data set length and accuracy. Along the same lines, the project will examine the impact of the use of different (longer) LDAS model climatologies on drought monitoring, and will explore the advantages of ensemble simulations versus single model simulations in drought monitoring activities. We also plan to produce NARR- and observation-based, high-quality, 27-year, 1/8th-degree, 3-hourly land surface and meteorological forcing data sets. An investigation of the best way to force an LDAS-type system will also be made, with traditional NLDAS and NLDASE forcing options explored. This presentation will focus on an overview of the drought monitoring project, and will include a summary of recent progress. Developments include the generation of forcing data sets, ensemble LSM output, and production of model-based drought indices over the entire NLDAS domain. Project forcing files use 32km NARR model output as a data backbone, and include observed precipitation (blended CPC gauge, PRISM gauge, Stage II, HPD, and CMORPH) and a GOES-based bias correction of downward solar

  12. Assessment of robustness and significance of climate change signals for an ensemble of distribution-based scaled climate projections

    DEFF Research Database (Denmark)

    Seaby, Lauren Paige; Refsgaard, J.C.; Sonnenborg, T.O.

    2013-01-01

    An ensemble of 11 regional climate model (RCM) projections are analysed for Denmark from a hydrological modelling inputs perspective. Two bias correction approaches are applied: a relatively simple monthly delta change (DC) method and a more complex daily distribution-based scaling (DBS) method...

  13. Ensemble Manifold Rank Preserving for Acceleration-Based Human Activity Recognition.

    Science.gov (United States)

    Tao, Dapeng; Jin, Lianwen; Yuan, Yuan; Xue, Yang

    2016-06-01

    With the rapid development of mobile devices and pervasive computing technologies, acceleration-based human activity recognition, a difficult yet essential problem in mobile apps, has received intensive attention recently. Different acceleration signals representing different activities, or even the same activity, have different attributes, which causes trouble in normalizing the signals. We thus cannot directly compare these signals with each other, because they are embedded in a nonmetric space. Therefore, we present a nonmetric scheme that retains discriminative and robust frequency domain information by developing a novel ensemble manifold rank preserving (EMRP) algorithm. EMRP simultaneously considers three aspects: 1) it encodes the local geometry using the ranking order information of intraclass samples distributed on local patches; 2) it keeps the discriminative information by maximizing the margin between samples of different classes; and 3) it finds the optimal linear combination of the alignment matrices to approximate the intrinsic manifold lying in the data. Experiments are conducted on the South China University of Technology naturalistic 3-D acceleration-based activity dataset and the naturalistic mobile-device-based human activity dataset to demonstrate the robustness and effectiveness of the new nonmetric scheme for acceleration-based human activity recognition.

  14. Investigating properties of the cardiovascular system using innovative analysis algorithms based on ensemble empirical mode decomposition.

    Science.gov (United States)

    Yeh, Jia-Rong; Lin, Tzu-Yu; Chen, Yun; Sun, Wei-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2012-01-01

    The cardiovascular system is known to be nonlinear and nonstationary. Traditional linear algorithms for assessing the arterial stiffness and systemic resistance of the cardiac system suffer from nonstationarity or are inconvenient in practical applications. In this pilot study, two new assessment methods were developed: the first is an ensemble empirical mode decomposition based reflection index (EEMD-RI), while the second is based on the phase shift between ECG and BP on cardiac oscillation. Both methods utilise the EEMD algorithm, which is suitable for nonlinear and nonstationary systems. These methods were used to investigate the arterial stiffness and systemic resistance of a pig's cardiovascular system via ECG and blood pressure (BP). The experiment simulated a sequence of continuous changes of blood pressure, from a steady condition to high blood pressure by clamping the artery, and the inverse by relaxing the artery. As a hypothesis, the arterial stiffness and systemic resistance should vary with the blood pressure due to clamping and relaxing the artery. The results show statistically significant correlations between BP, the EEMD-based RI, and the phase shift between ECG and BP on cardiac oscillation. The two assessment results demonstrate the merits of EEMD for signal analysis.
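    The phase shift between two quasi-periodic signals such as ECG and BP can, in the simplest case, be estimated from the lag of their cross-correlation peak. A toy numpy sketch with synthetic sinusoids; this is not the authors' EEMD-based procedure, which first decomposes the signals into intrinsic mode functions:

```python
import numpy as np

fs = 250.0                      # sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)
f = 1.2                         # "heart rate" of 1.2 Hz (~72 bpm)

ecg = np.sin(2 * np.pi * f * t)
bp = np.sin(2 * np.pi * f * t - np.pi / 4)   # BP lags ECG by 45 degrees

# The lag of the cross-correlation peak gives the time delay,
# which converts to a phase shift at frequency f.
corr = np.correlate(bp, ecg, mode="full")
lag = int(np.argmax(corr)) - (len(t) - 1)
phase = 2 * np.pi * f * lag / fs
print("estimated phase shift [deg]:", round(float(np.degrees(phase)), 1))  # close to 45
```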

  15. Identifying climate analogues for precipitation extremes for Denmark based on RCM simulations from the ENSEMBLES database.

    Science.gov (United States)

    Arnbjerg-Nielsen, K; Funder, S G; Madsen, H

    2015-01-01

    Climate analogues, also denoted Space-For-Time, may be used to identify regions where the present climatic conditions resemble the conditions of a past or future state of another location or region, based on robust climate variable statistics in combination with projections of how these statistics change over time. The study focuses on assessing climate analogues for Denmark based on a current climate data set (E-OBS observations) as well as the ENSEMBLES database of future climates, with the aim of projecting future precipitation extremes. The local present precipitation extremes are assessed by means of intensity-duration-frequency curves for urban drainage design for the relevant locations, being France, the Netherlands, Belgium, Germany, the United Kingdom, and Denmark. Based on this approach, projected increases in extreme precipitation by 2100 of 9% and 21% are expected for 2- and 10-year return periods, respectively. The results should be interpreted with caution, as the best region to represent future conditions for Denmark is the coastal area of Northern France, for which only little information is available with respect to present precipitation extremes.

  16. Weather forecasting based on hybrid neural model

    Science.gov (United States)

    Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.

    2017-11-01

    Making deductions and predictions about climate has been a challenge throughout mankind's history. Accurate meteorological guidance helps to foresee and handle problems well in time. Different strategies using various machine learning techniques have been investigated in reported forecasting systems. The current research treats climate as a major challenge for machine information mining and deduction. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model ensures precise forecasting owing to the specialty of climate anticipating frameworks. The study concentrates on data representing Saudi Arabia weather forecasting. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. The correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for forecast accuracy measurement. Individually, MLP forecasting results are better than RBF's; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
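    One simple way to hybridize MLP and RBF outputs is to average the two members' predictions. In this sketch, scikit-learn's MLPRegressor is paired with an RBF-kernel ridge regressor as a stand-in for an RBF network, on invented "weather" features; the paper's data, architecture, and combination rule are not reproduced:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(7)

# Synthetic features (e.g. dew point, temperatures, humidity) and target.
X = rng.normal(size=(300, 5))
y = X @ np.array([0.5, -0.2, 0.3, 0.1, 0.4]) + 0.1 * rng.normal(size=300)

X_train, X_test = X[:250], X[250:]
y_train, y_test = y[:250], y[250:]

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X_train, y_train)
rbf = KernelRidge(kernel="rbf", alpha=0.1).fit(X_train, y_train)  # RBF stand-in

# Hybrid forecast: simple average of the two members' predictions.
hybrid = 0.5 * (mlp.predict(X_test) + rbf.predict(X_test))
rmse = float(np.sqrt(np.mean((hybrid - y_test) ** 2)))
print("hybrid RMSE:", round(rmse, 3))
```

    A weighted average, with weights tuned on validation data, is the natural next refinement of this combination.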

  17. A study on reducing update frequency of the forecast samples in the ensemble-based 4DVar data assimilation method

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Aimei; Xu, Daosheng [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province; Chinese Academy of Meteorological Sciences, Beijing (China). State Key Lab. of Severe Weather; Qiu, Xiaobin [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province; Tianjin Institute of Meteorological Science (China); Qiu, Chongjian [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province

    2013-02-15

    In the ensemble-based four-dimensional variational assimilation method (SVD-En4DVar), a singular value decomposition (SVD) technique is used to select the leading eigenvectors, and the analysis variables are expressed as an orthogonal basis expansion of the eigenvectors. Experiments with a two-dimensional shallow-water equation model and simulated observations show that the truncation error and rejection of observed signals due to the reduced-dimensional reconstruction of the analysis variables are the major factors that damage the analysis when the ensemble size is not large enough. However, a larger ensemble imposes a daunting computational burden. Experiments with the shallow-water equation model also show that the forecast error covariances remain relatively constant over time. For that reason, we propose an approach that increases the members of the forecast ensemble while reducing the update frequency of the forecast error covariance, in order to increase analysis accuracy and reduce the computational cost. A series of experiments were conducted with the shallow-water equation model to test the efficiency of this approach. The experimental results indicate that this approach is promising. Further experiments with the WRF model show that this approach is also suitable for the real atmospheric data assimilation problem, but the update frequency of the forecast error covariances should not be too low. (orig.)

  18. Face recognition based on improved BP neural network

    Directory of Open Access Journals (Sweden)

    Yue Gaili

    2017-01-01

    Full Text Available In order to improve the recognition rate of face recognition, a face recognition algorithm based on histogram equalization, PCA and a BP neural network is proposed. First, the face image is preprocessed by histogram equalization. Then, the classical PCA algorithm is used to extract the features of the equalized image, extracting the principal components of the image. The BP neural network is then trained on the training samples. An improved weight adjustment method is used to train the network, because the conventional BP algorithm suffers from slow convergence, a tendency to fall into local minima, and a lengthy training process. Finally, the trained BP neural network classifies and identifies the face images from the test sample input, and the recognition rate is obtained. Simulation experiments on face images from the ORL database show that the improved BP neural network face recognition method can effectively improve the recognition rate of face recognition.
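    The pipeline described—histogram equalization, then PCA, then a neural network classifier—can be sketched with numpy and scikit-learn. The "faces" here are synthetic images, and MLPClassifier stands in for the paper's improved BP network:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def hist_equalize(img):
    """Histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

# Synthetic 16x16 "faces": class 1 carries a bright patch in one corner.
n, size = 100, 16
labels = rng.integers(0, 2, n)
images = rng.normal(loc=100, scale=20, size=(n, size, size))
images[labels == 1, :8, :8] += 80
images = images.clip(0, 255).astype(np.uint8)

# 1) Preprocess by histogram equalization, 2) extract principal components,
# 3) train the (stand-in) neural network classifier.
eq = np.stack([hist_equalize(im) for im in images]).reshape(n, -1).astype(float)
feats = PCA(n_components=10).fit_transform(eq)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
clf.fit(feats[:80], labels[:80])
print("test accuracy:", clf.score(feats[80:], labels[80:]))
```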

  19. Target recognition based on convolutional neural network

    Science.gov (United States)

    Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian

    2017-11-01

    An important part of object target recognition is feature extraction, which can be divided into manual feature extraction and automatic feature extraction. The traditional neural network is one of the automatic feature extraction methods, but its global connectivity leads to a high possibility of over-fitting. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method, trained as a layer-by-layer convolutional neural network (CNN), which extracts features from lower layers to higher layers. The resulting features are more discriminative, which benefits object target recognition.

  20. An Integrated Scenario Ensemble-Based Framework for Hurricane Evacuation Modeling: Part 1-Decision Support System.

    Science.gov (United States)

    Davidson, Rachel A; Nozick, Linda K; Wachtendorf, Tricia; Blanton, Brian; Colle, Brian; Kolar, Randall L; DeYoung, Sarah; Dresback, Kendra M; Yi, Wenqi; Yang, Kun; Leonardo, Nicholas

    2018-03-30

    This article introduces a new integrated scenario-based evacuation (ISE) framework to support hurricane evacuation decision making. It explicitly captures the dynamics, uncertainty, and human-natural system interactions that are fundamental to the challenge of hurricane evacuation, but have not been fully captured in previous formal evacuation models. The hazard is represented with an ensemble of probabilistic scenarios, population behavior with a dynamic decision model, and traffic with a dynamic user equilibrium model. The components are integrated in a multistage stochastic programming model that minimizes risk and travel times to provide a tree of evacuation order recommendations and an evaluation of the risk and travel time performance for that solution. The ISE framework recommendations offer an advance in the state of the art because they: (1) are based on an integrated hazard assessment (designed to ultimately include inland flooding), (2) explicitly balance the sometimes competing objectives of minimizing risk and minimizing travel time, (3) offer a well-hedged solution that is robust under the range of ways the hurricane might evolve, and (4) leverage the substantial value of increasing information (or decreasing degree of uncertainty) over the course of a hurricane event. A case study for Hurricane Isabel (2003) in eastern North Carolina is presented to demonstrate how the framework is applied, the type of results it can provide, and how it compares to available methods of a single scenario deterministic analysis and a two-stage stochastic program. © 2018 Society for Risk Analysis.

  1. A DDoS Attack Detection Method Based on Hybrid Heterogeneous Multiclassifier Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Bin Jia

    2017-01-01

    Full Text Available The explosive growth and diversity of network traffic on the Internet have brought new and severe challenges to DDoS attack detection. To achieve a higher True Negative Rate (TNR), accuracy, and precision, and to guarantee the robustness, stability, and universality of the detection system, in this paper we propose a DDoS attack detection method based on hybrid heterogeneous multiclassifier ensemble learning and design a heuristic detection algorithm based on Singular Value Decomposition (SVD) to construct our detection system. Experimental results show that our detection method performs excellently in TNR, accuracy, and precision; our algorithm therefore has good detection performance for DDoS attacks. Comparisons with Random Forest, k-Nearest Neighbor (k-NN), and Bagging as the component classifiers, each used alone both with SVD and without SVD, show that our model is superior to state-of-the-art attack detection techniques in generalization ability, detection stability, and overall detection performance.
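The detection scheme the abstract describes (SVD-based processing feeding a heterogeneous classifier ensemble) can be sketched roughly as follows, using synthetic data and scikit-learn classifiers as stand-ins. The paper's actual heuristic SVD algorithm and traffic feature set are not reproduced; projecting onto the top singular vectors and taking a simple majority vote are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# synthetic stand-in for traffic features; labels 1 = attack, 0 = normal
X, y = make_classification(n_samples=600, n_features=20, random_state=0)

# SVD-based dimensionality reduction (illustrative analogue of the SVD step)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Xr = X @ Vt[:8].T                # project onto the top-8 right singular vectors

Xtr, Xte, ytr, yte = train_test_split(Xr, y, random_state=0)
members = [RandomForestClassifier(random_state=0), KNeighborsClassifier(), SVC()]
preds = np.array([m.fit(Xtr, ytr).predict(Xte) for m in members])
vote = (preds.mean(axis=0) > 0.5).astype(int)   # simple majority vote of 3 members
acc = (vote == yte).mean()
print(round(acc, 3))
```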

  2. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    International Nuclear Information System (INIS)

    Li, Ning; Liang, Caiping; Yang, Jianguo; Zhou, Rui

    2016-01-01

    Knock is one of the major constraints to improving the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determine the knock characteristics in SI engines. By adding a uniformly distributed and finite white Gaussian noise, the EEMD can preserve signal continuity across different scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibility of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured from the combustion chamber and the vibration signal measured from the cylinder head is investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and the vibration signal, even in the initial stage of knock. Finally, by comparing the application results with those obtained by the short-time Fourier transform (STFT), the Wigner–Ville distribution (WVD) and the discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated. (paper)
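The core EEMD idea, adding independent white-noise realizations in each trial and averaging over the ensemble so that the injected noise cancels while the signal content survives, can be illustrated numerically. This sketch deliberately omits the EMD sifting step itself and only demonstrates why the noise-ensemble average converges back toward the underlying component (the noise standard deviation shrinks roughly as 1/sqrt(N) over N trials).

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 50 * t)   # stand-in for one oscillatory component

# EEMD principle: each trial adds a fresh white-noise realization before
# decomposition; averaging the trials cancels the added noise
trials = np.stack([signal + 0.5 * rng.normal(size=t.size) for _ in range(200)])
averaged = trials.mean(axis=0)

err_single = (trials[0] - signal).std()   # noise level of one noisy trial
err_avg = (averaged - signal).std()       # residual noise after averaging
print(err_avg < err_single / 5)
```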

  3. A rapid, ensemble and free energy based method for engineering protein stabilities.

    Science.gov (United States)

    Naganathan, Athi N

    2013-05-02

    Engineering the conformational stabilities of proteins through mutations has immense potential in biotechnological applications. It is, however, an inherently challenging problem given the weak noncovalent nature of the stabilizing interactions. In this regard, we present here a robust and fast strategy to engineer protein stabilities through mutations involving charged residues using a structure-based statistical mechanical model that accounts for the ensemble nature of folding. We validate the method by predicting the absolute changes in stability for 138 experimental mutations from 16 different proteins and enzymes with a correlation of 0.65 and, importantly, with a success rate of 81%. Multiple point mutants are predicted with a higher success rate (90%) that is validated further by comparing mesophile-thermophile protein pairs. In parallel, we devise a methodology to rapidly engineer mutations in silico which we benchmark against experimental mutations of ubiquitin (correlation of 0.95) and check for its feasibility on a larger therapeutic protein, DNase I. We expect the method to be of importance as a first and rapid step to screen for protein mutants with specific stability in the biotechnology industry, in the construction of stability maps at the residue level (i.e., hot spots), and as a robust tool to probe for mutations that enhance the stability of protein-based drugs.

  4. Convolutional Neural Network Based on Extreme Learning Machine for Maritime Ships Recognition in Infrared Images.

    Science.gov (United States)

    Khellal, Atmane; Ma, Hongbin; Fei, Qing

    2018-05-09

    The success of Deep Learning models, notably convolutional neural networks (CNNs), makes them the favorable solution for object recognition systems in both visible and infrared domains. However, the lack of training data in the case of maritime ships leads to poor performance due to overfitting. In addition, the back-propagation algorithm used to train CNNs is very slow and requires tuning many hyperparameters. To overcome these weaknesses, we introduce a new approach fully based on the Extreme Learning Machine (ELM) to learn useful CNN features and perform a fast and accurate classification, which is suitable for infrared-based recognition systems. The proposed approach combines an ELM-based learning algorithm to train the CNN for discriminative feature extraction and an ELM-based ensemble for classification. The experimental results on the VAIS dataset, the largest dataset of maritime ships, confirm that the proposed approach outperforms state-of-the-art models in terms of generalization performance and training speed. For instance, the proposed model is up to 950 times faster than the traditional back-propagation based training of convolutional neural networks, primarily for low-level feature extraction.
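The ELM training scheme mentioned above (a random, never-trained hidden layer, with output weights solved in closed form by a pseudo-inverse, so no back-propagation is needed) can be sketched as follows. This is a generic single-hidden-layer ELM on the scikit-learn digits data, not the paper's CNN-ELM hybrid; the hidden size and tanh activation are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X / 16.0, y, random_state=0)

rng = np.random.default_rng(0)
n_hidden = 500
W = rng.normal(size=(Xtr.shape[1], n_hidden))   # random hidden weights, never trained
b = rng.normal(size=n_hidden)

def hidden(X):
    return np.tanh(X @ W + b)                   # random nonlinear feature map

# one-hot targets; output weights solved in closed form (no back-propagation)
T = np.eye(10)[ytr]
beta = np.linalg.pinv(hidden(Xtr)) @ T

pred = (hidden(Xte) @ beta).argmax(axis=1)
acc = (pred == yte).mean()
print(round(acc, 3))
```

Because the only "training" is one pseudo-inverse, fitting is orders of magnitude faster than iterative back-propagation, which is the speed advantage the abstract reports.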

  5. Neural bases of congenital amusia in tonal language speakers.

    Science.gov (United States)

    Zhang, Caicai; Peng, Gang; Shao, Jing; Wang, William S-Y

    2017-03-01

    Congenital amusia is a lifelong neurodevelopmental disorder of fine-grained pitch processing. In this fMRI study, we examined the neural bases of congenital amusia in speakers of a tonal language - Cantonese. Previous studies on non-tonal language speakers suggest that the neural deficits of congenital amusia lie in the music-selective neural circuitry in the right inferior frontal gyrus (IFG). However, it is unclear whether this finding generalizes to congenital amusics in tonal languages. Tonal language experience has been reported to shape the neural processing of pitch, which raises the question of how tonal language experience affects the neural bases of congenital amusia. To investigate this question, we examined the neural circuitries subserving the processing of relative pitch intervals in pitch-matched Cantonese level tone and musical stimuli in 11 Cantonese-speaking amusics and 11 musically intact controls. Cantonese-speaking amusics exhibited abnormal brain activities in a widely distributed neural network during the processing of lexical tone and musical stimuli. Whereas the controls exhibited significant activation in the right superior temporal gyrus (STG) in the lexical tone condition and in the cerebellum regardless of the lexical tone and music conditions, no activation was found in the amusics in those regions, which likely reflects a dysfunctional neural mechanism of relative pitch processing in the amusics. Furthermore, the amusics showed abnormally strong activation of the right middle frontal gyrus and precuneus when the pitch stimuli were repeated, which presumably reflects deficits in attending to repeated pitch stimuli or encoding them into working memory. No significant group difference was found in the right IFG in either the whole-brain analysis or the region-of-interest analysis. These findings imply that the neural deficits in tonal language speakers might differ from those in non-tonal language speakers, and overlap partly with the

  6. Ensembl 2004.

    Science.gov (United States)

    Birney, E; Andrews, D; Bevan, P; Caccamo, M; Cameron, G; Chen, Y; Clarke, L; Coates, G; Cox, T; Cuff, J; Curwen, V; Cutts, T; Down, T; Durbin, R; Eyras, E; Fernandez-Suarez, X M; Gane, P; Gibbins, B; Gilbert, J; Hammond, M; Hotz, H; Iyer, V; Kahari, A; Jekosch, K; Kasprzyk, A; Keefe, D; Keenan, S; Lehvaslaiho, H; McVicker, G; Melsopp, C; Meidl, P; Mongin, E; Pettett, R; Potter, S; Proctor, G; Rae, M; Searle, S; Slater, G; Smedley, D; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Storey, R; Ureta-Vidal, A; Woodwark, C; Clamp, M; Hubbard, T

    2004-01-01

    The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organize biology around the sequences of large genomes. It is a comprehensive and integrated source of annotation of large genome sequences, available via interactive website, web services or flat files. As well as being one of the leading sources of genome annotation, Ensembl is an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements. The facilities of the system range from sequence analysis to data storage and visualization and installations exist around the world both in companies and at academic sites. With a total of nine genome sequences available from Ensembl and more genomes to follow, recent developments have focused mainly on closer integration between genomes and external data.

  7. Based on BP Neural Network Stock Prediction

    Science.gov (United States)

    Liu, Xiangwei; Ma, Xin

    2012-01-01

    The stock market features high profit and high risk, so stock market analysis and prediction research has attracted much attention. The stock price trend is a complex nonlinear function, so the price has a certain predictability. This article mainly uses an improved BP neural network (BPNN) to set up a stock market prediction model, and…

  8. Object-Based Change Detection in Urban Areas from High Spatial Resolution Images Based on Multiple Features and Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2018-02-01

    Full Text Available To improve the accuracy of change detection in urban areas using bi-temporal high-resolution remote sensing images, a novel object-based change detection scheme combining multiple features and ensemble learning is proposed in this paper. Image segmentation is conducted to determine the objects in the bi-temporal images separately. Subsequently, three kinds of object features, i.e., spectral, shape and texture, are extracted. Using the image differencing process, a difference image is generated and used as the input for nonlinear supervised classifiers, including k-nearest neighbor, support vector machine, extreme learning machine and random forest. Finally, the results of the multiple classifiers are integrated using an ensemble rule called weighted voting to generate the final change detection result. Experimental results on two pairs of real high-resolution remote sensing datasets demonstrate that the proposed approach outperforms traditional methods in terms of overall accuracy and generates change detection maps with a higher number of homogeneous regions in urban areas. Moreover, the influences of the segmentation scale and the feature selection strategy on the change detection performance are also analyzed and discussed.
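The weighted-voting ensemble rule can be sketched as follows on synthetic stand-in features for the difference image. The weighting scheme used here (each member's training accuracy) is an assumption for illustration, the data are synthetic, and the extreme learning machine member is omitted in favor of three scikit-learn classifiers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# synthetic difference-image features; 1 = changed object, 0 = unchanged
X, y = make_classification(n_samples=800, n_features=12, random_state=1)
Xtr, Xval, ytr, yval = train_test_split(X, y, test_size=0.3, random_state=1)

members = [KNeighborsClassifier(), SVC(), RandomForestClassifier(random_state=1)]
weights, preds = [], []
for m in members:
    m.fit(Xtr, ytr)
    weights.append(m.score(Xtr, ytr))   # weight each member by its (training) accuracy
    preds.append(m.predict(Xval))

score = np.average(np.array(preds), axis=0, weights=weights)
final = (score > 0.5).astype(int)       # weighted vote
acc = (final == yval).mean()
print(round(acc, 3))
```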

  9. A preclustering-based ensemble learning technique for acute appendicitis diagnoses.

    Science.gov (United States)

    Lee, Yen-Hsien; Hu, Paul Jen-Hwa; Cheng, Tsang-Hsiang; Huang, Te-Chia; Chuang, Wei-Yao

    2013-06-01

    Acute appendicitis is a common medical condition, whose effective, timely diagnosis can be difficult. A missed diagnosis not only puts the patient in danger but also requires additional resources for corrective treatments. An acute appendicitis diagnosis constitutes a classification problem, for which a further fundamental challenge pertains to the skewed outcome class distribution of instances in the training sample. A preclustering-based ensemble learning (PEL) technique aims to address the associated imbalanced sample learning problems and thereby support the timely, accurate diagnosis of acute appendicitis. The proposed PEL technique employs undersampling to reduce the number of majority-class instances in a training sample, uses preclustering to group similar majority-class instances into multiple groups, and selects from each group representative instances to create more balanced samples. The PEL technique thereby reduces potential information loss from random undersampling. It also takes advantage of ensemble learning to improve performance. We empirically evaluate this proposed technique with 574 clinical cases obtained from a comprehensive tertiary hospital in southern Taiwan, using several prevalent techniques and a salient scoring system as benchmarks. The comparative results show that PEL is more effective and less biased than any benchmarks. The proposed PEL technique seems more sensitive to identifying positive acute appendicitis than the commonly used Alvarado scoring system and exhibits higher specificity in identifying negative acute appendicitis. In addition, the sensitivity and specificity values of PEL appear higher than those of the investigated benchmarks that follow the resampling approach. Our analysis suggests PEL benefits from the more representative majority-class instances in the training sample. According to our overall evaluation results, PEL records the best overall performance, and its area under the curve measure reaches 0.619. 
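The preclustering-based undersampling step, grouping majority-class instances and keeping the representatives nearest each cluster centroid instead of sampling at random, might look roughly like this on synthetic data. The cluster count and representatives-per-cluster are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# imbalanced sample: 500 negatives (majority class), 50 positives
neg = rng.normal(0, 1, size=(500, 4))
pos = rng.normal(2, 1, size=(50, 4))

# precluster the majority class, then keep the instances closest to each
# centroid as representatives, instead of undersampling at random
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(neg)
per_cluster = 5                     # 10 clusters x 5 reps ~= 50 negatives kept
reps = []
for c in range(10):
    members = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(neg[members] - km.cluster_centers_[c], axis=1)
    reps.extend(members[np.argsort(d)[:per_cluster]])

X_bal = np.vstack([neg[reps], pos])
y_bal = np.r_[np.zeros(len(reps)), np.ones(len(pos))]
print(len(reps), len(pos))          # roughly balanced classes
```

Keeping centroid-near representatives preserves the spread of the majority class, which is the information-loss reduction the abstract attributes to PEL.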

  10. Development of web-based services for an ensemble flood forecasting and risk assessment system

    Science.gov (United States)

    Yaw Manful, Desmond; He, Yi; Cloke, Hannah; Pappenberger, Florian; Li, Zhijia; Wetterhall, Fredrik; Huang, Yingchun; Hu, Yuzhong

    2010-05-01

    Flooding is a widespread and devastating natural disaster worldwide. Floods that took place in the last decade in China were ranked the worst amongst recorded floods worldwide in terms of the number of human fatalities and economic losses (Munich Re-Insurance). Rapid economic development and population expansion into low-lying flood plains have worsened the situation. Current conventional flood prediction systems in China are suited neither to the perceptible climate variability nor to the rapid pace of urbanization sweeping the country. Flood prediction, from short-term (a few hours) to medium-term (a few days), needs to be revisited and adapted to changing socio-economic and hydro-climatic realities. The latest technology requires implementation of multiple numerical weather prediction systems. The availability of twelve global ensemble weather prediction systems through the ‘THORPEX Interactive Grand Global Ensemble' (TIGGE) offers a good opportunity for an effective state-of-the-art early forecasting system. A prototype of a Novel Flood Early Warning System (NEWS) using the TIGGE database is tested in the Huai River basin in east-central China. It is the first early flood warning system in China that uses the massive TIGGE database, cascaded with river catchment models (the Xinanjiang hydrologic model and a 1-D hydraulic model), to predict river discharge and flood inundation. The NEWS algorithm is also designed to provide web-based services to a broad spectrum of end-users. The latter presents challenges, as the databases and proprietary codes reside in different locations and converge at dissimilar times. NEWS will thus make use of a ready-to-run grid system that makes distributed computing and data resources available in a seamless and secure way. The ability to run on different operating systems and to provide an interface accessible to a broad spectrum of end-users is an additional requirement.
The aim is to achieve robust interoperability

  11. Predicting Hepatotoxicity of Drug Metabolites Via an Ensemble Approach Based on Support Vector Machine

    Science.gov (United States)

    Lu, Yin; Liu, Lili; Lu, Dong; Cai, Yudong; Zheng, Mingyue; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian

    2017-11-20

    Drug-induced liver injury (DILI) is a major cause of drug withdrawal. The chemical properties of the drug, especially its metabolites, play key roles in DILI. Our goal is to construct a QSAR model to predict drug hepatotoxicity based on drug metabolites. 64 hepatotoxic drug metabolites and 3,339 non-hepatotoxic drug metabolites were gathered from the MDL Metabolite Database. Considering the imbalance of the dataset, we randomly split the negative samples and combined each portion with all the positive samples to construct individually balanced datasets for training independent classifiers. Then, we adopted an ensemble approach to make predictions based on the results of all individual classifiers and applied the minimum Redundancy Maximum Relevance (mRMR) feature selection method to select the molecular descriptors. Eventually, for the drugs in the external test set, a Bayesian inference method was used to predict the hepatotoxicity of a drug based on its metabolites. The model showed an average balanced accuracy of 78.47%, sensitivity of 74.17%, and specificity of 82.77%. Five molecular descriptors characterizing molecular polarity, intramolecular bonding strength, and molecular frontier orbital energy were obtained. When predicting the hepatotoxicity of a drug based on all its metabolites, the sensitivity, specificity and balanced accuracy were 60.38%, 70.00%, and 65.19%, respectively, indicating that this method is useful for identifying the hepatotoxicity of drugs. We developed an in silico model to predict the hepatotoxicity of drug metabolites. Moreover, Bayesian inference was applied to predict the hepatotoxicity of a drug based on its metabolites, which yielded valuable sensitivity and specificity. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
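The balanced-splitting ensemble described above can be sketched as follows: the negatives are partitioned into portions, each portion is paired with all positives to train one member, and the members' predicted probabilities are averaged. The data are synthetic, the member classifier (logistic regression) and the averaging rule are illustrative assumptions, and the paper's mRMR feature selection and drug-level Bayesian inference are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=(64, 5))    # e.g. 64 hepatotoxic metabolites
neg = rng.normal(-1.0, 1.0, size=(640, 5))  # 10x more non-hepatotoxic ones

# split the negatives into 10 portions; each portion plus all positives
# gives one balanced training set and one member classifier
members = []
for chunk in np.array_split(rng.permutation(neg), 10):
    X = np.vstack([pos, chunk])
    y = np.r_[np.ones(len(pos)), np.zeros(len(chunk))]
    members.append(LogisticRegression().fit(X, y))

def ensemble_prob(x):
    # average the members' probabilities for the positive (hepatotoxic) class
    return np.mean([m.predict_proba(x.reshape(1, -1))[0, 1] for m in members])

print(ensemble_prob(np.full(5, 1.0)) > 0.5, ensemble_prob(np.full(5, -1.0)) < 0.5)
```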

  12. Research on Fault Diagnosis Method Based on Rule Base Neural Network

    Directory of Open Access Journals (Sweden)

    Zheng Ni

    2017-01-01

    Full Text Available The relationship between fault phenomena and fault causes is always nonlinear, which influences the accuracy of fault location, and neural networks are effective in dealing with nonlinear problems. In order to improve the efficiency of uncertain fault diagnosis based on neural networks, a neural network fault diagnosis method based on a rule base is put forward. First, the structure of the BP neural network is built and the learning rule is given. Then, the rule base is built using fuzzy theory. An improved fuzzy neural construction model is designed, in which the calculation methods for the node function and the membership function are also given. Simulation results confirm the effectiveness of this method.

  13. Predicting protein subcellular locations using hierarchical ensemble of Bayesian classifiers based on Markov chains

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2006-06-01

    Full Text Available Abstract Background The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location of a given protein when only its amino acid sequence is known. Although many efforts have been made to predict subcellular location from sequence information only, further research is needed to improve the accuracy of prediction. Results A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets; among them are a Gram-negative bacteria dataset, data for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve the prediction accuracy for classes with few sequences in training and is therefore useful for datasets with an imbalanced distribution of classes. Conclusion This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and competitive with previously reported approaches in terms of prediction accuracy, as empirical results indicate. The code for the software is available upon request.
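A Bayesian classifier based on Markov chain models, the kind of base classifier HensBC combines, can be sketched as follows: estimate one transition matrix per class with Laplace smoothing, then assign a sequence to the class under which its log-likelihood is highest. The toy four-letter alphabet and training sequences are invented for illustration and are far smaller than a real amino-acid setting.

```python
import numpy as np

ALPHABET = "ACDE"                    # toy 4-letter alphabet
idx = {a: i for i, a in enumerate(ALPHABET)}

def transition_matrix(seqs, k=len(ALPHABET), alpha=1.0):
    # maximum-likelihood transition probabilities with Laplace smoothing
    counts = np.full((k, k), alpha)
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[idx[a], idx[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, P):
    return sum(np.log(P[idx[a], idx[b]]) for a, b in zip(seq, seq[1:]))

# two "location classes" with different transition preferences
class0 = ["ACAC", "ACACAC", "CACA"]  # alternates A <-> C
class1 = ["DEDE", "EDEDED", "DEDD"]  # alternates D <-> E
P0, P1 = transition_matrix(class0), transition_matrix(class1)

def classify(seq):
    # Bayes rule with equal priors reduces to comparing log-likelihoods
    return 0 if log_likelihood(seq, P0) > log_likelihood(seq, P1) else 1

print(classify("ACACA"), classify("DEDED"))
```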

  14. Short text sentiment classification based on feature extension and ensemble classifier

    Science.gov (United States)

    Liu, Yang; Zhu, Xie

    2018-05-01

    With the rapid development of Internet social media, mining the emotional tendencies of short texts from the Internet to acquire useful information has attracted the attention of researchers. The commonly used methods can be divided into rule-based classification and statistical machine learning classification. Although micro-blog sentiment analysis has made good progress, shortcomings remain, such as insufficient accuracy and the strong dependency of the sentiment classification effect on the available features. Aiming at the characteristics of Chinese short texts, such as limited information, sparse features, and diverse expressions, this paper expands the original text by mining related semantic information from reviews, forwarding, and other related information. First, this paper uses Word2vec to compute word similarity to extend the feature words, and then uses an ensemble classifier composed of SVM, KNN and HMM to analyze the sentiment of micro-blog short texts. The experimental results show that the proposed method makes good use of comment and forwarding information to extend the original features. Compared with the traditional method, the accuracy, recall and F1 value obtained by this method are improved.

  15. A CN-Based Ensembled Hydrological Model for Enhanced Watershed Runoff Prediction

    Directory of Open Access Journals (Sweden)

    Muhammad Ajmal

    2016-01-01

    Full Text Available A major structural inconsistency of the traditional curve number (CN) model is its dependence on an unstable fixed initial abstraction, which normally results in sudden jumps in runoff estimation. Likewise, the lack of a pre-storm soil moisture accounting (PSMA) procedure is another inherent limitation of the model. To circumvent those problems, we used a variable initial abstraction after ensembling the traditional CN model and a French four-parameter (GR4J) model to better quantify direct runoff from ungauged watersheds. To mimic the natural rainfall-runoff transformation at the watershed scale, our new parameterization designates intrinsic parameters and uses a simple structure. It exhibited more accurate and consistent results than earlier methods in evaluating data from 39 forest-dominated watersheds, both small and large. In addition, based on different performance evaluation indicators, the runoff reproduction results show that the proposed model produced more consistent results for dry, normal, and wet watershed conditions than the other models used in this study.
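For reference, the traditional fixed-initial-abstraction CN model that the paper sets out to improve computes direct runoff Q from rainfall depth P as Q = (P - Ia)^2 / (P - Ia + S), with retention S derived from the curve number and Ia = 0.2 S by the usual convention. A minimal sketch (depths in mm); the hard cutoff at P = Ia is exactly the "sudden jump" behavior the abstract criticizes.

```python
def scs_runoff(P, CN, lam=0.2):
    """Classic fixed-initial-abstraction CN runoff (all depths in mm)."""
    S = 25400.0 / CN - 254.0   # potential maximum retention from the curve number
    Ia = lam * S               # fixed initial abstraction (the criticized assumption)
    if P <= Ia:
        return 0.0             # no runoff at all until P exceeds Ia
    return (P - Ia) ** 2 / (P - Ia + S)

# runoff grows with rainfall depth and with CN (less permeable watershed)
print(round(scs_runoff(100, 70), 1), round(scs_runoff(100, 90), 1))
```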

  16. Analyzing the uncertainty of ensemble-based gridded observations in land surface simulations and drought assessment

    Science.gov (United States)

    Ahmadalipour, Ali; Moradkhani, Hamid

    2017-12-01

    Hydrologic modeling is one of the primary tools utilized for drought monitoring and drought early warning systems. Several sources of uncertainty in hydrologic modeling have been addressed in the literature. However, few studies have assessed the uncertainty of gridded observation datasets from a drought monitoring perspective. This study provides a hydrologic-modeling-oriented analysis of gridded observation data uncertainties over the Pacific Northwest (PNW) and their implications for drought assessment. We utilized a recently developed 100-member ensemble-based observed forcing dataset to simulate hydrologic fluxes at 1/8° spatial resolution using the Variable Infiltration Capacity (VIC) model, and compared the results with a deterministic observation. Meteorological and hydrological droughts are studied at multiple timescales over the basin, and seasonal long-term trends and variations of drought extent are investigated for each case. Results reveal large uncertainty in the observed datasets at the monthly timescale, with systematic differences in temperature records, mainly due to different lapse rates. This uncertainty results in large disparities in drought characteristics. In general, an increasing trend is found for winter drought extent across the PNW. Furthermore, a ∼3% decrease per decade is detected for snow water equivalent (SWE) over the PNW, with the region being more susceptible to SWE variations of the northern Rockies than the western Cascades. The agricultural areas of southern Idaho demonstrate a decreasing trend of natural soil moisture as a result of precipitation decline, which implies a higher appeal for anthropogenic water storage and irrigation systems.

  17. A Link-Based Cluster Ensemble Approach For Improved Gene Expression Data Analysis

    Directory of Open Access Journals (Sweden)

    P.Balaji

    2015-01-01

    Full Text Available Abstract It is difficult to select the most suitable and effective clustering algorithm and dataset configuration for a given set of gene expression data, because there are a huge number of candidate methods and a huge number of gene expression profiles. At present many researchers prefer to use hierarchical clustering in its different forms, but this is not always optimal. Cluster ensemble research can solve this type of problem by automatically merging multiple data partitions, drawn from a wide range of different clusterings of any dimensions, to improve both the quality and the robustness of the clustering result. However, many existing ensemble approaches use an association matrix to condense sample-cluster co-occurrence statistics, so relations within the ensemble are encapsulated only at a raw level, while those existing among clusters are discarded. Recovering these missing associations can greatly expand the capability of such ensemble methodologies for microarray data clustering. We propose a general K-means cluster ensemble approach for clustering general categorical data into a required number of partitions.
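The association (co-association) matrix mentioned above, which records how often each pair of samples lands in the same cluster across the base clusterings, can be sketched as follows on a tiny invented ensemble of three partitions:

```python
import numpy as np

def co_association(partitions):
    """Fraction of base clusterings in which each pair of samples co-occurs."""
    n = len(partitions[0])
    M = np.zeros((n, n))
    for labels in partitions:
        labels = np.asarray(labels)
        # outer comparison marks every pair sharing a cluster label
        M += (labels[:, None] == labels[None, :]).astype(float)
    return M / len(partitions)

# three base clusterings of 5 samples, e.g. from different algorithms/runs
partitions = [[0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1],
              [1, 1, 0, 0, 0]]
M = co_association(partitions)
print(M[0, 1], M[2, 3])   # samples 0,1 always co-occur; samples 2,3 in 2 of 3 runs
```

A final consensus partition is then typically obtained by clustering M itself, e.g. by thresholding it or feeding 1 - M to a standard algorithm as a distance matrix.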

  18. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Scheid Anika

    2012-07-01

    Full Text Available Abstract Background Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. Results In this work, we will consider the SCFG-based approach in order to analyze how the quality of generated sample sets and the corresponding prediction accuracy change when different degrees of disturbance are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly; instead it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst

  19. The neural bases for valuing social equality.

    Science.gov (United States)

    Aoki, Ryuta; Yomogida, Yukihito; Matsumoto, Kenji

    2015-01-01

    The neural basis of how humans value and pursue social equality has become a major topic in social neuroscience research. Although recent studies have identified a set of brain regions and possible mechanisms that are involved in the neural processing of equality of outcome between individuals, how the human brain processes equality of opportunity remains unknown. In this review article, first we describe the importance of the distinction between equality of outcome and equality of opportunity, which has been emphasized in philosophy and economics. Next, we discuss possible approaches for empirical characterization of human valuation of equality of opportunity vs. equality of outcome. Understanding how these two concepts are distinct and interact with each other may provide a better explanation of complex human behaviors concerning fairness and social equality. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  20. Assessing an ensemble Kalman filter inference of Manning’s n coefficient of an idealized tidal inlet against a polynomial chaos-based MCMC

    KAUST Repository

    Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim

    2017-01-01

    an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model

  1. A study of fuzzy logic ensemble system performance on face recognition problem

    Science.gov (United States)

    Polyakova, A.; Lipinskiy, L.

    2017-02-01

    Some problems are difficult to solve by using a single intelligent information technology (IIT). An ensemble of various data mining (DM) techniques is a set of models, each of which is able to solve the problem by itself, but whose combination increases the efficiency of the system as a whole. Using IIT ensembles can improve the reliability and efficiency of the final decision, since the approach draws on the diversity of its components. A new method for designing ensembles of intelligent information technologies is considered in this paper. It is based on fuzzy logic and is designed to solve classification and regression problems. The ensemble consists of several data mining algorithms: artificial neural networks, support vector machines and decision trees. These algorithms and their ensemble have been tested on face recognition problems. Principal component analysis (PCA) is used for feature selection.
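A minimal NumPy sketch of the two steps named in the abstract, PCA feature reduction followed by a weighted combination of base learners. The threshold "learners" and the accuracy-based weighting below are illustrative stand-ins for the paper's ANN/SVM/tree members and its fuzzy-logic combiner:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy face-feature data: 100 samples, 6 features; labels depend on feature 0.
X = rng.normal(size=(100, 6))
y = (X[:, 0] > 0).astype(int)

# Step 1: PCA via SVD, keeping two principal components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt[:2].T

# Step 2: three hypothetical base learners stand in for the ANN/SVM/tree
# members; each is just a threshold rule on a projected feature.
preds = np.stack([
    (X_pca[:, 0] > 0).astype(int),
    (X_pca[:, 0] > 0.1).astype(int),
    (X_pca[:, 1] > 0).astype(int),
])

# Step 3: soft combination weighted by each member's training accuracy
# (a crude stand-in for the paper's fuzzy-logic combiner).
weights = (preds == y).mean(axis=1)
ensemble = (weights @ preds / weights.sum() > 0.5).astype(int)
ensemble_acc = (ensemble == y).mean()
```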

  2. Larger bases and mixed analog/digital neural nets

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    The paper overviews results dealing with the approximation capabilities of neural networks, and bounds on the size of threshold gate circuits. Based on an explicit numerical algorithm for Kolmogorov's superpositions, the authors show that minimum-size neural networks--for implementing any Boolean function--have the identity function as the activation function. The paper ends with conclusions and several comments on the required precision.

  3. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before being transmitted to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component yields a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. 
Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
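The selection index named in the abstract can be sketched directly. The decomposition itself is mocked here, since a real EEMD run (e.g. with the PyEMD package) is outside the scope of this sketch; the "best" noise amplitude of 0.2 is fictitious and exists only so the selection has something to find:

```python
import numpy as np

def relative_rmse(signal, reconstruction):
    """Relative root-mean-square error between a signal and the sum of its
    extracted components; smaller means a more faithful decomposition."""
    err = signal - reconstruction
    return np.sqrt(np.mean(err ** 2)) / np.sqrt(np.mean(signal ** 2))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)

# Stand-in for EEMD: a real run would return IMFs per candidate noise
# level; here each level yields a mock reconstruction whose error grows
# with distance from the fictitious "best" amplitude of 0.2.
noise_levels = [0.05, 0.1, 0.2, 0.4]
scores = {a: relative_rmse(signal, signal + (a - 0.2) * rng.normal(size=t.size))
          for a in noise_levels}
best_level = min(scores, key=scores.get)   # level with the smallest index
```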

  4. Genetic learning in rule-based and neural systems

    Science.gov (United States)

    Smith, Robert E.

    1993-01-01

    The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic algorithms (GAs) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GAs have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GAs. GAs are then related to a class of rule-based machine learning systems called learning classifier systems (LCSs). An LCS implements a low-level production system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function: it is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. The presentation concludes by discussing the implications of using GAs in ecological search problems that arise in neural and fuzzy systems.

  5. Ensemble methods for handwritten digit recognition

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Liisberg, Christian; Salamon, P.

    1992-01-01

    Neural network ensembles are applied to handwritten digit recognition. The individual networks of the ensemble are combinations of sparse look-up tables (LUTs) with random receptive fields. It is shown that the consensus of a group of networks outperforms the best individual of the ensemble… It is further shown that it is possible to estimate the ensemble performance as well as the learning curve on a medium-size database. In addition the authors present preliminary analysis of experiments on a large database and show that state-of-the-art performance can be obtained using the ensemble approach… by optimizing the receptive fields. It is concluded that it is possible to improve performance significantly by introducing moderate-size ensembles; in particular, a 20-25% improvement has been found. The ensemble random LUTs, when trained on a medium-size database, reach a performance (without rejects) of 94…
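The consensus effect described above can be illustrated with simulated member classifiers. The 0.90 per-network accuracy and the 20-member ensemble size are illustrative numbers loosely echoing the abstract, not its measured values:

```python
import numpy as np

rng = np.random.default_rng(2)
n_nets, n_samples, n_classes = 20, 500, 10
true_labels = rng.integers(0, n_classes, size=n_samples)

# Each member network is simulated as an independent classifier that is
# correct with probability 0.90; errors pick a random wrong class.
correct = rng.random((n_nets, n_samples)) < 0.90
noise = rng.integers(1, n_classes, size=(n_nets, n_samples))
votes = np.where(correct, true_labels, (true_labels + noise) % n_classes)

# Consensus: plurality vote across the ensemble for each sample.
consensus = np.array([np.bincount(votes[:, i], minlength=n_classes).argmax()
                      for i in range(n_samples)])

acc_individual = correct.mean()                 # average member accuracy
acc_consensus = (consensus == true_labels).mean()
```

With independent errors spread over many classes, the plurality vote is almost always right even though each member errs 10% of the time; correlated errors would shrink this gain.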

  6. Numeral eddy current sensor modelling based on genetic neural network

    International Nuclear Information System (INIS)

    Yu Along

    2008-01-01

    This paper presents a method for modelling a numeral eddy current sensor based on a genetic neural network, used to address the sensor's nonlinearity. The principle and algorithms of the genetic neural network are introduced. In this method, the nonlinear model parameters of the numeral eddy current sensor are optimized by a genetic neural network (GNN) according to measurement data, so the method retains both the global searching ability of the genetic algorithm and the good local searching ability of the neural network. The nonlinear model has the advantages of strong robustness, on-line modelling and high precision. The maximum nonlinearity error can be reduced to 0.037% by using the GNN, whereas the maximum nonlinearity error is 0.075% using the least-squares method

  7. Wind power prediction based on genetic neural network

    Science.gov (United States)

    Zhang, Suhan

    2017-04-01

    The scale of grid-connected wind farms keeps increasing. To ensure the stability of power system operation, make reasonable scheduling schemes and improve the competitiveness of wind farms in the electricity generation market, it is important to accurately forecast short-term wind power. To reduce the influence of the nonlinear relationship between the disturbance factors and the wind power, an improved prediction model based on a genetic algorithm and a neural network is established. To overcome the shortcomings of the BP neural network, namely long training times and a tendency to fall into local minima, and to improve its accuracy, a genetic algorithm is adopted to optimize the parameters and topology of the neural network. Historical data are used as input to predict short-term wind power. The effectiveness and feasibility of the method are verified using actual data from a wind farm as an example.
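A toy sketch of the core idea, a genetic algorithm searching the weights of a small neural network. The 1-4-1 network, the real-coded GA operators and every hyperparameter below are assumptions for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for wind-power data: output as a nonlinear function of one
# input factor (e.g. normalized wind speed).
x = np.linspace(0.0, 1.0, 64)
y = 0.5 * np.sin(2 * np.pi * x) + 0.5

def mse(w):
    """Fitness of a tiny 1-4-1 network encoded as a flat vector of 13 weights."""
    w1, b1, w2, b2 = w[:4], w[4:8], w[8:12], w[12]
    hidden = np.tanh(np.outer(x, w1) + b1)
    return np.mean((hidden @ w2 + b2 - y) ** 2)

# A minimal real-coded GA: truncation selection, blend crossover, Gaussian
# mutation and elitism.
pop = rng.normal(size=(40, 13))
for _ in range(200):
    fit = np.array([mse(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:20]]           # keep the best half
    a = parents[rng.integers(0, 20, size=40)]
    b = parents[rng.integers(0, 20, size=40)]
    alpha = rng.random((40, 1))
    pop = alpha * a + (1.0 - alpha) * b           # blend crossover
    pop += rng.normal(scale=0.1, size=pop.shape)  # Gaussian mutation
    pop[0] = parents[0]                           # elitism: best survives intact
best_mse = min(mse(ind) for ind in pop)
```

Because the elite individual is carried over unmutated, the best fitness is non-increasing across generations; the paper instead combines the GA with BP-style training, which this sketch omits.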

  8. Learning in neural networks based on a generalized fluctuation theorem

    Science.gov (United States)

    Hayakawa, Takashi; Aoyagi, Toshio

    2015-11-01

    Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. Within the general context of models of neural systems bidirectionally interacting with environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universal laws describing the fluctuation they exhibit and the information they possess have recently been discovered. These laws are termed fluctuation theorems. In the present study, we formulate a theory of learning in neural networks bidirectionally interacting with environments based on the principle of information maximization. Our formulation begins with the introduction of a generalized fluctuation theorem, employing an interpretation appropriate for the present application, which differs from the original thermodynamic interpretation. We analytically and numerically demonstrate that the learning mechanism presented in our theory allows neural networks to efficiently explore their environments and optimally encode information about them.

  9. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    Science.gov (United States)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
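The Kronecker step can be illustrated in NumPy: for a separable correlation model, decomposing the small 1-D factor matrices is equivalent to decomposing the full matrix, which therefore never has to be formed. The EOF truncation and spline-interpolation stages of the paper are omitted, and the grid sizes are illustrative:

```python
import numpy as np

def corr_1d(n, length_scale):
    """1-D Gaussian correlation matrix on a unit-spaced grid."""
    x = np.arange(n, dtype=float)
    return np.exp(-((x[:, None] - x[None, :]) / length_scale) ** 2)

# 1-D correlation matrices in the three coordinate directions.
Cx, Cy, Cz = corr_1d(4, 2.0), corr_1d(5, 3.0), corr_1d(3, 1.5)

# For a separable model, C = Cx ⊗ Cy ⊗ Cz; it is formed here only to
# verify the shortcut against a direct decomposition.
C = np.kron(np.kron(Cx, Cy), Cz)          # 60 x 60

# Eigenvalues of a Kronecker product are the products of the factors'
# eigenvalues, so three small decompositions replace one large one.
lx, ly, lz = (np.linalg.eigvalsh(M) for M in (Cx, Cy, Cz))
lam_kron = np.sort(np.einsum('i,j,k->ijk', lx, ly, lz).ravel())
lam_direct = np.sort(np.linalg.eigvalsh(C))
max_err = np.max(np.abs(lam_kron - lam_direct))
```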

  10. Neural network-based model reference adaptive control system.

    Science.gov (United States)

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.

  11. Analysis of ensemble learning using simple perceptrons based on online learning theory

    Science.gov (United States)

    Miyoshi, Seiji; Hara, Kazuyuki; Okada, Masato

    2005-03-01

    Ensemble learning of K nonlinear perceptrons, which determine their outputs by sign functions, is discussed within the framework of online learning and statistical mechanics. One purpose of statistical learning theory is to theoretically obtain the generalization error. This paper shows that ensemble generalization error can be calculated by using two order parameters, that is, the similarity between a teacher and a student, and the similarity among students. The differential equations that describe the dynamical behaviors of these order parameters are derived in the case of general learning rules. The concrete forms of these differential equations are derived analytically in the cases of three well-known rules: Hebbian learning, perceptron learning, and AdaTron (adaptive perceptron) learning. Ensemble generalization errors of these three rules are calculated by using the results determined by solving their differential equations. As a result, these three rules show different characteristics in their affinity for ensemble learning, that is “maintaining variety among students.” Results show that AdaTron learning is superior to the other two rules with respect to that affinity.

  12. Forecasting European cold waves based on subsampling strategies of CMIP5 and Euro-CORDEX ensembles

    Science.gov (United States)

    Cordero-Llana, Laura; Braconnot, Pascale; Vautard, Robert; Vrac, Mathieu; Jezequel, Aglae

    2016-04-01

    Forecasting future extreme events under the present changing climate represents a difficult task. Currently there is a large number of ensembles of simulations for climate projections that take into account different models and scenarios. However, there is a need to reduce the size of the ensemble to make the interpretation of these simulations more manageable for impact studies or climate risk assessment. This can be achieved by developing subsampling strategies to identify a limited number of simulations that best represent the ensemble. In this study, cold waves are chosen to test different approaches for subsampling available simulations. The definition of cold waves depends on the criteria used, but they are generally defined using a minimum temperature threshold, the duration of the cold spell, as well as their geographical extent. These climate indicators are not universal, highlighting the difficulty of directly comparing different studies. As part of the CLIPC European project, we use daily surface temperature data obtained from CMIP5 outputs as well as Euro-CORDEX simulations to predict future cold wave events in Europe. From these simulations a clustering method is applied to minimise the number of ensemble members required. Furthermore, we analyse the different uncertainties that arise from the different model characteristics and definitions of climate indicators. Finally, we test whether the same subsampling strategy can be used for different climate indicators. This will facilitate the use of the subsampling results for a wide range of impact assessment studies.

  13. Regression trees for predicting mortality in patients with cardiovascular disease: What improvement is achieved by using ensemble-based methods?

    Science.gov (United States)

    Austin, Peter C; Lee, Douglas S; Steyerberg, Ewout W; Tu, Jack V

    2012-01-01

    In biomedical research, the logistic regression model is the most commonly used method for predicting the probability of a binary outcome. While many clinical researchers have expressed an enthusiasm for regression trees, this method may have limited accuracy for predicting health outcomes. We aimed to evaluate the improvement that is achieved by using ensemble-based methods, including bootstrap aggregation (bagging) of regression trees, random forests, and boosted regression trees. We analyzed 30-day mortality in two large cohorts of patients hospitalized with either acute myocardial infarction (N = 16,230) or congestive heart failure (N = 15,848) in two distinct eras (1999–2001 and 2004–2005). We found that both the in-sample and out-of-sample prediction of ensemble methods offered substantial improvement in predicting cardiovascular mortality compared to conventional regression trees. However, conventional logistic regression models that incorporated restricted cubic smoothing splines had even better performance. We conclude that ensemble methods from the data mining and machine learning literature increase the predictive performance of regression trees, but may not lead to clear advantages over conventional logistic regression models for predicting short-term mortality in population-based samples of subjects with cardiovascular disease. PMID:22777999
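Bootstrap aggregation of regression trees can be sketched with depth-1 trees (stumps) on simulated outcome data. The single-predictor cohort and the stump learner below are simplifications for illustration, not the study's clinical models:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated cohort: 30-day mortality probability rises with a risk score.
n = 2000
risk = rng.normal(size=n)
died = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * risk))

def fit_stump(x, y):
    """Depth-1 regression tree: threshold split minimizing the sum of squares."""
    best = (np.inf, 0.0, y.mean(), y.mean())
    for t in np.quantile(x, np.linspace(0.1, 0.9, 17)):
        left, right = y[x <= t], y[x > t]
        if left.size == 0 or right.size == 0:
            continue
        sse = left.var() * left.size + right.var() * right.size
        if sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]                      # (threshold, left mean, right mean)

def predict_stump(stump, x):
    t, lo, hi = stump
    return np.where(x <= t, lo, hi)

# Bootstrap aggregation: fit each stump on a resampled cohort, then average
# the predicted event probabilities across the ensemble.
stumps = [fit_stump(risk[idx], died[idx].astype(float))
          for idx in (rng.integers(0, n, n) for _ in range(50))]
bagged = np.mean([predict_stump(s, risk) for s in stumps], axis=0)

single = predict_stump(fit_stump(risk, died.astype(float)), risk)

def brier(p):
    """Brier score against the observed outcomes; lower is better."""
    return np.mean((p - died) ** 2)
```

Averaging many bootstrap stumps turns a two-level step function into a smoother risk curve, which is the variance-reduction effect the study quantifies with full-depth trees, random forests and boosting.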

  14. A Hierarchical Method for Transient Stability Prediction of Power Systems Using the Confidence of a SVM-Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Yanzhen Zhou

    2016-09-01

    Machine learning techniques have been widely used in transient stability prediction of power systems. When using the post-fault dynamic responses, it is difficult to draw a definite conclusion about how long the duration of response data used should be in order to balance accuracy and speed. Moreover, previous studies have lacked consideration of the confidence level of predictions. To solve these problems, a hierarchical method for transient stability prediction based on the confidence of an ensemble classifier using multiple support vector machines (SVMs) is proposed. Firstly, multiple datasets are generated by bootstrap sampling, and features are randomly picked to compress the datasets. Secondly, the confidence indices are defined and multiple SVMs are built based on these generated datasets. By synthesizing the probabilistic outputs of multiple SVMs, the prediction results and confidence of the ensemble classifier are obtained. Finally, different ensemble classifiers with different response times are built to construct the different layers of the proposed hierarchical scheme. The simulation results show that the proposed hierarchical method can balance the accuracy and rapidity of transient stability prediction. Moreover, the hierarchical method reduces the misjudgment of unstable instances and can cooperate with time domain simulation to ensure the security and stability of power systems.

  15. Conservative strategy-based ensemble surrogate model for optimal groundwater remediation design at DNAPLs-contaminated sites

    Science.gov (United States)

    Ouyang, Qi; Lu, Wenxi; Lin, Jin; Deng, Wenbing; Cheng, Weiguo

    2017-08-01

    The surrogate-based simulation-optimization techniques are frequently used for optimal groundwater remediation design. When this technique is used, surrogate errors caused by surrogate-modeling uncertainty may lead to generation of infeasible designs. In this paper, a conservative strategy that pushes the optimal design into the feasible region was used to address surrogate-modeling uncertainty. In addition, chance-constrained programming (CCP) was adopted to compare with the conservative strategy in addressing this uncertainty. Three methods, multi-gene genetic programming (MGGP), Kriging (KRG) and support vector regression (SVR), were used to construct surrogate models for a time-consuming multi-phase flow model. To improve the performance of the surrogate model, ensemble surrogates were constructed based on combinations of different stand-alone surrogate models. The results show that: (1) the surrogate-modeling uncertainty was successfully addressed by the conservative strategy, which means that this method is promising for addressing surrogate-modeling uncertainty. (2) The ensemble surrogate model that combines MGGP with KRG showed the most favorable performance, which indicates that this ensemble surrogate can utilize both stand-alone surrogate models to improve the performance of the surrogate model.

  16. Implementation of neural network based non-linear predictive

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1998-01-01

    The paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum-phase systems, but has also been extended to the control of non-linear systems. GPC is model-based, and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis on an efficient Quasi-Newton optimization algorithm. The performance is demonstrated on a pneumatic servo system.

  17. The Development of Target-Specific Pose Filter Ensembles To Boost Ligand Enrichment for Structure-Based Virtual Screening.

    Science.gov (United States)

    Xia, Jie; Hsieh, Jui-Hua; Hu, Huabin; Wu, Song; Wang, Xiang Simon

    2017-06-26

    Structure-based virtual screening (SBVS) has become an indispensable technique for hit identification at the early stage of drug discovery. However, the accuracy of current scoring functions is not high enough to confer success to every target and thus remains to be improved. Previously, we had developed binary pose filters (PFs) using knowledge derived from the protein-ligand interface of a single X-ray structure of a specific target. This novel approach had been validated as an effective way to improve ligand enrichment. Continuing from it, in the present work we attempted to incorporate knowledge collected from diverse protein-ligand interfaces of multiple crystal structures of the same target to build PF ensembles (PFEs). Toward this end, we first constructed a comprehensive data set to meet the requirements of ensemble modeling and validation. This set contains 10 diverse targets, 118 well-prepared X-ray structures of protein-ligand complexes, and large benchmarking actives/decoys sets. Notably, we designed a unique workflow of two-layer classifiers based on the concept of ensemble learning and applied it to the construction of PFEs for all of the targets. Through extensive benchmarking studies, we demonstrated that (1) coupling PFE with Chemgauss4 significantly improves the early enrichment of Chemgauss4 itself and (2) PFEs show greater consistency in boosting early enrichment and larger overall enrichment than our prior PFs. In addition, we analyzed the pairwise topological similarities among cognate ligands used to construct PFEs and found that it is the higher chemical diversity of the cognate ligands that leads to the improved performance of PFEs. Taken together, the results so far prove that the incorporation of knowledge from diverse protein-ligand interfaces by ensemble modeling is able to enhance the screening competence of SBVS scoring functions.

  18. Towards Data-Driven Simulations of Wildfire Spread using Ensemble-based Data Assimilation

    Science.gov (United States)

    Rochoux, M. C.; Bart, J.; Ricci, S. M.; Cuenot, B.; Trouvé, A.; Duchaine, F.; Morel, T.

    2012-12-01

    Real-time predictions of a propagating wildfire remain a challenging task because the problem involves both multiple physical processes and multiple scales. The propagation speed of wildfires, also called the rate of spread (ROS), is determined by complex interactions between pyrolysis, combustion and flow dynamics, and atmospheric dynamics occurring at vegetation, topographical and meteorological scales. Current operational fire spread models are mainly based on a semi-empirical parameterization of the ROS in terms of vegetation, topographical and meteorological properties. For the fire spread simulation to be predictive and compatible with operational applications, the uncertainty on the ROS model should be reduced. As recent progress in remote sensing technology provides new ways to monitor the fire front position, a promising approach to overcome the difficulties found in wildfire spread simulations is to integrate fire modeling and fire sensing technologies using data assimilation (DA). For this purpose we have developed a prototype data-driven wildfire spread simulator in order to provide optimal estimates of poorly known model parameters [*]. The data-driven simulation capability is adapted for more realistic wildfire spread: it considers a regional-scale fire spread model that is informed by observations of the fire front location. An Ensemble Kalman Filter (EnKF) algorithm based on a parallel computing platform (OpenPALM) was implemented in order to perform a multi-parameter sequential estimation in which wind magnitude and direction are estimated in addition to vegetation properties (see attached figure). The EnKF algorithm shows a good ability to track a small-scale grassland fire experiment and properly accounts for the sensitivity of the simulation outcomes to the control parameters. 
In conclusion, it was shown that data assimilation is a promising approach to more accurately forecast time-varying wildfire spread conditions as new airborne-like observations of
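A stochastic (perturbed-observation) EnKF parameter estimation of this kind can be sketched in a few lines for a deliberately simplified one-parameter front model. The constant-ROS model, prior, and all numbers below are assumptions, not the paper's regional-scale setup:

```python
import numpy as np

rng = np.random.default_rng(7)

# Truth: a 1-D fire front advancing at a constant rate of spread (ROS).
ros_true, obs_err = 0.5, 0.05
times = np.arange(1.0, 11.0)

# Ensemble of the uncertain ROS parameter, drawn from a poorly known prior.
n_ens = 50
ros = rng.normal(1.0, 0.5, size=n_ens)

for t in times:
    obs = ros_true * t + rng.normal(0.0, obs_err)  # observed front position
    hx = ros * t                                   # each member's forecast
    # Stochastic EnKF analysis step with perturbed observations:
    # Kalman gain from ensemble (cross-)covariances.
    gain = np.cov(ros, hx)[0, 1] / (np.var(hx, ddof=1) + obs_err ** 2)
    ros = ros + gain * (obs + rng.normal(0.0, obs_err, size=n_ens) - hx)

ros_err = abs(ros.mean() - ros_true)   # posterior mean error
```

After assimilating the ten front observations, the ensemble mean has moved from the badly biased prior onto the true rate of spread, with the ensemble spread shrunk accordingly.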

  19. A novel word spotting method based on recurrent neural networks.

    Science.gov (United States)

    Frinken, Volkmar; Fischer, Andreas; Manmatha, R; Bunke, Horst

    2012-02-01

    Keyword spotting refers to the process of retrieving all instances of a given keyword from a document. In the present paper, a novel keyword spotting method for handwritten documents is described. It is derived from a neural network-based system for unconstrained handwriting recognition. As such it performs template-free spotting, i.e., it is not necessary for a keyword to appear in the training set. The keyword spotting is done using a modification of the CTC Token Passing algorithm in conjunction with a recurrent neural network. We demonstrate that the proposed systems outperform not only a classical dynamic time warping-based approach but also a modern keyword spotting system, based on hidden Markov models. Furthermore, we analyze the performance of the underlying neural networks when using them in a recognition task followed by keyword spotting on the produced transcription. We point out the advantages of keyword spotting when compared to classic text line recognition.

  20. Study on a Biometric Authentication Model based on ECG using a Fuzzy Neural Network

    Science.gov (United States)

    Kim, Ho J.; Lim, Joon S.

    2018-03-01

    Traditional authentication methods use numbers or graphic passwords and thus involve the risk of loss or theft. Various studies are underway regarding biometric authentication because it uses the unique biometric data of a human being. Biometric authentication technology using ECG from biometric data involves signals that record electrical stimuli from the heart. It is difficult to manipulate and is advantageous in that it enables unrestrained measurements from sensors that are attached to the skin. This study is on biometric authentication methods using the neural network with weighted fuzzy membership functions (NEWFM). In the biometric authentication process, normalization and the ensemble average are applied during preprocessing, characteristics are extracted using Haar wavelets, and a registration process called “training” is performed in the fuzzy neural network. In the experiment, biometric authentication was performed on 73 subjects in the Physionet Database. 10-40 ECG waveforms were tested for use in the registration process, and 15 ECG waveforms were deemed the appropriate number for registering ECG waveforms. 1 ECG waveform was used during the authentication stage to conduct the biometric authentication test. Upon testing the proposed biometric authentication method based on 73 subjects from the Physionet Database, the TAR was 98.32% and the FAR was 5.84%.
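The normalization and ensemble-average preprocessing steps can be sketched on synthetic beats. The toy waveform, the 15-beat registration size echoing the abstract, and the fidelity measure are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy "heartbeat": an R-peak-like bump, repeated 15 times with sensor noise.
t = np.linspace(0.0, 1.0, 200)
template = np.exp(-((t - 0.5) / 0.03) ** 2)
beats = template + rng.normal(0.0, 0.3, size=(15, t.size))

# Preprocessing as named in the abstract: per-beat normalization followed
# by the ensemble average over the registered waveforms.
beats = beats - beats.mean(axis=1, keepdims=True)
beats = beats / np.abs(beats).max(axis=1, keepdims=True)
ensemble_avg = beats.mean(axis=0)

def fidelity(x):
    """Inverse mean-squared distance to the (normalized) clean template."""
    tpl = template - template.mean()
    tpl = tpl / np.abs(tpl).max()
    return 1.0 / np.mean((x - tpl) ** 2)

fid_single = fidelity(beats[0])
fid_avg = fidelity(ensemble_avg)   # averaging suppresses the beat-to-beat noise
```

Averaging N registered beats reduces the noise variance roughly N-fold, which is why the averaged waveform is a cleaner input for the subsequent wavelet feature extraction.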

  1. Neural network-based sensor signal accelerator.

    Energy Technology Data Exchange (ETDEWEB)

    Vogt, M. C.

    2000-10-16

    A strategy has been developed to computationally accelerate the response time of a generic electronic sensor. The strategy can be deployed as an algorithm in a control system or as a physical interface (on an embedded microcontroller) between a slower responding external sensor and a higher-speed control system. Optional code implementations are available to adjust algorithm performance when computational capability is limited. In one option, the actual sensor signal can be sampled at the slower rate with adaptive linear neural networks predicting the sensor's future output and interpolating intermediate synthetic output values. In another option, a synchronized collection of predictors sequentially controls the corresponding synthetic output voltage. Error is adaptively corrected in both options. The core strategy has been demonstrated with automotive oxygen sensor data. A prototype interface device is under construction. The response speed increase afforded by this strategy could greatly offset the cost of developing a replacement sensor with a faster physical response time.
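A sketch of the core idea, an adaptive linear (LMS) predictor that estimates a slow sensor's future output. The first-order sensor model, filter order, horizon and step size are all illustrative assumptions, not the strategy's actual configuration:

```python
import numpy as np

# A slow first-order sensor tracking a square-wave stimulus: the lag we
# would like to compensate (time constant chosen for illustration).
true_input = np.repeat([0.0, 1.0, 0.0, 1.0], 250)
sensor = np.zeros_like(true_input)
for k in range(1, sensor.size):
    sensor[k] = sensor[k - 1] + 0.05 * (true_input[k] - sensor[k - 1])

# Adaptive linear predictor (LMS): predict the sensor reading `horizon`
# steps ahead from the last `order` samples; weights adapt once the true
# sample arrives (delayed error), so the scheme stays causal.
order, horizon, mu = 4, 20, 0.05
w = np.zeros(order)
pred = np.zeros_like(sensor)
for k in range(order + horizon, sensor.size):
    x = sensor[k - horizon - order:k - horizon][::-1]  # data known at k-horizon
    pred[k] = w @ x                       # prediction issued `horizon` steps ago
    w += mu * (sensor[k] - pred[k]) * x   # LMS update with the realized error

mse_pred = np.mean((pred[200:] - sensor[200:]) ** 2)
```

In steady regions the predictor converges quickly; the residual error concentrates at the square-wave transitions, where no linear extrapolation of the past samples can anticipate the step.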

  2. The neural bases of orthographic working memory

    Directory of Open Access Journals (Sweden)

    Jeremy Purcell

    2014-04-01

    First, these results reveal a neurotopography of OWM lesion sites that is well-aligned with results from neuroimaging of orthographic working memory in neurally intact participants (Rapp & Dufor, 2011). Second, the dorsal neurotopography of the OWM lesion overlap is clearly distinct from what has been reported for lesions associated with either lexical or sublexical deficits (e.g., Henry, Beeson, Stark, & Rapcsak, 2007; Rapcsak & Beeson, 2004); these have, respectively, been identified with the inferior occipital/temporal and superior temporal/inferior parietal regions. These neurotopographic distinctions support the claims of the computational distinctiveness of long-term vs. working memory operations. The specific lesion loci raise a number of questions to be discussed regarding: (a) the selectivity of these regions and associated deficits to orthographic working memory vs. working memory more generally, and (b) the possibility that different lesion sub-regions may correspond to different components of the OWM system.

  3. Unfolding code for neutron spectrometry based on neural nets technology

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Vega C, H. R.

    2012-10-01

The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed with a graphical interface in the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the Robust Design of Artificial Neural Networks methodology. The code is easy to use and offers a friendly, intuitive interface. It was designed for a Bonner sphere system based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as input data, only seven count-rate measurements from a Bonner sphere spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. The code generates a full report in HTML format with all relevant information. (Author)
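As a rough illustration of the unfolding idea (not the LabVIEW code itself), a small feed-forward net can be trained to map seven count rates back to a 60-bin spectrum; the response matrix, spectra, and network size below are random stand-ins, not the IAEA data or the optimized architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: a random 7x60 "response matrix" and random
# normalized "spectra" (the real code uses an IAEA-compiled response
# matrix and measured Bonner-sphere count rates).
R = rng.uniform(0.0, 1.0, size=(7, 60))          # 7 spheres x 60 energy bins
spectra = rng.dirichlet(np.ones(60), size=500)   # training spectra
rates = spectra @ R.T                            # 7 count rates per spectrum

# Train a small feed-forward net to invert the mapping: 7 rates -> 60 bins
net = MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
net.fit(rates, spectra)

# Unfold a new set of seven count rates into a 60-bin spectrum estimate
test_rates = (rng.dirichlet(np.ones(60)) @ R.T)[None, :]
unfolded = net.predict(test_rates)
print(unfolded.shape)
```

Dosimetric quantities would then be obtained from the unfolded spectrum via fluence-to-dose conversion coefficients, as the code does internally.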

  4. Microfluidic systems for stem cell-based neural tissue engineering.

    Science.gov (United States)

    Karimi, Mahdi; Bahrami, Sajad; Mirshekari, Hamed; Basri, Seyed Masoud Moosavi; Nik, Amirala Bakhshian; Aref, Amir R; Akbari, Mohsen; Hamblin, Michael R

    2016-07-05

    Neural tissue engineering aims at developing novel approaches for the treatment of diseases of the nervous system, by providing a permissive environment for the growth and differentiation of neural cells. Three-dimensional (3D) cell culture systems provide a closer biomimetic environment, and promote better cell differentiation and improved cell function, than could be achieved by conventional two-dimensional (2D) culture systems. With the recent advances in the discovery and introduction of different types of stem cells for tissue engineering, microfluidic platforms have provided an improved microenvironment for the 3D-culture of stem cells. Microfluidic systems can provide more precise control over the spatiotemporal distribution of chemical and physical cues at the cellular level compared to traditional systems. Various microsystems have been designed and fabricated for the purpose of neural tissue engineering. Enhanced neural migration and differentiation, and monitoring of these processes, as well as understanding the behavior of stem cells and their microenvironment have been obtained through application of different microfluidic-based stem cell culture and tissue engineering techniques. As the technology advances it may be possible to construct a "brain-on-a-chip". In this review, we describe the basics of stem cells and tissue engineering as well as microfluidics-based tissue engineering approaches. We review recent testing of various microfluidic approaches for stem cell-based neural tissue engineering.

  5. Unfolding code for neutron spectrometry based on neural nets technology

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Apdo. Postal 336, 98000 Zacatecas (Mexico)

    2012-10-15

The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed with a graphical interface in the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the Robust Design of Artificial Neural Networks methodology. The code is easy to use and offers a friendly, intuitive interface. It was designed for a Bonner sphere system based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as input data, only seven count-rate measurements from a Bonner sphere spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. The code generates a full report in HTML format with all relevant information. (Author)

  6. Forecasting of Energy Consumption in China Based on Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-04-01

empirical analysis for energy consumption prediction. Four models, ISFLA-LSSVM, SFLA-LSSVM (Least Squares Support Vector Machine optimized by the Shuffled Frog Leaping Algorithm), LSSVM (Least Squares Support Vector Machine), and BP (Back Propagation) neural network, are selected for comparison with the EEMD-ISFLA-LSSVM model based on the evaluation indicators of mean absolute percentage error (MAPE), root mean square error (RMSE), and mean absolute error (MAE), which fully proves the practicability of the EEMD-ISFLA-LSSVM model for energy consumption forecasting in China. Finally, the EEMD-ISFLA-LSSVM model is adopted to forecast energy consumption in China from 2018 to 2022, and, according to the forecasting results, China’s energy consumption from 2018 to 2022 will show a trend of significant growth.
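The three evaluation indicators named above (MAPE, RMSE, MAE) are standard and easy to compute directly; the consumption figures in this sketch are hypothetical, not the paper's data.

```python
import numpy as np

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * np.mean(np.abs((y - yhat) / y))

def rmse(y, yhat):
    """Root mean square error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    """Mean absolute error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs(y - yhat)))

actual   = [3000, 3100, 3250, 3400]   # hypothetical consumption figures
forecast = [2950, 3150, 3200, 3450]
print(round(mape(actual, forecast), 3), rmse(actual, forecast), mae(actual, forecast))
```

Lower values on all three indicators mean a better forecast, which is how the EEMD-ISFLA-LSSVM model is ranked against the four baselines.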

  7. A comparative research of different ensemble surrogate models based on set pair analysis for the DNAPL-contaminated aquifer remediation strategy optimization

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi; Xue, Haibo; Lin, Jin

    2017-08-01

Surrogate-based simulation-optimization is an effective technique for optimizing the surfactant enhanced aquifer remediation (SEAR) strategy for clearing DNAPLs. The performance of the surrogate model, which replaces the simulation model in order to reduce the computational burden, is the key to such research. However, previous studies have generally been based on a stand-alone surrogate model and have rarely tried to sufficiently improve the surrogate's approximation accuracy to the simulation model by combining various methods. In this regard, we present set pair analysis (SPA) as a new method to build an ensemble surrogate (ES) model, and conducted a comparative study to select the better ES modeling pattern for SEAR strategy optimization problems. Surrogate models were developed using a radial basis function artificial neural network (RBFANN), support vector regression (SVR), and Kriging. One ES model assembles the RBFANN, SVR, and Kriging models using set pair weights according to their performance, and the other assembles several Kriging models (Kriging being the best of the three surrogate modeling methods) built with different training sample datasets. Finally, an optimization model, in which the ES model was embedded, was established to obtain the optimal remediation strategy. The results showed that the residuals of the outputs between the best ES model and the simulation model for 100 testing samples were lower than 1.5%. Using an ES model instead of the simulation model was critical for considerably reducing the computation time of the simulation-optimization process while simultaneously maintaining high computational accuracy.
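A minimal sketch of the weighted-ensemble idea: three surrogates stand in for RBFANN, SVR, and Kriging (a Gaussian process plays the Kriging role, an MLP the RBFANN role), and their predictions are combined with weights inversely proportional to validation error, the role the paper assigns to set pair weights. The data, models, and weighting rule here are illustrative assumptions, not the paper's SPA construction.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(120, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2          # stand-in "simulation model"
Xtr, ytr, Xval, yval = X[:80], y[:80], X[80:], y[80:]

surrogates = [
    MLPRegressor(hidden_layer_sizes=(40,), max_iter=5000, random_state=0),
    SVR(C=10.0),
    GaussianProcessRegressor(),
]
for s in surrogates:
    s.fit(Xtr, ytr)

# Weights inversely proportional to validation RMSE, normalized to sum to 1
errs = np.array([np.sqrt(np.mean((s.predict(Xval) - yval) ** 2)) for s in surrogates])
w = (1 / errs) / (1 / errs).sum()

def ensemble_predict(Xq):
    """Weighted average of the individual surrogate predictions."""
    return sum(wi * s.predict(Xq) for wi, s in zip(w, surrogates))

print(abs(w.sum() - 1.0) < 1e-12, ensemble_predict(Xval).shape)
```

In the paper's workflow this ensemble predictor, not the PDE-based simulator, is what the optimization model evaluates at each candidate remediation strategy.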

  8. An Ensemble Based Evolutionary Approach to the Class Imbalance Problem with Applications in CBIR

    Directory of Open Access Journals (Sweden)

    Aun Irtaza

    2018-03-01

Full Text Available In order to lower the dependence on textual annotations for image searches, content based image retrieval (CBIR) has become a popular topic in computer vision. A wide range of CBIR applications consider classification techniques, such as artificial neural networks (ANN), support vector machines (SVM), etc., to understand the query image content and retrieve relevant output. However, in multi-class search environments, the retrieval results are far from optimal due to overlapping semantics amongst subjects of various classes. Classification through multiple classifiers generates better results, but as the number of negative examples increases due to highly correlated semantic classes, classification bias occurs towards the negative class; hence, the combination of the classifiers becomes even more unstable, particularly in one-against-all classification scenarios. In order to resolve this issue, a genetic algorithm (GA) based classifier comity learning (GCCL) method is presented in this paper to generate stable classifiers by combining ANN with SVMs through asymmetric and symmetric bagging. The proposed approach resolves the classification disagreement amongst different classifiers and also resolves the class imbalance problem in CBIR. Once the stable classifiers are generated, the query image is presented to the trained model to understand the underlying semantic content of the query image for association with the precise semantic class. Afterwards, the feature similarity is computed within the obtained class to generate the semantic response of the system. The experiments reveal that the proposed method outperforms various state-of-the-art methods and significantly improves the image retrieval performance.
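Asymmetric bagging, as used above, trains each committee member on all positives plus an equally sized random draw of negatives, so every classifier sees a balanced problem; a small sketch on a synthetic imbalanced set (the classifiers, data, and committee size are illustrative, not the paper's GCCL setup):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Imbalanced toy data: 20 positives (query class), 200 negatives
Xpos = rng.normal(loc=2.0, size=(20, 2))
Xneg = rng.normal(loc=0.0, size=(200, 2))

# Asymmetric bagging: every bag keeps all positives and draws a random
# negative subset of equal size, so each member trains on balanced data.
bags = []
for _ in range(9):
    idx = rng.choice(len(Xneg), size=len(Xpos), replace=False)
    Xb = np.vstack([Xpos, Xneg[idx]])
    yb = np.array([1] * len(Xpos) + [0] * len(Xpos))
    bags.append(SVC().fit(Xb, yb))

def comity_predict(X):
    """Majority vote over the bagged classifiers."""
    votes = np.sum([c.predict(X) for c in bags], axis=0)
    return (votes > len(bags) / 2).astype(int)

print(comity_predict(np.array([[2.1, 1.9], [-0.5, 0.2]])))
```

The GA layer in the paper goes further, evolving which members (and which ANN/SVM combinations) enter the committee rather than using a plain majority vote.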

  9. Neural network based electron identification in the ZEUS calorimeter

    International Nuclear Information System (INIS)

    Abramowicz, H.; Caldwell, A.; Sinkus, R.

    1995-01-01

We present an electron identification algorithm based on a neural network approach applied to the ZEUS uranium calorimeter. The study is motivated by the need to select deep inelastic, neutral current, electron-proton interactions characterized by the presence of a scattered electron in the final state. The performance of the algorithm is compared to an electron identification method based on a classical probabilistic approach. By means of a principal component analysis the improvement in the performance is traced back to the number of variables used in the neural network approach. (orig.)

  10. Discussion on Regression Methods Based on Ensemble Learning and Applicability Domains of Linear Submodels.

    Science.gov (United States)

    Kaneko, Hiromasa

    2018-02-26

    To develop a new ensemble learning method and construct highly predictive regression models in chemoinformatics and chemometrics, applicability domains (ADs) are introduced into the ensemble learning process of prediction. When estimating values of an objective variable using subregression models, only the submodels with ADs that cover a query sample, i.e., the sample is inside the model's AD, are used. By constructing submodels and changing a list of selected explanatory variables, the union of the submodels' ADs, which defines the overall AD, becomes large, and the prediction performance is enhanced for diverse compounds. By analyzing a quantitative structure-activity relationship data set and a quantitative structure-property relationship data set, it is confirmed that the ADs can be enlarged and the estimation performance of regression models is improved compared with traditional methods.
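A minimal sketch of the AD-gated ensemble idea: each submodel keeps its own explanatory-variable subset and a simple distance-based applicability domain, and a query sample is predicted only by the submodels whose AD covers it. The AD definition (centroid distance threshold), models, and data here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

X = rng.uniform(0, 1, size=(60, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.01 * rng.normal(size=60)

# Each submodel: a random variable subset, a random training subset,
# and an AD given by the farthest training point from the centroid.
submodels = []
for _ in range(10):
    vars_ = rng.choice(5, size=3, replace=False)
    rows = rng.choice(60, size=40, replace=False)
    m = LinearRegression().fit(X[rows][:, vars_], y[rows])
    center = X[rows][:, vars_].mean(axis=0)
    radius = np.linalg.norm(X[rows][:, vars_] - center, axis=1).max()
    submodels.append((m, vars_, center, radius))

def predict_with_ad(x):
    """Average only the submodels whose AD covers the query sample."""
    preds = [m.predict(x[vars_][None, :])[0]
             for m, vars_, c, r in submodels
             if np.linalg.norm(x[vars_] - c) <= r]
    return np.mean(preds) if preds else None   # None: outside the overall AD

print(predict_with_ad(X.mean(axis=0)))
```

Because each submodel uses a different variable list, the union of the submodel ADs (the overall AD) is larger than any single model's, which is the enlargement effect the abstract describes.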

  11. An ensemble based nonlinear orthogonal matching pursuit algorithm for sparse history matching of reservoir models

    KAUST Repository

    Fsheikh, Ahmed H.

    2013-01-01

    A nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of reservoir models is presented. Sparse calibration is a challenging problem as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the most correlated components of the basis functions with the residual. The discovered basis (aka support) is augmented across the nonlinear iterations. Once the basis functions are selected from the dictionary, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on approximate gradient estimation using an iterative stochastic ensemble method (ISEM). ISEM utilizes an ensemble of directional derivatives to efficiently approximate gradients. In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm.
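NOMP generalizes classical (linear) orthogonal matching pursuit; a minimal NumPy version of that greedy base algorithm, with a synthetic Gaussian dictionary and a sparse signal, is sketched below. The paper's nonlinear, ensemble-gradient extension and its Tikhonov refit are not shown.

```python
import numpy as np

def omp(D, r, n_nonzero):
    """Classical orthogonal matching pursuit: greedily pick the dictionary
    column most correlated with the residual, then refit on the support."""
    support = []
    residual = r.copy()
    x = np.zeros(0)
    for _ in range(n_nonzero):
        corr = np.abs(D.T @ residual)
        corr[support] = 0                       # do not re-pick selected atoms
        support.append(int(np.argmax(corr)))
        x, *_ = np.linalg.lstsq(D[:, support], r, rcond=None)
        residual = r - D[:, support] @ x
    coeffs = np.zeros(D.shape[1])
    coeffs[support] = x
    return coeffs

rng = np.random.default_rng(3)
D = rng.normal(size=(50, 100))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
true = np.zeros(100)
true[[7, 42, 91]] = [1.5, -2.0, 0.7]
signal = D @ true                               # noiseless 3-sparse signal
recovered = omp(D, signal, 3)
print(np.flatnonzero(recovered))
```

In the history-matching setting, the "signal" is the model calibration residual, the dictionary is the K-SVD basis, and the correlations are estimated through ISEM gradients rather than exact inner products.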

  12. The Fault Diagnosis of Rolling Bearing Based on Ensemble Empirical Mode Decomposition and Random Forest

    OpenAIRE

    Qin, Xiwen; Li, Qiaoling; Dong, Xiaogang; Lv, Siqi

    2017-01-01

Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. Firstly, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Then their energy entropy is calculated as the feature. Finally, the classification is performed by RF. In addition, the wavelet meth...
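The energy-entropy feature mentioned above can be computed directly from the IMFs; the sketch below uses hand-made stand-in IMFs rather than an actual EEMD of a vibration signal (a real pipeline could obtain IMFs with, e.g., the PyEMD package).

```python
import numpy as np

def energy_entropy(imfs):
    """Energy entropy across IMFs: E_i = sum(imf_i^2), p_i = E_i / E_total,
    H = -sum(p_i * log(p_i)). Used as the fault feature fed to the RF."""
    E = np.array([np.sum(np.asarray(imf) ** 2) for imf in imfs])
    p = E / E.sum()
    return float(-np.sum(p * np.log(p)))

# Stand-in IMFs: two sinusoids with an 80/20 energy split
t = np.linspace(0, 1, 1000)
imfs = [np.sin(2 * np.pi * 50 * t), 0.5 * np.sin(2 * np.pi * 10 * t)]
print(round(energy_entropy(imfs), 4))
```

A fault redistributes signal energy across frequency bands, which shifts the IMF energy proportions and hence the entropy, making it a discriminative input feature for the classifier.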

  13. Neural Representation. A Survey-Based Analysis of the Notion

    Directory of Open Access Journals (Sweden)

    Oscar Vilarroya

    2017-08-01

Full Text Available The word representation (as in “neural representation”, and many of its related terms, such as to represent, representational and the like, play a central explanatory role in neuroscience literature. For instance, in “place cell” literature, place cells are extensively associated with their role in “the representation of space.” In spite of its extended use, we still lack a clear, universal and widely accepted view on what it means for a nervous system to represent something, on what makes a neural activity a representation, and on what is re-presented. The lack of a theoretical foundation and definition of the notion has not hindered actual research. My aim here is to identify how active scientists use the notion of neural representation, and eventually to list a set of criteria, based on actual use, that can help in distinguishing between genuine or non-genuine neural-representation candidates. In order to attain this objective, I present first the results of a survey of authors within two domains, place-cell and multivariate pattern analysis (MVPA) research. Based on the authors’ replies, and on a review of neuroscientific research, I outline a set of common properties that an account of neural representation seems to require. I then apply these properties to assess the use of the notion in two domains of the survey, place-cell and MVPA studies. I conclude by exploring a shift in the notion of representation suggested by recent literature.

  14. 'Lazy' quantum ensembles

    International Nuclear Information System (INIS)

    Parfionov, George; Zapatrin, Roman

    2006-01-01

We compare different strategies aimed at preparing an ensemble with a given density matrix ρ. Preparing the ensemble of eigenstates of ρ with appropriate probabilities can be treated as the 'generous' strategy: it provides maximal accessible information about the state. The other extremity is the so-called 'Scrooge' ensemble, which is the most stingy in sharing information. We introduce 'lazy' ensembles, which require minimal effort to prepare the density matrix, selecting pure states by completely random choice. We consider two parties, Alice and Bob, playing a kind of game. Bob wishes to guess which pure state is prepared by Alice. His null hypothesis, based on the lack of any information about Alice's intention, is that Alice prepares any pure state with equal probability. Then, the average quantum state measured by Bob turns out to be ρ, and he has to make a new hypothesis about Alice's intention based solely on the information that the observed density matrix is ρ. The arising 'lazy' ensemble is shown to be the alternative hypothesis which minimizes the type I error.
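For the fully symmetric case, Bob's null hypothesis (Alice picks any pure state uniformly at random) averages to the maximally mixed state I/d, which can be checked numerically; this single-qubit demo is an illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 2, 20000                      # qubit, number of sampled pure states

# Haar-random pure states: normalized complex Gaussian vectors
psi = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
psi /= np.linalg.norm(psi, axis=1, keepdims=True)

# Average density matrix of the uniformly random ensemble:
# rho = (1/n) * sum_k |psi_k><psi_k|
rho = np.einsum('ni,nj->ij', psi, psi.conj()) / n

print(np.round(rho, 2))              # approaches I/2 = diag(0.5, 0.5)
```

A general target ρ then requires biasing the random choice of pure states, which is where the 'lazy' construction departs from this uniform null hypothesis.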

  15. Neural Cell Chip Based Electrochemical Detection of Nanotoxicity.

    Science.gov (United States)

    Kafi, Md Abdul; Cho, Hyeon-Yeol; Choi, Jeong Woo

    2015-07-02

Development of a rapid, sensitive and cost-effective method for toxicity assessment of commonly used nanoparticles is urgently needed for the sustainable development of nanotechnology. A neural cell with high sensitivity and conductivity has become a potential candidate for a cell chip to investigate toxicity of environmental influences. A neural cell immobilized on a conductive surface has become a potential tool for the assessment of nanotoxicity based on electrochemical methods. The effective electrochemical monitoring largely depends on the adequate attachment of a neural cell on the chip surfaces. Recently, establishment of the integrin receptor specific ligand molecule arginine-glycine-aspartic acid (RGD) or its several modifications, RGD-Multi Armed Peptide terminated with cysteine (RGD-MAP-C) and C(RGD)₄, ensures firm attachment of neural cells on the electrode surfaces in either a two dimensional (dot) or three dimensional (rod or pillar) like nano-scale arrangement. A three dimensional RGD modified electrode surface has been proven to be more suitable for cell adhesion, proliferation, differentiation as well as electrochemical measurement. This review discusses fabrication as well as electrochemical measurements of neural cell chips, with particular emphasis on their use for nanotoxicity assessments, sequentially from inception to date. Successful monitoring of quantum dot (QD), graphene oxide (GO) and cosmetic compound toxicity using the newly developed neural cell chip is discussed here as a case study. This review recommends that a neural cell chip established on a nanostructured ligand modified conductive surface can be a potential tool for the toxicity assessment of newly developed nanomaterials prior to their use in biological or biomedical technologies.

  16. Neural Cell Chip Based Electrochemical Detection of Nanotoxicity

    Directory of Open Access Journals (Sweden)

    Md. Abdul Kafi

    2015-07-01

Full Text Available Development of a rapid, sensitive and cost-effective method for toxicity assessment of commonly used nanoparticles is urgently needed for the sustainable development of nanotechnology. A neural cell with high sensitivity and conductivity has become a potential candidate for a cell chip to investigate toxicity of environmental influences. A neural cell immobilized on a conductive surface has become a potential tool for the assessment of nanotoxicity based on electrochemical methods. The effective electrochemical monitoring largely depends on the adequate attachment of a neural cell on the chip surfaces. Recently, establishment of the integrin receptor specific ligand molecule arginine-glycine-aspartic acid (RGD) or its several modifications, RGD-Multi Armed Peptide terminated with cysteine (RGD-MAP-C) and C(RGD)₄, ensures firm attachment of neural cells on the electrode surfaces in either a two dimensional (dot) or three dimensional (rod or pillar) like nano-scale arrangement. A three dimensional RGD modified electrode surface has been proven to be more suitable for cell adhesion, proliferation, differentiation as well as electrochemical measurement. This review discusses fabrication as well as electrochemical measurements of neural cell chips, with particular emphasis on their use for nanotoxicity assessments, sequentially from inception to date. Successful monitoring of quantum dot (QD), graphene oxide (GO) and cosmetic compound toxicity using the newly developed neural cell chip is discussed here as a case study. This review recommends that a neural cell chip established on a nanostructured ligand modified conductive surface can be a potential tool for the toxicity assessment of newly developed nanomaterials prior to their use in biological or biomedical technologies.

  17. Neural Network Based Models for Fusion Applications

    Science.gov (United States)

    Meneghini, Orso; Tema Biwole, Arsene; Luda, Teobaldo; Zywicki, Bailey; Rea, Cristina; Smith, Sterling; Snyder, Phil; Belli, Emily; Staebler, Gary; Canty, Jeff

    2017-10-01

Whole device modeling, engineering design, experimental planning and control applications demand models that are simultaneously physically accurate and fast. This poster reports on the ongoing effort towards the development and validation of a series of models that leverage neural-network (NN) multidimensional regression techniques to accelerate some of the most mission-critical first-principle models for the fusion community, such as: the EPED workflow for prediction of the H-Mode and Super H-Mode pedestal structure; the TGLF and NEO models for the prediction of the turbulent and neoclassical particle, energy and momentum fluxes; and the NEO model for the drift-kinetic solution of the bootstrap current. We also applied NNs to DIII-D experimental data for disruption prediction and for quantifying the effect of RMPs on the pedestal and ELMs. All of these projects were supported by the infrastructure provided by the OMFIT integrated modeling framework. Work supported by US DOE under DE-SC0012656, DE-FG02-95ER54309, DE-FC02-04ER54698.

  18. Iris double recognition based on modified evolutionary neural network

    Science.gov (United States)

    Liu, Shuai; Liu, Yuan-Ning; Zhu, Xiao-Dong; Huo, Guang; Liu, Wen-Tao; Feng, Jia-Kai

    2017-11-01

Aiming at multicategory iris recognition under illumination and noise interference, this paper proposes a method of iris double recognition based on a modified evolutionary neural network. An equalization histogram and a Laplacian of Gaussian operator are used to process the iris to suppress illumination and noise interference, and a Haar wavelet converts the iris feature to a binary feature encoding. The Hamming distance between the test iris and the template iris is calculated and compared with a classification threshold to determine the iris type. If the iris cannot be identified as a distinct type, a secondary recognition is performed. The connection weights of the back-propagation (BP) neural network are adaptively trained by the modified evolutionary neural network, which combines particle swarm optimization with a mutation operator and the BP neural network. Experimental results on different iris libraries under different circumstances show that, under illumination and noise interference, the correct recognition rate of this algorithm is higher, the ROC curve is closer to the coordinate axis, the training and recognition time is shorter, and the stability and the robustness are better.
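The first-stage decision above is a plain Hamming-distance test on binary iris codes; a minimal sketch, where the 256-bit code length and the 0.32 threshold are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary iris codes."""
    code_a, code_b = np.asarray(code_a), np.asarray(code_b)
    return float(np.mean(code_a != code_b))

def classify(test_code, template_code, threshold=0.32):
    """Accept as the same iris when the distance is below the threshold;
    otherwise defer to the secondary (neural network) recognition stage."""
    d = hamming_distance(test_code, template_code)
    return 'same' if d < threshold else 'secondary recognition'

rng = np.random.default_rng(5)
template = rng.integers(0, 2, size=256)          # enrolled binary iris code
noisy = template.copy()
flip = rng.choice(256, size=20, replace=False)   # ~8% bit noise
noisy[flip] ^= 1
print(hamming_distance(template, noisy), classify(noisy, template))
```

Only samples that fall near the threshold (ambiguous under noise or illumination change) reach the second, evolutionary-neural-network stage, which keeps the expensive classifier off the easy cases.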

  19. Adaptive Synchronization of Memristor-based Chaotic Neural Systems

    Directory of Open Access Journals (Sweden)

    Xiaofang Hu

    2014-11-01

    Full Text Available Chaotic neural networks consisting of a great number of chaotic neurons are able to reproduce the rich dynamics observed in biological nervous systems. In recent years, the memristor has attracted much interest in the efficient implementation of artificial synapses and neurons. This work addresses adaptive synchronization of a class of memristor-based neural chaotic systems using a novel adaptive backstepping approach. A systematic design procedure is presented. Simulation results have demonstrated the effectiveness of the proposed adaptive synchronization method and its potential in practical application of memristive chaotic oscillators in secure communication.

  20. Artificial Neural Network Based State Estimators Integrated into Kalmtool

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Ravn, Ole; Poulsen, Niels Kjølstad

    2012-01-01

In this paper we present a toolbox enabling easy evaluation and comparison of different filtering algorithms. The toolbox is called Kalmtool and is a set of MATLAB tools for state estimation of nonlinear systems. The toolbox now contains functions for Artificial Neural Network Based State Estimation as...

  1. RBF neural network based H∞ synchronization for ...

    Indian Academy of Sciences (India)

Based on this neural network and linear matrix inequality (LMI) formulation, the RBFNNHS controller and the learning laws are presented to reduce the effect of disturbance to an H∞ norm constraint. It is shown that finding the RBFNNHS controller and the learning laws can be transformed into the LMI problem and solved ...

  2. Detecting danger labels with RAM-based neural networks

    DEFF Research Database (Denmark)

    Jørgensen, T.M.; Christensen, S.S.; Andersen, A.W.

    1996-01-01

    An image processing system for the automatic location of danger labels on the back of containers is presented. The system uses RAM-based neural networks to locate and classify labels after a pre-processing step involving specially designed non-linear edge filters and RGB-to-HSV conversion. Result...

  3. Neural network based system for script identification in Indian ...

    Indian Academy of Sciences (India)

    2016-08-26

    Aug 26, 2016 ... The paper describes a neural network-based script identification system which can be used in the machine reading of documents written in English, Hindi and Kannada language scripts. Script identification is a basic requirement in automation of document processing, in multi-script, multi-lingual ...

  4. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)


Keywords: Artificial Neural Networks (ANN), p-q theory, (SAPF), Harmonics, Total .....

  5. Neural network-based retrieval from software reuse repositories

    Science.gov (United States)

    Eichmann, David A.; Srinivas, Kankanahalli

    1992-01-01

    A significant hurdle confronts the software reuser attempting to select candidate components from a software repository - discriminating between those components without resorting to inspection of the implementation(s). We outline an approach to this problem based upon neural networks which avoids requiring the repository administrators to define a conceptual closeness graph for the classification vocabulary.

  6. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    Directory of Open Access Journals (Sweden)

    Shao Jie

    2014-01-01

Full Text Available A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with the memory effect. The hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared errors (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden layer neurons. Simulation results of the half-bridge class-D power amplifier (CDPA) with a two-tone signal and broadband signals as input have shown that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance.

  7. SU-8-based microneedles for in vitro neural applications

    International Nuclear Information System (INIS)

    Altuna, Ane; Tijero, María; Berganzo, Javier; Salido, Rafa; Fernández, Luis J; Gabriel, Gemma; Guimerá, Anton; Villa, Rosa; Menéndez de la Prida, Liset

    2010-01-01

    This paper presents novel design, fabrication, packaging and the first in vitro neural activity recordings of SU-8-based microneedles. The polymer SU-8 was chosen because it provides excellent features for the fabrication of flexible and thin probes. A microprobe was designed in order to allow a clean insertion and to minimize the damage caused to neural tissue during in vitro applications. In addition, a tetrode is patterned at the tip of the needle to obtain fine-scale measurements of small neuronal populations within a radius of 100 µm. Impedance characterization of the electrodes has been carried out to demonstrate their viability for neural recording. Finally, probes are inserted into 400 µm thick hippocampal slices, and simultaneous action potentials with peak-to-peak amplitudes of 200–250 µV are detected.

  8. Investigating energy-based pool structure selection in the structure ensemble modeling with experimental distance constraints: The example from a multidomain protein Pub1.

    Science.gov (United States)

    Zhu, Guanhua; Liu, Wei; Bao, Chenglong; Tong, Dudu; Ji, Hui; Shen, Zuowei; Yang, Daiwen; Lu, Lanyuan

    2018-05-01

    The structural variations of multidomain proteins with flexible parts mediate many biological processes, and a structure ensemble can be determined by selecting a weighted combination of representative structures from a simulated structure pool, producing the best fit to experimental constraints such as interatomic distance. In this study, a hybrid structure-based and physics-based atomistic force field with an efficient sampling strategy is adopted to simulate a model di-domain protein against experimental paramagnetic relaxation enhancement (PRE) data that correspond to distance constraints. The molecular dynamics simulations produce a wide range of conformations depicted on a protein energy landscape. Subsequently, a conformational ensemble recovered with low-energy structures and the minimum-size restraint is identified in good agreement with experimental PRE rates, and the result is also supported by chemical shift perturbations and small-angle X-ray scattering data. It is illustrated that the regularizations of energy and ensemble-size prevent an arbitrary interpretation of protein conformations. Moreover, energy is found to serve as a critical control to refine the structure pool and prevent data overfitting, because the absence of energy regularization exposes ensemble construction to the noise from high-energy structures and causes a more ambiguous representation of protein conformations. Finally, we perform structure-ensemble optimizations with a topology-based structure pool, to enhance the understanding on the ensemble results from different sources of pool candidates. © 2018 Wiley Periodicals, Inc.
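The core selection step described above, fitting non-negative structure weights so that the ensemble average reproduces experimental distance constraints, can be sketched with non-negative least squares. The pool and "experimental" distances below are synthetic stand-ins (real constraints would come from PRE rates), and the paper's energy and ensemble-size regularizations are omitted.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)

# Stand-in data: 30 interatomic distances predicted by each of 8 pool
# structures, plus "experimental" distances generated from a hidden
# two-structure mixture.
pool = rng.uniform(10.0, 40.0, size=(8, 30))     # structures x distances
true_w = np.array([0.7, 0.3, 0, 0, 0, 0, 0, 0])
exp_dist = true_w @ pool

# Fit non-negative ensemble weights so the weighted pool average
# reproduces the experimental distances, then normalize to sum to 1.
w, _ = nnls(pool.T, exp_dist)
w /= w.sum()
print(np.round(w, 3))
```

The paper's point is that this fit alone can overfit noisy data: restricting the pool to low-energy structures and penalizing ensemble size keeps high-energy candidates from absorbing the noise.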

  9. HBC-Evo: predicting human breast cancer by exploiting amino acid sequence-based feature spaces and evolutionary ensemble system.

    Science.gov (United States)

    Majid, Abdul; Ali, Safdar

    2015-01-01

We developed a genetic programming (GP)-based evolutionary ensemble system for the early diagnosis, prognosis and prediction of human breast cancer. This system effectively exploits the diversity in feature and decision spaces. First, individual learners are trained in different feature spaces using physicochemical properties of protein amino acids. Their predictions are then stacked to develop the best solution during the GP evolution process. Finally, results for the HBC-Evo system are obtained with an optimal threshold, which is computed using particle swarm optimization. Our novel approach has demonstrated promising results compared to state-of-the-art approaches.

  10. A One-Step-Ahead Smoothing-Based Joint Ensemble Kalman Filter for State-Parameter Estimation of Hydrological Models

    KAUST Repository

    El Gharamti, Mohamad

    2015-11-26

    The ensemble Kalman filter (EnKF) recursively integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following a joint state-parameter augmentation strategy. In this study, we introduce a new smoothing-based joint EnKF scheme, in which a one-step-ahead smoothing of the state is performed before updating the parameters. Numerical experiments are performed with a two-dimensional synthetic subsurface contaminant transport model. The improved performance of the proposed joint EnKF scheme compared to the standard joint EnKF compensates for the modest increase in computational cost.
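
    A minimal NumPy sketch of the standard joint (augmented-state) EnKF analysis step that such schemes build on, not the one-step-ahead smoothing variant itself; the two-variable toy model, the observation operator and the noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_joint_update(Z, H, y, R):
    """Perturbed-observation EnKF analysis on an augmented
    state-parameter ensemble Z (n_aug x n_ens)."""
    n_ens = Z.shape[1]
    A = Z - Z.mean(axis=1, keepdims=True)   # ensemble anomalies
    S = H @ A                               # anomalies in observation space
    # Kalman gain built from sample covariances
    K = (A @ S.T) @ np.linalg.inv(S @ S.T + (n_ens - 1) * R)
    # perturb the observations so the analysis spread stays consistent
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=n_ens).T
    return Z + K @ (Y - H @ Z)

# toy set-up: 2 state variables + 1 unknown parameter, state observed only
n_ens = 200
Z = np.vstack([rng.normal(0.0, 1.0, (2, n_ens)),   # state ensemble
               rng.normal(2.0, 0.5, (1, n_ens))])  # parameter ensemble
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
R = 0.1 * np.eye(2)
Za = enkf_joint_update(Z, H, np.array([0.5, -0.3]), R)
print(Za.shape)  # (3, 200)
```

    The parameter rows are updated purely through their sampled cross-covariance with the observed state, which is the mechanism the joint augmentation strategy relies on.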

  11. A generalized polynomial chaos based ensemble Kalman filter with high accuracy

    International Nuclear Information System (INIS)

    Li Jia; Xiu Dongbin

    2009-01-01

    As one of the most adopted sequential data assimilation methods in many areas, especially those involving complex nonlinear dynamics, the ensemble Kalman filter (EnKF) has been under extensive investigation regarding its properties and efficiency. Compared to other variants of the Kalman filter (KF), EnKF is straightforward to implement, as it employs random ensembles to represent solution states. This, however, introduces sampling errors that affect the accuracy of EnKF in a negative manner. Though sampling errors can be easily reduced by using a large number of samples, in practice this is undesirable as each ensemble member is a solution of the system of state equations and can be time consuming to compute for large-scale problems. In this paper we present an efficient EnKF implementation via generalized polynomial chaos (gPC) expansion. The key ingredients of the proposed approach involve (1) solving the system of stochastic state equations via the gPC methodology to gain efficiency; and (2) sampling the gPC approximation of the stochastic solution with an arbitrarily large number of samples, at virtually no additional computational cost, to drastically reduce the sampling errors. The resulting algorithm thus achieves a high accuracy at reduced computational cost, compared to the classical implementations of EnKF. Numerical examples are provided to verify the convergence property and accuracy improvement of the new algorithm. We also prove that for linear systems with Gaussian noise, the first-order gPC Kalman filter method is equivalent to the exact Kalman filter.
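
    The two-step idea, solve the stochastic state equations once via gPC and then sample the cheap polynomial surrogate arbitrarily densely, can be sketched with NumPy's probabilists' Hermite basis; the scalar "solver", the node count and the degree below are illustrative stand-ins, not the paper's formulation:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(3)

# the expensive forward solve, evaluated only at a few sample nodes
solver = lambda xi: np.exp(0.3 * xi)           # stand-in state map
nodes = rng.standard_normal(64)
coeffs = He.hermefit(nodes, solver(nodes), deg=5)   # gPC coefficients

# step 2: sample the cheap surrogate with an arbitrarily large ensemble
xi_big = rng.standard_normal(100_000)
ensemble = He.hermeval(xi_big, coeffs)
print(ensemble.mean())  # close to the exact mean exp(0.045) ~ 1.046
```

    The large ensemble costs only polynomial evaluations, which is what drives down the sampling error relative to a classical EnKF of the same accuracy.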

  12. A multi-model ensemble approach to seabed mapping

    Science.gov (United States)

    Diesing, Markus; Stephens, David

    2015-06-01

    Seabed habitat mapping based on swath acoustic data and ground-truth samples is an emergent and active marine science discipline. Significant progress could be achieved by transferring techniques and approaches that have been successfully developed and employed in such fields as terrestrial land cover mapping. One such promising approach is the multiple classifier system, which aims at improving classification performance by combining the outputs of several classifiers. Here we present results of a multi-model ensemble applied to multibeam acoustic data covering more than 5000 km2 of seabed in the North Sea with the aim of deriving accurate spatial predictions of seabed substrate. A suite of six machine learning classifiers (k-Nearest Neighbour, Support Vector Machine, Classification Tree, Random Forest, Neural Network and Naïve Bayes) was trained with ground-truth sample data classified into seabed substrate classes and their prediction accuracy was assessed with an independent set of samples. The three and five best performing models were combined into classifier ensembles. Both ensembles led to increased prediction accuracy as compared to the best performing single classifier. The improvements, however, were not statistically significant at the 5% level. Although the three-model ensemble did not perform significantly better than its individual component models, we noticed that the five-model ensemble did perform significantly better than three of the five component models. A classifier ensemble might therefore be an effective strategy to improve classification performance. Another advantage is the fact that the agreement in predicted substrate class between the individual models of the ensemble could be used as a measure of confidence. We propose a simple and spatially explicit measure of confidence that is based on model agreement and prediction accuracy.
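
    The combination rule and the agreement-based confidence measure can be sketched in a few lines; the substrate class labels and per-classifier outputs below are made-up illustration values:

```python
import numpy as np

def ensemble_vote(predictions):
    """Combine per-classifier class predictions (n_models x n_pixels)
    by majority vote; the agreement fraction doubles as a per-pixel
    confidence map."""
    predictions = np.asarray(predictions)
    n_models = predictions.shape[0]
    voted, confidence = [], []
    for col in predictions.T:                    # one map pixel at a time
        classes, counts = np.unique(col, return_counts=True)
        voted.append(classes[np.argmax(counts)])
        confidence.append(counts.max() / n_models)
    return np.array(voted), np.array(confidence)

# five hypothetical substrate classifiers, four map pixels
preds = [[0, 1, 2, 1],
         [0, 1, 2, 2],
         [0, 1, 1, 2],
         [0, 2, 2, 2],
         [1, 1, 2, 2]]
labels, conf = ensemble_vote(preds)
print(labels)  # [0 1 2 2]
print(conf)    # [0.8 0.8 0.8 0.8]
```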

  13. Dynamic Security Assessment of Western Danish Power System Based on Ensemble Decision Trees

    DEFF Research Database (Denmark)

    Liu, Leo; Bak, Claus Leth; Chen, Zhe

    2014-01-01

    With the increasing penetration of renewable energy resources and other forms of dispersed generation, more and more uncertainties will be brought to the dynamic security assessment (DSA) of power systems. This paper proposes an approach that uses ensemble decision trees (EDT) for online DSA. Fed with online wide-area measurement data, it is capable of not only predicting the security states of current operating conditions (OC) with high accuracy, but also indicating the confidence of the security states 1 minute ahead of real time by an outlier identification method. The results of EDT together...

  14. The Fault Diagnosis of Rolling Bearing Based on Ensemble Empirical Mode Decomposition and Random Forest

    Directory of Open Access Journals (Sweden)

    Xiwen Qin

    2017-01-01

    Full Text Available Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. Firstly, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Then their energy entropy is calculated as the feature. Finally, the classification is performed by RF. In addition, the wavelet method is applied in the same processing chain for comparison with EEMD. The results of the comparison show that the EEMD method is more accurate than the wavelet method.
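
    The energy-entropy feature step can be sketched as follows; the two "IMFs" here are synthetic stand-ins, since a real pipeline would obtain them from an EEMD implementation applied to the vibration signal before passing the features to the Random Forest:

```python
import numpy as np

def energy_entropy(imfs):
    """Energy entropy of a set of IMFs: p_i = E_i / E_total,
    H = -sum(p_i * log(p_i))."""
    energies = np.array([np.sum(imf ** 2) for imf in imfs])
    p = energies / energies.sum()
    return -np.sum(p * np.log(p))

t = np.linspace(0, 1, 1000, endpoint=False)
# stand-in "IMFs": two tones of different energy
imfs = [np.sin(2 * np.pi * 50 * t),
        0.3 * np.sin(2 * np.pi * 5 * t)]
print(round(energy_entropy(imfs), 3))  # -> 0.285
```

    A faulty bearing redistributes energy across frequency bands, so the entropy of the IMF energy distribution shifts, which is what makes it usable as a classification feature.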

  15. The image recognition based on neural network and Bayesian decision

    Science.gov (United States)

    Wang, Chugege

    2018-04-01

    Research on artificial neural networks began in the 1940s, and they are now an important part of artificial intelligence. At present, they are a hot topic in the fields of neuroscience, computer science, brain science, mathematics, and psychology. Bayes' theorem, named after Thomas Bayes, was first published in 1763. After development throughout the twentieth century, it has become widespread in all areas of statistics. In recent years, owing to solutions to the problem of high-dimensional integral calculation, Bayesian statistics has advanced theoretically, solving many problems that cannot be handled by classical statistics, and has been applied in interdisciplinary fields. In this paper, the related concepts and principles of artificial neural networks are introduced. The paper also summarizes the basic content and principles of Bayesian statistics, and combines artificial neural network technology with Bayesian decision theory in several aspects of image recognition, such as an enhanced face detection method based on a neural network and Bayesian decision, as well as image classification based on Bayesian decision. It can be seen that the combination of artificial intelligence and statistical methods remains a hot research topic.

  16. Deep Neural Network-Based Chinese Semantic Role Labeling

    Institute of Scientific and Technical Information of China (English)

    ZHENG Xiaoqing; CHEN Jun; SHANG Guoqiang

    2017-01-01

    A recent trend in machine learning is to use deep architectures to discover multiple levels of features from data, which has achieved impressive results on various natural language processing (NLP) tasks. We propose a deep neural network-based solution to Chinese semantic role labeling (SRL) with its application to message analysis. The solution adopts a six-step strategy: text normalization, named entity recognition (NER), Chinese word segmentation and part-of-speech (POS) tagging, theme classification, SRL, and slot filling. For each step, a novel deep neural network-based model is designed and optimized, particularly for smart phone applications. Experiment results on all the NLP sub-tasks of the solution show that the proposed neural networks achieve state-of-the-art performance with minimal computational cost. The speed advantage of deep neural networks makes them more competitive for large-scale applications or applications requiring real-time response, highlighting the potential of the proposed solution for practical NLP systems.

  17. Climatic Models Ensemble-based Mid-21st Century Runoff Projections: A Bayesian Framework

    Science.gov (United States)

    Achieng, K. O.; Zhu, J.

    2017-12-01

    There are a number of North American Regional Climate Change Assessment Program (NARCCAP) climatic models that have been used to project surface runoff in the mid-21st century. Statistical model selection techniques are often used to select the model that best fits the data; however, different selection techniques often lead to different conclusions. In this study, ten models are averaged in a Bayesian paradigm to project runoff. Bayesian Model Averaging (BMA) is used to project runoff and to identify the effect of model uncertainty on future projections. Baseflow separation - a recursive two-parameter digital filter, also called the Eckhardt filter - is used to separate USGS streamflow (total runoff) into two components: baseflow and surface runoff. We use this surface runoff as the a priori runoff when conducting BMA of the runoff simulated by the ten RCM models. The primary objective of this study is to evaluate how well RCM multi-model ensembles simulate surface runoff, in a Bayesian framework. Specifically, we investigate and discuss the following questions: How well does the ensemble of ten RCM models jointly simulate surface runoff when averaged using BMA, given the a priori surface runoff? What are the effects of model uncertainty on surface runoff simulation?
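
    A much-simplified illustration of the averaging step, assuming Gaussian model likelihoods with a fixed variance (full BMA estimates the weights and variances jointly, typically by EM); the three "models" and the reference runoff are toy numbers:

```python
import numpy as np

def bma_weights(sims, obs, sigma=1.0):
    """Simplified BMA: weight each model by its Gaussian likelihood
    against the reference (a-priori) runoff."""
    sims, obs = np.asarray(sims), np.asarray(obs)
    loglik = -0.5 * np.sum((sims - obs) ** 2, axis=1) / sigma ** 2
    w = np.exp(loglik - loglik.max())   # stabilize before normalizing
    return w / w.sum()

obs = np.array([1.0, 2.0, 3.0, 2.5])          # a-priori surface runoff
sims = np.array([[1.1, 2.1, 2.9, 2.4],        # model A: close to obs
                 [0.5, 1.0, 2.0, 1.5],        # model B: biased low
                 [1.0, 2.2, 3.1, 2.6]])       # model C: close to obs
w = bma_weights(sims, obs)
projection = w @ sims                          # BMA mean projection
print(w.round(3), projection.round(2))
```

    The spread of the weights is one way the effect of model uncertainty shows up: a biased model is automatically down-weighted rather than discarded outright as in model selection.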

  18. Implementation of neural network based non-linear predictive control

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1999-01-01

    This paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum phase systems, but has also been proposed to be extended for the control of non-linear systems. GPC is model based, and in this paper we propose the use of a neural network for modeling the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis on an efficient quasi-Newton algorithm. The performance is demonstrated on a pneumatic servo system.

  19. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    Science.gov (United States)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
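
    The "decomposition and ensemble" principle can be sketched as: decompose the series, forecast each component separately, then sum the component forecasts. Below, a moving-average split stands in for CEEMD and a least-squares AR(1) stands in for the GWO-tuned SVR; both substitutions are for illustration only:

```python
import numpy as np

def ar1_forecast(series, steps=1):
    """One-step-recursive AR(1) fit by least squares - a stand-in for
    the paper's GWO-optimized SVR component predictor."""
    x, y = series[:-1], series[1:]
    a = np.dot(x, y) / np.dot(x, x)
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last
        out.append(last)
    return np.array(out)

def decompose(series, window=5):
    """Two-component stand-in for CEEMD: smooth trend + residual.
    By construction the components sum back to the original series."""
    kernel = np.ones(window) / window
    trend = np.convolve(series, kernel, mode="same")
    return [series - trend, trend]

rng = np.random.default_rng(1)
t = np.arange(200)
series = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, 200)

components = decompose(series)
forecast = sum(ar1_forecast(c, steps=3) for c in components)
print(forecast.shape)  # (3,)
```

    Forecasting each simpler component separately is what makes the decomposition worthwhile: the component models face far less complex dynamics than the raw PM2.5 series.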

  20. AN ENSEMBLE TEMPLATE MATCHING AND CONTENT-BASED IMAGE RETRIEVAL SCHEME TOWARDS EARLY STAGE DETECTION OF MELANOMA

    Directory of Open Access Journals (Sweden)

    Spiros Kostopoulos

    2016-12-01

    Full Text Available Malignant melanoma represents the most dangerous type of skin cancer. In this study we present an ensemble classification scheme, employing mutual information, cross-correlation, and clustering based on proximity of image features, for early-stage assessment of melanomas in plain photography images. The proposed scheme performs two main operations. First, it retrieves the image samples most similar to the unknown case from an available image database with verified benign moles and malignant melanoma cases. Second, it provides an automated estimation regarding the nature of the unknown image sample based on the majority of the most similar images retrieved from the available database. Clinical material comprised 75 melanoma and 75 benign plain photography images collected from publicly available dermatological atlases. Results showed that the ensemble scheme outperformed all other methods tested in terms of accuracy, with 94.9±1.5%, following an external cross-validation evaluation methodology. The proposed scheme may benefit patients by providing a second-opinion consultation during the self-skin examination process, and physicians by providing a second-opinion estimation regarding the nature of suspicious moles that may assist decision making, especially for ambiguous cases, safeguarding in this way against potential diagnostic misinterpretations.

  1. Assessing a robust ensemble-based Kalman filter for efficient ecosystem data assimilation of the Cretan Sea

    KAUST Repository

    Triantafyllou, George N.; Hoteit, Ibrahim; Luo, Xiaodong; Tsiaras, Kostas P.; Petihakis, George

    2013-01-01

    An application of an ensemble-based robust filter for data assimilation into an ecosystem model of the Cretan Sea is presented and discussed. The ecosystem model comprises two on-line coupled sub-models: the Princeton Ocean Model (POM) and the European Regional Seas Ecosystem Model (ERSEM). The filtering scheme is based on the Singular Evolutive Interpolated Kalman (SEIK) filter which is implemented with a time-local H∞ filtering strategy to enhance robustness and performances during periods of strong ecosystem variability. Assimilation experiments in the Cretan Sea indicate that robustness can be achieved in the SEIK filter by introducing an adaptive inflation scheme of the modes of the filter error covariance matrix. Twin-experiments are performed to evaluate the performance of the assimilation system and to study the benefits of using robust filtering in an ensemble filtering framework. Pseudo-observations of surface chlorophyll, extracted from a model reference run, were assimilated every two days. Simulation results suggest that the adaptive inflation scheme significantly improves the behavior of the SEIK filter during periods of strong ecosystem variability. © 2012 Elsevier B.V.

  2. A novel computer-aided diagnosis system for breast MRI based on feature selection and ensemble learning.

    Science.gov (United States)

    Lu, Wei; Li, Zhe; Chu, Jinghui

    2017-04-01

    Breast cancer is a common cancer among women. With the development of modern medical science and information technology, medical imaging techniques have an increasingly important role in the early detection and diagnosis of breast cancer. In this paper, we propose an automated computer-aided diagnosis (CADx) framework for magnetic resonance imaging (MRI). The scheme consists of an ensemble of several machine learning-based techniques, including ensemble under-sampling (EUS) for imbalanced data processing, the Relief algorithm for feature selection, the subspace method for providing data diversity, and Adaboost for improving the performance of base classifiers. We extracted morphological, various texture, and Gabor features. To clarify the feature subsets' physical meaning, subspaces are built by combining morphological features with each kind of texture or Gabor feature. We tested our proposal using a manually segmented Region of Interest (ROI) data set, which contains 438 images of malignant tumors and 1898 images of normal tissues or benign tumors. Our proposal achieves an area under the ROC curve (AUC) value of 0.9617, which outperforms most other state-of-the-art breast MRI CADx systems. Compared with other methods, our proposal significantly reduces the false-positive classification rate. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. UNMANNED AIR VEHICLE STABILIZATION BASED ON NEURAL NETWORK REGULATOR

    Directory of Open Access Journals (Sweden)

    S. S. Andropov

    2016-09-01

    Full Text Available The problem of stabilizing a multirotor unmanned aerial vehicle in an environment with external disturbances is studied. A classic proportional-integral-derivative controller is analyzed and its flaws are outlined: an inability to respond to changing external conditions and the need for manual adjustment of its coefficients. The paper presents an adaptive adjustment method for the coefficients of the proportional-integral-derivative controller based on neural networks. The neural network structure and its input and output data are described. Neural networks with three layers are used to create an adaptive stabilization system for the multirotor unmanned aerial vehicle. Training of the networks is done with the back-propagation method. Each neural network produces the regulator coefficients for one stabilization angle as its output. A method for network training is explained. Several graphs of the transition process at different stages of learning, including processes with external disturbances, are presented. It is shown that the system meets stabilization requirements after a sufficient number of iterations. The described adjustment method for the coefficients can be used in remote control of unmanned aerial vehicles operating in changing environments.
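
    The control structure can be sketched as a discrete PID whose gains are writable at every step; in the paper the three-layer networks would supply kp, ki and kd from the vehicle state, while here a fixed gain set and a toy first-order plant stand in:

```python
class PID:
    """Discrete PID whose gains can be re-set on every step, e.g. by a
    neural tuner (here they simply stay fixed)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy first-order attitude-like plant x' = -x + u, setpoint 1.0
dt, x, ctrl = 0.01, 0.0, PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
errors = []
for k in range(2000):
    err = 1.0 - x
    # an adaptive scheme would update ctrl.kp/ki/kd here from the state
    u = ctrl.step(err)
    x += dt * (-x + u)               # Euler step of the plant
    errors.append(abs(err))
print(round(errors[-1], 3))
```

    Exposing the gains as mutable attributes is exactly the hook an adaptive tuner needs: the controller structure stays classical, only the coefficients are learned.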

  5. Artificial Neural Network Based Optical Character Recognition

    OpenAIRE

    Vivek Shrivastava; Navdeep Sharma

    2012-01-01

    Optical Character Recognition deals with the recognition and classification of characters from an image. For the recognition to be accurate, certain topological and geometrical properties are calculated, based on which a character is classified and recognized. Also, human psychology perceives characters by their overall shape and features such as strokes, curves, protrusions, enclosures, etc. These properties, also called features, are extracted from the image by means of spatial pixel-...

  6. [Simulation of cropland soil moisture based on an ensemble Kalman filter].

    Science.gov (United States)

    Liu, Zhao; Zhou, Yan-Lian; Ju, Wei-Min; Gao, Ping

    2011-11-01

    By using an ensemble Kalman filter (EnKF) to assimilate the observed soil moisture data, the modified boreal ecosystem productivity simulator (BEPS) model was adopted to simulate the dynamics of soil moisture in winter wheat root zones at Xuzhou Agro-meteorological Station, Jiangsu Province of China during the growth seasons in 2000-2004. After the assimilation of observed data, the determination coefficient, root mean square error, and average absolute error of simulated soil moisture were in the ranges of 0.626-0.943, 0.018-0.042, and 0.021-0.041, respectively, with the simulation precision improved significantly, as compared with that before assimilation, indicating the applicability of data assimilation in improving the simulation of soil moisture. The experimental results at a single point showed that the errors in the forcing data and observations, and the frequency and soil depth of the assimilation of observed data, all had obvious effects on the simulated soil moisture.

  7. Cryptography based on neural networks - analytical results

    International Nuclear Information System (INIS)

    Rosen-Zvi, Michal; Kanter, Ido; Kinzel, Wolfgang

    2002-01-01

    The mutual learning process between two parity feed-forward networks with discrete and continuous weights is studied analytically, and we find that the number of steps required to achieve full synchronization between the two networks in the case of discrete weights is finite. The synchronization process is shown to be non-self-averaging and the analytical solution is based on random auxiliary variables. The learning time of an attacker that is trying to imitate one of the networks is examined analytically and is found to be much longer than the synchronization time. Analytical results are found to be in agreement with simulations. (letter to the editor)

  8. Recursive Neural Networks Based on PSO for Image Parsing

    Directory of Open Access Journals (Sweden)

    Guo-Rong Cai

    2013-01-01

    Full Text Available This paper presents an image parsing algorithm based on Particle Swarm Optimization (PSO) and Recursive Neural Networks (RNNs). State-of-the-art methods such as the traditional RNN-based parsing strategy use L-BFGS over the complete data for learning the parameters. However, this can cause problems due to the nondifferentiable objective function. In order to solve this problem, the PSO algorithm is employed to tune the weights of the RNN so as to minimize the objective. Experimental results obtained on the Stanford background dataset show that our PSO-based training algorithm outperforms traditional RNN, Pixel CRF, region-based energy, simultaneous MRF, and superpixel MRF.
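
    A minimal global-best PSO of the kind used to sidestep a nondifferentiable objective can be sketched as follows; the inertia and acceleration constants are common textbook values rather than the paper's settings, and a toy L1 objective stands in for the RNN loss:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO: no gradients needed, only objective
    evaluations, which is why it handles nondifferentiable losses."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# smoke test on an objective that is nondifferentiable at its optimum
best, val = pso(lambda p: np.abs(p - 1.0).sum(), dim=4)
print(best.round(2), val)
```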

  9. A recurrent neural network based on projection operator for extended general variational inequalities.

    Science.gov (United States)

    Liu, Qingshan; Cao, Jinde

    2010-06-01

    Based on the projection operator, a recurrent neural network is proposed for solving extended general variational inequalities (EGVIs). Sufficient conditions are provided to ensure the global convergence of the proposed neural network based on Lyapunov methods. Compared with the existing neural networks for variational inequalities, the proposed neural network is a modified version of the general projection neural network existing in the literature and capable of solving the EGVI problems. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed neural network.
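
    The projection dynamics underlying such networks can be sketched for a plain variational inequality over a box, a far simpler feasible set and mapping than the extended general case treated in the paper:

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]."""
    return np.minimum(np.maximum(x, lo), hi)

def projection_network(F, x0, lo, hi, alpha=0.5, dt=0.05, steps=2000):
    """Euler integration of the classic projection dynamics
    x' = P(x - alpha*F(x)) - x, whose equilibria solve the VI."""
    x = x0.astype(float)
    for _ in range(steps):
        x = x + dt * (project_box(x - alpha * F(x), lo, hi) - x)
    return x

# VI with F(x) = x - b over [0, 1]^3: the solution is b clipped to the box
b = np.array([0.3, 1.7, -0.4])
F = lambda x: x - b
x_star = projection_network(F, np.zeros(3), 0.0, 1.0)
print(x_star.round(3))  # -> approximately [0.3, 1.0, 0.0]
```

    An equilibrium x* satisfies x* = P(x* - alpha*F(x*)), which is exactly the fixed-point characterization of the variational inequality solution that the Lyapunov analysis in such papers builds on.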

  10. Artificial Neural Network Based Model of Photovoltaic Cell

    Directory of Open Access Journals (Sweden)

    Messaouda Azzouzi

    2017-03-01

    Full Text Available This work concerns the modeling of a photovoltaic system and the prediction of the sensitivity of electrical parameters (current, power) of six types of photovoltaic cells based on the voltage applied between terminals, using one of the best-known artificial intelligence techniques, namely Artificial Neural Networks. The results of the modeling and prediction are presented as a function of the number of iterations and using different learning algorithms to obtain the best results.

  11. Nanowire FET Based Neural Element for Robotic Tactile Sensing Skin

    Directory of Open Access Journals (Sweden)

    William Taube Navaraj

    2017-09-01

    Full Text Available This paper presents a novel Neural Nanowire Field Effect Transistor (υ-NWFET) based hardware-implementable neural network (HNN) approach for tactile data processing in electronic skin (e-skin). The viability of Si nanowires (NWs) as the active material for υ-NWFETs in an HNN is explored through modeling and demonstrated by fabricating the first device. Using υ-NWFETs to realize HNNs is an interesting approach, as by printing NWs on large-area flexible substrates it will be possible to develop a bendable tactile skin with distributed neural elements (for local data processing, as in biological skin) in the backplane. The modeling and simulation of υ-NWFET based devices show that the overlapping areas between individual gates and the floating gate determine the initial synaptic weights of the neural network - thus validating the working of υ-NWFETs as the building block for an HNN. The simulation has been further extended to υ-NWFET based circuits and a neuronal computation system, and this has been validated by interfacing it with a transparent tactile skin prototype (comprising a 6 × 6 array of ITO-based capacitive tactile sensors integrated on the palm of a 3D-printed robotic hand). In this regard, a tactile data coding system is presented to detect touch gestures and the direction of touch. Following these simulation studies, a four-gated υ-NWFET is fabricated with a Pt/Ti metal stack for the gates, source and drain, a Ni floating gate, and an Al2O3 high-k dielectric layer. The current-voltage characteristics of the fabricated υ-NWFET devices confirm the dependence of turn-off voltages on the (synaptic) weight of each gate. The presented υ-NWFET approach is promising for a neuro-robotic tactile sensory system with distributed computing, as well as numerous futuristic applications such as prosthetics and electroceuticals.

  12. Control of GMA Butt Joint Welding Based on Neural Networks

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2004-01-01

    This paper presents results from an experimentally based research on Gas Metal Arc Welding (GMAW), controlled by the artificial neural network (ANN) technology. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a high degree of quality......-linear least square error minimization, has been used with the back-propagation algorithm for training the network, while a Bayesian regularization technique has been successfully applied for minimizing the risk of inexpedient over-training....

  13. Exploiting ensemble learning for automatic cataract detection and grading.

    Science.gov (United States)

    Yang, Ji-Jiang; Li, Jianqiang; Shen, Ruifang; Zeng, Yang; He, Jian; Bi, Jing; Li, Yong; Zhang, Qinyan; Peng, Lihui; Wang, Qing

    2016-02-01

    Cataract is defined as a lenticular opacity presenting usually with poor visual acuity. It is one of the most common causes of visual impairment worldwide. Early diagnosis demands the expertise of trained healthcare professionals, which may present a barrier to early intervention due to underlying costs. To date, studies reported in the literature utilize a single learning model for retinal image classification in grading cataract severity. We present an ensemble learning based approach as a means to improving diagnostic accuracy. Three independent feature sets, i.e., wavelet-, sketch-, and texture-based features, are extracted from each fundus image. For each feature set, two base learning models, i.e., Support Vector Machine and Back Propagation Neural Network, are built. Then, the ensemble methods, majority voting and stacking, are investigated to combine the multiple base learning models for final fundus image classification. Empirical experiments are conducted for cataract detection (two-class task, i.e., cataract or non-cataractous) and cataract grading (four-class task, i.e., non-cataractous, mild, moderate or severe) tasks. The best performance of the ensemble classifier is 93.2% and 84.5% in terms of the correct classification rates for cataract detection and grading tasks, respectively. The results demonstrate that the ensemble classifier outperforms the single learning model significantly, which also illustrates the effectiveness of the proposed approach. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Data systems and computer science: Neural networks base R/T program overview

    Science.gov (United States)

    Gulati, Sandeep

    1991-01-01

    The research base, in the U.S. and abroad, for the development of neural network technology is discussed. The technical objectives are to develop and demonstrate adaptive, neural information processing concepts. The leveraging of external funding is also discussed.

  15. Can-Evo-Ens: Classifier stacking based evolutionary ensemble system for prediction of human breast cancer using amino acid sequences.

    Science.gov (United States)

    Ali, Safdar; Majid, Abdul

    2015-04-01

    The diagnosis of human breast cancer is an intricate process, and specific indicators may produce negative results. In order to avoid misleading results, an accurate and reliable diagnostic system for breast cancer is indispensable. Recently, several interesting machine-learning (ML) approaches have been proposed for the prediction of breast cancer. To this end, we developed a novel classifier stacking based evolutionary ensemble system, "Can-Evo-Ens", for predicting amino acid sequences associated with breast cancer. In this paper, first, we selected four diverse types of ML algorithms, Naïve Bayes, K-Nearest Neighbor, Support Vector Machines, and Random Forest, as base-level classifiers. These classifiers are trained individually in different feature spaces using physicochemical properties of amino acids. In order to exploit the decision spaces, the preliminary predictions of the base-level classifiers are stacked. Genetic programming (GP) is then employed to develop a meta-classifier that optimally combines the predictions of the base classifiers. The most suitable threshold value of the best-evolved predictor is computed using the Particle Swarm Optimization technique. Our experiments have demonstrated the robustness of the Can-Evo-Ens system on an independent validation dataset. The proposed system achieved the highest Area Under Curve (AUC) of the ROC curve, 99.95%, for cancer prediction. The comparative results revealed that the proposed approach is better than individual ML approaches and the conventional ensemble approaches AdaBoostM1, Bagging, GentleBoost, and Random Subspace. It is expected that the proposed novel system will have a major impact on the fields of Biomedicine, Genomics, Proteomics, Bioinformatics, and Drug Development. Copyright © 2015 Elsevier Inc. All rights reserved.
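
    The two-level idea, base learners trained in separate feature spaces whose stacked predictions feed a meta-classifier, can be sketched as follows; threshold-style base scorers and a logistic meta-learner stand in for the paper's NB/KNN/SVM/RF base classifiers and GP-evolved combiner, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic two-feature data; true label depends on both features
X = rng.normal(size=(300, 2))
y = ((X[:, 0] + X[:, 1]) > 0).astype(float)

def base_scores(X):
    """Each 'base learner' scores using only one feature space
    (here, one column), mimicking learners trained on
    different property sets."""
    return np.column_stack([1 / (1 + np.exp(-3 * X[:, 0])),
                            1 / (1 + np.exp(-3 * X[:, 1]))])

# meta-level: logistic regression on the stacked base predictions
Z = base_scores(X)
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Z @ w + b)))
    w -= 0.5 * (Z.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((1 / (1 + np.exp(-(Z @ w + b)))) > 0.5) == (y == 1))
base_acc = np.mean((X[:, 0] > 0) == (y == 1))  # best single base learner
print(round(acc, 2), round(base_acc, 2))
```

    The point of stacking is visible even in this toy: neither single-feature learner can express the joint decision rule, but the meta-learner combining their scores can.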

  16. Task-dependent neural bases of perceiving emotionally expressive targets

    Directory of Open Access Journals (Sweden)

    Jamil eZaki

    2012-08-01

    Full Text Available Social cognition is fundamentally interpersonal: individuals’ behavior and dispositions critically affect their interaction partners’ information processing. However, cognitive neuroscience studies, partially because of methodological constraints, have remained largely perceiver-centric: focusing on the abilities, motivations, and goals of social perceivers while largely ignoring interpersonal effects. Here, we address this knowledge gap by examining the neural bases of perceiving emotionally expressive and inexpressive social targets. Sixteen perceivers were scanned using fMRI while they watched targets discussing emotional autobiographical events. Perceivers continuously rated each target’s emotional state or eye-gaze direction. The effects of targets’ emotional expressivity on perceivers’ brain activity depended on task set: when perceivers explicitly attended to targets’ emotions, expressivity predicted activity in neural structures—including medial prefrontal and posterior cingulate cortex—associated with drawing inferences about mental states. When perceivers instead attended to targets’ eye-gaze, target expressivity predicted activity in regions—including somatosensory cortex, fusiform gyrus, and motor cortex—associated with monitoring sensorimotor states and biological motion. These findings suggest that expressive targets affect information processing in a manner that depends on perceivers’ goals. More broadly, these data provide an early step towards understanding the neural bases of interpersonal social cognition.

  17. Chinese Sentence Classification Based on Convolutional Neural Network

    Science.gov (United States)

    Gu, Chengwei; Wu, Ming; Zhang, Chuang

    2017-10-01

    Sentence classification is one of the significant issues in Natural Language Processing (NLP), and feature extraction is often regarded as its key step. Traditional machine-learning methods, such as the Naive Bayesian model, cannot take high-level features into consideration. Neural networks for sentence classification can exploit contextual information to achieve better results. In this paper, we focus on classifying Chinese sentences and propose a novel Convolutional Neural Network (CNN) architecture for Chinese sentence classification. In particular, whereas most previous methods use a softmax classifier for prediction, we embed a linear support vector machine in place of softmax in the deep neural network model, minimizing a margin-based loss to obtain a better result, and we use tanh as the activation function instead of ReLU. The CNN model improves the results on Chinese sentence classification tasks. Experimental results on a Chinese news title database validate the effectiveness of our model.
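    The margin-based loss minimized in place of softmax can be sketched as a standard multiclass hinge loss; this generic formulation is an assumption, not the paper's exact objective:

```python
def hinge_loss(scores, true_idx, margin=1.0):
    """Multiclass margin-based (hinge) loss: penalise every class whose
    score comes within `margin` of the true class's score."""
    s_true = scores[true_idx]
    return sum(max(0.0, margin - s_true + s)
               for i, s in enumerate(scores) if i != true_idx)

print(hinge_loss([2.0, 0.5, -1.0], 0))  # 0.0 (all margins satisfied)
```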

  18. Parametric Jominy profiles predictor based on neural networks

    Directory of Open Access Journals (Sweden)

    Valentini, R.

    2005-12-01

    Full Text Available The paper presents a neural-network-based method for the prediction of the Jominy hardness profiles of microalloyed boron steels. The Jominy profile is parameterized, and the parameters, which are a sort of "compact representation" of the profile itself, are linked to the steel's chemical composition through a neural network. Numerical results are presented and discussed.

    The paper presents a method, based on neural networks, for estimating Jominy hardness profiles of boron microalloyed steels. The Jominy profile parameters, which constitute a kind of "compact representation" of the profile itself, are determined and related to the chemical composition of the steel by means of a neural network. The numerical results are presented and discussed.

  19. A Gain-Scheduling PI Control Based on Neural Networks

    Directory of Open Access Journals (Sweden)

    Stefania Tronci

    2017-01-01

    Full Text Available This paper presents a gain-scheduling design technique that relies upon neural models to approximate plant behaviour. The controller design is based on generic model control (GMC) formalisms and linearization of the neural model of the process. As a result, a PI controller action is obtained, where the gain depends on the state of the system and is adapted instantaneously on-line. The algorithm is tested on a nonisothermal continuous stirred tank reactor (CSTR), considering both single-input single-output (SISO) and multi-input multi-output (MIMO) control problems. Simulation results show that the proposed controller provides satisfactory performance during set-point changes and disturbance rejection.
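    A minimal sketch of the gain-scheduling idea: a discrete PI step whose proportional gain is looked up from the current state and updated on-line. The paper derives the gain from a linearised neural model of the process; here an arbitrary callable `gain_of_state` stands in for it, and `ki_ratio` and `dt` are illustrative parameters:

```python
def make_gs_pi(gain_of_state, ki_ratio=0.5, dt=0.1):
    """Gain-scheduled PI step: kp is recomputed from the measured state at
    every call, standing in for the gain from the linearised neural model."""
    acc = {"i": 0.0}  # integral of the error
    def step(setpoint, y):
        kp = gain_of_state(y)          # state-dependent gain, adapted on-line
        e = setpoint - y
        acc["i"] += e * dt
        return kp * (e + ki_ratio * acc["i"])
    return step

# Illustrative schedule: higher gain at low state values.
ctrl = make_gs_pi(lambda y: 2.0 if y < 1.0 else 1.0)
u = ctrl(1.0, 0.0)  # kp=2, e=1, integral=0.1 -> u = 2*(1 + 0.05) = 2.1
```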

  20. Invariant moments based convolutional neural networks for image analysis

    Directory of Open Access Journals (Sweden)

    Vijayalakshmi G.V. Mahesh

    2017-01-01

    Full Text Available The paper proposes a method using a convolutional neural network to effectively evaluate the discrimination between face and non-face patterns, gender classification using facial images, and facial expression recognition. The novelty of the method lies in the use of initial trainable convolution kernel coefficients derived from Zernike moments by varying the moment order. The performance of the proposed method was compared with a convolutional neural network architecture that used random kernels as initial training parameters. The multilevel configuration of Zernike moments was significant in extracting shape information suitable for hierarchical feature learning for image analysis and classification. Furthermore, the results showed an outstanding performance of the Zernike-moment-based kernels in terms of computation time and classification accuracy.

  1. ID card number detection algorithm based on convolutional neural network

    Science.gov (United States)

    Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan

    2018-04-01

    In this paper, a new detection algorithm based on a Convolutional Neural Network is presented to realize fast and convenient ID information extraction in multiple scenarios. The algorithm uses a mobile device running the Android operating system to locate and extract the ID number: it exploits the special color distribution of the ID card to select the appropriate channel component; applies image threshold segmentation, noise processing, and morphological processing to binarize the image; uses image rotation and the projection method for horizontal correction when the image is tilted; and finally extracts single characters by the projection method and recognizes them with the Convolutional Neural Network. Tests show that a single ID number image takes about 80 ms from extraction to identification, with an accuracy rate of about 99%, so the method can be applied in real production and living environments.
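    The projection step used for single-character extraction can be sketched as follows: columns of a binary image are summed, and characters are split at runs of zero columns (a toy image stands in for the binarized ID number strip):

```python
def segment_columns(binary_img):
    """Vertical-projection segmentation: sum each column of a binary image
    and return (start, end) column index pairs for each non-empty run."""
    h, w = len(binary_img), len(binary_img[0])
    proj = [sum(binary_img[r][c] for r in range(h)) for c in range(w)]
    segments, start = [], None
    for c, v in enumerate(proj + [0]):  # sentinel zero closes the last run
        if v > 0 and start is None:
            start = c
        elif v == 0 and start is not None:
            segments.append((start, c - 1))
            start = None
    return segments

# Toy 2-row binary image with two "characters".
img = [[0, 1, 1, 0, 0, 1, 0],
       [0, 1, 0, 0, 0, 1, 0]]
print(segment_columns(img))  # [(1, 2), (5, 5)]
```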

  2. On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles

    KAUST Repository

    Luo, Xiaodong

    2010-09-19

    The ensemble square root filter (EnSRF) [1, 2, 3, 4] is a popular method for data assimilation in high-dimensional systems (e.g., geophysics models). Essentially, the EnSRF is a Monte Carlo implementation of the conventional Kalman filter (KF) [5, 6]. It differs from the KF mainly at the prediction steps, where it is ensembles of the system state, rather than the means and covariance matrices, that are propagated forward. In doing this, the EnSRF is computationally more efficient than the KF, since propagating a covariance matrix forward in high-dimensional systems is prohibitively expensive. In addition, the EnSRF is also very convenient to implement. By propagating the ensembles of the system state, the EnSRF can be directly applied to nonlinear systems without any change in comparison to the assimilation procedures in linear systems. However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in evaluation of the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling's interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well-established EnSRF, the ensemble transform Kalman filter (ETKF) [2].
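    For a scalar state, the unscented-transform route to a symmetric ensemble can be sketched as below; `kappa` is the usual scaling parameter, and the check shows why symmetry removes sampling error in the ensemble mean and variance by construction:

```python
import math

def sigma_points_1d(mean, var, kappa=2.0):
    """Symmetric ensemble for a scalar state via the unscented transform:
    2n+1 points (n=1 here) placed symmetrically about the mean, with the
    standard weights w0 = kappa/(n+kappa), wi = 1/(2(n+kappa))."""
    spread = math.sqrt((1.0 + kappa) * var)
    pts = [mean, mean + spread, mean - spread]
    w0 = kappa / (1.0 + kappa)
    wi = 1.0 / (2.0 * (1.0 + kappa))
    return pts, [w0, wi, wi]

pts, w = sigma_points_1d(5.0, 4.0)
mean = sum(p * wj for p, wj in zip(pts, w))               # recovers 5.0 exactly
var = sum(wj * (p - 5.0) ** 2 for p, wj in zip(pts, w))   # recovers 4.0 exactly
```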

  3. Ensemble-based assimilation of fractional snow-covered area satellite retrievals to estimate the snow distribution at Arctic sites

    Science.gov (United States)

    Aalstad, Kristoffer; Westermann, Sebastian; Vikhamar Schuler, Thomas; Boike, Julia; Bertino, Laurent

    2018-01-01

    With its high albedo, low thermal conductivity and large water storing capacity, snow strongly modulates the surface energy and water balance, which makes it a critical factor in mid- to high-latitude and mountain environments. However, estimating the snow water equivalent (SWE) is challenging in remote-sensing applications already at medium spatial resolutions of 1 km. We present an ensemble-based data assimilation framework that estimates the peak subgrid SWE distribution (SSD) at the 1 km scale by assimilating fractional snow-covered area (fSCA) satellite retrievals in a simple snow model forced by downscaled reanalysis data. The basic idea is to relate the timing of the snow cover depletion (accessible from satellite products) to the peak SSD. Peak subgrid SWE is assumed to be lognormally distributed, which can be translated to a modeled time series of fSCA through the snow model. Assimilation of satellite-derived fSCA facilitates the estimation of the peak SSD, while taking into account uncertainties in both the model and the assimilated data sets. As an extension to previous studies, our method makes use of the novel (to snow data assimilation) ensemble smoother with multiple data assimilation (ES-MDA) scheme combined with analytical Gaussian anamorphosis to assimilate time series of Moderate Resolution Imaging Spectroradiometer (MODIS) and Sentinel-2 fSCA retrievals. The scheme is applied to Arctic sites near Ny-Ålesund (79° N, Svalbard, Norway) where field measurements of fSCA and SWE distributions are available. The method is able to successfully recover accurate estimates of peak SSD on most of the occasions considered. Through the ES-MDA assimilation, the root-mean-square error (RMSE) for the fSCA, peak mean SWE and peak subgrid coefficient of variation is improved by around 75, 60 and 20 %, respectively, when compared to the prior, yielding RMSEs of 0.01, 0.09 m water equivalent (w.e.) and 0.13, respectively. The ES-MDA either outperforms or at least
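    The ES-MDA recipe, reduced to a directly observed scalar state, can be sketched as repeated stochastic ensemble updates with the observation-error variance inflated by the number of assimilation cycles (a simplification of the scheme in the paper; the snow model and fSCA observation operator are omitted):

```python
import random

def es_mda(ensemble, obs, obs_var, n_assim=4):
    """ES-MDA for a directly observed scalar: assimilate the same observation
    n_assim times, inflating the observation-error variance by n_assim, with
    each pass a stochastic (perturbed-observation) ensemble update."""
    rng = random.Random(0)
    x = list(ensemble)
    for _ in range(n_assim):
        r = obs_var * n_assim                       # inflated obs-error variance
        m = sum(x) / len(x)
        var = sum((xi - m) ** 2 for xi in x) / (len(x) - 1)
        k = var / (var + r)                         # Kalman gain
        x = [xi + k * (obs + rng.gauss(0, r ** 0.5) - xi) for xi in x]
    return x

post = es_mda([0.0, 1.0, 2.0, 3.0], obs=10.0, obs_var=0.5)
mean_post = sum(post) / len(post)  # posterior mean is pulled toward the obs
```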

  4. Ensemble-based assimilation of fractional snow-covered area satellite retrievals to estimate the snow distribution at Arctic sites

    Directory of Open Access Journals (Sweden)

    K. Aalstad

    2018-01-01

    Full Text Available With its high albedo, low thermal conductivity and large water storing capacity, snow strongly modulates the surface energy and water balance, which makes it a critical factor in mid- to high-latitude and mountain environments. However, estimating the snow water equivalent (SWE) is challenging in remote-sensing applications already at medium spatial resolutions of 1 km. We present an ensemble-based data assimilation framework that estimates the peak subgrid SWE distribution (SSD) at the 1 km scale by assimilating fractional snow-covered area (fSCA) satellite retrievals in a simple snow model forced by downscaled reanalysis data. The basic idea is to relate the timing of the snow cover depletion (accessible from satellite products) to the peak SSD. Peak subgrid SWE is assumed to be lognormally distributed, which can be translated to a modeled time series of fSCA through the snow model. Assimilation of satellite-derived fSCA facilitates the estimation of the peak SSD, while taking into account uncertainties in both the model and the assimilated data sets. As an extension to previous studies, our method makes use of the novel (to snow data assimilation) ensemble smoother with multiple data assimilation (ES-MDA) scheme combined with analytical Gaussian anamorphosis to assimilate time series of Moderate Resolution Imaging Spectroradiometer (MODIS) and Sentinel-2 fSCA retrievals. The scheme is applied to Arctic sites near Ny-Ålesund (79° N, Svalbard, Norway) where field measurements of fSCA and SWE distributions are available. The method is able to successfully recover accurate estimates of peak SSD on most of the occasions considered. Through the ES-MDA assimilation, the root-mean-square error (RMSE) for the fSCA, peak mean SWE and peak subgrid coefficient of variation is improved by around 75, 60 and 20 %, respectively, when compared to the prior, yielding RMSEs of 0.01, 0.09 m water equivalent (w.e.) and 0.13, respectively. The ES-MDA either

  5. A novel approach for baseline correction in 1H-MRS signals based on ensemble empirical mode decomposition.

    Science.gov (United States)

    Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh

    2014-01-01

    Proton magnetic resonance spectroscopy ((1)H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired (1)H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are not able to estimate the baseline in complex overlapped signals. In this study, a novel automatic baseline correction method is proposed for (1)H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The investigation was applied to both simulated data and in-vivo (1)H-MRS signals of the human brain. The results justify the efficiency of the proposed method in removing the baseline from (1)H-MRS signals.

  6. Optical supervised filtering technique based on Hopfield neural network

    Science.gov (United States)

    Bal, Abdullah

    2004-11-01

    Hopfield neural networks are commonly preferred for optimization problems. In image segmentation, conventional Hopfield neural networks (HNN) are formulated as a cost-function-minimization problem to perform gray-level thresholding on the image histogram or on the pixels' gray levels arranged in a one-dimensional array [R. Sammouda, N. Niki, H. Nishitani, Pattern Rec. 30 (1997) 921-927; K.S. Cheng, J.S. Lin, C.W. Mao, IEEE Trans. Med. Imag. 15 (1996) 560-567; C. Chang, P. Chung, Image and Vision Comp. 19 (2001) 669-678]. In this paper, a new high-speed supervised filtering technique is proposed for image feature extraction and enhancement problems by modifying the conventional HNN. The essential improvement in this technique is the use of a 2D convolution operation instead of a weight-matrix multiplication. Thereby, a new neural-network-based filtering technique has been obtained that requires just a 3 × 3 filter mask matrix instead of a large weight coefficient matrix. Optical implementation of the proposed filtering technique is executed easily using the joint transform correlator. The requirement of non-negative data for optical implementation is met by a bias technique that converts the bipolar data to non-negative data. Simulation results of the proposed optical supervised filtering technique are reported for various feature extraction problems such as edge detection, corner detection, horizontal and vertical line extraction, and fingerprint enhancement.
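    The 2D convolution that replaces the weight-matrix multiplication can be sketched as a plain 3 × 3 mask applied over the valid region of an image (implemented, as is common in neural-network contexts, as correlation without kernel flipping):

```python
def conv3x3(image, kernel):
    """Apply a 3x3 mask over the valid region of a 2-D image (list of lists),
    the operation that replaces the full weight-matrix multiplication."""
    h, w = len(image), len(image[0])
    out = []
    for r in range(1, h - 1):
        row = []
        for c in range(1, w - 1):
            row.append(sum(image[r + i][c + j] * kernel[i + 1][j + 1]
                           for i in (-1, 0, 1) for j in (-1, 0, 1)))
        out.append(row)
    return out

# Vertical-edge (Prewitt-like) mask on a step image: strong response at the edge.
img = [[0, 0, 1, 1]] * 4
kx = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
print(conv3x3(img, kx))  # [[3, 3], [3, 3]]
```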

  7. A developmental perspective on the neural bases of human empathy.

    Science.gov (United States)

    Tousignant, Béatrice; Eugène, Fanny; Jackson, Philip L

    2017-08-01

    While empathy has been widely studied in philosophical and psychological literatures, recent advances in social neuroscience have shed light on the neural correlates of this complex interpersonal phenomenon. In this review, we provide an overview of brain imaging studies that have investigated the neural substrates of human empathy. Based on existing models of the functional architecture of empathy, we review evidence of the neural underpinnings of each main component, as well as their development from infancy. Although early precursors of affective sharing and self-other distinction appear to be present from birth, recent findings also suggest that even higher-order components of empathy such as perspective-taking and emotion regulation demonstrate signs of development during infancy. This merging of developmental and social neuroscience literature thus supports the view that ontogenic development of empathy is rooted in early infancy, well before the emergence of verbal abilities. With age, the refinement of top-down mechanisms may foster more appropriate empathic responses, thus promoting greater altruistic motivation and prosocial behaviors. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Neural bases of ingroup altruistic motivation in soccer fans.

    Science.gov (United States)

    Bortolini, Tiago; Bado, Patrícia; Hoefle, Sebastian; Engel, Annerose; Zahn, Roland; de Oliveira Souza, Ricardo; Dreher, Jean-Claude; Moll, Jorge

    2017-11-23

    Humans have a strong need to belong to social groups and a natural inclination to benefit ingroup members. Although the psychological mechanisms behind human prosociality have extensively been studied, the specific neural systems bridging group belongingness and altruistic motivation remain to be identified. Here, we used soccer fandom as an ecological framing of group membership to investigate the neural mechanisms underlying ingroup altruistic behaviour in male fans using event-related functional magnetic resonance. We designed an effort measure based on handgrip strength to assess the motivation to earn money (i) for oneself, (ii) for anonymous ingroup fans, or (iii) for a neutral group of anonymous non-fans. While overlapping valuation signals in the medial orbitofrontal cortex (mOFC) were observed for the three conditions, the subgenual cingulate cortex (SCC) exhibited increased functional connectivity with the mOFC as well as stronger hemodynamic responses for ingroup versus outgroup decisions. These findings indicate a key role for the SCC, a region previously implicated in altruistic decisions and group affiliation, in dovetailing altruistic motivations with neural valuation systems in real-life ingroup behaviour.

  9. Simultaneous escaping of explicit and hidden free energy barriers: application of the orthogonal space random walk strategy in generalized ensemble based conformational sampling.

    Science.gov (United States)

    Zheng, Lianqing; Chen, Mengen; Yang, Wei

    2009-06-21

    To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures. As usually observed in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction, and these residual free energy barriers can greatly reduce the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimension subset of the target system; then the "Hamiltonian lagging" problem, in which necessary structural relaxation falls behind the move of the collective variable, is likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW-based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.

  10. Three-dimensional visualization of ensemble weather forecasts - Part 2: Forecasting warm conveyor belt situations for aircraft-based field campaigns

    Science.gov (United States)

    Rautenhaus, M.; Grams, C. M.; Schäfler, A.; Westermann, R.

    2015-07-01

    We present the application of interactive three-dimensional (3-D) visualization of ensemble weather predictions to forecasting warm conveyor belt situations during aircraft-based atmospheric research campaigns. Motivated by forecast requirements of the T-NAWDEX-Falcon 2012 (THORPEX - North Atlantic Waveguide and Downstream Impact Experiment) campaign, a method to predict 3-D probabilities of the spatial occurrence of warm conveyor belts (WCBs) has been developed. Probabilities are derived from Lagrangian particle trajectories computed on the forecast wind fields of the European Centre for Medium Range Weather Forecasts (ECMWF) ensemble prediction system. Integration of the method into the 3-D ensemble visualization tool Met.3D, introduced in the first part of this study, facilitates interactive visualization of WCB features and derived probabilities in the context of the ECMWF ensemble forecast. We investigate the sensitivity of the method with respect to trajectory seeding and grid spacing of the forecast wind field. Furthermore, we propose a visual analysis method to quantitatively analyse the contribution of ensemble members to a probability region and, thus, to assist the forecaster in interpreting the obtained probabilities. A case study, revisiting a forecast case from T-NAWDEX-Falcon, illustrates the practical application of Met.3D and demonstrates the use of 3-D and uncertainty visualization for weather forecasting and for planning flight routes in the medium forecast range (3 to 7 days before take-off).
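    The derivation of occurrence probabilities from ensemble trajectories can be sketched as a simple membership count: the probability assigned to a grid cell is the fraction of ensemble members whose trajectory passes through it (the cells and trajectories below are illustrative):

```python
def occurrence_probability(member_trajectories, cell):
    """Grid-cell probability as the fraction of ensemble members whose
    trajectory visits the cell; each trajectory is a set of (ix, iy) cells."""
    hits = sum(1 for traj in member_trajectories if cell in traj)
    return hits / len(member_trajectories)

# Four members; cell (2, 3) is visited by two of them.
members = [
    {(1, 1), (2, 3)},
    {(0, 0)},
    {(2, 3), (4, 4)},
    {(5, 5)},
]
print(occurrence_probability(members, (2, 3)))  # 0.5
```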

  11. Three-dimensional visualization of ensemble weather forecasts – Part 2: Forecasting warm conveyor belt situations for aircraft-based field campaigns

    Directory of Open Access Journals (Sweden)

    M. Rautenhaus

    2015-07-01

    Full Text Available We present the application of interactive three-dimensional (3-D visualization of ensemble weather predictions to forecasting warm conveyor belt situations during aircraft-based atmospheric research campaigns. Motivated by forecast requirements of the T-NAWDEX-Falcon 2012 (THORPEX – North Atlantic Waveguide and Downstream Impact Experiment campaign, a method to predict 3-D probabilities of the spatial occurrence of warm conveyor belts (WCBs has been developed. Probabilities are derived from Lagrangian particle trajectories computed on the forecast wind fields of the European Centre for Medium Range Weather Forecasts (ECMWF ensemble prediction system. Integration of the method into the 3-D ensemble visualization tool Met.3D, introduced in the first part of this study, facilitates interactive visualization of WCB features and derived probabilities in the context of the ECMWF ensemble forecast. We investigate the sensitivity of the method with respect to trajectory seeding and grid spacing of the forecast wind field. Furthermore, we propose a visual analysis method to quantitatively analyse the contribution of ensemble members to a probability region and, thus, to assist the forecaster in interpreting the obtained probabilities. A case study, revisiting a forecast case from T-NAWDEX-Falcon, illustrates the practical application of Met.3D and demonstrates the use of 3-D and uncertainty visualization for weather forecasting and for planning flight routes in the medium forecast range (3 to 7 days before take-off.

  12. Improved predictive mapping of indoor radon concentrations using ensemble regression trees based on automatic clustering of geological units

    International Nuclear Information System (INIS)

    Kropat, Georg; Bochud, Francois; Jaboyedoff, Michel; Laedermann, Jean-Pascal; Murith, Christophe; Palacios, Martha; Baechler, Sébastien

    2015-01-01

    Purpose: According to estimations, around 230 people die as a result of radon exposure in Switzerland. This public health concern makes reliable indoor radon prediction and mapping methods necessary in order to improve risk communication to the public. The aim of this study was to develop an automated method to classify lithological units according to their radon characteristics and to develop mapping and predictive tools in order to improve local radon prediction. Method: About 240 000 indoor radon concentration (IRC) measurements in about 150 000 buildings were available for our analysis. The automated classification of lithological units was based on k-medoids clustering via pair-wise Kolmogorov distances between the IRC distributions of lithological units. For IRC mapping and prediction we used random forests and Bayesian additive regression trees (BART). Results: The automated classification groups lithological units well in terms of their IRC characteristics. In particular, the IRC differences in metamorphic rocks such as gneiss are well revealed by this method. The maps produced by random forests soundly represent the regional differences of IRCs in Switzerland and improve the spatial detail compared to existing approaches. We could explain 33% of the variations in IRC data with random forests. Additionally, the variable importance evaluated by random forests shows that building characteristics are less important predictors for IRCs than spatial/geological influences. BART could explain 29% of IRC variability and produced maps that indicate the prediction uncertainty. Conclusion: Ensemble regression trees are a powerful tool to model and understand the multidimensional influences on IRCs. Automatic clustering of lithological units complements this method by facilitating the interpretation of the radon properties of rock types. This study provides an important element for radon risk communication. Future approaches should consider taking into account further variables
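    The pairwise Kolmogorov distance underlying the clustering can be sketched as the classical two-sample statistic, the maximum gap between the empirical CDFs of two samples:

```python
from bisect import bisect_right

def ks_distance(a, b):
    """Two-sample Kolmogorov distance: max |F_a(x) - F_b(x)| over the pooled
    sample values, with F the empirical CDF of each sample."""
    sa, sb = sorted(a), sorted(b)
    def ecdf(s, x):
        return bisect_right(s, x) / len(s)  # fraction of samples <= x
    return max(abs(ecdf(sa, x) - ecdf(sb, x)) for x in set(a) | set(b))

print(ks_distance([1, 2, 3, 4], [3, 4, 5, 6]))  # 0.5
```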

  13. Improving forecasting accuracy of medium and long-term runoff using artificial neural network based on EEMD decomposition.

    Science.gov (United States)

    Wang, Wen-chuan; Chau, Kwok-wing; Qiu, Lin; Chen, Yang-bo

    2015-05-01

    Hydrological time series forecasting is one of the most important applications in modern hydrology, especially for effective reservoir management. In this research, an artificial neural network (ANN) model coupled with ensemble empirical mode decomposition (EEMD) is presented for forecasting medium- and long-term runoff time series. First, the original runoff time series is decomposed into a finite and often small number of intrinsic mode functions (IMFs) and a residual series using the EEMD technique, for attaining deeper insight into the data characteristics. Then all IMF components and the residue are predicted, respectively, through appropriate ANN models. Finally, the forecasted results of the modeled IMFs and the residual series are summed to formulate an ensemble forecast for the original annual runoff series. Two annual reservoir runoff time series, from Biuliuhe and Mopanshan in China, are investigated using the developed model based on four performance evaluation measures (RMSE, MAPE, R and NSEC). The results obtained in this work indicate that EEMD can effectively enhance forecasting accuracy and that the proposed EEMD-ANN model can attain significant improvement over the ANN approach in medium- and long-term runoff time series forecasting. Copyright © 2015 Elsevier Inc. All rights reserved.
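    The decompose-predict-recombine idea can be sketched with stand-ins: a moving average replaces the EEMD components, and simple trend/persistence rules replace the per-component ANN models:

```python
def forecast_by_components(series, window=3):
    """Decompose a series into a smooth component (trailing moving average,
    standing in for EEMD's IMFs/residue) and a detail component, forecast
    each separately, then sum the component forecasts."""
    smooth = []
    for i in range(len(series)):
        seg = series[max(0, i - window + 1):i + 1]
        smooth.append(sum(seg) / len(seg))
    detail = [x - s for x, s in zip(series, smooth)]
    smooth_next = smooth[-1] + (smooth[-1] - smooth[-2])  # linear-trend forecast
    detail_next = detail[-1]                              # persistence forecast
    return smooth_next + detail_next

print(forecast_by_components([1.0, 2.0, 3.0, 4.0]))  # 5.0 for a linear series
```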

  14. Ensemble Empirical Mode Decomposition based methodology for ultrasonic testing of coarse grain austenitic stainless steels.

    Science.gov (United States)

    Sharma, Govind K; Kumar, Anish; Jayakumar, T; Purnachandra Rao, B; Mariyappa, N

    2015-03-01

    A signal processing methodology is proposed in this paper for effective reconstruction of ultrasonic signals in coarse-grained, highly scattering austenitic stainless steel. The proposed methodology comprises Ensemble Empirical Mode Decomposition (EEMD) processing of ultrasonic signals and application of a signal minimisation algorithm to selected Intrinsic Mode Functions (IMFs) obtained by EEMD. The methodology is applied to ultrasonic signals obtained from austenitic stainless steel specimens of different grain size, with and without defects. The influence of probe frequency and the data length of a signal on EEMD decomposition is also investigated. For a particular sampling rate and probe frequency, the same range of IMFs can be used to reconstruct the ultrasonic signal, irrespective of the grain size in the range of 30-210 μm investigated in this study. This methodology is successfully employed for detection of defects in 50 mm thick coarse grain austenitic stainless steel specimens. A signal-to-noise ratio improvement of better than 15 dB is observed for the ultrasonic signal obtained from a 25 mm deep flat bottom hole in a 200 μm grain size specimen. For ultrasonic signals obtained from defects at different depths, a minimum of 7 dB additional enhancement in SNR is achieved compared to the sum-of-selected-IMFs approach. The application of the minimisation algorithm to the EEMD-processed signal proves to be effective for adaptive signal reconstruction with improved signal-to-noise ratio. The methodology was further employed for successful imaging of defects in a B-scan. Copyright © 2014. Published by Elsevier B.V.
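    The SNR figures quoted above follow the usual decibel convention for amplitude ratios; a minimal sketch:

```python
import math

def snr_db(signal_amp, noise_amp):
    """Amplitude SNR in decibels: 20*log10 of the amplitude ratio."""
    return 20.0 * math.log10(signal_amp / noise_amp)

def snr_improvement_db(snr_after, snr_before):
    """Improvement is simply the difference of the two dB figures."""
    return snr_after - snr_before

print(snr_db(10.0, 1.0))  # 20.0
```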

  15. Future hydroclimatological changes in South America based on an ensemble of regional climate models

    Science.gov (United States)

    Zaninelli, Pablo G.; Menéndez, Claudio G.; Falco, Magdalena; López-Franca, Noelia; Carril, Andrea F.

    2018-05-01

    Changes between two time slices (1961-1990 and 2071-2100) in hydroclimatological conditions for South America have been examined using an ensemble of regional climate models. Annual mean precipitation (P), evapotranspiration (E) and potential evapotranspiration (EP) are jointly considered through the balances of land water and energy. Drying or wetting conditions, associated with changes in land water availability and atmospheric demand, are analysed in the Budyko space. The water supply limit (E limited by P) is exceeded at about 2% of the grid points, while the energy limit to evapotranspiration (E = EP) is valid overall. Most of the continent, except for the southeast and some coastal areas, presents a shift toward drier conditions related to a decrease in water availability (the evaporation rate E/P increases) and, mostly over much of Brazil, to an increase in the aridity index (V = EP/P). These changes suggest less humid conditions with decreasing surface runoff over Amazonia and the Brazilian Highlands. In contrast, Argentina and the coasts of Ecuador and Peru are characterized by a tendency toward wetter conditions associated with an increase of water availability and a decrease of the aridity index, primarily due to P increasing faster than both E and EP. This trend towards wetter soil conditions suggests that the chances of having longer periods of flooding and enhanced river discharges would increase over parts of southeastern South America. Interannual variability increases with V (for a given time slice) and with climate change (for a given aridity regime). There are opposite interannual variability responses to climate change in Argentina and Brazil: the variability increases over the Brazilian Highlands and decreases in central-eastern Argentina.
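    The two Budyko-space coordinates used to diagnose drying or wetting can be sketched directly from annual P, E and EP (the numbers below are illustrative, not model output):

```python
def budyko_indices(p, e, ep):
    """Coordinates for reading drying/wetting in the Budyko space: the
    aridity index V = EP/P (atmospheric demand) and the evaporation rate
    E/P (water availability use); a shift to larger values of either
    indicates drier conditions."""
    return {"aridity_index": ep / p, "evaporation_rate": e / p}

# Illustrative annual values (mm) for two time slices.
before = budyko_indices(p=1000.0, e=600.0, ep=900.0)
after = budyko_indices(p=800.0, e=560.0, ep=950.0)
drier = (after["aridity_index"] > before["aridity_index"] and
         after["evaporation_rate"] > before["evaporation_rate"])
print(drier)  # True
```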

  16. Real-Time Inverse Optimal Neural Control for Image Based Visual Servoing with Nonholonomic Mobile Robots

    Directory of Open Access Journals (Sweden)

    Carlos López-Franco

    2015-01-01

    Full Text Available We present an inverse optimal neural controller for a nonholonomic mobile robot with parameter uncertainties and unknown external disturbances. The neural controller is based on a discrete-time recurrent high order neural network (RHONN trained with an extended Kalman filter. The reference velocities for the neural controller are obtained with a visual sensor. The effectiveness of the proposed approach is tested by simulations and real-time experiments.

  17. Evaluating model performance of an ensemble-based chemical data assimilation system during INTEX-B field mission

    Directory of Open Access Journals (Sweden)

    A. F. Arellano Jr.

    2007-11-01

    Full Text Available We present a global chemical data assimilation system using a global atmosphere model, the Community Atmosphere Model (CAM3 with simplified chemistry and the Data Assimilation Research Testbed (DART assimilation package. DART is a community software facility for assimilation studies using the ensemble Kalman filter approach. Here, we apply the assimilation system to constrain global tropospheric carbon monoxide (CO by assimilating meteorological observations of temperature and horizontal wind velocity and satellite CO retrievals from the Measurement of Pollution in the Troposphere (MOPITT satellite instrument. We verify the system performance using independent CO observations taken on board the NSF/NCAR C-130 and NASA DC-8 aircraft during the April 2006 part of the Intercontinental Chemical Transport Experiment (INTEX-B. Our evaluations show that MOPITT data assimilation provides significant improvements in terms of capturing the observed CO variability relative to no MOPITT assimilation (i.e. the correlation improves from 0.62 to 0.71, significant at 99% confidence. The assimilation provides evidence of median CO loading of about 150 ppbv at 700 hPa over the NE Pacific during April 2006. This is marginally higher than the modeled CO with no MOPITT assimilation (~140 ppbv. Our ensemble-based estimates of model uncertainty also show model overprediction over the source region (i.e. China and underprediction over the NE Pacific, suggesting model errors that cannot be readily explained by emissions alone. These results have important implications for improving regional chemical forecasts and for inverse modeling of CO sources and further demonstrate the utility of the assimilation system in comparing non-coincident measurements, e.g. comparing satellite retrievals of CO with in-situ aircraft measurements.

  18. Neural Network Based Sensory Fusion for Landmark Detection

    Science.gov (United States)

    Kumbla, Kishan -K.; Akbarzadeh, Mohammad R.

    1997-01-01

    NASA is planning to send numerous unmanned planetary missions to explore space. This requires autonomous robotic vehicles which can navigate in an unstructured, unknown, and uncertain environment. Landmark based navigation is a new area of research which differs from the traditional goal-oriented navigation, where a mobile robot starts from an initial point and reaches a destination in accordance with a pre-planned path. Landmark based navigation has the advantage of allowing the robot to find its way without communication with the mission control station and without exact knowledge of its coordinates. Current algorithms based on landmark navigation, however, pose several constraints. First, they require large memories to store the images. Second, the task of comparing the images using traditional methods is computationally intensive, and consequently real-time implementation is difficult. The method proposed here consists of three stages. The first stage utilizes a heuristic-based algorithm to identify significant objects. The second stage utilizes a neural network (NN) to efficiently classify images of the identified objects. The third stage combines distance information with the classification results of the neural networks for efficient and intelligent navigation.

  19. Layered Ensemble Architecture for Time Series Forecasting.

    Science.gov (United States)

    Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin

    2016-01-01

    Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown and information available for forecasting is only limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both accuracy and diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble by using different training sets with an aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble. This indicates LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 neural network forecasting competitions. It has also been tested on several standard benchmark time series data sets. In terms of forecasting accuracy, our experimental results have revealed clearly that LEA is better than other ensemble and nonensemble methods.
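
    The first-layer idea, picking the lag that minimizes one-step-ahead forecast error on embedded (window, next value) pairs, can be sketched as follows. This is a drastic simplification: a trivial window-mean forecaster stands in for trained MLPs, and the series and candidate lags are invented; LEA's second layer would then train its forecasting ensemble on windows of the selected lag:

```python
import statistics

def make_dataset(series, lag):
    """Embed a series into (window, next value) pairs for a given lag."""
    return [(series[i - lag:i], series[i]) for i in range(lag, len(series))]

def window_mean_forecast(window):
    # Stand-in for an MLP forecaster: predict the mean of the lag window.
    return statistics.fmean(window)

def select_lag(series, candidate_lags):
    """Layer 1: pick the lag with the lowest one-step-ahead squared error."""
    def sse(lag):
        return sum((window_mean_forecast(w) - y) ** 2
                   for w, y in make_dataset(series, lag))
    return min(candidate_lags, key=sse)

series = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # strongly trending toy series
best = select_lag(series, candidate_lags=[1, 2, 3])
```

For this trending toy series, the shortest lag wins because the window mean lags the trend less; with real MLPs and noisy data the selected lag is generally nontrivial.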

  20. ARTIFICIAL NEURAL NETWORKS BASED GEARS MATERIAL SELECTION HYBRID INTELLIGENT SYSTEM

    Institute of Scientific and Technical Information of China (English)

    X.C. Li; W.X. Zhu; G. Chen; D.S. Mei; J. Zhang; K.M. Chen

    2003-01-01

    A hybrid intelligent system for gear material selection based on artificial neural networks (ANNs) is established by analyzing the individual advantages and weaknesses of expert systems (ES) and ANNs and their applications in material selection. The system mainly consists of two parts: an ES and ANNs. After being trained with many data samples, the back-propagation (BP) ANN acquires the knowledge of gear material selection and is able to perform inference according to user input. The system realizes the complementarity of ANNs and ES. Using this system, engineers without material selection experience can conveniently deal with gear material selection.

  1. Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network.

    Directory of Open Access Journals (Sweden)

    Seung Seog Han

    Full Text Available Although there have been reports of the successful diagnosis of skin disorders using deep learning, unrealistically large clinical image datasets are required for artificial intelligence (AI) training. We created datasets of standardized nail images using a region-based convolutional neural network (R-CNN) trained to distinguish the nail from the background. We used R-CNN to generate training datasets of 49,567 images, which we then used to fine-tune the ResNet-152 and VGG-19 models. The validation datasets comprised 100 and 194 images from Inje University (B1 and B2 datasets, respectively), 125 images from Hallym University (C dataset), and 939 images from Seoul National University (D dataset). The AI (ensemble model; ResNet-152 + VGG-19 + feedforward neural networks) results showed test sensitivity/specificity/area under the curve values of (96.0 / 94.7 / 0.98), (82.7 / 96.7 / 0.95), (92.3 / 79.3 / 0.93), (87.7 / 69.3 / 0.82) for the B1, B2, C, and D datasets. With a combination of the B1 and C datasets, the AI Youden index was significantly (p = 0.01) higher than that of 42 dermatologists doing the same assessment manually. For B1+C and B2+D dataset combinations, almost none of the dermatologists performed as well as the AI. By training with a dataset comprising 49,567 images, we achieved a diagnostic accuracy for onychomycosis using deep learning that was superior to that of most of the dermatologists who participated in this study.

  2. Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network.

    Science.gov (United States)

    Han, Seung Seog; Park, Gyeong Hun; Lim, Woohyung; Kim, Myoung Shin; Na, Jung Im; Park, Ilwoo; Chang, Sung Eun

    2018-01-01

    Although there have been reports of the successful diagnosis of skin disorders using deep learning, unrealistically large clinical image datasets are required for artificial intelligence (AI) training. We created datasets of standardized nail images using a region-based convolutional neural network (R-CNN) trained to distinguish the nail from the background. We used R-CNN to generate training datasets of 49,567 images, which we then used to fine-tune the ResNet-152 and VGG-19 models. The validation datasets comprised 100 and 194 images from Inje University (B1 and B2 datasets, respectively), 125 images from Hallym University (C dataset), and 939 images from Seoul National University (D dataset). The AI (ensemble model; ResNet-152 + VGG-19 + feedforward neural networks) results showed test sensitivity/specificity/ area under the curve values of (96.0 / 94.7 / 0.98), (82.7 / 96.7 / 0.95), (92.3 / 79.3 / 0.93), (87.7 / 69.3 / 0.82) for the B1, B2, C, and D datasets. With a combination of the B1 and C datasets, the AI Youden index was significantly (p = 0.01) higher than that of 42 dermatologists doing the same assessment manually. For B1+C and B2+ D dataset combinations, almost none of the dermatologists performed as well as the AI. By training with a dataset comprising 49,567 images, we achieved a diagnostic accuracy for onychomycosis using deep learning that was superior to that of most of the dermatologists who participated in this study.
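
    The comparison metric used here, Youden's index, collapses an operating point into a single number: J = sensitivity + specificity - 1. A quick computation on the AI sensitivity/specificity values reported above for the B1 and B2 datasets:

```python
def youden_index(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1 (fractions in [0, 1])."""
    return sensitivity + specificity - 1.0

# Reported AI sensitivity/specificity, converted from percent to fractions
j_b1 = youden_index(0.960, 0.947)   # B1 dataset: 96.0% / 94.7%
j_b2 = youden_index(0.827, 0.967)   # B2 dataset: 82.7% / 96.7%
```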

  3. Representing Color Ensembles.

    Science.gov (United States)

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-10-01

    Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.

  4. Tailored Random Graph Ensembles

    International Nuclear Information System (INIS)

    Roberts, E S; Annibale, A; Coolen, A C C

    2013-01-01

    Tailored graph ensembles are a developing bridge between biological networks and statistical mechanics. The aim is to use this concept to generate a suite of rigorous tools that can be used to quantify and compare the topology of cellular signalling networks, such as protein-protein interaction networks and gene regulation networks. We calculate exact and explicit formulae for the leading orders in the system size of the Shannon entropies of random graph ensembles constrained with degree distribution and degree-degree correlation. We also construct an ergodic detailed balance Markov chain with non-trivial acceptance probabilities which converges to a strictly uniform measure and is based on edge swaps that conserve all degrees. The acceptance probabilities can be generalized to define Markov chains that target any alternative desired measure on the space of directed or undirected graphs, in order to generate graphs with more sophisticated topological features.
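
    The degree-conserving double edge swap at the heart of such a Markov chain can be sketched directly. This simplified version accepts any swap that keeps the graph simple (no self-loops or multi-edges); the paper's chain additionally uses non-trivial acceptance probabilities so that the stationary measure is strictly uniform:

```python
import random

def degree_preserving_swap(edges, rng, attempts=100):
    """Attempt one double edge swap on an undirected simple graph.

    Picks edges (a, b) and (c, d) and rewires them to (a, d) and (c, b);
    the move conserves every vertex degree. The proposal is rejected if it
    would create a self-loop or a multi-edge."""
    edge_set = {frozenset(e) for e in edges}
    for _ in range(attempts):
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue  # shared endpoint: swap could create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # swap would create a multi-edge
        edges.remove((a, b)); edges.remove((c, d))
        edges += [(a, d), (c, b)]
        return edges
    return edges  # no admissible swap found; graph unchanged

def degrees(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

edges = [(0, 1), (2, 3), (0, 2), (1, 3)]          # a 4-cycle
before = degrees(list(edges))
after = degrees(degree_preserving_swap(edges, random.Random(1)))
```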

  5. On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles

    KAUST Repository

    Luo, Xiaodong; Hoteit, Ibrahim; Moroz, Irene M.

    2010-01-01

    However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in evaluation of the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling’s interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well‐established EnSRF, the ensemble transform Kalman filter (ETKF) [2].
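
    For intuition, the symmetry idea can be illustrated with the textbook 1-D unscented transform, whose 2n+1 sigma points and weights reproduce a Gaussian's mean and variance exactly, with no Monte Carlo sampling error. This is a generic sketch with standard alpha/kappa scaling, not the EnUKF or sDDF implementation discussed in the abstract:

```python
import math

def unscented_sigma_points_1d(mean, var, alpha=1.0, kappa=2.0):
    """Symmetric sigma points and weights for a 1-D Gaussian.

    The weighted sample mean and variance of the returned points match
    (mean, var) exactly, which is the symmetry that suppresses sampling
    error in ensemble filters."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    points = [mean, mean + spread, mean - spread]
    w0 = lam / (n + lam)
    wi = 1.0 / (2 * (n + lam))
    return points, [w0, wi, wi]

pts, wts = unscented_sigma_points_1d(mean=2.0, var=4.0)
m_hat = sum(w * p for w, p in zip(wts, pts))
v_hat = sum(w * (p - m_hat) ** 2 for w, p in zip(wts, pts))
```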

  6. Fluorescent Binary Ensemble Based on Pyrene Derivative and Sodium Dodecyl Sulfate Assemblies as a Chemical Tongue for Discriminating Metal Ions and Brand Water.

    Science.gov (United States)

    Zhang, Lijun; Huang, Xinyan; Cao, Yuan; Xin, Yunhong; Ding, Liping

    2017-12-22

    Enormous effort has been put into the detection and recognition of various heavy metal ions due to their involvement in serious environmental pollution and many major diseases. The present work has developed a single fluorescent sensor ensemble that can distinguish and identify a variety of heavy metal ions. A pyrene-based fluorophore (PB) containing a metal ion receptor group was specially designed and synthesized. Anionic surfactant sodium dodecyl sulfate (SDS) assemblies can effectively adjust its fluorescence behavior. The selected binary ensemble based on PB/SDS assemblies can exhibit multiple emission bands and provide wavelength-based cross-reactive responses to a series of metal ions to realize pattern recognition ability. The combination of surfactant assembly modulation and the receptor for metal ions empowers the present sensor ensemble with strong discrimination power, which could well differentiate 13 metal ions, including Cu2+, Co2+, Ni2+, Cr3+, Hg2+, Fe3+, Zn2+, Cd2+, Al3+, Pb2+, Ca2+, Mg2+, and Ba2+. Moreover, this single sensing ensemble could be further applied for identifying different brands of drinking water.

  7. In silico prediction of toxicity of non-congeneric industrial chemicals using ensemble learning based modeling approaches

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com; Gupta, Shikha

    2014-03-15

    Ensemble learning approach based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish quantitative structure–toxicity relationship (QSTR) for the prediction of toxicity of 1450 diverse chemicals. Eight non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals was evaluated using Tanimoto similarity index. Stochastic gradient boosting and bagging algorithms supplemented DTB and DTF models were constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was drawn to prediction ability and robustness of the models, investigated both in external and 10-fold cross validation processes. In complete data, optimal DTB and DTF models rendered accuracies of 98.90%, 98.83% in two-category and 98.14%, 98.14% in four-category toxicity classifications. Both the models further yielded classification accuracies of 100% in external toxicity data of T. pyriformis. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (R²) of 0.945, 0.944 between the measured and predicted toxicities with mean squared errors (MSEs) of 0.059, and 0.064 in complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded R² and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest wide applicability of the inter-species models in predicting toxicity of new chemicals for regulatory purposes. These approaches provide useful strategy and robust tools in the screening of ecotoxicological risk or environmental hazard potential of chemicals. - Graphical abstract: Importance of input variables in DTB and DTF classification models for (a) two-category, and (b) four-category toxicity intervals in T. pyriformis data. Generalization and predictive abilities of the

  8. In silico prediction of toxicity of non-congeneric industrial chemicals using ensemble learning based modeling approaches

    International Nuclear Information System (INIS)

    Singh, Kunwar P.; Gupta, Shikha

    2014-01-01

    Ensemble learning approach based decision treeboost (DTB) and decision tree forest (DTF) models are introduced in order to establish quantitative structure–toxicity relationship (QSTR) for the prediction of toxicity of 1450 diverse chemicals. Eight non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals was evaluated using Tanimoto similarity index. Stochastic gradient boosting and bagging algorithms supplemented DTB and DTF models were constructed for classification and function optimization problems using the toxicity end-point in T. pyriformis. Special attention was drawn to prediction ability and robustness of the models, investigated both in external and 10-fold cross validation processes. In complete data, optimal DTB and DTF models rendered accuracies of 98.90%, 98.83% in two-category and 98.14%, 98.14% in four-category toxicity classifications. Both the models further yielded classification accuracies of 100% in external toxicity data of T. pyriformis. The constructed regression models (DTB and DTF) using five descriptors yielded correlation coefficients (R²) of 0.945, 0.944 between the measured and predicted toxicities with mean squared errors (MSEs) of 0.059, and 0.064 in complete T. pyriformis data. The T. pyriformis regression models (DTB and DTF) applied to the external toxicity data sets yielded R² and MSE values of 0.637, 0.655; 0.534, 0.507 (marine bacteria) and 0.741, 0.691; 0.155, 0.173 (algae). The results suggest wide applicability of the inter-species models in predicting toxicity of new chemicals for regulatory purposes. These approaches provide useful strategy and robust tools in the screening of ecotoxicological risk or environmental hazard potential of chemicals. - Graphical abstract: Importance of input variables in DTB and DTF classification models for (a) two-category, and (b) four-category toxicity intervals in T. pyriformis data. Generalization and predictive abilities of the
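
    Bagging, one of the two ensemble strategies mentioned above, trains each base learner on a bootstrap resample of the data and predicts by majority vote. A toy sketch with 1-D threshold stumps on invented data (a stratified bootstrap is used here purely as a simplification so every resample contains both classes), rather than decision trees on molecular descriptors:

```python
import random
import statistics

def train_stump(data):
    """Fit a 1-D threshold classifier: threshold midway between the class means."""
    mean0 = statistics.fmean(x for x, y in data if y == 0)
    mean1 = statistics.fmean(x for x, y in data if y == 1)
    thr = (mean0 + mean1) / 2
    return lambda x: int(x > thr)

def stratified_bootstrap(data, rng):
    # Resample with replacement within each class (simplification: keeps both
    # classes present in every resample).
    out = []
    for label in (0, 1):
        cls = [d for d in data if d[1] == label]
        out += [rng.choice(cls) for _ in cls]
    return out

def bagged_classifier(data, n_models, rng):
    """Bagging: one stump per bootstrap resample, prediction by majority vote."""
    models = [train_stump(stratified_bootstrap(data, rng)) for _ in range(n_models)]
    return lambda x: int(sum(m(x) for m in models) > n_models / 2)

rng = random.Random(0)
data = [(x, 0) for x in (0.1, 0.3, 0.5, 0.8, 1.1)] + \
       [(x, 1) for x in (2.9, 3.1, 3.4, 3.8, 4.0)]
clf = bagged_classifier(data, n_models=25, rng=rng)
```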

  9. Neural Network Based Intrusion Detection System for Critical Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Ondrej Linda; Milos Manic

    2009-07-01

    Resiliency and security in control systems such as SCADA and nuclear plants are a relevant concern in today's world of hackers and malware. Computer systems used within critical infrastructures to control physical functions are not immune to the threat of cyber attacks and may be potentially vulnerable. Tailoring an intrusion detection system to the specifics of critical infrastructures can significantly improve the security of such systems. The IDS-NNM, an Intrusion Detection System using Neural Network based Modeling, is presented in this paper. The main contributions of this work are: 1) the use and analysis of real network data (data recorded from an existing critical infrastructure); 2) the development of a specific window-based feature extraction technique; 3) the construction of training datasets using randomly generated intrusion vectors; 4) the use of a combination of two neural network learning algorithms, Error-Back Propagation and Levenberg-Marquardt, for normal behavior modeling. The presented algorithm was evaluated on previously unseen network data. The IDS-NNM algorithm proved capable of capturing all intrusion attempts presented in the network communication while not generating any false alerts.
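
    Contribution 2, window-based feature extraction, can be illustrated on a toy packet-size stream. The window length, stride and the particular features (mean, max, range) below are invented placeholders for illustration, not the features used in the paper:

```python
def window_features(packet_sizes, window=5):
    """Slide a fixed-length, non-overlapping window over a packet-size stream
    and emit simple per-window features: (mean, max, range)."""
    feats = []
    for i in range(0, len(packet_sizes) - window + 1, window):
        w = packet_sizes[i:i + window]
        feats.append((sum(w) / window, max(w), max(w) - min(w)))
    return feats

stream = [60, 60, 62, 61, 60,        # normal polling traffic
          60, 1500, 1500, 60, 1500]  # burst of large packets
normal, burst = window_features(stream)
```

A model of normal behavior trained on such windows would flag the second window as anomalous because its mean and range depart sharply from the baseline.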

  10. Emotion Recognition of Weblog Sentences Based on an Ensemble Algorithm of Multi-label Classification and Word Emotions

    Science.gov (United States)

    Li, Ji; Ren, Fuji

    Weblogs have greatly changed the communication ways of mankind. Affective analysis of blog posts is found valuable for many applications such as text-to-speech synthesis or computer-assisted recommendation. Traditional emotion recognition in text based on single-label classification cannot satisfy higher requirements of affective computing. In this paper, the automatic identification of sentence emotion in weblogs is modeled as a multi-label text categorization task. Experiments are carried out on 12273 blog sentences from the Chinese emotion corpus Ren_CECps with 8-dimension emotion annotation. An ensemble algorithm RAKEL is used to recognize dominant emotions from the writer's perspective. Our emotion feature using detailed intensity representation for word emotions outperforms the other main features such as the word frequency feature and the traditional lexicon-based feature. In order to deal with relatively complex sentences, we integrate grammatical characteristics of punctuations, disjunctive connectives, modification relations and negation into features. It achieves 13.51% and 12.49% increases for Micro-averaged F1 and Macro-averaged F1 respectively compared to the traditional lexicon-based feature. The results show that multiple-dimension emotion representation with grammatical features can efficiently classify sentence emotion in a multi-label problem.
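
    The two evaluation measures cited above aggregate differently: Micro-averaged F1 pools true/false positives across all labels, while Macro-averaged F1 averages the per-label F1 scores. A minimal sketch on invented label sets (the emotion names are illustrative, not the exact Ren_CECps annotation scheme):

```python
def multilabel_f1(true_sets, pred_sets, labels):
    """Micro- and macro-averaged F1 for multi-label outputs given as label sets."""
    def f1(tp, fp, fn):
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    tot_tp = tot_fp = tot_fn = 0
    per_label = []
    for lab in labels:
        tp = sum(1 for t, p in zip(true_sets, pred_sets) if lab in t and lab in p)
        fp = sum(1 for t, p in zip(true_sets, pred_sets) if lab not in t and lab in p)
        fn = sum(1 for t, p in zip(true_sets, pred_sets) if lab in t and lab not in p)
        per_label.append(f1(tp, fp, fn))
        tot_tp, tot_fp, tot_fn = tot_tp + tp, tot_fp + fp, tot_fn + fn
    micro = f1(tot_tp, tot_fp, tot_fn)        # pooled counts
    macro = sum(per_label) / len(per_label)   # mean of per-label F1
    return micro, macro

true_sets = [{"joy"}, {"anger", "anxiety"}, {"sorrow"}]
pred_sets = [{"joy"}, {"anger"}, {"joy"}]
micro, macro = multilabel_f1(true_sets, pred_sets,
                             ["joy", "anger", "anxiety", "sorrow"])
```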

  11. VoIP attacks detection engine based on neural network

    Science.gov (United States)

    Safarik, Jakub; Slachta, Jiri

    2015-05-01

    Security is crucial for any system nowadays, especially communications. One of the most successful protocols in the field of communication over IP networks is the Session Initiation Protocol (SIP), an open standard used by different kinds of applications, both open-source and proprietary. High penetration and its text-based design have made SIP the number one target in IP telephony infrastructure, so the security of SIP servers is essential. To keep up with hackers and to detect potential malicious attacks, a security administrator needs to monitor and evaluate SIP traffic in the network. Such monitoring and evaluation, however, could easily overwhelm the security administrator, typically in networks with a number of SIP servers, users and logically or geographically separated segments. The proposed solution lies in automatic attack detection systems. The article covers the detection of VoIP attacks through a distributed network of nodes; an aggregation server then analyzes the gathered data with an artificial neural network, a multilayer perceptron trained with a set of collected attacks. Attack data can also be preprocessed and verified with a self-organizing map. The source data is detected by the distributed network of detection nodes; each node contains a honeypot application and a traffic monitoring mechanism. Aggregating the data from each node creates the input for the neural network. The automatic classification on a centralized server with a low false positive rate reduces the cost of attack detection resources. The detection system uses a modular design for easy deployment in the final infrastructure. The centralized server collects and processes the detected traffic and also maintains all detection nodes.

  12. Assessing an ensemble Kalman filter inference of Manning’s n coefficient of an idealized tidal inlet against a polynomial chaos-based MCMC

    KAUST Repository

    Siripatana, Adil

    2017-06-08

    Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean model, especially in the framework of parameter estimation. Based on Bayes rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without significant increase in the computational requirements. However, only approximate estimates are generally obtained by this approach due to the restricted Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of utilizing an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning’s n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. A full analysis of both methods, in the context of coastal ocean model, suggests that an ensemble Kalman filter with appropriate ensemble size and well-tuned inflation provides reliable mean estimates and

  13. Assessing an ensemble Kalman filter inference of Manning's n coefficient of an idealized tidal inlet against a polynomial chaos-based MCMC

    Science.gov (United States)

    Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim

    2017-08-01

    Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean model, especially in the framework of parameter estimation. Based on Bayes rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without significant increase in the computational requirements. However, only approximate estimates are generally obtained by this approach due to the restricted Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of utilizing an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. A full analysis of both methods, in the context of coastal ocean model, suggests that an ensemble Kalman filter with appropriate ensemble size and well-tuned inflation provides reliable mean estimates and
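
    The core analysis step of such an ensemble Kalman filter, including the inflation factor whose tuning the study examines, fits in a few lines for a scalar parameter. A toy stochastic-EnKF sketch with an invented linear surrogate standing in for ADCIRC and arbitrary numbers, not the paper's Manning's n configuration:

```python
import random
import statistics

def enkf_parameter_update(ensemble, forward, obs, obs_var, inflation, rng):
    """One stochastic-EnKF analysis step for a scalar parameter.

    ensemble: prior parameter samples; forward: model mapping a parameter to a
    predicted observation; inflation multiplies prior anomalies to counter
    ensemble underdispersion."""
    m = statistics.fmean(ensemble)
    ens = [m + inflation * (x - m) for x in ensemble]   # inflate anomalies
    preds = [forward(x) for x in ens]
    pm, dm = statistics.fmean(ens), statistics.fmean(preds)
    cov_xd = statistics.fmean([(x - pm) * (d - dm) for x, d in zip(ens, preds)])
    var_d = statistics.fmean([(d - dm) ** 2 for d in preds])
    gain = cov_xd / (var_d + obs_var)                   # Kalman gain
    # perturbed-observation update, one perturbation per member
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - d)
            for x, d in zip(ens, preds)]

rng = random.Random(42)
truth = 0.03                               # "true" parameter value
forward = lambda n: 10.0 * n               # invented linear surrogate model
prior = [rng.gauss(0.05, 0.02) for _ in range(50)]
post = enkf_parameter_update(prior, forward, obs=forward(truth),
                             obs_var=1e-4, inflation=1.05, rng=rng)
```

With an informative observation, the posterior ensemble mean moves from the biased prior toward the truth, which is the behavior the OSSE framework above is designed to verify.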

  14. A One-Step-Ahead Smoothing-Based Joint Ensemble Kalman Filter for State-Parameter Estimation of Hydrological Models

    KAUST Repository

    El Gharamti, Mohamad; Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2015-01-01

    The ensemble Kalman filter (EnKF) recursively integrates field data into simulation models to obtain a better characterization of the model’s state and parameters. These are generally estimated following a state-parameters joint augmentation

  15. Reliability analysis of a consecutive r-out-of-n: F system based on neural networks

    International Nuclear Information System (INIS)

    Habib, Aziz; Alsieidi, Ragab; Youssef, Ghada

    2009-01-01

    In this paper, we present a generalized Markov reliability and fault-tolerance model, which includes the effects of permanent and intermittent faults, for reliability evaluations based on neural network techniques. The reliability of a consecutive r-out-of-n:F system was obtained with a three-layer connected neural network that represents a discrete-time state reliability Markov model of the system. We fed the neural network with the desired reliability of the system under design, and then extracted the parameters of the system from the neural weights once the network converged to the desired reliability. Finally, we present simulation results.
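
    For reference, the reliability such a network is trained to reproduce has a simple recursion for independent, identical components (an assumption made here for illustration): conditioning on the number j of failed components before the first working one gives R(n) = sum over j < r of p * q^j * R(n-1-j), with R(n) = 1 for n < r. A direct dynamic-programming sketch, rather than the paper's neural-network construction:

```python
def consecutive_r_out_of_n_F(n, r, p):
    """Reliability of a consecutive r-out-of-n:F system with i.i.d. components.

    The system fails iff at least r consecutive components fail; each
    component works independently with probability p."""
    q = 1.0 - p
    R = [0.0] * (n + 1)   # R[m] = reliability of an m-component system
    for m in range(n + 1):
        # condition on the run of failures before the first working component
        R[m] = 1.0 if m < r else sum(p * q ** j * R[m - 1 - j] for j in range(r))
    return R[n]

# Sanity checks: r = 1 reduces to a series system (p**n);
# n = r reduces to "all n components fail" (reliability 1 - q**n).
```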

  16. A multi-stage intelligent approach based on an ensemble of two-way interaction model for forecasting the global horizontal radiation of India

    International Nuclear Information System (INIS)

    Jiang, He; Dong, Yao; Xiao, Ling

    2017-01-01

    Highlights: • An ensemble learning system is proposed to forecast global solar radiation. • LASSO is utilized as the feature selection method for the subset models. • GSO is used to select the weight vector aggregating the responses of the subset models. • A simple and efficient algorithm is designed based on a thresholding function. • Theoretical analysis focusing on the error rate is provided. - Abstract: Forecasting of effective solar irradiation has attracted huge interest in recent decades, mainly due to its various applications in grid-connected photovoltaic installations. This paper develops and investigates an ensemble learning based multistage intelligent approach to forecast global horizontal radiation 5 days ahead at four given locations in India. A two-way interaction model is considered with the purpose of detecting the associated correlations between features. The main structure of the novel method is ensemble learning, based on the divide-and-conquer principle, applied to enhance forecasting accuracy and model stability. An efficient feature selection method, LASSO, is performed in the input space with the regularization parameter selected by cross-validation. A weight vector which best represents the importance of each individual model in the ensemble system is provided by glowworm swarm optimization. The combination of feature selection and parameter selection helps create the diversity of the ensemble learning. In order to illustrate the validity of the proposed method, the datasets at the four locations in India are split into training and test datasets. The results of the real data experiments demonstrate the efficiency and efficacy of the proposed method compared with other competitors.
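
    The thresholding function central to LASSO is the soft-threshold operator S(z, t) = sign(z) * max(|z| - t, 0); cyclic coordinate descent applies it to one coefficient at a time, zeroing out weak features. A small generic sketch on invented, roughly centered data, not the paper's radiation features or its exact algorithm:

```python
def soft_threshold(z, t):
    """S(z, t) = sign(z) * max(|z| - t, 0), the LASSO thresholding function."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_coordinate_descent(X, y, lam, iters=200):
    """Minimize (1/2n)||y - Xw||^2 + lam*||w||_1 by cyclic coordinate descent.

    Assumes the columns of X are roughly mean-centered."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        for j in range(d):
            # correlation of feature j with the partial residual (excluding j)
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k]
                                            for k in range(d) if k != j))
                      for i in range(n)) / n
            zj = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / zj
    return w

# Toy data: y depends on feature 0 only; LASSO should zero out feature 1.
X = [[1.0, 0.1], [-1.0, -0.2], [2.0, 0.05], [-2.0, 0.15]]
y = [2.0, -2.0, 4.0, -4.0]
w = lasso_coordinate_descent(X, y, lam=0.3)
```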

  17. Quantum neural network based machine translator for Hindi to English.

    Science.gov (United States)

    Narayan, Ravi; Singh, V P; Chakraverty, S

    2014-01-01

    This paper presents a machine-learning-based machine translation system for Hindi to English, which learns a semantically correct corpus. A quantum-neural-network-based pattern recognizer is used to recognize and learn the patterns of the corpus, using the part-of-speech information of each word in the corpus, as a human would. The system performs machine translation using the knowledge gained during learning from input pairs of Devnagari-Hindi and English sentences. To analyze the effectiveness of the proposed approach, 2600 sentences were evaluated during simulation. The system achieves a BLEU score of 0.7502, a NIST score of 6.5773, a ROUGE-L score of 0.9233, and a METEOR score of 0.5456, significantly higher than Google Translation and Bing Translation for Hindi-to-English machine translation.

  18. Sequence memory based on coherent spin-interaction neural networks.

    Science.gov (United States)

    Xia, Min; Wong, W K; Wang, Zhijie

    2014-12-01

    Sequence information processing, for instance sequence memory, plays an important role in many functions of the brain. In the workings of the human brain, the steady-state period is alterable. However, in existing sequence memory models using heteroassociation, the steady-state period cannot be changed during sequence recall. In this work, a novel neural network model for sequence memory with a controllable steady-state period, based on coherent spin interaction, is proposed. In the proposed model, neurons fire collectively in a phase-coherent manner, which lets a neuron group respond differently to different patterns and also lets different neuron groups respond differently to one pattern. Simulation results demonstrating the performance of the sequence memory are presented. By introducing the new coherent spin-interaction sequence memory model, the steady-state period can be controlled by the dimension parameters and the overlap between the input pattern and the stored patterns. The sequence storage capacity is enlarged by coherent spin interaction compared with existing sequence memory models; furthermore, it grows exponentially with the dimension of the neural network.

  19. Comparison Of Power Quality Disturbances Classification Based On Neural Network

    Directory of Open Access Journals (Sweden)

    Nway Nway Kyaw Win

    2015-07-01

    Full Text Available Abstract Power quality disturbances (PQDs) cause serious problems in the reliability, safety, and economy of power system networks. To improve electric power quality, PQD events must be detected and classified by the type of transient fault. A methodology based on the wavelet transform with a multiresolution analysis (MRA) algorithm and two feedforward neural networks, a probabilistic neural network (PNN) and a multilayer feedforward (MLFF) neural network, is presented for the automatic classification of eight types of PQ signals: flicker, harmonics, sag, swell, impulse, fluctuation, notch, and oscillatory transients. The wavelet family Db4 is chosen in this system to calculate the detailed energy distributions used as input features for classification, because it performs well in detecting and localizing various types of PQ disturbances. The classifiers identify the disturbance type according to the energy distribution. The results show that the PNN can analyze different power disturbance types efficiently, and that the PNN has better classification accuracy than the MLFF network.
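
    The per-level detail-energy features described above can be sketched as follows, substituting the simpler Haar wavelet for the Db4 wavelet actually used in the paper; the test waveform and level count are hypothetical:

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    s = 2 ** 0.5
    approx = [(signal[2*i] + signal[2*i + 1]) / s for i in range(len(signal) // 2)]
    detail = [(signal[2*i] - signal[2*i + 1]) / s for i in range(len(signal) // 2)]
    return approx, detail

def detail_energies(signal, levels):
    """Energy of the detail coefficients at each MRA level (the classifier input)."""
    energies, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        energies.append(sum(c * c for c in detail))
    return energies

# Hypothetical waveform: a flat signal with a voltage-sag-like dip.
sag = [1.0] * 7 + [0.5] * 4 + [1.0] * 5
features = detail_energies(sag, 3)
```

    A disturbance-free signal yields zero detail energy at every level, so any nonzero entry in the feature vector flags a disturbance for the neural classifier.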

  20. The Dissolved Oxygen Prediction Method Based on Neural Network

    Directory of Open Access Journals (Sweden)

    Zhong Xiao

    2017-01-01

    Full Text Available Dissolved oxygen (DO) is oxygen dissolved in water and is an important factor for aquaculture. A BP neural network method combining the purelin, logsig, and tansig activation functions is proposed for predicting aquaculture dissolved oxygen. The input layer, hidden layer, and output layer are introduced in detail, including the weight adjustment process. Breeding data from three ponds over 10 consecutive days were used for the experiments; these ponds were located in Beihai, Guangxi, a traditional aquaculture base in southern China. The data of the first 7 days are used for training, and the data of the last 3 days are used for testing. Compared with common prediction models, namely curve fitting (CF), autoregression (AR), grey model (GM), and support vector machines (SVM), the experimental results show that the prediction accuracy of the neural network is the highest, with all predicted values within the 5% error limit, which can meet the needs of practical applications, followed by AR, GM, SVM, and CF. The prediction model can help improve the water quality monitoring level of aquaculture, preventing the deterioration of water quality and outbreaks of disease.
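
    The purelin, logsig, and tansig functions named above are MATLAB-style activation names. A minimal forward-pass sketch of such a network, with hypothetical inputs and weights (not taken from the paper), might look like:

```python
import math

def logsig(x):
    """Logistic sigmoid, MATLAB's logsig."""
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    """Hyperbolic tangent sigmoid, MATLAB's tansig."""
    return math.tanh(x)

def purelin(x):
    """Linear activation, MATLAB's purelin."""
    return x

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One tansig hidden layer and a purelin output unit; logsig could be
    substituted in the hidden layer, as in the paper's combined scheme."""
    hidden = [tansig(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return purelin(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Hypothetical normalised inputs (water temperature, pH) and weights.
do_pred = forward([0.6, 0.8],
                  w_hidden=[[0.4, -0.2], [0.1, 0.5]],
                  b_hidden=[0.0, 0.1],
                  w_out=[1.2, -0.7],
                  b_out=0.3)
```

    The BP weight-adjustment process then back-propagates the prediction error through these layers; only the forward pass is shown here.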

  1. Deep Neural Network Based Demand Side Short Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Seunghyoung Ryu

    2016-12-01

    Full Text Available In the smart grid, one of the most important research areas is load forecasting; it spans from traditional time series analyses to recent machine learning approaches and mostly focuses on forecasting aggregated electricity consumption. However, the importance of demand side energy management, including individual load forecasting, is becoming critical. In this paper, we propose deep neural network (DNN)-based load forecasting models and apply them to a demand side empirical load database. DNNs are trained in two different ways: with pre-training using a restricted Boltzmann machine, and with rectified linear units without pre-training. DNN forecasting models are trained on individual customers' electricity consumption data and regional meteorological elements. To verify the performance of the DNNs, forecasting results are compared with a shallow neural network (SNN), a double seasonal Holt–Winters (DSHW) model and the autoregressive integrated moving average (ARIMA) model. The mean absolute percentage error (MAPE) and relative root mean square error (RRMSE) are used for verification. Our results show that DNNs exhibit accurate and robust predictions compared to other forecasting models, e.g., MAPE and RRMSE are reduced by up to 17% and 22% compared to SNN and 9% and 29% compared to DSHW.
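
    The two verification metrics named above can be sketched as follows; note that RRMSE has several definitions in the literature, and the normalisation by the mean actual load used here is an assumption, not taken from the paper:

```python
import math

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def rrmse(actual, forecast):
    """RMSE relative to the mean actual load, in percent (one common
    definition; the abstract does not spell out its exact normalisation)."""
    mse = sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)
    return 100.0 * math.sqrt(mse) / (sum(actual) / len(actual))
```

    Both metrics are scale-free, which is what makes comparisons across customers with very different consumption levels meaningful.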

  2. Multimodel GCM-RCM Ensemble-Based Projections of Temperature and Precipitation over West Africa for the Early 21st Century

    Directory of Open Access Journals (Sweden)

    I. Diallo

    2012-01-01

    Full Text Available Reliable climate change scenarios are critical for West Africa, whose economy relies mostly on agriculture and, in this regard, multimodel ensembles are believed to provide the most robust climate change information. Toward this end, we analyze and intercompare the performance of a set of four regional climate models (RCMs) driven by two global climate models (GCMs), for a total of four different GCM-RCM pairs, in simulating present-day and future climate over West Africa. The results show that the individual RCM members, as well as their ensemble employing the same driving fields, exhibit different biases and show mixed results in terms of outperforming the GCM simulation of seasonal temperature and precipitation, indicating a substantial sensitivity of RCMs to regional and local processes. These biases are reduced and GCM simulations improved upon by averaging all four RCM simulations, suggesting that multi-model RCM ensembles based on different driving GCMs help to compensate for systematic errors from both the nested and the driving models. This confirms the importance of the multi-model approach for improving the robustness of climate change projections. Illustrative examples of such ensembles reveal that the western Sahel undergoes substantial drying in future climate projections, mostly due to a decrease in peak monsoon rainfall.
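
    The bias-compensation effect of multi-model averaging described above can be illustrated with a toy sketch: two members with opposite biases cancel in the ensemble mean (all values below are hypothetical):

```python
def ensemble_mean(simulations):
    """Grid-point-wise average over RCM members (each member: list of values)."""
    return [sum(vals) / len(vals) for vals in zip(*simulations)]

# Hypothetical seasonal-mean temperatures at three grid points (degrees C):
observed = [25.0, 26.0, 27.0]
warm_rcm = [26.5, 27.5, 28.5]   # member with a warm bias
cold_rcm = [23.5, 24.5, 25.5]   # member with a cold bias
averaged = ensemble_mean([warm_rcm, cold_rcm])
bias     = [m - o for m, o in zip(averaged, observed)]
```

    Real model errors are of course not perfectly anticorrelated, so averaging reduces rather than eliminates the bias, but the mechanism is the same.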

  3. Proposal of a novel ensemble learning based segmentation with a shape prior and its application to spleen segmentation from a 3D abdominal CT volume

    International Nuclear Information System (INIS)

    Shindo, Kiyo; Shimizu, Akinobu; Kobatake, Hidefumi; Nawano, Shigeru; Shinozaki, Kenji

    2010-01-01

    Organ segmentation learned by a conventional ensemble learning algorithm suffers from unnatural errors because each voxel is classified independently during segmentation. This paper proposes a novel ensemble learning algorithm that takes into account the global shape and location of organs. It estimates the shape and location of an organ from a given image by combining an intermediate segmentation result with a statistical shape model. Once the ensemble learning algorithm can no longer improve the segmentation performance in the iterative learning process, it estimates the shape and location by finding the optimal model parameter set with the maximum degree of correspondence between the statistical shape model and the intermediate segmentation result. Novel weak classifiers are generated based on the signed distance from the boundary of the estimated shape and the distance from the barycenter of the intermediate segmentation result. Subsequently, the learning process continues with the novel weak classifiers. This paper presents experimental results in which the proposed ensemble learning algorithm generates a segmentation process that extracts the spleen from a 3D CT image more precisely than a conventional one. (author)

  4. Representing and Reasoning with the Internet of Things: a Modular Rule-Based Model for Ensembles of Context-Aware Smart Things

    Directory of Open Access Journals (Sweden)

    S. W. Loke

    2016-03-01

    Full Text Available Context-aware smart things are capable of computational behaviour based on sensing the physical world, inferring context from the sensed data, and acting on the sensed context. A collection of such things can form what we call a thing-ensemble, when they have the ability to communicate with one another (over a short-range network such as Bluetooth, or the Internet, i.e., the Internet of Things (IoT) concept), sense each other, and when each of them might play certain roles with respect to each other. Each smart thing in a thing-ensemble might have its own context-aware behaviours which, when integrated with other smart things, yield behaviours that are not straightforward to reason with. We present Sigma, a language of operators, inspired by modular logic programming, for specifying and reasoning with combined behaviours among smart things in a thing-ensemble. We show numerous examples of the use of Sigma for describing a range of behaviours over a diverse range of thing-ensembles, from sensor networks to smart digital frames, demonstrating the versatility of our approach. We contend that our operator approach abstracts away low-level communication and protocol details, and allows systems of context-aware things to be designed and built in a compositional and incremental manner.

  5. A Pilot Study of Biomedical Text Comprehension using an Attention-Based Deep Neural Reader: Design and Experimental Analysis.

    Science.gov (United States)

    Kim, Seongsoon; Park, Donghyeon; Choi, Yonghwa; Lee, Kyubum; Kim, Byounggun; Jeon, Minji; Kim, Jihye; Tan, Aik Choon; Kang, Jaewoo

    2018-01-05

    With the development of artificial intelligence (AI) technology centered on deep-learning, the computer has evolved to a point where it can read a given text and answer a question based on the context of the text. Such a specific task is known as the task of machine comprehension. Existing machine comprehension tasks mostly use datasets of general texts, such as news articles or elementary school-level storybooks. However, no attempt has been made to determine whether an up-to-date deep learning-based machine comprehension model can also process scientific literature containing expert-level knowledge, especially in the biomedical domain. This study aims to investigate whether a machine comprehension model can process biomedical articles as well as general texts. Since there is no dataset for the biomedical literature comprehension task, our work includes generating a large-scale question answering dataset using PubMed and manually evaluating the generated dataset. We present an attention-based deep neural model tailored to the biomedical domain. To further enhance the performance of our model, we used a pretrained word vector and biomedical entity type embedding. We also developed an ensemble method of combining the results of several independent models to reduce the variance of the answers from the models. The experimental results showed that our proposed deep neural network model outperformed the baseline model by more than 7% on the new dataset. We also evaluated human performance on the new dataset. The human evaluation result showed that our deep neural model outperformed humans in comprehension by 22% on average. In this work, we introduced a new task of machine comprehension in the biomedical domain using a deep neural model. Since there was no large-scale dataset for training deep neural models in the biomedical domain, we created the new cloze-style datasets Biomedical Knowledge Comprehension Title (BMKC_T) and Biomedical Knowledge Comprehension Last

  6. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adapting artificial neural networks to robust fault diagnosis schemes. It presents neural-network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. Part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, and the importance of robustness. The book has tutorial value and can serve as a good starting point for newcomers to this field. The book is also devoted to advanced schemes for describing neural model uncertainty. In particular, methods for computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach to system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic examples and practica...

  7. Fault diagnosis method for nuclear power plants based on neural networks and voting fusion

    International Nuclear Information System (INIS)

    Zhou Gang; Ge Shengqi; Yang Li

    2010-01-01

    A new fault diagnosis method based on multiple artificial neural networks (ANNs) and voting fusion was proposed for nuclear power plants (NPPs), in view of the shortcomings of single-neural-network fault diagnosis methods. In this method, multiple neural networks of different types were trained for the fault diagnosis of an NPP. The operating parameters of the NPP, which have an important effect on its safety, were selected as the input variables of the neural networks; the outputs of the neural networks are the fault patterns of the NPP. The final diagnosis results were obtained by fusing the diagnosis results of the different neural networks through voting. Typical operating patterns of an NPP were diagnosed to demonstrate the effectiveness of the proposed method. The results show that the proposed method improves the precision and reliability of fault diagnosis for NPPs. (authors)
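
    The voting-fusion step described above reduces to a majority vote over the fault patterns returned by the individual networks. A minimal sketch, with hypothetical network names and fault labels:

```python
from collections import Counter

def voting_fusion(diagnoses):
    """Majority vote over the fault patterns returned by the individual
    networks; ties fall to the earliest answer (a choice made here, not
    specified in the abstract)."""
    return Counter(diagnoses).most_common(1)[0][0]

# Hypothetical outputs of three differently-typed networks for one transient:
votes = ["SGTR", "SGTR", "LOCA"]
diagnosis = voting_fusion(votes)
```

    A single mis-trained network is thus outvoted by the others, which is the source of the improved reliability reported in the abstract.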

  8. Identification-based chaos control via backstepping design using self-organizing fuzzy neural networks

    International Nuclear Information System (INIS)

    Peng Yafu; Hsu, C.-F.

    2009-01-01

    This paper proposes an identification-based adaptive backstepping control (IABC) for the chaotic systems. The IABC system is comprised of a neural backstepping controller and a robust compensation controller. The neural backstepping controller containing a self-organizing fuzzy neural network (SOFNN) identifier is the principal controller, and the robust compensation controller is designed to dispel the effect of minimum approximation error introduced by the SOFNN identifier. The SOFNN identifier is used to online estimate the chaotic dynamic function with structure and parameter learning phases of fuzzy neural network. The structure learning phase consists of the growing and pruning of fuzzy rules; thus the SOFNN identifier can avoid the time-consuming trial-and-error tuning procedure for determining the neural structure of fuzzy neural network. The parameter learning phase adjusts the interconnection weights of neural network to achieve favorable approximation performance. Finally, simulation results verify that the proposed IABC can achieve favorable tracking performance.

  9. Implementation of the ANNs ensembles in macro-BIM cost estimates of buildings' floor structural frames

    Science.gov (United States)

    Juszczyk, Michał

    2018-04-01

    This paper reports some results of studies on the use of artificial intelligence tools for cost estimation based on building information models. The problem of macro-level cost estimation based on building information models, supported by ensembles of artificial neural networks, is concisely discussed. In the course of the research, a regression model was built for estimating the cost of buildings' floor structural frames, as higher-level elements. Building information models are supposed to serve as a repository of data used for cost estimation. The core of the model is an ensemble of neural networks. The developed model allows cost estimates to be predicted with satisfactory accuracy.

  10. Risk assessment of agricultural water requirement based on a multi-model ensemble framework, southwest of Iran

    Science.gov (United States)

    Zamani, Reza; Akhond-Ali, Ali-Mohammad; Roozbahani, Abbas; Fattahi, Rouhollah

    2017-08-01

    Water shortage and climate change are the most important issues for sustainable agricultural and water resources development. Given the importance of water availability in crop production, the present study focused on risk assessment of the climate change impact on agricultural water requirements in southwest Iran, under two emission scenarios (A2 and B1) for the future period (2025-2054). A multi-model ensemble framework based on the mean observed temperature-precipitation (MOTP) method and a combined probabilistic approach using the Long Ashton Research Station Weather Generator (LARS-WG) and change factor (CF) methods have been used for downscaling to manage the uncertainty in the outputs of 14 general circulation models (GCMs). The results showed increasing temperature in all months and irregular changes in precipitation (either increasing or decreasing) in the future period. In addition, the calculated annual net water requirement for all crops affected by climate change indicated an increase between 4 and 10%. Furthermore, the required water demand volume is also expected to increase; the largest and smallest expected increases are about 13% and 5% under the A2 and B1 scenarios, respectively. Considering these results and the limited water resources in the study area, it is crucial to undertake water resources planning in order to reduce the negative effects of climate change. Therefore, adaptation scenarios addressing climate change, crop patterns, and water consumption should be taken into account.

  11. A Combined Methodology to Eliminate Artifacts in Multichannel Electrogastrogram Based on Independent Component Analysis and Ensemble Empirical Mode Decomposition.

    Science.gov (United States)

    Sengottuvel, S; Khan, Pathan Fayaz; Mariyappa, N; Patel, Rajesh; Saipriya, S; Gireesan, K

    2018-06-01

    Cutaneous measurements of electrogastrogram (EGG) signals are heavily contaminated by artifacts due to cardiac activity, breathing, motion, and electrode drifts, whose effective elimination remains an open problem. A common methodology is proposed by combining independent component analysis (ICA) and ensemble empirical mode decomposition (EEMD) to denoise gastric slow-wave signals in multichannel EGG data. Sixteen electrodes are fixed over the upper abdomen to measure the EGG signals under three gastric conditions, namely preprandial, immediately postprandial, and 2 h postprandial, for three healthy subjects and a subject with a gastric disorder. Instantaneous frequencies of the intrinsic mode functions obtained by applying the EEMD technique are analyzed to individually identify and remove each of the artifacts. A critical investigation of the proposed ICA-EEMD method reveals its ability to provide higher attenuation of artifacts and lower distortion than those obtained by the ICA-EMD method and conventional techniques such as bandpass and adaptive filtering. Characteristic changes in the slow-wave frequencies across the three gastric conditions could be determined from the denoised signals in all cases. The results therefore encourage the use of the EEMD-based technique for denoising gastric signals in clinical practice.

  12. Faults Diagnostics of Railway Axle Bearings Based on IMF’s Confidence Index Algorithm for Ensemble EMD

    Science.gov (United States)

    Yi, Cai; Lin, Jianhui; Zhang, Weihua; Ding, Jianming

    2015-01-01

    As train loads and travel speeds have increased over time, railway axle bearings have become critical elements requiring more efficient non-destructive inspection and fault diagnostics methods. This paper presents a novel and adaptive procedure based on ensemble empirical mode decomposition (EEMD) and the Hilbert marginal spectrum for multi-fault diagnostics of axle bearings. EEMD overcomes the restrictive assumptions about the data and the computational effort that limit the application of many signal processing techniques. The outputs of this adaptive approach are the intrinsic mode functions (IMFs), which are treated with the Hilbert transform in order to obtain the Hilbert instantaneous frequency spectrum and marginal spectrum. However, not all the IMFs obtained by the decomposition should be included in the Hilbert marginal spectrum. The IMF confidence index algorithm proposed in this paper is fully autonomous, overcoming the major limitation of selection by an experienced user, and allows the development of on-line tools. The effectiveness of the improvement is proven by the successful diagnosis of an axle bearing with a single fault or multiple composite faults, e.g., outer ring fault, cage fault and pin roller fault. PMID:25970256

  13. Short-Term Load Forecasting Model Based on Quantum Elman Neural Networks

    Directory of Open Access Journals (Sweden)

    Zhisheng Zhang

    2016-01-01

    Full Text Available A short-term load forecasting model based on quantum Elman neural networks was constructed in this paper. Quantum computation and the Elman feedback mechanism were integrated into the quantum Elman neural networks. Quantum computation can effectively improve the approximation capability and the information processing ability of the neural networks. Quantum Elman neural networks have not only feedforward connections but also feedback connections. The feedback connections between the hidden nodes and the context nodes constitute state feedback within the internal system, which forms a specific dynamic memory. Phase space reconstruction theory is the theoretical basis for constructing the forecasting model, and the training samples are formed by means of the K-nearest neighbor approach. Simulation results show that the model based on quantum Elman neural networks outperforms the models based on the quantum feedforward neural network, the conventional Elman neural network, and the conventional feedforward neural network, so the proposed model can effectively improve prediction accuracy. The research in this paper lays a theoretical foundation for the practical engineering application of short-term load forecasting models based on quantum Elman neural networks.

  14. Deep neural network and noise classification-based speech enhancement

    Science.gov (United States)

    Shi, Wenhua; Zhang, Xiongwei; Zou, Xia; Han, Wei

    2017-07-01

    In this paper, a speech enhancement method using noise classification and Deep Neural Network (DNN) was proposed. Gaussian mixture model (GMM) was employed to determine the noise type in speech-absent frames. DNN was used to model the relationship between noisy observation and clean speech. Once the noise type was determined, the corresponding DNN model was applied to enhance the noisy speech. GMM was trained with mel-frequency cepstrum coefficients (MFCC) and the parameters were estimated with an iterative expectation-maximization (EM) algorithm. Noise type was updated by spectrum entropy-based voice activity detection (VAD). Experimental results demonstrate that the proposed method could achieve better objective speech quality and smaller distortion under stationary and non-stationary conditions.
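
    The noise-classification front end described above routes each frame to a noise-specific enhancement model. As a rough stand-in for the GMM classifier, the sketch below uses a nearest-centroid rule over MFCC-like features; the noise types, features, and centroids are all hypothetical:

```python
def classify_noise(frame_features, centroids):
    """Nearest-centroid stand-in for the paper's GMM classifier: pick the
    noise type whose mean feature vector is closest to the observed frame."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(centroids, key=lambda name: dist2(frame_features, centroids[name]))

# Hypothetical per-noise-type mean MFCC-like features (2-D for brevity):
centroids = {"babble": [1.0, 0.2], "factory": [0.1, 1.1]}
noise_type = classify_noise([0.9, 0.3], centroids)
# the detected type would then select the matching noise-specific DNN enhancer
```

    A real GMM additionally models per-type covariance and mixture weights, so it handles overlapping noise classes better than this hard nearest-centroid rule.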

  15. CONEDEP: COnvolutional Neural network based Earthquake DEtection and Phase Picking

    Science.gov (United States)

    Zhou, Y.; Huang, Y.; Yue, H.; Zhou, S.; An, S.; Yun, N.

    2017-12-01

    We developed an automatic local earthquake detection and phase picking algorithm based on a Fully Convolutional Neural Network (FCN). The FCN algorithm detects and segments certain features (phases) in 3-component seismograms to realize efficient picking. We use the STA/LTA algorithm and a template matching algorithm to construct the training set from seismograms recorded 1 month before and after the Wenchuan earthquake. Precise P and S phases are identified and labeled to construct the training set. Noise data are produced by combining background noise and artificial synthetic noise to form a noise set equal in scale to the signal set. Training is performed on GPUs to achieve efficient convergence. Our algorithm shows significantly improved performance in terms of detection rate and precision in comparison with the STA/LTA and template matching algorithms.
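
    The STA/LTA baseline mentioned above compares a short-term average amplitude with a long-term one and triggers when their ratio jumps. A minimal sketch (window lengths, trace, and trigger threshold are all hypothetical):

```python
def sta_lta(samples, nsta, nlta):
    """Ratio of short-term to long-term average absolute amplitude, computed
    at every index where both trailing windows fit."""
    ratios = []
    for i in range(nlta, len(samples) + 1):
        sta = sum(abs(s) for s in samples[i - nsta:i]) / nsta
        lta = sum(abs(s) for s in samples[i - nlta:i]) / nlta
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

# Hypothetical trace: quiet background followed by a sudden arrival.
trace = [0.1] * 20 + [1.0] * 5
ratios = sta_lta(trace, nsta=2, nlta=10)
triggered = max(ratios) > 3.0  # 3.0 is an illustrative trigger threshold
```

    Detections from this kind of trigger (and from template matching) supply the labeled P and S windows on which the FCN is then trained.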

  16. Prediction of flow boiling curves based on artificial neural network

    International Nuclear Information System (INIS)

    Wu Junmei; Xi'an Jiaotong Univ., Xi'an; Su Guanghui

    2007-01-01

    The effects of the main system parameters on flow boiling curves were analyzed by using an artificial neural network (ANN) based on a database selected from the 1960s. The input parameters of the ANN are system pressure, mass flow rate, inlet subcooling, wall superheat, and steady/transition boiling, and the output parameter is heat flux. The results obtained by the ANN show that the heat flux increases with increasing inlet subcooling for all heat transfer modes. Mass flow rate has no significant effect on nucleate boiling curves, while the transition boiling and film boiling heat fluxes increase with an increase in mass flow rate. Pressure plays a predominant role and improves heat transfer in all boiling regions except film boiling. There are slight differences between the steady and the transient boiling curves in all boiling regions except the nucleate one. (authors)

  17. Neurally based measurement and evaluation of environmental noise

    CERN Document Server

    Soeta, Yoshiharu

    2015-01-01

    This book deals with methods of measurement and evaluation of environmental noise based on an auditory neural and brain-oriented model. The model consists of the autocorrelation function (ACF) and the interaural cross-correlation function (IACF) mechanisms for signals arriving at the two ear entrances. Even when the sound pressure level of a noise is only about 35 dBA, people may feel annoyed due to the aspects of sound quality. These aspects can be formulated by the factors extracted from the ACF and IACF. Several examples of measuring environmental noise—from outdoor noise such as that of aircraft, traffic, and trains, and indoor noise such as caused by floor impact, toilets, and air-conditioning—are demonstrated. According to the noise measurement and evaluation, applications for sound design are discussed. This book provides an excellent resource for students, researchers, and practitioners in a wide range of fields, such as the automotive, railway, and electronics industries, and soundscape, architec...

  18. Multivariate Cryptography Based on Clipped Hopfield Neural Network.

    Science.gov (United States)

    Wang, Jia; Cheng, Lee-Ming; Su, Tong

    2018-02-01

    Designing secure and efficient multivariate public key cryptosystems [multivariate cryptography (MVC)] to strengthen the security of RSA and ECC in conventional and quantum computational environments continues to be challenging research in recent years. In this paper, we describe multivariate public key cryptosystems based on an extended Clipped Hopfield Neural Network (CHNN) and implement them using the MVC (CHNN-MVC) framework operated in space. The Diffie-Hellman key exchange algorithm is extended into the matrix field, which illustrates the feasibility of its new applications in both classic and postquantum cryptography. The efficiency and security of our proposed new public key cryptosystem, CHNN-MVC, are simulated and found to be NP-hard. The proposed algorithm will strengthen multivariate public key cryptosystems and offers practical hardware realization.

  19. Deep neural network-based bandwidth enhancement of photoacoustic data.

    Science.gov (United States)

    Gutta, Sreedevi; Kadimesetty, Venkata Suryanarayana; Kalva, Sandeep Kumar; Pramanik, Manojit; Ganapathy, Sriram; Yalavarthy, Phaneendra K

    2017-11-01

    Photoacoustic (PA) signals collected at the boundary of tissue are always band-limited. A deep neural network was proposed to enhance the bandwidth (BW) of the detected PA signal, thereby improving the quantitative accuracy of the reconstructed PA images. A least-squares-based deconvolution method that utilizes the Tikhonov regularization framework was used for comparison with the proposed network. The proposed method was evaluated using both numerical and experimental data. The results indicate that the proposed method was capable of enhancing the BW of the detected PA signal, which in turn improves the contrast recovery and quality of reconstructed PA images without adding any significant computational burden. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  20. Finger vein recognition based on convolutional neural network

    Directory of Open Access Journals (Sweden)

    Meng Gesi

    2017-01-01

    Full Text Available Biometric authentication technology has been widely used in this information age. As one of the most important authentication technologies, finger vein recognition attracts attention because of its high security, reliable accuracy and excellent performance. However, current finger vein recognition systems are difficult to apply widely because of their complicated image pre-processing and unrepresentative feature vectors. To solve this problem, a finger vein recognition method based on a convolutional neural network (CNN) is proposed in this paper. The image samples are directly input into the CNN model to extract feature vectors, so that authentication can be performed by comparing the Euclidean distance between these vectors. Finally, the deep learning framework Caffe is adopted to verify this method. The results show great improvements in both speed and accuracy compared to previous research, and the model has good robustness to illumination and rotation.
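
    The Euclidean-distance matching step described above can be sketched as follows; the 3-D feature vectors and the distance threshold are hypothetical (real CNN embeddings are much higher-dimensional):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def authenticate(probe, templates, threshold):
    """Accept when the probe's CNN feature vector lies within a Euclidean
    distance threshold of any enrolled template (threshold is a hypothetical
    tuning parameter, not taken from the paper)."""
    return min(euclidean(probe, t) for t in templates) <= threshold

# Hypothetical 3-D feature vectors for one enrolled finger:
templates = [[0.90, 0.10, 0.40], [0.88, 0.12, 0.41]]
genuine_ok  = authenticate([0.91, 0.11, 0.39], templates, threshold=0.2)
impostor_ok = authenticate([0.10, 0.90, 0.90], templates, threshold=0.2)
```

    In practice the threshold is tuned on a validation set to trade off false accepts against false rejects.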

  1. Neural network-based QSAR and insecticide discovery: spinetoram

    Science.gov (United States)

    Sparks, Thomas C.; Crouse, Gary D.; Dripps, James E.; Anzeveno, Peter; Martynow, Jacek; DeAmicis, Carl V.; Gifford, James

    2008-06-01

    Improvements in the efficacy and spectrum of the spinosyns, novel fermentation-derived insecticides, have long been a goal within Dow AgroSciences. Because the spinosyns are large and complex fermentation products, identifying specific modifications likely to result in improved activity was a difficult process, since most modifications decreased activity. A variety of approaches were investigated to identify new synthetic directions for the spinosyn chemistry, including several explorations of the quantitative structure-activity relationships (QSAR) of the spinosyns, which initially were unsuccessful. However, application of artificial neural networks (ANN) to the spinosyn QSAR problem identified new directions for improved activity in the chemistry, which subsequent synthesis and testing confirmed. The ANN-based analogs, coupled with other information on substitution effects from spinosyn structure-activity relationships, led to the discovery of spinetoram (XDE-175). Launched in late 2007, spinetoram provides both improved efficacy and an expanded spectrum while maintaining the exceptional environmental and toxicological profile already established for the spinosyn chemistry.

  2. Reward-based training of recurrent neural networks for cognitive and value-based tasks.

    Science.gov (United States)

    Song, H Francis; Yang, Guangyu R; Wang, Xiao-Jing

    2017-01-13

    Trained neural network models, which exhibit features of neural activity recorded from behaving animals, may provide insights into the circuit mechanisms of cognitive functions through systematic analysis of network activity and connectivity. However, in contrast to the graded error signals commonly used to train networks through supervised learning, animals learn from reward feedback on definite actions through reinforcement learning. Reward maximization is particularly relevant when optimal behavior depends on an animal's internal judgment of confidence or subjective preferences. Here, we implement reward-based training of recurrent neural networks in which a value network guides learning by using the activity of the decision network to predict future reward. We show that such models capture behavioral and electrophysiological findings from well-known experimental paradigms. Our work provides a unified framework for investigating diverse cognitive and value-based computations, and predicts a role for value representation that is essential for learning, but not executing, a task.

  3. Complex catalysts from self-repairing ensembles to highly reactive air-based oxidation systems

    Science.gov (United States)

    Craig L. Hill; Laurent Delannoy; Dean C. Duncan; Ira A. Weinstock; Roman F. Renneke; Richard S. Reiner; Rajai H. Atalla; Jong Woo Han; Daniel A. Hillesheim; Rui Cao; Travis M. Anderson; Nelya M. Okun; Djamaladdin G. Musaev; Yurii V. Geletii

    2007-01-01

    Progress in four interrelated catalysis research efforts in our laboratory is summarized: (1) catalytic photochemical functionalization of unactivated C–H bonds by polyoxometalates (POMs); (2) self-repairing catalysts; (3) catalysts for air-based oxidations under ambient conditions; and (4) terminal oxo complexes of the late-transition metal elements and their...

  4. Identifying climate analogues for precipitation extremes for Denmark based on RCM simulations from the ENSEMBLES database

    DEFF Research Database (Denmark)

    Arnbjerg-Nielsen, Karsten; Funder, S. G.; Madsen, H.

    2015-01-01

    Climate analogues, also denoted Space-For-Time, may be used to identify regions where the present climatic conditions resemble conditions of a past or future state of another location or region based on robust climate variable statistics in combination with projections of how these statistics cha...

  5. An Integrated Ensemble-Based Operational Framework to Predict Urban Flooding: A Case Study of Hurricane Sandy in the Passaic and Hackensack River Basins

    Science.gov (United States)

    Saleh, F.; Ramaswamy, V.; Georgas, N.; Blumberg, A. F.; Wang, Y.

    2016-12-01

    Advances in computational resources and modeling techniques are opening the path to effectively integrate existing complex models. In the context of flood prediction, recent extreme events have demonstrated the importance of integrating components of the hydrosystem to better represent the interactions amongst different physical processes and phenomena. As such, there is a pressing need to develop holistic and cross-disciplinary modeling frameworks that effectively integrate existing models and better represent the operative dynamics. This work presents a novel Hydrologic-Hydraulic-Hydrodynamic Ensemble (H3E) flood prediction framework that operationally integrates existing predictive models representing coastal (New York Harbor Observing and Prediction System, NYHOPS), hydrologic (US Army Corps of Engineers Hydrologic Modeling System, HEC-HMS) and hydraulic (2-dimensional River Analysis System, HEC-RAS) components. The state-of-the-art framework is forced with 125 ensemble meteorological inputs from numerical weather prediction models including the Global Ensemble Forecast System, the European Centre for Medium-Range Weather Forecasts (ECMWF), the Canadian Meteorological Centre (CMC), the Short Range Ensemble Forecast (SREF) and the North American Mesoscale Forecast System (NAM). The framework produces, within a 96-hour forecast horizon, on-the-fly Google Earth flood maps that provide critical information for decision makers and emergency preparedness managers. The utility of the framework was demonstrated by retrospectively forecasting an extreme flood event, Hurricane Sandy, in the Passaic and Hackensack watersheds (New Jersey, USA). Hurricane Sandy caused significant damage to a number of critical facilities in this area including the New Jersey Transit's main storage and maintenance facility. The results of this work demonstrate that ensemble-based frameworks provide improved flood predictions and useful information about associated uncertainties, thus

  6. World Music Ensemble: Kulintang

    Science.gov (United States)

    Beegle, Amy C.

    2012-01-01

    As instrumental world music ensembles such as steel pan, mariachi, gamelan and West African drums are becoming more the norm than the exception in North American school music programs, there are other world music ensembles just starting to gain popularity in particular parts of the United States. The kulintang ensemble, a drum and gong ensemble…

  7. Using ensemble weather forecast in a risk based real time optimization of urban drainage systems

    DEFF Research Database (Denmark)

    Courdent, Vianney Augustin Thomas; Vezzaro, Luca; Mikkelsen, Peter Steen

    2015-01-01

    Global Real Time Control (RTC) of urban drainage system is increasingly seen as cost-effective solution in order to respond to increasing performance demand (e.g. reduction of Combined Sewer Overflow, protection of sensitive areas as bathing water etc.). The Dynamic Overflow Risk Assessment (DORA......) strategy was developed to operate Urban Drainage Systems (UDS) in order to minimize the expected overflow risk by considering the water volume presently stored in the drainage network, the expected runoff volume based on a 2-hours radar forecast model and an estimated uncertainty of the runoff forecast....... However, such temporal horizon (1-2 hours) is relatively short when used for the operation of large storage facilities, which may require a few days to be emptied. This limits the performance of the optimization and control in reducing combined sewer overflow and in preparing for possible flooding. Based...

  8. Automated Bug Assignment: Ensemble-based Machine Learning in Large Scale Industrial Contexts

    OpenAIRE

    Jonsson, Leif; Borg, Markus; Broman, David; Sandahl, Kristian; Eldh, Sigrid; Runeson, Per

    2016-01-01

    Bug report assignment is an important part of software maintenance. In particular, incorrect assignments of bug reports to development teams can be very expensive in large software development projects. Several studies propose automating bug assignment techniques using machine learning in open source software contexts, but no study exists for large-scale proprietary projects in industry. The goal of this study is to evaluate automated bug assignment techniques that are based on machine learni...

  9. Neural network-based nonlinear model predictive control vs. linear quadratic Gaussian control

    Science.gov (United States)

    Cho, C.; Vance, R.; Mardi, N.; Qian, Z.; Prisbrey, K.

    1997-01-01

    One problem with the application of neural networks to the multivariable control of mineral and extractive processes is determining whether and how to use them. The objective of this investigation was to compare neural network control to more conventional strategies and to determine if there are any advantages in using neural network control in terms of set-point tracking, rise time, settling time, disturbance rejection and other criteria. The procedure involved developing neural network controllers using both historical plant data and simulation models. Various control patterns were tried, including both inverse and direct neural network plant models. These were compared to state space controllers that are, by nature, linear. For grinding and leaching circuits, a nonlinear neural network-based model predictive control strategy was superior to a state space-based linear quadratic Gaussian controller. The investigation pointed out the importance of incorporating state space into neural networks by making them recurrent, i.e., feeding certain output state variables into input nodes in the neural network. It was concluded that neural network controllers can have better disturbance rejection, set-point tracking, rise time, settling time and lower set-point overshoot, and it was also concluded that neural network controllers can be more reliable and easy to implement in complex, multivariable plants.

  10. Design ensemble machine learning model for breast cancer diagnosis.

    Science.gov (United States)

    Hsieh, Sheau-Ling; Hsieh, Sung-Huai; Cheng, Po-Hsun; Chen, Chi-Huang; Hsu, Kai-Ping; Lee, I-Shun; Wang, Zhenyu; Lai, Feipei

    2012-10-01

    In this paper, we classify breast cancer using medical diagnostic data. Information gain has been adopted for feature selection. Neural fuzzy (NF), k-nearest neighbor (KNN) and quadratic classifier (QC) schemes have been developed for classification, each as a single model as well as in their associated ensemble forms. In addition, a combined ensemble model of all three schemes has been constructed for further validation. The experimental results indicate that ensemble learning performs better than the individual single models. Moreover, the combined ensemble model achieves the highest classification accuracy for breast cancer among all models.
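
The simplest way to combine the NF, KNN and QC outputs into an ensemble is majority voting over per-model class predictions. A sketch with made-up predictions (1 = malignant); the record does not state which combination rule was used, so plain voting is an assumption here:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions sample by sample using a
    majority vote, the simplest ensemble rule."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Three hypothetical classifiers voting on four samples.
knn = [1, 0, 1, 1]
nf = [1, 0, 0, 1]
qc = [0, 0, 1, 1]
print(majority_vote([knn, nf, qc]))  # → [1, 0, 1, 1]
```

With an odd number of voters there are no ties; for even counts a tie-breaking rule (e.g. prefer the positive class) would be needed.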

  11. An Ensemble of Classifiers based Approach for Prediction of Alzheimer's Disease using fMRI Images based on Fusion of Volumetric, Textural and Hemodynamic Features

    Directory of Open Access Journals (Sweden)

    MALIK, F.

    2018-02-01

    Full Text Available Alzheimer's is a neurodegenerative disease caused by the destruction and death of brain neurons, resulting in memory loss, impaired thinking ability and certain behavioral changes. Alzheimer's disease is a major cause of dementia, and eventually death, all around the world. Early diagnosis of the disease is crucial; it can help victims maintain their level of independence for a comparatively longer time and live the best life possible. For early detection of Alzheimer's disease, we propose a novel approach based on fusion of multiple types of features, including hemodynamic, volumetric and textural features of the brain. Our approach uses non-invasive fMRI with an ensemble of classifiers for the classification of normal controls and Alzheimer's patients. For performance evaluation, ten-fold cross-validation is used. Individual feature sets and fusion of features have been investigated with ensemble classifiers for successful classification of Alzheimer's patients from normal controls. It is observed that fusion of features results in improved accuracy, specificity and sensitivity.

  12. Dynamic neural network-based methods for compensation of nonlinear effects in multimode communication lines

    Science.gov (United States)

    Sidelnikov, O. S.; Redyuk, A. A.; Sygletos, S.

    2017-12-01

    We consider neural network-based schemes of digital signal processing. It is shown that the use of a dynamic neural network-based scheme of signal processing ensures an increase in the optical signal transmission quality in comparison with that provided by other methods for nonlinear distortion compensation.

  13. The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.

    Science.gov (United States)

    Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun

    2018-01-01

    Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network by neural energy coding theory. The numerical simulation result showed that the periodicity of the network energy distribution was positively correlated to the number of neurons and coupling strength, but negatively correlated to signal transmitting delay. Moreover, a relationship was established between the energy distribution feature and the synchronous oscillation of the neural network, which showed that when the proportion of negative energy in power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation result of structural neural network based on the Wang-Zhang biophysical model of neurons showed that both models were essentially consistent.
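
The building block of the network above is the Hodgkin-Huxley neuron. A single-neuron forward-Euler sketch with the classic squid-axon constants; the network coupling and the energy bookkeeping of the study are omitted:

```python
import math

def hh_step(V, m, h, n, I, dt=0.01):
    """One forward-Euler step of the Hodgkin-Huxley point neuron
    (classic squid-axon parameters; V in mV, I in uA/cm^2, dt in ms)."""
    am = 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    bm = 4.0 * math.exp(-(V + 65) / 18)
    ah = 0.07 * math.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + math.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    bn = 0.125 * math.exp(-(V + 65) / 80)
    INa = 120.0 * m**3 * h * (V - 50.0)   # sodium current
    IK = 36.0 * n**4 * (V + 77.0)         # potassium current
    IL = 0.3 * (V + 54.387)               # leak current
    V += dt * (I - INa - IK - IL)         # membrane capacitance C_m = 1 uF/cm^2
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    return V, m, h, n

# Drive the neuron with a constant suprathreshold current; it should spike.
V, m, h, n = -65.0, 0.05, 0.6, 0.32
peak = V
for _ in range(5000):                     # 50 ms at dt = 0.01 ms
    V, m, h, n = hh_step(V, m, h, n, I=10.0)
    peak = max(peak, V)
```

A network version couples many such units through synaptic current terms added to I; the energy consumption studied in the paper is then computed from the ionic currents and membrane potential.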

  14. A sequence-based dynamic ensemble learning system for protein ligand-binding site prediction

    KAUST Repository

    Chen, Peng

    2015-12-03

    Background: Proteins have the fundamental ability to selectively bind to other molecules and perform specific functions through such interactions, such as protein-ligand binding. Accurate prediction of protein residues that physically bind to ligands is important for drug design and protein docking studies. Most of the successful protein-ligand binding predictions were based on known structures. However, structural information is not largely available in practice due to the huge gap between the number of known protein sequences and that of experimentally solved structures

  15. A sequence-based dynamic ensemble learning system for protein ligand-binding site prediction

    KAUST Repository

    Chen, Peng; Hu, ShanShan; Zhang, Jun; Gao, Xin; Li, Jinyan; Xia, Junfeng; Wang, Bing

    2015-01-01

    Background: Proteins have the fundamental ability to selectively bind to other molecules and perform specific functions through such interactions, such as protein-ligand binding. Accurate prediction of protein residues that physically bind to ligands is important for drug design and protein docking studies. Most of the successful protein-ligand binding predictions were based on known structures. However, structural information is not largely available in practice due to the huge gap between the number of known protein sequences and that of experimentally solved structures

  16. Ensemble regression model-based anomaly detection for cyber-physical intrusion detection in smart grids

    DEFF Research Database (Denmark)

    Kosek, Anna Magdalena; Gehrke, Oliver

    2016-01-01

    The shift from centralised large production to distributed energy production has several consequences for current power system operation. The replacement of large power plants by growing numbers of distributed energy resources (DERs) increases the dependency of the power system on small scale......, distributed production. Many of these DERs can be accessed and controlled remotely, posing a cybersecurity risk. This paper investigates an intrusion detection system which evaluates the DER operation in order to discover unauthorized control actions. The proposed anomaly detection method is based...

  17. Vision-Based Fall Detection with Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Adrián Núñez-Marcos

    2017-01-01

    Full Text Available One of the biggest challenges in modern societies is the improvement of healthy aging and the support to older persons in their daily activities. In particular, given its social and economic impact, the automatic detection of falls has attracted considerable attention in the computer vision and pattern recognition communities. Although the approaches based on wearable sensors have provided high detection rates, some of the potential users are reluctant to wear them and thus their use is not yet normalized. As a consequence, alternative approaches such as vision-based methods have emerged. We firmly believe that the irruption of the Smart Environments and the Internet of Things paradigms, together with the increasing number of cameras in our daily environment, forms an optimal context for vision-based systems. Consequently, here we propose a vision-based solution using Convolutional Neural Networks to decide if a sequence of frames contains a person falling. To model the video motion and make the system scenario independent, we use optical flow images as input to the networks followed by a novel three-step training phase. Furthermore, our method is evaluated in three public datasets achieving the state-of-the-art results in all three of them.

  18. A two-stage method of quantitative flood risk analysis for reservoir real-time operation using ensemble-based hydrologic forecasts

    Science.gov (United States)

    Liu, P.

    2013-12-01

    Quantitative analysis of the risk of reservoir real-time operation is a hard task owing to the difficulty of accurately describing inflow uncertainties. Ensemble-based hydrologic forecasts depict the inflows directly, capturing not only the marginal distributions but also their persistence via scenarios. This motivates us to analyze the reservoir real-time operating risk with ensemble-based hydrologic forecasts as inputs. A method is developed that uses the forecast horizon point to divide the future time into two stages: the forecast lead time and the unpredicted time. The risk within the forecast lead time is computed by counting the number of failed forecast scenarios, and the risk in the unpredicted time is estimated using reservoir routing with the design floods and the reservoir water levels at the forecast horizon point. As a result, a two-stage risk analysis method is set up to quantify the entire flood risk, defined as the ratio of the number of scenarios that exceed the critical value to the total number of scenarios. China's Three Gorges Reservoir (TGR) is selected as a case study, where parameter and precipitation uncertainties are implemented to produce ensemble-based hydrologic forecasts. Bayesian inference via Markov chain Monte Carlo is used to account for the parameter uncertainty. Two reservoir operation schemes, the real operation and scenario optimization, are evaluated for flood risk and hydropower profit analysis. For the 2010 flood, it is found that improving the hydrologic forecast accuracy does not necessarily decrease the reservoir real-time operation risk, and that most of the risk comes from the forecast lead time. It is therefore valuable to decrease the variance of ensemble-based hydrologic forecasts, with less bias, for reservoir operational purposes.
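
The stage-1 risk described above, obtained by counting failed forecast scenarios, is simply the fraction of ensemble members whose peak exceeds the critical value. A sketch with hypothetical peak water levels:

```python
def lead_time_risk(scenario_peaks, critical_level):
    """Stage-1 flood risk within the forecast lead time: the fraction
    of ensemble scenarios whose peak exceeds the critical value."""
    failures = sum(1 for peak in scenario_peaks if peak > critical_level)
    return failures / len(scenario_peaks)

# Eight hypothetical ensemble peak water levels (m) against a 175 m limit.
peaks = [172.1, 174.8, 175.6, 173.0, 176.2, 174.1, 175.1, 171.9]
print(lead_time_risk(peaks, 175.0))  # → 0.375
```

The stage-2 (unpredicted time) risk would be obtained separately by routing design floods from the water level each scenario reaches at the forecast horizon.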

  19. An ensemble based top performing approach for NCI-DREAM drug sensitivity prediction challenge.

    Directory of Open Access Journals (Sweden)

    Qian Wan

    Full Text Available We consider the problem of predicting sensitivity of cancer cell lines to new drugs based on supervised learning on genomic profiles. The genetic and epigenetic characterization of a cell line provides observations on various aspects of regulation including DNA copy number variations, gene expression, DNA methylation and protein abundance. To extract relevant information from the various data types, we applied a random forest based approach to generate sensitivity predictions from each type of data and combined the predictions in a linear regression model to generate the final drug sensitivity prediction. Our approach when applied to the NCI-DREAM drug sensitivity prediction challenge was a top performer among 47 teams and produced high accuracy predictions. Our results show that the incorporation of multiple genomic characterizations lowered the mean and variance of the estimated bootstrap prediction error. We also applied our approach to the Cancer Cell Line Encyclopedia database for sensitivity prediction and the ability to extract the top targets of an anti-cancer drug. The results illustrate the effectiveness of our approach in predicting drug sensitivity from heterogeneous genomic datasets.
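
The combination scheme described, per-data-type predictors merged by a linear regression, is a stacking step. A numpy sketch in which the random forests are assumed and replaced by toy per-model predictions; all numbers are illustrative:

```python
import numpy as np

# Stacking sketch: columns are predictions from models trained on three
# data types (e.g. expression / methylation / copy number) for four
# training cell lines; the base learners themselves are not shown.
train_preds = np.array([
    [0.9, 1.1, 1.0],
    [2.1, 1.8, 2.0],
    [2.9, 3.2, 3.1],
    [4.2, 3.9, 4.0],
])
train_truth = np.array([1.0, 2.0, 3.0, 4.0])   # measured drug sensitivities

# Fit the linear combiner by least squares on the training responses.
w, *_ = np.linalg.lstsq(train_preds, train_truth, rcond=None)

# Combine the three base predictions for a new cell line.
new_preds = np.array([1.9, 2.1, 2.0])
final = float(new_preds @ w)                   # ≈ 2.0
```

Because each base model is already roughly calibrated, the learned weights act as a soft vote that down-weights the noisier data types.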

  20. Adaptive PID control based on orthogonal endocrine neural networks.

    Science.gov (United States)

    Milovanović, Miroslav B; Antić, Dragan S; Milojković, Marko T; Nikolić, Saša S; Perić, Staniša Lj; Spasić, Miodrag D

    2016-12-01

    A new intelligent hybrid structure used for online tuning of a PID controller is proposed in this paper. The structure is based on two adaptive neural networks, both with built-in Chebyshev orthogonal polynomials. First substructure network is a regular orthogonal neural network with implemented artificial endocrine factor (OENN), in the form of environmental stimuli, to its weights. It is used for approximation of control signals and for processing system deviation/disturbance signals which are introduced in the form of environmental stimuli. The output values of OENN are used to calculate artificial environmental stimuli (AES), which represent required adaptation measure of a second network-orthogonal endocrine adaptive neuro-fuzzy inference system (OEANFIS). OEANFIS is used to process control, output and error signals of a system and to generate adjustable values of proportional, derivative, and integral parameters, used for online tuning of a PID controller. The developed structure is experimentally tested on a laboratory model of the 3D crane system in terms of analysing tracking performances and deviation signals (error signals) of a payload. OENN-OEANFIS performances are compared with traditional PID and 6 intelligent PID type controllers. Tracking performance comparisons (in transient and steady-state period) showed that the proposed adaptive controller possesses performances within the range of other tested controllers. The main contribution of OENN-OEANFIS structure is significant minimization of deviation signals (17%-79%) compared to other controllers. It is recommended to exploit it when dealing with a highly nonlinear system which operates in the presence of undesirable disturbances. Copyright © 2016 Elsevier Ltd. All rights reserved.
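
What the OENN-OEANFIS pair ultimately adjusts are the three PID gains. A minimal discrete PID loop on a first-order plant with the gains held fixed; the online retuning itself, and the 3D crane dynamics, are not modeled here:

```python
def pid_step(error, state, kp, ki, kd, dt=0.01):
    """One discrete PID update; in the paper the three gains would be
    retuned online by the neural structure rather than held fixed."""
    integral = state["integral"] + error * dt
    derivative = (error - state["prev_error"]) / dt
    state["integral"], state["prev_error"] = integral, error
    return kp * error + ki * integral + kd * derivative

# Drive a toy first-order plant x' = -x + u toward the setpoint 1.0.
x = 0.0
state = {"integral": 0.0, "prev_error": 0.0}
for _ in range(2000):                          # 20 s at dt = 0.01 s
    u = pid_step(1.0 - x, state, kp=2.0, ki=1.0, kd=0.05)
    x += 0.01 * (-x + u)                       # forward-Euler plant update
```

The integral term removes the steady-state error; an adaptive scheme like the paper's would change kp, ki and kd each step based on the observed deviation signals.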

  1. Web based educational tool for neural network robot control

    Directory of Open Access Journals (Sweden)

    Jure Čas

    2007-05-01

    Full Text Available Abstract— This paper describes an application for teleoperation of the SCARA robot via the internet. The SCARA robot is used by students of mechatronics at the University of Maribor as a remote educational tool. The developed software consists of two parts, i.e. the continuous neural network sliding mode controller (CNNSMC) and the graphical user interface (GUI). The application is based on two well-known commercially available software packages, i.e. MATLAB/Simulink and LabVIEW. MATLAB/Simulink and the DSP2 Library for Simulink are used for control algorithm development, simulation and executable code generation. While this code executes on the DSP-2 Roby controller and drives the real process through the analog and digital I/O lines, a LabVIEW virtual instrument (VI), running on the PC, is used as the user front end. The LabVIEW VI provides the ability for on-line parameter tuning, signal monitoring, on-line analysis and, via Remote Panels technology, also teleoperation. The main advantage of a CNNSMC is the exploitation of its self-learning capability. When friction or an unexpected impediment occurs, for example, the user of a remote application has no information about the changed robot dynamics and is thus unable to handle it manually. This is no longer a control problem because, when a CNNSMC is used, any approximation of the changed robot dynamics is estimated independently of the remote user. Index Terms—LabVIEW; Matlab/Simulink; Neural network control; remote educational tool; robotics

  2. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    Science.gov (United States)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, posterior probability is quantified as functions of the magnitude and timing of discharges, and the law of total probability is implemented to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generate better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in training and validation periods respectively, which are at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, assessment criteria values of e-Bay show smaller fluctuations than traditional approaches and its performance becomes outstanding. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic applications to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
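
The expected discharge under the law of total probability is the posterior-weighted average over precipitation-product/model pairs. A sketch in which e-Bay's magnitude- and timing-dependent posteriors are reduced to fixed toy weights:

```python
def expected_discharge(predictions, posteriors):
    """Law of total probability: expected discharge is the
    posterior-weighted average over product/model combinations."""
    assert abs(sum(posteriors) - 1.0) < 1e-9, "posteriors must sum to 1"
    return sum(p * q for p, q in zip(posteriors, predictions))

# Four hypothetical product/model combinations predicting discharge (m^3/s),
# with toy posterior probabilities of each combination being correct.
q = [120.0, 135.0, 110.0, 128.0]
w = [0.4, 0.3, 0.1, 0.2]
e_q = expected_discharge(q, w)   # ≈ 125.1 m^3/s
```

In the full e-Bay approach the weights w would themselves be functions of the discharge magnitude and timing, re-evaluated at every time step.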

  3. A semiautomatic CT-based ensemble segmentation of lung tumors: comparison with oncologists' delineations and with the surgical specimen.

    Science.gov (United States)

    Rios Velazquez, Emmanuel; Aerts, Hugo J W L; Gu, Yuhua; Goldgof, Dmitry B; De Ruysscher, Dirk; Dekker, Andre; Korn, René; Gillies, Robert J; Lambin, Philippe

    2012-11-01

    To assess the clinical relevance of a semiautomatic CT-based ensemble segmentation method, by comparing it to pathology and to CT/PET manual delineations by five independent radiation oncologists in non-small cell lung cancer (NSCLC). For 20 NSCLC patients (stages Ib-IIIb) the primary tumor was delineated manually on CT/PET scans by five independent radiation oncologists and segmented using a CT based semi-automatic tool. Tumor volume and overlap fractions between manual and semiautomatic-segmented volumes were compared. All measurements were correlated with the maximal diameter on macroscopic examination of the surgical specimen. Imaging data are available on www.cancerdata.org. High overlap fractions were observed between the semi-automatically segmented volumes and the intersection (92.5±9.0, mean±SD) and union (94.2±6.8) of the manual delineations. No statistically significant differences in tumor volume were observed between the semiautomatic segmentation (71.4±83.2 cm(3), mean±SD) and manual delineations (81.9±94.1 cm(3); p=0.57). The maximal tumor diameter of the semiautomatic-segmented tumor correlated strongly with the macroscopic diameter of the primary tumor (r=0.96). Semiautomatic segmentation of the primary tumor on CT demonstrated high agreement with CT/PET manual delineations and strongly correlated with the macroscopic diameter considered as the "gold standard". This method may be used routinely in clinical practice and could be employed as a starting point for treatment planning, target definition in multi-center clinical trials or for high throughput data mining research. This method is particularly suitable for peripherally located tumors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
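
The overlap fractions reported above compare voxel sets between segmentations. A toy sketch; defining the fraction relative to the manual volume is an assumption of this example, not necessarily the paper's exact convention:

```python
def overlap_fraction(auto_voxels, manual_voxels):
    """Overlap between a semiautomatic segmentation and a manual
    delineation, as a percentage of the manual volume."""
    auto_voxels, manual_voxels = set(auto_voxels), set(manual_voxels)
    return 100.0 * len(auto_voxels & manual_voxels) / len(manual_voxels)

# Toy 2-D voxel masks standing in for real 3-D segmentation arrays.
manual = {(0, 0), (0, 1), (1, 0), (1, 1)}   # four-voxel "tumor"
auto = {(0, 1), (1, 0), (1, 1), (2, 1)}     # semiautomatic result
print(overlap_fraction(auto, manual))  # → 75.0
```

Comparing against the intersection and union of several observers' delineations, as the study does, uses the same set operations across five manual masks.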

  4. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    Science.gov (United States)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2018-04-01

    The southwest (SW) monsoon season (June, July, August and September) is the major period of rainfall over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. This approach is based on the assumption that training with similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. In this approach, the training dataset has been selected by matching the present-day conditions to the archived dataset; the days with the most similar conditions were identified and used for training the model. The coefficients thus generated were used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. the European Centre for Medium-Range Weather Forecasts (ECMWF), the United Kingdom Meteorological Office (UKMO), the National Centers for Environmental Prediction (NCEP) and the China Meteorological Administration (CMA), have been used for developing the SMME forecasts. Forecasts of 1-5, 6-10 and 11-15 days were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. the equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all the pentads, viz. 1-5, 6-10 and 11-15 days.

  5. A Neural Network Based Dutch Part of Speech Tagger

    NARCIS (Netherlands)

    Boschman, E.; op den Akker, Hendrikus J.A.; Nijholt, A.; Nijholt, Antinus; Pantic, Maja; Pantic, M.; Poel, M.; Poel, Mannes; Hondorp, G.H.W.

    2008-01-01

    In this paper a Neural Network is designed for Part-of-Speech Tagging of Dutch text. Our approach uses the Corpus Gesproken Nederlands (CGN) consisting of almost 9 million transcribed words of spoken Dutch, divided into 15 different categories. The outcome of the design is a Neural Network with an

  6. Thermoelastic steam turbine rotor control based on neural network

    Science.gov (United States)

    Rzadkowski, Romuald; Dominiczak, Krzysztof; Radulski, Wojciech; Szczepanik, R.

    2015-12-01

    Considered here are Nonlinear Auto-Regressive neural networks with eXogenous inputs (NARX) as a mathematical model of a steam turbine rotor for controlling steam turbine stress on-line. In order to obtain neural networks that locate critical stress and temperature points in the steam turbine during transient states, an FE rotor model was built. This model was used to train the neural networks on the basis of steam turbine transient operating data. The training included nonlinearity related to steam turbine expansion, heat exchange and rotor material properties during transients. Such neural networks are algorithms that can be implemented on PLC controllers, which allows neural networks to be applied to steam turbine stress control in industrial power plants.
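
    A NARX model's closed-loop recursion (next output from the last few outputs plus the last few exogenous inputs) can be illustrated with a toy map standing in for the trained network; the `narx_simulate` helper and the linear toy `model` below are assumptions for the example, not the paper's trained network.

```python
def narx_simulate(f, y_init, u_series, ny=2, nu=2):
    """Closed-loop NARX simulation: each new output is a (possibly
    nonlinear) function f of the last ny outputs and last nu inputs."""
    y = list(y_init)
    for t in range(len(u_series) - nu + 1):
        y.append(f(y[-ny:], u_series[t:t + nu]))
    return y

# Toy "stress" map: next stress depends on current stress and steam input.
model = lambda y_hist, u_hist: 0.5 * y_hist[-1] + u_hist[-1]
```

    In the paper's setting, `f` would be the trained NARX network and `u_series` the measured steam parameters; the recursion itself is cheap enough for a PLC scan cycle.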

  7. Template measurement for plutonium pit based on neural networks

    International Nuclear Information System (INIS)

    Zhang Changfan; Gong Jian; Liu Suping; Hu Guangchun; Xiang Yongchun

    2012-01-01

    Template measurement for plutonium pit extracts characteristic data from the γ-ray spectrum and the neutron counts emitted by plutonium. The characteristic data of the suspicious object are compared with data of the declared plutonium pit to verify whether they are of the same type. In this paper, neural networks are adopted as the comparison algorithm for template measurement of plutonium pits. Two kinds of neural networks are created, i.e. the BP and LVQ neural networks. They are applied in different aspects of the template measurement and identification. The BP neural network is used for classification of different types of plutonium pits, which is often needed for the management of nuclear materials. The LVQ neural network is used for comparison of inspected objects to the declared one, which is usually applied in the field of nuclear disarmament and verification. (authors)

  8. Artificial neural network based approach to transmission lines protection

    International Nuclear Information System (INIS)

    Joorabian, M.

    1999-05-01

    The aim of this paper is to present an accurate fault detection technique for high-speed distance protection using artificial neural networks. The feed-forward multi-layer neural network with the use of supervised learning and the common training rule of error back-propagation is chosen for this study. Information available locally at the relay point is passed to a neural network in order for an assessment of the fault location to be made. In practice, however, there is a large amount of information available, and a feature extraction process is required to reduce the dimensionality of the pattern vectors whilst retaining important information that distinguishes the fault point. The choice of features is critical to the performance of the neural network's learning and operation. A significant feature of this paper is that an artificial neural network has been designed and tested to enhance the precision of the adaptive capabilities for distance protection.

  9. Ensemble-based computational approach discriminates functional activity of p53 cancer and rescue mutants.

    Directory of Open Access Journals (Sweden)

    Özlem Demir

    2011-10-01

    Full Text Available The tumor suppressor protein p53 can lose its function upon single-point missense mutations in the core DNA-binding domain ("cancer mutants"). Activity can be restored by second-site suppressor mutations ("rescue mutants"). This paper relates the functional activity of p53 cancer and rescue mutants to their overall molecular dynamics (MD), without focusing on local structural details. A novel global measure of protein flexibility for the p53 core DNA-binding domain, the number of clusters at a certain RMSD cutoff, was computed by clustering over 0.7 µs of explicitly solvated all-atom MD simulations. For wild-type p53 and a sample of p53 cancer or rescue mutants, the number of clusters was a good predictor of in vivo p53 functional activity in cell-based assays. This number-of-clusters (NOC) metric was strongly correlated (r² = 0.77) with reported values of experimentally measured ΔΔG protein thermodynamic stability. Interpreting the number of clusters as a measure of protein flexibility: (i) p53 cancer mutants were more flexible than wild-type protein, (ii) second-site rescue mutations decreased the flexibility of cancer mutants, and (iii) negative controls of non-rescue second-site mutants did not. This new method reflects the overall stability of the p53 core domain and can discriminate which second-site mutations restore activity to p53 cancer mutants.
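
    The number-of-clusters flexibility measure can be mimicked with a simple leader-clustering pass over conformation vectors. This is a sketch under stated assumptions: the clustering variant, the toy `rmsd` on flat coordinate vectors, and the function names are illustrative (the study clusters full MD frames by structural RMSD).

```python
import math

def rmsd(a, b):
    """Root-mean-square deviation between two equal-length coordinate vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def number_of_clusters(frames, cutoff):
    """Leader-style clustering: a frame joins the first cluster whose
    representative is within `cutoff` RMSD, otherwise it seeds a new
    cluster. More clusters => a more flexible conformational ensemble."""
    leaders = []
    for f in frames:
        if not any(rmsd(f, leader) <= cutoff for leader in leaders):
            leaders.append(f)
    return len(leaders)
```

    Tightening the cutoff splits the ensemble into more clusters, which is why the metric must be reported at a fixed RMSD cutoff to be comparable across mutants.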

  10. Ensembles-based predictions of climate change impacts on bioclimatic zones in Northeast Asia

    Science.gov (United States)

    Choi, Y.; Jeon, S. W.; Lim, C. H.; Ryu, J.

    2017-12-01

    Biodiversity is rapidly declining globally, and efforts are needed to mitigate this continually increasing loss of species. Clustering of areas with similar habitats can be used to prioritize protected areas and distribute resources for the conservation of species, selection of representative sample areas for research, and evaluation of impacts due to environmental changes. In this study, Northeast Asia (NEA) was classified into 14 bioclimatic zones using statistical techniques, namely correlation analysis and principal component analysis (PCA), and the iterative self-organizing data analysis technique algorithm (ISODATA). Based on this bioclimatic classification, we predicted shifts of bioclimatic zones due to climate change. The input variables include the current climatic data (1960-1990) and the future climatic data of the HadGEM2-AO model (RCP 4.5 (2050, 2070) and 8.5 (2050, 2070)) provided by WorldClim. Using these data, multi-modelling methods including maximum likelihood classification, random forest, and species distribution modelling were used to project the impact of climate change on the spatial distribution of bioclimatic zones within NEA. The results of the various models were compared and analyzed by overlapping each result. As a result, significant changes in bioclimatic conditions can be expected throughout NEA by the 2050s and 2070s. The zones moved upward overall, and some zones were predicted to disappear. This analysis provides a basis for understanding potential impacts of climate change on biodiversity and ecosystems, and could be used to support decision making on climate change adaptation more effectively.
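
    ISODATA is essentially k-means with additional split/merge heuristics, so the zoning step can be sketched with plain k-means on climate-variable vectors. The seeding rule (first k points) and the `kmeans` signature are illustrative assumptions, not the study's configuration.

```python
def kmeans(points, k, iters=20):
    """Plain k-means (ISODATA adds split/merge rules on top of this):
    assign each climate pixel to its nearest centroid, then recompute
    centroids, and repeat."""
    cents = [list(points[i]) for i in range(k)]  # seed with the first k points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, cents[c])))
            groups[j].append(p)
        for j, g in enumerate(groups):
            if g:  # keep the old centroid if a cluster empties out
                cents[j] = [sum(col) / len(g) for col in zip(*g)]
    return cents
```

    In the bioclimatic setting each point would be a pixel's PCA-reduced climate vector, and the resulting centroids define the zones.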

  11. Biologically based neural network for mobile robot navigation

    Science.gov (United States)

    Torres Muniz, Raul E.

    1999-01-01

    The new tendency in mobile robots is to create non-Cartesian systems based on reactions to their environment. This emerging technology is known as Evolutionary Robotics, which is combined with the Biorobotics field. This new approach brings cost-effective solutions, flexibility, robustness, and dynamism into the design of mobile robots. It also provides fast reactions to the sensory inputs and new interpretations of the environment or surroundings of the mobile robot. The Subsumption Architecture (SA) and the action selection dynamics, developed by Brooks and Maes respectively, have successfully obtained autonomous mobile robots, initiating this new trend of Evolutionary Robotics. Their design keeps the mobile robot control simple. This work presents a biologically inspired modification of these schemes. The hippocampal-CA3-based neural network (HCA3) developed by William Levy is used to implement the SA, while the action selection dynamics emerge from iterations of the levels of competence implemented with the HCA3. This replacement results in a closer biological model than the SA, combining behavior-based intelligence theory with neuroscience. The design is kept simple, and it is implemented in the Khepera miniature mobile robot. The control scheme obtains an autonomous mobile robot that can be used to execute mail delivery and surveillance tasks inside a building floor.

  12. MR-based imaging of neural stem cells

    Energy Technology Data Exchange (ETDEWEB)

    Politi, Letterio S. [San Raffaele Scientific Institute, Neuroradiology Department, Milano (Italy)

    2007-06-15

    The efficacy of therapies based on neural stem cells (NSC) has been demonstrated in preclinical models of several central nervous system (CNS) diseases. Before any potential human application of such promising therapies can be envisaged, there are some important issues that need to be solved. The most relevant one is the requirement for a noninvasive technique capable of monitoring NSC delivery, homing to target sites and trafficking. Knowledge of the location and temporospatial migration of either transplanted or genetically modified NSC is of the utmost importance in analyzing mechanisms of correction and cell distribution. Further, such a technique may represent a crucial step toward clinical application of NSC-based approaches in humans, for both designing successful protocols and monitoring their outcome. Among the diverse imaging approaches available for noninvasive cell tracking, such as nuclear medicine techniques, fluorescence and bioluminescence, magnetic resonance imaging (MRI) has unique advantages. Its high temporospatial resolution, high sensitivity and specificity render MRI one of the most promising imaging modalities available, since it allows dynamic visualization of migration of transplanted cells in animal models and patients during clinically useful time periods. Different cellular and molecular labeling approaches for MRI depiction of NSC are described and discussed in this review, as well as the most relevant issues to be considered in optimizing molecular imaging techniques for clinical application. (orig.)

  13. Symptom based diagnostic system using artificial neural networks

    International Nuclear Information System (INIS)

    Santosh; Vinod, Gopika; Saraf, R.K.

    2003-01-01

    Nuclear power plants experience a number of transients during their operation. In case of such an undesired plant condition, generally known as an initiating event, the operator has to carry out diagnostic and corrective actions. The operator's response may be too late to mitigate or minimize the negative consequences in such scenarios. The objective of this work is to develop an operator support system based on artificial neural networks that will assist the operator in identifying the initiating events at the earliest stages of their development. A symptom-based diagnostic system has been developed to investigate the initiating events. Neural networks are utilized for carrying out the event identification by continuously monitoring process parameters. Whenever an event is detected, the system will display the necessary operator actions along with the initiating event. The system will also show the graphical trend of process parameters that are relevant to the event. This paper describes the features of the software that is used to monitor the reactor. (author)

  14. Variance decomposition-based sensitivity analysis via neural networks

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo

    2003-01-01

    This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques, which, however, can be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises to substitute the Monte Carlo simulation model with a fast, approximated algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project.
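
    The idea of replacing the expensive Monte Carlo model with a fast surrogate inside a variance decomposition can be sketched as follows. A plain Python function stands in for the trained neural network, and the brute-force double loop, the uniform input distributions, and all names are assumptions made for this example.

```python
import random

def first_order_sobol(surrogate, n_outer=500, n_inner=200, seed=0):
    """Brute-force estimate of S_1 = Var(E[Y|X1]) / Var(Y) for a
    two-input model with X1, X2 ~ U(0,1). The expensive Monte Carlo
    model is replaced by a cheap `surrogate`, mirroring the paper's
    neural-network substitution."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        x1 = rng.random()
        ys = [surrogate(x1, rng.random()) for _ in range(n_inner)]
        cond_means.append(sum(ys) / n_inner)   # E[Y | X1 = x1]
        all_y.extend(ys)
    mean_y = sum(all_y) / len(all_y)
    var_y = sum((y - mean_y) ** 2 for y in all_y) / len(all_y)
    mean_c = sum(cond_means) / len(cond_means)
    var_c = sum((c - mean_c) ** 2 for c in cond_means) / len(cond_means)
    return var_c / var_y
```

    For the linear test model y = 2·x1 + x2 the analytical first-order index of x1 is (4/12)/(5/12) = 0.8, so the estimate should land near 0.8; the point of the surrogate is that these 100,000 evaluations cost milliseconds instead of full Monte Carlo runs.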

  15. MR-based imaging of neural stem cells

    International Nuclear Information System (INIS)

    Politi, Letterio S.

    2007-01-01

    The efficacy of therapies based on neural stem cells (NSC) has been demonstrated in preclinical models of several central nervous system (CNS) diseases. Before any potential human application of such promising therapies can be envisaged, there are some important issues that need to be solved. The most relevant one is the requirement for a noninvasive technique capable of monitoring NSC delivery, homing to target sites and trafficking. Knowledge of the location and temporospatial migration of either transplanted or genetically modified NSC is of the utmost importance in analyzing mechanisms of correction and cell distribution. Further, such a technique may represent a crucial step toward clinical application of NSC-based approaches in humans, for both designing successful protocols and monitoring their outcome. Among the diverse imaging approaches available for noninvasive cell tracking, such as nuclear medicine techniques, fluorescence and bioluminescence, magnetic resonance imaging (MRI) has unique advantages. Its high temporospatial resolution, high sensitivity and specificity render MRI one of the most promising imaging modalities available, since it allows dynamic visualization of migration of transplanted cells in animal models and patients during clinically useful time periods. Different cellular and molecular labeling approaches for MRI depiction of NSC are described and discussed in this review, as well as the most relevant issues to be considered in optimizing molecular imaging techniques for clinical application. (orig.)

  16. Comparison of Back propagation neural network and Back propagation neural network Based Particle Swarm intelligence in Diagnostic Breast Cancer

    Directory of Open Access Journals (Sweden)

    Farahnaz SADOUGHI

    2014-03-01

    Full Text Available Breast cancer is the most commonly diagnosed cancer and the most common cause of death in women all over the world. Use of computer technology supporting breast cancer diagnosis is now widespread and pervasive across a broad range of medical areas. Early diagnosis of this disease can greatly enhance the chances of long-term survival of breast cancer victims. Artificial Neural Networks (ANNs), as a primary method, play an important role in the early diagnosis of breast cancer. This paper studies the Levenberg-Marquardt Backpropagation (LMBP) neural network and Levenberg-Marquardt Backpropagation based Particle Swarm Optimization (LMBP-PSO) for the diagnosis of breast cancer. The obtained results show that both the LMBP and LMBP-PSO systems provide high classification efficiency, but LMBP-PSO needs minimum training and testing time. This helps in developing a Medical Decision System (MDS) for breast cancer diagnosis. It can also be used as a secondary observer in clinical decision making.
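
    The PSO half of such a hybrid can be sketched with a standard global-best particle swarm minimizing a test function (in the paper it would tune network weights before LMBP refinement). The inertia and acceleration constants below are conventional textbook choices, not the study's settings.

```python
import random

def pso_minimize(f, dim, iters=200, n_particles=20, seed=1):
    """Global-best particle swarm optimization (illustrative sketch)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    When used to seed network training, `f` would be the classification loss as a function of the flattened weight vector.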

  17. Neural Signature of Value-Based Sensorimotor Prioritization in Humans.

    Science.gov (United States)

    Blangero, Annabelle; Kelly, Simon P

    2017-11-01

    Although value biases in sensorimotor decision making have been widely studied, little is known about the neural processes that set these biases in place beforehand. Here, we report the discovery of a transient, spatially selective neural signal in humans that encodes the relative value of competing decision alternatives and strongly predicts behavioral value biases in decisions made ∼500 ms later. Follow-up manipulations of value differential, reward valence, response modality, sensory features, and time constraints establish that the signal reflects an active, feature- and effector-general preparatory mechanism for value-based prioritization. Copyright © 2017 the authors 0270-6474/17/3710725-13$15.00/0.

  18. Proposed hybrid-classifier ensemble algorithm to map snow cover area

    Science.gov (United States)

    Nijhawan, Rahul; Raman, Balasubramanian; Das, Josodhir

    2018-01-01

    A metaclassification ensemble approach is known to improve the prediction performance of snow-covered area mapping. The methodology adopted here is based on a neural network metaclassifier along with four state-of-the-art machine learning algorithms: support vector machine, artificial neural networks, spectral angle mapper, and K-means clustering, plus a snow index: the normalized difference snow index. An AdaBoost ensemble algorithm based on decision trees for snow-cover mapping is also proposed. According to the available literature, these methods have rarely been used for snow-cover mapping. Employing the above techniques, a study was conducted for the Raktavarn and Chaturangi Bamak glaciers, Uttarakhand, Himalaya, using a multispectral Landsat 7 ETM+ (enhanced thematic mapper) image. The study also compares the results with those obtained from statistical combination methods (majority rule and belief functions) and with the accuracies of the individual classifiers. Accuracy assessment is performed by computing the quantity and allocation disagreement, analyzing statistical measures (accuracy, precision, specificity, AUC, and sensitivity) and receiver operating characteristic curves. A total of 225 combinations of parameters for the individual classifiers were trained and tested on the dataset, and the results were compared with the proposed approach. It was observed that the proposed methodology produced the highest classification accuracy (95.21%), close to that (94.01%) produced by the proposed AdaBoost ensemble algorithm. From these observations, it was concluded that the ensemble of classifiers produced better results than the individual classifiers.
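
    One of the statistical combination methods mentioned, the majority rule, is easy to sketch: each base classifier votes a label per pixel and the most frequent label wins. The label values and the `majority_vote` name are illustrative.

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-classifier label lists (rows: classifiers, columns:
    pixels) by simple majority rule."""
    n_pixels = len(predictions[0])
    fused = []
    for j in range(n_pixels):
        votes = Counter(clf[j] for clf in predictions)
        fused.append(votes.most_common(1)[0][0])
    return fused
```

    A metaclassifier generalizes this by learning how much to trust each base classifier instead of counting every vote equally.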

  19. A multi-scale ensemble-based framework for forecasting compound coastal-riverine flooding: The Hackensack-Passaic watershed and Newark Bay

    Science.gov (United States)

    Saleh, F.; Ramaswamy, V.; Wang, Y.; Georgas, N.; Blumberg, A.; Pullen, J.

    2017-12-01

    Estuarine regions can experience compound impacts from coastal storm surge and riverine flooding. The challenges in forecasting flooding in such areas are multi-faceted due to uncertainties associated with meteorological drivers and interactions between hydrological and coastal processes. The objective of this work is to evaluate how uncertainties from meteorological predictions propagate through an ensemble-based flood prediction framework and translate into uncertainties in simulated inundation extents. A multi-scale framework, consisting of hydrologic, coastal and hydrodynamic models, was used to simulate two extreme flood events at the confluence of the Passaic and Hackensack rivers and Newark Bay. The events were Hurricane Irene (2011), a combination of inland flooding and coastal storm surge, and Hurricane Sandy (2012) where coastal storm surge was the dominant component. The hydrodynamic component of the framework was first forced with measured streamflow and ocean water level data to establish baseline inundation extents with the best available forcing data. The coastal and hydrologic models were then forced with meteorological predictions from 21 ensemble members of the Global Ensemble Forecast System (GEFS) to retrospectively represent potential future conditions up to 96 hours prior to the events. Inundation extents produced by the hydrodynamic model, forced with the 95th percentile of the ensemble-based coastal and hydrologic boundary conditions, were in good agreement with baseline conditions for both events. The USGS reanalysis of Hurricane Sandy inundation extents was encapsulated between the 50th and 95th percentile of the forecasted inundation extents, and that of Hurricane Irene was similar but with caveats associated with data availability and reliability. This work highlights the importance of accounting for meteorological uncertainty to represent a range of possible future inundation extents at high resolution (∼m).

  20. Chaos Control and Synchronization of Cellular Neural Network with Delays Based on OPNCL Control

    International Nuclear Information System (INIS)

    Qian, Tang; Xing-Yuan, Wang

    2010-01-01

    The problem of chaos control and complete synchronization of cellular neural network with delays is studied. Based on the open plus nonlinear closed loop (OPNCL) method, the control scheme and synchronization scheme are designed. Both the schemes can achieve the chaos control and complete synchronization of chaotic neural network respectively, and their validity is further verified by numerical simulation experiments. (general)

  1. Simultaneous surface and depth neural activity recording with graphene transistor-based dual-modality probes.

    Science.gov (United States)

    Du, Mingde; Xu, Xianchen; Yang, Long; Guo, Yichuan; Guan, Shouliang; Shi, Jidong; Wang, Jinfen; Fang, Ying

    2018-05-15

    Subdural surface and penetrating depth probes are widely applied to record neural activities from the cortical surface and intracortical locations of the brain, respectively. Simultaneous surface and depth neural activity recording is essential to understand the linkage between the two modalities. Here, we develop flexible dual-modality neural probes based on graphene transistors. The neural probes exhibit stable electrical performance even under 90° bending because of the excellent mechanical properties of graphene, and thus allow multi-site recording from the subdural surface of rat cortex. In addition, finite element analysis was carried out to investigate the mechanical interactions between probe and cortex tissue during intracortical implantation. Based on the simulation results, a sharp tip angle of π/6 was chosen to facilitate tissue penetration of the neural probes. Accordingly, the graphene transistor-based dual-modality neural probes have been successfully applied for simultaneous surface and depth recording of epileptiform activity of rat brain in vivo. Our results show that graphene transistor-based dual-modality neural probes can serve as a facile and versatile tool to study spatiotemporal patterns of neural activities. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    KAUST Repository

    Naous, Rawan; Alshedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled N.

    2016-01-01

    In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks are currently being extensively studied using resistive memories or memristors

  3. A probabilistic approach of the Flash Flood Early Warning System (FF-EWS) in Catalonia based on radar ensemble generation

    Science.gov (United States)

    Velasco, David; Sempere-Torres, Daniel; Corral, Carles; Llort, Xavier; Velasco, Enrique

    2010-05-01

    probabilistic component to the FF-EWS. As a first step, we have incorporated the uncertainty in rainfall estimates and forecasts based on an ensemble of equiprobable rainfall scenarios. The presented study has focused on a number of rainfall events and the performance of the FF-EWS evaluated in terms of its ability to produce probabilistic hazard warnings for decision-making support.

  4. Prediction of Shanghai Index based on Additive Legendre Neural Network

    Directory of Open Access Journals (Sweden)

    Yang Bin

    2017-01-01

    Full Text Available In this paper, a novel Legendre neural network model is proposed, namely the additive Legendre neural network (ALNN). A new hybrid evolutionary method based on the binary particle swarm optimization (BPSO) algorithm and the firefly algorithm is proposed to optimize the structure and parameters of the ALNN model. The Shanghai stock exchange composite index is used to evaluate the performance of ALNN. Results reveal that ALNN performs better than the LNN model.

  5. Gear Fault Diagnosis Based on BP Neural Network

    Science.gov (United States)

    Huang, Yongsheng; Huang, Ruoshi

    2018-03-01

    Gear transmission is complex and widely used in machinery fields, and its fault modes have nonlinear characteristics. This paper uses a BP neural network trained on four typical gear failure modes and achieves satisfactory results. When tested with test data, the test results agree with the actual results. The results show that the BP neural network can effectively handle the complex states of gear faults in gear fault diagnosis.

  6. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    Science.gov (United States)

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced, using the RBF neural network to represent the transformed system output. Initially a fixed and moderately sized RBF model base is derived based on a rank-revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced that uses the Gauss-Newton algorithm to derive the required Box-Cox transformation based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the matrix block decomposition inverse lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
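
    The maximum-likelihood choice of the Box-Cox parameter can be sketched with a grid search over the profile log-likelihood (the letter uses a Gauss-Newton search instead, and couples it with the RBF fit). Function names and the grid below are assumptions for the example.

```python
import math

def box_cox(y, lam):
    """Box-Cox transform of a positive response; lam = 0 falls back to log."""
    if lam == 0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1) / lam for v in y]

def log_likelihood_lambda(y, lam):
    """Profile log-likelihood (up to an additive constant) of lambda:
    -n/2 * log(var(z)) plus the Jacobian term (lam - 1) * sum(log y)."""
    z = box_cox(y, lam)
    n = len(y)
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / n
    return -0.5 * n * math.log(var) + (lam - 1) * sum(math.log(v) for v in y)

def best_lambda(y, grid):
    return max(grid, key=lambda lam: log_likelihood_lambda(y, lam))
```

    For data that are roughly exponential in an underlying linear quantity, the log transform (lambda = 0) should win the grid search, which is the sanity check below.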

  7. Linear programming based on neural networks for radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Xingen Wu; Limin Luo

    2000-01-01

    In this paper, we propose a neural network model for linear programming that is designed to optimize radiotherapy treatment planning (RTP). This kind of neural network can be easily implemented by using a kind of 'neural' electronic system in order to obtain an optimization solution in real time. We first give an introduction to the RTP problem and construct a non-constraint objective function for the neural network model. We adopt a gradient algorithm to minimize the objective function and design the structure of the neural network for RTP. Compared to traditional linear programming methods, this neural network model can reduce the time needed for convergence, the size of problems (i.e., the number of variables to be searched) and the number of extra slack and surplus variables needed. We obtained a set of optimized beam weights that result in a better dose distribution as compared to that obtained using the simplex algorithm under the same initial condition. The example presented in this paper shows that this model is feasible in three-dimensional RTP. (author)
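
    The gradient minimization of a non-constraint least-squares objective with nonnegative beam weights can be sketched as projected gradient descent. The toy dose matrix, step size, and clipping-at-zero projection are illustrative assumptions in the spirit of the paper's gradient neural dynamics, not its exact formulation.

```python
def optimize_beam_weights(A, d, lr=0.05, steps=2000):
    """Projected gradient descent on ||A w - d||^2 with w >= 0.

    A: dose delivered to each voxel per unit weight of each beam.
    d: prescribed dose per voxel.
    """
    m, n = len(A), len(A[0])
    w = [0.0] * n
    for _ in range(steps):
        # residual r = A w - d
        r = [sum(A[i][j] * w[j] for j in range(n)) - d[i] for i in range(m)]
        for j in range(n):
            g = 2 * sum(A[i][j] * r[i] for i in range(m))  # gradient = 2 A^T r
            w[j] = max(0.0, w[j] - lr * g)                 # project onto w >= 0
    return w
```

    The clipping step is what removes the need for the extra slack and surplus variables that a simplex formulation would introduce.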

  8. Depression in chronic ketamine users: Sex differences and neural bases.

    Science.gov (United States)

    Li, Chiang-Shan R; Zhang, Sheng; Hung, Chia-Chun; Chen, Chun-Ming; Duann, Jeng-Ren; Lin, Ching-Po; Lee, Tony Szu-Hsien

    2017-11-30

    Chronic ketamine use leads to cognitive and affective deficits including depression. Here, we examined sex differences and neural bases of depression in chronic ketamine users. Compared to non-drug-using healthy controls (HC), ketamine-using females but not males showed an increased depression score as assessed by the Center for Epidemiologic Studies Depression Scale (CES-D). We evaluated resting state functional connectivity (rsFC) of the subgenual anterior cingulate cortex (sgACC), a prefrontal structure consistently implicated in the pathogenesis of depression. Compared to HC, ketamine users (KU) did not demonstrate significant changes in sgACC connectivities at a corrected threshold. However, in KU, a linear regression against CES-D score showed less sgACC connectivity to the orbitofrontal cortex (OFC) with increasing depression severity. Examined separately, male and female KU showed higher sgACC connectivity to the bilateral superior temporal gyrus and dorsomedial prefrontal cortex (dmPFC), respectively, in correlation with depression. The linear correlation of sgACC-OFC and sgACC-dmPFC connectivity with depression was significantly different in slope between KU and HC. These findings highlighted changes in rsFC of the sgACC as associated with depression, and sex differences in these changes, in chronic ketamine users. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Artificial Neural Network-Based System for PET Volume Segmentation

    Directory of Open Access Journals (Sweden)

    Mhd Saeed Sharif

    2010-01-01

    Full Text Available Tumour detection, classification, and quantification in positron emission tomography (PET) imaging at an early stage of disease are important issues for clinical diagnosis, assessment of response to treatment, and radiotherapy planning. Many techniques have been proposed for segmenting medical imaging data; however, some of the approaches have poor performance and large inaccuracy, and require substantial computation time for analysing large medical volumes. Artificial intelligence (AI) approaches can provide improved accuracy and save a considerable amount of time. Artificial neural networks (ANNs), as one of the best AI techniques, have the capability to precisely classify and quantify lesions and model the clinical evaluation for a specific problem. This paper presents a novel application of ANNs in the wavelet domain for PET volume segmentation. ANN performance evaluation using different training algorithms in both the spatial and wavelet domains, with different numbers of neurons in the hidden layer, is also presented. The best number of neurons in the hidden layer is determined from the experimental results, which also establish the Levenberg-Marquardt backpropagation training algorithm as the best training approach for the proposed application. The proposed intelligent system's results are compared with those obtained using conventional techniques, including thresholding and clustering-based approaches. Experimental and Monte Carlo simulated PET phantom data sets and clinical PET volumes of non-small cell lung cancer patients were utilised to validate the proposed algorithm, which has demonstrated promising results.

  10. Convolution neural-network-based detection of lung structures

    Science.gov (United States)

    Hasegawa, Akira; Lo, Shih-Chung B.; Freedman, Matthew T.; Mun, Seong K.

    1994-05-01

    Chest radiography is one of the most fundamental and widely used techniques in diagnostic imaging. Nowadays, with the advent of digital radiology, digital medical image processing techniques for digital chest radiographs have attracted considerable attention, and several studies on computer-aided diagnosis (CADx) as well as on conventional image processing techniques for chest radiographs have been reported. In the automatic diagnostic process for chest radiographs, it is important to outline the areas of the lungs, the heart, and the diaphragm. This is because the original chest radiograph is composed of important anatomic structures and, without knowing the exact positions of the organs, automatic diagnosis may result in unexpected detections. The automatic extraction of an anatomical structure from digital chest radiographs can be a useful tool for (1) the evaluation of heart size, (2) automatic detection of interstitial lung diseases, (3) automatic detection of lung nodules, and (4) data compression, etc. Based on the clearly defined boundaries of the heart area, rib spaces, rib positions, and rib cage extracted, one should be able to use this information to facilitate the tasks of CADx on chest radiographs. In this paper, we present an automatic scheme for the detection of the lung field from chest radiographs by using a shift-invariant convolution neural network. A novel algorithm for smoothing the boundaries of the lungs is also presented.

  11. Neural Online Filtering Based on Preprocessed Calorimeter Data

    CERN Document Server

    Torres, R C; The ATLAS collaboration; Simas Filho, E F; De Seixas, J M

    2009-01-01

    Among the LHC detectors, ATLAS copes with the high event rate by means of a three-level online triggering system. The first-level (LVL1) trigger output rate will be ~75 kHz. This level will mark the regions where relevant events were found. The second level will validate the LVL1 decision by looking only at the approved data using full granularity. At the level-two output, the event rate will be reduced to ~2 kHz. Finally, the third level will look at full event information, and events are expected to be approved at a rate of ~200 Hz and stored in persistent media for further offline analysis. Many interesting events decay into electrons, which have to be identified against the huge background noise (jets). This work proposes a highly efficient LVL2 electron / jet discrimination system based on neural networks fed with preprocessed calorimeter information. The feature extraction part of the proposed system performs a ring-structured description of the data. A set of concentric rings centered at the highest energy cell is generated ...
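    The ring-structured feature extraction can be sketched as follows. This is a simplified stand-in that sums cell energies over concentric square rings (Chebyshev distance) around the hottest cell of a single 2D grid, whereas the actual ATLAS ring sums are built per calorimeter layer:

```python
import numpy as np

def ring_features(cells, n_rings=3):
    """Sum cell energies in concentric square rings (Chebyshev distance)
    centred on the highest-energy cell of a 2D calorimeter grid."""
    cells = np.asarray(cells, dtype=float)
    r0, c0 = np.unravel_index(np.argmax(cells), cells.shape)
    rows, cols = np.indices(cells.shape)
    dist = np.maximum(np.abs(rows - r0), np.abs(cols - c0))
    return np.array([cells[dist == k].sum() for k in range(n_rings)])

grid = np.array([[0., 1., 0.],
                 [1., 5., 1.],
                 [0., 1., 0.]])
feats = ring_features(grid, n_rings=2)   # ring 0 is the hottest cell itself
```

    The resulting low-dimensional ring-sum vector is what would be fed to the neural discriminator, in place of the full cell-level granularity.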

  12. Noisy Ocular Recognition Based on Three Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Min Beom Lee

    2017-12-01

    Full Text Available In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When the images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user’s eyes looking somewhere else, not into the front of the camera, specular reflection (SR and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs. Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II training dataset (selected from the university of Beira iris (UBIRIS.v2 database, mobile iris challenge evaluation (MICHE database, and institute of automation of Chinese academy of sciences (CASIA-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.

  13. Supervised Learning Based on Temporal Coding in Spiking Neural Networks.

    Science.gov (United States)

    Mostafa, Hesham

    2017-08-01

    Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.
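    For intuition on the "piecewise linear after a transformation of variables" claim: in one formulation consistent with the paper's setup as we understand it (non-leaky integrate-and-fire neurons with exponentially decaying synaptic current and unit threshold), transforming spike times as z = exp(t) makes the output spike a linear function of the causal input spikes. The sketch below assumes all inputs are causal and that the weights sum to more than 1:

```python
import numpy as np

def output_spike_z(w, z):
    """First output spike, in transformed time z = exp(t), of a non-leaky
    integrate-and-fire neuron with exponential synaptic kernels.
    Assumes every input spike is causal and sum(w) > 1 (unit threshold)."""
    w, z = np.asarray(w, float), np.asarray(z, float)
    assert w.sum() > 1.0, "neuron never reaches threshold otherwise"
    return float(np.dot(w, z) / (w.sum() - 1.0))

z_in = np.exp(np.array([0.0, 0.4]))      # input spikes at t = 0.0 and t = 0.4
z_out = output_spike_z([2.0, 2.0], z_in)
t_out = np.log(z_out)                    # output spike time, after the inputs
```

    Because z_out is linear in the z_i for a fixed causal set, gradients flow through it exactly as in a conventional linear layer, which is what lets standard ANN training carry over.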

  14. Route Selection Problem Based on Hopfield Neural Network

    Directory of Open Access Journals (Sweden)

    N. Kojic

    2013-12-01

    Full Text Available Transport network is a key factor of economic, social and every other form of development in the region and the state itself. One of the main conditions for transport network development is the construction of new routes. Often, the construction of regional roads is dominant, since the design and construction in urban areas is quite limited. The process of analysis and planning the new roads is a complex process that depends on many factors (the physical characteristics of the terrain, the economic situation, political decisions, environmental impact, etc. and can take several months. These factors directly or indirectly affect the final solution, and in combination with project limitations and requirements, sometimes can be mutually opposed. In this paper, we present one software solution that aims to find Pareto optimal path for preliminary design of the new roadway. The proposed algorithm is based on many different factors (physical and social with the ability of their increase. This solution is implemented using Hopfield's neural network, as a kind of artificial intelligence, which has shown very good results for solving complex optimization problems.
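    Hopfield-style optimisation of the kind used above rests on the fact that asynchronous updates never increase the network energy when the weight matrix is symmetric with zero diagonal, so the state settles into a local minimum encoding a candidate solution. A minimal sketch with toy weights (not the paper's route-selection energy function):

```python
import numpy as np

def hopfield_energy(W, b, s):
    """Energy of a binary (+/-1) Hopfield state s."""
    return -0.5 * s @ W @ s - b @ s

def run_hopfield(W, b, s, steps=100, seed=0):
    """Asynchronous updates: each step aligns one unit with its local field.
    With symmetric W and zero diagonal, energy is non-increasing."""
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1.0 if W[i] @ s + b[i] >= 0 else -1.0
    return s

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
W = (A + A.T) / 2.0          # symmetric toy weights
np.fill_diagonal(W, 0.0)
b = rng.normal(size=4)
s0 = np.where(rng.normal(size=4) > 0, 1.0, -1.0)
e0 = hopfield_energy(W, b, s0)
s1 = run_hopfield(W, b, s0)
e1 = hopfield_energy(W, b, s1)
```

    For route selection, W and b would instead encode the path constraints and the weighted cost factors, so that low-energy states correspond to feasible, low-cost routes.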

  15. Convolutional neural network features based change detection in satellite images

    Science.gov (United States)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high resolution remote sensing (HRRS) satellite images, considerable research effort has been devoted to the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While hand-designed features have proven difficult to craft so that they effectively capture mid- and high-level representations, recent developments in machine learning (deep learning) sidestep this problem by learning hierarchical representations in an unsupervised manner directly from data without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel deep convolutional neural network (CNN) features based HR satellite image change detection method is proposed. The main guideline is to produce a change detection map directly from two images using a pretrained CNN. This method avoids the limited performance of hand-crafted features. Firstly, CNN features are extracted through different convolutional layers. Then, a concatenation step is evaluated after a normalization step, resulting in a unique higher dimensional feature map. Finally, a change map is computed using pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
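    The final steps of the pipeline described above — normalise the per-pixel feature vectors, then take a pixel-wise Euclidean distance — can be sketched as follows, with random arrays standing in for the pretrained-CNN feature maps:

```python
import numpy as np

def change_map(feat_a, feat_b, eps=1e-8):
    """Pixel-wise change map from two (H, W, C) deep-feature tensors:
    L2-normalise each pixel's feature vector, then take the Euclidean
    distance between the normalised vectors."""
    def norm(f):
        return f / (np.linalg.norm(f, axis=-1, keepdims=True) + eps)
    return np.linalg.norm(norm(feat_a) - norm(feat_b), axis=-1)

rng = np.random.default_rng(0)
f1 = rng.random((8, 8, 16))      # stand-in for CNN features of image 1
f2 = f1.copy()
f2[2:4, 2:4] += 5.0              # simulate a changed region in image 2
cmap = change_map(f1, f2)        # large values flag changed pixels
```

    Thresholding `cmap` (e.g. with Otsu's method) would then yield the binary change mask.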

  16. Noisy Ocular Recognition Based on Three Convolutional Neural Networks.

    Science.gov (United States)

    Lee, Min Beom; Hong, Hyung Gil; Park, Kang Ryoung

    2017-12-17

    In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When the images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user's eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), mobile iris challenge evaluation (MICHE) database, and institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.

  17. Traffic sign recognition based on deep convolutional neural network

    Science.gov (United States)

    Yin, Shi-hao; Deng, Ji-cai; Zhang, Da-wei; Du, Jing-yuan

    2017-11-01

    Traffic sign recognition (TSR) is an important component of automated driving systems. It is a rather challenging task to design a high-performance classifier for the TSR system. In this paper, we propose a new method for the TSR system based on a deep convolutional neural network. In order to enhance the expressive power of the network, a novel structure (dubbed block-layer below) which combines network-in-network and residual connections is designed. Our network has 10 layers with parameters (with a block-layer counted as a single layer): the first seven are alternating convolutional layers and block-layers, and the remaining three are fully-connected layers. We train our TSR network on the German traffic sign recognition benchmark (GTSRB) dataset. To reduce overfitting, we perform data augmentation on the training images and employ a regularization method named "dropout". For the activation function we adopt scaled exponential linear units (SELUs), which can induce self-normalizing properties. To speed up training, we use an efficient GPU to accelerate the convolutional operations. On the GTSRB test dataset, we achieve an accuracy of 99.67%, exceeding state-of-the-art results.
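    The SELU activation mentioned above has a fixed closed form; a minimal sketch (the two constants come from the self-normalising networks literature):

```python
import numpy as np

ALPHA = 1.6732632423543772   # fixed SELU constants (Klambauer et al.)
SCALE = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit: scale * x for x > 0,
    scale * alpha * (exp(x) - 1) otherwise."""
    x = np.asarray(x, dtype=float)
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

out = selu(np.array([-1.0, 0.0, 2.0]))
```

    With these particular constants, activations propagated through properly initialised layers tend toward zero mean and unit variance, which is the self-normalising property the abstract refers to.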

  18. Neural network based adaptive control for nonlinear dynamic regimes

    Science.gov (United States)

    Shin, Yoonghyun

    Adaptive control designs using neural networks (NNs) based on dynamic inversion are investigated for aerospace vehicles which are operated at highly nonlinear dynamic regimes. NNs play a key role as the principal element of adaptation to approximately cancel the effect of inversion error, which subsequently improves robustness to parametric uncertainty and unmodeled dynamics in nonlinear regimes. An adaptive control scheme previously named 'composite model reference adaptive control' is further developed so that it can be applied to multi-input multi-output output feedback dynamic inversion. It can have adaptive elements in both the dynamic compensator (linear controller) part and/or in the conventional adaptive controller part, also utilizing state estimation information for NN adaptation. This methodology has more flexibility and thus hopefully greater potential than conventional adaptive designs for adaptive flight control in highly nonlinear flight regimes. The stability of the control system is proved through Lyapunov theorems, and validated with simulations. The control designs in this thesis also include the use of 'pseudo-control hedging' techniques which are introduced to prevent the NNs from attempting to adapt to various actuation nonlinearities such as actuator position and rate saturations. Control allocation is introduced for the case of redundant control effectors including thrust vectoring nozzles. A thorough comparison study of conventional and NN-based adaptive designs for a system under a limit cycle, wing-rock, is included in this research, and the NN-based adaptive control designs demonstrate their performances for two highly maneuverable aerial vehicles, NASA F-15 ACTIVE and FQM-117B unmanned aerial vehicle (UAV), operated under various nonlinearities and uncertainties.

  19. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    Science.gov (United States)

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable, converging to the noise-constrained estimate. Because the noise-constrained estimate has a robust performance against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, having a low-dimensional model feature, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm can produce good performance with fast computation and noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
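    The Kalman filtering stage can be sketched as follows for a signal modelled as an AR(p) process observed in additive noise. Here the AR coefficients and noise variances are assumed known, whereas the paper estimates them with the proposed recurrent network:

```python
import numpy as np

def kalman_ar_denoise(y, a, q, r):
    """Kalman filter for a noisy AR(p) signal:
    x_t = a1*x_{t-1} + ... + ap*x_{t-p} + w_t,  y_t = x_t + v_t."""
    p = len(a)
    F = np.zeros((p, p)); F[0, :] = a; F[1:, :-1] = np.eye(p - 1)
    H = np.zeros(p); H[0] = 1.0
    Q = np.zeros((p, p)); Q[0, 0] = q
    x, P = np.zeros(p), np.eye(p)
    out = []
    for yt in y:
        x, P = F @ x, F @ P @ F.T + Q        # predict
        S = H @ P @ H + r
        K = P @ H / S                        # Kalman gain
        x = x + K * (yt - H @ x)             # update with observation
        P = P - np.outer(K, H @ P)
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(0)
n, a = 500, [1.6, -0.8]                      # stable toy AR(2) coefficients
x = np.zeros(n)
for t in range(2, n):
    x[t] = a[0] * x[t-1] + a[1] * x[t-2] + rng.normal(scale=0.1)
y = x + rng.normal(scale=0.5, size=n)        # noisy observations
xhat = kalman_ar_denoise(y, a, q=0.01, r=0.25)
```

    In the paper's setting, the filter's quality hinges on how accurately the AR parameters are estimated under non-Gaussian noise, which is where the noise-constrained recurrent network comes in.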

  20. Modeling task-specific neuronal ensembles improves decoding of grasp

    Science.gov (United States)

    Smith, Ryan J.; Soares, Alcimar B.; Rouse, Adam G.; Schieber, Marc H.; Thakor, Nitish V.

    2018-06-01

    Objective. Dexterous movement involves the activation and coordination of networks of neuronal populations across multiple cortical regions. Attempts to model firing of individual neurons commonly treat the firing rate as directly modulating with motor behavior. However, motor behavior may additionally be associated with modulations in the activity and functional connectivity of neurons in a broader ensemble. Accounting for variations in neural ensemble connectivity may provide additional information about the behavior being performed. Approach. In this study, we examined neural ensemble activity in primary motor cortex (M1) and premotor cortex (PM) of two male rhesus monkeys during performance of a center-out reach, grasp and manipulate task. We constructed point process encoding models of neuronal firing that incorporated task-specific variations in the baseline firing rate as well as variations in functional connectivity with the neural ensemble. Models were evaluated both in terms of their encoding capabilities and their ability to properly classify the grasp being performed. Main results. Task-specific ensemble models correctly predicted the performed grasp with over 95% accuracy and were shown to outperform models of neuronal activity that assume only a variable baseline firing rate. Task-specific ensemble models exhibited superior decoding performance in 82% of units in both monkeys (p  <  0.01). Inclusion of ensemble activity also broadly improved the ability of models to describe observed spiking. Encoding performance of task-specific ensemble models, measured by spike timing predictability, improved upon baseline models in 62% of units. Significance. These results suggest that additional discriminative information about motor behavior found in the variations in functional connectivity of neuronal ensembles located in motor-related cortical regions is relevant to decode complex tasks such as grasping objects, and may serve the basis for more

  1. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied: a coastal release (SF6) and an inland release (Freon), which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation, and it enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
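    The EnKF analysis step that underlies the data-assimilation technique can be sketched as a generic stochastic-EnKF update on toy data (this is not SRNL's atmospheric implementation; state, observation operator and covariances below are illustrative):

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; y: observation vector;
    H: linear observation operator; R: observation-error covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)         # obs-space anomalies
    Pyy = HA @ HA.T / (n_ens - 1) + R
    Pxy = A @ HA.T / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - HX)                          # perturbed-obs update

rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, scale=1.0, size=(3, 50))     # 50-member forecast
H = np.eye(1, 3)                                     # observe first variable
R = np.array([[0.05]])
y = np.array([0.0])                                  # obs far from the prior
Xa = enkf_update(X, y, H, R, rng)
```

    After the update, the ensemble mean of the observed variable is pulled toward the observation and the ensemble spread shrinks, which is the mechanism that corrected the coastal-plume transport in the study.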

  2. Photosensitive-polyimide based method for fabricating various neural electrode architectures

    Directory of Open Access Journals (Sweden)

    Yasuhiro X Kato

    2012-06-01

    Full Text Available An extensive photosensitive polyimide (PSPI-based method for designing and fabricating various neural electrode architectures was developed. The method aims to broaden the design flexibility and expand the fabrication capability for neural electrodes to improve the quality of recorded signals and integrate other functions. After characterizing PSPI’s properties for micromachining processes, we successfully designed and fabricated various neural electrodes even on a non-flat substrate using only one PSPI as an insulation material and without the time-consuming dry etching processes. The fabricated neural electrodes were an electrocorticogram electrode, a mesh intracortical electrode with a unique lattice-like mesh structure to fixate neural tissue, and a guide cannula electrode with recording microelectrodes placed on the curved surface of a guide cannula as a microdialysis probe. In vivo neural recordings using anesthetized rats demonstrated that these electrodes can be used to record neural activities repeatedly without any breakage and mechanical failures, which potentially promises stable recordings for long periods of time. These successes make us believe that this PSPI-based fabrication is a powerful method, permitting flexible design and easy optimization of electrode architectures for a variety of electrophysiological experimental research with improved neural recording performance.

  3. Rainfall downscaling of weekly ensemble forecasts using self-organising maps

    Directory of Open Access Journals (Sweden)

    Masamichi Ohba

    2016-03-01

    Full Text Available This study presents an application of self-organising maps (SOMs) to downscaling medium-range ensemble forecasts and probabilistic prediction of local precipitation in Japan. SOMs were applied to analyse and connect the relationship between atmospheric patterns over Japan and local high-resolution precipitation data. Multiple SOMs were simultaneously employed on four variables derived from the JRA-55 reanalysis over the area of study (south-western Japan), and a two-dimensional lattice of weather patterns (WPs) was obtained. Weekly ensemble forecasts can be downscaled to local precipitation using the obtained multiple SOMs. The downscaled precipitation is derived by the five SOM lattices based on the WPs of the global model ensemble forecasts for a particular day in 2009–2011. Because this method effectively handles the stochastic uncertainties from the large number of ensemble members, a probabilistic local precipitation is easily and quickly obtained from the ensemble forecasts. This downscaling of ensemble forecasts provides results better than those from a 20-km global spectral model (i.e. capturing the relatively detailed precipitation distribution over the region). To capture the effect of the detailed pattern differences in each SOM node, a statistical model is additionally constructed for each SOM node. The predictive skill of the ensemble forecasts is significantly improved under the neural network-statistics hybrid-downscaling technique, which then yields a much better skill score than the traditional method. It is expected that the results of this study will provide better guidance to the user community and contribute to the future development of dam-management models.
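    A minimal SOM of the kind used to build the weather-pattern lattice can be sketched as follows (random toy data; the study fits multiple SOMs to four JRA-55 variables, and the grid size, learning rate and neighbourhood schedule here are hypothetical choices):

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM: fit a 2-D lattice of codebook vectors to the data.
    Each sample pulls its best-matching unit and that unit's lattice
    neighbours toward it, with shrinking learning rate and neighbourhood."""
    rng = np.random.default_rng(seed)
    h, w = grid
    n, d = data.shape
    W = rng.random((h, w, d))
    gy, gx = np.indices(grid)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = sigma0 * (1 - e / epochs) + 0.5
        for x in data[rng.permutation(n)]:
            dist = np.linalg.norm(W - x, axis=-1)
            by, bx = np.unravel_index(np.argmin(dist), grid)   # best unit
            nb = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            W += lr * nb[..., None] * (x - W)
    return W

rng = np.random.default_rng(1)
data = rng.random((200, 4))        # stand-in for 4 atmospheric variables
W = train_som(data)
# quantisation error: mean distance from each sample to its best unit
qe = np.mean([np.linalg.norm(W - x, axis=-1).min() for x in data])
```

    Each lattice node then acts as one weather pattern: a forecast day is downscaled by finding its best-matching node and looking up the precipitation statistics attached to that node.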

  4. Constraining a compositional flow model with flow-chemical data using an ensemble-based Kalman filter

    KAUST Repository

    Gharamti, M. E.; Kadoura, A.; Valstar, J.; Sun, S.; Hoteit, Ibrahim

    2014-01-01

    Isothermal compositional flow models require coupling transient compressible flows and advective transport systems of various chemical species in subsurface porous media. Building such numerical models is quite challenging and may be subject to many sources of uncertainties because of possible incomplete representation of some geological parameters that characterize the system's processes. Advanced data assimilation methods, such as the ensemble Kalman filter (EnKF), can be used to calibrate these models by incorporating available data. In this work, we consider the problem of estimating reservoir permeability using information about phase pressure as well as the chemical properties of fluid components. We carry out state-parameter estimation experiments using joint and dual updating schemes in the context of the EnKF with a two-dimensional single-phase compositional flow model (CFM). Quantitative and statistical analyses are performed to evaluate and compare the performance of the assimilation schemes. Our results indicate that including chemical composition data significantly enhances the accuracy of the permeability estimates. In addition, composition data provide more information to estimate system states and parameters than do standard pressure data. The dual state-parameter estimation scheme provides about 10% more accurate permeability estimates on average than the joint scheme when implemented with the same ensemble members, at the cost of twice as many forward model integrations. At similar computational cost, the dual approach becomes only beneficial after using large enough ensembles.

  5. Constraining a compositional flow model with flow-chemical data using an ensemble-based Kalman filter

    KAUST Repository

    Gharamti, M. E.

    2014-03-01

    Isothermal compositional flow models require coupling transient compressible flows and advective transport systems of various chemical species in subsurface porous media. Building such numerical models is quite challenging and may be subject to many sources of uncertainties because of possible incomplete representation of some geological parameters that characterize the system's processes. Advanced data assimilation methods, such as the ensemble Kalman filter (EnKF), can be used to calibrate these models by incorporating available data. In this work, we consider the problem of estimating reservoir permeability using information about phase pressure as well as the chemical properties of fluid components. We carry out state-parameter estimation experiments using joint and dual updating schemes in the context of the EnKF with a two-dimensional single-phase compositional flow model (CFM). Quantitative and statistical analyses are performed to evaluate and compare the performance of the assimilation schemes. Our results indicate that including chemical composition data significantly enhances the accuracy of the permeability estimates. In addition, composition data provide more information to estimate system states and parameters than do standard pressure data. The dual state-parameter estimation scheme provides about 10% more accurate permeability estimates on average than the joint scheme when implemented with the same ensemble members, at the cost of twice as many forward model integrations. At similar computational cost, the dual approach becomes only beneficial after using large enough ensembles.

  6. Neural network based method for conversion of solar radiation data

    International Nuclear Information System (INIS)

    Celik, Ali N.; Muneer, Tariq

    2013-01-01

    Highlights: ► Generalized regression neural network is used to predict the solar radiation on tilted surfaces. ► The above network, amongst many such as the multilayer perceptron, is the most successful one. ► The present neural network returns a relative mean absolute error value of 9.1%. ► The present model leads to a mean absolute error of estimate of 14.9 Wh/m². - Abstract: The receiving ends of solar energy conversion systems that generate heat or electricity from radiation are usually tilted at an optimum angle to increase the solar radiation incident on the surface. Solar irradiation data measured on horizontal surfaces are readily available for many locations where such solar energy conversion systems are installed. Various equations have been developed to convert solar irradiation data measured on a horizontal surface to that on a tilted one. These equations constitute the conventional approach. In this article, an alternative approach, a generalized regression type of neural network, is used to predict the solar irradiation on tilted surfaces, using the minimum number of variables involved in the physical process, namely the global solar irradiation on a horizontal surface, and the declination and hour angles. Artificial neural networks have been successfully used in recent years for optimization, prediction and modeling in energy systems as an alternative to conventional modeling approaches. To show the merit of the presently developed neural network, the solar irradiation data predicted by the novel model were compared to those from the conventional approach (isotropic and anisotropic models), with strict reference to the irradiation data measured at the same location. The present neural network model was found to provide solar irradiation values closer to the measured data than the conventional approach, with a mean absolute error value of 14.9 Wh/m². The other statistical values of coefficient of determination and relative mean absolute error also indicate the
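    A generalized regression neural network is essentially a kernel-weighted average of the training targets (the Nadaraya-Watson form). A sketch with synthetic inputs standing in for the horizontal irradiation, declination and hour-angle variables (the target function and the bandwidth sigma below are hypothetical):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """Generalized regression neural network: each prediction is a
    Gaussian-kernel-weighted average of the training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return (K @ y_train) / K.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.random((300, 2))                 # scaled stand-ins for the inputs
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2   # synthetic smooth target
Xq = np.array([[0.5, 0.5]])
pred = grnn_predict(X, y, Xq, sigma=0.08)
```

    Unlike a multilayer perceptron, a GRNN has no iterative weight training; the only free parameter is the kernel bandwidth, which is one reason it is attractive for smooth physical mappings such as horizontal-to-tilted irradiation conversion.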

  7. Daily Peak Load Forecasting Based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-01-01

    Full Text Available Daily peak load forecasting is an important part of power load forecasting. The accuracy of its prediction has great influence on the formulation of the power generation plan, power grid dispatching, power grid operation and the power supply reliability of the power system. Therefore, it is of great significance to construct a suitable model to realize accurate prediction of the daily peak load. A novel daily peak load forecasting model, CEEMDAN-MGWO-SVM (Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), is proposed in this paper. Firstly, the model uses the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm to decompose the daily peak load sequence into multiple sub-sequences. Then, the model of modified grey wolf optimization and support vector machine (MGWO-SVM) is adopted to forecast the sub-sequences. Finally, the forecasting sequence is reconstructed and the forecasting result is obtained. Using CEEMDAN can realize noise reduction for the non-stationary daily peak load sequence, which makes the daily peak load sequence more regular. The model adopts the grey wolf optimization algorithm improved by introducing the population dynamic evolution operator and the nonlinear convergence factor to enhance the global search ability and avoid falling into the local optimum, which can better optimize the parameters of the SVM algorithm for improving the forecasting accuracy of the daily peak load. In this paper, three cases are used to test the forecasting accuracy of the CEEMDAN-MGWO-SVM model.
We choose the models EEMD-MGWO-SVM (Ensemble Empirical Mode Decomposition and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), MGWO-SVM (Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), GWO-SVM (Support Vector Machine Optimized by Grey Wolf Optimization Algorithm), SVM (Support Vector

  8. Classification of urine sediment based on convolution neural network

    Science.gov (United States)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolution neural network framework, this paper breaks the constraints of the original convolution neural network framework, which requires large numbers of training samples and samples of the same size. By moving and cropping the input images, sub-graphs of the same size are generated. The generated sub-graphs are then subjected to dropout, increasing the diversity of the samples and preventing overfitting. Proper subsets are randomly selected from the sub-graph set such that all subsets contain the same number of elements but no two subsets are identical. The proper subsets are used as input layers for the convolution neural network. Through the convolution layers, pooling, the fully connected layer and the output layer, the classification loss rates of the test set and training set are obtained. In the classification experiment on red blood cells, white blood cells and calcium oxalate crystals, a classification accuracy of 97% or more was achieved.
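    The move-and-crop step that turns variable-size inputs into same-size sub-graphs can be sketched as a sliding window (the window size and stride below are hypothetical choices):

```python
import numpy as np

def subgraphs(img, size, stride):
    """Slide a window over the image and collect fixed-size sub-graphs,
    so inputs of different original sizes all yield same-size CNN inputs."""
    h, w = img.shape[:2]
    crops = [img[r:r + size, c:c + size]
             for r in range(0, h - size + 1, stride)
             for c in range(0, w - size + 1, stride)]
    return np.stack(crops)

img = np.arange(36).reshape(6, 6)        # toy stand-in for a sediment image
crops = subgraphs(img, size=4, stride=2) # four overlapping 4x4 sub-graphs
```

    Randomly drawing equal-size, mutually distinct subsets from `crops` (e.g. with `numpy.random.Generator.choice` without replacement) then gives the diversified training batches the abstract describes.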

  9. A Clustering-Oriented Closeness Measure Based on Neighborhood Chain and Its Application in the Clustering Ensemble Framework Based on the Fusion of Different Closeness Measures

    Directory of Open Access Journals (Sweden)

    Shaoyi Liang

    2017-09-01

    Full Text Available Closeness measures are crucial to clustering methods. In most traditional clustering methods, the closeness between data points or clusters is measured by geometric distance alone. These metrics quantify closeness based only on the concerned data points’ positions in the feature space, and they can cause problems when dealing with clustering tasks that have arbitrary cluster shapes and different cluster densities. In this paper, we first propose a novel Closeness Measure between data points based on the Neighborhood Chain (CMNC). Instead of using geometric distances alone, CMNC measures the closeness between data points by quantifying the difficulty for one data point to reach another through a chain of neighbors. Furthermore, based on CMNC, we also propose a clustering ensemble framework that combines CMNC and geometric-distance-based closeness measures in order to utilize the advantages of both. In this framework, the “bad data points” that are hard to cluster correctly are identified; then different closeness measures are applied to different types of data points to obtain unified clustering results. With the fusion of different closeness measures, the framework achieves not only better clustering results in complicated clustering tasks, but also higher efficiency.
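    A simplified flavour of chain-of-neighbours closeness (not the exact CMNC definition, whose reachability criterion is more elaborate) counts the hops needed to reach one point from another through a k-nearest-neighbour graph, so points separated by a low-density gap come out far apart even when their geometric distance is moderate:

```python
import numpy as np
from collections import deque

def knn_hop_distance(X, i, j, k=3):
    """Hops needed to reach point j from point i, moving only along
    (symmetrised) k-nearest-neighbour links; inf if unreachable."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    nn = np.argsort(D, axis=1)[:, 1:k + 1]       # k nearest, excluding self
    adj = [set(row) for row in nn]
    for a in range(n):                           # treat links as undirected
        for b in nn[a]:
            adj[b].add(a)
    seen, q = {i}, deque([(i, 0)])
    while q:
        u, d = q.popleft()
        if u == j:
            return d
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append((v, d + 1))
    return np.inf

# two parallel chains of points, separated by a large empty gap
line = np.array([[t, 0.0] for t in np.linspace(0, 1, 6)])
X = np.vstack([line, line + np.array([0.0, 5.0])])
within = knn_hop_distance(X, 0, 5)   # opposite ends of the same chain
across = knn_hop_distance(X, 0, 6)   # nearest point of the other chain
```

    Here the two chain ends are only a few hops apart despite a geometric distance of 1.0, while the other cluster is unreachable through neighbours, which is the behaviour that helps with arbitrary cluster shapes and varying densities.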

  10. Planning music-based amelioration and training in infancy and childhood based on neural evidence.

    Science.gov (United States)

    Huotilainen, Minna; Tervaniemi, Mari

    2018-05-04

    Music-based amelioration and training of the developing auditory system has a long tradition, and recent neuroscientific evidence supports using music in this manner. Here, we present the available evidence showing that various music-related activities result in positive changes in brain structure and function, becoming helpful for auditory cognitive processes in everyday life situations for individuals with typical neural development and especially for individuals with hearing, learning, attention, or other deficits that may compromise auditory processing. We also compare different types of music-based training and show how their effects have been investigated with neural methods. Finally, we take a critical position on the multitude of error sources found in amelioration and training studies and on publication bias in the field. We discuss some future improvements of these issues in the field of music-based training and their potential results at the neural and behavioral levels in infants and children for the advancement of the field and for a more complete understanding of the possibilities and significance of the training. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.

  11. Template-based procedures for neural network interpretation.

    Science.gov (United States)

    Alexander, J. A.; Mozer, M. C.

    1999-04-01

    Although neural networks often achieve impressive learning and generalization performance, their internal workings are typically all but impossible to decipher. This characteristic of the networks, their opacity, is one of the disadvantages of connectionism compared to more traditional, rule-oriented approaches to artificial intelligence. Without a thorough understanding of the network behavior, confidence in a system's results is lowered, and the transfer of learned knowledge to other processing systems - including humans - is precluded. Methods that address the opacity problem by casting network weights in symbolic terms are commonly referred to as rule extraction techniques. This work describes a principled approach to symbolic rule extraction from standard multilayer feedforward networks based on the notion of weight templates, parameterized regions of weight space corresponding to specific symbolic expressions. With an appropriate choice of representation, we show how template parameters may be efficiently identified and instantiated to yield the optimal match to the actual weights of a unit. Depending on the requirements of the application domain, the approach can accommodate n-ary disjunctions and conjunctions with O(k) complexity, simple n-of-m expressions with O(k^2) complexity, or more general classes of recursive n-of-m expressions with O(k^(L+2)) complexity, where k is the number of inputs to a unit and L the recursion level of the expression class. Compared to other approaches in the literature, our method of rule extraction offers benefits in simplicity, computational performance, and overall flexibility. Simulation results on a variety of problems demonstrate the application of our procedures as well as the strengths and the weaknesses of our general approach.
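    The template idea can be sketched as follows: an n-of-m template forces the m most significant weights to share one magnitude, and the quality of the symbolic description is the distance between the template and the actual weights. This is a simplified illustration, not the authors' full parameterized procedure:

```python
def fit_n_of_m_template(weights, m):
    """Instantiate an n-of-m weight template: the m largest-magnitude
    weights are replaced by a single shared magnitude (their mean,
    keeping each weight's sign), the rest by zero. Returns the
    template and its squared distance to the actual weights."""
    order = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))
    chosen = order[:m]
    mag = sum(abs(weights[i]) for i in chosen) / m
    template = [0.0] * len(weights)
    for i in chosen:
        template[i] = mag if weights[i] >= 0 else -mag
    dist = sum((w - t) ** 2 for w, t in zip(weights, template))
    return template, dist
```

    A small distance means the unit is well described by the symbolic expression "at least n of these m inputs", which is the sense in which a template instantiation "matches" a unit.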

  12. PID Neural Network Based Speed Control of Asynchronous Motor Using Programmable Logic Controller

    Directory of Open Access Journals (Sweden)

    MARABA, V. A.

    2011-11-01

    Full Text Available This paper deals with the structure and characteristics of a PID Neural Network controller for single-input single-output systems. The PID Neural Network is a new kind of controller that combines the advantages of artificial neural networks and the classic PID controller. The controller operates by updating its parameters according to values extracted from the system output, following the back-propagation rules used in artificial neural networks. Parameters obtained by applying the PID Neural Network training algorithm to the speed model of an asynchronous motor exhibiting second-order linear behavior were used in the real-time speed control of the motor. A programmable logic controller (PLC) was used as the real-time controller. The real-time control results show that the reference speed is successfully maintained under various load conditions.
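    A toy illustration of treating the P, I, and D terms as neurons whose gains are updated on-line by a gradient (delta) rule: the plant model, learning rate, and the unit plant-Jacobian approximation below are hypothetical simplifications, not the paper's design:

```python
def pid_nn_control(ref=1.0, steps=400, lr=0.005):
    """Adapt PID gains on-line with a delta rule, treating the three
    PID terms (P, I, D) as single-node 'neurons' whose weights are
    the gains. The plant is a hypothetical stable first-order system;
    its input gain sign is approximated as +1 in the update."""
    kp = ki = kd = 0.0
    y, integ, prev_e = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = ref - y
        integ += e
        deriv = e - prev_e
        u = kp * e + ki * integ + kd * deriv   # PID 'neuron' output
        # Delta rule: move each gain along -d(e^2)/d(gain)
        kp += lr * e * e
        ki += lr * e * integ
        kd += lr * e * deriv
        prev_e = e
        y = 0.9 * y + 0.1 * u                  # first-order plant step
    return y, (kp, ki, kd)
```

    After an initial transient the output oscillates around the reference with shrinking amplitude as the gains settle, which is the qualitative behavior the abstract reports for the real motor.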

  13. A case study to estimate costs using Neural Networks and regression based models

    Directory of Open Access Journals (Sweden)

    Nadia Bhuiyan

    2012-07-01

    Full Text Available Bombardier Aerospace’s high-performance aircraft and services set the utmost standard for the aerospace industry. A case study in collaboration with Bombardier Aerospace was conducted in order to estimate the target cost of a landing gear. More precisely, the study uses both a parametric model and neural network models to estimate the cost of main landing gears, a major aircraft commodity. A comparative analysis between the parametric model and the neural network models is conducted in order to determine the most accurate method for predicting the cost of a main landing gear. Several trials are presented for the design and use of the neural network model. The analysis for the case under study shows the flexibility in the design of the neural network model. Furthermore, the performance of the neural network model is deemed superior to that of the parametric models for this case study.
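    As a sketch of the parametric side of such a comparison, a one-driver cost model can be fitted by ordinary least squares; the driver variable and data below are illustrative, not Bombardier's:

```python
def fit_linear_cost(drivers, costs):
    """Parametric cost model  cost = a + b * driver,  fitted by
    ordinary least squares on historical (driver, cost) pairs.
    The study's neural network model replaces this fixed functional
    form with a learned nonlinear mapping."""
    n = len(drivers)
    mx = sum(drivers) / n
    my = sum(costs) / n
    sxx = sum((x - mx) ** 2 for x in drivers)
    sxy = sum((x - mx) * (y - my) for x, y in zip(drivers, costs))
    b = sxy / sxx
    a = my - b * mx
    return a, b
```

    The comparison in the study then amounts to measuring prediction error of this closed-form model against the trained network on held-out landing-gear data.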

  14. Particle Swarm Based Approach of a Real-Time Discrete Neural Identifier for Linear Induction Motors

    Directory of Open Access Journals (Sweden)

    Alma Y. Alanis

    2013-01-01

    Full Text Available This paper focuses on a discrete-time neural identifier applied to a linear induction motor (LIM), whose model is assumed to be unknown. This neural identifier is robust in the presence of external and internal uncertainties. The proposed scheme is based on a discrete-time recurrent high-order neural network (RHONN) trained with a novel algorithm based on the extended Kalman filter (EKF) and particle swarm optimization (PSO), using an online series-parallel configuration. Real-time results are included in order to illustrate the applicability of the proposed scheme.

  15. Evolving Neural Turing Machines for Reward-based Learning

    DEFF Research Database (Denmark)

    Greve, Rasmus Boll; Jacobsen, Emil Juul; Risi, Sebastian

    2016-01-01

    An unsolved problem in neuroevolution (NE) is to evolve artificial neural networks (ANN) that can store and use information to change their behavior online. While plastic neural networks have shown promise in this context, they have difficulties retaining information over longer periods of time...... version of the double T-Maze, a complex reinforcement-like learning problem. In the T-Maze learning task the agent uses the memory bank to display adaptive behavior that normally requires a plastic ANN, thereby suggesting a complementary and effective mechanism for adaptive behavior in NE....

  16. Neural network based photovoltaic electrical forecasting in south Algeria

    International Nuclear Information System (INIS)

    Hamid Oudjana, S.; Hellal, A.; Hadj Mahammed, I.

    2014-01-01

    Photovoltaic electrical forecasting is significant for the optimal operation and power prediction of grid-connected photovoltaic (PV) plants, and it is an important task in renewable energy electrical system planning and operation. This paper explores the application of neural networks (NN) to the design of photovoltaic electrical forecasting systems for one week ahead, using weather databases that include the global irradiance and temperature of Ghardaia city (south of Algeria) for the year 2013, collected with a data acquisition system. Simulations were run and the results are discussed, showing that the neural network technique is capable of decreasing the photovoltaic electrical forecasting error. (author)

  17. The method in γ spectrum analysis with artificial neural network based on MATLAB

    International Nuclear Information System (INIS)

    Bai Lixin; Zhang Yiyun; Xu Jiayun; Wu Liping

    2003-01-01

    Analyzing γ spectra with an artificial neural network has the advantages of using the information of the whole spectrum and achieving high analysis precision. A convenient realization based on MATLAB is presented in this

  18. Effects of Some Neurobiological Factors in a Self-organized Critical Model Based on Neural Networks

    International Nuclear Information System (INIS)

    Zhou Liming; Zhang Yingyue; Chen Tianlun

    2005-01-01

    Based on an integrate-and-fire mechanism, we investigate the effects of changing the efficacy of the synapses, the transmission time delay, and the relative refractory period on the self-organized criticality in our neural network model.

  19. Estimation of Muscle Force Based on Neural Drive in a Hemispheric Stroke Survivor.

    Science.gov (United States)

    Dai, Chenyun; Zheng, Yang; Hu, Xiaogang

    2018-01-01

    Robotic assistant-based therapy holds great promise to improve the functional recovery of stroke survivors. Numerous neural-machine interface techniques have been used to decode the intended movement to control robotic systems for rehabilitation therapies. In this case report, we tested the feasibility of estimating finger extensor muscle forces of a stroke survivor, based on the decoded descending neural drive through population motoneuron discharge timings. Motoneuron discharge events were obtained by decomposing high-density surface electromyogram (sEMG) signals of the finger extensor muscle. The neural drive was extracted from the normalized frequency of the composite discharge of the motoneuron pool. The neural-drive-based estimation was also compared with the classic myoelectric-based estimation. Our results showed that the neural-drive-based approach can better predict the force output, quantified by lower estimation errors and higher correlations with the muscle force, compared with the myoelectric-based estimation. Our findings suggest that the neural-drive-based approach can potentially be used as a more robust interface signal for robotic therapies during the stroke rehabilitation.
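    A rough sketch of turning motoneuron discharge timings into a normalized neural-drive signal: the decomposed spike trains are pooled into a composite discharge train and smoothed. The sampling rate, window length, and normalization below are assumptions for illustration, not the authors' exact processing:

```python
def neural_drive(spike_trains, fs=1000, win_ms=200):
    """Composite discharge rate of a motoneuron pool, smoothed with a
    causal moving-average window, as a crude proxy for the descending
    neural drive. spike_trains: equal-length 0/1 lists, one per
    decomposed motoneuron."""
    n = len(spike_trains[0])
    composite = [sum(tr[i] for tr in spike_trains) for i in range(n)]
    win = max(1, int(fs * win_ms / 1000))
    drive = []
    for i in range(n):
        seg = composite[max(0, i - win + 1):i + 1]
        drive.append(sum(seg) / len(seg) * fs)  # pooled firing rate, Hz
    peak = max(drive) or 1.0
    return [d / peak for d in drive]            # normalised to [0, 1]
```

    The resulting signal could then be regressed against measured finger-extension force, analogous to how the case report compares neural-drive-based and myoelectric-based estimates.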

  20. Enhanced Neural Cell Adhesion and Neurite Outgrowth on Graphene-Based Biomimetic Substrates

    Directory of Open Access Journals (Sweden)

    Suck Won Hong

    2014-01-01

    Full Text Available Neural cell adhesion and neurite outgrowth were examined on graphene-based biomimetic substrates. The biocompatibility of carbon nanomaterials such as graphene and carbon nanotubes (CNTs, that is, single-walled and multiwalled CNTs, against pheochromocytoma-derived PC-12 neural cells was also evaluated by quantifying metabolic activity (with WST-8 assay, intracellular oxidative stress (with ROS assay, and membrane integrity (with LDH assay. Graphene films were grown by using chemical vapor deposition and were then coated onto glass coverslips by using the scooping method. Graphene sheets were patterned on SiO2/Si substrates by using photolithography and were then covered with serum for a neural cell culture. Both types of CNTs induced significant dose-dependent decreases in the viability of PC-12 cells, whereas graphene exerted adverse effects on the neural cells only at concentrations above 62.5 ppm. This result implies that graphene and CNTs, even though they are both carbon-based nanomaterials, exert differential influences on neural cells. Furthermore, graphene-coated or graphene-patterned substrates were shown to substantially enhance the adhesion and neurite outgrowth of PC-12 cells. These results suggest that graphene-based substrates as biomimetic cues have good biocompatibility as well as a unique surface property that can enhance neural cells, which would open up enormous opportunities in neural regeneration and nanomedicine.

  1. An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

    Science.gov (United States)

    Cabessa, Jérémie; Villa, Alessandro E. P.

    2014-01-01

    We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights into the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866

  2. Battery Performance Modelling and Simulation: a Neural Network Based Approach

    Science.gov (United States)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

    This project was developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network (ANN) tool able to model, simulate and predict the Cluster II battery system's performance degradation. (The Cluster II mission comprises four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the Sun and the Earth by passing in and out of our planet's magnetic field.) This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short- and medium-term mission planning in order to improve and maximise the batteries' lifetime, determining the best future charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five silver-cadmium batteries on board Tango, the fourth Cluster II satellite, but time constraints have so far allowed an assessment of only the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise historical data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, and this is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections given new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg-Marquardt algorithm.

  3. Radioactivity nuclide identification based on BP and LM algorithm neural network

    International Nuclear Information System (INIS)

    Wang Jihong; Sun Jian; Wang Lianghou

    2012-01-01

    The paper presents a method for identifying radioactive nuclides based on BP and LM algorithm neural networks. The method is then compared with the FR algorithm. The Matlab simulation results show that radioactive nuclide identification based on the BP and LM algorithm neural networks is superior to the FR algorithm, offering better performance and higher accuracy, which makes it the preferred choice. (authors)

  4. High Speed PAM -8 Optical Interconnects with Digital Equalization based on Neural Network

    DEFF Research Database (Denmark)

    Gaiarin, Simone; Pang, Xiaodan; Ozolins, Oskars

    2016-01-01

    We experimentally evaluate a high-speed optical interconnection link with neural network equalization. Enhanced equalization performance is shown compared to a standard linear FFE for an EML-based 32 GBd PAM-8 signal after 4-km SMF transmission.

  5. A Sliding Mode Control based on an RBF Neural Network for Deburring Industrial Robotic Systems

    OpenAIRE

    Tao, Yong; Zheng, Jiaqi; Lin, Yuanchang

    2016-01-01

    A sliding mode control method based on a radial basis function (RBF) neural network is proposed for the deburring of industrial robotic systems. First, a dynamic model of the deburring robot system is established. Then, a conventional SMC scheme is introduced for the joint position tracking of robot manipulators. The RBF neural network based sliding mode control (RBFNN-SMC) has the ability to learn uncertain control actions. In the RBFNN-SMC scheme, the adaptive tuning algorithms for network par...

  6. Adaptive Learning Rule for Hardware-based Deep Neural Networks Using Electronic Synapse Devices

    OpenAIRE

    Lim, Suhwan; Bae, Jong-Ho; Eum, Jai-Ho; Lee, Sungtae; Kim, Chul-Heung; Kwon, Dongseok; Park, Byung-Gook; Lee, Jong-Ho

    2017-01-01

    In this paper, we propose a learning rule based on a back-propagation (BP) algorithm that can be applied to a hardware-based deep neural network (HW-DNN) using electronic devices that exhibit discrete and limited conductance characteristics. This adaptive learning rule, which enables forward propagation, backward propagation, and weight updates in hardware, is helpful during the implementation of power-efficient and high-speed deep neural networks. In simulations using a three-layer perceptron net...

  7. Multilevel ensemble Kalman filter

    KAUST Repository

    Chernov, Alexey; Hoel, Haakon; Law, Kody; Nobile, Fabio; Tempone, Raul

    2016-01-01

    This work embeds a multilevel Monte Carlo (MLMC) sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF). In terms of computational cost vs. approximation error, the asymptotic performance of the multilevel ensemble Kalman filter (MLEnKF) is superior to that of the EnKF.

  8. Entropy of network ensembles

    Science.gov (United States)

    Bianconi, Ginestra

    2009-03-01

    In this paper we generalize the concept of random networks to describe network ensembles with nontrivial features by a statistical mechanics approach. This framework is able to describe undirected and directed network ensembles as well as weighted network ensembles. These networks might have nontrivial community structure or, in the case of networks embedded in a given space, they might have a link probability with a nontrivial dependence on the distance between the nodes. These ensembles are characterized by their entropy, which evaluates the cardinality of networks in the ensemble. In particular, in this paper we define and evaluate the structural entropy, i.e., the entropy of the ensembles of undirected uncorrelated simple networks with given degree sequence. We stress the apparent paradox that scale-free degree distributions are characterized by having small structural entropy while they are so widely encountered in natural, social, and technological complex systems. We propose a solution to the paradox by proving that scale-free degree distributions are the most likely degree distribution with the corresponding value of the structural entropy. Finally, the general framework we present in this paper is able to describe microcanonical ensembles of networks as well as canonical or hidden-variable network ensembles with significant implications for the formulation of network-constructing algorithms.
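    For the canonical (hidden-variable) case, where each link (i, j) appears independently with probability p_ij, the ensemble entropy reduces to a sum of per-link Shannon entropies. The sketch below implements only that special case; the structural (microcanonical) entropy discussed in the paper, which fixes the degree sequence exactly, requires different machinery:

```python
import math

def ensemble_entropy(p):
    """Shannon entropy of a canonical ensemble of simple undirected
    networks in which each link (i, j) appears independently with
    probability p[i][j] (symmetric matrix, zero diagonal)."""
    n = len(p)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            q = p[i][j]
            for x in (q, 1.0 - q):
                if 0.0 < x < 1.0:
                    s -= x * math.log(x)
                # x == 0 or x == 1 contributes nothing (0·log 0 = 0)
    return s
```

    For n nodes with all p_ij = 1/2 the ensemble is maximally uncertain and the entropy is n(n-1)/2 · ln 2, i.e. one nat-scaled bit per possible link.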

  9. Multilevel ensemble Kalman filter

    KAUST Repository

    Chernov, Alexey

    2016-01-06

    This work embeds a multilevel Monte Carlo (MLMC) sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF). In terms of computational cost vs. approximation error, the asymptotic performance of the multilevel ensemble Kalman filter (MLEnKF) is superior to that of the EnKF.

  10. ProLanGO: Protein Function Prediction Using Neural Machine Translation Based on a Recurrent Neural Network.

    Science.gov (United States)

    Cao, Renzhi; Freitas, Colton; Chan, Leong; Sun, Miao; Jiang, Haiqing; Chen, Zhangxin

    2017-10-17

    With the development of next-generation sequencing techniques, it is fast and cheap to determine protein sequences, but relatively slow and expensive to extract useful information from them because of the limitations of traditional biological experimental techniques. Protein function prediction has been a long-standing challenge to fill the gap between the huge number of known protein sequences and their known functions. In this paper, we propose a novel method that converts the protein function prediction problem into a language translation problem, from the newly proposed protein sequence language "ProLan" to the protein function language "GOLan", and build a neural machine translation model based on recurrent neural networks to translate the "ProLan" language into the "GOLan" language. We blindly tested our method by participating in the third Critical Assessment of Function Annotation (CAFA 3) in 2016, and also evaluated the performance of our method on selected proteins whose functions were released after the CAFA competition. The good performance on the training and testing datasets demonstrates that our newly proposed method is a promising direction for protein function prediction. In summary, we propose, for the first time, a method that converts the protein function prediction problem into a language translation problem and applies a neural machine translation model to protein function prediction.

  11. The Ensembl REST API: Ensembl Data for Any Language.

    Science.gov (United States)

    Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R S; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul

    2015-01-01

    We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest. © The Author 2014. Published by Oxford University Press.
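    A minimal Python client sketch for the server described above; the endpoint path and gene ID are shown for illustration only (see rest.ensembl.org for the authoritative endpoint catalogue and rate-limit guidance):

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

SERVER = "https://rest.ensembl.org"

def build_request(endpoint, **params):
    """Build a GET request for the Ensembl REST server, asking for a
    JSON reply via the Content-Type header used in Ensembl's examples."""
    url = SERVER + endpoint
    if params:
        url += "?" + urlencode(params)
    return Request(url, headers={"Content-Type": "application/json"})

def fetch(endpoint, **params):
    """Perform the request and decode the JSON reply (network access)."""
    with urlopen(build_request(endpoint, **params)) as resp:
        return json.load(resp)

# Example (not executed here; requires network access):
#   gene = fetch("/lookup/id/ENSG00000157764")   # BRAF stable ID
#   print(gene["display_name"])
```

    The same pattern covers any documented endpoint; only the path and query parameters change, which is the language-agnostic simplicity the REST design aims for.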

  12. A neural network based seafloor classification using acoustic backscatter

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    This paper presents the results of a study of Artificial Neural Network (ANN) architectures [Self-Organizing Map (SOM) and Multi-Layer Perceptron (MLP)] using single beam echosounding data. The single beam echosounder, operable at 12 kHz, has been used...

  13. Artificial Neural Networks for SCADA Data based Load Reconstruction (poster)

    NARCIS (Netherlands)

    Hofemann, C.; Van Bussel, G.J.W.; Veldkamp, H.

    2011-01-01

    If at least one reference wind turbine is available, which provides sufficient information about the wind turbine loads, the loads acting on the neighbouring wind turbines can be predicted via an artificial neural network (ANN). This research explores the possibilities to apply such a network not

  14. neural network based load frequency control for restructuring power

    African Journals Online (AJOL)

    2012-03-01

    Mar 1, 2012 ... the system in the back propagation chain used in controller training. For this application, .... The partial derivative of E with respect to elements of Γ, for example W, ... Ki = any non-negative value. Figure 7: Neural Network ...

  15. MODELLING OF CONCENTRATION LIMITS BASED ON NEURAL NETWORKS.

    Directory of Open Access Journals (Sweden)

    A. L. Osipov

    2017-02-01

    Full Text Available We study models for forecasting concentration limits using neural network technology, and describe software implementing these models. The efficiency of the system is demonstrated on experimental material.

  16. Active Control of Sound based on Diagonal Recurrent Neural Network

    NARCIS (Netherlands)

    Jayawardhana, Bayu; Xie, Lihua; Yuan, Shuqing

    2002-01-01

    Recurrent neural networks have been known for their dynamic mapping and are better suited to nonlinear dynamical systems. A nonlinear controller may be needed in cases where the actuators exhibit nonlinear characteristics, or in cases where the structure to be controlled exhibits nonlinear behavior. The

  17. A fuzzy art neural network based color image processing and ...

    African Journals Online (AJOL)

    To improve the learning process from the input data, a new learning rule was suggested. In this paper, a new method is proposed to deal with the RGB color image pixels, which enables a Fuzzy ART neural network to process the RGB color images. The application of the algorithm was implemented and tested on a set of ...

  18. Image objects detection based on boosting neural network

    NARCIS (Netherlands)

    Liang, N.; Hegt, J.A.; Mladenov, V.M.

    2010-01-01

    This paper discusses the problem of object area detection in video frames. The goal is to design a pixel-accurate detector for grass, which could be used for object-adaptive video enhancement. A boosting neural network is used for creating such a detector. The resulting detector uses both textural

  19. Neural network based satellite tracking for deep space applications

    Science.gov (United States)

    Amoozegar, F.; Ruggier, C.

    2003-01-01

    The objective of this paper is to provide a survey of neural network trends as applied to the tracking of spacecraft in deep space at Ka-band under various weather conditions, and to examine the trade-off between tracking accuracy and communication link performance.

  20. Artificial-neural-network-based failure detection and isolation

    Science.gov (United States)

    Sadok, Mokhtar; Gharsalli, Imed; Alouani, Ali T.

    1998-03-01

    This paper presents the design of a systematic failure detection and isolation system that uses the concept of failure sensitive variables (FSV) and artificial neural networks (ANN). The proposed approach was applied to tube leak detection in a utility boiler system. Results of the experimental testing are presented in the paper.

  1. Neural feedback linearization adaptive control for affine nonlinear systems based on neural network estimator

    Directory of Open Access Journals (Sweden)

    Bahita Mohamed

    2011-01-01

    Full Text Available In this work, we introduce an adaptive neural network controller for a class of nonlinear systems. The approach uses two Radial Basis Function (RBF) networks. The first RBF network is used to approximate the ideal control law, which cannot be implemented since the dynamics of the system are unknown. The second RBF network is used for on-line estimation of the control gain, which is a nonlinear and unknown function of the states. The updating laws for the combined estimator and controller are derived through Lyapunov analysis. Asymptotic stability is established, with the tracking errors converging to a neighborhood of the origin. Finally, the proposed method is applied to control and stabilize the inverted pendulum system.
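    A simplified sketch of the first ingredient, an RBF network whose output weights are adapted on-line to approximate an unknown scalar function: the centers, width, and plain gradient update below are illustrative stand-ins for the Lyapunov-derived adaptation laws in the paper:

```python
import math
import random

def rbf_features(x, centers, width=0.5):
    """Gaussian radial basis activations for scalar input x."""
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

def train_rbf(f, centers, epochs=2000, lr=0.2, seed=0):
    """Adapt the output weights of an RBF network on-line so that it
    approximates the unknown function f on [-1, 1], using stochastic
    gradient descent on the squared approximation error."""
    rng = random.Random(seed)
    w = [0.0] * len(centers)
    for _ in range(epochs):
        x = rng.uniform(-1.0, 1.0)
        phi = rbf_features(x, centers)
        err = sum(wi * p for wi, p in zip(w, phi)) - f(x)
        for i, p in enumerate(phi):
            w[i] -= lr * err * p   # gradient of the squared error
    return w

def rbf_predict(x, w, centers):
    return sum(wi * p for wi, p in zip(w, rbf_features(x, centers)))
```

    In the controller setting, one such network would approximate the ideal control law and a second the state-dependent control gain, with the updates driven by the tracking error instead of a direct function-value error.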

  2. Can the combined use of an ensemble based modelling approach and the analysis of measured meteorological trends lead to increased confidence in climate change impact assessments?

    Science.gov (United States)

    Gädeke, Anne; Koch, Hagen; Pohle, Ina; Grünewald, Uwe

    2014-05-01

    In anthropogenically heavily impacted river catchments, such as the Lusatian catchments of the Spree and Schwarze Elster (Germany), a robust assessment of the possible impacts of climate change on regional water resources is highly relevant for the development and implementation of suitable climate change adaptation strategies. Large uncertainties inherent in future climate projections may, however, reduce the willingness of regional stakeholders to develop and implement such strategies. This study provides an overview of different possibilities for considering uncertainties in climate change impact assessments by means of (1) an ensemble-based modelling approach and (2) the incorporation of measured and simulated meteorological trends. The ensemble-based modelling approach consists of the meteorological output of four climate downscaling approaches (DAs) (two dynamical and two statistical DAs, 113 realisations in total), which drive different model configurations of two conceptually different hydrological models (HBV-light and WaSiM-ETH). Three near-natural subcatchments of the Spree and Schwarze Elster river catchments serve as the study area. The objective of incorporating measured meteorological trends into the analysis was twofold: measured trends can (i) serve as a means of validating the results of the DAs and (ii) be regarded as a harbinger of the future direction of change. Moreover, regional stakeholders seem to have more trust in measurements than in modelling results. In order to evaluate the nature of the trends, both gradual (Mann-Kendall test) and step changes (Pettitt test) are considered, as well as both temporal and spatial correlations in the data. The results of the ensemble-based modelling chain show that depending on the type (dynamical or statistical) of DA used, opposing trends in precipitation, actual evapotranspiration and discharge are simulated in the scenario period (2031-2060). While the statistical DAs

  3. Representing and Reasoning with the Internet of Things: a Modular Rule-Based Model for Ensembles of Context-Aware Smart Things

    OpenAIRE

    S. W. Loke

    2016-01-01

    Context-aware smart things are capable of computational behaviour based on sensing the physical world, inferring context from the sensed data, and acting on the sensed context. A collection of such things can form what we call a thing-ensemble, when they have the ability to communicate with one another (over a short range network such as Bluetooth, or the Internet, i.e. the Internet of Things (IoT) concept), sense each other, and when each of them might play certain roles with respect to each...

  4. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning.

    Science.gov (United States)

    Olesen, Alexander Neergaard; Christensen, Julie A E; Sorensen, Helge B D; Jennum, Poul J

    2016-08-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide as accurate results as conventional systems. This paper investigates the possibility of exploiting the multisource nature of the electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74 indicating substantial agreement between automatic and manual scoring.

  5. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning

    DEFF Research Database (Denmark)

    Olesen, Alexander Neergaard; Christensen, Julie Anja Engelhard; Sørensen, Helge Bjarup Dissing

    2016-01-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide as accurate results as conventional systems. This paper investigates the possibility of exploiting the multisource nature of the electrooculography...... (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen’s kappa of 0.74 indicating substantial agreement between...

  6. Automated implementation of rule-based expert systems with neural networks for time-critical applications

    Science.gov (United States)

    Ramamoorthy, P. A.; Huang, Song; Govind, Girish

    1991-01-01

    In fault diagnosis, control and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or take appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have seen a revival, and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from a given rule base. This significantly simplifies the translation from conventional rule-based systems to neural network expert systems. Results comparing the performance of the proposed neural network approach with the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.
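    The idea of encoding rules as fixed weights and thresholds can be illustrated with classic threshold units; this is a generic sketch, not the paper's specific translation scheme:

```python
def threshold_neuron(inputs, weights, threshold):
    """Fires (returns 1) when the weighted input sum exceeds the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

# Rule "IF a AND b THEN fire": both antecedents needed, so the threshold
# sits between the one-input sum (1) and the two-input sum (2).
def rule_and(a, b):
    return threshold_neuron([a, b], [1.0, 1.0], 1.5)

# Rule "IF a OR b THEN fire": a single antecedent suffices.
def rule_or(a, b):
    return threshold_neuron([a, b], [1.0, 1.0], 0.5)
```

    Chaining such units layer by layer yields a network whose forward pass evaluates the whole rule base in fixed time, which is the appeal for time-critical use.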

  7. History Matching and Parameter Estimation of Surface Deformation Data for a CO2 Sequestration Field Project Using Ensemble-Based Algorithms

    Science.gov (United States)

    Tavakoli, Reza; Srinivasan, Sanjay; Wheeler, Mary

    2015-04-01

    The application of ensemble-based algorithms for history matching reservoir models has been steadily increasing over the past decade. However, the majority of implementations in reservoir engineering have dealt only with production history matching. During geologic sequestration, the injection of large quantities of CO2 into the subsurface may alter the stress/strain field, which in turn can lead to surface uplift or subsidence. Therefore, it is essential to couple multiphase flow and geomechanical response in order to predict and quantify the uncertainty of CO2 plume movement for long-term, large-scale CO2 sequestration projects. In this work, we simulate and estimate the properties of a reservoir that is being used to store CO2 as part of the In Salah Capture and Storage project in Algeria. The CO2 is separated from produced natural gas and is re-injected into the downdip aquifer portion of the field from three long horizontal wells. The field observation data include ground surface deformations (uplift) measured using satellite-based radar (InSAR), injection well locations and CO2 injection rate histories provided by the operators. We implement variations of the ensemble Kalman filter and ensemble smoother algorithms for assimilating both injection rate data and geomechanical observations (surface uplift) into the reservoir model. The preliminary estimates of horizontal permeability and of material properties such as Young's modulus and Poisson's ratio are consistent with available measurements and previous studies of this field. Moreover, the existence of high-permeability channels (fractures) within the reservoir, especially in the regions around the injection wells, is confirmed. These estimation results can be used to predict and quantify the uncertainty in the movement of the CO2 plume accurately and efficiently.
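    The ensemble Kalman filter analysis step underlying such history matching can be sketched for a scalar state; this toy stochastic-EnKF update (perturbed observations; all names and simplifications are ours) conveys the idea:

```python
import random

def enkf_update(ensemble, obs, obs_var):
    """Stochastic EnKF analysis step for a scalar state: each member is
    nudged toward its own perturbed copy of the observation by the gain
    K = P / (P + R), with P the ensemble (forecast) variance and R the
    observation-error variance."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    p = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast variance
    k = p / (p + obs_var)                                 # Kalman gain
    return [x + k * (obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]
```

    In the multivariate field case the same update is applied with matrix covariances estimated from the ensemble, which is what allows both flow and geomechanical observations to constrain the model jointly.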

  8. Musical ensembles in Ancient Mesopotamia

    NARCIS (Netherlands)

    Krispijn, T.J.H.; Dumbrill, R.; Finkel, I.

    2010-01-01

    Identification of musical instruments from ancient Mesopotamia by comparing musical ensembles attested in Sumerian and Akkadian texts with depicted ensembles. Lexicographical contributions to the Sumerian and Akkadian lexicon.

  9. Image object recognition based on the Zernike moment and neural networks

    Science.gov (United States)

    Wan, Jianwei; Wang, Ling; Huang, Fukan; Zhou, Liangzhu

    1998-03-01

    This paper first gives a comprehensive discussion of the concept of the artificial neural network, its research methods, and its relation to information processing. On this basis, we expound the mathematical similarity between artificial neural networks and information processing. The paper then presents a new method of image recognition based on invariant features and a neural network, using the image Zernike transform. The method not only is invariant to rotation, shift and scale of the image object, but also has good fault tolerance and robustness. It is also compared with a statistical classifier and the invariant-moments recognition method.

  10. Prediction of Industrial Electric Energy Consumption in Anhui Province Based on GA-BP Neural Network

    Science.gov (United States)

    Zhang, Jiajing; Yin, Guodong; Ni, Youcong; Chen, Jinlan

    2018-01-01

    In order to improve the prediction accuracy of industrial electric energy consumption, a prediction model based on a genetic algorithm and a neural network is proposed. The model uses a genetic algorithm to optimize the weights and thresholds of a BP neural network, and is applied to predicting industrial electric energy consumption in Anhui Province. Comparative experiments between the GA-BP prediction model and a plain BP neural network model show that the GA-BP model is more accurate while using a smaller number of neurons in the hidden layer.
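    A GA tuning network parameters against prediction error can be sketched as follows; a single linear neuron stands in for the BP network, and every hyperparameter here is illustrative rather than taken from the paper:

```python
import random

def predict(w, x):
    # One linear neuron standing in for the BP network whose weights the GA tunes.
    return w[0] * x + w[1]

def evolve(data, pop_size=30, generations=80, seed=3):
    """Toy genetic algorithm: rank selection with elitism, blend crossover,
    Gaussian mutation; fitness is the (negated) squared prediction error."""
    rng = random.Random(seed)
    def err(w):
        return sum((predict(w, x) - y) ** 2 for x, y in data)
    pop = [[rng.uniform(-3, 3), rng.uniform(-3, 3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=err)
        parents = pop[: pop_size // 2]        # elitism: best half survives
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)     # crossover: blend two parents
            children.append([(ai + bi) / 2 + rng.gauss(0.0, 0.1)
                             for ai, bi in zip(a, b)])
        pop = parents + children
    return min(pop, key=err)

# Recover the slope and intercept of y = 2x + 1 from five noiseless samples.
data = [(float(x), 2.0 * x + 1.0) for x in range(5)]
best = evolve(data)
```

    In the GA-BP setting the chromosome would instead hold all weights and thresholds of the BP network, with backpropagation optionally refining the GA's best individual.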

  11. Error Concealment using Neural Networks for Block-Based Image Coding

    Directory of Open Access Journals (Sweden)

    M. Mokos

    2006-06-01

    Full Text Available In this paper, a novel adaptive error concealment (EC) algorithm, which lowers the requirements for channel coding, is proposed. It conceals errors in block-based image coding systems by using a neural network. In the proposed algorithm, only intra-frame information is used for reconstruction of an image with scattered damaged blocks. The information of the pixels surrounding a damaged block is used to recover the errors using neural network models. Computer simulation results show that both the visual quality and the MSE evaluation of a reconstructed image are significantly improved by the proposed EC algorithm. We also propose a simple non-neural approach for comparison.

  12. An Application to the Prediction of LOD Change Based on General Regression Neural Network

    Science.gov (United States)

    Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.

    2011-07-01

    Traditional prediction of the LOD (length of day) change was based on linear models, such as the least-squares model and the autoregressive technique. Due to the complex non-linear features of the LOD variation, the performance of linear model predictors is not fully satisfactory. This paper applies a non-linear neural network, the general regression neural network (GRNN) model, to forecast the LOD change; the results are analyzed and compared with those obtained with the back-propagation neural network and other models. The comparison shows that the GRNN model is efficient and feasible in the prediction of the LOD change.
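    The GRNN at the core of this predictor is essentially a Gaussian-kernel-weighted average of the training targets; a minimal sketch (the single smoothing parameter sigma is the only thing to tune):

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """GRNN prediction: each training target is weighted by a Gaussian kernel
    of its input's distance to the query point, then normalized."""
    w = [math.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2)) for xi in train_x]
    return sum(wi * yi for wi, yi in zip(w, train_y)) / sum(w)
```

    With a small sigma the prediction at a training point reproduces that point's target; a large sigma smooths toward the mean of nearby targets.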

  13. Optical implementation of a feature-based neural network with application to automatic target recognition

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1993-01-01

    An optical neural network based on the neocognitron paradigm is introduced. A novel aspect of the architecture design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by feeding back the output of the feature correlator iteratively to the input spatial light modulator and by updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intraclass fault tolerance and interclass discrimination is achieved. A detailed system description is provided. An experimental demonstration of a two-layer neural network for space-object discrimination is also presented.

  14. Automatic target recognition using a feature-based optical neural network

    Science.gov (United States)

    Chao, Tien-Hsin

    1992-01-01

    An optical neural network based upon the Neocognitron paradigm (K. Fukushima et al. 1983) is introduced. A novel aspect of the architectural design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intra-class fault tolerance and inter-class discrimination is achieved. A detailed system description is provided. An experimental demonstration of a two-layer neural network for space-object discrimination is also presented.

  15. Prediction of welding shrinkage deformation of bridge steel box girder based on wavelet neural network

    Science.gov (United States)

    Tao, Yulong; Miao, Yunshui; Han, Jiaqi; Yan, Feiyun

    2018-05-01

    Aiming at the low accuracy of traditional forecasting methods such as linear regression, this paper presents a wavelet-neural-network method for predicting the welding shrinkage deformation of a bridge steel box girder. Compared with traditional forecasting methods, this scheme has better local characteristics and learning ability, which greatly improves the prediction of deformation. Case analysis shows that the wavelet-neural-network predictions of girder deformation are more accurate than those of a BP neural network and conform to the actual demands of engineering design.
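    A wavelet neural network replaces sigmoid activations with dilated and translated copies of a mother wavelet; a minimal one-input forward pass (the Morlet-style wavelet is an assumption of this sketch — the record does not state which wavelet family is used):

```python
import math

def morlet(t):
    # Morlet-style mother wavelet: a cosine under a Gaussian envelope
    # (an assumed choice for illustration).
    return math.cos(1.75 * t) * math.exp(-t * t / 2.0)

def wnn_forward(x, units):
    """One-input wavelet network output: a weighted sum over hidden units,
    each applying the mother wavelet with its own dilation a and translation b."""
    return sum(w * morlet((x - b) / a) for w, a, b in units)
```

    Training adjusts the weights w and the per-unit dilations and translations, which is what gives the model its localized fitting ability.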

  16. Robust synchronization of delayed neural networks based on adaptive control and parameters identification

    International Nuclear Information System (INIS)

    Zhou Jin; Chen Tianping; Xiang Lan

    2006-01-01

    This paper investigates the synchronization dynamics of delayed neural networks with all parameters unknown. By combining adaptive control and linear feedback with an updating law, some simple yet generic criteria for determining robust synchronization based on parameter identification of uncertain chaotic delayed neural networks are derived using the invariance principle of functional differential equations. It is shown that the approaches developed here further extend the ideas and techniques presented in the recent literature, and that they are also simple to implement in practice. Furthermore, the theoretical results are applied to a typical chaotic delayed Hopfield neural network, and numerical simulations also demonstrate the effectiveness and feasibility of the proposed technique.
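    The combination of linear feedback with an adaptive gain law can be sketched on a toy tracking problem; the sinusoidal master and integrator slave below are stand-ins for the delayed chaotic networks, and all dynamics and gains are invented for illustration:

```python
import math

def adaptive_sync(y0, g=5.0, dt=0.001, steps=20000):
    """Slave state y tracks the master trajectory x(t) = sin(t) through the
    feedback u = -k*(y - x); the adaptive law k' = g*(y - x)^2 raises the
    gain only while a synchronization error persists, so no prior knowledge
    of the required gain is needed."""
    k, y = 0.0, y0
    for i in range(steps):
        t = i * dt
        e = y - math.sin(t)                # synchronization error
        k += g * e * e * dt                # adaptive gain update
        y += (math.cos(t) - k * e) * dt    # slave dynamics plus feedback
    return abs(y - math.sin(steps * dt)), k

final_error, final_gain = adaptive_sync(1.0)
```

    The gain settles at a finite value once the error has died out, mirroring the boundedness arguments made via the invariance principle in the paper.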

  17. Impacts of calibration strategies and ensemble methods on ensemble flood forecasting over Lanjiang basin, Southeast China

    Science.gov (United States)

    Liu, Li; Xu, Yue-Ping

    2017-04-01

    Ensemble flood forecasting driven by numerical weather prediction products is becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system based on the Variable Infiltration Capacity (VIC) model and quantitative precipitation forecasts from the TIGGE dataset is constructed for the Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by a parallel-programmed ɛ-NSGAII multi-objective algorithm, and two separately parameterized models are determined to simulate daily flows and peak flows, coupled in a modular approach. The results indicate that the ɛ-NSGAII algorithm permits more efficient optimization and a rational determination of parameter settings. It is demonstrated that the multimodel ensemble streamflow mean has better skill than the best single-model ensemble mean (ECMWF), and that multimodel ensembles weighted on members and skill scores outperform other multimodel ensembles. For a typical flood event, the flood can be predicted 3-4 days in advance, but flows in the rising limb can be captured only 1-2 days ahead owing to their flashy nature. With respect to peak flows selected by the Peaks Over Threshold approach, the ensemble means from either a single model or multiple models are generally underestimated, as the extreme values are smoothed out by the ensemble process.
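    Weighting ensemble members by skill scores, as described in this record, amounts to a normalized weighted mean; a minimal sketch:

```python
def weighted_ensemble_mean(forecasts, skills):
    """Multimodel ensemble mean with member weights proportional to
    (non-negative) skill scores."""
    total = sum(skills)
    return sum(f * s for f, s in zip(forecasts, skills)) / total
```

    Equal skills recover the plain ensemble mean; a member with zero skill drops out entirely. The smoothing of extremes noted above follows directly: any convex combination of member forecasts can never exceed the largest member.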

  18. Towards a GME ensemble forecasting system: Ensemble initialization using the breeding technique

    Directory of Open Access Journals (Sweden)

    Jan D. Keller

    2008-12-01

    Full Text Available The quantitative forecast of precipitation requires a probabilistic background, particularly with regard to forecast lead times of more than 3 days. As only ensemble simulations can provide useful information on the underlying probability density function, we built a new ensemble forecasting system (GME-EFS) based on the GME model of the German Meteorological Service (DWD). For the generation of appropriate initial ensemble perturbations we chose the breeding technique developed by Toth and Kalnay (1993, 1997), which develops perturbations by estimating the regions of largest model-error-induced uncertainty. This method is applied and tested in the framework of quasi-operational forecasts for a three-month period in 2007. The performance of the resulting ensemble forecasts is compared to the operational ensemble prediction systems ECMWF EPS and NCEP GFS by means of the ensemble spread of free-atmosphere parameters (geopotential and temperature) and the ensemble skill of precipitation forecasting. This comparison indicates that the GME-EFS provides reasonable forecasts with a spread skill score comparable to that of the NCEP GFS. An analysis with the continuous ranked probability score exhibits a lack of resolution for the GME forecasts compared to the operational ensembles. However, with significant enhancements during the 3-month test period, the first results of our work with the GME-EFS indicate possibilities for further development as well as the potential for later operational usage.
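    The breeding cycle repeatedly rescales the forecast difference between a perturbed run and a control run back to a fixed amplitude, so that fast-growing error directions dominate after a few cycles; a minimal sketch of that rescaling step:

```python
def rescale_bred_vector(perturbed, control, amplitude=0.1):
    """One breeding-cycle step: take the forecast difference between the
    perturbed and control runs and rescale it to a fixed amplitude before
    re-seeding the next forecast."""
    diff = [p - c for p, c in zip(perturbed, control)]
    norm = sum(d * d for d in diff) ** 0.5
    return [amplitude * d / norm for d in diff]
```

    The norm used for rescaling (a plain Euclidean norm here) is a modelling choice; operational implementations typically measure amplitude in an energy- or variance-based norm.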

  19. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

    In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that for sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, the obtained results favor the Cubist algorithm for Landsat-based mapping of imperviousness at single dates, whereas Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for individual assessments but better prediction of change, owing to more correlated errors in its individual predictions. Heterogeneous model ensembles performed at least as well as the best individual models for individual time points, and for imperviousness change assessment the ensembles always outperformed single-model approaches. This means that the accuracy of sub-pixel imperviousness change assessment can be improved using ensembles of heterogeneous non-linear regression models.
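    The finding that correlated individual errors can yield a better change estimate has a one-line arithmetic core: errors common to both dates cancel in the difference. A minimal illustration (all numbers invented):

```python
def change_prediction_error(pred_t1, pred_t2, true_t1, true_t2):
    """Error of the predicted change between two dates."""
    return (pred_t2 - pred_t1) - (true_t2 - true_t1)

# Model A: small but oppositely signed errors at the two dates (+0.1, -0.1).
err_a = change_prediction_error(10.1, 11.9, 10.0, 12.0)
# Model B: larger but correlated errors (+0.2 at both dates) cancel in the change.
err_b = change_prediction_error(10.2, 12.2, 10.0, 12.0)
```

    Model B is the worse single-date estimator yet the better change estimator, which is exactly the Random Forest behaviour described above.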

  20. Multivariate localization methods for ensemble Kalman filtering

    KAUST Repository

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, Marc G.

    2015-01-01

    the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function
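    The localization described in this record is an element-wise (Schur) product of the sample covariance with a correlation matrix built from a distance-dependent function; a minimal sketch (the linear taper below is a placeholder, not one of the paper's localization functions):

```python
def distance_taper(dist, radius):
    # Simple distance-dependent correlation: a linear taper to zero at the
    # cutoff radius (a stand-in for e.g. a Gaspari-Cohn function).
    return max(0.0, 1.0 - dist / radius)

def schur_localize(cov, corr):
    """Element-wise (Schur) product of the ensemble-based sample covariance
    matrix and the localization correlation matrix."""
    return [[c * r for c, r in zip(crow, rrow)]
            for crow, rrow in zip(cov, corr)]
```

    Entries between distant grid points are damped toward zero, suppressing the spurious long-range covariances that small ensembles produce.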

  1. Loss of conformational entropy in protein folding calculated using realistic ensembles and its implications for NMR-based calculations

    Science.gov (United States)

    Baxa, Michael C.; Haddadian, Esmael J.; Jumper, John M.; Freed, Karl F.; Sosnick, Tobin R.

    2014-01-01

    The loss of conformational entropy is a major contribution in the thermodynamics of protein folding. However, accurate determination of the quantity has proven challenging. We calculate this loss using molecular dynamic simulations of both the native protein and a realistic denatured state ensemble. For ubiquitin, the total change in entropy is TΔSTotal = 1.4 kcal⋅mol−1 per residue at 300 K with only 20% from the loss of side-chain entropy. Our analysis exhibits mixed agreement with prior studies because of the use of more accurate ensembles and contributions from correlated motions. Buried side chains lose only a factor of 1.4 in the number of conformations available per rotamer upon folding (ΩU/ΩN). The entropy loss for helical and sheet residues differs due to the smaller motions of helical residues (TΔShelix−sheet = 0.5 kcal⋅mol−1), a property not fully reflected in the amide N-H and carbonyl C=O bond NMR order parameters. The results have implications for the thermodynamics of folding and binding, including estimates of solvent ordering and microscopic entropies obtained from NMR. PMID:25313044
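    The quoted factor of 1.4 in available conformations maps to an entropy contribution via Boltzmann's relation TΔS = RT ln(Ω_U/Ω_N); a quick check of the magnitude:

```python
import math

R = 0.0019872   # gas constant, kcal/(mol*K)
T = 300.0       # temperature used in the study, K

def tds_from_conformer_ratio(ratio):
    """Entropy cost, expressed as T*dS, of reducing the number of available
    conformations by the given factor: T*dS = R*T*ln(ratio)."""
    return R * T * math.log(ratio)
```

    The factor of 1.4 per buried rotamer thus corresponds to roughly 0.2 kcal·mol⁻¹, the same order as the ~20% side-chain share of the 1.4 kcal·mol⁻¹ per-residue total reported above.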

  2. Neural Network-Based Resistance Spot Welding Control and Quality Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.D., Jr.; Ivezic, N.D.; Zacharia, T.

    1999-07-10

    This paper describes the development and evaluation of neural network-based systems for industrial resistance spot welding process control and weld quality assessment. The developed systems utilize recurrent neural networks for process control and both recurrent and static networks for quality prediction. The first section describes a system capable of both welding process control and real-time weld quality assessment. The second describes the development and evaluation of a static neural network-based weld quality assessment system that relied on experimental design to limit the influence of environmental variability. Relevant data analysis methods are also discussed. The weld classifier resulting from the analysis successfully balances predictive power and simplicity of interpretation. The results presented for both systems demonstrate clearly that neural networks can be employed to address two significant problems common to the resistance spot welding industry: control of the process itself, and non-destructive determination of resulting weld quality.

  3. Nonlinear control strategy based on using a shape-tunable neural controller

    Energy Technology Data Exchange (ETDEWEB)

    Chen, C.; Peng, S. [Feng Chia Univ, Taichung (Taiwan, Province of China). Department of chemical Engineering; Chang, W. [Feng Chia Univ, Taichung (Taiwan, Province of China). Department of Automatic Control

    1997-08-01

    In this paper, a nonlinear control strategy based on using a shape-tunable neural network is developed for adaptive control of nonlinear processes. Based on the steepest descent method, a learning algorithm that enables the neural controller to possess the ability of automatic controller output range adjustment is derived. The novel feature of automatic output range adjustment provides the neural controller more flexibility and capability, and therefore the scaling procedure, which is usually unavoidable for the conventional fixed-shape neural controllers, becomes unnecessary. The advantages and effectiveness of the proposed nonlinear control strategy are demonstrated through the challenge problem of controlling an open-loop unstable nonlinear continuous stirred tank reactor (CSTR). 14 refs., 11 figs.

  4. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    Science.gov (United States)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
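    The gradient flow dX/dt = -γ Aᵀ(AX − I), whose equilibrium is X = A⁻¹, can be integrated with a fixed-step Euler loop in place of ode45 (the Kronecker-product vectorization in the paper serves only to hand the same dynamics to a vector ODE solver); a pure-Python sketch for small matrices, with illustrative step sizes:

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def gradient_nn_inverse(a, gamma=0.5, step=0.01, iters=5000):
    """Fixed-step Euler integration of the gradient-based neural dynamics
    dX/dt = -gamma * A^T (A X - I); the flow's equilibrium is X = A^{-1}."""
    n = len(a)
    at = [[a[j][i] for j in range(n)] for i in range(n)]         # A transpose
    eye = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    x = [[0.0] * n for _ in range(n)]                            # X(0) = 0
    for _ in range(iters):
        resid = [[v - e for v, e in zip(row, erow)]
                 for row, erow in zip(matmul(a, x), eye)]        # A X - I
        grad = matmul(at, resid)
        x = [[xv - step * gamma * gv for xv, gv in zip(xrow, grow)]
             for xrow, grow in zip(x, grad)]
    return x
```

    Stability of the explicit Euler step requires step·γ times the largest eigenvalue of AᵀA to stay below 2, which is one reason an adaptive solver such as ode45 is attractive for ill-conditioned matrices.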

  5. Results from a MA16-based neural trigger in an experiment looking for beauty

    International Nuclear Information System (INIS)

    Baldanza, C.; Beichter, J.; Bisi, F.; Bruels, N.; Bruschini, C.; Cotta-Ramusino, A.; D'Antone, I.; Malferrari, L.; Mazzanti, P.; Musico, P.; Novelli, P.; Odorici, F.; Odorico, R.; Passaseo, M.; Zuffa, M.

    1996-01-01

    Results from a neural-network trigger based on the digital MA16 chip of Siemens are reported. The neural trigger has been applied to data from the WA92 experiment, looking for beauty particles, which have been collected during a run in which a neural trigger module based on Intel's analog neural chip ETANN operated, as already reported. The MA16 board hosting the chip has a 16-bit I/O precision and a 53-bit precision for internal calculations. It operated at 50 MHz, yielding a response time for a 16 input-variable net of 3 μs for a Fisher discriminant (1-layer net) and of 6 μs for a 2-layer net. Results are compared with those previously obtained with the ETANN trigger. (orig.)

  6. Results from a MA16-based neural trigger in an experiment looking for beauty

    Energy Technology Data Exchange (ETDEWEB)

    Baldanza, C. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Beichter, J. [Siemens AG, ZFE T ME2, 81730 Munich (Germany); Bisi, F. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Bruels, N. [Siemens AG, ZFE T ME2, 81730 Munich (Germany); Bruschini, C. [INFN/Genoa, Via Dodecaneso 33, 16146 Genoa (Italy); Cotta-Ramusino, A. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); D'Antone, I. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Malferrari, L. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Mazzanti, P. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Musico, P. [INFN/Genoa, Via Dodecaneso 33, 16146 Genoa (Italy); Novelli, P. [INFN/Genoa, Via Dodecaneso 33, 16146 Genoa (Italy); Odorici, F. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Odorico, R. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy); Passaseo, M. [CERN, 1211 Geneva 23 (Switzerland); Zuffa, M. [Istituto Nazionale di Fisica Nucleare, Bologna (Italy)

    1996-07-11

    Results from a neural-network trigger based on the digital MA16 chip of Siemens are reported. The neural trigger has been applied to data from the WA92 experiment, looking for beauty particles, which have been collected during a run in which a neural trigger module based on Intel's analog neural chip ETANN operated, as already reported. The MA16 board hosting the chip has a 16-bit I/O precision and a 53-bit precision for internal calculations. It operated at 50 MHz, yielding a response time for a 16 input-variable net of 3 μs for a Fisher discriminant (1-layer net) and of 6 μs for a 2-layer net. Results are compared with those previously obtained with the ETANN trigger. (orig.).

  7. Computational neural network regression model for Host based Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Gautam

    2016-09-01

    Full Text Available Gathering and storing information in secure systems has become a challenging task due to increasing cyber-attacks. There exist computational neural network techniques designed for intrusion detection systems, which provide security to a single machine as well as to an entire network's machines. In this paper, we have used two types of computational neural network models, namely, the Generalized Regression Neural Network (GRNN) model and the Multilayer Perceptron Neural Network (MPNN) model, for a host-based intrusion detection system using log files generated by a single personal computer. The simulation results show the correctly classified percentages of the normal and abnormal (intrusion) classes using a confusion matrix. On the basis of the results and discussion, we found that the Host based Intrusion Systems Model (HISM) significantly improved the detection accuracy while retaining a minimum false alarm rate.
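    The confusion-matrix evaluation used here boils down to four counts; a minimal sketch for the two-class (normal vs. intrusion) case, with detection accuracy and false alarm rate derived from them:

```python
def confusion_counts(actual, predicted):
    """Counts for the two-class case: 1 = intrusion (abnormal), 0 = normal."""
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
    tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
    return tp, tn, fp, fn

def accuracy_and_false_alarm_rate(actual, predicted):
    # Accuracy = correct / all; false alarm rate = false positives among normals.
    tp, tn, fp, fn = confusion_counts(actual, predicted)
    return (tp + tn) / len(actual), fp / (fp + tn)
```

    The trade-off the record highlights is exactly between these two numbers: raising the detector's sensitivity improves accuracy on intrusions but risks inflating the false alarm rate.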

  8. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    KAUST Repository

    Naous, Rawan

    2016-11-02

    In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in their operation. Building on this inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses, and two approaches to stochastic neural networks are investigated. Aside from the size and area perspective, the impact of these two approaches on system performance, in terms of accuracy, recognition rates, and learning, and where the memristor best falls into place, are the main points of comparison considered.

  9. Neural network based PWM AC chopper fed induction motor drive

    Directory of Open Access Journals (Sweden)

    Venkatesan Jamuna

    2009-01-01

    Full Text Available In this paper, a new Simulink model for a neural network controlled PWM AC chopper fed single phase induction motor is proposed. Closed-loop speed control is achieved using a neural network controller. To maintain a constant fluid flow with a variation in pressure head, drives such as fans and pumps are operated with closed-loop speed control. The need to improve the quality and reliability of the drive circuit has increased because of the growing demand for improved motor drive performance. With the increased availability of MOSFETs and IGBTs, PWM converters can be used efficiently in low and medium power applications. From the simulation studies, it is seen that the PWM AC chopper has a better harmonic spectrum and lower copper loss than the phase-controlled AC chopper. It is observed that the drive system with the proposed model produces better dynamic performance, reduced overshoot and fast transient response.
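    For an idealized PWM AC chopper (switching far above the mains frequency, resistive load, lossless switches), the output RMS voltage follows the square root of the duty ratio — the relation a speed controller exploits when it commands a duty cycle. A minimal sketch under those idealizing assumptions:

```python
def chopper_rms(v_in_rms, duty):
    """Idealized PWM AC chopper: the output equals the input during the on-time
    and zero otherwise, so V_out,rms = V_in,rms * sqrt(duty ratio)."""
    if not 0.0 <= duty <= 1.0:
        raise ValueError("duty ratio must lie in [0, 1]")
    return v_in_rms * duty ** 0.5
```

    A real drive departs from this through device drops and load inductance, which is part of what the Simulink model above captures.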

  10. Stock prices forecasting based on wavelet neural networks with PSO

    OpenAIRE

    Wang Kai-Cheng; Yang Chi-I; Chang Kuei-Fang

    2017-01-01

    This research examines the forecasting performance of wavelet neural network (WNN) model using published stock data obtained from Financial Times Stock Exchange (FTSE) Taiwan Stock Exchange (TWSE) 50 index, also known as Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX), hereinafter referred to as Taiwan 50. Our WNN model uses particle swarm optimization (PSO) to choose the appropriate initial network values for different companies. The findings come with two advantages. First...

  11. Neuronal spike sorting based on radial basis function neural networks

    Directory of Open Access Journals (Sweden)

    Taghavi Kani M

    2011-02-01

    Full Text Available Background: Studying the behavior of a society of neurons, extracting the communication mechanisms of the brain with other tissues, finding treatments for some nervous system diseases and designing neuroprosthetic devices require an algorithm to sort neural spikes automatically. However, sorting neural spikes is a challenging task because of the low signal-to-noise ratio (SNR) of the spikes. The main purpose of this study was to design an automatic algorithm for classifying neuronal spikes that are emitted from a specific region of the nervous system. Methods: The spike sorting process usually consists of three stages: detection, feature extraction and sorting. We initially used signal statistics to detect neural spikes. Then, we chose a limited number of typical spikes as features and finally used them to train a radial basis function (RBF) neural network to sort the spikes. In most spike sorting devices, these signals are not linearly discriminable; in order to solve this problem, the aforesaid RBF neural network was used. Results: After the learning process, our proposed algorithm classified any arbitrary spike. The obtained results showed that even though the proposed Radial Basis Spike Sorter (RBSS) reached the same error as previous methods, its computational costs were much lower. Moreover, the competitive points of the proposed algorithm were its good speed and low computational complexity. Conclusion: Regarding the results of this study, the proposed algorithm seems to serve the purpose of procedures that require real-time processing and spike sorting.
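    An RBF layer maps a spike waveform to Gaussian activations against stored prototype spikes; this sketch pairs it with a nearest-prototype read-out in place of a trained output layer (widths, prototypes and the read-out are illustrative, not the paper's RBSS):

```python
import math

def rbf_activations(spike, prototypes, width=1.0):
    """Gaussian radial-basis activations of a spike against stored prototypes."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [math.exp(-dist2(spike, p) / (2.0 * width ** 2)) for p in prototypes]

def sort_spike(spike, prototypes):
    # Winner-take-all read-out standing in for the trained output layer:
    # the class of the most strongly activated prototype wins.
    acts = rbf_activations(spike, prototypes)
    return max(range(len(acts)), key=acts.__getitem__)
```

    Because the Gaussian units carve out localized regions of feature space, classes that are not linearly separable can still be told apart, which is the motivation stated in the abstract.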

  12. Neural bases of selective attention in action video game players

    OpenAIRE

    Bavelier, D; Achtman, RL; Mani, M; Föcker, J

    2011-01-01

    Over the past few years, the very act of playing action video games has been shown to enhance several different aspects of visual selective attention. Yet little is known about the neural mechanisms that mediate such attentional benefits. A review of the aspects of attention enhanced in action game players suggests there are changes in the mechanisms that control attention allocation and its efficiency (Hubert-Wallander et al., 2010). The present study used brain imaging to test this hypothes...

  13. Neural network-based expert system for severe accident management

    International Nuclear Information System (INIS)

    Klopp, G.T.; Silverman, E.B.

    1992-01-01

    This paper presents the results of the second phase of a three-phase Severe Accident Management expert system program underway at Commonwealth Edison Company (CECo). Phase I successfully demonstrated the feasibility of Artificial Neural Networks to support several of the objectives of severe accident management. Simulated accident scenarios were generated by the Modular Accident Analysis Program (MAAP) code currently in use by CECo as part of their Individual Plant Evaluations (IPE)/Accident Management Program. The primary objectives of the second phase were to develop and demonstrate four capabilities of neural networks with respect to nuclear power plant severe accident monitoring and prediction. The results of this work would form the foundation of a demonstration system which included expert system performance features. These capabilities included the ability to: (1) Predict the time available prior to support plate (and reactor vessel) failure; (2) Calculate the time remaining until recovery actions were too late to prevent core damage; (3) Predict future parameter values of each of the MAAP parameter variables; and (4) Detect simulated sensor failure and provide best-value estimates for further processing in the presence of a sensor failure. A variety of accident scenarios for the Zion and Dresden plants were used to train and test the neural network expert system. These included large and small break LOCAs as well as a range of transient events. 3 refs., 1 fig., 1 tab

  14. Three neural network based sensor systems for environmental monitoring

    International Nuclear Information System (INIS)

    Keller, P.E.; Kouzes, R.T.; Kangas, L.J.

    1994-05-01

    Compact, portable systems capable of quickly identifying contaminants in the field are of great importance when monitoring the environment. One of the missions of the Pacific Northwest Laboratory is to examine and develop new technologies for environmental restoration and waste management at the Hanford Site. In this paper, three prototype sensing systems are discussed. These prototypes are composed of sensing elements, a data acquisition system, a computer, and a neural network implemented in software, and are capable of automatically identifying contaminants. The first system employs an array of tin-oxide gas sensors and is used to identify chemical vapors. The second system employs an array of optical sensors and is used to identify the composition of chemical dyes in liquids. The third system contains a portable gamma-ray spectrometer and is used to identify radioactive isotopes. In these systems, the neural network is used to identify the composition of the sensed contaminant. With a neural network, the intense computation takes place during the training process. Once the network is trained, operation consists of propagating the data through the network. Since the computation involved during operation consists of vector-matrix multiplication and the application of look-up tables, unknown samples can be rapidly identified in the field.
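    The operational mode described above, inference reduced to vector-matrix multiplications plus look-up tables, can be illustrated with a minimal sketch. The table resolution, tanh activation, and two-layer shape are assumptions for illustration, not details from the paper:

```python
import numpy as np

# Precomputed activation table: tanh sampled on [-5, 5] at step 0.01
LUT = np.tanh(np.linspace(-5, 5, 1001))

def act(x):
    """Activation via table lookup instead of evaluating tanh at run time."""
    idx = np.clip(((x + 5) / 10 * 1000).round().astype(int), 0, 1000)
    return LUT[idx]

def identify(sensor_reading, W1, W2):
    """Trained-network inference: two matvecs plus table-lookup activations."""
    return act(W2 @ act(W1 @ sensor_reading))
```

    The expensive part (fitting W1 and W2) happens offline during training; field operation is only the two products and lookups shown here.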

  15. Down image recognition based on deep convolutional neural network

    Directory of Open Access Journals (Sweden)

    Wenzhu Yang

    2018-06-01

    Full Text Available Because of the scale and the various shapes of down in the image, it is difficult for traditional image recognition methods to correctly recognize the type of a down image and reach the required recognition accuracy, even for the Traditional Convolutional Neural Network (TCNN). To deal with the above problems, a Deep Convolutional Neural Network (DCNN) for down image classification is constructed, and a new weight initialization method is proposed. Firstly, the salient regions of a down image are cut from the image using the visual saliency model. Then, these salient regions are used to train a sparse autoencoder and obtain a collection of convolutional filters that accord with the statistical characteristics of the dataset. At last, a DCNN with the Inception module and its variants is constructed. To improve the recognition accuracy, the depth of the network is deepened. The experimental results indicate that the constructed DCNN increases the recognition accuracy by 2.7% compared to TCNN when recognizing down in images. The convergence rate of the proposed DCNN with the new weight initialization method is improved by 25.5% compared to TCNN. Keywords: Deep convolutional neural network, Weight initialization, Sparse autoencoder, Visual saliency model, Image recognition

  16. Parallel protein secondary structure prediction based on neural networks.

    Science.gov (United States)

    Zhong, Wei; Altun, Gulsah; Tian, Xinmin; Harrison, Robert; Tai, Phang C; Pan, Yi

    2004-01-01

    Protein secondary structure prediction has a fundamental influence on today's bioinformatics research. In this work, binary and tertiary classifiers of protein secondary structure prediction are implemented on the Denoeux belief neural network (DBNN) architecture. Hydrophobicity matrix, orthogonal matrix, BLOSUM62 and PSSM (position specific scoring matrix) are experimented with separately as the encoding schemes for DBNN. The experimental results contribute to the design of new encoding schemes. The new binary classifier for Helix versus not Helix (~H) for DBNN produces a prediction accuracy of 87% when PSSM is used for the input profile. The performance of the DBNN binary classifier is comparable to other best prediction methods. The good test results for binary classifiers open a new approach for protein structure prediction with neural networks. Due to the time-consuming task of training the neural networks, Pthread and OpenMP are employed to parallelize DBNN on the hyperthreading-enabled Intel architecture. Speedup for 16 Pthreads is 4.9 and speedup for 16 OpenMP threads is 4 in the 4-processor shared memory architecture. Both the OpenMP and the Pthread speedup performance are superior to those of other research. With the new parallel training algorithm, thousands of amino acids can be processed in a reasonable amount of time. Our research also shows that hyperthreading technology for the Intel architecture is efficient for parallel biological algorithms.

  17. Expert music performance: cognitive, neural, and developmental bases.

    Science.gov (United States)

    Brown, Rachel M; Zatorre, Robert J; Penhune, Virginia B

    2015-01-01

    In this chapter, we explore what happens in the brain of an expert musician during performance. Understanding expert music performance is interesting to cognitive neuroscientists not only because it tests the limits of human memory and movement, but also because studying expert musicianship can help us understand skilled human behavior in general. In this chapter, we outline important facets of our current understanding of the cognitive and neural basis for music performance, and developmental factors that may underlie musical ability. We address three main questions. (1) What is expert performance? (2) How do musicians achieve expert-level performance? (3) How does expert performance come about? We address the first question by describing musicians' ability to remember, plan, execute, and monitor their performances in order to perform music accurately and expressively. We address the second question by reviewing evidence for possible cognitive and neural mechanisms that may underlie or contribute to expert music performance, including the integration of sound and movement, feedforward and feedback motor control processes, expectancy, and imagery. We further discuss how neural circuits in auditory, motor, parietal, subcortical, and frontal cortex all contribute to different facets of musical expertise. Finally, we address the third question by reviewing evidence for the heritability of musical expertise and for how expertise develops through training and practice. We end by discussing outlooks for future work. © 2015 Elsevier B.V. All rights reserved.

  18. The classicality and quantumness of a quantum ensemble

    International Nuclear Information System (INIS)

    Zhu Xuanmin; Pang Shengshi; Wu Shengjun; Liu Quanhui

    2011-01-01

    In this Letter, we investigate the classicality and quantumness of a quantum ensemble. We define a quantity called ensemble classicality based on classical cloning strategy (ECCC) to characterize how classical a quantum ensemble is. An ensemble of commuting states has a unit ECCC, while a general ensemble can have an ECCC less than 1. We also study how quantum an ensemble is by defining a related quantity called quantumness. We find that the classicality of an ensemble is closely related to how perfectly the ensemble can be cloned, and that the quantumness of the ensemble used in a quantum key distribution (QKD) protocol is exactly the attainable lower bound of the error rate in the sifted key. - Highlights: → A quantity is defined to characterize how classical a quantum ensemble is. → The classicality of an ensemble is closely related to the cloning performance. → Another quantity is also defined to investigate how quantum an ensemble is. → This quantity gives the lower bound of the error rate in a QKD protocol.

  19. A Bootstrap Neural Network Based Heterogeneous Panel Unit Root Test: Application to Exchange Rates

    OpenAIRE

    Christian de Peretti; Carole Siani; Mario Cerrato

    2010-01-01

    This paper proposes a bootstrap artificial neural network based panel unit root test in a dynamic heterogeneous panel context. An application to a panel of bilateral real exchange rate series with the US Dollar from the 20 major OECD countries is provided to investigate Purchasing Power Parity (PPP). The combination of neural networks and bootstrapping significantly changes the findings of the economic study in favour of PPP.

  20. Breakout Prediction Based on BP Neural Network in Continuous Casting Process

    Directory of Open Access Journals (Sweden)

    Zhang Ben-guo

    2016-01-01

    Full Text Available An improved BP neural network model, based on the Levenberg-Marquardt algorithm, was presented by modifying the learning algorithm of the traditional BP neural network, and was applied to the breakout prediction system in the continuous casting process. The results showed that the accuracy rate of the model for the temperature pattern of sticking breakout was 96.43% and the quote rate was 100%, which verified the feasibility of the model.
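    The Levenberg-Marquardt modification mentioned here replaces plain gradient descent with a damped Gauss-Newton update, Δw = -(JᵀJ + μI)⁻¹Jᵀr. A minimal sketch of one such step, shown on a linear model standing in for one network layer (the damping value μ and the linear stand-in are illustrative assumptions):

```python
import numpy as np

def lm_step(w, X, y, mu):
    """One Levenberg-Marquardt update for residuals r = X @ w - y.

    A linear model is used as a stand-in for one layer of a BP network,
    so the Jacobian of the residuals w.r.t. the weights is simply X."""
    r = X @ w - y
    J = X
    H = J.T @ J + mu * np.eye(len(w))   # damped Gauss-Newton Hessian
    return w - np.linalg.solve(H, J.T @ r)
```

    In practice μ is adapted between iterations: increased when a step fails to reduce the error (gradient-descent-like behavior) and decreased when it succeeds (Gauss-Newton-like behavior), which is what gives LM its fast, stable convergence for training.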

  1. Cyclone track forecasting based on satellite images using artificial neural networks

    OpenAIRE

    Kovordanyi, Rita; Roy, Chandan

    2009-01-01

    Many places around the world are exposed to tropical cyclones and associated storm surges. In spite of massive efforts, a great number of people die each year as a result of cyclone events. To mitigate this damage, improved forecasting techniques must be developed. The technique presented here uses artificial neural networks to interpret NOAA-AVHRR satellite images. A multi-layer neural network, resembling the human visual system, was trained to forecast the movement of cyclones based on sate...

  2. Finite time synchronization of memristor-based Cohen-Grossberg neural networks with mixed delays

    OpenAIRE

    Chen, Chuan; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-01-01

    Finite time synchronization, which means synchronization can be achieved in a settling time, is desirable in some practical applications. However, most of the published results on finite time synchronization don't include delays or only include discrete delays. In view of the fact that distributed delays inevitably exist in neural networks, this paper aims to investigate the finite time synchronization of memristor-based Cohen-Grossberg neural networks (MCGNNs) with both discrete delay and di...

  3. Neural Network based Minimization of BER in Multi-User Detection in SDMA

    OpenAIRE

    VENKATA REDDY METTU; KRISHAN KUMAR,; SRIKANTH PULLABHATLA

    2011-01-01

    In this paper we investigate the use of neural network based minimization of BER in MUD. Neural networks can be used for linear design, adaptive prediction, amplitude detection, character recognition and many other applications. Adaptive prediction is used in detecting the errors caused in an AWGN channel. These errors are rectified by using the Widrow-Hoff algorithm by updating their weights and adaptive prediction methods. Both Widrow-Hoff and adaptive prediction have been used for rectifying the e...

  4. ReSeg: A Recurrent Neural Network-Based Model for Semantic Segmentation

    OpenAIRE

    Visin, Francesco; Ciccone, Marco; Romero, Adriana; Kastner, Kyle; Cho, Kyunghyun; Bengio, Yoshua; Matteucci, Matteo; Courville, Aaron

    2015-01-01

    We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNN that sweep the image horizontally ...

  5. Creating ensembles of decision trees through sampling

    Science.gov (United States)

    Kamath, Chandrika; Cantu-Paz, Erick

    2005-08-30

    A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data, splitting the data, and combining multiple decision trees in ensembles.
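    The patented method above, evaluating splits on random samples of the data and combining multiple trees into an ensemble, is in spirit close to bagging. A toy sketch using depth-1 trees (decision stumps) trained on bootstrap samples; the stump learner and all function names are illustrative, not the patent's modules:

```python
import numpy as np

def fit_stump(X, y):
    """Exhaustive one-feature threshold classifier (a depth-1 decision tree)."""
    best, best_acc = None, -1.0
    for j in range(X.shape[1]):
        for t in X[:, j]:
            for hi in (0, 1):  # class predicted when feature > threshold
                pred = np.where(X[:, j] > t, hi, 1 - hi)
                acc = (pred == y).mean()
                if acc > best_acc:
                    best, best_acc = (j, t, hi), acc
    return best

def predict_stump(stump, X):
    j, t, hi = stump
    return np.where(X[:, j] > t, hi, 1 - hi)

def bagged_stumps(X, y, n_trees=25, seed=0):
    """Train each stump on a bootstrap sample; combine by majority vote."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))  # sample with replacement
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def predict_bagged(stumps, X):
    votes = np.stack([predict_stump(s, X) for s in stumps])
    return (votes.mean(axis=0) > 0.5).astype(int)
```

    Evaluating candidate splits on a sample rather than the full sorted data is what makes this family of methods scale; the ensemble vote then smooths out the extra variance the sampling introduces.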

  6. DWI-Based Neural Fingerprinting Technology: A Preliminary Study on Stroke Analysis

    Directory of Open Access Journals (Sweden)

    Chenfei Ye

    2014-01-01

    Full Text Available Stroke is a common neural disorder in neurology clinics. Magnetic resonance imaging (MRI) has become an important tool to assess the neural physiological changes under stroke, such as diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI). Quantitative analysis of MRI images would help medical doctors to localize the stroke area in the diagnosis in terms of structural information and physiological characterization. However, current quantitative approaches can only provide localization of the disorder rather than measure physiological variation of subtypes of ischemic stroke. In the current study, we hypothesize that each kind of neural disorder would have its unique physiological characteristics, which could be reflected by DWI images on different gradients. Based on this hypothesis, a DWI-based neural fingerprinting technology was proposed to classify subtypes of ischemic stroke. The neural fingerprint was constructed by the signal intensity of the region of interest (ROI) on the DWI images under different gradients. The fingerprint derived from the manually drawn ROI could classify the subtypes with accuracy 100%. However, the classification accuracy was worse when using semiautomatic and automatic methods in ROI segmentation. The preliminary results showed promising potential of DWI-based neural fingerprinting technology in stroke subtype classification. Further studies will be carried out for enhancing the fingerprinting accuracy and its application in other clinical practices.

  7. DWI-based neural fingerprinting technology: a preliminary study on stroke analysis.

    Science.gov (United States)

    Ye, Chenfei; Ma, Heather Ting; Wu, Jun; Yang, Pengfei; Chen, Xuhui; Yang, Zhengyi; Ma, Jingbo

    2014-01-01

    Stroke is a common neural disorder in neurology clinics. Magnetic resonance imaging (MRI) has become an important tool to assess the neural physiological changes under stroke, such as diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI). Quantitative analysis of MRI images would help medical doctors to localize the stroke area in the diagnosis in terms of structural information and physiological characterization. However, current quantitative approaches can only provide localization of the disorder rather than measure physiological variation of subtypes of ischemic stroke. In the current study, we hypothesize that each kind of neural disorder would have its unique physiological characteristics, which could be reflected by DWI images on different gradients. Based on this hypothesis, a DWI-based neural fingerprinting technology was proposed to classify subtypes of ischemic stroke. The neural fingerprint was constructed by the signal intensity of the region of interest (ROI) on the DWI images under different gradients. The fingerprint derived from the manually drawn ROI could classify the subtypes with accuracy 100%. However, the classification accuracy was worse when using semiautomatic and automatic methods in ROI segmentation. The preliminary results showed promising potential of DWI-based neural fingerprinting technology in stroke subtype classification. Further studies will be carried out for enhancing the fingerprinting accuracy and its application in other clinical practices.
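    The fingerprinting idea, a vector of ROI signal intensities across diffusion gradients matched against subtype templates, can be sketched as a nearest-template classifier. The L2 normalisation, cosine matching, and the subtype names are assumptions; the paper does not specify the matcher:

```python
import numpy as np

def fingerprint(roi_intensities):
    """Fingerprint = mean ROI signal per diffusion gradient, L2-normalised.

    roi_intensities: (n_voxels, n_gradients) intensities inside the ROI."""
    v = np.asarray(roi_intensities, float).mean(axis=0)
    return v / np.linalg.norm(v)

def classify(fp, templates):
    """Nearest template by cosine similarity; templates: {subtype: fingerprint}."""
    return max(templates, key=lambda k: float(fp @ templates[k]))
```

    Because the whole decision reduces to one dot product per subtype, the method's accuracy hinges almost entirely on the ROI segmentation, consistent with the drop the authors report for semiautomatic and automatic ROIs.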

  8. Data assimilation in integrated hydrological modeling using ensemble Kalman filtering

    DEFF Research Database (Denmark)

    Rasmussen, Jørn; Madsen, H.; Jensen, Karsten Høgh

    2015-01-01

    Groundwater head and stream discharge is assimilated using the ensemble transform Kalman filter in an integrated hydrological model with the aim of studying the relationship between the filter performance and the ensemble size. In an attempt to reduce the required number of ensemble members...... and estimating parameters requires a much larger ensemble size than just assimilating groundwater head observations. However, the required ensemble size can be greatly reduced with the use of adaptive localization, which by far outperforms distance-based localization. The study is conducted using synthetic data...
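    The ensemble Kalman filter step that underlies such assimilation updates every ensemble member toward the observations, with the gain computed from the ensemble covariance. A generic stochastic-EnKF sketch, not the cited model setup; dimensions, noise levels, and the perturbed-observation variant are illustrative:

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, obs_err_std, rng):
    """Stochastic EnKF analysis: X_a = X_f + K (y + eps - H X_f).

    ensemble: (n_state, n_members) forecast states
    H: (n_obs, n_state) observation operator."""
    X = ensemble
    n = X.shape[1]
    Xm = X - X.mean(axis=1, keepdims=True)
    P = Xm @ Xm.T / (n - 1)                       # ensemble covariance
    R = (obs_err_std ** 2) * np.eye(len(y_obs))   # observation error covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    # Perturb observations so the analysis ensemble has the right spread
    Y = y_obs[:, None] + obs_err_std * rng.standard_normal((len(y_obs), n))
    return X + K @ (Y - H @ X)
```

    The sampling error in P is exactly why ensemble size matters in the cited study, and why localization (tapering spurious long-range covariances) lets a small ensemble behave like a much larger one.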

  9. Credit scoring using ensemble of various classifiers on reduced feature set

    Directory of Open Access Journals (Sweden)

    Dahiya Shashi

    2015-01-01

    Full Text Available Credit scoring methods are widely used for evaluating loan applications in financial and banking institutions. A credit score identifies whether applicant customers belong to a good-risk applicant group or a bad-risk applicant group. These decisions are based on the demographic data of the customers, the overall business by the customer with the bank, and the loan payment history of the loan applicants. The advantages of using credit scoring models include reducing the cost of credit analysis, enabling faster credit decisions and diminishing possible risk. Many statistical and machine learning techniques such as logistic regression, support vector machines, neural networks and decision tree algorithms have been used independently and as hybrid credit scoring models. This paper proposes an ensemble-based technique combining seven individual models to increase the classification accuracy. Feature selection has also been used for selecting important attributes for classification. Cross classification was conducted using three data partitions. The German credit dataset, having 1000 instances and 21 attributes, is used in the present study. The results of the experiments revealed that the ensemble model yielded a very good accuracy when compared to individual models. In all three partitions, the ensemble model was able to correctly classify more than 80% of the loan customers as good creditors. Also, for the 70:30 partition there was a good impact of feature selection on the accuracy of classifiers. The results were improved for almost all individual models, including the ensemble model.
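    The core combination rule of such an ensemble, majority voting over the individual classifiers' 0/1 predictions, is simple to state in code. A minimal sketch (the seven member models themselves are out of scope here; an odd member count avoids ties):

```python
import numpy as np

def ensemble_vote(predictions):
    """Majority vote over per-model binary predictions.

    predictions: (n_models, n_samples) array-like of 0/1 labels."""
    votes = np.asarray(predictions)
    # A sample is classed 1 when more than half the models say 1
    return (votes.sum(axis=0) * 2 > votes.shape[0]).astype(int)
```

    Weighted voting (weighting each model by its validation accuracy) is a common refinement of this rule when the member models differ widely in quality.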

  10. Short-range ensemble predictions based on convection perturbations in the Eta Model for the Serra do Mar region in Brazil

    Science.gov (United States)

    Bustamante, J. F. F.; Chou, S. C.; Gomes, J. L.

    2009-04-01

    Southeast Brazil, in the coastal and mountain region called Serra do Mar between Sao Paulo and Rio de Janeiro, is subject to frequent events of landslides and floods. The Eta Model has been producing good-quality forecasts over South America at about 40-km horizontal resolution. For this type of hazard, however, more detailed and probabilistic information on the risks should be provided with the forecasts. Thus, a short-range ensemble prediction system (SREPS) based on the Eta Model is being constructed. Ensemble members derived from perturbed initial and lateral boundary conditions did not provide enough spread for the forecasts, so members with model physics perturbations are being included and tested. The objective of this work is to construct more members for the Eta SREPS by adding physics-perturbed members. The Eta Model is configured at 10-km resolution and 38 layers in the vertical. The domain covers most of Southeast Brazil, centered over the Serra do Mar region. The constructed members comprise variations of the Betts-Miller-Janjic (BMJ) and Kain-Fritsch (KF) cumulus parameterization schemes. Three members were constructed from the BMJ scheme by varying the saturation pressure deficit profile over land and sea, and 2 members of the KF scheme were included using the standard KF and a version of the KF scheme with an added momentum flux. One of the runs with the BMJ scheme is the control run, as it was used for the initial-condition perturbation SREPS. The forecasts were tested for 6 cases of South Atlantic Convergence Zone (SACZ) events. The SACZ is a common summer-season feature of the Southern Hemisphere that causes persistent rain for a few days over Southeast Brazil, and it frequently organizes over the Serra do Mar region. These events are particularly interesting because the persistent rains can accumulate large amounts and cause generalized landslides and deaths. With respect to precipitation, the KF scheme versions have shown to be able to reach the

  11. Evaluating an ensemble classification approach for crop diversity verification in Danish greening subsidy control

    DEFF Research Database (Denmark)

    Chellasamy, Menaka; Ferre, Ty; Greve, Mogens Humlekrog

    2016-01-01

    Beginning in 2015, Danish farmers are obliged to meet specific crop diversification rules based on total land area and number of crops cultivated to be eligible for new greening subsidies. Hence, there is a need for the Danish government to extend their subsidy control system to verify farmers......’ declarations to warrant greening payments under the new crop diversification rules. Remote Sensing (RS) technology has been used since 1992 to control farmers’ subsidies in Denmark. However, a proper RS-based approach is yet to be finalised to validate new crop diversity requirements designed for assessing...... compliance under the recent subsidy scheme (2014–2020). This study uses an ensemble classification approach (proposed by the authors in previous studies) for validating the crop diversity requirements of the new rules. The approach uses a neural network ensemble classification system with bi-temporal (spring...

  12. A web-based system for neural network based classification in temporomandibular joint osteoarthritis.

    Science.gov (United States)

    de Dumast, Priscille; Mirabel, Clément; Cevidanes, Lucia; Ruellas, Antonio; Yatabe, Marilia; Ioshida, Marcos; Ribera, Nina Tubau; Michoud, Loic; Gomes, Liliane; Huang, Chao; Zhu, Hongtu; Muniz, Luciana; Shoukri, Brandon; Paniagua, Beatriz; Styner, Martin; Pieper, Steve; Budin, Francois; Vimort, Jean-Baptiste; Pascal, Laura; Prieto, Juan Carlos

    2018-07-01

    The purpose of this study is to describe the methodological innovations of a web-based system for storage, integration and computation of biomedical data, using a training imaging dataset to remotely compute a deep neural network classifier of temporomandibular joint osteoarthritis (TMJOA). This study imaging dataset consisted of three-dimensional (3D) surface meshes of mandibular condyles constructed from cone beam computed tomography (CBCT) scans. The training dataset consisted of 259 condyles, 105 from control subjects and 154 from patients with diagnosis of TMJ OA. For the image analysis classification, 34 right and left condyles from 17 patients (39.9 ± 11.7 years), who experienced signs and symptoms of the disease for less than 5 years, were included as the testing dataset. For the integrative statistical model of clinical, biological and imaging markers, the sample consisted of the same 17 test OA subjects and 17 age and sex matched control subjects (39.4 ± 15.4 years), who did not show any sign or symptom of OA. For these 34 subjects, a standardized clinical questionnaire, blood and saliva samples were also collected. The technological methodologies in this study include a deep neural network classifier of 3D condylar morphology (ShapeVariationAnalyzer, SVA), and a flexible web-based system for data storage, computation and integration (DSCI) of high dimensional imaging, clinical, and biological data. The DSCI system trained and tested the neural network, indicating 5 stages of structural degenerative changes in condylar morphology in the TMJ with 91% close agreement between the clinician consensus and the SVA classifier. The DSCI remotely ran with a novel application of a statistical analysis, the Multivariate Functional Shape Data Analysis, that computed high dimensional correlations between shape 3D coordinates, clinical pain levels and levels of biological markers, and then graphically displayed the computation results. The findings of this

  13. Ensemble Data Mining Methods

    Science.gov (United States)

    Oza, Nikunj C.

    2004-01-01

    Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve better prediction accuracy than any of the individual models could on their own. The basic goal when designing an ensemble is the same as when establishing a committee of people: each member of the committee should be as competent as possible, but the members should be complementary to one another. If the members are not complementary, i.e., if they always agree, then the committee is unnecessary---any one member is sufficient. If the members are complementary, then when one or a few members make an error, the probability is high that the remaining members can correct this error. Research in ensemble methods has largely revolved around designing ensembles consisting of competent yet complementary models.
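    The committee argument above can be made quantitative: if members err independently with rate p < 0.5, the majority of an odd-sized committee errs with a binomial tail probability that shrinks as the committee grows. A small sketch (independence is the idealising assumption):

```python
from math import comb

def committee_error(n, p):
    """P(majority of n independent members err), per-member error rate p.

    Sums the binomial tail over outcomes where more than half the members
    are wrong; n is assumed odd so there are no ties."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))
```

    For p = 0.3, a 5-member committee already errs only about 16% of the time, which is the formal version of "complementary members correct each other"; perfectly correlated members, by contrast, gain nothing.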

  14. Ensemble Data Mining Methods

    Data.gov (United States)

    National Aeronautics and Space Administration — Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve...

  15. Neural-net based real-time economic dispatch for thermal power plants

    Energy Technology Data Exchange (ETDEWEB)

    Djukanovic, M.; Milosevic, B. [Inst. Nikola Tesla, Belgrade (Yugoslavia). Dept. of Power Systems; Calovic, M. [Univ. of Belgrade (Yugoslavia). Dept. of Electrical Engineering; Sobajic, D.J. [Electric Power Research Inst., Palo Alto, CA (United States)

    1996-12-01

    This paper proposes the application of artificial neural networks to real-time optimal generation dispatch of thermal units. The approach can take into account the operational requirements and network losses. The proposed economic dispatch uses an artificial neural network (ANN) for generation of penalty factors, depending on the input generator powers and identified system load change. Then, a few additional iterations are performed within an iterative computation procedure for the solution of coordination equations, by using reference-bus penalty-factors derived from the Newton-Raphson load flow. A coordination technique for environmental and economic dispatch of pure thermal systems, based on the neural-net theory for simplified solution algorithms and improved man-machine interface is introduced. Numerical results on two test examples show that the proposed algorithm can efficiently and accurately develop optimal and feasible generator output trajectories, by applying neural-net forecasts of system load patterns.
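    The coordination equations mentioned here ultimately enforce equal marginal (incremental) costs across the committed units. A classical lambda-iteration sketch for lossless economic dispatch with quadratic costs illustrates that principle; it deliberately omits the paper's penalty factors and ANN components, and generator limits are ignored for brevity:

```python
def economic_dispatch(cost_coeffs, demand, tol=1e-6):
    """Lambda iteration: find the marginal cost lambda at which unit outputs
    sum to the demand.  Quadratic costs C_i(P) = a_i P^2 + b_i P give the
    optimality condition dC_i/dP = lambda, i.e. P_i = (lambda - b_i)/(2 a_i).

    cost_coeffs: list of (a_i, b_i) per unit."""
    lo, hi = 0.0, 1e4  # bisection bracket on lambda (assumed wide enough)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        total = sum(max((lam - b) / (2 * a), 0.0) for a, b in cost_coeffs)
        lo, hi = (lam, hi) if total < demand else (lo, lam)
    return [max((lam - b) / (2 * a), 0.0) for a, b in cost_coeffs]
```

    In the paper's scheme the ANN supplies network-loss penalty factors that scale each unit's marginal cost before this balance is struck, so the dispatch remains optimal under transmission losses.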

  16. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2015-01-01

    Full Text Available Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.

  17. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    Science.gov (United States)

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.
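    The MapReduce pattern described in this record, partition the training data, compute partial results per shard in a map phase, and combine them in a reduce phase, can be sketched for gradient computation. A single linear neuron stands in for the network for brevity; the shard layout and learning rate are illustrative assumptions:

```python
import numpy as np
from functools import reduce

def map_gradient(shard, w):
    """Map phase: per-shard gradient of squared error for a linear neuron."""
    X, y = shard
    return X.T @ (X @ w - y), len(y)

def reduce_gradients(a, b):
    """Reduce phase: sum partial gradients and sample counts."""
    return a[0] + b[0], a[1] + b[1]

def parallel_step(shards, w, lr=0.1):
    """One synchronous training step over all data shards."""
    g, n = reduce(reduce_gradients, (map_gradient(s, w) for s in shards))
    return w - lr * g / n
```

    Because squared-error gradients are additive over samples, the reduced result is identical to a single-machine step; on a real cluster the map calls simply run on different nodes.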

  18. GIS-based groundwater potential analysis using novel ensemble weights-of-evidence with logistic regression and functional tree models.

    Science.gov (United States)

    Chen, Wei; Li, Hui; Hou, Enke; Wang, Shengquan; Wang, Guirong; Panahi, Mahdi; Li, Tao; Peng, Tao; Guo, Chen; Niu, Chao; Xiao, Lele; Wang, Jiale; Xie, Xiaoshen; Ahmad, Baharin Bin

    2018-09-01

    The aim of the current study was to produce groundwater spring potential maps using novel ensemble weights-of-evidence (WoE) with logistic regression (LR) and functional tree (FT) models. First, a total of 66 springs were identified by field surveys, out of which 70% of the spring locations were used for training the models and 30% of the spring locations were employed for the validation process. Second, a total of 14 affecting factors including aspect, altitude, slope, plan curvature, profile curvature, stream power index (SPI), topographic wetness index (TWI), sediment transport index (STI), lithology, normalized difference vegetation index (NDVI), land use, soil, distance to roads, and distance to streams was used to analyze the spatial relationship between these affecting factors and spring occurrences. Multicollinearity analysis and feature selection of the correlation attribute evaluation (CAE) method were employed to optimize the affecting factors. Subsequently, the novel ensembles of the WoE, LR, and FT models were constructed using the training dataset. Finally, the receiver operating characteristic (ROC) curves, standard error, confidence interval (CI) at 95%, and significance level P were employed to validate and compare the performance of three models. Overall, all three models performed well for groundwater spring potential evaluation. The prediction capability of the FT model, with the highest AUC values, the smallest standard errors, the narrowest CIs, and the smallest P values for the training and validation datasets, is better compared to those of other models. The groundwater spring potential maps can be adopted for the management of water resources and land use by planners and engineers. Copyright © 2018 Elsevier B.V. All rights reserved.
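    The weights-of-evidence component of such an ensemble assigns each conditioning factor a positive weight W+ = ln[P(F|spring)/P(F|no spring)] and a negative weight W- for factor absence. A minimal sketch of that calculation (binary factor, no variance correction, which the full method would add):

```python
import numpy as np

def weights_of_evidence(factor_present, event):
    """W+ and W- for one binary conditioning factor.

    factor_present, event: boolean arrays over map units (e.g. pixels)."""
    f = np.asarray(factor_present, bool)
    e = np.asarray(event, bool)
    p = lambda a, b: (a & b).sum() / b.sum()   # conditional proportion P(a|b)
    w_plus = np.log(p(f, e) / p(f, ~e))        # evidence when factor present
    w_minus = np.log(p(~f, e) / p(~f, ~e))     # evidence when factor absent
    return w_plus, w_minus
```

    Summing these weights over all factors gives a posterior log-odds of spring occurrence per map unit, which is the score the ensemble then feeds into the LR and FT models.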

  19. A comparative assessment of GIS-based data mining models and a novel ensemble model in groundwater well potential mapping

    Science.gov (United States)

    Naghibi, Seyed Amir; Moghaddam, Davood Davoodi; Kalantar, Bahareh; Pradhan, Biswajeet; Kisi, Ozgur

    2017-05-01

    In recent years, the application of ensemble models has increased tremendously in various types of natural hazard assessment, such as landslides and floods. However, the application of this kind of robust model to groundwater potential mapping is relatively new. This study applied four data mining algorithms, including AdaBoost, Bagging, generalized additive model (GAM), and Naive Bayes (NB) models, to map groundwater potential. Then, a novel frequency ratio data mining ensemble model (FREM) was introduced and evaluated. For this purpose, eleven groundwater conditioning factors (GCFs), including altitude, slope aspect, slope angle, plan curvature, stream power index (SPI), river density, distance from rivers, topographic wetness index (TWI), land use, normalized difference vegetation index (NDVI), and lithology, were mapped. A total of 281 well locations with high potential were selected. Wells were randomly partitioned into two classes for training the models (70%, or 197) and validating them (30%, or 84). The AdaBoost, Bagging, GAM, and NB algorithms were employed to produce groundwater potential maps (GPMs). The GPMs were categorized into potential classes using the natural breaks classification scheme. In the next stage, frequency ratio (FR) values were calculated for the outputs of the four aforementioned models and summed, and finally a GPM was produced using FREM. For validating the models, the area under the receiver operating characteristic (ROC) curve was calculated. The AUC for the prediction dataset was 94.8, 93.5, 92.6, 92.0, and 84.4% for the FREM, Bagging, AdaBoost, GAM, and NB models, respectively. The results indicated that FREM had the best performance among all the models. The better performance of the FREM model could be related to reduced overfitting and fewer possible errors. The other models, AdaBoost, Bagging, GAM, and NB, also produced acceptable performance in groundwater modelling. The GPMs produced in the current study may facilitate groundwater exploitation.
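    The FREM step described above, computing a frequency ratio per potential class for each model's map and summing the ratios per pixel across the ensemble, can be sketched in a few lines. The function names and the toy class maps are ours for illustration, not the study's code.

    ```python
    def frequency_ratio(class_map, well_mask):
        """Per-class frequency ratio: the share of wells falling in a class
        divided by the share of pixels in that class. class_map and well_mask
        are parallel lists (well_mask entries are 0/1)."""
        n = len(class_map)
        total_wells = sum(well_mask)
        fr = {}
        for c in set(class_map):
            pixels = sum(1 for v in class_map if v == c)
            wells = sum(w for v, w in zip(class_map, well_mask) if v == c)
            fr[c] = (wells / total_wells) / (pixels / n)
        return fr

    def frem(model_class_maps, well_mask):
        """Sum each pixel's frequency ratio across the ensemble members,
        yielding the combined FREM potential score per pixel."""
        per_model_fr = [frequency_ratio(m, well_mask) for m in model_class_maps]
        n = len(well_mask)
        return [sum(fr[m[i]] for fr, m in zip(per_model_fr, model_class_maps))
                for i in range(n)]
    ```

    A class with FR above 1 holds more wells than its areal share would predict, so pixels whose classes score high across all member models receive the largest summed FREM values.
    
    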

  20. Ensemble Classification of Alzheimer's Disease and Mild Cognitive Impairment Based on Complex Graph Measures from Diffusion Tensor Images

    Science.gov (United States)

    Ebadi, Ashkan; Dalboni da Rocha, Josué L.; Nagaraju, Dushyanth B.; Tovar-Moll, Fernanda; Bramati, Ivanei; Coutinho, Gabriel; Sitaram, Ranganatha; Rashidi, Parisa

    2017-01-01

    The human brain is a complex network of interacting regions. The gray matter regions of the brain are interconnected by white matter tracts, together forming one integrative complex network. In this article, we report our investigation of the potential of applying brain connectivity patterns as an aid in diagnosing Alzheimer's disease and Mild Cognitive Impairment (MCI). We performed pattern analysis of graph theoretical measures derived from Diffusion Tensor Imaging (DTI) data representing the structural brain networks of 45 subjects: 15 patients with Alzheimer's disease (AD), 15 patients with MCI, and 15 healthy controls (CT). We considered pair-wise class combinations of subjects, defining three separate classification tasks, i.e., AD-CT, AD-MCI, and CT-MCI, and used an ensemble classification module to perform them. Our ensemble framework with feature selection shows promising performance, with classification accuracies of 83.3% for AD vs. MCI, 80% for AD vs. CT, and 70% for MCI vs. CT. Moreover, our findings suggest that AD can be related to abnormalities in graph measures at Brodmann areas in the sensorimotor cortex and piriform cortex. In particular, node redundancy coefficient and load centrality in the primary motor cortex were recognized as good indicators of AD in contrast to MCI. In general, load centrality, betweenness centrality, and closeness centrality were found to be the most relevant network measures, as they were the top identified features at different nodes. Because of the small and not well-defined groups of AD and MCI patients, the present study should be regarded as a "proof of concept" for a procedure classifying MRI markers between AD dementia, MCI, and normal elderly individuals. Future studies with larger samples of subjects and more sophisticated patient exclusion criteria are necessary toward the development of a more precise technique for clinical diagnosis. PMID:28293162
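    Closeness centrality, one of the graph measures the study found most relevant, can be computed on an unweighted network with a breadth-first search. This toy stand-in uses a plain adjacency dict rather than the DTI-derived connectivity matrices; the function name and graph are our own illustration.

    ```python
    from collections import deque

    def closeness_centrality(adj, node):
        """Closeness centrality of `node` in an unweighted graph:
        (number of reachable nodes) / (sum of shortest-path distances).
        adj maps each node to a list of its neighbors."""
        # Breadth-first search from `node` to get shortest-path distances
        dist = {node: 0}
        queue = deque([node])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total = sum(dist.values())
        # Normalize by the number of other reachable nodes
        return (len(dist) - 1) / total if total else 0.0
    ```

    In a classification pipeline like the one described, such per-node measures become feature vectors on which the ensemble classifier is trained.
    
    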