WorldWideScience

Sample records for neural network clustering

  1. Clustering: a neural network approach.

    Science.gov (United States)

    Du, K-L

    2010-01-01

    Clustering is a fundamental data analysis method. It is widely used for pattern recognition, feature extraction, vector quantization (VQ), image segmentation, function approximation, and data mining. As an unsupervised classification technique, clustering identifies inherent structures present in a set of objects based on a similarity measure. Clustering methods can be based on statistical model identification (McLachlan & Basford, 1988) or competitive learning. In this paper, we give a comprehensive overview of competitive-learning-based clustering methods. Particular attention is given to a number of competitive-learning-based clustering neural networks such as the self-organizing map (SOM), learning vector quantization (LVQ), the neural gas, and the ART model, and to clustering algorithms such as the C-means, mountain/subtractive clustering, and fuzzy C-means (FCM) algorithms. Associated topics such as the under-utilization problem, fuzzy clustering, robust clustering, clustering based on non-Euclidean distance measures, supervised clustering, hierarchical clustering, as well as cluster validity are also described. Two examples are given to demonstrate the use of the clustering methods.
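
    A minimal, illustrative sketch (not from the paper) of the winner-take-all competitive-learning update that underlies the C-means/SOM-style clustering networks surveyed above; the toy data, learning rate and cluster count are invented for demonstration only:

    import numpy as np

    def competitive_learning(X, n_clusters=3, lr=0.1, epochs=20, seed=0):
        rng = np.random.default_rng(seed)
        # Initialise prototype (weight) vectors from randomly chosen data points.
        w = X[rng.choice(len(X), n_clusters, replace=False)].astype(float)
        for _ in range(epochs):
            for x in rng.permutation(X):
                j = np.argmin(np.linalg.norm(w - x, axis=1))  # winning neuron
                w[j] += lr * (x - w[j])                        # move winner toward input
        labels = np.array([np.argmin(np.linalg.norm(w - x, axis=1)) for x in X])
        return w, labels

    # Toy usage: two well-separated Gaussian blobs.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
    centres, labels = competitive_learning(X, n_clusters=2)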

  2. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.
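
    A hedged sketch of the kind of robustness probe described above (the model, input names and perturbation scale are illustrative stand-ins, not the ATLAS code): perturb a trained network's inputs and measure how much its outputs move.

    import numpy as np

    def output_shift(predict, X, scale=0.05, n_trials=10, seed=0):
        rng = np.random.default_rng(seed)
        baseline = predict(X)
        shifts = []
        for _ in range(n_trials):
            # Multiplicative smearing of the inputs, e.g. mimicking a charge miscalibration.
            X_var = X * (1.0 + rng.normal(0.0, scale, X.shape))
            shifts.append(np.abs(predict(X_var) - baseline).mean())
        return float(np.mean(shifts))

    # Toy stand-in for a trained cluster-splitting network.
    toy_net = lambda X: 1.0 / (1.0 + np.exp(-X.sum(axis=1)))
    X = np.random.default_rng(1).normal(size=(100, 7))
    print(output_shift(toy_net, X, scale=0.05))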

  3. Self-Taught convolutional neural networks for short text clustering.

    Science.gov (United States)

    Xu, Jiaming; Xu, Bo; Wang, Peng; Zheng, Suncong; Tian, Guanhua; Zhao, Jun; Xu, Bo

    2017-04-01

    Short text clustering is a challenging problem due to the sparseness of its text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC(2)), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representations in an unsupervised manner. In our framework, the original raw text features are first embedded into compact binary codes using an existing unsupervised dimensionality reduction method. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, while the output units are used to fit the pre-trained binary codes during training. Finally, we obtain the clusters by employing K-means on the learned representations. Extensive experimental results demonstrate that the proposed framework is effective, flexible and outperforms several popular clustering methods when tested on three public short text datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
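
    A much-simplified sketch of the STC(2) pipeline described above. The paper trains a convolutional network on word embeddings to fit the binary codes; here that stage is omitted (flagged in the comments) and only code generation plus the final K-means step are illustrated on invented toy texts.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import KMeans

    texts = ["cheap flights to rome", "rome flight deals",
             "python list comprehension", "sorting a python list"]

    # Step 1: raw text features -> low-dimensional representation -> binary codes.
    tfidf = TfidfVectorizer().fit_transform(texts)
    low_dim = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
    binary_codes = (low_dim > np.median(low_dim, axis=0)).astype(int)  # simple binarisation

    # Step 2 (omitted here): train a convolutional network on word embeddings whose
    # outputs are fitted to binary_codes, and take a hidden layer as the deep
    # representation.  We reuse low_dim as a stand-in for that representation.
    deep_repr = low_dim

    # Step 3: K-means on the learned representation gives the final clusters.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(deep_repr)
    print(labels)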

  4. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Tao Ma

    2016-10-01

    The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and in the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks.
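
    An illustrative-only sketch of the SCDNN partitioning step described above (synthetic data; the per-subset deep networks are not shown): spectral clustering splits the training set into k subsets, and a separate classifier would then be trained on each subset.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(0)
    X_train = np.vstack([rng.normal(0, 1, (60, 5)), rng.normal(4, 1, (60, 5))])

    k = 2
    subset_id = SpectralClustering(n_clusters=k, affinity="rbf",
                                   random_state=0).fit_predict(X_train)
    subsets = [X_train[subset_id == j] for j in range(k)]
    centres = np.array([s.mean(axis=0) for s in subsets])  # cluster centres for routing

    # Each subsets[j] would be used to train its own deep neural network; at test
    # time a sample is routed to the network whose centre is closest (see the
    # sketch under record 5 below).
    print([len(s) for s in subsets])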

  5. A Hybrid Spectral Clustering and Deep Neural Network Ensemble Algorithm for Intrusion Detection in Sensor Networks.

    Science.gov (United States)

    Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun

    2016-10-13

    The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and in the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks.
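
    A complementary sketch to the one under record 4 (again illustrative, not the authors' code): route each test sample to the nearest training-cluster centre and let that subset's model classify it; trivial stand-in functions replace the per-subset deep networks.

    import numpy as np

    def route_and_classify(x, centres, subset_models):
        j = int(np.argmin(np.linalg.norm(centres - x, axis=1)))  # similarity-based routing
        return subset_models[j](x)

    # Toy setup: two cluster centres and two stand-in "models".
    centres = np.array([[0.0, 0.0], [4.0, 4.0]])
    subset_models = [lambda x: "normal", lambda x: "attack"]

    x_test = np.array([3.6, 4.2])
    print(route_and_classify(x_test, centres, subset_models))   # -> "attack"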

  6. A neural network clustering algorithm for the ATLAS silicon pixel detector

    CERN Document Server

    Aad, Georges; Abdallah, Jalal; Abdel Khalek, Samah; Abdinov, Ovsat; Aben, Rosemarie; Abi, Babak; Abolins, Maris; AbouZeid, Ossama; Abramowicz, Halina; Abreu, Henso; Abreu, Ricardo; Abulaiti, Yiming; Acharya, Bobby Samir; Adamczyk, Leszek; Adams, David; Adelman, Jahred; Adomeit, Stefanie; Adye, Tim; Agatonovic-Jovin, Tatjana; Aguilar-Saavedra, Juan Antonio; Agustoni, Marco; Ahlen, Steven; Ahmadov, Faig; Aielli, Giulio; Akerstedt, Henrik; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov, Andrei; Alberghi, Gian Luigi; Albert, Justin; Albrand, Solveig; Alconada Verzini, Maria Josefina; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexopoulos, Theodoros; Alhroob, Muhammad; Alimonti, Gianluca; Alio, Lion; Alison, John; Allbrooke, Benedict; Allison, Lee John; Allport, Phillip; Almond, John; Aloisio, Alberto; Alonso, Alejandro; Alonso, Francisco; Alpigiani, Cristiano; Altheimer, Andrew David; Alvarez Gonzalez, Barbara; Alviggi, Mariagrazia; Amako, Katsuya; Amaral Coutinho, Yara; Amelung, Christoph; Amidei, Dante; Amor Dos Santos, Susana Patricia; Amorim, Antonio; Amoroso, Simone; Amram, Nir; Amundsen, Glenn; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, Gabriel; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Anduaga, Xabier; Angelidakis, Stylianos; Angelozzi, Ivan; Anger, Philipp; Angerami, Aaron; Anghinolfi, Francis; Anisenkov, Alexey; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonov, Alexey; Antos, Jaroslav; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Apolle, Rudi; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Araque, Juan Pedro; Arce, Ayana; Arguin, Jean-Francois; Argyropoulos, Spyridon; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnal, Vanessa; Arnold, Hannah; Arratia, Miguel; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Asai, Shoji; Asbah, Nedaa; Ashkenazi, Adi; Åsman, Barbro; Asquith, Lily; Assamagan, Ketevi; Astalos, Robert; Atkinson, Markus; Atlay, Naim Bora; Auerbach, Benjamin; Augsten, Kamil; Aurousseau, Mathieu; Avolio, Giuseppe; Azuelos, Georges; Azuma, Yuya; Baak, Max; Baas, Alessandra; Bacci, Cesare; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Backhaus, Malte; Backus Mayes, John; Badescu, Elisabeta; Bagiacchi, Paolo; Bagnaia, Paolo; Bai, Yu; Bain, Travis; Baines, John; Baker, Oliver Keith; Balek, Petr; Balli, Fabrice; Banas, Elzbieta; Banerjee, Swagato; Bannoura, Arwa A E; Bansal, Vikas; Bansil, Hardeep Singh; Barak, Liron; Baranov, Sergei; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Barnovska, Zuzana; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Bartsch, Valeria; Bassalat, Ahmed; Basye, Austin; Bates, Richard; Batkova, Lucia; Batley, Richard; Battaglia, Marco; Battistin, Michele; Bauer, Florian; Bawa, Harinder Singh; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Bechtle, Philip; Beck, Hans Peter; Becker, Anne Kathrin; Becker, Sebastian; Beckingham, Matthew; Becot, Cyril; Beddall, Andrew; Beddall, Ayda; Bedikian, Sourpouhi; Bednyakov, Vadim; Bee, Christopher; Beemster, Lars; Beermann, Thomas; Begel, Michael; Behr, Katharina; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; Bellerive, Alain; Bellomo, 
Massimiliano; Belotskiy, Konstantin; Beltramello, Olga; Benary, Odette; Benchekroun, Driss; Bendtz, Katarina; Benekos, Nektarios; Benhammou, Yan; Benhar Noccioli, Eleonora; Benitez Garcia, Jorge-Armando; Benjamin, Douglas; Bensinger, James; Benslama, Kamal; Bentvelsen, Stan; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Beringer, Jürg; Bernard, Clare; Bernat, Pauline; Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertoli, Gabriele; Bertolucci, Federico; Bertsche, David; Besana, Maria Ilaria; Besjes, Geert-Jan; Bessidskaia, Olga; Bessner, Martin Florian; Besson, Nathalie; Betancourt, Christopher; Bethke, Siegfried; Bhimji, Wahid; Bianchi, Riccardo-Maria; Bianchini, Louis; Bianco, Michele; Biebel, Otmar; Bieniek, Stephen Paul; Bierwagen, Katharina; Biesiada, Jed; Biglietti, Michela; Bilbao De Mendizabal, Javier; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Black, Curtis; Black, James; Black, Kevin; Blackburn, Daniel; Blair, Robert; Blanchard, Jean-Baptiste; Blazek, Tomas; Bloch, Ingo; Blocker, Craig; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Bock, Christopher; Boddy, Christopher Richard; Boehler, Michael; Boek, Thorsten Tobias; Bogaerts, Joannes Andreas; Bogdanchikov, Alexander; Bogouch, Andrei; Bohm, Christian; Bohm, Jan; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Boldyrev, Alexey; Bomben, Marco; Bona, Marcella; Boonekamp, Maarten; Borisov, Anatoly; Borissov, Guennadi; Borri, Marcello; Borroni, Sara; Bortfeldt, Jonathan; Bortolotto, Valerio; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Boudreau, Joseph; Bouffard, Julian; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Bousson, Nicolas; Boutouil, Sara; Boveia, Antonio; Boyd, James; Boyko, Igor; Bracinik, Juraj; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Brazzale, Simone Federico; Brelier, Bertrand; Brendlinger, Kurt; Brennan, Amelia Jean; Brenner, Richard; Bressler, Shikma; Bristow, Kieran; Bristow, Timothy Michael; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Bromberg, Carl; Bronner, Johanna; Brooijmans, Gustaaf; Brooks, Timothy; Brooks, William; Brosamer, Jacquelyn; Brost, Elizabeth; Brown, Jonathan; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Bryngemark, Lene; Buanes, Trygve; Buat, Quentin; Bucci, Francesca; Buchholz, Peter; Buckingham, Ryan; Buckley, Andrew; Buda, Stelian Ioan; Budagov, Ioulian; Buehrer, Felix; Bugge, Lars; Bugge, Magnar Kopangen; Bulekov, Oleg; Bundock, Aaron Colin; Burckhart, Helfried; Burdin, Sergey; Burghgrave, Blake; Burke, Stephen; Burmeister, Ingo; Busato, Emmanuel; Büscher, Daniel; Büscher, Volker; Bussey, Peter; Buszello, Claus-Peter; Butler, Bart; Butler, John; Butt, Aatif Imtiaz; Buttar, Craig; Butterworth, Jonathan; Butti, Pierfrancesco; Buttinger, William; Buzatu, Adrian; Byszewski, Marcin; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calandri, Alessandro; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Calvet, David; Calvet, Samuel; Camacho Toro, Reina; Camarda, Stefano; Cameron, David; Caminada, Lea Michaela; Caminal Armadans, Roger; Campana, Simone; Campanelli, Mario; Campoverde, Angel; Canale, Vincenzo; Canepa, Anadi; Cano Bret, Marc; Cantero, 
Josu; Cantrill, Robert; Cao, Tingting; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Caputo, Regina; Cardarelli, Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Sascha; Carquin, Edson; Carrillo-Montoya, German D; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Casolino, Mirkoantonio; Castaneda-Miranda, Elizabeth; Castelli, Angelantonio; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Catastini, Pierluigi; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cavaliere, Viviana; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerio, Benjamin; Cerny, Karel; Santiago Cerqueira, Augusto; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cerv, Matevz; Cervelli, Alberto; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chalupkova, Ina; Chang, Philip; Chapleau, Bertrand; Chapman, John Derek; Charfeddine, Driss; Charlton, Dave; Chau, Chav Chhiv; Chavez Barajas, Carlos Alberto; Cheatham, Susan; Chegwidden, Andrew; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Chunhui; Chen, Hucheng; Chen, Karen; Chen, Liming; Chen, Shenjian; Chen, Xin; Chen, Yujiao; Cheng, Hok Chuen; Cheng, Yangyang; Cheplakov, Alexander; Cherkaoui El Moursli, Rajaa; Chernyatin, Valeriy; Cheu, Elliott; Chevalier, Laurent; Chiarella, Vitaliano; Chiefari, Giovanni; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chisholm, Andrew; Chislett, Rebecca Thalatta; Chitan, Adrian; Chizhov, Mihail; Chouridou, Sofia; Chow, Bonnie Kar Bo; Chromek-Burckhart, Doris; Chu, Ming-Lee; Chudoba, Jiri; Chwastowski, Janusz; Chytka, Ladislav; Ciapetti, Guido; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciocio, Alessandra; Cirkovic, Predrag; Citron, Zvi Hirsh; Citterio, Mauro; Ciubancan, Mihai; Clark, Allan G; Clark, Philip James; Clarke, Robert; Cleland, Bill; Clemens, Jean-Claude; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coffey, Laurel; Cogan, Joshua Godfrey; Coggeshall, James; Cole, Brian; Cole, Stephen; Colijn, Auke-Pieter; Collot, Johann; Colombo, Tommaso; Colon, German; Compostella, Gabriele; Conde Muiño, Patricia; Coniavitis, Elias; Conidi, Maria Chiara; Connell, Simon Henry; Connelly, Ian; Consonni, Sofia Maria; Consorti, Valerio; Constantinescu, Serban; Conta, Claudio; Conti, Geraldine; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cooper-Smith, Neil; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Côté, David; Cottin, Giovanna; Cowan, Glen; Cox, Brian; Cranmer, Kyle; Cree, Graham; Crépé-Renaudin, Sabine; Crescioli, Francesco; Cribbs, Wayne Allen; Crispin Ortuzar, Mireia; Cristinziani, Markus; Croft, Vince; Crosetti, Giovanni; Cuciuc, Constantin-Mihai; Cuhadar Donszelmann, Tulay; Cummings, Jane; Curatolo, Maria; Cuthbert, Cameron; Czirr, Hendrik; Czodrowski, Patrick; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; Da Cunha Sargedas De Sousa, Mario Jose; Da Via, Cinzia; Dabrowski, Wladyslaw; Dafinca, Alexandru; Dai, Tiesheng; Dale, Orjan; Dallaire, Frederick; Dallapiccola, Carlo; Dam, Mogens; Daniells, Andrew Christopher; Dano Hoffmann, Maria; Dao, Valerio; Darbo, Giovanni; Darmora, Smita; Dassoulas, James; Dattagupta, Aparajita; Davey, Will; David, Claire; Davidek, Tomas; Davies, Eleanor; 
Davies, Merlin; Davignon, Olivier; Davison, Adam; Davison, Peter; Davygora, Yuriy; Dawe, Edmund; Dawson, Ian; Daya-Ishmukhametova, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Cecco, Sandro; De Groot, Nicolo; de Jong, Paul; De la Torre, Hector; De Lorenzi, Francesco; De Nooij, Lucie; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; Dearnaley, William James; Debbe, Ramiro; Debenedetti, Chiara; Dechenaux, Benjamin; Dedovich, Dmitri; Deigaard, Ingrid; Del Peso, Jose; Del Prete, Tarcisio; Deliot, Frederic; Delitzsch, Chris Malena; Deliyergiyev, Maksym; Dell'Acqua, Andrea; Dell'Asta, Lidia; Dell'Orso, Mauro; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demilly, Aurelien; Denisov, Sergey; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deterre, Cecile; Deviveiros, Pier-Olivier; Dewhurst, Alastair; Dhaliwal, Saminder; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Domenico, Antonio; Di Donato, Camilla; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Mattia, Alessandro; Di Micco, Biagio; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Di Valentino, David; Dias, Flavia; Diaz, Marco Aurelio; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dimitrievska, Aleksandra; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; Barros do Vale, Maria Aline; Do Valle Wemans, André; Doan, Thi Kieu Oanh; Dobos, Daniel; Doglioni, Caterina; Doherty, Tom; Dohmae, Takeshi; Dolejsi, Jiri; Dolezal, Zdenek; Dolgoshein, Boris; Donadelli, Marisilvia; Donati, Simone; Dondero, Paolo; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dova, Maria-Teresa; Doyle, Tony; Dris, Manolis; Dubbert, Jörg; Dube, Sourabh; Dubreuil, Emmanuelle; Duchovni, Ehud; Duckeck, Guenter; Ducu, Otilia Anamaria; Duda, Dominik; Dudarev, Alexey; Dudziak, Fanny; Duflot, Laurent; Duguid, Liam; Dührssen, Michael; Dunford, Monica; Duran Yildiz, Hatice; Düren, Michael; Durglishvili, Archil; Dwuznik, Michal; Dyndal, Mateusz; Ebke, Johannes; Edson, William; Edwards, Nicholas Charles; Ehrenfeld, Wolfgang; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Enari, Yuji; Endner, Oliver Chris; Endo, Masaki; Engelmann, Roderich; Erdmann, Johannes; Ereditato, Antonio; Eriksson, Daniel; Ernis, Gunar; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Esch, Hendrik; Escobar, Carlos; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Ezhilov, Alexey; Fabbri, Laura; Facini, Gabriel; Fakhrutdinov, Rinat; Falciano, Speranza; Falla, Rebecca Jane; Faltova, Jana; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farooque, Trisha; Farrell, Steven; Farrington, Sinead; Farthouat, Philippe; Fassi, Farida; Fassnacht, Patrick; Fassouliotis, Dimitrios; Favareto, Andrea; Fayard, Louis; Federic, Pavol; Fedin, Oleg; Fedorko, Wojciech; Fehling-Kaschek, Mirjam; Feigl, Simon; Feligioni, Lorenzo; Feng, Cunfeng; Feng, Eric; Feng, Haolu; Fenyuk, Alexander; Fernandez Perez, Sonia; Ferrag, Samir; Ferrando, James; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrere, Didier; Ferretti, 
Claudio; Ferretto Parodi, Andrea; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filipuzzi, Marco; Filthaut, Frank; Fincke-Keeler, Margret; Finelli, Kevin Daniel; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Adam; Fischer, Julia; Fisher, Wade Cameron; Fitzgerald, Eric Andrew; Flechl, Martin; Fleck, Ivor; Fleischmann, Philipp; Fleischmann, Sebastian; Fletcher, Gareth Thomas; Fletcher, Gregory; Flick, Tobias; Floderus, Anders; Flores Castillo, Luis; Florez Bustos, Andres Carlos; Flowerdew, Michael; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Fournier, Daniel; Fox, Harald; Fracchia, Silvia; Francavilla, Paolo; Franchini, Matteo; Franchino, Silvia; Francis, David; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; French, Sky; Friedrich, Conrad; Friedrich, Felix; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fulsom, Bryan Gregory; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gabrielli, Alessandro; Gabrielli, Andrea; Gadatsch, Stefan; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Galhardo, Bruno; Gallas, Elizabeth; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Galster, Gorm Aske Gram Krohn; Gan, KK; Gandrajula, Reddy Pratap; Gao, Jun; Gao, Yongsheng; Garay Walls, Francisca; Garberson, Ford; García, Carmen; García Navarro, José Enrique; Garcia-Sciveres, Maurice; Gardner, Robert; Garelli, Nicoletta; Garonne, Vincent; Gatti, Claudio; Gaudio, Gabriella; Gaur, Bakul; Gauthier, Lea; Gauzzi, Paolo; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Ge, Peng; Gecse, Zoltan; Gee, Norman; Geerts, Daniël Alphonsus Adrianus; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Gemmell, Alistair; Genest, Marie-Hélène; Gentile, Simonetta; George, Matthias; George, Simon; Gerbaudo, Davide; Gershon, Avi; Ghazlane, Hamid; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giangiobbe, Vincent; Giannetti, Paola; Gianotti, Fabiola; Gibbard, Bruce; Gibson, Stephen; Gilchriese, Murdock; Gillam, Thomas; Gillberg, Dag; Gilles, Geoffrey; Gingrich, Douglas; Giokaris, Nikos; Giordani, MarioPaolo; Giordano, Raffaele; Giorgi, Filippo Maria; Giorgi, Francesco Michelangelo; Giraud, Pierre-Francois; Giugni, Danilo; Giuliani, Claudia; Giulini, Maddalena; Gjelsten, Børge Kile; Gkaitatzis, Stamatios; Gkialas, Ioannis; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glaysher, Paul; Glazov, Alexandre; Glonti, George; Goblirsch-Kolb, Maximilian; Goddard, Jack Robert; Godfrey, Jennifer; Godlewski, Jan; Goeringer, Christian; Goldfarb, Steven; Golling, Tobias; Golubkov, Dmitry; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçalo, Ricardo; Goncalves Pinto Firmino Da Costa, Joao; Gonella, Laura; González de la Hoz, Santiago; Gonzalez Parra, Garoe; Gonzalez-Sevilla, Sergio; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorelov, Igor; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Goshaw, Alfred; Gössling, Claus; Gostkin, Mikhail Ivanovitch; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Gozpinar, Serdar; Grabas, Herve Marie Xavier; Graber, Lars; Grabowska-Bold, Iwona; Grafström, Per; Grahn, Karl-Johan; Gramling, Johanna; Gramstad, Eirik; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Gray, Heather; Graziani, Enrico; Grebenyuk, Oleg; Greenwood, Zeno Dixon; Gregersen, Kristian; Gregor, Ingrid-Maria; Grenier, Philippe; Griffiths, Justin; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Gris, Philippe Luc Yves; 
Grishkevich, Yaroslav; Grivaz, Jean-Francois; Grohs, Johannes Philipp; Grohsjean, Alexander; Gross, Eilam; Grosse-Knetter, Joern; Grossi, Giulio Cornelio; Groth-Jensen, Jacob; Grout, Zara Jane; Guan, Liang; Guescini, Francesco; Guest, Daniel; Gueta, Orel; Guicheney, Christophe; Guido, Elisa; Guillemin, Thibault; Guindon, Stefan; Gul, Umar; Gumpert, Christian; Gunther, Jaroslav; Guo, Jun; Gupta, Shaun; Gutierrez, Phillip; Gutierrez Ortiz, Nicolas Gilberto; Gutschow, Christian; Guttman, Nir; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haber, Carl; Hadavand, Haleh Khani; Haddad, Nacim; Haefner, Petra; Hageböck, Stephan; Hajduk, Zbigniew; Hakobyan, Hrachya; Haleem, Mahsana; Hall, David; Halladjian, Garabed; Hamacher, Klaus; Hamal, Petr; Hamano, Kenji; Hamer, Matthias; Hamilton, Andrew; Hamilton, Samuel; Hamnett, Phillip George; Han, Liang; Hanagaki, Kazunori; Hanawa, Keita; Hance, Michael; Hanke, Paul; Hanna, Remie; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Peter Henrik; Hara, Kazuhiko; Hard, Andrew; Harenberg, Torsten; Hariri, Faten; Harkusha, Siarhei; Harper, Devin; Harrington, Robert; Harris, Orin; Harrison, Paul Fraser; Hartjes, Fred; Hasegawa, Satoshi; Hasegawa, Yoji; Hasib, A; Hassani, Samira; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hawkins, Anthony David; Hayashi, Takayasu; Hayden, Daniel; Hays, Chris; Hayward, Helen; Haywood, Stephen; Head, Simon; Heck, Tobias; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heim, Timon; Heinemann, Beate; Heinrich, Lukas; Hejbal, Jiri; Helary, Louis; Heller, Claudio; Heller, Matthieu; Hellman, Sten; Hellmich, Dennis; Helsens, Clement; Henderson, James; Henderson, Robert; Heng, Yang; Hengler, Christopher; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Hensel, Carsten; Herbert, Geoffrey Henry; Hernández Jiménez, Yesenia; Herrberg-Schubert, Ruth; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hesketh, Gavin Grant; Hessey, Nigel; Hickling, Robert; Higón-Rodriguez, Emilio; Hill, Ewan; Hill, John; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hofmann, Julia Isabell; Hohlfeld, Marc; Holmes, Tova Ray; Hong, Tae Min; Hooft van Huysduynen, Loek; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howard, Jacob; Howarth, James; Hrabovsky, Miroslav; Hristova, Ivana; Hrivnac, Julius; Hryn'ova, Tetiana; Hsu, Catherine; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Hu, Diedi; Hu, Xueye; Huang, Yanping; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; Hughes, Gareth; Huhtinen, Mika; Hülsing, Tobias Alexander; Hurwitz, Martina; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Ideal, Emma; Iengo, Paolo; Igonkina, Olga; Iizawa, Tomoya; Ikegami, Yoichi; Ikematsu, Katsumasa; Ikeno, Masahiro; Ilchenko, Iurii; Iliadis, Dimitrios; Ilic, Nikolina; Inamaru, Yuki; Ince, Tayfun; Ioannou, Pavlos; Iodice, Mauro; Iordanidou, Kalliopi; Ippolito, Valerio; Irles Quiles, Adrian; Isaksson, Charlie; Ishino, Masaya; Ishitsuka, Masaki; Ishmukhametov, Renat; Issever, Cigdem; Istin, Serhat; Iturbe Ponce, Julia Mariana; Iuppa, Roberto; Ivarsson, Jenny; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, Matthew; 
Jackson, Paul; Jaekel, Martin; Jain, Vivek; Jakobs, Karl; Jakobsen, Sune; Jakoubek, Tomas; Jakubek, Jan; Jamin, David Olivier; Jana, Dilip; Jansen, Eric; Jansen, Hendrik; Janssen, Jens; Janus, Michel; Jarlskog, Göran; Javadov, Namig; Javůrek, Tomáš; Jeanty, Laura; Jejelava, Juansher; Jeng, Geng-yuan; Jennens, David; Jenni, Peter; Jentzsch, Jennifer; Jeske, Carl; Jézéquel, Stéphane; Ji, Haoshuang; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Shan; Jinaru, Adam; Jinnouchi, Osamu; Joergensen, Morten Dam; Johansson, Erik; Johansson, Per; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tim; Jongmanns, Jan; Jorge, Pedro; Joshi, Kiran Daniel; Jovicevic, Jelena; Ju, Xiangyang; Jung, Christian; Jungst, Ralph Markus; Jussel, Patrick; Juste Rozas, Aurelio; Kaci, Mohammed; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kajomovitz, Enrique; Kalderon, Charles William; Kama, Sami; Kamenshchikov, Andrey; Kanaya, Naoko; Kaneda, Michiru; Kaneti, Steven; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kar, Deepak; Karakostas, Konstantinos; Karastathis, Nikolaos; Karnevskiy, Mikhail; Karpov, Sergey; Karpova, Zoya; Karthik, Krishnaiyengar; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasieczka, Gregor; Kass, Richard; Kastanas, Alex; Kataoka, Yousuke; Katre, Akshay; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kazama, Shingo; Kazanin, Vassili; Kazarinov, Makhail; Keeler, Richard; Kehoe, Robert; Keil, Markus; Keller, John; Kempster, Jacob Julian; Keoshkerian, Houry; Kepka, Oldrich; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Keung, Justin; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Khodinov, Alexander; Khomich, Andrei; Khoo, Teng Jian; Khoriauli, Gia; Khoroshilov, Andrey; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hee Yeun; Kim, Hyeon Jin; Kim, Shinhong; Kimura, Naoki; Kind, Oliver; King, Barry; King, Matthew; King, Robert Steven Beaufoy; King, Samuel Burton; Kirk, Julie; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kiss, Florian; Kittelmann, Thomas; Kiuchi, Kenji; Kladiva, Eduard; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klimek, Pawel; Klimentov, Alexei; Klingenberg, Reiner; Klinger, Joel Alexander; Klioutchnikova, Tatiana; Klok, Peter; Kluge, Eike-Erik; Kluit, Peter; Kluth, Stefan; Kneringer, Emmerich; Knoops, Edith; Knue, Andrea; Kobayashi, Dai; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kogan, Lucy Anne; Kohlmann, Simon; Kohout, Zdenek; Kohriki, Takashi; Koi, Tatsumi; Kolanoski, Hermann; Koletsou, Iro; Koll, James; Komar, Aston; Komori, Yuto; Kondo, Takahiko; Kondrashova, Nataliia; Köneke, Karsten; König, Adriaan; König, Sebastian; Kono, Takanori; Konoplich, Rostislav; Konstantinidis, Nikolaos; Kopeliansky, Revital; Koperny, Stefan; Köpke, Lutz; Kopp, Anna Katharina; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korol, Aleksandr; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostyukhin, Vadim; Kotov, Vladislav; Kotwal, Ashutosh; Kourkoumelis, Christine; Kouskoura, Vasiliki; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasnopevtsev, Dimitriy; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kraus, Jana; Kravchenko, Anton; Kreiss, Sven; Kretz, Moritz; Kretzschmar, Jan; 
Kreutzfeldt, Kristof; Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Kruker, Tobias; Krumnack, Nils; Krumshteyn, Zinovii; Kruse, Amanda; Kruse, Mark; Kruskal, Michael; Kubota, Takashi; Kuday, Sinan; Kuehn, Susanne; Kugel, Andreas; Kuhl, Andrew; Kuhl, Thorsten; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kuna, Marine; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurochkin, Yurii; Kurumida, Rie; Kus, Vlastimil; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; La Rosa, Alessandro; La Rotonda, Laura; Lacasta, Carlos; Lacava, Francesco; Lacey, James; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Laier, Heiko; Lambourne, Luke; Lammers, Sabine; Lampen, Caleb; Lampl, Walter; Lançon, Eric; Landgraf, Ulrich; Landon, Murrough; Lang, Valerie Susanne; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Lassnig, Mario; Laurelli, Paolo; Lavrijsen, Wim; Law, Alexander; Laycock, Paul; Le, Bao Tran; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Claire Alexandra; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Guillaume; Lefebvre, Michel; Legger, Federica; Leggett, Charles; Lehan, Allan; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leight, William Axel; Leisos, Antonios; Leister, Andrew Gerard; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Lemmer, Boris; Leney, Katharine; Lenz, Tatjana; Lenzen, Georg; Lenzi, Bruno; Leone, Robert; Leone, Sandra; Leonhardt, Kathrin; Leonidopoulos, Christos; Leontsinis, Stefanos; Leroy, Claude; Lester, Christopher; Lester, Christopher Michael; Levchenko, Mikhail; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Levy, Mark; Lewis, Adrian; Lewis, George; Leyko, Agnieszka; Leyton, Michael; Li, Bing; Li, Bo; Li, Haifeng; Li, Ho Ling; Li, Lei; Li, Liang; Li, Shu; Li, Yichen; Liang, Zhijun; Liao, Hongbo; Liberti, Barbara; Lichard, Peter; Lie, Ki; Liebal, Jessica; Liebig, Wolfgang; Limbach, Christian; Limosani, Antonio; Lin, Simon; Lin, Tai-Hua; Linde, Frank; Lindquist, Brian Edward; Linnemann, James; Lipeles, Elliot; Lipniacka, Anna; Lisovyi, Mykhailo; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Bo; Liu, Dong; Liu, Jianbei; Liu, Kun; Liu, Lulu; Liu, Miaoyuan; Liu, Minghui; Liu, Yanwen; Livan, Michele; Livermore, Sarah; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lo Sterzo, Francesco; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Loddenkoetter, Thomas; Loebinger, Fred; Loevschall-Jensen, Ask Emil; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Lombardo, Vincenzo Paolo; Long, Brian Alexander; Long, Jonathan; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Lopez Paredes, Brais; Lopez Paz, Ivan; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Loscutoff, Peter; Lou, XinChou; Lounis, Abdenour; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Luehring, Frederick; Lukas, Wolfgang; Luminari, Lamberto; Lundberg, Olof; Lund-Jensen, Bengt; Lungwitz, Matthias; Lynn, David; Lysak, Roman; Lytken, Else; Ma, Hong; Ma, Lian Liang; Maccarrone, Giovanni; Macchiolo, Anna; Machado Miguens, Joana; Macina, Daniela; Madaffari, Daniele; Madar, Romain; Maddocks, 
Harvey Jonathan; Mader, Wolfgang; Madsen, Alexander; Maeno, Mayuko; Maeno, Tadashi; Magradze, Erekle; Mahboubi, Kambiz; Mahlstedt, Joern; Mahmoud, Sara; Maiani, Camilla; Maidantchik, Carmen; Maier, Andreas Alexander; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Mal, Prolay; Malaescu, Bogdan; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Caitlin; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mamuzic, Judita; Mandelli, Beatrice; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Manfredini, Alessandro; Manhaes de Andrade Filho, Luciano; Manjarres Ramos, Joany Andreina; Mann, Alexander; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Mantifel, Rodger; Mapelli, Livio; March, Luis; Marchand, Jean-Francois; Marchiori, Giovanni; Marcisovsky, Michal; Marino, Christopher; Marjanovic, Marija; Marques, Carlos; Marroquim, Fernando; Marsden, Stephen Philip; Marshall, Zach; Marti, Lukas Fritz; Marti-Garcia, Salvador; Martin, Brian; Martin, Brian Thomas; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martinez, Homero; Martinez, Mario; Martin-Haugh, Stewart; Martyniuk, Alex; Marx, Marilyn; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massol, Nicolas; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mättig, Peter; Mattmann, Johannes; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; Mazini, Rachid; Mazzaferro, Luca; Mc Goldrick, Garrin; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCarthy, Tom; McCubbin, Norman; McFarlane, Kenneth; Mcfayden, Josh; Mchedlidze, Gvantsa; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Medinnis, Michael; Meehan, Samuel; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meineck, Christian; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Meloni, Federico; Mengarelli, Alberto; Menke, Sven; Meoni, Evelin; Mercurio, Kevin Michael; Mergelmeyer, Sebastian; Meric, Nicolas; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Merritt, Hayes; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Carsten; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Middleton, Robin; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Milic, Adriana; Miller, David; Mills, Corrinne; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Mitani, Takashi; Mitrevski, Jovan; Mitsou, Vasiliki A; Mitsui, Shingo; Miucci, Antonio; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Moa, Torbjoern; Mochizuki, Kazuya; Mohapatra, Soumya; Mohr, Wolfgang; Molander, Simon; Moles-Valls, Regina; Mönig, Klaus; Monini, Caterina; Monk, James; Monnier, Emmanuel; Montejo Berlingen, Javier; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Moraes, Arthur; Morange, Nicolas; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morgenstern, Marcus; Morii, Masahiro; Moritz, Sebastian; Morley, Anthony Keith; Mornacchi, Giuseppe; Morris, John; Morvaj, Ljiljana; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Motohashi, Kazuki; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Muanza, Steve; Mudd, Richard; Mueller, Felix; Mueller, James; Mueller, Klemens; Mueller, Thibaut; Mueller, Timo; Muenstermann, Daniel; Munwes, Yonathan; 
Murillo Quijada, Javier Alberto; Murray, Bill; Musheghyan, Haykuhi; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nackenhorst, Olaf; Nadal, Jordi; Nagai, Koichi; Nagai, Ryo; Nagai, Yoshikazu; Nagano, Kunihiro; Nagarkar, Advait; Nagasaka, Yasushi; Nagel, Martin; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Namasivayam, Harisankar; Nanava, Gizo; Narayan, Rohin; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Nayyar, Ruchika; Neal, Homer; Nechaeva, Polina; Neep, Thomas James; Nef, Pascal Daniel; Negri, Andrea; Negri, Guido; Negrini, Matteo; Nektarijevic, Snezana; Nelson, Andrew; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neumann, Manuel; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nguyen, Duong Hai; Nickerson, Richard; Nicolaidou, Rosy; Nicquevert, Bertrand; Nielsen, Jason; Nikiforou, Nikiforos; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolic-Audit, Irena; Nikolics, Katalin; Nikolopoulos, Konstantinos; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nisius, Richard; Nobe, Takuya; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Norberg, Scarlet; Nordberg, Markus; Novgorodova, Olga; Nowak, Sebastian; Nozaki, Mitsuaki; Nozka, Libor; Ntekas, Konstantinos; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; Nuti, Francesco; O'Brien, Brendan Joseph; O'grady, Fionnbarr; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Obermann, Theresa; Ocariz, Jose; Ochi, Atsuhiko; Ochoa, Ines; Oda, Susumu; Odaka, Shigeru; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohman, Henrik; Ohshima, Takayoshi; Okamura, Wataru; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Olchevski, Alexander; Olivares Pino, Sebastian Andres; Oliveira Damazio, Denis; Oliver Garcia, Elena; Olszewski, Andrzej; Olszowska, Jolanta; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlando, Nicola; Oropeza Barrera, Cristina; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Otero y Garzon, Gustavo; Otono, Hidetoshi; Ouchrif, Mohamed; Ouellette, Eric; Ould-Saada, Farid; Ouraou, Ahmimed; Oussoren, Koen Pieter; Ouyang, Qun; Ovcharova, Ana; Owen, Mark; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pachal, Katherine; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Pagáčová, Martina; Pagan Griso, Simone; Paganis, Efstathios; Pahl, Christoph; Paige, Frank; Pais, Preema; Pajchel, Katarina; Palacino, Gabriel; Palestini, Sandro; Palka, Marek; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panduro Vazquez, William; Pani, Priscilla; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Paolozzi, Lorenzo; Papadopoulou, Theodora; Papageorgiou, Konstantinos; Paramonov, Alexander; Paredes Hernandez, Daniela; Parker, Michael Andrew; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passaggio, Stefano; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Pearce, James; Pedersen, Maiken; Pedraza Lopez, Sebastian; Pedro, Rute; Peleganchuk, Sergey; Pelikan, Daniel; Peng, Haiping; Penning, Bjoern; Penwell, John; Perepelitsa, Dennis; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Peschke, Richard; Peshekhonov, Vladimir; Peters, Krisztian; Peters, Yvonne; Petersen, Brian; Petersen, Troels; Petit, 
Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Pettersson, Nora Emilia; Pezoa, Raquel; Phillips, Peter William; Piacquadio, Giacinto; Pianori, Elisabetta; Picazio, Attilio; Piccaro, Elisa; Piccinini, Maurizio; Piegaia, Ricardo; Pignotti, David; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinder, Alex; Pinfold, James; Pingel, Almut; Pinto, Belmiro; Pires, Sylvestre; Pitt, Michael; Pizio, Caterina; Plazak, Lukas; Pleier, Marc-Andre; Pleskot, Vojtech; Plotnikova, Elena; Plucinski, Pawel; Poddar, Sahill; Podlyski, Fabrice; Poettgen, Ruth; Poggioli, Luc; Pohl, David-leon; Pohl, Martin; Polesello, Giacomo; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Pollard, Christopher Samuel; Polychronakos, Venetios; Pommès, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Portell Bueso, Xavier; Pospisil, Stanislav; Potamianos, Karolos; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Pozdnyakov, Valery; Pralavorio, Pascal; Pranko, Aliaksandr; Prasad, Srivas; Pravahan, Rishiraj; Prell, Soeren; Price, Darren; Price, Joe; Price, Lawrence; Prieur, Damien; Primavera, Margherita; Proissl, Manuel; Prokofiev, Kirill; Prokoshin, Fedor; Protopapadaki, Eftychia-sofia; Protopopescu, Serban; Proudfoot, James; Przybycien, Mariusz; Przysiezniak, Helenka; Ptacek, Elizabeth; Puddu, Daniele; Pueschel, Elisa; Puldon, David; Purohit, Milind; Puzo, Patrick; Qian, Jianming; Qin, Gang; Qin, Yang; Quadt, Arnulf; Quarrie, David; Quayle, William; Queitsch-Maitland, Michaela; Quilty, Donnchadha; Qureshi, Anum; Radeka, Veljko; Radescu, Voica; Radhakrishnan, Sooraj Krishnan; Radloff, Peter; Rados, Pere; Ragusa, Francesco; Rahal, Ghita; Rajagopalan, Srinivasan; Rammensee, Michael; Randle-Conde, Aidan Sean; Rangel-Smith, Camila; Rao, Kanury; Rauscher, Felix; Rave, Tobias Christian; Ravenscroft, Thomas; Raymond, Michel; Read, Alexander Lincoln; Readioff, Nathan Peter; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Rehnisch, Laura; Reisin, Hernan; Relich, Matthew; Rembser, Christoph; Ren, Huan; Ren, Zhongliang; Renaud, Adrien; Rescigno, Marco; Resconi, Silvia; Rezanova, Olga; Reznicek, Pavel; Rezvani, Reyhaneh; Richter, Robert; Ridel, Melissa; Rieck, Patrick; Rieger, Julia; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Ritsch, Elmar; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Roda, Chiara; Rodrigues, Luis; Roe, Shaun; Røhne, Ole; Rolli, Simona; Romaniouk, Anatoli; Romano, Marino; Romero Adam, Elena; Rompotis, Nikolaos; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosbach, Kilian; Rose, Matthew; Rosendahl, Peter Lundgaard; Rosenthal, Oliver; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rosten, Rachel; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexandre; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rubinskiy, Igor; Rud, Viacheslav; Rudolph, Christian; Rudolph, Matthew Scott; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, Nikolai; Ruschke, Alexander; Rutherfoord, John; Ruthmann, Nils; Ryabov, Yury; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Saavedra, Aldo; Sacerdoti, Sabrina; Saddique, Asif; Sadeh, Iftach; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Sakurai, Yuki; Salamanna, Giuseppe; 
Salamon, Andrea; Saleem, Muhammad; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Sanchez, Arturo; Sánchez, Javier; Sanchez Martinez, Victoria; Sandaker, Heidi; Sandbach, Ruth Laura; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandoval, Tanya; Sandoval, Carlos; Sandstroem, Rikard; Sankey, Dave; Sansoni, Andrea; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Santoyo Castillo, Itzebelt; Sapp, Kevin; Sapronov, Andrey; Saraiva, João; Sarrazin, Bjorn; Sartisohn, Georg; Sasaki, Osamu; Sasaki, Yuichi; Sauvage, Gilles; Sauvan, Emmanuel; Savard, Pierre; Savu, Dan Octavian; Sawyer, Craig; Sawyer, Lee; Saxon, David; Saxon, James; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Scarcella, Mark; Scarfone, Valerio; Schaarschmidt, Jana; Schacht, Peter; Schaefer, Douglas; Schaefer, Ralph; Schaepe, Steffen; Schaetzel, Sebastian; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R. Dean; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schillo, Christian; Schioppa, Marco; Schlenker, Stefan; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitt, Christopher; Schmitt, Sebastian; Schneider, Basil; Schnellbach, Yan Jie; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schoenrock, Bradley Daniel; Schorlemmer, Andre Lukas; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schramm, Steven; Schreyer, Manuel; Schroeder, Christian; Schuh, Natascha; Schultens, Martin Johannes; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwegler, Philipp; Schwemling, Philippe; Schwienhorst, Reinhard; Schwindling, Jerome; Schwindt, Thomas; Schwoerer, Maud; Sciacca, Gianfranco; Scifo, Estelle; Sciolla, Gabriella; Scott, Bill; Scuri, Fabrizio; Scutti, Federico; Searcy, Jacob; Sedov, George; Sedykh, Evgeny; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Sekula, Stephen; Selbach, Karoline Elfriede; Seliverstov, Dmitry; Sellers, Graham; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Serre, Thomas; Seuster, Rolf; Severini, Horst; Sfiligoj, Tina; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shang, Ruo-yu; Shank, James; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Shehu, Ciwake Yusufu; Sherwood, Peter; Shi, Liaoshan; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shiyakova, Mariya; Shmeleva, Alevtina; Shochet, Mel; Short, Daniel; Shrestha, Suyog; Shulga, Evgeny; Shupe, Michael; Shushkevich, Stanislav; Sicho, Petr; Sidiropoulou, Ourania; Sidorov, Dmitri; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silva, José; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; Simoniello, Rosa; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sircar, Anirvan; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skottowe, Hugh Philip; Skovpen, Kirill; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Smakhtin, Vladimir; Smart, Ben; Smestad, Lillian; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Kenway; 
Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snidero, Giacomo; Snyder, Scott; Sobie, Randall; Socher, Felix; Soffer, Abner; Soh, Dart-yin; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solfaroli Camillocci, Elena; Solodkov, Alexander; Soloshenko, Alexei; Solovyanov, Oleg; Solovyev, Victor; Sommer, Philip; Song, Hong Ye; Soni, Nitesh; Sood, Alexander; Sopczak, Andre; Sopko, Bruno; Sopko, Vit; Sorin, Veronica; Sosebee, Mark; Soualah, Rachik; Soueid, Paul; Soukharev, Andrey; South, David; Spagnolo, Stefania; Spanò, Francesco; Spearman, William Robert; Spettel, Fabian; Spighi, Roberto; Spigo, Giancarlo; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St Denis, Richard Dante; Staerz, Steffen; Stahlman, Jonathan; Stamen, Rainer; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanescu-Bellu, Madalina; Stanitzki, Marcel Michael; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staszewski, Rafal; Stavina, Pavel; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stern, Sebastian; Stewart, Graeme; Stillings, Jan Andre; Stockton, Mark; Stoebe, Michael; Stoicea, Gabriel; Stolte, Philipp; Stonjek, Stefan; Stradling, Alden; Straessner, Arno; Stramaglia, Maria Elena; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Stucci, Stefania Antonia; Stugu, Bjarne; Styles, Nicholas Adam; Su, Dong; Su, Jun; Subramania, Halasya Siva; Subramaniam, Rajivalochan; Succurro, Antonella; Sugaya, Yorihito; Suhr, Chad; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Susinno, Giancarlo; Sutton, Mark; Suzuki, Yu; Svatos, Michal; Swedish, Stephen; Swiatlowski, Maximilian; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Taccini, Cecilia; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Takubo, Yosuke; Talby, Mossadek; Talyshev, Alexey; Tam, Jason; Tan, Kong Guan; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tanasijczuk, Andres Jorge; Tannenwald, Benjamin Bordy; Tannoury, Nancy; Tapprogge, Stefan; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teischinger, Florian Alfred; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Teoh, Jia Jian; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Therhaag, Jan; Theveneaux-Pelzer, Timothée; Thomas, Juergen; Thomas-Wilsker, Joshuha; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Thomson, Mark; Thong, Wai Meng; Thun, Rudolf; Tian, Feng; Tibbetts, Mark James; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tiouchichine, Elodie; Tipton, Paul; Tisserant, Sylvain; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tollefson, Kirsten; Tomlinson, Lee; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Topilin, Nikolai; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Tran, Huong Lan; Trefzger, Thomas; 
Tremblet, Louis; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Triplett, Nathan; Trischuk, William; Trocmé, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trovatelli, Monica; True, Patrick; Trzebinski, Maciej; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsuno, Soshi; Tsybychev, Dmitri; Tudorache, Alexandra; Tudorache, Valentina; Tuna, Alexander Naip; Tupputi, Salvatore; Turchikhin, Semen; Turecek, Daniel; Turk Cakir, Ilkay; Turra, Ruggero; Tuts, Michael; Tykhonov, Andrii; Tylmad, Maja; Tyndel, Mike; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ughetto, Michael; Ugland, Maren; Uhlenbrock, Mathias; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Urbaniec, Dustin; Urquijo, Phillip; Usai, Giulio; Usanova, Anna; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Valencic, Nika; Valentinetti, Sara; Valero, Alberto; Valery, Loic; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Den Wollenberg, Wouter; Van Der Deijl, Pieter; van der Geer, Rogier; van der Graaf, Harry; Van Der Leeuw, Robin; van der Ster, Daniel; van Eldik, Niels; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vanguri, Rami; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Vardanyan, Gagik; Vari, Riccardo; Varnes, Erich; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vazeille, Francois; Vazquez Schroeder, Tamara; Veatch, Jason; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Venturini, Alessio; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Vigne, Ralph; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Virzi, Joseph; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vladoiu, Dan; Vlasak, Michal; Vogel, Adrian; Vogel, Marcelo; Vokac, Petr; Volpi, Guido; Volpi, Matteo; von der Schmitt, Hans; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorobev, Konstantin; Vos, Marcel; Voss, Rudiger; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vuillermet, Raphael; Vukotic, Ilija; Vykydal, Zdenek; Wagner, Peter; Wagner, Wolfgang; Wahlberg, Hernan; Wahrmund, Sebastian; Wakabayashi, Jun; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Walsh, Brian; Wang, Chao; Wang, Chiho; Wang, Fuquan; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Kuhan; Wang, Rui; Wang, Song-Ming; Wang, Tan; Wang, Xiaoxiao; Wanotayaroj, Chaowaroj; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Warsinsky, Markus; Washbrook, Andrew; Wasicki, Christoph; Watkins, Peter; Watson, Alan; Watson, Ian; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Samuel; Weber, Michele; Weber, Stefan Wolf; Webster, Jordan S; Weidberg, Anthony; Weigell, Philipp; Weinert, Benjamin; Weingarten, Jens; Weiser, Christian; Weits, Hartger; Wells, Phillippa; Wenaus, Torre; Wendland, Dennis; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; 
Wermes, Norbert; Werner, Matthias; Werner, Per; Wessels, Martin; Wetter, Jeffrey; Whalen, Kathleen; White, Andrew; White, Martin; White, Ryan; White, Sebastian; Whiteson, Daniel; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Hugh; Williams, Sarah; Willis, Christopher; Willocq, Stephane; Wilson, Alan; Wilson, John; Wingerter-Seez, Isabelle; Winklmeier, Frank; Winter, Benedict Tobias; Wittgen, Matthias; Wittig, Tobias; Wittkowski, Josephine; Wollstadt, Simon Jakob; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wozniak, Krzysztof; Wright, Michael; Wu, Mengqing; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xiao, Meng; Xu, Da; Xu, Lailin; Yabsley, Bruce; Yacoob, Sahal; Yamada, Miho; Yamaguchi, Hiroshi; Yamaguchi, Yohei; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamanaka, Takashi; Yamauchi, Katsuya; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Un-Ki; Yang, Yi; Yanush, Serguei; Yao, Liwen; Yao, Weiming; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yen, Andy L; Yildirim, Eda; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Rikutaro; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Youssef, Saul; Yu, David Ren-Hwa; Yu, Jaehoon; Yu, Jiaming; Yu, Jie; Yuan, Li; Yurkewicz, Adam; Yusuff, Imran; Zabinski, Bartlomiej; Zaidan, Remi; Zaitsev, Alexander; Zaman, Aungshuman; Zambito, Stefano; Zanello, Lucia; Zanzi, Daniele; Zeitnitz, Christian; Zeman, Martin; Zemla, Andrzej; Zengel, Keith; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhang, Dongliang; Zhang, Fangzhou; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Lei; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Lei; Zhou, Ning; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhukov, Konstantin; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Christoph; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Zinonos, Zinonas; Ziolkowski, Michael; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zurzolo, Giovanni; Zutshi, Vishnu; Zwalinski, Lukasz

    2014-09-15

    A novel technique to identify and split clusters created by multiple charged particles in the ATLAS pixel detector using a set of artificial neural networks is presented. Such merged clusters are a common feature of tracks originating from highly energetic objects, such as jets. Neural networks are trained using Monte Carlo samples produced with a detailed detector simulation. This technique replaces the former clustering approach based on a connected component analysis and charge interpolation. The performance of the neural network splitting technique is quantified using data from proton-proton collisions at the LHC collected by the ATLAS detector in 2011 and from Monte Carlo simulations. This technique reduces the number of clusters shared between tracks in highly energetic jets by up to a factor of three. It also provides more precise position and error estimates of the clusters, improving both the transverse and longitudinal impact parameter resolutions.
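
    Purely as an illustration of the idea of neural-network cluster splitting (this is not the ATLAS code; the charge model, network size, and all names are made up), a minimal sketch in Python might train a small classifier to estimate how many particles contributed to a pixel cluster from its charge matrix:

```python
# Hypothetical sketch, NOT the ATLAS implementation: estimate how many charged
# particles produced a pixel cluster from a toy 7x7 charge matrix.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def toy_cluster(n_particles):
    """Toy charge matrix: a sum of n Gaussian charge blobs plus readout noise."""
    y, x = np.mgrid[0:7, 0:7]
    grid = np.zeros((7, 7))
    for _ in range(n_particles):
        cx, cy = rng.uniform(1, 6, size=2)
        grid += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 1.5)
    return (grid + rng.normal(0, 0.05, grid.shape)).ravel()

labels = rng.integers(1, 4, size=3000)          # 1, 2 or 3 particles per cluster
X = np.array([toy_cluster(k) for k in labels])

clf = MLPClassifier(hidden_layer_sizes=(40, 20), max_iter=500, random_state=0)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```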

  7. Upon the opportunity to apply ART2 Neural Network for clusterization of biodiesel fuels

    OpenAIRE

    Petkov T.; Mustafa Z.; Sotirov S.; Milina R.; Moskovkina M.

    2016-01-01

    A chemometric approach using artificial neural network for clusterization of biodiesels was developed. It is based on artificial ART2 neural network. Gas chromatography (GC) and Gas Chromatography - mass spectrometry (GC-MS) were used for quantitative and qualitative analysis of biodiesels, produced from different feedstocks, and FAME (fatty acid methyl esters) profiles were determined. Totally 96 analytical results for 7 different classes of biofuel plants: sunflower, rapeseed, corn, soybean...

  8. Application of clustering analysis in the prediction of photovoltaic power generation based on neural network

    Science.gov (United States)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    In order to select effective samples from the many years of PV power generation data and to improve the accuracy of the PV power generation forecasting model, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on a neural network. Based on three different types of weather (sunny, cloudy, and rainy days), this research screens samples of historical data by the clustering analysis method. After screening, it establishes BP neural network prediction models using the screened data as training data. Then, the six types of photovoltaic power generation prediction models before and after the data screening are compared. The results show that a prediction model combining clustering analysis with BP neural networks is an effective method to improve the precision of photovoltaic power generation forecasting.
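
    A minimal sketch of the screen-then-train pipeline described above, with hypothetical weather features and PV outputs; scikit-learn's KMeans and MLPRegressor stand in for the paper's clustering step and BP network:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Hypothetical historical samples: [irradiance, temperature, humidity] -> PV output
weather = rng.uniform(0, 1, size=(500, 3))
pv_out = 5.0 * weather[:, 0] - 1.0 * weather[:, 2] + rng.normal(0, 0.1, 500)

# Step 1: screen samples into weather types (e.g. sunny / cloudy / rainy)
km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(weather)

# Step 2: train one BP-style network per weather type on the screened samples
models = {}
for c in range(3):
    idx = km.labels_ == c
    models[c] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=1).fit(weather[idx], pv_out[idx])

# Prediction: route a new day to its weather type, then use that type's model
new_day = np.array([[0.8, 0.6, 0.2]])
cluster = km.predict(new_day)[0]
print(models[cluster].predict(new_day))
```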

  9. Comparison of Clustering Algorithms using Neural Network Classifier for Satellite Image Classification

    OpenAIRE

    S.Praveena; Dr. S.P Singh

    2015-01-01

    This paper presents a hybrid clustering algorithm and feed-forward neural network classifier for land-cover mapping of trees, shade, buildings, and roads. It starts with a single-step preprocessing procedure to make the image suitable for segmentation. The pre-processed image is segmented using the hybrid genetic-Artificial Bee Colony (ABC) algorithm, which is developed by hybridizing ABC and FCM to obtain effective segmentation of the satellite image, and classified using neural n...

  10. Pinning cluster synchronization in an array of coupled neural networks under event-based mechanism.

    Science.gov (United States)

    Li, Lulu; Ho, Daniel W C; Cao, Jinde; Lu, Jianquan

    2016-04-01

    Cluster synchronization is a typical collective behavior in coupled dynamical systems, where the synchronization occurs within one group, while there is no synchronization among different groups. In this paper, under event-based mechanism, pinning cluster synchronization in an array of coupled neural networks is studied. A new event-triggered sampled-data transmission strategy, where only local and event-triggering states are utilized to update the broadcasting state of each agent, is proposed to realize cluster synchronization of the coupled neural networks. Furthermore, a self-triggered pinning cluster synchronization algorithm is proposed, and a set of iterative procedures is given to compute the event-triggered time instants. Hence, this will reduce the computational load significantly. Finally, an example is given to demonstrate the effectiveness of the theoretical results. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  11. Detection of clustered microcalcifications in masses on mammograms by artificial neural networks

    Science.gov (United States)

    Zhang, Xuejun; Hara, Takeshi; Fujita, Hiroshi; Iwase, Takuji; Endo, Tokiko

    2001-07-01

    The existence of a cluster of microcalcifications in a mass area on a mammogram is one of the important features for distinguishing between benign and malignant breast cancer. However, missed detections often occur because of the low subject contrast in denser backgrounds and the small number of microcalcifications. To achieve a higher performance in detecting clusters in mass areas, we combined the shift-invariant artificial neural network (SIANN) with the triple-ring filter (TRF) method in our computer-aided diagnosis (CAD) system. 150 regions of interest around masses, containing both positive and negative microcalcifications, were selected for training the network by a modified error-back-propagation algorithm. A variable-ring filter was used for eliminating false-positive (FP) detections after the outputs of SIANN and TRF. The remaining FPs were then reduced by a conventional three-layer artificial neural network. Finally, the program distinguished clustered microcalcifications from individual microcalcifications. In a practical detection of 30 cases with 40 clusters in masses, the sensitivity of detecting clusters was improved from 90% by our previous method to 95% by using both SIANN and TRF, while the number of FP clusters was decreased from 0.85 to 0.40 clusters per image.

  12. Validity-Guided Fuzzy Clustering Evaluation for Neural Network-Based Time-Frequency Reassignment

    Directory of Open Access Journals (Sweden)

    Ahmad Khan Adnan

    2010-01-01

    This paper describes the validity-guided fuzzy clustering evaluation for optimal training of localized neural networks (LNNs) used for reassigning time-frequency representations (TFRs). Our experiments show that the validity-guided fuzzy approach alleviates the difficulty of choosing the correct number of clusters and, in conjunction with a neural network-based processing technique using a hybrid approach, can effectively reduce the blur in the spectrograms. In every partitioning problem the number of subsets must be given before the calculation, but it is rarely known a priori; in such cases it must also be searched for using validity measures. Experimental results demonstrate the effectiveness of the approach.

  13. A Study of the Classification Capabilities of Neural Networks Using Unsupervised Learning: A Comparison with K-Means Clustering.

    Science.gov (United States)

    Balakrishnan, P. V. (Sunder); And Others

    1994-01-01

    A simulation study compares nonhierarchical clustering capabilities of a class of neural networks using Kohonen learning with a K-means clustering procedure. The focus is on the ability of the procedures to recover correctly the known cluster structure in the data. Advantages and disadvantages of the procedures are reviewed. (SLD)

  14. Hybrid Clustering-GWO-NARX neural network technique in predicting stock price

    Science.gov (United States)

    Das, Debashish; Safa Sadiq, Ali; Mirjalili, Seyedali; Noraziah, A.

    2017-09-01

    Prediction of stock prices is one of the most challenging tasks due to the nonlinear nature of stock data. Although numerous attempts have been made to predict stock prices with various techniques, the predicted price is not always accurate and the error rate can be high. Consequently, this paper endeavours to determine an efficient stock prediction strategy by implementing a combined method of the Grey Wolf Optimizer (GWO), clustering, and the Nonlinear Autoregressive Exogenous (NARX) technique. The study uses stock data from prominent stock markets, i.e. the New York Stock Exchange (NYSE) and NASDAQ, and from emerging stock markets, i.e. the Malaysian Stock Market (Bursa Malaysia) and the Dhaka Stock Exchange (DSE). It applies the K-means clustering algorithm to determine the most promising cluster, then MGWO is used to determine the classification rate, and finally the stock price is predicted by applying the NARX neural network algorithm. The prediction performance gained through experimentation is compared and assessed to guide investors in making investment decisions. The results of this technique are indeed promising, as it has shown almost precise prediction and an improved error rate. In future work we intend to study the effect of various factors on stock price movement and the selection of parameters, to investigate the influence of positive or negative company news on stock price movement, and to predict stock indices.
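
    A rough sketch of the clustering plus NARX-style prediction stages (the GWO step is omitted, the price and exogenous series are synthetic, and the "most promising" cluster is simply taken here to be the largest one):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Hypothetical daily closing prices and an exogenous signal (e.g. an index level)
price = np.cumsum(rng.normal(0, 1, 600)) + 100
exog = np.cumsum(rng.normal(0, 1, 600))

lag = 5
X, y = [], []
for t in range(lag, len(price) - 1):
    X.append(np.concatenate([price[t - lag:t], exog[t - lag:t]]))
    y.append(price[t + 1])
X, y = np.array(X), np.array(y)

# Clustering step: group training windows, keep the largest cluster as a stand-in
# for the "most promising" one
km = KMeans(n_clusters=4, n_init=10, random_state=2).fit(X)
best = np.bincount(km.labels_).argmax()
Xb, yb = X[km.labels_ == best], y[km.labels_ == best]

# NARX-style network: lagged prices and exogenous values feed an MLP
narx = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=2).fit(Xb, yb)
print(narx.predict(X[-1:]))
```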

  15. Wavelet neural networks initialization using hybridized clustering and harmony search algorithm: Application in epileptic seizure detection

    Science.gov (United States)

    Zainuddin, Zarita; Lai, Kee Huong; Ong, Pauline

    2013-04-01

    Artificial neural networks (ANNs) are powerful mathematical models that are used to solve complex real world problems. Wavelet neural networks (WNNs), which were developed based on the wavelet theory, are a variant of ANNs. During the training phase of WNNs, several parameters need to be initialized; including the type of wavelet activation functions, translation vectors, and dilation parameter. The conventional k-means and fuzzy c-means clustering algorithms have been used to select the translation vectors. However, the solution vectors might get trapped at local minima. In this regard, the evolutionary harmony search algorithm, which is capable of searching for near-optimum solution vectors, both locally and globally, is introduced to circumvent this problem. In this paper, the conventional k-means and fuzzy c-means clustering algorithms were hybridized with the metaheuristic harmony search algorithm. In addition to obtaining the estimation of the global minima accurately, these hybridized algorithms also offer more than one solution to a particular problem, since many possible solution vectors can be generated and stored in the harmony memory. To validate the robustness of the proposed WNNs, the real world problem of epileptic seizure detection was presented. The overall classification accuracy from the simulation showed that the hybridized metaheuristic algorithms outperformed the standard k-means and fuzzy c-means clustering algorithms.
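
    As a sketch of how clustering can supply the translation vectors of a wavelet network (k-means only, without the harmony search refinement; the data, wavelet choice, and dilation value are hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Hypothetical 1-D regression data
x = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(2 * x[:, 0]) + rng.normal(0, 0.05, 300)

# Translation vectors chosen by k-means clustering of the inputs
n_wavelons = 8
centers = KMeans(n_clusters=n_wavelons, n_init=10, random_state=3).fit(x).cluster_centers_
dilation = 1.0

def mexican_hat(r):
    return (1 - r ** 2) * np.exp(-0.5 * r ** 2)

# Hidden layer: one wavelet activation per translation vector
R = np.linalg.norm((x[:, None, :] - centers[None, :, :]) / dilation, axis=2)
H = mexican_hat(R)

# Output weights by ordinary least squares
w, *_ = np.linalg.lstsq(H, y, rcond=None)
print("training RMSE:", np.sqrt(np.mean((H @ w - y) ** 2)))
```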

  16. Upon the opportunity to apply ART2 Neural Network for clusterization of biodiesel fuels

    Directory of Open Access Journals (Sweden)

    Petkov T.

    2016-03-01

    A chemometric approach using an artificial neural network for clusterization of biodiesels was developed. It is based on the artificial ART2 neural network. Gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS) were used for quantitative and qualitative analysis of biodiesels produced from different feedstocks, and FAME (fatty acid methyl ester) profiles were determined. In total, 96 analytical results for 7 different classes of biofuel plants (sunflower, rapeseed, corn, soybean, palm, peanut, "unknown") were used as objects. The analysis of the biodiesels showed the content of five major FAMEs (C16:0, C18:0, C18:1, C18:2, C18:3), and these components were used as inputs in the model. After training with 6 samples for which the origin was known, the ANN was verified and tested with ninety "unknown" samples. The present research demonstrates the successful application of a neural network for the recognition of biodiesels according to their feedstock, which gives information on their properties and handling.

  17. Upon the opportunity to apply ART2 Neural Network for clusterization of biodiesel fuels

    Science.gov (United States)

    Petkov, T.; Mustafa, Z.; Sotirov, S.; Milina, R.; Moskovkina, M.

    2016-03-01

    A chemometric approach using an artificial neural network for clusterization of biodiesels was developed. It is based on the artificial ART2 neural network. Gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS) were used for quantitative and qualitative analysis of biodiesels produced from different feedstocks, and FAME (fatty acid methyl ester) profiles were determined. In total, 96 analytical results for 7 different classes of biofuel plants (sunflower, rapeseed, corn, soybean, palm, peanut, "unknown") were used as objects. The analysis of the biodiesels showed the content of five major FAMEs (C16:0, C18:0, C18:1, C18:2, C18:3), and these components were used as inputs in the model. After training with 6 samples for which the origin was known, the ANN was verified and tested with ninety "unknown" samples. The present research demonstrates the successful application of a neural network for the recognition of biodiesels according to their feedstock, which gives information on their properties and handling.

  18. Cluster analysis and artificial neural networks multivariate classification of onion varieties.

    Science.gov (United States)

    Rodríguez Galdón, Beatriz; Peña-Méndez, Eladia; Havel, Josef; Rodríguez Rodríguez, Elena María; Díaz Romero, Carlos

    2010-11-10

    Eight cultivars of differently colored onions (white, golden, and red) were evaluated for fresh bulbs cultivated and grown under the same environmental and agronomical conditions. Cluster analysis and principal component analysis based on data for different flavonoids, total phenols, and pungency showed that the onions were not clustered according to variety (degree of genetic similarity), whereas color was the variable with the highest influence, ranging between 50 and 70%. Artificial neural networks were applied to study the possibility of discriminating among onion varieties. Characterization of the onions according to variety and provenance of the seeds was around 95-100%. Samples belonging to the Carrizal Alto provenance had an incorrect classification for 25% of the data.

  19. Design of hybrid radial basis function neural networks (HRBFNNs) realized with the aid of hybridization of fuzzy clustering method (FCM) and polynomial neural networks (PNNs).

    Science.gov (United States)

    Huang, Wei; Oh, Sung-Kwun; Pedrycz, Witold

    2014-12-01

    In this study, we propose Hybrid Radial Basis Function Neural Networks (HRBFNNs) realized with the aid of fuzzy clustering method (Fuzzy C-Means, FCM) and polynomial neural networks. Fuzzy clustering used to form information granulation is employed to overcome a possible curse of dimensionality, while the polynomial neural network is utilized to build local models. Furthermore, genetic algorithm (GA) is exploited here to optimize the essential design parameters of the model (including fuzzification coefficient, the number of input polynomial fuzzy neurons (PFNs), and a collection of the specific subset of input PFNs) of the network. To reduce dimensionality of the input space, principal component analysis (PCA) is considered as a sound preprocessing vehicle. The performance of the HRBFNNs is quantified through a series of experiments, in which we use several modeling benchmarks of different levels of complexity (different number of input variables and the number of available data). A comparative analysis reveals that the proposed HRBFNNs exhibit higher accuracy in comparison to the accuracy produced by some models reported previously in the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Photo-z with CuBANz: An improved photometric redshift estimator using Clustering aided Back propagation Neural network

    Science.gov (United States)

    Samui, Saumyadip; Samui Pal, Shanoli

    2017-02-01

    We present an improved photometric redshift estimator code, CuBANz, that is publicly available at https://goo.gl/fpk90V. It uses a back propagation neural network along with clustering of the training set, which makes it more efficient than existing neural network codes. In CuBANz, the training set is divided into several self-learning clusters with galaxies having similar photometric properties and spectroscopic redshifts within a given span. The clustering algorithm uses the color information (i.e. u - g, g - r, etc.) rather than the apparent magnitudes at the various photometric bands, as the photometric redshift is more sensitive to the flux differences between different bands than to the actual values. Separate neural networks are trained for each cluster using all possible colors, magnitudes, and uncertainties in the measurements. For a galaxy with unknown redshift, we identify the closest possible clusters having similar photometric properties and use those clusters to get the photometric redshift using the particular networks that were trained with those cluster members. For galaxies that do not match any training cluster, the photometric redshifts are obtained from a separate network that uses the entire training set. This clustering method enables us to determine the redshifts more accurately. The SDSS Stripe 82 catalog has been used here for the demonstration of the code. For the clustered sources with redshift range z_spec < 0.7, the residual error, ⟨(z_spec - z_phot)^2⟩^(1/2), in the training/testing phase is as low as 0.03, compared with a residual error of 0.05 for the existing ANNz code on the same test data set. Further, we provide a much better estimate of the uncertainty of the derived photometric redshift.
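
    A toy illustration of the per-cluster training scheme (not the CuBANz code; the catalogue is random and the bands, network sizes, and cluster count are arbitrary):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
# Hypothetical training catalogue: magnitudes in 5 bands and spectroscopic redshifts
mags = rng.uniform(18, 24, size=(2000, 5))
z_spec = rng.uniform(0, 0.7, 2000)

# Use colours (adjacent-band differences) rather than raw magnitudes
colours = np.diff(mags, axis=1)                 # band-to-band differences

# Split the training set into self-similar clusters in colour space
km = KMeans(n_clusters=10, n_init=10, random_state=4).fit(colours)

# Train one network per cluster on colours plus magnitudes
nets = {}
for c in range(10):
    idx = km.labels_ == c
    feats = np.hstack([colours[idx], mags[idx]])
    nets[c] = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                           random_state=4).fit(feats, z_spec[idx])

# For a new galaxy: find its nearest cluster and use that cluster's network
new_mags = rng.uniform(18, 24, size=(1, 5))
new_col = np.diff(new_mags, axis=1)
c = km.predict(new_col)[0]
print(nets[c].predict(np.hstack([new_col, new_mags])))
```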

  1. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule.

    Science.gov (United States)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin

    2015-11-01

    In this paper, the generation of the multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking or bursting neural network can be self-organized into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure for the burst-based self-organized neural network (BSON) is much shorter than that for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organizes from the bursting dynamics has high efficiency in information processing.

  2. Effects of bursting dynamic features on the generation of multi-clustered structure of neural network with symmetric spike-timing-dependent plasticity learning rule

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin, E-mail: xmli@cqu.edu.cn [Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044 (China); College of Automation, Chongqing University, Chongqing 400044 (China)

    2015-11-15

    In this paper, the generation of the multi-clustered structure of a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking or bursting neural network can be self-organized into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumption of this clustering procedure for the burst-based self-organized neural network (BSON) is much shorter than that for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and smaller shortest path length than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that this network has higher topological complexity and dynamical diversity, which is beneficial for enhancing information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network which self-organizes from the bursting dynamics has high efficiency in information processing.

  3. High-dimensional neural network potentials for solvation: The case of protonated water clusters in helium

    Science.gov (United States)

    Schran, Christoph; Uhl, Felix; Behler, Jörg; Marx, Dominik

    2018-03-01

    The design of accurate helium-solute interaction potentials for the simulation of chemically complex molecules solvated in superfluid helium has long been a cumbersome task due to the rather weak but strongly anisotropic nature of the interactions. We show that this challenge can be met by using a combination of an effective pair potential for the He-He interactions and a flexible high-dimensional neural network potential (NNP) for describing the complex interaction between helium and the solute in a pairwise additive manner. This approach yields an excellent agreement with a mean absolute deviation as small as 0.04 kJ mol-1 for the interaction energy between helium and both hydronium and Zundel cations compared with coupled cluster reference calculations with an energetically converged basis set. The construction and improvement of the potential can be performed in a highly automated way, which opens the door for applications to a variety of reactive molecules to study the effect of solvation on the solute as well as the solute-induced structuring of the solvent. Furthermore, we show that this NNP approach yields very convincing agreement with the coupled cluster reference for properties like many-body spatial and radial distribution functions. This holds for the microsolvation of the protonated water monomer and dimer by a few helium atoms up to their solvation in bulk helium as obtained from path integral simulations at about 1 K.

  4. Study of the Ground-State Geometry of Silicon Clusters Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    M.R. Lemes

    2002-09-01

    Theoretical determination of the ground-state geometry of Si clusters is a difficult task. As the number of local minima grows exponentially with the number of atoms, finding the global minimum is a real challenge. One may start the search procedure from a random distribution of atoms, but it is probably wiser to make use of any available information to restrict the search space. Here, we introduce a new approach, the Assisted Genetic Optimization (AGO), that couples an Artificial Neural Network (ANN) to a Genetic Algorithm (GA). Using available information on small silicon clusters, we trained an ANN to predict good starting points (initial population) for the GA. AGO is applied to Si10 and Si20 and compared to the pure GA. Our results indicate: (i) AGO is at least 5 times faster than the pure GA in our test case; (ii) ANN training can be made very fast and successfully plays the role of an experienced investigator; (iii) AGO can easily be adapted to other optimization problems.

  5. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the concepts of the multi-layer perceptron, learning, and their use as data classifiers. The concept is then presented in a second part using in more detail the mathematical approach, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  6. Self-Adaptive Prediction of Cloud Resource Demands Using Ensemble Model and Subtractive-Fuzzy Clustering Based Fuzzy Neural Network

    Directory of Open Access Journals (Sweden)

    Zhijia Chen

    2015-01-01

    In an IaaS (infrastructure as a service) cloud environment, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurate prediction of resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands. Then the architecture of the prediction model is constructed. We adopt several base predictors to compose the ensemble model. Then the structure and learning algorithm of the fuzzy neural network is studied. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, we adopt different criteria to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands.

  7. Self-Adaptive Prediction of Cloud Resource Demands Using Ensemble Model and Subtractive-Fuzzy Clustering Based Fuzzy Neural Network

    Science.gov (United States)

    Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong

    2015-01-01

    In an IaaS (infrastructure as a service) cloud environment, users are provisioned with virtual machines (VMs). To allocate resources for users dynamically and effectively, accurate prediction of resource demands is essential. For this purpose, this paper proposes a self-adaptive prediction method using an ensemble model and a subtractive-fuzzy clustering based fuzzy neural network (ESFCFNN). We analyze the characteristics of user preferences and demands. Then the architecture of the prediction model is constructed. We adopt several base predictors to compose the ensemble model. Then the structure and learning algorithm of the fuzzy neural network is studied. To obtain the number of fuzzy rules and the initial values of the premise and consequent parameters, this paper proposes fuzzy c-means combined with a subtractive clustering algorithm, that is, subtractive-fuzzy clustering. Finally, we adopt different criteria to evaluate the proposed method. The experimental results show that the method is accurate and effective in predicting resource demands. PMID:25691896
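
    To make the rule-initialization step concrete, here is a minimal plain fuzzy c-means sketch (it omits the subtractive-clustering stage that the paper combines with FCM, and the demand data are synthetic):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means; a simplified stand-in for the paper's
    subtractive-fuzzy clustering used to initialize the fuzzy rules."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    return centers, U

# Hypothetical resource-demand vectors (e.g. CPU, memory, bandwidth usage)
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(mu, 0.3, size=(100, 3)) for mu in (0.5, 2.0, 4.0)])
centers, U = fuzzy_c_means(X, c=3)
print("initial rule centers:\n", centers)
```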

  8. Robust Clustering of Acoustic Emission Signals Using Neural Networks and Signal Subspace Projections

    Directory of Open Access Journals (Sweden)

    Vahid Emamian

    2003-03-01

    Acoustic emission-based techniques are being used for the nondestructive inspection of mechanical systems. For reliable automatic fault monitoring related to the generation and propagation of cracks, it is important to identify the transient crack-related signals in the presence of strong time-varying noise and other interference. A prominent difficulty is the inability to differentiate events due to crack growth from noise of various origins. This work presents a novel algorithm for automatic clustering and separation of acoustic emission (AE) events based on multiple features extracted from the experimental data. The algorithm consists of two steps. In the first step, the noise is separated from the events of interest and subsequently removed using a combination of covariance analysis, principal component analysis (PCA), and differential time delay estimates. The second step processes the remaining data using a self-organizing map (SOM) neural network, which outputs the noise and AE signals into separate neurons. To improve the efficiency of classification, the short-time Fourier transform (STFT) is applied to retain the time-frequency features of the remaining events, reducing the dimension of the data. The algorithm is verified with two sets of data, and a correct classification ratio over 95% is achieved.
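
    A compact sketch of the two-stage idea (PCA as a stand-in for the paper's noise-separation stage, then a tiny self-organizing map assigning events to neurons); the feature vectors are synthetic and the SOM here is deliberately minimal, not the paper's configuration:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
# Hypothetical feature vectors extracted from AE events (e.g. STFT energies)
X = np.vstack([rng.normal(0, 1, size=(200, 20)),        # noise-like events
               rng.normal(3, 1, size=(50, 20))])        # crack-like events

# Step 1: dimensionality reduction before clustering
Z = PCA(n_components=5).fit_transform(X)

# Step 2: a tiny 1-D self-organizing map separating events into neurons
n_units, epochs, lr0, sigma0 = 4, 30, 0.5, 1.5
W = rng.normal(0, 1, size=(n_units, Z.shape[1]))
for e in range(epochs):
    lr = lr0 * (1 - e / epochs)
    sigma = sigma0 * (1 - e / epochs) + 0.5
    for z in rng.permutation(Z):
        bmu = np.argmin(np.linalg.norm(W - z, axis=1))       # best matching unit
        h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
        W += lr * h[:, None] * (z - W)

assignments = np.argmin(np.linalg.norm(Z[:, None, :] - W[None, :, :], axis=2), axis=1)
print(np.bincount(assignments, minlength=n_units))
```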

  9. Mixture of Clustered Bayesian Neural Networks for Modeling Friction Processes at the Nanoscale.

    Science.gov (United States)

    Zaidan, Martha A; Canova, Filippo F; Laurson, Lasse; Foster, Adam S

    2017-01-10

    Friction and wear are the source of every mechanical device failure, and lubricants are essential for the operation of the devices. These physical phenomena have a complex nature so that no model capable of accurately predicting the behavior of lubricants exists. Thus, lubricants cannot be designed from scratch but have to be screened through expensive trial-error tests. In this study we propose a machine learning (ML) method that infers the relationship between chemical composition of lubricants and their performance from a database. Because no such database of desirable size and completeness is publicly available, we compiled one from molecular dynamics (MD) simulations of toy-model fluids nanoconfined between shearing surfaces. The fluid-friction relation is modeled by a Bayesian neural network (BNN), trained to reproduce the results for a training set of fluids. Due to the inhomogeneous data distribution it was necessary to carefully pick fluids for training and validation from the database with advanced clustering algorithms, rather than using the standard random selection. Different BNNs were then trained on the data clusters and their predictions combined into a mixture of experts. The model provides a prediction of lubricants performance as well as an error bar, at a fraction of the cost of MD. Because most values agree with the actual MD simulations within the estimated error σ, we conclude that the model is satisfactory. This method addresses the challenges brought by noisy, badly distributed, high-dimensional data that are likely to appear in reality as well, and it can be extended to real fluids, if a database could be provided.

  10. DCT-Yager FNN: a novel Yager-based fuzzy neural network with the discrete clustering technique.

    Science.gov (United States)

    Singh, A; Quek, C; Cho, S Y

    2008-04-01

    Earlier clustering techniques such as the modified learning vector quantization (MLVQ) and the fuzzy Kohonen partitioning (FKP) techniques have focused on the derivation of a certain set of parameters so as to define the fuzzy sets in terms of an algebraic function. The fuzzy membership functions thus generated are uniform, normal, and convex. Since any irregular training data is clustered into uniform fuzzy sets (Gaussian, triangular, or trapezoidal), the clustering may not be exact and some amount of information may be lost. In this paper, two clustering techniques using a Kohonen-like self-organizing neural network architecture, namely, the unsupervised discrete clustering technique (UDCT) and the supervised discrete clustering technique (SDCT), are proposed. The UDCT and SDCT algorithms reduce this data loss by introducing nonuniform, normal fuzzy sets that are not necessarily convex. The training data range is divided into discrete points at equal intervals, and the membership value corresponding to each discrete point is generated. Hence, the fuzzy sets obtained contain pairs of values, each pair corresponding to a discrete point and its membership grade. Thus, it can be argued that fuzzy membership functions generated using this kind of a discrete methodology provide a more accurate representation of the actual input data. This fact has been demonstrated by comparing the membership functions generated by the UDCT and SDCT algorithms against those generated by the MLVQ, FKP, and pseudofuzzy Kohonen partitioning (PFKP) algorithms. In addition to these clustering techniques, a novel pattern classifying network called the Yager fuzzy neural network (FNN) is proposed in this paper. This network corresponds completely to the Yager inference rule and exhibits remarkable generalization abilities. A modified version of the pseudo-outer product (POP)-Yager FNN called the modified Yager FNN is introduced that eliminates the drawbacks of the earlier network and yields...

  11. Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network

    Science.gov (United States)

    Wang, Zhen-yu; Zhang, Li-jie

    2017-10-01

    Measurement error of a sensor can be effectively compensated with prediction. Aiming at the large random drift error of a MEMS (Micro Electro Mechanical System) gyroscope, an improved learning algorithm for a Radial Basis Function (RBF) Neural Network (NN) based on K-means clustering and Orthogonal Least Squares (OLS) is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then determines candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively; the prediction error is 9.8019e-7 °/s and the prediction time is 2.4169e-6 s.
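
    As a rough sketch of the centre-selection idea (k-means picks RBF centres and ordinary least squares fits the output weights; the OLS-based centre refinement of the paper is not reproduced, and the drift series is synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Hypothetical gyroscope drift series; predict the next sample from the previous 4
drift = np.cumsum(rng.normal(0, 1e-6, 1000))
lag = 4
X = np.array([drift[t - lag:t] for t in range(lag, len(drift))])
y = drift[lag:]

# K-means supplies the RBF centres
n_centres = 10
centres = KMeans(n_clusters=n_centres, n_init=10, random_state=7).fit(X).cluster_centers_
width = np.mean(np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2))

# Hidden layer of Gaussian basis functions, output weights by least squares
Phi = np.exp(-np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) ** 2
             / (2 * width ** 2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```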

  12. Introduction to neural networks

    CERN Document Server

    James, Frederick E

    1994-02-02

    1. Introduction and overview of Artificial Neural Networks. 2, 3. The Feed-forward Network as an Inverse Problem, and results on the computational complexity of network training. 4. Physics applications of neural networks.

  13. Combining LiDAR Space Clustering and Convolutional Neural Networks for Pedestrian Detection

    OpenAIRE

    Matti, Damien; Ekenel, Hazim Kemal; Thiran, Jean-Philippe

    2017-01-01

    Pedestrian detection is an important component for safety of autonomous vehicles, as well as for traffic and street surveillance. There are extensive benchmarks on this topic and it has been shown to be a challenging problem when applied on real use-case scenarios. In purely image-based pedestrian detection approaches, the state-of-the-art results have been achieved with convolutional neural networks (CNN) and surprisingly few detection frameworks have been built upon multi-cue approaches. In...

  14. A Parallel Strategy for Convolutional Neural Network Based on Heterogeneous Cluster for Mobile Information System

    Directory of Open Access Journals (Sweden)

    Jilin Zhang

    2017-01-01

    With the development of mobile systems, we gain many benefits and much convenience by leveraging mobile devices; at the same time, the information gathered by smartphones, such as location and environment, is also valuable for businesses to provide more intelligent services for customers. More and more machine learning methods have been used in the field of mobile information systems to study user behavior and classify usage patterns, especially the convolutional neural network. With the increase in model training parameters and data scale, the traditional single-machine training method cannot meet the requirements of time complexity in practical application scenarios. Current training frameworks often use simple data-parallel or model-parallel methods to speed up the training process, so heterogeneous computing resources have not been fully utilized. To solve these problems, our paper proposes a delay-synchronization convolutional neural network parallel strategy, which leverages the heterogeneous system. The strategy is based on both synchronous and asynchronous parallel approaches; the model training process can reduce its dependence on the heterogeneous architecture on the premise of ensuring model convergence, so the convolutional neural network framework is more adaptive to different heterogeneous system environments. The experimental results show that the proposed delay synchronization strategy can achieve at least three times the speedup compared to traditional data parallelism.

  15. A Fuzzy Neural Network Based on Non-Euclidean Distance Clustering for Quality Index Model in Slashing Process

    Directory of Open Access Journals (Sweden)

    Yuxian Zhang

    2015-01-01

    The quality index model in the slashing process is difficult to build because of outliers and noise in the original data. To address this problem, a fuzzy neural network based on non-Euclidean distance clustering is proposed, in which the input space is partitioned into many local regions by fuzzy clustering based on a non-Euclidean distance so that the computational complexity is decreased, and the number of fuzzy rules is determined by a validity function based on both the separation and the compactness among clusters. Then, the premise parameters and consequent parameters are trained by a hybrid learning algorithm. Parameter identification is thereby realized; meanwhile, the convergence condition of the consequent parameters is obtained from a Lyapunov function. Finally, the proposed method is applied to build the quality index model in the slashing process, in which the experimental data come from the actual slashing process. The experimental results show that the proposed fuzzy neural network for the quality index model has lower computational complexity and faster convergence time compared with GP-FNN, BPNN, and RBFNN.

  16. Comparison of the applicability of neural networks and cluster classification methods on the example company's financial situation

    Directory of Open Access Journals (Sweden)

    Oldřich Trenz

    2010-01-01

    The paper is focused on comparing the classification ability of a model with a self-learning neural network and methods from cluster analysis. The emphasis is particularly on the comparison of the different approaches on a specific application example, the classification of a company's financial situation. The aim is to critically evaluate the different approaches at the level of application and deployment options. To verify the classification capability of the different approaches, financial data from the "Credit Info" database were used, in particular data describing the financial situation of two hundred and eleven farms of a homogeneous and uniform primary field. The input data were, for the methods used, modified and evaluated by an appropriate methodology. The final solution showed that the approaches used do not exhibit significant differences and can be considered equivalent. Based on this finding, it can be concluded that the artificial intelligence approach (a self-learning neural network) is as effective as the partial methods in the field of cluster analysis. In both cases, these approaches can be an invaluable tool in decision making. When the financial situation is evaluated by an expert, some simplifications are made in the calculation of liquidity, profitability, and other financial indicators. In this respect, neural networks perform better, since these simplifications are not natively included in them. They can also better assess somewhat ambiguous cases, including businesses with an undefined financial situation, i.e. data in the border region. In this respect, the graphical layout of the resulting situation of the sorted objects, produced by the software-implemented neural network model, provides additional support and representation.

  17. Symmetry, Hopf bifurcation, and the emergence of cluster solutions in time delayed neural networks

    Science.gov (United States)

    Wang, Zhen; Campbell, Sue Ann

    2017-11-01

    We consider networks of N identical oscillators with time-delayed, global circulant coupling, modeled by a system of delay differential equations with Z_N symmetry. We first study the existence of Hopf bifurcations induced by the coupling time delay and then use symmetric Hopf bifurcation theory to determine how these bifurcations lead to different patterns of symmetric cluster oscillations. We apply our results to a case study: a network of FitzHugh-Nagumo neurons with diffusive coupling. For this model, we derive the asymptotic stability, global asymptotic stability, absolute instability, and stability switches of the equilibrium point in the plane of coupling time delay (τ) and excitability parameter (a). We investigate the patterns of cluster oscillations induced by the time delay and determine the direction and stability of the bifurcating periodic orbits by employing the multiple timescales method and normal form theory. We find that in the region where stability switching occurs, the dynamics of the system can be switched from the equilibrium point to any symmetric cluster oscillation, and back to the equilibrium point as the time delay is increased.

  18. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron or in performing the next layer neural network computation involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different than those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural network.
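
    The core computational change is easy to state in a few lines; a minimal sketch of a single morphological layer (max of sums in place of sum of products), with made-up numbers:

```python
import numpy as np

def morphological_layer(x, W):
    """Max-of-sums computation: output_i = max_j (x_j + W[i, j]).
    Contrast with a conventional layer's sum of products, x @ W.T."""
    return np.max(x[None, :] + W, axis=1)

x = np.array([0.2, 1.5, -0.3])
W = np.array([[0.0, -1.0, 2.0],
              [1.0,  0.5, 0.0]])
print(morphological_layer(x, W))   # one value per morphological neuron
```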

  19. Improvement for detection of microcalcifications through clustering algorithms and artificial neural networks

    Science.gov (United States)

    Quintanilla-Domínguez, Joel; Ojeda-Magaña, Benjamín; Marcano-Cedeño, Alexis; Cortina-Januchs, María G.; Vega-Corona, Antonio; Andina, Diego

    2011-12-01

    A new method for detecting microcalcifications in regions of interest (ROIs) extracted from digitized mammograms is proposed. The top-hat transform is a technique based on mathematical morphology operations and, in this paper, is used to perform contrast enhancement of the microcalcifications. To improve microcalcification detection, a novel image sub-segmentation approach based on the possibilistic fuzzy c-means algorithm is used. From the original ROIs, window-based features, such as the mean and standard deviation, were extracted; these features were used as an input vector in a classifier. The classifier is based on an artificial neural network to identify patterns belonging to microcalcifications and healthy tissue. Our results show that the proposed method is a good alternative for automatically detecting microcalcifications, because this stage is an important part of early breast cancer detection.

  20. Comparison of rule-based and artificial neural network approaches for improving the automated detection of clustered microcalcifications in mammograms

    Science.gov (United States)

    Nagel, Rufus H.; Nishikawa, Robert M.; Papaioannou, John; Giger, Maryellen L.; Doi, Kunio

    1995-08-01

    Forty-six thousand women die each year in the US from breast cancer. Mammography is the best method of detecting breast cancer and has been shown to reduce breast cancer mortality in randomized controlled studies. Clustered microcalcifications are often the first sign of breast cancer in a mammogram. The use of a second reader may improve the sensitivity of detecting clustered microcalcifications. Our laboratory has developed a computerized scheme for the detection of clustered microcalcifications that is undergoing clinical evaluation. This paper concerns the feature analysis stage of the computerized scheme, which is designed to remove false computer detections. We have examined three methods of feature analysis: rule-based (the method currently used in the clinical system), an artificial neural network (ANN), and a combined method. To compare the three methods, the false-positive (FP) rate at a sensitivity of 85% was measured on two separate databases. The average number of FPs per image was 0.54 for the rule-based method, 0.44 for the ANN, and 0.31 for the combined method. The combined method had the highest performance and will be incorporated into the clinical system.

  1. Classification of Fourier transform infrared microscopic imaging data of human breast cells by cluster analysis and artificial neural networks.

    Science.gov (United States)

    Zhang, Lin; Small, Gary W; Haka, Abigail S; Kidder, Linda H; Lewis, E Neil

    2003-01-01

    Cluster analysis and artificial neural networks (ANNs) are applied to the automated assessment of disease state in Fourier transform infrared microscopic imaging measurements of normal and carcinomatous immortalized human breast cell lines. K-means clustering is used to implement an automated algorithm for the assignment of pixels in the image to cell and non-cell categories. Cell pixels are subsequently classified into carcinoma and normal categories through the use of a feed-forward ANN computed with the Broyden-Fletcher-Goldfarb-Shanno training algorithm. Inputs to the ANN consist of principal component scores computed from Fourier filtered absorbance data. A grid search optimization procedure is used to identify the optimal network architecture and filter frequency response. Data from three images corresponding to normal cells, carcinoma cells, and a mixture of normal and carcinoma cells are used to build and test the classification methodology. A successful classifier is developed through this work, although differences in the spectral backgrounds between the three images are observed to complicate the classification problem. The robustness of the final classifier is improved through the use of a rejection threshold procedure to prevent classification of outlying pixels.
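
    A schematic version of the two-stage pipeline (k-means to isolate cell pixels, then a feed-forward network on PCA scores); the spectra and labels below are random placeholders, and scikit-learn's default MLP training stands in for the BFGS algorithm used in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)
# Hypothetical IR absorbance spectra for image pixels (rows) over 200 wavenumbers
spectra = rng.normal(0, 1, size=(1500, 200))
is_cancer = rng.integers(0, 2, 1500)           # made-up labels for cell pixels

# Step 1: k-means separates pixels into cell / non-cell groups (2 clusters here)
pixel_groups = KMeans(n_clusters=2, n_init=10, random_state=8).fit_predict(spectra)
cells = pixel_groups == 0                       # pretend cluster 0 is the cell class

# Step 2: PCA scores of cell-pixel spectra feed a feed-forward classifier
scores = PCA(n_components=10).fit_transform(spectra[cells])
clf = MLPClassifier(hidden_layer_sizes=(15,), max_iter=500, random_state=8)
clf.fit(scores, is_cancer[cells])
print("training accuracy:", clf.score(scores, is_cancer[cells]))
```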

  2. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  3. Percolation in clustered networks

    OpenAIRE

    Miller, Joel C

    2009-01-01

    The social networks that infectious diseases spread along are typically clustered. Because of the close relation between percolation and epidemic spread, the behavior of percolation in such networks gives insight into infectious disease dynamics. A number of authors have studied clustered networks, but the networks often contain preferential mixing between high degree nodes. We introduce a class of random clustered networks and another class of random unclustered networks with the same prefer...

  4. Clustering of correlated networks

    OpenAIRE

    Dorogovtsev, S. N.

    2003-01-01

    We obtain the clustering coefficient, the degree-dependent local clustering, and the mean clustering of networks with arbitrary correlations between the degrees of the nearest-neighbor vertices. The resulting formulas allow one to determine the nature of the clustering of a network.
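
    For reference, the standard local and mean clustering coefficients that such degree-dependent formulas generalize are (with T_i the number of links among the k_i neighbours of vertex i):

```latex
C_i = \frac{2\,T_i}{k_i\,(k_i - 1)}, \qquad
\bar{C} = \frac{1}{N}\sum_{i=1}^{N} C_i .
```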

  5. A density-functional theory-based neural network potential for water clusters including van der Waals corrections.

    Science.gov (United States)

    Morawietz, Tobias; Behler, Jörg

    2013-08-15

    The fundamental importance of water for many chemical processes has motivated the development of countless efficient but approximate water potentials for large-scale molecular dynamics simulations, from simple empirical force fields to very sophisticated flexible water models. Accurate and generally applicable water potentials should fulfill a number of requirements. They should have a quality close to quantum chemical methods, they should explicitly depend on all degrees of freedom including all relevant many-body interactions, and they should be able to describe molecular dissociation and recombination. In this work, we present a high-dimensional neural network (NN) potential for water clusters based on density-functional theory (DFT) calculations, which is constructed using clusters containing up to 10 monomers and is in principle able to meet all these requirements. We investigate the reliability of specific parametrizations employing two frequently used generalized gradient approximation (GGA) exchange-correlation functionals, PBE and RPBE, as reference methods. We find that the binding energy errors of the NN potentials with respect to DFT are significantly lower than the typical uncertainties of DFT calculations arising from the choice of the exchange-correlation functional. Further, we examine the role of van der Waals interactions, which are not properly described by GGA functionals. Specifically, we incorporate the D3 scheme suggested by Grimme (J. Chem. Phys. 2010, 132, 154104) in our potentials and demonstrate that it can be applied to GGA-based NN potentials in the same way as to DFT calculations without modification. Our results show that the description of small water clusters provided by the RPBE functional is significantly improved if van der Waals interactions are included, while in case of the PBE functional, which is well-known to yield stronger binding than RPBE, van der Waals corrections lead to overestimated binding energies.

  6. Neural Networks: Implementations and Applications

    NARCIS (Netherlands)

    Vonk, E.; Veelenturf, L.P.J.; Jain, L.C.

    1996-01-01

    Artificial neural networks, also called neural networks, have been used successfully in many fields including engineering, science and business. This paper presents the implementation of several neural network simulators and their applications in character recognition and other engineering areas

  7. Parametric Identification of Aircraft Loads: An Artificial Neural Network Approach

    Science.gov (United States)

    2016-03-30

    ...monitoring, flight parameter, nonlinear modeling, Artificial Neural Network, typical loadcase. Introduction: Aircraft load monitoring is an... Neural Networks (ANN), i.e. the BP network and the Kohonen Clustering Network, are applied and revised by a Kalman Filter and a Genetic Algorithm to build...

  8. Hierarchical and non-hierarchical clustering and artificial neural networks for the characterization of groups of feedlot-finished male cattle

    Directory of Open Access Journals (Sweden)

    Wignez Henrique

    2015-03-01

    The individual experimental results of 1,393 feedlot-finished cattle of different genetic groups, obtained at different research institutions, were collected. Exploratory multivariate hierarchical analysis was applied, which permitted the division of the cattle into seven groups containing animals with similar performance patterns. The following variables were studied: weight of the animal at feedlot entry and exit, concentrate percentage, time spent in the feedlot, dry matter intake, weight gain, and feed efficiency. The data were submitted to non-hierarchical k-means cluster analysis, which revealed that all traits should be considered. In addition to the variables used in the previous analysis, the following variables were included: dietary nutrient content, crude protein and total digestible nutrient intake, hot carcass weight and yield, fat coverage, and loin eye area. Using all of these data, structures of 3 to 14 groups were formed and analyzed using Kohonen self-organizing maps. Specimens of the Nellore breed, either intact or castrated, were diluted among groups in the hierarchical and non-hierarchical analyses, as well as in the analysis with artificial neural networks. Nellore animals therefore cannot be characterized as having a single behavior when finished in feedlots, since they participate in groups formed with animals of other Zebu breeds (Gyr, Guzerá) and with animals of European breeds (Hereford, Aberdeen Angus, Caracu) that exhibit different performance potentials.

  9. Securing Personal Network Clusters

    NARCIS (Netherlands)

    Jehangir, A.; Heemstra de Groot, S.M.

    2007-01-01

    A Personal Network is a self-organizing, secure and private network of a user’s devices notwithstanding their geographic location. It aims to utilize pervasive computing to provide users with new and improved services. In this paper we propose a model for securing Personal Network clusters. Clusters

  10. Hidden neural networks

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose; Riis, Søren Kamaric

    1999-01-01

    A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability...... parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum...... likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear...

  11. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance and training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....

  12. Critical Branching Neural Networks

    Science.gov (United States)

    Kello, Christopher T.

    2013-01-01

    It is now well-established that intrinsic variations in human neural and behavioral activity tend to exhibit scaling laws in their fluctuations and distributions. The meaning of these scaling laws is an ongoing matter of debate between isolable causes versus pervasive causes. A spiking neural network model is presented that self-tunes to critical…

  13. Million city traveling salesman problem solution by divide and conquer clustering with adaptive resonance neural networks.

    Science.gov (United States)

    Mulder, Samuel A; Wunsch, Donald C

    2003-01-01

    The Traveling Salesman Problem (TSP) is a very hard optimization problem in the field of operations research. It has been shown to be NP-complete, and is an often-used benchmark for new optimization techniques. One of the main challenges with this problem is that standard, non-AI heuristic approaches such as the Lin-Kernighan algorithm (LK) and the chained LK variant are currently very effective and in wide use for the common fully connected, Euclidean variant that is considered here. This paper presents an algorithm that uses adaptive resonance theory (ART) in combination with a variation of the Lin-Kernighan local optimization algorithm to solve very large instances of the TSP. The primary advantage of this algorithm over traditional LK and chained-LK approaches is the increased scalability and parallelism allowed by the divide-and-conquer clustering paradigm. Tours obtained by the algorithm are lower quality, but scaling is much better and there is a high potential for increasing performance using parallel hardware.
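    The record above describes a divide-and-conquer strategy: cluster the cities, solve each cluster locally, then stitch the pieces together. The sketch below illustrates only that structure under simplifying assumptions; k-means and a nearest-neighbour tour stand in for the paper's ART clustering and Lin-Kernighan refinement, and all names are placeholders.

```python
# Hedged sketch of divide-and-conquer TSP (illustration only): the paper couples ART
# clustering with Lin-Kernighan; here k-means and a nearest-neighbour tour stand in
# for both, just to show the split-solve-stitch structure.
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

def greedy_tour(points, idx):
    # Nearest-neighbour tour over the subset of city indices `idx`.
    remaining = list(idx)
    tour = [remaining.pop(0)]
    while remaining:
        last = points[tour[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(points[i] - last))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

def divide_and_conquer_tsp(points, k=4):
    labels, centers = kmeans(points, k)
    cluster_order = greedy_tour(centers, list(range(k)))   # order the clusters themselves
    tour = []
    for c in cluster_order:
        idx = list(np.where(labels == c)[0])
        if idx:
            tour += greedy_tour(points, idx)                # solve each piece locally
    return tour

cities = np.random.default_rng(1).random((200, 2))
tour = divide_and_conquer_tsp(cities, k=4)
print(len(tour), "cities visited")
```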

  14. Neural network applications

    Science.gov (United States)

    Padgett, Mary L.; Desai, Utpal; Roppel, T.A.; White, Charles R.

    1993-01-01

    A design procedure is suggested for neural networks which accommodates the inclusion of such knowledge-based systems techniques as fuzzy logic and pairwise comparisons. The use of these procedures in the design of applications combines qualitative and quantitative factors with empirical data to yield a model with justifiable design and parameter selection procedures. The procedure is especially relevant to areas of back-propagation neural network design which are highly responsive to the use of precisely recorded expert knowledge.

  15. Classification of behavior using unsupervised temporal neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Adair, K.L. [Florida State Univ., Tallahassee, FL (United States). Dept. of Computer Science; Argo, P. [Los Alamos National Lab., NM (United States)

    1998-03-01

    Adding recurrent connections to unsupervised neural networks used for clustering creates a temporal neural network which clusters a sequence of inputs as they appear over time. The model presented combines the Jordan architecture with the unsupervised learning technique Adaptive Resonance Theory, Fuzzy ART. The combination yields a neural network capable of quickly clustering sequences of patterns as they are generated. The applicability of the architecture is illustrated through a facility monitoring problem.
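    To make the combination concrete, the following is a hedged, minimal sketch of a Fuzzy ART clusterer with a Jordan-style context loop: the previous winning category, decayed by a factor mu, is appended to the next input. The class name and parameters (rho, alpha, beta, mu, max_cats) are my own choices, not the authors' implementation.

```python
# Minimal Fuzzy ART with a Jordan-style recurrent context (hedged sketch, not the
# published model). Inputs are assumed to lie in [0, 1] and are complement coded.
import numpy as np

class TemporalFuzzyART:
    def __init__(self, dim, rho=0.75, alpha=0.001, beta=1.0, mu=0.5, max_cats=50):
        self.rho, self.alpha, self.beta, self.mu = rho, alpha, beta, mu
        self.dim = dim + max_cats                 # raw features + context units
        self.w = np.empty((0, 2 * self.dim))      # complement-coded category weights
        self.context = np.zeros(max_cats)
        self.max_cats = max_cats

    def _choice(self, i, w):
        return np.minimum(i, w).sum() / (self.alpha + w.sum())

    def _present(self, x):
        a = np.concatenate([x, self.context])
        i = np.concatenate([a, 1.0 - a])          # complement coding
        for j in np.argsort([-self._choice(i, w) for w in self.w]):
            if np.minimum(i, self.w[j]).sum() / i.sum() >= self.rho:   # vigilance test
                self.w[j] = self.beta * np.minimum(i, self.w[j]) + (1 - self.beta) * self.w[j]
                return j
        self.w = np.vstack([self.w, i])           # commit a new category
        return len(self.w) - 1

    def step(self, x):
        j = self._present(np.clip(x, 0.0, 1.0))
        self.context *= self.mu                   # Jordan-style decaying context
        if j < self.max_cats:
            self.context[j] = 1.0
        return j

art = TemporalFuzzyART(dim=3)
seq = np.random.default_rng(0).random((20, 3))
print([art.step(x) for x in seq])
```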

  16. Complex time series analysis of PM10 and PM2.5 for a coastal site using artificial neural network modelling and k-means clustering

    Science.gov (United States)

    Elangasinghe, M. A.; Singhal, N.; Dirks, K. N.; Salmond, J. A.; Samarasinghe, S.

    2014-09-01

    This paper uses artificial neural networks (ANN), combined with k-means clustering, to understand the complex time series of PM10 and PM2.5 concentrations at a coastal location of New Zealand based on data from a single site. Out of available meteorological parameters from the network (wind speed, wind direction, solar radiation, temperature, relative humidity), key factors governing the pattern of the time series concentrations were identified through input sensitivity analysis performed on the trained neural network model. The transport pathways of particulate matter under these key meteorological parameters were further analysed through bivariate concentration polar plots and k-means clustering techniques. The analysis shows that the external sources such as marine aerosols and local sources such as traffic and biomass burning contribute equally to the particulate matter concentrations at the study site. These results are in agreement with the results of receptor modelling by the Auckland Council based on Positive Matrix Factorization (PMF). Our findings also show that contrasting concentration-wind speed relationships exist between marine aerosols and local traffic sources resulting in very noisy and seemingly large random PM10 concentrations. The inclusion of cluster rankings as an input parameter to the ANN model showed a statistically significant (p transport characteristics prior to the implementation of costly chemical analysis techniques or advanced air dispersion models.
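    A hedged sketch of the general idea, not the authors' model: cluster the meteorological drivers with k-means and feed the cluster index as an extra input to a small feed-forward ANN predicting PM10. The data, column meanings and network size below are synthetic placeholders.

```python
# Hedged sketch (not the published configuration): k-means cluster label appended to
# the meteorological inputs of a small feed-forward regressor for PM10.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
met = rng.random((500, 5))             # wind speed, wind direction, radiation, temp, RH (synthetic)
pm10 = met @ rng.random(5) + 0.1 * rng.standard_normal(500)   # synthetic target

met_z = StandardScaler().fit_transform(met)
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(met_z)

X = np.column_stack([met_z, clusters])           # cluster label as an extra feature
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:400], pm10[:400])
print("R^2 on held-out data:", round(model.score(X[400:], pm10[400:]), 3))
```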

  17. A functional clustering algorithm for the analysis of neural relationships

    CERN Document Server

    Feldt, S; Hetrick, V L; Berke, J D; Zochowski, M

    2008-01-01

    We formulate a novel technique for the detection of functional clusters in neural data. In contrast to prior network clustering algorithms, our procedure progressively combines spike trains and derives the optimal clustering cutoff in a simple and intuitive manner. To demonstrate the power of this algorithm to detect changes in network dynamics and connectivity, we apply it to both simulated data and real neural data obtained from the mouse hippocampus during exploration and slow-wave sleep. We observe state-dependent clustering patterns consistent with known neurophysiological processes involved in memory consolidation.
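    The sketch below illustrates the general spirit of such a procedure, not the published algorithm: spike trains are binned, the most correlated pair of clusters is merged by summing their binned trains, and the similarity recorded at each merge can be inspected to choose a cutoff. The binning, similarity measure and cutoff rule are assumptions made for illustration.

```python
# Hedged sketch of functional clustering of spike trains: progressively merge the
# most correlated pair of (combined) binned trains and record the merge similarity.
import numpy as np

def bin_spikes(spike_times, t_max, bin_size=0.05):
    edges = np.arange(0.0, t_max + bin_size, bin_size)
    return np.array([np.histogram(st, bins=edges)[0] for st in spike_times], float)

def functional_clustering(binned):
    clusters = {i: binned[i].copy() for i in range(len(binned))}
    members = {i: [i] for i in clusters}
    merges = []
    while len(clusters) > 1:
        keys = list(clusters)
        best, best_sim = None, -np.inf
        for a in range(len(keys)):
            for b in range(a + 1, len(keys)):
                sim = np.corrcoef(clusters[keys[a]], clusters[keys[b]])[0, 1]
                if sim > best_sim:
                    best, best_sim = (keys[a], keys[b]), sim
        a, b = best
        clusters[a] = clusters[a] + clusters.pop(b)    # combine the spike trains
        members[a] = members[a] + members.pop(b)
        merges.append((best_sim, list(members[a])))
    return merges                                       # similarity sequence suggests a cutoff

rng = np.random.default_rng(0)
trains = [np.sort(rng.uniform(0, 10, rng.integers(50, 100))) for _ in range(6)]
for sim, group in functional_clustering(bin_spikes(trains, t_max=10.0)):
    print(round(sim, 3), group)
```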

  18. Hyperbolic Hopfield neural networks.

    Science.gov (United States)

    Kobayashi, M

    2013-02-01

    In recent years, several neural networks using Clifford algebra have been studied. Clifford algebra is also called geometric algebra. Complex-valued Hopfield neural networks (CHNNs) are the most popular neural networks using Clifford algebra. The aim of this brief is to construct hyperbolic HNNs (HHNNs) as an analog of CHNNs. Hyperbolic algebra is a Clifford algebra based on Lorentzian geometry. In this brief, a hyperbolic neuron is defined in a manner analogous to a phasor neuron, which is a typical complex-valued neuron model. HHNNs share common concepts with CHNNs, such as the angle and energy. However, HHNNs and CHNNs are different in several aspects. The states of hyperbolic neurons do not form a circle, and, therefore, the start and end states are not identical. In the quantized version, unlike complex-valued neurons, hyperbolic neurons have an infinite number of states.
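    As a rough aid to reading the abstract, the following is my own rendering (not the paper's notation) of the hyperbolic analogue of a phasor: the unit u squares to +1, so neuron states lie on a hyperbola rather than a circle, which is why the start and end states differ.

```latex
% Hedged rendering, notation mine: hyperbolic numbers and a hyperbolic "phasor" state.
\[
  z = x + u\,y, \qquad u^2 = +1, \qquad e^{u\theta} = \cosh\theta + u\,\sinh\theta,
\]
\[
  \text{neuron state } s = e^{u\theta} \ \text{on the unit hyperbola } x^2 - y^2 = 1,
  \qquad \text{weighted input } h_i = \sum_j w_{ij}\, s_j .
\]
```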

  19. Introduction to Artificial Neural Networks

    DEFF Research Database (Denmark)

    Larsen, Jan

    1999-01-01

    The note addresses an introduction to signal analysis and classification based on artificial feed-forward neural networks.

  20. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS, and pseudo-inverse. This is largely an exercise in understanding how our neural network code works. 1 ref.
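    The framing in this abstract is easy to reproduce: write 1-D convolution as y = H x, so deconvolution becomes (pseudo-)inversion of H. The sketch below, with an arbitrary kernel and synthetic signal, compares the pseudo-inverse with a simple gradient (LMS-like) iteration; it illustrates the framing, not the report's code.

```python
# Deconvolution as matrix inversion (hedged illustration): build the Toeplitz matrix H,
# then compare a pseudo-inverse solution with an iterative gradient solution.
import numpy as np

rng = np.random.default_rng(0)
kernel = np.array([0.6, 0.3, 0.1])            # arbitrary, well-conditioned blur kernel
x_true = rng.standard_normal(64)
y = np.convolve(x_true, kernel, mode="full")  # observed signal, y = H @ x_true

n, m = len(x_true), len(y)
H = np.zeros((m, n))
for i in range(n):
    H[i:i + len(kernel), i] = kernel          # convolution written as a matrix

x_pinv = np.linalg.pinv(H) @ y                # pseudo-inverse solution

x_lms = np.zeros(n)                           # gradient (LMS-like) iteration
for _ in range(2000):
    x_lms -= 0.1 * H.T @ (H @ x_lms - y)

print("pinv error:", np.linalg.norm(x_pinv - x_true))
print("LMS  error:", np.linalg.norm(x_lms - x_true))
```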

  1. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely, 1) Networks, Structure Optimisation, Robustness and Stochasticity 2) Advances in Modelling Biological and Environmental Systems and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.

  2. Clustering signatures classify directed networks

    Science.gov (United States)

    Ahnert, S. E.; Fink, T. M. A.

    2008-09-01

    We use a clustering signature, based on a recently introduced generalization of the clustering coefficient to directed networks, to analyze 16 directed real-world networks of five different types: social networks, genetic transcription networks, word adjacency networks, food webs, and electric circuits. We show that these five classes of networks are cleanly separated in the space of clustering signatures due to the statistical properties of their local neighborhoods, demonstrating the usefulness of clustering signatures as a classifier of directed networks.

  3. Neural network technologies

    Science.gov (United States)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  4. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. And (3) data analysis: in practical data it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Neural networks for triggering

    Energy Technology Data Exchange (ETDEWEB)

    Denby, B. (Fermi National Accelerator Lab., Batavia, IL (USA)); Campbell, M. (Michigan Univ., Ann Arbor, MI (USA)); Bedeschi, F. (Istituto Nazionale di Fisica Nucleare, Pisa (Italy)); Chriss, N.; Bowers, C. (Chicago Univ., IL (USA)); Nesti, F. (Scuola Normale Superiore, Pisa (Italy))

    1990-01-01

    Two types of neural network beauty trigger architectures, based on identification of electrons in jets and recognition of secondary vertices, have been simulated in the environment of the Fermilab CDF experiment. The efficiencies for B's and rejection of background obtained are encouraging. If hardware tests are successful, the electron identification architecture will be tested in the 1991 run of CDF. 10 refs., 5 figs., 1 tab.

  6. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Artificial neural networks are complex networks emulating the way human rational neurons process data. They have been widely used for prediction, clustering, classification and association. The training algorithms that are used to determine the network weights are almost the most important factor influencing the neural networks' performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights to achieve better neural performance. This paper aims to use recently proposed algorithms for optimizing neural network weights, comparing their performance with other classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we examine them on the classification of four opposite binary XOR clusters and on continuous real data sets such as Iris and Ecoli.

  7. [Artificial neural networks in Neurosciences].

    Science.gov (United States)

    Porras Chavarino, Carmen; Salinas Martínez de Lecea, José María

    2011-11-01

    This article shows that artificial neural networks are used for confirming the relationships between physiological and cognitive changes. Specifically, we explore the influence of a decrease of neurotransmitters on the behaviour of old people in recognition tasks. This artificial neural network recognizes learned patterns. When we change the threshold of activation in some units, the artificial neural network simulates the experimental results of old people in recognition tasks. However, the main contributions of this paper are the design of an artificial neural network and its operation inspired by the nervous system and the way the inputs are coded and the process of orthogonalization of patterns.

  8. Analysis of neural networks

    CERN Document Server

    Heiden, Uwe

    1980-01-01

    The purpose of this work is a unified and general treatment of activity in neural networks from a mathematical point of view. Possible applications of the theory presented are indicated throughout the text. However, they are not explored in detail for two reasons: first, the universal character of neural activity in nearly all animals requires some type of a general approach; secondly, the mathematical perspicuity would suffer if too many experimental details and empirical peculiarities were interspersed among the mathematical investigation. A guide to many applications is supplied by the references concerning a variety of specific issues. Of course the theory does not aim at covering all individual problems. Moreover there are other approaches to neural network theory (see e.g. Poggio-Torre, 1978) based on the different levels at which the nervous system may be viewed. The theory is a deterministic one reflecting the average behavior of neurons or neuron pools. In this respect the essay is writt...

  9. Clustering of resting state networks.

    Directory of Open Access Journals (Sweden)

    Megan H Lee

    Full Text Available The goal of the study was to demonstrate a hierarchical structure of resting state activity in the healthy brain using a data-driven clustering algorithm. The fuzzy-c-means clustering algorithm was applied to resting state fMRI data in cortical and subcortical gray matter from two groups acquired separately, one of 17 healthy individuals and the second of 21 healthy individuals. Different numbers of clusters and different starting conditions were used. A cluster dispersion measure determined the optimal numbers of clusters. An inner product metric provided a measure of similarity between different clusters. The two cluster result found the task-negative and task-positive systems. The cluster dispersion measure was minimized with seven and eleven clusters. Each of the clusters in the seven and eleven cluster result was associated with either the task-negative or task-positive system. Applying the algorithm to find seven clusters recovered previously described resting state networks, including the default mode network, frontoparietal control network, ventral and dorsal attention networks, somatomotor, visual, and language networks. The language and ventral attention networks had significant subcortical involvement. This parcellation was consistently found in a large majority of algorithm runs under different conditions and was robust to different methods of initialization. The clustering of resting state activity using different optimal numbers of clusters identified resting state networks comparable to previously obtained results. This work reinforces the observation that resting state networks are hierarchically organized.
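    For reference, a minimal sketch of standard fuzzy c-means (the algorithm named in the abstract) on synthetic 2-D data rather than fMRI time courses; the fuzzifier m, cluster count and iteration budget are placeholder choices.

```python
# Standard fuzzy c-means (hedged sketch): alternate membership and center updates.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                                   # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))               # closer points -> higher membership
        U /= U.sum(axis=0)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers)
```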

  10. Neural Networks for Optimal Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1995-01-01

    Two neural networks are trained to act as an observer and a controller, respectively, to control a non-linear, multi-variable process.

  11. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... examined, and it appears that considering 'normal' neural network models with, say, 500 samples, the problem of over-fitting is negligible, and therefore it is not taken into consideration afterwards. Numerous model types, often met in control applications, are implemented as neural network models...... Kalman filter) representing state space description. The potentials of neural networks for control of non-linear processes are also examined, focusing on three different groups of control concepts, all considered as generalizations of known linear control concepts to handle also non-linear processes...

  12. An Optoelectronic Neural Network

    Science.gov (United States)

    Neil, Mark A. A.; White, Ian H.; Carroll, John E.

    1990-02-01

    We describe and present results of an optoelectronic neural network processing system. The system uses an algorithm based on the Hebbian learning rule to memorise a set of associated vector pairs. Recall occurs by the processing of the input vector with these stored associations in an incoherent optical vector multiplier using optical polarisation rotating liquid crystal spatial light modulators to store the vectors and an optical polarisation shadow casting technique to perform multiplications. Results are detected on a photodiode array and thresholded electronically by a controlling microcomputer. The processor is shown to work in autoassociative and heteroassociative modes with up to 10 stored memory vectors of length 64 (equivalent to 64 neurons) and a cycle time of 50ms. We discuss the limiting factors at work in this system, how they affect its scalability and the general applicability of its principles to other systems.
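    A hedged electronic-simulation analogue of the optical scheme described above: Hebbian (outer-product) storage of bipolar vector pairs and recall by a thresholded vector-matrix product. The optical polarisation hardware is not modelled, and the sizes are arbitrary.

```python
# Hebbian associative memory (hedged sketch): store key/value pairs as a sum of outer
# products, recall with a thresholded matrix-vector product (the electronic stand-in
# for the optical multiply-and-threshold).
import numpy as np

rng = np.random.default_rng(0)
n, n_pairs = 64, 10
keys = rng.choice([-1, 1], size=(n_pairs, n))
values = rng.choice([-1, 1], size=(n_pairs, n))

W = values.T @ keys                        # Hebbian learning rule: sum of outer products

def recall(x):
    return np.sign(W @ x)                  # vector-matrix product followed by thresholding

noisy = keys[3].copy()
noisy[rng.choice(n, 6, replace=False)] *= -1   # corrupt a stored key with a few bit flips
print("bits correct:", int((recall(noisy) == values[3]).sum()), "of", n)
```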

  13. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...... in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  14. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    simulated process and compared. The closing chapter describes some practical experiments, where the different control concepts and training methods are tested on the same practical process operating in very noisy environments. All tests confirm that neural networks also have the potential to be trained......The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas, which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed...... study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all...

  15. Discovering biomarkers from gene expression data for predicting cancer subgroups using neural networks and relational fuzzy clustering

    Directory of Open Access Journals (Sweden)

    Sharma Animesh

    2007-01-01

    Full Text Available Abstract Background The four heterogeneous childhood cancers, neuroblastoma, non-Hodgkin lymphoma, rhabdomyosarcoma, and Ewing sarcoma present a similar histology of small round blue cell tumor (SRBCT) and thus often lead to misdiagnosis. Identification of biomarkers for distinguishing these cancers is a well studied problem. Existing methods typically evaluate each gene separately and do not take into account the nonlinear interaction between genes and the tools that are used to design the diagnostic prediction system. Consequently, more genes are usually identified as necessary for prediction. We propose a general scheme for finding a small set of biomarkers to design a diagnostic system for accurate classification of the cancer subgroups. We use multilayer networks with online gene selection ability and relational fuzzy clustering to identify a small set of biomarkers for accurate classification of the training and blind test cases of a well studied data set. Results Our method discerned just seven biomarkers that precisely categorized the four subgroups of cancer both in training and blind samples. For the same problem, others suggested 19–94 genes. These seven biomarkers include three novel genes (NAB2, LSP1 and EHD1), not identified by others, with distinct class-specific signatures and an important role in cancer biology, including cellular proliferation, transendothelial migration and trafficking of MHC class antigens. Interestingly, NAB2 is downregulated in other tumors including non-Hodgkin lymphoma and neuroblastoma, but we observed moderate to high upregulation in a few cases of Ewing sarcoma and rhabdomyosarcoma, suggesting that NAB2 might be mutated in these tumors. These genes can discover the subgroups correctly with unsupervised learning, can differentiate non-SRBCT samples and they perform equally well with other machine learning tools including support vector machines. These biomarkers lead to four simple human interpretable

  16. Neural-like growing networks

    Science.gov (United States)

    Yashchenko, Vitaliy A.

    2000-03-01

    Based on an analysis of scientific ideas about the structure and functioning of the biological structures of the brain, and on the analysis and synthesis of knowledge developed in various branches of computer science, the foundations were developed for the theory of a new class of neural-like growing networks that has no direct analogue in existing practice. Neural-like growing networks synthesize the knowledge developed by two classical theories: semantic networks and neural networks. The former make it possible to represent meaning as objects and the connections between them according to the construction of the network; each meaning thus becomes a separate component of the network, a node connected to other nodes. This broadly corresponds to the structure found in the brain, where each explicit concept is represented by a particular structure and has a designating symbol. Second, such a network gains increased semantic clarity because not only the connections between neural elements are formed, but the elements themselves as well; that is, the network is not simply built by placing semantic structures in an environment of neural elements, but by creating that environment itself as an equivalent of a memory environment. Neural-like growing networks therefore provide a convenient apparatus for modeling mechanisms of teleological thinking as the fulfillment of certain psychophysiological functions.

  17. Artificial Neural Networks·

    Indian Academy of Sciences (India)

    differences between biological neural networks (BNNs) of the brain and ANNs. A thorough understanding of ... neurons. Artificial neural models are loosely based on biology since a complete understanding of the .... A learning scheme for updating a neuron's connections (weights) was proposed by Donald Hebb in 1949.

  18. Memristor-based neural networks

    Science.gov (United States)

    Thomas, Andy

    2013-03-01

    The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them.

  19. Pansharpening by Convolutional Neural Networks

    National Research Council Canada - National Science Library

    Masi, Giuseppe; Cozzolino, Davide; Verdoliva, Luisa; Scarpa, Giuseppe

    2016-01-01

    A new pansharpening method is proposed, based on convolutional neural networks. We adapt a simple and effective three-layer architecture recently proposed for super-resolution to the pansharpening problem...

  20. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Publication date: February 2008.

  1. Biologically Inspired Modular Neural Networks

    OpenAIRE

    Azam, Farooq

    2000-01-01

    This dissertation explores modular learning in artificial neural networks that is mainly driven by inspiration from the neurobiological basis of human learning. The presented modularization approaches to neural network design and learning are inspired by engineering, complexity, psychological and neurobiological aspects. The main theme of this dissertation is to explore the organization and functioning of the brain to discover new structural and learning ...

  2. Improved Extension Neural Network and Its Applications

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2014-01-01

    Full Text Available Extension neural network (ENN) is a new neural network that is a combination of extension theory and artificial neural network (ANN). The learning algorithm of ENN is based on a supervised learning algorithm. One of the important issues in the field of classification and recognition with ENN is how to achieve the best possible classifier with a small number of labeled training data. Training data selection is an effective approach to solve this issue. In this work, in order to improve the supervised learning performance and expand the engineering application range of ENN, we use a novel data selection method based on shadowed sets to refine the training data set of ENN. Firstly, we use a clustering algorithm to label the data and induce shadowed sets. Then, in the framework of shadowed sets, the samples located around each cluster center (core data) and at the borders between clusters (boundary data) are selected as training data. Lastly, we use the selected data to train ENN. Compared with traditional ENN, the proposed improved ENN (IENN) has a better performance. Moreover, IENN is independent of the supervised learning algorithms and initial labeled data. Experimental results verify the effectiveness and applicability of our proposed work.

  3. Decentralized clustering over adaptive networks

    OpenAIRE

    Khawatmi, Sahar; Zoubir, Abdelhak M.; Sayed, Ali H.

    2015-01-01

    Cooperation among agents across the network leads to better estimation accuracy. However, in many network applications the agents infer and track different models of interest in an environment where agents do not know beforehand which models are being observed by their neighbors. In this work, we propose an adaptive and distributed clustering technique that allows agents to learn and form clusters from streaming data in a robust manner. Once clusters are formed, cooperation among agents with ...

  4. Complex-Valued Neural Networks

    CERN Document Server

    Hirose, Akira

    2012-01-01

    This book is the second enlarged and revised edition of the first successful monograph on complex-valued neural networks (CVNNs) published in 2006, which lends itself to graduate and undergraduate courses in electrical engineering, informatics, control engineering, mechanics, robotics, bioengineering, and other relevant fields. In the second edition the recent trends in CVNNs research are included, resulting in e.g. almost a doubled number of references. The parametron invented in 1954 is also referred to with discussion on analogy and disparity. Also various additional arguments on the advantages of the complex-valued neural networks enhancing the difference to real-valued neural networks are given in various sections. The book is useful for those beginning their studies, for instance, in adaptive signal processing for highly functional sensing and imaging, control in unknown and changing environment, robotics inspired by human neural systems, and brain-like information processing, as well as interdisciplina...

  5. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    Science.gov (United States)

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2017-10-01

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize the first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to introduce a novel mathematical method: fractional calculus to implement FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.

  6. Spiking modular neural networks: A neural network modeling approach for hydrological processes

    National Research Council Canada - National Science Library

    Kamban Parasuraman; Amin Elshorbagy; Sean K. Carey

    2006-01-01

    .... In this study, a novel neural network model called the spiking modular neural networks (SMNNs) is proposed. An SMNN consists of an input layer, a spiking layer, and an associator neural network layer...

  7. Multigradient for Neural Networks for Equalizers

    Directory of Open Access Journals (Sweden)

    Chulhee Lee

    2003-06-01

    Full Text Available Recently, a new training algorithm, multigradient, has been published for neural networks and it is reported that the multigradient outperforms the backpropagation when neural networks are used as a classifier. When neural networks are used as an equalizer in communications, they can be viewed as a classifier. In this paper, we apply the multigradient algorithm to train the neural networks that are used as equalizers. Experiments show that the neural networks trained using the multigradient noticeably outperform the neural networks trained by the backpropagation.

  8. Multiprocessor Neural Network in Healthcare.

    Science.gov (United States)

    Godó, Zoltán Attila; Kiss, Gábor; Kocsis, Dénes

    2015-01-01

    A possible way of creating a multiprocessor artificial neural network is by the use of microcontrollers. The RISC processors' high performance and the large number of I/O ports mean they are greatly suitable for creating such a system. During our research, we wanted to see if it is possible to efficiently create interaction between the artificial neural network and the natural nervous system. To achieve as much analogy to the living nervous system as possible, we created a frequency-modulated analog connection between the units. Our system is connected to the living nervous system through 128 microelectrodes. Two-way communication is provided through A/D transformation, which is even capable of testing psychopharmacons. The microcontroller-based analog artificial neural network can play a great role in medical signal processing, such as ECG, EEG etc.

  9. Artificial neural networks as quantum associative memory

    Science.gov (United States)

    Hamilton, Kathleen; Schrock, Jonathan; Imam, Neena; Humble, Travis

    We present results related to the recall accuracy and capacity of Hopfield networks implemented on commercially available quantum annealers. The use of Hopfield networks and artificial neural networks as content-addressable memories offer robust storage and retrieval of classical information, however, implementation of these models using currently available quantum annealers faces several challenges: the limits of precision when setting synaptic weights, the effects of spurious spin-glass states and minor embedding of densely connected graphs into fixed-connectivity hardware. We consider neural networks which are less than fully-connected, and also consider neural networks which contain multiple sparsely connected clusters. We discuss the effect of weak edge dilution on the accuracy of memory recall, and discuss how the multiple clique structure affects the storage capacity. Our work focuses on storage of patterns which can be embedded into physical hardware containing n ... This work was supported by the United States Department of Defense and used resources of the Computational Research and Development Programs at Oak Ridge National Laboratory under Contract No. DE-AC0500OR22725 with the U.S. Department of Energy.

  10. Tumor Diagnosis Using Backpropagation Neural Network Method

    Science.gov (United States)

    Ma, Lixing; Looney, Carl; Sukuta, Sydney; Bruch, Reinhard; Afanasyeva, Natalia

    1998-05-01

    For characterization of skin cancer, an artificial neural network (ANN) method has been developed to diagnose normal tissue, benign tumor and melanoma. The pattern recognition is based on a three-layer neural network fuzzy learning system. In this study, the input neuron data set is the Fourier Transform infrared (FT-IR) spectrum obtained by a new Fiberoptic Evanescent Wave Fourier Transform Infrared (FEW-FTIR) spectroscopy method in the range of 1480 to 1850 cm-1. Ten input features are extracted from the absorbance values in this region. A single hidden layer of neural nodes with sigmoid activation functions clusters the feature space into small subclasses and the output nodes are separated into different nonconvex classes to permit nonlinear discrimination of disease states. The output is classified as three classes: normal tissue, benign tumor and melanoma. The results obtained from the neural network pattern recognition are shown to be consistent with traditional medical diagnosis. Input features have also been extracted from the absorbance spectra using chemical factor analysis. These abstract features or factors are also used in the classification.
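    A minimal sketch of the described setup, using synthetic stand-in data rather than FEW-FTIR spectra: ten absorbance-derived features, one hidden layer of sigmoid units, and three output classes. The scikit-learn model and all sizes are my substitutions, not the authors' fuzzy learning system.

```python
# Hedged sketch: a single-hidden-layer classifier over 10 spectral features with
# three output classes (normal tissue, benign tumor, melanoma). Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 10))                  # 10 features from the 1480-1850 cm-1 band (stand-in)
y = rng.integers(0, 3, 300)                # 0 = normal, 1 = benign, 2 = melanoma (synthetic labels)

clf = MLPClassifier(hidden_layer_sizes=(12,), activation="logistic",
                    max_iter=3000, random_state=0)
clf.fit(X[:250], y[:250])
print("held-out accuracy:", clf.score(X[250:], y[250:]))
```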

  11. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  12. Voltage compensation using artificial neural network

    African Journals Online (AJOL)

    Offor Theophilos

    VOLTAGE COMPENSATION USING ARTIFICIAL NEURAL NETWORK: A CASE STUDY OF RUMUOLA ... using artificial neural network (ANN) controller based dynamic voltage restorer (DVR). ... substation by simulating with sample of average voltage for Omerelu, Waterlines, Rumuola, Shell Industrial and Barracks.

  13. Plant Growth Models Using Artificial Neural Networks

    Science.gov (United States)

    Bubenheim, David

    1997-01-01

    In this paper, we describe our motivation and approach to developing models and the neural network architecture. Initial use of the artificial neural network for modeling the single plant process of transpiration is presented.

  14. Neural networks and applications tutorial

    Science.gov (United States)

    Guyon, I.

    1991-09-01

    The importance of neural networks has grown dramatically during this decade. While only a few years ago they were primarily of academic interest, now dozens of companies and many universities are investigating the potential use of these systems and products are beginning to appear. The idea of building a machine whose architecture is inspired by that of the brain has roots which go far back in history. Nowadays, technological advances of computers and the availability of custom integrated circuits permit simulations of hundreds or even thousands of neurons. In conjunction, the growing interest in learning machines, non-linear dynamics and parallel computation spurred renewed attention in artificial neural networks. Many tentative applications have been proposed, including decision systems (associative memories, classifiers, data compressors and optimizers), or parametric models for signal processing purposes (system identification, automatic control, noise canceling, etc.). While they do not always outperform standard methods, neural network approaches are already used in some real world applications for pattern recognition and signal processing tasks. The tutorial is divided into six lectures that were presented at the Third Graduate Summer Course on Computational Physics (September 3-7, 1990) on Parallel Architectures and Applications, organized by the European Physical Society: (1) Introduction: machine learning and biological computation. (2) Adaptive artificial neurons (perceptron, ADALINE, sigmoid units, etc.): learning rules and implementations. (3) Neural network systems: architectures, learning algorithms. (4) Applications: pattern recognition, signal processing, etc. (5) Elements of learning theory: how to build networks which generalize. (6) A case study: a neural network for on-line recognition of handwritten alphanumeric characters.

  15. Optoelectronic Implementation of Neural Networks

    Indian Academy of Sciences (India)

    Optoelectronic Implementation of Neural Networks - Use of Optics in Computing. R Ramachandran. General Article, Resonance – Journal of Science Education, Volume 3, Issue 9, September 1998, pp. 45-55.

  16. Aphasia Classification Using Neural Networks

    DEFF Research Database (Denmark)

    Axer, H.; Jantzen, Jan; Berks, G.

    2000-01-01

    A web-based software model (http://fuzzy.iau.dtu.dk/aphasia.nsf) was developed as an example for classification of aphasia using neural networks. Two multilayer perceptrons were used to classify the type of aphasia (Broca, Wernicke, anomic, global) according to the results in some subtests...

  17. Analysis of neural networks through base functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, L.

    Problem statement. Despite their success-story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more

  18. Simplified LQG Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1997-01-01

    A new neural network application for non-linear state control is described. One neural network is modelled to form a Kalmann predictor and trained to act as an optimal state observer for a non-linear process. Another neural network is modelled to form a state controller and trained to produce...

  19. Novel quantum inspired binary neural network algorithm

    Indian Academy of Sciences (India)

    In this paper, a quantum based binary neural network algorithm is proposed, named the novel quantum binary neural network algorithm (NQ-BNN). It forms a neural network structure by deciding weights and a separability parameter in a quantum based manner. The quantum computing concept represents solutions probabilistically ...

  20. Dynamic properties of cellular neural networks

    Directory of Open Access Journals (Sweden)

    Angela Slavova

    1993-01-01

    Full Text Available Dynamic behavior of a new class of information-processing systems called Cellular Neural Networks is investigated. In this paper we introduce a small parameter in the state equation of a cellular neural network and we seek periodic phenomena. A new approach is used for proving the stability of a cellular neural network by constructing Lyapunov's majorizing equations. This algorithm is helpful for finding a map from the initial continuous state space of a cellular neural network into a discrete output. A comparison between cellular neural networks and cellular automata is made.

  1. Granular neural networks, pattern recognition and bioinformatics

    CERN Document Server

    Pal, Sankar K; Ganivada, Avatharam

    2017-01-01

    This book provides a uniform framework describing how fuzzy rough granular neural network technologies can be formulated and used in building efficient pattern recognition and mining models. It also discusses the formation of granules in the notion of both fuzzy and rough sets. Judicious integration in forming fuzzy-rough information granules based on lower approximate regions enables the network to determine the exactness in class shape as well as to handle the uncertainties arising from overlapping regions, resulting in efficient and speedy learning with enhanced performance. Layered network and self-organizing analysis maps, which have a strong potential in big data, are considered as basic modules. The book is structured according to the major phases of a pattern recognition system (e.g., classification, clustering, and feature selection) with a balanced mixture of theory, algorithm, and application. It covers the latest findings as well as directions for future research, particularly highlighting bioinf...

  2. Unsupervised clustering with spiking neurons by sparse temporal coding and multi-layer RBF networks

    NARCIS (Netherlands)

    S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)

    2000-01-01

    We demonstrate that spiking neural networks encoding information in spike times are capable of computing and learning clusters from realistic data. We show how a spiking neural network based on spike-time coding and Hebbian learning can successfully perform unsupervised clustering on
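    A heavily simplified, hedged sketch of clustering with spike-time coding: feature values become first-spike times, each output unit keeps a vector of preferred spike times, the best-matching unit wins, and only the winner is nudged toward the input (a winner-take-all, Hebbian-like update). This illustrates the coding idea only, not the multi-layer RBF network of the paper.

```python
# Hedged, simplified spike-time clustering: earliest/closest-matching output unit wins
# and its preferred spike times move toward the presented input.
import numpy as np

def to_spike_times(x):
    return 1.0 - x                         # larger input value -> earlier spike

def train(data, n_units=3, lr=0.2, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    pref = rng.random((n_units, data.shape[1]))          # preferred spike times per unit
    for _ in range(epochs):
        for x in data:
            t = to_spike_times(x)
            winner = np.argmin(np.linalg.norm(pref - t, axis=1))
            pref[winner] += lr * (t - pref[winner])       # winner-take-all, Hebbian-like update
    return pref

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.05, (40, 2)) for m in (0.2, 0.5, 0.8)]).clip(0, 1)
pref = train(data)
labels = [int(np.argmin(np.linalg.norm(pref - to_spike_times(x), axis=1))) for x in data]
print(np.bincount(labels))
```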

  3. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  4. The LILARTI neural network system

    Energy Technology Data Exchange (ETDEWEB)

    Allen, J.D. Jr.; Schell, F.M.; Dodd, C.V.

    1992-10-01

    The material of this Technical Memorandum is intended to provide the reader with conceptual and technical background information on the LILARTI neural network system in detail sufficient to confer an understanding of the LILARTI method as it is presently applied and to facilitate application of the method to problems beyond the scope of this document. Of particular importance in this regard are the descriptive sections and the Appendices which include operating instructions, partial listings of program output and data files, and network construction information.

  5. Practical neural network recipes in C++

    CERN Document Server

    Masters

    2014-01-01

    This text serves as a cookbook for neural network solutions to practical problems using C++. It will enable those with moderate programming experience to select a neural network model appropriate to solving a particular problem, and to produce a working program implementing that network. The book provides guidance along the entire problem-solving path, including designing the training set, preprocessing variables, training and validating the network, and evaluating its performance. Though the book is not intended as a general course in neural networks, no background in neural networks is assumed

  6. Neural network modeling of emotion

    Science.gov (United States)

    Levine, Daniel S.

    2007-03-01

    This article reviews the history and development of computational neural network modeling of cognitive and behavioral processes that involve emotion. The exposition starts with models of classical conditioning dating from the early 1970s. Then it proceeds toward models of interactions between emotion and attention. Then models of emotional influences on decision making are reviewed, including some speculative (and not yet simulated) models of the evolution of decision rules. Through the late 1980s, the neural networks developed to model emotional processes were mainly embodiments of significant functional principles motivated by psychological data. In the last two decades, network models of these processes have become much more detailed in their incorporation of known physiological properties of specific brain regions, while preserving many of the psychological principles from the earlier models. Most network models of emotional processes so far have dealt with positive and negative emotion in general, rather than specific emotions such as fear, joy, sadness, and anger. But a later section of this article reviews a few models relevant to specific emotions: one family of models of auditory fear conditioning in rats, and one model of induced pleasure enhancing creativity in humans. Then models of emotional disorders are reviewed. The article concludes with philosophical statements about the essential contributions of emotion to intelligent behavior and the importance of quantitative theories and models to the interdisciplinary enterprise of understanding the interactions of emotion, cognition, and behavior.

  7. MEMBRAIN NEURAL NETWORK FOR VISUAL PATTERN RECOGNITION

    Directory of Open Access Journals (Sweden)

    Artur Popko

    2013-06-01

    Full Text Available Recognition of visual patterns is one of the significant applications of Artificial Neural Networks, which partially emulate human thinking in the domain of artificial intelligence. In the paper, a simplified neural approach to recognition of visual patterns is portrayed and discussed. This paper is dedicated to investigators in visual pattern recognition, Artificial Neural Networking and related disciplines. The document also describes the MemBrain application environment as a powerful and easy to use neural networks' editor and simulator supporting ANN.

  8. Satellite image analysis using neural networks

    Science.gov (United States)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  9. Fuzzy neural networks: theory and applications

    Science.gov (United States)

    Gupta, Madan M.

    1994-10-01

    During recent years, significant advances have been made in two distinct technological areas: fuzzy logic and computational neural networks. The theory of fuzzy logic provides a mathematical framework to capture the uncertainties associated with human cognitive processes, such as thinking and reasoning. It also provides a mathematical morphology to emulate certain perceptual and linguistic attributes associated with human cognition. On the other hand, the computational neural network paradigms have evolved in the process of understanding the incredible learning and adaptive features of neuronal mechanisms inherent in certain biological species. Computational neural networks replicate, on a small scale, some of the computational operations observed in biological learning and adaptation. The integration of these two fields, fuzzy logic and neural networks, has given birth to an emerging technological field -- fuzzy neural networks. Fuzzy neural networks have the potential to capture the benefits of these two fascinating fields, fuzzy logic and neural networks, into a single framework. The intent of this tutorial paper is to describe the basic notions of biological and computational neuronal morphologies, and to describe the principles and architectures of fuzzy neural networks. Towards this goal, we develop a fuzzy neural architecture based upon the notion of T-norm and T-conorm connectives. An error-based learning scheme is described for this neural structure.
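    A toy, hedged example of a fuzzy neuron built from T-norm/T-conorm connectives, using min as the T-norm and max as the T-conorm (one common choice); the paper's own architecture and error-based learning scheme are not reproduced here.

```python
# Fuzzy max-min neuron (hedged illustration): the T-norm (min) combines each input with
# its weight, and the T-conorm (max) aggregates the results.
import numpy as np

def fuzzy_neuron(x, w):
    return np.max(np.minimum(x, w))

x = np.array([0.2, 0.7, 0.4])        # fuzzy membership degrees of the inputs
w = np.array([0.9, 0.5, 0.3])        # fuzzy weights
print(fuzzy_neuron(x, w))            # max(min(0.2,0.9), min(0.7,0.5), min(0.4,0.3)) = 0.5
```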

  10. Pediatric Nutritional Requirements Determination with Neural Networks

    OpenAIRE

    Karlık, Bekir; Ece, Aydın

    1998-01-01

    To calculate daily nutritional requirements of children, a computer program has been developed based upon a neural network. Three parameters, daily protein, energy and water requirements, were calculated through trained artificial neural networks using a database of 312 children. The results were compared with those calculated from the dietary requirements tables of the World Health Organisation. No significant difference was found between the two calculations. In conclusion, a simple neural network may ...

  11. Adaptive optimization and control using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Mead, W.C.; Brown, S.K.; Jones, R.D.; Bowling, P.S.; Barnes, C.W.

    1993-10-22

    Recent work has demonstrated the ability of neural-network-based controllers to optimize and control machines with complex, non-linear, relatively unknown control spaces. We present a brief overview of neural networks via a taxonomy illustrating some capabilities of different kinds of neural networks. We present some successful control examples, particularly the optimization and control of a small-angle negative ion source.

  12. Bayesian regularization of neural networks.

    Science.gov (United States)

    Burden, Frank; Winkler, Dave

    2008-01-01

    Bayesian regularized artificial neural networks (BRANNs) are more robust than standard back-propagation nets and can reduce or eliminate the need for lengthy cross-validation. Bayesian regularization is a mathematical process that converts a nonlinear regression into a "well-posed" statistical problem in the manner of a ridge regression. The advantage of BRANNs is that the models are robust and the validation process, which scales as O(N^2) in normal regression methods, such as back propagation, is unnecessary. These networks provide solutions to a number of problems that arise in QSAR modeling, such as choice of model, robustness of model, choice of validation set, size of validation effort, and optimization of network architecture. They are difficult to overtrain, since evidence procedures provide an objective Bayesian criterion for stopping training. They are also difficult to overfit, because the BRANN calculates and trains on a number of effective network parameters or weights, effectively turning off those that are not relevant. This effective number is usually considerably smaller than the number of weights in a standard fully connected back-propagation neural net. Automatic relevance determination (ARD) of the input variables can be used with BRANNs, and this allows the network to "estimate" the importance of each input. The ARD method ensures that irrelevant or highly correlated indices used in the modeling are neglected as well as showing which are the most important variables for modeling the activity data. This chapter outlines the equations that define the BRANN method plus a flowchart for producing a BRANN-QSAR model. Some results of the use of BRANNs on a number of data sets are illustrated and compared with other linear and nonlinear models.
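    For orientation, one standard MacKay-style formulation of the objective behind Bayesian regularization is sketched below in my own notation; the chapter's exact equations may differ in detail.

```latex
% Hedged summary, notation mine: data misfit E_D and weight penalty E_W are traded off by
% hyperparameters alpha and beta, re-estimated from the effective number of parameters gamma.
\[
  F(\mathbf{w}) = \beta\,E_D + \alpha\,E_W, \qquad
  E_D = \sum_i \bigl(y_i - \hat y_i(\mathbf{w})\bigr)^2, \qquad
  E_W = \sum_j w_j^2,
\]
\[
  \gamma = N_w - 2\alpha\,\operatorname{tr}\bigl(\mathbf{H}^{-1}\bigr), \qquad
  \alpha = \frac{\gamma}{2 E_W}, \qquad
  \beta = \frac{n - \gamma}{2 E_D}, \qquad
  \mathbf{H} = \nabla^2 F(\mathbf{w}),
\]
```
where N_w is the number of weights and n the number of training cases; gamma is the "effective number of parameters" mentioned in the abstract.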

  13. Neural networks for nuclear spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States)] [and others]

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real-time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.

  14. Credit Risk Evaluation System: An Artificial Neural Network Approach

    African Journals Online (AJOL)

    In this paper, we proposed an architecture which uses the theory of artificial neural networks and business rules to correctly determine whether a customer is good or bad. In the first step, by using clustering algorithm, clients are segmented into groups with similar features. In the second step, decision trees are built based ...

  15. Neural network based system for equipment surveillance

    Science.gov (United States)

    Vilim, R.B.; Gross, K.C.; Wegerich, S.W.

    1998-04-28

    A method and system are disclosed for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into a memory and determining neural network weighting values until the neural network outputs are close to the target outputs. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs. Signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the industrial process. 33 figs.

  16. Fuzzy neural network theory and application

    CERN Document Server

    Liu, Puyin

    2004-01-01

    This book systematically synthesizes research achievements in the field of fuzzy neural networks in recent years. It also provides a comprehensive presentation of the developments in fuzzy neural networks, with regard to theory as well as their application to system modeling and image restoration. Special emphasis is placed on the fundamental concepts and architecture analysis of fuzzy neural networks. The book is unique in treating all kinds of fuzzy neural networks and their learning algorithms and universal approximations, and employing simulation examples which are carefully designed to he

  17. Pansharpening by Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Giuseppe Masi

    2016-07-01

    Full Text Available A new pansharpening method is proposed, based on convolutional neural networks. We adapt a simple and effective three-layer architecture recently proposed for super-resolution to the pansharpening problem. Moreover, to improve performance without increasing complexity, we augment the input by including several maps of nonlinear radiometric indices typical of remote sensing. Experiments on three representative datasets show the proposed method to provide very promising results, largely competitive with the current state of the art in terms of both full-reference and no-reference metrics, and also at a visual inspection.

  18. Neural networks and perceptual learning

    Science.gov (United States)

    Tsodyks, Misha; Gilbert, Charles

    2005-01-01

    Sensory perception is a learned trait. The brain strategies we use to perceive the world are constantly modified by experience. With practice, we subconsciously become better at identifying familiar objects or distinguishing fine details in our environment. Current theoretical models simulate some properties of perceptual learning, but neglect the underlying cortical circuits. Future neural network models must incorporate the top-down alteration of cortical function by expectation or perceptual tasks. These newly found dynamic processes are challenging earlier views of static and feedforward processing of sensory information. PMID:15483598

  19. Optimization with Potts Neural Networks

    Science.gov (United States)

    Söderberg, Bo

    The Potts Neural Network approach to non-binary discrete optimization problems is described. It applies to problems that can be described as a set of elementary `multiple choice' options. Instead of the conventional binary (Ising) neurons, mean field Potts neurons, having several available states, are used to describe the elementary degrees of freedom of such problems. The dynamics consists of iterating the mean field equations with annealing until convergence. Due to its deterministic character, the method is quite fast. When applied to problems of Graph Partition and scheduling types, it produces very good solutions also for problems of considerable size.
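
    A minimal numpy sketch of the mean-field Potts recipe described above, applied to balanced graph partitioning: each node carries a softmax ("Potts") neuron over the K parts, the mean-field equations are iterated while the temperature is annealed, and a balance penalty discourages uneven parts. The toy graph, the penalty weight gamma and the annealing schedule are illustrative assumptions, not values taken from the paper.

      import numpy as np

      def potts_partition(W, K, gamma=1.0, T0=2.0, T_min=0.01, cooling=0.95, sweeps=50):
          """Mean-field Potts annealing for partitioning a graph into K balanced parts.

          W : symmetric adjacency (weight) matrix, shape (N, N)
          Returns a hard assignment taken from the converged mean-field probabilities.
          """
          N = W.shape[0]
          rng = np.random.default_rng(0)
          V = rng.random((N, K))
          V /= V.sum(axis=1, keepdims=True)           # mean-field probabilities v_ia
          T = T0
          while T > T_min:
              for _ in range(sweeps):
                  load = V.sum(axis=0)                # current (soft) size of each part
                  # local field: reward edges kept inside a part, penalise imbalance
                  U = W @ V - gamma * (load - N / K)
                  U -= U.max(axis=1, keepdims=True)   # numerical stabilisation
                  V = np.exp(U / T)
                  V /= V.sum(axis=1, keepdims=True)   # softmax update of the Potts neurons
              T *= cooling                            # anneal the temperature
          return V.argmax(axis=1)

      # toy graph: two 5-node cliques joined by a single edge
      A = np.zeros((10, 10))
      A[:5, :5] = 1
      A[5:, 5:] = 1
      np.fill_diagonal(A, 0)
      A[4, 5] = A[5, 4] = 1
      print(potts_partition(A, K=2))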

  20. Collaborative Clustering for Sensor Networks

    Science.gov (United States)

    Wagstaff, Kiri L.; Green, Jillian; Lane, Terran

    2011-01-01

    Traditionally, nodes in a sensor network simply collect data and then pass it on to a centralized node that archives, distributes, and possibly analyzes the data. However, analysis at the individual nodes could enable faster detection of anomalies or other interesting events, as well as faster responses such as sending out alerts or increasing the data collection rate. There is an additional opportunity for increased performance if individual nodes can communicate directly with their neighbors. Previously, a method was developed by which machine learning classification algorithms could collaborate to achieve high performance autonomously (without requiring human intervention). This method worked for supervised learning algorithms, in which labeled data is used to train models. The learners collaborated by exchanging labels describing the data. The new advance enables clustering algorithms, which do not use labeled data, to also collaborate. This is achieved by defining a new language for collaboration that uses pair-wise constraints to encode useful information for other learners. These constraints specify that two items must, or cannot, be placed into the same cluster. Previous work has shown that clustering with these constraints (in isolation) already improves performance. In the problem formulation, each learner resides at a different node in the sensor network and makes observations (collects data) independently of the other learners. Each learner clusters its data and then selects a pair of items about which it is uncertain and uses them to query its neighbors. The resulting feedback (a must and cannot constraint from each neighbor) is combined by the learner into a consensus constraint, and it then reclusters its data while incorporating the new constraint. A strategy was also proposed for cleaning the resulting constraint sets, which may contain conflicting constraints; this improves performance significantly. This approach has been applied to collaborative
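
    The clustering-with-constraints step that each node would run can be sketched along the lines of COP-KMeans: points are assigned to the nearest cluster centre that does not violate any must-link or cannot-link constraint fixed so far in the current pass. This is a generic constrained k-means sketch, not the collaborative query-and-consensus protocol itself; the toy data and constraint pairs are assumed for illustration.

      import numpy as np

      def violates(i, c, labels, must, cannot):
          """True if assigning point i to cluster c breaks an already-fixed constraint."""
          for j in must.get(i, ()):
              if labels[j] != -1 and labels[j] != c:
                  return True
          for j in cannot.get(i, ()):
              if labels[j] == c:
                  return True
          return False

      def cop_kmeans(X, k, must_pairs, cannot_pairs, n_iter=50, seed=0):
          """K-means honouring must-link / cannot-link constraints (COP-KMeans style).
          Returns cluster labels, or None if no feasible assignment is found."""
          must, cannot = {}, {}
          for a, b in must_pairs:
              must.setdefault(a, set()).add(b); must.setdefault(b, set()).add(a)
          for a, b in cannot_pairs:
              cannot.setdefault(a, set()).add(b); cannot.setdefault(b, set()).add(a)
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), k, replace=False)]
          labels = np.full(len(X), -1)
          for _ in range(n_iter):
              new = np.full(len(X), -1)
              for i in range(len(X)):
                  order = np.argsort(((centers - X[i]) ** 2).sum(axis=1))
                  for c in order:                          # nearest feasible centre
                      if not violates(i, c, new, must, cannot):
                          new[i] = c
                          break
                  if new[i] == -1:
                      return None                          # constraints unsatisfiable here
              if np.array_equal(new, labels):
                  break
              labels = new
              for c in range(k):                           # recompute centres
                  if np.any(labels == c):
                      centers[c] = X[labels == c].mean(axis=0)
          return labels

      # toy example: two blobs plus a cannot-link pair that straddles them
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
      print(cop_kmeans(X, k=2, must_pairs=[(0, 1)], cannot_pairs=[(0, 20)]))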

  1. Three dimensional living neural networks

    Science.gov (United States)

    Linnenberger, Anna; McLeod, Robert R.; Basta, Tamara; Stowell, Michael H. B.

    2015-08-01

    We investigate holographic optical tweezing combined with step-and-repeat maskless projection micro-stereolithography for fine control of 3D positioning of living cells within a 3D microstructured hydrogel grid. Samples were fabricated using three different cell lines: PC12, NT2/D1 and iPSC. PC12 cells are a rat cell line capable of differentiation into neuron-like cells. NT2/D1 cells are a human cell line that exhibits biochemical and developmental properties similar to those of an early embryo; when exposed to retinoic acid, the cells differentiate into human neurons useful for studies of human neurological disease. Finally, induced pluripotent stem cells (iPSCs) were utilized with the goal of future studies of neural networks fabricated from human iPSC derived neurons. Cells are positioned in the monomer solution with holographic optical tweezers at 1064 nm and then are encapsulated by photopolymerization of polyethylene glycol (PEG) hydrogels formed by thiol-ene photo-click chemistry via projection of a 512x512 spatial light modulator (SLM) illuminated at 405 nm. Fabricated samples are incubated in differentiation media such that cells cease to divide and begin to form axons or axon-like structures. By controlling the position of the cells within the encapsulating hydrogel structure, the formation of the neural circuits is controlled. The samples fabricated with this system are a useful model for future studies of neural circuit formation, neurological disease, cellular communication, plasticity, and repair mechanisms.

  2. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
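
    For reference, the quantity examined here, the eigenvalue spectrum of the normalized Laplacian, can be computed for a small undirected network in a few lines of numpy; the toy ring-with-a-chord graph below is only a placeholder for the anatomical connectivity matrices analyzed in the paper.

      import numpy as np

      def normalized_laplacian_spectrum(A):
          """Eigenvalues of the normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
          of an undirected graph with adjacency matrix A (no isolated nodes)."""
          deg = A.sum(axis=1)
          d_inv_sqrt = 1.0 / np.sqrt(deg)
          L = np.eye(len(A)) - (d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])
          return np.sort(np.linalg.eigvalsh(L))       # spectrum lies in [0, 2]

      # toy network: a ring of 6 nodes plus one chord
      A = np.zeros((6, 6))
      for i in range(6):
          A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1
      A[0, 3] = A[3, 0] = 1
      print(normalized_laplacian_spectrum(A))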

  3. Clusters - Territorial Networks. Where to?

    Directory of Open Access Journals (Sweden)

    Luiza Nicoleta Radu

    2013-08-01

    Full Text Available Globalization has led to increased international trade relations between spatially separated organizations. This has produced greater spatial differentiation, influenced by local and regional competition among production systems. Territoriality has been considered the main cause of the development of active areas, also explaining the success of certain local production systems that became competitive on a global scale. The new school of regional competitiveness promoted by Porter (2003) identifies the industry cluster as a source of competitive advantages, supporting the identification and setting-up of clusters as an objective of public policy. In the last few years, clusters have become an important basis for the new policies promoted at the level of the European Union. The challenge established through the Lisbon Strategy, namely "to make Europe the most competitive and dynamic knowledge-based economy", calls for a new approach to economic policy in order to increase competitiveness. For the regional economy, the cluster aims to develop new strategies focused on the economic sectors of regional development, taking sectoral advantages into account. However, in terms of economic activities promoted at the regional level, spatial development is an essential component for increasing EU competitiveness in the face of economic globalization trends, regional networks being considered the most advanced form of cluster in the economic sector.

  4. Neural networks with discontinuous/impact activations

    CERN Document Server

    Akhmet, Marat

    2014-01-01

    This book presents as its main subject new models in mathematical neuroscience. A wide range of neural network models with discontinuities is discussed, including impulsive differential equations, differential equations with piecewise constant arguments, and models of mixed type. These models involve discontinuities, which are natural because huge velocities and short distances are usually observed in devices modeling the networks. A discussion of the models, appropriate for the proposed applications, is also provided. This book also: Explores questions related to the biological underpinning for models of neural networks; Considers neural networks modeling using differential equations with impulsive and piecewise constant argument discontinuities; Provides all necessary mathematical basics for application to the theory of neural networks. Neural Networks with Discontinuous/Impact Activations is an ideal book for researchers and professionals in the field of engineering mathematics that have an interest in app...

  5. Hindcasting of storm waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, S.; Mandal, S.

    of any exogenous input requirement makes the network attractive. A neural network is an information processing system modeled on the structure of the human brain. Its merit is the ability to deal with fuzzy information whose interrelation is ambiguous...

  6. Drift chamber tracking with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey, C.S.; Denby, B.; Haggerty, H.

    1992-10-01

    We discuss drift chamber tracking with a commercial analog VLSI neural network chip. Voltages proportional to the drift times in a 4-layer drift chamber were presented to the Intel ETANN chip. The network was trained to provide the intercept and slope of straight tracks traversing the chamber. The outputs were recorded and later compared off line to conventional track fits. Two types of network architectures were studied. Applications of neural network tracking to high energy physics detector triggers are discussed.
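
    A hedged sketch of the underlying regression task: a small feed-forward network is trained to map the four drift measurements of a straight track onto its intercept and slope. The toy geometry (wires at x = 0, layers at z = 1..4), the noise level and the use of scikit-learn's MLPRegressor are assumptions made for illustration; they do not reproduce the ETANN chip study.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Hypothetical 4-layer chamber: sense wires at x = 0, layers at z = 1..4.
      # Tracks x(z) = b + m*z are kept on one side of the wires so that the drift
      # distance (our stand-in for the measured drift-time voltage) is b + m*z.
      rng = np.random.default_rng(0)
      z = np.array([1.0, 2.0, 3.0, 4.0])
      n = 5000
      b = rng.uniform(0.2, 1.0, n)                 # intercepts
      m = rng.uniform(0.0, 0.5, n)                 # slopes
      drift = b[:, None] + m[:, None] * z          # ideal drift distances per layer
      drift += rng.normal(0.0, 0.01, drift.shape)  # measurement noise
      targets = np.column_stack([b, m])

      net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      net.fit(drift[:4000], targets[:4000])        # train on 4000 tracks
      pred = net.predict(drift[4000:])             # test on the remaining 1000
      print("RMS intercept error:", np.sqrt(np.mean((pred[:, 0] - b[4000:]) ** 2)))
      print("RMS slope error    :", np.sqrt(np.mean((pred[:, 1] - m[4000:]) ** 2)))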

  7. Neural network optimization, components, and design selection

    Science.gov (United States)

    Weller, Scott W.

    1991-01-01

    Neural Networks are part of a revived technology which has received a lot of hype in recent years. As is apt to happen in any hyped technology, jargon and predictions make its assimilation and application difficult. Nevertheless, Neural Networks have found use in a number of areas, working on non-trivial and non-contrived problems. For example, one net has been trained to "read", translating English text into phoneme sequences. Other applications of Neural Networks include data base manipulation and the solving of routing and classification types of optimization problems. It was their use in optimization that got me involved with Neural Networks. As it turned out, "optimization" used in this context was somewhat misleading, because while some network configurations could indeed solve certain kinds of optimization problems, the configuring or "training" of a Neural Network itself is an optimization problem, and most of the literature which talked about Neural Nets and optimization in the same breath did not speak to my goal of using Neural Nets to help solve lens optimization problems. I did eventually apply Neural Network to lens optimization, and I will touch on those results. The application of Neural Nets to the problem of lens selection was much more successful, and those results will dominate this paper.

  8. Radiation Behavior of Analog Neural Network Chip

    Science.gov (United States)

    Langenbacher, H.; Zee, F.; Daud, T.; Thakoor, A.

    1996-01-01

    A neural network experiment was conducted for the Space Technology Research Vehicle (STRV-1b), launched in June 1994. Identical sets of analog feed-forward neural network chips were used to study and compare the effects of space and ground radiation on the chips. Three failure mechanisms are noted.

  9. Neural network approach to parton distributions fitting

    CERN Document Server

    Piccione, Andrea; Forte, Stefano; Latorre, Jose I.; Rojo, Joan; Piccione, Andrea; Rojo, Joan

    2006-01-01

    We will show an application of neural networks to extract information on the structure of hadrons. A Monte Carlo over experimental data is performed to correctly reproduce data errors and correlations. A neural network is then trained on each Monte Carlo replica via a genetic algorithm. Results on the proton and deuteron structure functions, and on the nonsinglet parton distribution will be shown.

  10. Self-organization of neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Clark, J.W.; Winston, J.V.; Rafelski, J.

    1984-05-14

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (brainwashing) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena. 18 references, 2 figures.

  11. Medical image analysis with artificial neural networks.

    Science.gov (United States)

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Hidden neural networks: application to speech recognition

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1998-01-01

    We evaluate the hidden neural network HMM/NN hybrid on two speech recognition benchmark tasks; (1) task independent isolated word recognition on the Phonebook database, and (2) recognition of broad phoneme classes in continuous speech from the TIMIT database. It is shown how hidden neural networks...

  13. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous spectrophotometric multicomponent analysis are suggested, with a study on the estimation of the components of an antihypertensive combination, namely, atenolol and losartan potassium.

  14. Neural Networks for Non-linear Control

    DEFF Research Database (Denmark)

    Sørensen, O.

    1994-01-01

    This paper describes how a neural network, structured as a Multi Layer Perceptron, is trained to predict, simulate and control a non-linear process.

  15. Application of Neural Networks for Energy Reconstruction

    CERN Document Server

    Damgov, Jordan

    2002-01-01

    The possibility of using Neural Networks for reconstruction of the energy deposited in the calorimetry system of the CMS detector is investigated. It is shown that using a feed-forward neural network, good linearity, Gaussian energy distribution and good energy resolution can be achieved. Significant improvement of the energy resolution and linearity is reached in comparison with other weighting methods for energy reconstruction.

  16. Neural Network to Solve Concave Games

    OpenAIRE

    Zixin Liu; Nengfa Wang

    2014-01-01

    This paper is concerned with a neural network method for solving concave games. By combining variational inequalities, the Ky Fan inequality, and projection equations, concave games are transformed into a neural network model. On the basis of Lyapunov stability theory, some stability results are also given. Finally, simulation results for two classic games are given to illustrate the theoretical results.

  17. Recognizing changing seasonal patterns using neural networks

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); G. Draisma (Gerrit)

    1997-01-01

    textabstractIn this paper we propose a graphical method based on an artificial neural network model to investigate how and when seasonal patterns in macroeconomic time series change over time. Neural networks are useful since the hidden layer units may become activated only in certain seasons or

  18. Adaptive Neurons For Artificial Neural Networks

    Science.gov (United States)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In an improved mathematical model of a neural-network processor, the temperature of the neurons (in addition to the connection strengths, also called weights, of the synapses) is varied during the supervised-learning phase of operation according to a mathematical formalism rather than a heuristic rule. There is evidence that biological neural networks also process information at the neuronal level.

  19. Initialization of multilayer forecasting artifical neural networks

    OpenAIRE

    Bochkarev, Vladimir V.; Maslennikova, Yulia S.

    2014-01-01

    In this paper, a new method was developed for initialising artificial neural networks predicting dynamics of time series. Initial weighting coefficients were determined for neurons analogously to the case of a linear prediction filter. Moreover, to improve the accuracy of the initialization method for a multilayer neural network, some variants of decomposition of the transformation matrix corresponding to the linear prediction filter were suggested. The efficiency of the proposed neural netwo...

  20. International Conference on Artificial Neural Networks (ICANN)

    CERN Document Server

    Mladenov, Valeri; Kasabov, Nikola; Artificial Neural Networks : Methods and Applications in Bio-/Neuroinformatics

    2015-01-01

    The book reports on the latest theories on artificial neural networks, with a special emphasis on bio-neuroinformatics methods. It includes twenty-three papers selected from among the best contributions on bio-neuroinformatics-related issues, which were presented at the International Conference on Artificial Neural Networks, held in Sofia, Bulgaria, on September 10-13, 2013 (ICANN 2013). The book covers a broad range of topics concerning the theory and applications of artificial neural networks, including recurrent neural networks, super-Turing computation and reservoir computing, double-layer vector perceptrons, nonnegative matrix factorization, bio-inspired models of cell communities, Gestalt laws, embodied theory of language understanding, saccadic gaze shifts and memory formation, and new training algorithms for Deep Boltzmann Machines, as well as dynamic neural networks and kernel machines. It also reports on new approaches to reinforcement learning, optimal control of discrete time-delay systems, new al...

  1. Effectiveness of Partition and Graph Theoretic Clustering Algorithms for Multiple Source Partial Discharge Pattern Classification Using Probabilistic Neural Network and Its Adaptive Version: A Critique Based on Experimental Studies

    Directory of Open Access Journals (Sweden)

    S. Venkatesh

    2012-01-01

    Full Text Available Partial discharge (PD) is a major cause of failure of power apparatus and hence its measurement and analysis have emerged as a vital field in assessing the condition of the insulation system. Several efforts have been undertaken by researchers to classify PD pulses utilizing artificial intelligence techniques. Recently, the focus has shifted to the identification of multiple sources of PD since it is often encountered in real-time measurements. Studies have indicated that classification of multi-source PD becomes difficult with the degree of overlap and that several techniques such as mixed Weibull functions, neural networks, and wavelet transformation have been attempted with limited success. Since digital PD acquisition systems record data for a substantial period, the database becomes large, posing considerable difficulties during classification. This research work aims firstly at analyzing aspects concerning classification capability during the discrimination of multisource PD patterns. Secondly, it attempts at extending the previous work of the authors in utilizing the novel approach of probabilistic neural network versions for classifying moderate sets of PD sources to that of large sets. The third focus is on comparing the ability of partition-based algorithms, namely, the labelled (learning vector quantization) and unlabelled (K-means) versions, with that of a novel hypergraph-based clustering method in providing parsimonious sets of centers during classification.

  2. Neural Based Orthogonal Data Fitting The EXIN Neural Networks

    CERN Document Server

    Cirrincione, Giansalvo

    2008-01-01

    Written by three leaders in the field of neural based algorithms, Neural Based Orthogonal Data Fitting proposes several neural networks, all endowed with a complete theory which not only explains their behavior, but also compares them with the existing neural and traditional algorithms. The algorithms are studied from different points of view, including: as a differential geometry problem, as a dynamic problem, as a stochastic problem, and as a numerical problem. All algorithms have also been analyzed on real time problems (large dimensional data matrices) and have shown accurate solutions. Wh

  3. Complex-valued Neural Networks

    Science.gov (United States)

    Hirose, Akira

    This paper reviews the features and applications of complex-valued neural networks (CVNNs). First we list the present application fields, and describe the advantages of the CVNNs in two application examples, namely, an adaptive plastic-landmine visualization system and an optical frequency-domain-multiplexed learning logic circuit. Then we briefly discuss the features of complex number itself to find that the phase rotation is the most significant concept, which is very useful in processing the information related to wave phenomena such as lightwave and electromagnetic wave. The CVNNs will also be an indispensable framework of the future microelectronic information-processing hardware where the quantum electron wave plays the principal role.

  4. Collision avoidance using neural networks

    Science.gov (United States)

    Sugathan, Shilpa; Sowmya Shree, B. V.; Warrier, Mithila R.; Vidhyapathi, C. M.

    2017-11-01

    Nowadays, road accidents are caused by the negligence of drivers and pedestrians or by unexpected obstacles that come into the vehicle’s path. In this paper, a model (robot) is developed to assist drivers in travelling smoothly without accidents. It reacts to real-time obstacles on the four critical sides of the vehicle and takes the necessary action. The sensor used for detecting the obstacles is an IR proximity sensor. A single-layer perceptron neural network is trained and tested on all possible combinations of sensor readings using Matlab (offline). A microcontroller (ARM Cortex-M3 LPC1768) is used to control the vehicle through the output data received from Matlab via serial communication. Hence, the vehicle becomes capable of reacting to any combination of real-time obstacles.
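
    A minimal sketch of the kind of single-layer perceptron described above, trained offline on all combinations of four binary proximity-sensor readings. The sensor-to-action truth table used here (brake if the front sensor fires, or if both side sensors fire) is invented for illustration; the paper's actual action table and Matlab training setup are not reproduced.

      import numpy as np

      # All 16 combinations of the four IR proximity sensors (front, back, left, right).
      sensors = np.array([[(i >> k) & 1 for k in range(4)] for i in range(16)], dtype=float)
      front, back, left, right = sensors.T
      # Hypothetical target: brake if the front sensor fires, or both side sensors fire.
      brake = ((front == 1) | ((left == 1) & (right == 1))).astype(float)

      # Single output neuron trained with the classic perceptron learning rule.
      w = np.zeros(4)
      bias = 0.0
      lr = 0.1
      for epoch in range(100):
          errors = 0
          for x, t in zip(sensors, brake):
              y = float(np.dot(w, x) + bias > 0)
              if y != t:
                  w += lr * (t - y) * x
                  bias += lr * (t - y)
                  errors += 1
          if errors == 0:                      # converged: the table is linearly separable
              break

      print("weights:", w, "bias:", bias)
      print("all 16 combinations classified correctly:",
            np.all((sensors @ w + bias > 0).astype(float) == brake))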

  5. Tampa Electric Neural Network Sootblowing

    Energy Technology Data Exchange (ETDEWEB)

    Mark A. Rhode

    2003-12-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent soot-blowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NOx emissions and improve heat

  6. Tampa Electric Neural Network Sootblowing

    Energy Technology Data Exchange (ETDEWEB)

    Mark A. Rhode

    2004-09-30

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing cofunding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NOx emissions and improve heat rate

  7. Tampa Electric Neural Network Sootblowing

    Energy Technology Data Exchange (ETDEWEB)

    Mark A. Rhode

    2004-03-31

    Boiler combustion dynamics change continuously due to several factors including coal quality, boiler loading, ambient conditions, changes in slag/soot deposits and the condition of plant equipment. NOx formation, Particulate Matter (PM) emissions, and boiler thermal performance are directly affected by the sootblowing practices on a unit. As part of its Power Plant Improvement Initiative program, the US DOE is providing co-funding (DE-FC26-02NT41425) and NETL is the managing agency for this project at Tampa Electric's Big Bend Station. This program serves to co-fund projects that have the potential to increase thermal efficiency and reduce emissions from coal-fired utility boilers. A review of the Big Bend units helped identify intelligent sootblowing as a suitable application to achieve the desired objectives. The existing sootblower control philosophy uses sequential schemes, whose frequency is either dictated by the control room operator or is timed based. The intent of this project is to implement a neural network based intelligent sootblowing system, in conjunction with state-of-the-art controls and instrumentation, to optimize the operation of a utility boiler and systematically control boiler fouling. Utilizing unique, on-line, adaptive technology, operation of the sootblowers can be dynamically controlled based on real-time events and conditions within the boiler. This could be an extremely cost-effective technology, which has the ability to be readily and easily adapted to virtually any pulverized coal fired boiler. Through unique on-line adaptive technology, Neural Network-based systems optimize the boiler operation by accommodating equipment performance changes due to wear and maintenance activities, adjusting to fluctuations in fuel quality, and improving operating flexibility. The system dynamically adjusts combustion setpoints and bias settings in closed-loop supervisory control to simultaneously reduce NOx emissions and improve heat rate

  8. CNEM: Cluster Based Network Evolution Model

    Directory of Open Access Journals (Sweden)

    Sarwat Nizamani

    2015-01-01

    Full Text Available This paper presents a network evolution model, which is based on the clustering approach. The proposed approach depicts the network evolution, which demonstrates the network formation from individual nodes to a fully evolved network. An agglomerative hierarchical clustering method is applied for the evolution of the network. In the paper, we present three case studies which show the evolution of the networks from scratch. These case studies include: the terrorist network of the 9/11 incidents, the terrorist network of the WMD (Weapons of Mass Destruction) plot against France, and a network of tweets discussing a topic. The network of 9/11 is also used for evaluation, using other social network analysis methods, which show that the clusters created using the proposed model of network evolution are of good quality; thus the proposed method can be used by law enforcement agencies in order to further investigate criminal networks.

  9. Program Aids Simulation Of Neural Networks

    Science.gov (United States)

    Baffes, Paul T.

    1990-01-01

    Computer program NETS - Tool for Development and Evaluation of Neural Networks - provides simulation of neural-network algorithms plus a software environment for development of such algorithms. Enables the user to customize the patterns of connections between layers of the network, and provides features for saving the weight values of the network, allowing more precise control over the learning process. Use consists of translating the problem into a format of input/output pairs, designing a network configuration for the problem, and finally training the network with the input/output pairs until an acceptable error is reached. Written in C.

  10. Learning Processes of Layered Neural Networks

    OpenAIRE

    Fujiki, Sumiyoshi; Fujiki, Nahomi M.

    1995-01-01

    A positive reinforcement type learning algorithm is formulated for a stochastic feed-forward neural network, and a learning equation similar to that of the Boltzmann machine algorithm is obtained. By applying a mean field approximation to the same stochastic feed-forward neural network, a deterministic analog feed-forward network is obtained and the back-propagation learning rule is re-derived.

  11. Artificial neural network modeling and cluster analysis for organic facies and burial history estimation using well log data: A case study of the South Pars Gas Field, Persian Gulf, Iran

    Science.gov (United States)

    Alizadeh, Bahram; Najjari, Saeid; Kadkhodaie-Ilkhchi, Ali

    2012-08-01

    Intelligent and statistical techniques were used to extract the hidden organic facies from well log responses in the Giant South Pars Gas Field, Persian Gulf, Iran. Data from the Kazhdomi Formation (Mid-Cretaceous) and the Kangan-Dalan Formations (Permo-Triassic) were used for this purpose. Initially GR, SGR, CGR, THOR, POTA, NPHI and DT logs were applied to model the relationship between wireline logs and Total Organic Carbon (TOC) content using Artificial Neural Networks (ANN). The correlation coefficient (R²) between the measured and ANN predicted TOC equals 89%. The performance of the model is measured by the Mean Squared Error function, which does not exceed 0.0073. Using the Cluster Analysis technique and creating a binary hierarchical cluster tree, the constructed TOC column of each formation was clustered into 5 organic facies according to their geochemical similarity. Later a second model with an accuracy of 84% was created by ANN to determine the specified clusters (facies) directly from well logs for quick cluster recognition in other wells of the studied field. Each created facies was correlated to its appropriate burial history curve. Hence each and every facies of a formation could be scrutinized separately and directly from its well logs, demonstrating the time and depth of oil or gas generation. Therefore, the potential production zones of the Kazhdomi probable source rock and the Kangan-Dalan reservoir formation could be identified while well logging operations (especially in LWD cases) are in progress. This could reduce uncertainty and save plenty of time and cost for the oil industry and aid in the successful implementation of exploration and exploitation plans.

  12. Research of The Deeper Neural Networks

    Directory of Open Access Journals (Sweden)

    Xiao You Rong

    2016-01-01

    Full Text Available Neural networks (NNs) have powerful computational abilities and can be used in a variety of applications; however, training these networks is still a difficult problem. Many neural models with different network structures have been constructed. In this report, a deeper neural networks (DNNs) architecture is proposed. The training algorithm of the deeper neural network involves searching for the global optimal point on the actual error surface. Before the training algorithm is designed, the error surface of the deeper neural network is analyzed from simple to complicated cases, and the features of the error surface are obtained. Based on these characteristics, the initialization method and training algorithm of DNNs are designed. For the initialization, a block-uniform design method is proposed, which separates the error surface into blocks and finds the optimal block using the uniform design method. For the training algorithm, an improved gradient-descent method is proposed, which adds a penalty term to the cost function of the standard gradient-descent method. This algorithm gives the network a strong approximating ability and keeps the network state stable. All of these improve the practicality of the neural network.

  13. Clustered Protocadherins Are Required for Building Functional Neural Circuits

    Science.gov (United States)

    Hasegawa, Sonoko; Kobayashi, Hiroaki; Kumagai, Makiko; Nishimaru, Hiroshi; Tarusawa, Etsuko; Kanda, Hiro; Sanbo, Makoto; Yoshimura, Yumiko; Hirabayashi, Masumi; Hirabayashi, Takahiro; Yagi, Takeshi

    2017-01-01

    Neuronal identity is generated by the cell-surface expression of clustered protocadherin (Pcdh) isoforms. In mice, 58 isoforms from three gene clusters, Pcdhα, Pcdhβ, and Pcdhγ, are differentially expressed in neurons. Since cis-heteromeric Pcdh oligomers on the cell surface interact homophilically with those in other neurons in trans, it has been thought that the Pcdh isoform repertoire determines the binding specificity of synapses. We previously described the cooperative functions of isoforms from all three Pcdh gene clusters in neuronal survival and synapse formation in the spinal cord. However, the neuronal loss and the following neonatal lethality prevented an analysis of the postnatal development and characteristics of the clustered-Pcdh-null (Δαβγ) neural circuits. Here, we used two methods, one to generate chimeric mice by transplanting Δαβγ neurons into mouse embryos, and the other to generate double mutant mice harboring null alleles of both the Pcdh gene and the proapoptotic gene Bax to prevent neuronal loss. First, our results showed that the surviving chimeric mice that had a high contribution of Δαβγ cells exhibited paralysis and died in the postnatal period. An analysis of neuronal survival in postnatally developing brain regions of chimeric mice clarified that many Δαβγ neurons in the forebrain were spared from apoptosis, unlike those in the reticular formation of the brainstem. Second, in Δαβγ/Bax null double mutants, the central pattern generator (CPG) for locomotion failed to create a left-right alternating pattern even in the absence of neurodegeneration. Third, calcium imaging of cultured hippocampal neurons showed that the network activity of Δαβγ neurons tended to be more synchronized and lost the variability in the number of simultaneously active neurons observed in the control network. Lastly, a comparative analysis for trans-homophilic interactions of the exogenously introduced single Pcdh-γA3 isoforms

  14. Clustered Protocadherins Are Required for Building Functional Neural Circuits

    Directory of Open Access Journals (Sweden)

    Takeshi Yagi

    2017-04-01

    Full Text Available Neuronal identity is generated by the cell-surface expression of clustered protocadherin (Pcdh) isoforms. In mice, 58 isoforms from three gene clusters, Pcdhα, Pcdhβ, and Pcdhγ, are differentially expressed in neurons. Since cis-heteromeric Pcdh oligomers on the cell surface interact homophilically with those in other neurons in trans, it has been thought that the Pcdh isoform repertoire determines the binding specificity of synapses. We previously described the cooperative functions of isoforms from all three Pcdh gene clusters in neuronal survival and synapse formation in the spinal cord. However, the neuronal loss and the following neonatal lethality prevented an analysis of the postnatal development and characteristics of the clustered-Pcdh-null (Δαβγ) neural circuits. Here, we used two methods, one to generate chimeric mice by transplanting Δαβγ neurons into mouse embryos, and the other to generate double mutant mice harboring null alleles of both the Pcdh gene and the proapoptotic gene Bax to prevent neuronal loss. First, our results showed that the surviving chimeric mice that had a high contribution of Δαβγ cells exhibited paralysis and died in the postnatal period. An analysis of neuronal survival in postnatally developing brain regions of chimeric mice clarified that many Δαβγ neurons in the forebrain were spared from apoptosis, unlike those in the reticular formation of the brainstem. Second, in Δαβγ/Bax null double mutants, the central pattern generator (CPG) for locomotion failed to create a left-right alternating pattern even in the absence of neurodegeneration. Third, calcium imaging of cultured hippocampal neurons showed that the network activity of Δαβγ neurons tended to be more synchronized and lost the variability in the number of simultaneously active neurons observed in the control network. Lastly, a comparative analysis for trans-homophilic interactions of the exogenously introduced single

  15. Neural network topology design for nonlinear control

    Science.gov (United States)

    Haecker, Jens; Rudolph, Stephan

    2001-03-01

    Neural networks, especially in nonlinear system identification and control applications, are typically considered to be black boxes that are difficult to analyze and understand mathematically. For this reason, an in-depth mathematical analysis offering insight into the different neural network transformation layers based on a theoretical transformation scheme is desired, but has up to now been neither available nor known. In previous works it has been shown how proven engineering methods such as dimensional analysis and the Laplace transform may be used to construct a neural controller topology for time-invariant systems. Using the knowledge of the neural correspondences of these two classical methods, the internal nodes of the network could also be successfully interpreted after training. As a further extension of these works, the paper describes the latest state of a theoretical interpretation framework describing the neural network transformation sequences in nonlinear system identification and control. This is achieved by incorporating the method of exact input-output linearization into the two above-mentioned transform sequences of dimensional analysis and the Laplace transformation. Based on these three theoretical considerations, neural network topologies may be designed in special situations by pure translation, in the sense of a structural compilation of the known classical solutions into their corresponding neural topology. Based on known exemplary results, the paper synthesizes the proposed approach into the visionary goal of a structural compiler for neural networks. This structural compiler for neural networks is intended to automatically convert classical control formulations into their equivalent neural network structure based on the principles of equivalence between formula and operator, and operator and structure, which are discussed in detail in this work.

  16. Quantum Entanglement in Neural Network States

    Science.gov (United States)

    Deng, Dong-Ling; Li, Xiaopeng; Das Sarma, S.

    2017-04-01

    Machine learning, one of today's most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling, and at the same time strongly deviates from the Page entropy of the completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and bears Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our results uncover the

  17. Quantum Entanglement in Neural Network States

    Directory of Open Access Journals (Sweden)

    Dong-Ling Deng

    2017-05-01

    Full Text Available Machine learning, one of today’s most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling, and at the same time strongly deviates from the Page entropy of the completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and bears Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our
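
    To make the central quantity concrete, the sketch below builds a small random RBM wavefunction over N = 8 spins, enumerates all configurations exactly, and computes the bipartite von Neumann entanglement entropy from the singular values of the reshaped state vector. The random weights and system size are illustrative assumptions; the paper's area-law and volume-law constructions are not reproduced here.

      import numpy as np
      from itertools import product

      def rbm_amplitude(s, a, b, W):
          """Unnormalised RBM wavefunction psi(s) = exp(a.s) * prod_j 2 cosh(b_j + (W s)_j)."""
          theta = b + W @ s
          return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

      def entanglement_entropy(psi, n_A, n_B):
          """Von Neumann entropy of subsystem A for a pure state on n_A + n_B spins."""
          M = psi.reshape(2 ** n_A, 2 ** n_B)
          sv = np.linalg.svd(M, compute_uv=False)
          p = sv ** 2
          p = p[p > 1e-15]
          return float(-np.sum(p * np.log(p)))

      # toy example: N = 8 visible spins, M = 8 hidden units, random complex RBM weights
      rng = np.random.default_rng(0)
      N, M = 8, 8
      a = rng.normal(scale=0.1, size=N) + 1j * rng.normal(scale=0.1, size=N)
      b = rng.normal(scale=0.1, size=M) + 1j * rng.normal(scale=0.1, size=M)
      W = rng.normal(scale=0.1, size=(M, N)) + 1j * rng.normal(scale=0.1, size=(M, N))

      configs = np.array(list(product([-1, 1], repeat=N)), dtype=float)
      psi = np.array([rbm_amplitude(s, a, b, W) for s in configs])
      psi /= np.linalg.norm(psi)
      print("S(A) for a half/half cut:", entanglement_entropy(psi, 4, 4))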

  18. Genetic algorithm for neural networks optimization

    Science.gov (United States)

    Setyawati, Bina R.; Creese, Robert C.; Sahirman, Sidharta

    2004-11-01

    This paper examines the forecasting performance of multi-layer feed-forward neural networks in modeling a particular foreign exchange rate, i.e. Japanese Yen/US Dollar. The effects of two learning methods, Back Propagation and Genetic Algorithm, with the neural network topology and other parameters fixed, were investigated. The early results indicate that the application of this hybrid system seems to be well suited for the forecasting of foreign exchange rates. The Neural Networks and Genetic Algorithm were programmed using MATLAB®.
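
    A minimal sketch of evolving the weights of a fixed-topology feed-forward network with a genetic algorithm, in the spirit of the comparison described above: negative mean-squared error serves as the fitness, and a simple generational scheme with elitism, uniform crossover and Gaussian mutation is used. The toy target function, population size and mutation rates are assumptions for illustration, not the paper's exchange-rate setup.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy data set: learn y = sin(x) on [-3, 3] with a 1-8-1 feed-forward net.
      X = np.linspace(-3, 3, 64).reshape(-1, 1)
      y = np.sin(X).ravel()

      H = 8
      n_genes = 1 * H + H + H * 1 + 1            # W1, b1, W2, b2 flattened

      def forward(genes, X):
          W1 = genes[:H].reshape(1, H)
          b1 = genes[H:2 * H]
          W2 = genes[2 * H:3 * H].reshape(H, 1)
          b2 = genes[3 * H]
          hidden = np.tanh(X @ W1 + b1)
          return (hidden @ W2).ravel() + b2

      def fitness(genes):
          return -np.mean((forward(genes, X) - y) ** 2)   # negative MSE

      # Simple generational GA: keep the better half, uniform crossover, Gaussian mutation.
      pop_size, generations = 60, 300
      pop = rng.normal(scale=1.0, size=(pop_size, n_genes))
      for gen in range(generations):
          scores = np.array([fitness(ind) for ind in pop])
          order = np.argsort(scores)[::-1]
          parents = pop[order[:pop_size // 2]]            # elitist selection
          children = []
          while len(children) < pop_size - len(parents):
              pa, pb = parents[rng.integers(len(parents), size=2)]
              mask = rng.random(n_genes) < 0.5            # uniform crossover
              child = np.where(mask, pa, pb)
              child = child + rng.normal(scale=0.1, size=n_genes) * (rng.random(n_genes) < 0.2)
              children.append(child)
          pop = np.vstack([parents, np.array(children)])

      best = pop[np.argmax([fitness(ind) for ind in pop])]
      print("final MSE:", -fitness(best))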

  19. Estimation of Conditional Quantile using Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1999-01-01

    The problem of estimating conditional quantiles using neural networks is investigated here. A basic structure is developed using the methodology of kernel estimation, and a theory guaranteeing consistency on a mild set of assumptions is provided. The constructed structure constitutes a basis...... for the design of a variety of different neural networks, some of which are considered in detail. The task of estimating conditional quantiles is related to Bayes point estimation whereby a broad range of applications within engineering, economics and management can be suggested. Numerical results illustrating...... the capabilities of the elaborated neural network are also given....
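
    A small self-contained sketch of neural conditional-quantile estimation: a one-hidden-layer network is trained by (sub)gradient descent on the pinball (check) loss, whose minimiser is the conditional tau-quantile. The network size, learning rate and heteroscedastic toy data are assumptions; the kernel-based structure developed in the paper is not reproduced.

      import numpy as np

      def pinball_loss(y, q, tau):
          """Check (pinball) loss whose minimiser is the tau-th conditional quantile."""
          e = y - q
          return np.mean(np.maximum(tau * e, (tau - 1) * e))

      def fit_quantile_net(X, y, tau, hidden=10, lr=0.05, epochs=5000, seed=0):
          """Tiny 1-hidden-layer network trained by (sub)gradient descent on the pinball loss."""
          rng = np.random.default_rng(seed)
          W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden)); b1 = np.zeros(hidden)
          W2 = rng.normal(scale=0.5, size=(hidden, 1));          b2 = 0.0
          for _ in range(epochs):
              h = np.tanh(X @ W1 + b1)
              q = (h @ W2).ravel() + b2
              # subgradient of the mean pinball loss with respect to the prediction q
              g = np.where(y > q, -tau, 1.0 - tau) / len(y)
              gW2 = h.T @ g[:, None];            gb2 = g.sum()
              gh = g[:, None] @ W2.T * (1 - h ** 2)
              gW1 = X.T @ gh;                    gb1 = gh.sum(axis=0)
              W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
          return lambda Xnew: (np.tanh(Xnew @ W1 + b1) @ W2).ravel() + b2

      # toy heteroscedastic data: the noise level grows with x
      rng = np.random.default_rng(1)
      x = rng.uniform(0, 1, 500)
      y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1 + 0.3 * x)
      X = x.reshape(-1, 1)
      q90 = fit_quantile_net(X, y, tau=0.9)
      print("fraction of points below the fitted 0.9 quantile:", np.mean(y <= q90(X)))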

  20. Vectorized algorithms for spiking neural network simulation.

    Science.gov (United States)

    Brette, Romain; Goodman, Dan F M

    2011-06-01

    High-level languages (Matlab, Python) are popular in neuroscience because they are flexible and accelerate development. However, for simulating spiking neural networks, the cost of interpretation is a bottleneck. We describe a set of algorithms to simulate large spiking neural networks efficiently with high-level languages using vector-based operations. These algorithms constitute the core of Brian, a spiking neural network simulator written in the Python language. Vectorized simulation makes it possible to combine the flexibility of high-level languages with the computational efficiency usually associated with compiled languages.
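
    The idea of vectorised simulation can be illustrated with a leaky integrate-and-fire network updated entirely through whole-array numpy operations, i.e. no per-neuron Python loop inside the time step. The network size, parameters and random connectivity below are placeholders; this is not Brian's implementation, only a sketch of the style of update it relies on.

      import numpy as np

      # Vectorised simulation of N leaky integrate-and-fire neurons driven by
      # constant currents plus random sparse excitatory connectivity.
      N, T, dt = 1000, 500, 0.1          # neurons, time steps, ms per step
      tau, v_rest, v_reset, v_th = 10.0, -70.0, -65.0, -50.0

      rng = np.random.default_rng(0)
      W = (rng.random((N, N)) < 0.02) * 0.5        # sparse weights (mV per presynaptic spike)
      I_ext = rng.uniform(1.5, 2.5, N)             # constant external drive
      v = np.full(N, v_rest)
      spike_counts = np.zeros(N, dtype=int)

      for t in range(T):
          spiked = v >= v_th                       # boolean vector of spiking neurons
          spike_counts += spiked
          v[spiked] = v_reset                      # reset membrane potentials, vectorised
          syn_input = W @ spiked.astype(float)     # summed synaptic input from this step's spikes
          dv = (-(v - v_rest) + I_ext * tau) / tau * dt + syn_input
          v = v + dv                               # leaky integration for all neurons at once

      print("mean firing rate (Hz):", spike_counts.mean() / (T * dt / 1000.0))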

  1. Convolutional Neural Network for Image Recognition

    CERN Document Server

    Seifnashri, Sahand

    2015-01-01

    The aim of this project is to use machine learning techniques, especially Convolutional Neural Networks, for image processing. These techniques can be used for Quark-Gluon discrimination using calorimeter data, but unfortunately I didn’t manage to get the calorimeter data and I just used the Jet data from miniaodsim (ak4 chs). The Jet data was not good enough for a Convolutional Neural Network, which is designed for ’image’ recognition. This report is made of two main parts: part one is mainly about implementing Convolutional Neural Networks on unphysical data such as the MNIST digits and the CIFAR-10 dataset, and part two is about the Jet data.

  2. Neural Network and Letter Recognition.

    Science.gov (United States)

    Lee, Hue Yeon

    Neural net architectures and learning algorithms that recognize 36 hand-written alphanumeric characters are studied. Thin-line input patterns written in a 32 x 32 binary array are used. The system comprises two major components, viz. a preprocessing unit and a recognition unit. The preprocessing unit in turn consists of three layers of neurons: the U-layer, the V-layer, and the C-layer. The function of the U-layer is to extract local features by template matching. The correlation between the detected local features is considered. Through correlating neurons in a plane with their neighboring neurons, the V-layer thickens the on-cells, or lines that are groups of on-cells, of the previous layer. These two correlations yield some deformation tolerance and some of the rotational tolerance of the system. The C-layer then compresses data through the 'Gabor' transform. Pattern-dependent choice of the centers and wavelengths of the 'Gabor' filters is the cause of the shift and scale tolerance of the system. Three different learning schemes have been investigated in the recognition unit, namely: error back-propagation learning with hidden units, simple perceptron learning, and competitive learning. Their performances were analyzed and compared. Since the network sometimes fails to distinguish between two letters that are inherently similar, additional ambiguity-resolving neural nets are introduced on top of the above main neural net. The two-dimensional Fourier transform is used as the preprocessing and the perceptron is used as the recognition unit of the ambiguity resolver. One hundred different persons' handwriting sets are collected. Some of these are used as the training sets and the remainder are used as the test sets. The correct recognition rate of the system increases with the number of training sets and eventually saturates at a certain value. Similar recognition rates are obtained for the above three different learning algorithms. The minimum error

  3. Nonequilibrium landscape theory of neural networks

    Science.gov (United States)

    Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin

    2013-01-01

    The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attraction represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments. PMID:24145451

  4. Neural Network for Estimating Conditional Distribution

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Kulczycki, P.

    Neural networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and consistency is proved from a mild set of assumptions. A number of applications within...... statistics, decision theory and signal processing are suggested, and a numerical example illustrating the capabilities of the elaborated network is given...

  5. Person Movement Prediction Using Neural Networks

    OpenAIRE

    Vintan, Lucian; Gellert, Arpad; Petzold, Jan; Ungerer, Theo

    2006-01-01

    Ubiquitous systems use context information to adapt appliance behavior to human needs. Even more convenience is reached if the appliance foresees the user's desires and acts proactively. This paper proposes neural prediction techniques to anticipate a person's next movement. We focus on neural predictors (multi-layer perceptron with back-propagation learning) with and without pre-training. The optimal configuration of the neural network is determined by evaluating movement sequences of real p...

  6. Concurrent conditional clustering of multiple networks: COCONETS.

    Directory of Open Access Journals (Sweden)

    Sabrina Kleessen

    Full Text Available The accumulation of high-throughput data from different experiments has facilitated the extraction of condition-specific networks over the same set of biological entities. Comparing and contrasting such multiple biological networks is at the center of differential network biology, aiming at determining general and condition-specific responses captured in the network structure (i.e., the included associations between the network components). We provide a novel way of comparing multiple networks based on determining a network clustering (i.e., partition into communities) which is optimal across the set of networks with respect to a given cluster quality measure. To this end, we formulate the optimization-based problem of concurrent conditional clustering of multiple networks, termed COCONETS, based on the modularity. The solution to this problem is a clustering which depends on all considered networks and pinpoints their preserved substructures. We present theoretical results for special classes of networks to demonstrate the implications of conditionality captured by the COCONETS formulation. As the problem can be shown to be intractable, we extend an existing efficient greedy heuristic and apply it to determine concurrent conditional clusters on coexpression networks extracted from publicly available time-resolved transcriptomics data of Escherichia coli under five stresses as well as on metabolite correlation networks from a metabolomics data set of Arabidopsis thaliana exposed to eight environmental conditions. We demonstrate that investigating the differences between the clustering based on all networks and that obtained from a subset of networks can be used to quantify the specificity of biological responses. While a comparison of the Escherichia coli coexpression networks based on seminal properties does not pinpoint biologically relevant differences, the common network substructures extracted by COCONETS are supported by
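
    The sketch below is not the COCONETS algorithm itself; it only illustrates, on assumed toy data, the idea of judging one candidate partition jointly across several networks by its modularity in each of them.

      import networkx as nx
      from networkx.algorithms import community

      # Toy stand-ins for condition-specific networks over the same node set.
      nets = [nx.erdos_renyi_graph(60, 0.08, seed=s) for s in range(3)]

      # Candidate partitions: a modularity-based clustering of each single network.
      candidates = [list(community.greedy_modularity_communities(g)) for g in nets]

      def joint_score(partition, networks):
          # Score a partition by its worst-case modularity across all networks;
          # this works here because every toy graph shares the node set 0..59.
          return min(community.modularity(g, partition) for g in networks)

      best = max(candidates, key=lambda p: joint_score(p, nets))
      print("clusters:", len(best), "worst-case modularity:", round(joint_score(best, nets), 3))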

  7. Curvature of Indoor Sensor Network: Clustering Coefficient

    Directory of Open Access Journals (Sweden)

    Ariaei F

    2008-01-01

    Full Text Available Abstract We investigate the geometric properties of the communication graph in realistic low-power wireless networks. In particular, we explore the concept of the curvature of a wireless network via the clustering coefficient. Clustering coefficient analysis is a computationally simplified, semilocal approach, which nevertheless captures such a large-scale feature as congestion in the underlying network. The clustering coefficient concept is applied to three cases of indoor sensor networks, under varying thresholds on the link packet reception rate (PRR). A transition from positive curvature ("meshed" network) to negative curvature ("core concentric" network) is observed by increasing the threshold. Even though this paper deals with network curvature per se, we nevertheless expand on the underlying congestion motivation, propose several new concepts (network inertia and centroid), and finally we argue that greedy routing on a virtual positively curved network achieves load balancing on the physical network.
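
    A minimal sketch of the kind of analysis described above (an assumed workflow, not the paper's code): threshold links by packet reception rate and watch how the average clustering coefficient of the surviving communication graph changes.

      import random
      import networkx as nx

      random.seed(0)
      G = nx.random_geometric_graph(100, 0.2)            # toy indoor deployment
      for u, v in G.edges():
          G[u][v]["prr"] = random.random()               # synthetic packet reception rate

      for threshold in (0.2, 0.5, 0.8):
          kept = [(u, v) for u, v, d in G.edges(data=True) if d["prr"] >= threshold]
          H = nx.Graph(kept)
          H.add_nodes_from(G)                            # keep isolated nodes in the average
          print(threshold, round(nx.average_clustering(H), 3))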

  8. Curvature of Indoor Sensor Network: Clustering Coefficient

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available We investigate the geometric properties of the communication graph in realistic low-power wireless networks. In particular, we explore the concept of the curvature of a wireless network via the clustering coefficient. Clustering coefficient analysis is a computationally simplified, semilocal approach, which nevertheless captures such a large-scale feature as congestion in the underlying network. The clustering coefficient concept is applied to three cases of indoor sensor networks, under varying thresholds on the link packet reception rate (PRR). A transition from positive curvature (“meshed” network) to negative curvature (“core concentric” network) is observed by increasing the threshold. Even though this paper deals with network curvature per se, we nevertheless expand on the underlying congestion motivation, propose several new concepts (network inertia and centroid), and finally we argue that greedy routing on a virtual positively curved network achieves load balancing on the physical network.

  9. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Science.gov (United States)

    Chernoded, Andrey; Dudko, Lev; Myagkov, Igor; Volkov, Petr

    2017-10-01

    Most of the modern analyses in high energy physics use signal-versus-background classification techniques of machine learning methods and neural networks in particular. The deep learning neural network is the most promising modern technique to separate signal and background and nowadays can be widely and successfully implemented as a part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of a top quark analysis.

  10. Deep Learning Neural Networks and Bayesian Neural Networks in Data Analysis

    Directory of Open Access Journals (Sweden)

    Chernoded Andrey

    2017-01-01

    Full Text Available Most of the modern analyses in high energy physics use signal-versus-background classification techniques of machine learning methods and neural networks in particular. The deep learning neural network is the most promising modern technique to separate signal and background and nowadays can be widely and successfully implemented as a part of a physics analysis. In this article we compare the application of deep learning and Bayesian neural networks as classifiers in an instance of a top quark analysis.

  11. Invertebrate diversity classification using self-organizing map neural network: with some special topological functions

    Directory of Open Access Journals (Sweden)

    WenJun Zhang

    2014-06-01

    Full Text Available In the present study we used a self-organizing map (SOM) neural network to conduct unsupervised clustering of invertebrate orders in a rice field. Four topological functions, i.e., cossintopf, sincostopf, acossintopf, and expsintopf, built on the template in the Matlab toolbox, were used in SOM neural network learning. Results showed that the clusters differed when using different topological functions, because different topological functions generate different spatial structures of neurons in the neural network. We may choose among these functions and results based on comparison with the practical situation.
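
    A hedged sketch of SOM-based clustering with the third-party MiniSom package and a standard Gaussian neighborhood; the custom Matlab topological functions named above are not reproduced here, and the data are synthetic.

      import numpy as np
      from minisom import MiniSom

      data = np.random.rand(50, 4)          # toy feature vectors, one row per order
      som = MiniSom(3, 3, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=1)
      som.random_weights_init(data)
      som.train_random(data, 1000)

      # Samples that share a winning map unit fall into the same cluster.
      clusters = {}
      for i, x in enumerate(data):
          clusters.setdefault(som.winner(x), []).append(i)
      print({unit: len(members) for unit, members in clusters.items()})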

  12. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    Science.gov (United States)

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.

  13. [Medical use of artificial neural networks].

    Science.gov (United States)

    Molnár, B; Papik, K; Schaefer, R; Dombóvári, Z; Fehér, J; Tulassay, Z

    1998-01-04

    The main aim of research in medical diagnostics is to develop more accurate, cost-effective and convenient systems, procedures and methods for supporting clinicians. In their paper the authors introduce a method that has recently come into focus, referred to as artificial neural networks. Based on the literature of the past 5-6 years, they give a brief review, highlighting the most important works, of the idea behind neural networks and what they are used for in the medical field. The definition, structure and operation of neural networks are discussed. In the application part they collect examples in order to give an insight into neural network application research. It is emphasised that in the near future fundamentally new diagnostic equipment can be developed based on this new technology in the field of ECG, EEG and macroscopic and microscopic image analysis systems.

  14. Application of neural networks in coastal engineering

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.

    methods. That is why it is becoming popular in various fields including coastal engineering. Waves and tides play important roles in coastal erosion or accretion. This paper briefly describes back-propagation neural networks and their application...

  15. Additive Feed Forward Control with Neural Networks

    DEFF Research Database (Denmark)

    Sørensen, O.

    1999-01-01

    This paper demonstrates a method to control a non-linear, multivariable, noisy process using trained neural networks. The basis for the method is a trained neural network controller acting as the inverse process model. A training method for obtaining such an inverse process model is applied....... A suitable 'shaped' (low-pass filtered) reference is used to overcome problems with excessive control action when using a controller acting as the inverse process model. The control concept is Additive Feed Forward Control, where the trained neural network controller, acting as the inverse process model......, is placed in a supplementary pure feed-forward path to an existing feedback controller. This concept benefits from the fact that an existing, traditionally designed, feedback controller can be retained without any modifications, and after training the connection of the neural network feed-forward controller...

  16. Blood glucose prediction using neural network

    Science.gov (United States)

    Soh, Chit Siang; Zhang, Xiqin; Chen, Jianhong; Raveendran, P.; Soh, Phey Hong; Yeo, Joon Hock

    2008-02-01

    We used a neural network for blood glucose level determination in this study. The data set used in this study was collected using a non-invasive blood glucose monitoring system with six laser diodes, each operating at a distinct near-infrared wavelength between 1500 nm and 1800 nm. The neural network is specifically used to determine the blood glucose level of one individual who participated in an oral glucose tolerance test (OGTT) session. Partial least squares regression is also used for blood glucose level determination for the purpose of comparison with the neural network model. The neural network model performs better in the prediction of blood glucose level compared with the partial least squares model.
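
    The comparison can be sketched as below with scikit-learn, using synthetic six-channel "spectra" in place of the near-infrared measurements; model sizes are assumptions.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 6))                              # six NIR channels (synthetic)
      y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=120)    # synthetic glucose level

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
      nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

      print("PLS R^2:", pls.score(X_te, y_te))
      print("NN  R^2:", nn.score(X_te, y_te))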

  17. PREDIKSI FOREX MENGGUNAKAN MODEL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    R. Hadapiningradja Kusumodestoni

    2015-11-01

    Full Text Available ABSTRACT Prediction is one of the most important techniques in running a forex business. The decision made in predicting is very important, because prediction helps determine the forex value at a certain future time and thus reduces the risk of loss. The aim of this research is to predict the forex business using a neural network model with one-minute time-series data in order to determine the prediction accuracy and thereby reduce the risk of running a forex business. The research method comprises data collection followed by training, learning and testing using a neural network. After evaluation, the results show that the Neural Network algorithm is able to predict forex with a prediction accuracy of 0.431 +/- 0.096, so this prediction can help reduce the risk of running a forex business. Keywords: prediction, forex, neural network.

  18. Using Neural Networks in Diagnosing Breast Cancer

    National Research Council Canada - National Science Library

    Fogel, David

    1997-01-01

    .... In the current study, evolutionary programming is used to train neural networks and linear discriminant models to detect breast cancer in suspicious and microcalcifications using radiographic features and patient age...

  19. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglová

    2004-03-01

    Full Text Available This paper deals with path planning and intelligent control of an autonomous robot which should move safely in a partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using a neural-network-based technique. Our method for the construction of a collision-free path for a robot moving among obstacles is based on two neural networks. The first neural network is used to determine the “free” space using ultrasound range finder data. The second neural network “finds” a safe direction for the next section of the robot path in the workspace while avoiding the nearest obstacles. Simulation examples of paths generated with the proposed techniques will be presented.

  20. Isolated Speech Recognition Using Artificial Neural Networks

    National Research Council Canada - National Science Library

    Polur, Prasad

    2001-01-01

    .... A small size vocabulary containing the words YES and NO is chosen. Spectral features using cepstral analysis are extracted per frame and imported to a feedforward neural network which uses a backpropagation with momentum training algorithm...

  1. Control of autonomous robot using neural networks

    Science.gov (United States)

    Barton, Adam; Volna, Eva

    2017-07-01

    The aim of the article is to design a method of control of an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and the current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and the generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify the models of autonomous robot behavior, a set of experiments as well as evaluation criteria were created. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot found itself.

  2. Neural Networks in Mobile Robot Motion

    Directory of Open Access Journals (Sweden)

    Danica Janglova

    2008-11-01

    Full Text Available This paper deals with path planning and intelligent control of an autonomous robot which should move safely in a partially structured environment. This environment may involve any number of obstacles of arbitrary shape and size; some of them are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using a neural-network-based technique. Our method for the construction of a collision-free path for a robot moving among obstacles is based on two neural networks. The first neural network is used to determine the "free" space using ultrasound range finder data. The second neural network "finds" a safe direction for the next section of the robot path in the workspace while avoiding the nearest obstacles. Simulation examples of paths generated with the proposed techniques will be presented.

  3. Artificial neural networks a practical course

    CERN Document Server

    da Silva, Ivan Nunes; Andrade Flauzino, Rogerio; Liboni, Luisa Helena Bartocci; dos Reis Alves, Silas Franco

    2017-01-01

    This book provides comprehensive coverage of neural networks, their evolution, their structure, the problems they can solve, and their applications. The first half of the book looks at theoretical investigations on artificial neural networks and addresses the key architectures that are capable of implementation in various application scenarios. The second half is designed specifically for the production of solutions using artificial neural networks to solve practical problems arising from different areas of knowledge. It also describes the various implementation details that were taken into account to achieve the reported results. These aspects contribute to the maturation and improvement of experimental techniques to specify the neural network architecture that is most appropriate for a particular application scope. The book is appropriate for students in graduate and upper undergraduate courses in addition to researchers and professionals.

  4. Constructive autoassociative neural network for facial recognition.

    Directory of Open Access Journals (Sweden)

    Bruno J T Fernandes

    Full Text Available Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature.

  5. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Genetic Algorithm Optimized Neural Networks Ensemble as. Calibration Model for Simultaneous Spectrophotometric. Estimation of Atenolol and Losartan Potassium in Tablets. Dondeti Satyanarayana*, Kamarajan Kannan and Rajappan Manavalan. Department of Pharmacy, Annamalai University, Annamalainagar, Tamil ...

  6. Applications of Pulse-Coupled Neural Networks

    CERN Document Server

    Ma, Yide; Wang, Zhaobin

    2011-01-01

    "Applications of Pulse-Coupled Neural Networks" explores the fields of image processing, including image filtering, image segmentation, image fusion, image coding, image retrieval, and biometric recognition, and the role of pulse-coupled neural networks in these fields. This book is intended for researchers and graduate students in artificial intelligence, pattern recognition, electronic engineering, and computer science. Prof. Yide Ma conducts research on intelligent information processing, biomedical image processing, and embedded system development at the School of Information Sci

  7. Neural networks as models of psychopathology.

    Science.gov (United States)

    Aakerlund, L; Hemmingsen, R

    1998-04-01

    Neural network modeling is situated between neurobiology, cognitive science, and neuropsychology. The structural and functional resemblance with biological computation has made artificial neural networks (ANN) useful for exploring the relationship between neurobiology and computational performance, i.e., cognition and behavior. This review provides an introduction to the theory of ANN and how they have linked theories from neurobiology and psychopathology in schizophrenia, affective disorders, and dementia.

  8. A neural network simulation package in CLIPS

    Science.gov (United States)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique of using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a tool kit for an integrated use of the two techniques and is also extendible to accommodate other AI techniques like semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  9. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network is introduced as an efficient classifier among the others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from a convergence problem and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error. Minimization of this cost function reduces the number of iterations needed for reaching the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic approach and its derivative have continuous values. This makes it possible to exploit the advantage of fast logarithmic convergence with the proposed learning method. Owing to the fast convergence of the logarithmic cost function, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered an efficient way of reducing the time requirement problem of the generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
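
    The contrast between the two cost functions can be sketched numerically; log(1 + e^2) is only one plausible logarithmic form, not necessarily the paper's exact function.

      import numpy as np

      errors = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
      squared, squared_grad = errors ** 2, 2 * errors
      logarithmic, log_grad = np.log1p(errors ** 2), 2 * errors / (1 + errors ** 2)

      # The logarithmic cost grows more slowly and its gradient saturates for
      # large errors, which changes how the smoothing parameter is updated.
      for e, s, sg, l, lg in zip(errors, squared, squared_grad, logarithmic, log_grad):
          print(f"e={e:4.1f}  squared={s:6.2f} (grad {sg:5.2f})  log={l:5.2f} (grad {lg:4.2f})")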

  10. Diabetic retinopathy screening using deep neural network.

    Science.gov (United States)

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from the Otago database photographed during October 2016 (485 photos), and 1200 photos from the Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under the receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago, and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.

  11. Symbolic processing in neural networks

    OpenAIRE

    Neto, João Pedro; Hava T Siegelmann; Costa,J.Félix

    2003-01-01

    In this paper we show that programming languages can be translated into recurrent (analog, rational weighted) neural nets. Implementation of programming languages in neural nets turns out to be not only theoretically exciting, but also to have some practical implications for the recent efforts to merge symbolic and sub-symbolic computation. To be of some use, it should be carried out in a context of bounded resources. Herein, we show how to use resource bounds to speed up computations over neural nets, thro...

  12. Functional clustering algorithm for the analysis of dynamic network data

    Science.gov (United States)

    Feldt, S.; Waddell, J.; Hetrick, V. L.; Berke, J. D.; Żochowski, M.

    2009-05-01

    We formulate a technique for the detection of functional clusters in discrete event data. The advantage of this algorithm is that no prior knowledge of the number of functional groups is needed, as our procedure progressively combines data traces and derives the optimal clustering cutoff in a simple and intuitive manner through the use of surrogate data sets. In order to demonstrate the power of this algorithm to detect changes in network dynamics and connectivity, we apply it to both simulated neural spike train data and real neural data obtained from the mouse hippocampus during exploration and slow-wave sleep. Using the simulated data, we show that our algorithm performs better than existing methods. In the experimental data, we observe state-dependent clustering patterns consistent with known neurophysiological processes involved in memory consolidation.

  13. Hindcasting cyclonic waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Chakravarty, N.V.

    the backpropagation networks with updated algorithms are used in this paper. A brief description of the working of a back-propagation neural network and three updated algorithms is given below. Backpropagation learning: Backpropagation is the most widely used... algorithm for supervised learning with multi-layer feed-forward networks. The idea of the backpropagation learning algorithm is the repeated application of the chain rule to compute the influence of each weight in the network with respect to an arbitrary...
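
    The chain-rule computation can be shown in a minimal NumPy sketch of one hidden layer (illustrative only; it is not the papers' wave-hindcasting model).

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(64, 3))                            # toy inputs
      y = (X.sum(axis=1, keepdims=True) > 0).astype(float)    # toy targets

      W1, b1 = rng.normal(size=(3, 8)) * 0.5, np.zeros(8)
      W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)
      sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

      for epoch in range(500):
          h = sigmoid(X @ W1 + b1)                  # forward pass, hidden layer
          out = sigmoid(h @ W2 + b2)                # forward pass, output layer
          d_out = (out - y) * out * (1 - out)       # chain rule at the output
          d_h = (d_out @ W2.T) * h * (1 - h)        # chain rule pushed back one layer
          W2 -= 0.1 * h.T @ d_out;  b2 -= 0.1 * d_out.sum(axis=0)
          W1 -= 0.1 * X.T @ d_h;    b1 -= 0.1 * d_h.sum(axis=0)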

  14. Gait features analysis using artificial neural networks - testing the footwear effect.

    Science.gov (United States)

    Wang, Jikun; Zielińska, Teresa

    2017-01-01

    The aim of this paper is to provide methods for the automatic detection of differences in gait features depending on footwear. Artificial neural networks were applied in the study. The gait data were recorded during walks with different footwear for testing and validation of the proposed method. The gait properties were analyzed considering EMG (electromyography) signals and using two types of artificial neural networks: the learning vector quantization (LVQ) classifying network, and the clustering competitive network. The obtained classification and clustering results were discussed. For comparative studies, velocities and accelerations of the leg joint trajectories were used. The features indicated by the neural networks were compared with the conclusions formulated by analyzing the above-mentioned trajectories for the ankle and knee joints. The matching between experimentally recorded joint trajectories and the results given by the neural networks was studied. It was indicated which muscles are most influenced by the footwear, and conclusions were drawn on the relation between footwear type and muscle work.
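
    A minimal LVQ1 sketch (illustrative only, with synthetic features rather than the authors' EMG data): each class keeps one prototype that is pulled towards correctly classified samples and pushed away from misclassified ones.

      import numpy as np

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])  # toy gait features
      y = np.array([0] * 40 + [1] * 40)                                      # footwear A / footwear B

      prototypes = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
      labels = np.array([0, 1])

      for epoch in range(20):
          for xi, yi in zip(X, y):
              k = np.argmin(((prototypes - xi) ** 2).sum(axis=1))     # nearest prototype
              step = 0.05 * (xi - prototypes[k])
              prototypes[k] += step if labels[k] == yi else -step     # LVQ1 update rule

      pred = labels[((X[:, None, :] - prototypes) ** 2).sum(-1).argmin(1)]
      print("training accuracy:", (pred == y).mean())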

  15. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cell classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements, but rather are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance, and establish the concept of Artificial Neuron-Glia Networks, which represents a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  16. Fin-and-tube condenser performance evaluation using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Ling-Xiao [Institute of Refrigeration and Cryogenics, Shanghai Jiaotong University, Shanghai 200240 (China); Zhang, Chun-Lu [China R and D Center, Carrier Corporation, No. 3239 Shen Jiang Road, Shanghai 201206 (China)

    2010-05-15

    The paper presents a neural network approach to performance evaluation of the fin-and-tube air-cooled condensers which are widely used in air-conditioning and refrigeration systems. Inputs of the neural network include refrigerant and air flow rates, refrigerant inlet temperature and saturated temperature, and entering air dry-bulb temperature. Outputs of the neural network consist of the heating capacity and the pressure drops on both the refrigerant and air sides. The multi-input multi-output (MIMO) neural network is separated into multi-input single-output (MISO) neural networks for training. Afterwards, the trained MISO neural networks are combined into a MIMO neural network, which indicates that the number of training data sets is determined by the biggest MISO neural network, not the whole MIMO network. Compared with a validated first-principle model, the standard deviations of the neural network models are less than 1.9%, and all errors fall within ±5%. (author)
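
    The MISO-to-MIMO decomposition can be sketched as follows with scikit-learn on synthetic data (illustrative layer sizes, not the paper's networks): one single-output network is trained per condenser output, and their predictions are stacked into one multi-output model.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.random((200, 5))                                           # synthetic operating conditions
      Y = np.column_stack([X.sum(1), X[:, 0] * X[:, 1], X[:, 2] ** 2])   # three toy outputs

      # One MISO network per output column.
      miso_models = [
          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, Y[:, j])
          for j in range(Y.shape[1])
      ]

      def mimo_predict(x_new):
          # The trained MISO networks combined act as a single MIMO predictor.
          return np.column_stack([m.predict(x_new) for m in miso_models])

      print(mimo_predict(X[:3]))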

  17. Sea level forecasts using neural networks

    Science.gov (United States)

    Röske, Frank

    1997-03-01

    In this paper, a new method for predicting the sea level employing a neural network approach is introduced. It was designed to improve the prediction of the sea level along the German North Sea Coast under standard conditions. The sea level at any given time depends upon the tides as well as meteorological and oceanographic factors, such as the winds and external surges induced by air pressure. Since tidal predictions are already sufficiently accurate, they have been subtracted from the observed sea levels. The differences will be predicted up to 18 hours in advance. In this paper, the differences are called anomalies. The prediction of the sea level each hour is distinguished from its predictions at the times of high and low tide. For this study, Cuxhaven was selected as a reference site. The predictions made using neural networks were compared for accuracy with the prognoses prepared using six models: two hydrodynamic models, a statistical model, a nearest neighbor model, which is based on analogies, the persistence model, and the verbal forecasts that are broadcast and kept on record by the Sea Level Forecast Service of the Federal Maritime and Hydrography Agency (BSH) in Hamburg. Predictions were calculated for the year 1993 and compared with the actual levels measured. Artificial neural networks are capable of learning. By applying them to the prediction of sea levels, learning from past events has been attempted. It was also attempted to make the experiences of expert forecasters objective. Instead of using the wide-spread back-propagation networks, the self-organizing feature map of Kohonen, or “Kohonen network”, was applied. The fundamental principle of this network is the transformation of the signal similarity into the neighborhood of the neurons while preserving the topology of the signal space. The self-organization procedure of Kohonen networks can be visualized. To make predictions, these networks have been subdivided into a part describing the

  18. Prototype-Incorporated Emotional Neural Network.

    Science.gov (United States)

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-08-15

    Artificial neural networks (ANNs) aim to simulate the biological neural activities. Interestingly, many "engineering" prospects in ANN have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.

  19. On sparsely connected optimal neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V. [Los Alamos National Lab., NM (United States); Draghici, S. [Wayne State Univ., Detroit, MI (United States)

    1997-10-01

    This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-ins. In order to estimate the area (A) and the delay (T) of such networks, the following cost functions will be used: (i) the connectivity and the number of bits for representing the weights and thresholds--for good estimates of the area; and (ii) the fan-ins and the length of the wires--for good approximations of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon's decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in 2 neural networks. They will generalize the result presented there to arbitrary fan-in, and prove that the size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-ins will be suggested for F_{n,m} functions.

  20. Artificial neural network intelligent method for prediction

    Science.gov (United States)

    Trifonov, Roumen; Yoshinov, Radoslav; Pavlova, Galya; Tsochev, Georgi

    2017-09-01

    Accounting and financial classification and prediction problems are highly challenging, and researchers use different methods to solve them. Methods and instruments for short-term prediction of financial operations using artificial neural networks are considered. The methods used for prediction of financial data, as well as the developed forecasting system with a neural network, are described in the paper. The architecture of a neural network that uses four different technical indicators, which are based on the raw data, and the current day of the week is presented. The network developed is used for forecasting the movement of stock prices one day ahead and consists of an input layer, one hidden layer and an output layer. The training method is the error back-propagation algorithm. The main advantage of the developed system is self-determination of the optimal topology of the neural network, due to which it becomes flexible and more precise. The proposed system with a neural network is universal and can be applied to various financial instruments using only basic technical indicators as input data.

  1. Estimating Conditional Distributions by Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1998-01-01

    Neural Networks for estimating conditional distributions and their associated quantiles are investigated in this paper. A basic network structure is developed on the basis of kernel estimation theory, and the consistency property is considered from a mild set of assumptions. A number of applications...

  2. Medical Text Classification using Convolutional Neural Networks

    OpenAIRE

    Hughes, Mark; Li, Irene; Kotoulas, Spyros; Suzumura, Toyotaro

    2017-01-01

    We present an approach to automatically classify clinical text at a sentence level. We are using deep convolutional neural networks to represent complex features. We train the network on a dataset providing a broad categorization of health information. Through a detailed evaluation, we demonstrate that our method outperforms several approaches widely used in natural language processing tasks by about 15%.

  3. Medical Text Classification Using Convolutional Neural Networks.

    Science.gov (United States)

    Hughes, Mark; Li, Irene; Kotoulas, Spyros; Suzumura, Toyotaro

    2017-01-01

    We present an approach to automatically classify clinical text at a sentence level. We are using deep convolutional neural networks to represent complex features. We train the network on a dataset providing a broad categorization of health information. Through a detailed evaluation, we demonstrate that our method outperforms several approaches widely used in natural language processing tasks by about 15%.

  4. Artificial Neural Networks and Instructional Technology.

    Science.gov (United States)

    Carlson, Patricia A.

    1991-01-01

    Artificial neural networks (ANN), part of artificial intelligence, are discussed. Such networks are fed sample cases (training sets), learn how to recognize patterns in the sample data, and use this experience in handling new cases. Two cognitive roles for ANNs (intelligent filters and spreading, associative memories) are examined. Prototypes…

  5. Visual Servoing from Deep Neural Networks

    OpenAIRE

    Bateux, Quentin; Marchand, Eric; Leitner, Jürgen; Chaumette, Francois; Corke, Peter

    2017-01-01

    We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned using this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing c...

  6. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present...... a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We...

  7. Electronic device aspects of neural network memories

    Science.gov (United States)

    Lambe, J.; Moopenn, A.; Thakoor, A. P.

    1985-01-01

    The basic issues related to the electronic implementation of the neural network model (NNM) for content addressable memories are examined. A brief introduction to the principles of the NNM is followed by an analysis of the information storage of the neural network in the form of a binary connection matrix and the recall capability of such matrix memories based on a hardware simulation study. In addition, materials and device architecture issues involved in the future realization of such networks in VLSI-compatible ultrahigh-density memories are considered. A possible space application of such devices would be in the area of large-scale information storage without mechanical devices.

  8. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions, which can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results of Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in the classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  9. Information Propagation in Clustered Multilayer Networks

    CERN Document Server

    Zhuang, Yong

    2015-01-01

    In today's world, individuals interact with each other in more complicated patterns than ever. Some individuals engage through online social networks (e.g., Facebook, Twitter), while some communicate only through conventional ways (e.g., face-to-face). Therefore, understanding the dynamics of information propagation among humans calls for a multi-layer network model where an online social network is conjoined with a physical network. In this work, we initiate a study of information diffusion in a clustered multi-layer network model, where all constituent layers are random networks with high clustering. We assume that information propagates according to the SIR model and with different information transmissibility across the networks. We give results for the conditions, probability, and size of information epidemics, i.e., cases where information starts from a single individual and reaches a positive fraction of the population. We show that increasing the level of clustering in either one of the layers increas...

  10. Neural network optimization, components, and design selection

    Science.gov (United States)

    Weller, Scott W.

    1990-07-01

    Neural Networks are part of a revived technology which has received a lot of hype in recent years. As is apt to happen in any hyped technology, jargon and predictions make its assimilation and application difficult. Nevertheless, Neural Networks have found use in a number of areas, working on non-trivial and noncontrived problems. For example, one net has been trained to "read", translating English text into phoneme sequences. Other applications of Neural Networks include database manipulation and the solving of routing and classification types of optimization problems. Neural Networks are constructed from neurons, which in electronics or software attempt to model, but are not constrained by, the real thing, i.e., neurons in our gray matter. Neurons are simple processing units connected to many other neurons over pathways which modify the incoming signals. A single synthetic neuron typically sums its weighted inputs, runs this sum through a non-linear function, and produces an output. In the brain, neurons are connected in a complex topology: in hardware/software the topology is typically much simpler, with neurons lying side by side, forming layers of neurons which connect to the layer of neurons which receive their outputs. This simplistic model is much easier to construct than the real thing, and yet can solve real problems. The information in a network, or its "memory", is completely contained in the weights on the connections from one neuron to another. Establishing these weights is called "training" the network. Some networks are trained by design -- once constructed no further learning takes place. Other types of networks require iterative training once wired up, but are not trainable once taught. Still other types of networks can continue to learn after initial construction. The main benefit of using Neural Networks is their ability to work with conflicting or incomplete ("fuzzy") data sets. This ability and its usefulness will become evident in the following
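
    The summing neuron described above takes only a few lines; the sketch below is a generic illustration with arbitrary numbers.

      import numpy as np

      def neuron(inputs, weights, bias):
          # Weighted sum of inputs passed through a non-linear function.
          return np.tanh(np.dot(inputs, weights) + bias)

      def layer(inputs, weight_matrix, biases):
          # A layer is many such neurons side by side, sharing the same inputs.
          return np.tanh(inputs @ weight_matrix + biases)

      x = np.array([0.2, -0.7, 1.0])
      print(neuron(x, np.array([0.5, 0.1, -0.3]), bias=0.05))
      print(layer(x, np.random.rand(3, 4), np.zeros(4)))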

  11. Evaluating Functional Autocorrelation within Spatially Distributed Neural Processing Networks*

    Science.gov (United States)

    Derado, Gordana; Bowman, F. Dubois; Ely, Timothy D.; Kilts, Clinton D.

    2010-01-01

    Data-driven statistical approaches, such as cluster analysis or independent component analysis, applied to in vivo functional neuroimaging data help to identify neural processing networks that exhibit similar task-related or resting-state patterns of activity. Ideally, the measured brain activity for voxels within such networks should exhibit high autocorrelation. An important limitation is that the algorithms do not typically quantify or statistically test the strength or nature of the within-network relatedness between voxels. To extend the results given by such data-driven analyses, we propose the use of Moran’s I statistic to measure the degree of functional autocorrelation within identified neural processing networks and to evaluate the statistical significance of the observed associations. We adapt the conventional definition of Moran’s I, for applicability to neuroimaging analyses, by defining the global autocorrelation index using network-based neighborhoods. Also, we compute network-specific contributions to the overall autocorrelation. We present results from a bootstrap analysis that provide empirical support for the use of our hypothesis testing framework. We illustrate our methodology using positron emission tomography (PET) data from a study that examines the neural representation of working memory among individuals with schizophrenia and functional magnetic resonance imaging (fMRI) data from a study of depression. PMID:21643436
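
    A hedged sketch of a network-based Moran's I on toy data: the weight w_ij is set to 1 when voxels i and j belong to the same identified network, a simplified stand-in for the neighborhood definition proposed in the paper.

      import numpy as np

      def morans_i(x, w):
          # Global Moran's I: (N / sum of weights) * weighted covariance / variance.
          z = np.asarray(x, dtype=float) - np.mean(x)
          return len(z) / w.sum() * (w * np.outer(z, z)).sum() / (z ** 2).sum()

      # Six toy voxels: the first three form one network, the last three another.
      activity = np.array([2.1, 2.3, 1.9, -1.0, -1.2, -0.8])
      membership = np.array([0, 0, 0, 1, 1, 1])
      W = (membership[:, None] == membership[None, :]).astype(float)
      np.fill_diagonal(W, 0.0)          # exclude self-neighbors

      print(morans_i(activity, W))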

  12. Neutron spectrometry with artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Rodriguez, J.M.; Mercado S, G.A. [Universidad Autonoma de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico); Iniguez de la Torre Bayo, M.P. [Universidad de Valladolid, Valladolid (Spain); Barquero, R. [Hospital Universitario Rio Hortega, Valladolid (Spain); Arteaga A, T. [Envases de Zacatecas, S.A. de C.V., Zacatecas (Mexico)]. e-mail: rvega@cantera.reduaz.mx

    2005-07-01

    An artificial neural network has been designed to obtain the neutron spectra from the Bonner spheres spectrometer's count rates. The neural network was trained using 129 neutron spectra. These include isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra from mathematical functions as well as few-energy-group and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and the respective spectrum was used as output during neural network training. After training, the network was tested with the Bonner spheres count rates produced by a set of neutron spectra. This set contains data used during network training as well as data not used. Training and testing were carried out in the Matlab program. To verify the network unfolding performance, the original and unfolded spectra were compared using the χ²-test and the total fluence ratios. The use of Artificial Neural Networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)

  13. Neutron spectrometry using artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Vega-Carrillo, Hector Rene [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Unidad Academica de Ing. Electrica, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Unidad Academica de Matematicas, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]. E-mail: fermineutron@yahoo.com; Martin Hernandez-Davila, Victor [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Unidad Academica de Ing. Electrica, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico); Manzanares-Acuna, Eduardo [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico); Mercado Sanchez, Gema A. [Unidad Academica de Matematicas, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico); Pilar Iniguez de la Torre, Maria [Depto. Fisica Teorica, Molecular y Nuclear, Universidad de Valladolid, Valladolid (Spain); Barquero, Raquel [Hospital Universitario Rio Hortega, Valladolid (Spain); Palacios, Francisco; Mendez Villafane, Roberto [Depto. Fisica Teorica, Molecular y Nuclear, Universidad de Valladolid, Valladolid (Spain)]|[Universidad Europea Miguel de Cervantes, C. Padre Julio Chevalier No. 2, 47012 Valladolid (Spain); Arteaga Arteaga, Tarcicio [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Envases de Zacatecas, SA de CV, Parque Industrial de Calera de Victor Rosales, Zac. (Mexico); Manuel Ortiz Rodriguez, Jose [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)]|[Unidad Academica de Ing. Electrica, Universidad Autonoma de Zacatecas, Apdo. Postal 336, 98000 Zacatecas, Zac. (Mexico)

    2006-04-15

    An artificial neural network has been designed to obtain neutron spectra from Bonner spheres spectrometer count rates. The neural network was trained using 129 neutron spectra. These include spectra from isotopic neutron sources; reference and operational spectra from accelerators and nuclear reactors, spectra based on mathematical functions as well as few energy groups and monoenergetic spectra. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. The re-binned spectra and the UTA4 response matrix were used to calculate the expected count rates in the Bonner spheres spectrometer. These count rates were used as input and their respective spectra were used as output during the neural network training. After training, the network was tested with the Bonner spheres count rates produced by folding a set of neutron spectra with the response matrix. This set contains data used during network training as well as data not used. Training and testing were carried out using the Matlab® program. To verify the network unfolding performance, the original and unfolded spectra were compared using the root mean square error. The use of artificial neural networks to unfold neutron spectra in neutron spectrometry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem.

  14. Competitive cluster growth in complex networks

    Science.gov (United States)

    Moreira, André A.; Paula, Demétrius R.; Costa Filho, Raimundo N.; Andrade, José S., Jr.

    2006-06-01

    In this work we propose an idealized model for competitive cluster growth in complex networks. Each cluster can be thought of as a fraction of a community that shares some common opinion. Our results show that the cluster size distribution depends on the particular choice for the topology of the network of contacts among the agents. As an application, we show that the cluster size distributions obtained when the growth process is performed on hierarchical networks, e.g., the Apollonian network, have a scaling form similar to that observed for the distribution of the number of votes in an electoral process. We suggest that this similarity may be due to the fact that the social networks involved in the electoral process may also possess an underlying hierarchical structure.
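
    One possible toy implementation of this kind of competitive growth (a sketch of the general idea, not the authors' exact model): clusters start from randomly chosen seed nodes on a scale-free contact network and repeatedly claim unclaimed neighbours, so the resulting size distribution is shaped by the topology.

```python
# Toy competitive cluster growth on a complex network (illustrative only).
import random
import networkx as nx

random.seed(1)
G = nx.barabasi_albert_graph(n=2000, m=2)        # scale-free contact network
n_clusters = 20

# Each cluster starts from a distinct random seed node.
seeds = random.sample(list(G.nodes), n_clusters)
label = {node: None for node in G.nodes}
members = []
for c, s in enumerate(seeds):
    label[s] = c
    members.append([s])

# Clusters take turns claiming an unlabeled neighbour of one of their nodes.
active = set(range(n_clusters))
while active:
    for c in list(active):
        candidates = [v for u in members[c] for v in G.neighbors(u) if label[v] is None]
        if not candidates:
            active.discard(c)                     # cluster can no longer grow
            continue
        v = random.choice(candidates)
        label[v] = c
        members[c].append(v)

sizes = sorted(len(m) for m in members)
print("largest cluster sizes:", sizes[-5:])
```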

  15. Antagonistic neural networks underlying differentiated leadership roles

    OpenAIRE

    Richard Eleftherios Boyatzis; Kylie eRochford; Anthony Ian Jack

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task oriented and socio-emotional oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks -- the Task Positive Network (TPN) and the Default Mode Network (DMN). Neural activity in ...

  16. Representations in neural network based empirical potentials

    Science.gov (United States)

    Cubuk, Ekin D.; Malone, Brad D.; Onat, Berk; Waterland, Amos; Kaxiras, Efthimios

    2017-07-01

    Many structural and mechanical properties of crystals, glasses, and biological macromolecules can be modeled from the local interactions between atoms. These interactions ultimately derive from the quantum nature of electrons, which can be prohibitively expensive to simulate. Machine learning has the potential to revolutionize materials modeling due to its ability to efficiently approximate complex functions. For example, neural networks can be trained to reproduce results of density functional theory calculations at a much lower cost. However, how neural networks reach their predictions is not well understood, which has led to them being used as a "black box" tool. This lack of understanding is not desirable especially for applications of neural networks in scientific inquiry. We argue that machine learning models trained on physical systems can be used as more than just approximations since they had to "learn" physical concepts in order to reproduce the labels they were trained on. We use dimensionality reduction techniques to study in detail the representation of silicon atoms at different stages in a neural network, which provides insight into how a neural network learns to model atomic interactions.

  17. Domestic Heat Demand Prediction using Neural Networks

    NARCIS (Netherlands)

    Bakker, Vincent; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2008-01-01

    By combining a cluster of microCHP appliances, a virtual power plant can be formed. To use such a virtual power plant, a good heat demand prediction of individual households is needed since the heat demand determines the production capacity. In this paper we present the results of using neural

  18. Using Self-Organizing Neural Network Map Combined with Ward's Clustering Algorithm for Visualization of Students' Cognitive Structural Models about Aliveness Concept.

    Science.gov (United States)

    Yorek, Nurettin; Ugulu, Ilker; Aydin, Halil

    2016-01-01

    We propose an approach to clustering and visualization of students' cognitive structural models. We use the self-organizing map (SOM) combined with Ward's clustering to conduct cluster analysis. In the study carried out on 100 subjects, a conceptual understanding test consisting of open-ended questions was used as a data collection tool. The results of analyses indicated that students constructed the aliveness concept by associating it predominantly with humans. Motion was the term most frequently associated with the aliveness concept. The results suggest that the aliveness concept has been constructed using anthropocentric and animistic cognitive structures. In the next step, we used the data obtained from the conceptual understanding test for training the SOM. Consequently, we propose a visualization method about cognitive structure of the aliveness concept.
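
    A sketch of the SOM-plus-Ward pipeline, assuming the MiniSom package and scikit-learn and using random vectors in place of the coded test responses (the 6x6 map size and four clusters are illustrative choices): the SOM is trained first, and Ward's hierarchical clustering is then applied to the SOM codebook vectors.

```python
# SOM followed by Ward's clustering of the SOM codebook (illustrative data).
import numpy as np
from minisom import MiniSom
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
data = rng.random((100, 12))          # stand-in for 100 students x 12 coded responses

# 1) Train a 6x6 self-organizing map on the data.
som = MiniSom(6, 6, input_len=data.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(data, num_iteration=2000)

# 2) Apply Ward's clustering to the SOM codebook vectors.
codebook = som.get_weights().reshape(-1, data.shape[1])   # 36 prototype vectors
ward = AgglomerativeClustering(n_clusters=4, linkage="ward")
node_cluster = ward.fit_predict(codebook)

# 3) Map each student to the cluster of their best-matching SOM node.
def cluster_of(sample):
    i, j = som.winner(sample)
    return node_cluster[i * 6 + j]

labels = np.array([cluster_of(x) for x in data])
print("students per cluster:", np.bincount(labels))
```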

  19. Community structure of complex networks based on continuous neural network

    Science.gov (United States)

    Dai, Ting-ting; Shan, Chang-ji; Dong, Yan-shou

    2017-09-01

    As a new subject, research on complex networks has attracted the attention of researchers from different disciplines. Community structure is one of the key structures of complex networks, so it is a very important task to analyze the community structure of complex networks accurately. In this paper, we study the problem of extracting the community structure of complex networks and propose a continuous neural network (CNN) algorithm. It is proved that, for any given initial value, the continuous neural network algorithm converges to the eigenvector of the maximum eigenvalue of the network modularity matrix. Therefore, a two-community structure can be obtained from the signs of the entries of the state to which the network converges.
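
    The two-community split referred to above can be illustrated directly: the leading eigenvector of the modularity matrix is computed (here by an ordinary eigendecomposition rather than the proposed continuous neural network), and nodes are assigned to communities by the signs of their eigenvector entries.

```python
# Two-community split from the leading eigenvector of the modularity matrix.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                     # small benchmark network
A = nx.to_numpy_array(G)
k = A.sum(axis=1)                              # node degrees
two_m = k.sum()                                # 2 * number of edges

# Modularity matrix B = A - k k^T / 2m.
B = A - np.outer(k, k) / two_m

# Leading eigenvector (the proposed CNN converges to this vector).
eigvals, eigvecs = np.linalg.eigh(B)
leading = eigvecs[:, np.argmax(eigvals)]

# Community assignment by the sign of the eigenvector entries.
communities = (leading > 0).astype(int)
print("community sizes:", np.bincount(communities))
```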

  20. Flexible body control using neural networks

    Science.gov (United States)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of Control Structures Interaction suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, the neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems, and should be further evaluated.

  1. Identification and Position Control of Marine Helm using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Hui ZHU

    2008-02-01

    Full Text Available If nonlinearities such as saturation of the amplifier gain and motor torque, gear backlash, and shaft compliances - just to name a few - are considered in the position control system of a marine helm, traditional control methods are no longer sufficient to improve the performance of the system. In this paper an alternative approach to traditional control methods - a neural network reference controller - is proposed to establish adaptive control of the position of the marine helm so that the controlled variable reaches the command position. This neural network controller comprises two neural networks. One is the plant model network used to identify the nonlinear system and the other is the controller network used to control the output to follow the reference model. The experimental results demonstrate that this adaptive neural network reference controller has much better control performance than is obtained with traditional controllers.

  2. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  3. Implementing Signature Neural Networks with Spiking Neurons

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm—i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data—to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the

  4. Training Deep Spiking Neural Networks Using Backpropagation.

    Science.gov (United States)

    Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael

    2016-01-01

    Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to that of conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
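
    A minimal surrogate-gradient-style sketch of backpropagation through spikes in PyTorch, not the authors' exact formulation: the hard spike threshold is kept in the forward pass, while the backward pass substitutes a smooth derivative so errors can flow through the membrane potentials.

```python
# Sketch: backpropagating through spikes via a surrogate gradient (PyTorch).
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                       # hard threshold on membrane potential

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + torch.abs(v)) ** 2  # smooth stand-in for the spike derivative
        return grad_output * surrogate

spike = SurrogateSpike.apply

# One leaky-integrate-and-fire layer unrolled over time (random data).
torch.manual_seed(0)
T, batch, n_in, n_out = 20, 8, 100, 10
w = (0.1 * torch.randn(n_in, n_out)).requires_grad_()
x = (torch.rand(T, batch, n_in) < 0.1).float()       # random input spike trains
target = torch.randint(0, n_out, (batch,))

v = torch.zeros(batch, n_out)
rates = torch.zeros(batch, n_out)
for t in range(T):
    v = 0.9 * v + x[t] @ w                           # leaky integration of input current
    s = spike(v - 1.0)                               # spike when potential crosses threshold
    v = v * (1.0 - s.detach())                       # reset after a spike
    rates = rates + s

loss = torch.nn.functional.cross_entropy(rates / T, target)
loss.backward()                                      # gradients flow via the surrogate
print("mean |grad| of weights:", w.grad.abs().mean().item())
```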

  5. Memory-optimal neural network approximation

    Science.gov (United States)

    Bölcskei, Helmut; Grohs, Philipp; Kutyniok, Gitta; Petersen, Philipp

    2017-08-01

    We summarize the main results of a recent theory, developed by the authors, establishing fundamental lower bounds on the connectivity and memory requirements of deep neural networks as a function of the complexity of the function class to be approximated by the network. These bounds are shown to be achievable. Specifically, all function classes that are optimally approximated by a general class of representation systems, so-called affine systems, can be approximated by deep neural networks with minimal connectivity and memory requirements. Affine systems encompass a wealth of representation systems from applied harmonic analysis such as wavelets, shearlets, ridgelets, α-shearlets, and more generally α-molecules. This result elucidates a remarkable universality property of deep neural networks and shows that they achieve the optimum approximation properties of all affine systems combined. Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm generates deep neural networks which provide close-to-optimal approximation rates at minimal connectivity. Moreover, stochastic gradient descent is found to actually learn approximations that are sparse in the representation system optimally sparsifying the function class the network is trained on.

  6. Neural networks for sign language translation

    Science.gov (United States)

    Wilson, Beth J.; Anspach, Gretel

    1993-09-01

    A neural network is used to extract relevant features of sign language from video images of a person communicating in American Sign Language or Signed English. The key features are hand motion, hand location with respect to the body, and handshape. A modular hybrid design is under way to apply various techniques, including neural networks, in the development of a translation system that will facilitate communication between deaf and hearing people. One of the neural networks described here is used to classify video images of handshapes into their linguistic counterpart in American Sign Language. The video image is preprocessed to yield Fourier descriptors that encode the shape of the hand silhouette. These descriptors are then used as inputs to a neural network that classifies their shapes. The network is trained with various examples from different signers and is tested with new images from new signers. The results have shown that for coarse handshape classes, the network is invariant to the type of camera used to film the various signers and to the segmentation technique.
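
    The Fourier-descriptor preprocessing mentioned above can be sketched as follows (a generic construction, not the authors' exact pipeline): the silhouette contour is treated as a sequence of complex numbers, its FFT is taken, and the coefficients are normalised so the descriptor is invariant to position, scale, rotation, and starting point.

```python
# Fourier descriptors of a closed contour (generic sketch).
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=16):
    """contour_xy: (N, 2) array of boundary points of a silhouette."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # contour as a complex sequence
    Z = np.fft.fft(z)
    Z[0] = 0.0                                     # drop DC term -> translation invariance
    Z = Z / np.abs(Z[1])                           # scale by first harmonic -> scale invariance
    mags = np.abs(Z)                               # magnitudes -> rotation/start-point invariance
    return mags[1:n_coeffs + 1]

# Example: a noisy circle as a stand-in for a hand silhouette contour.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.stack([np.cos(t), np.sin(t)], axis=1) + 0.01 * rng.normal(size=(200, 2))
print(fourier_descriptors(contour)[:5])
```

    Descriptors of this kind would then serve as the inputs to the handshape-classifying network.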

  7. Cluster Analysis Based on Bipartite Network

    Directory of Open Access Journals (Sweden)

    Dawei Zhang

    2014-01-01

    Full Text Available Clustering data has a wide range of applications and has attracted considerable attention in data mining and artificial intelligence. However, it is difficult to find a set of clusters that best fits natural partitions without any class information. In this paper, a method for detecting the optimal cluster number is proposed. The optimal cluster number can be obtained by the proposal while partitioning the data into clusters by the FCM (fuzzy c-means) algorithm. It overcomes the drawback of the FCM algorithm, which needs to define the cluster number c in advance. The method works by converting the fuzzy cluster result into a weighted bipartite network; the optimal cluster number can then be detected by the improved bipartite modularity. The experimental results on artificial and real data sets show the validity of the proposed method.
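
    The clustering step can be sketched with a plain fuzzy c-means implementation (the conversion to a weighted bipartite network and the modularity-based selection of c are not reproduced here):

```python
# Plain fuzzy c-means (FCM) on synthetic 2-D data (illustrative only).
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)               # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))          # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2)) for loc in ((0, 0), (3, 3), (0, 3))])
centers, U = fuzzy_c_means(X, c=3)
print("points per hard cluster:", np.bincount(U.argmax(axis=1)))
```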

  8. Development of Energy Efficient Clustering Protocol in Wireless Sensor Network Using Neuro-Fuzzy Approach.

    Science.gov (United States)

    Julie, E Golden; Selvi, S Tamil

    2016-01-01

    Wireless sensor networks (WSNs) consist of sensor nodes with limited processing capability and limited nonrechargeable battery power. Energy consumption in a WSN is a significant issue for network lifetime. It is essential to develop an energy aware clustering protocol in WSN to reduce energy consumption and increase network lifetime. In this paper, a neuro-fuzzy energy aware clustering scheme (NFEACS) is proposed to form optimum and energy aware clusters. NFEACS consists of two parts: a fuzzy subsystem and a neural network system, which together achieve energy efficiency in forming clusters and cluster heads in the WSN. NFEACS uses a neural network that provides an effective training set, related to the energy and received signal strength of all nodes, to estimate the expected energy of tentative cluster heads. Sensor nodes with higher energy are trained with the center location of the base station to select energy aware cluster heads. Fuzzy rules are used in the fuzzy logic part, whose inputs are used to form clusters. NFEACS is designed for WSNs and handles node mobility. The proposed scheme NFEACS is compared with related clustering schemes, cluster-head election mechanism using fuzzy logic, and energy aware fuzzy unequal clustering. The experiment results show that NFEACS performs better than the other related schemes.

  9. Development of Energy Efficient Clustering Protocol in Wireless Sensor Network Using Neuro-Fuzzy Approach

    Directory of Open Access Journals (Sweden)

    E. Golden Julie

    2016-01-01

    Full Text Available Wireless sensor networks (WSNs) consist of sensor nodes with limited processing capability and limited nonrechargeable battery power. Energy consumption in a WSN is a significant issue for network lifetime. It is essential to develop an energy aware clustering protocol in WSN to reduce energy consumption and increase network lifetime. In this paper, a neuro-fuzzy energy aware clustering scheme (NFEACS) is proposed to form optimum and energy aware clusters. NFEACS consists of two parts: a fuzzy subsystem and a neural network system, which together achieve energy efficiency in forming clusters and cluster heads in the WSN. NFEACS uses a neural network that provides an effective training set, related to the energy and received signal strength of all nodes, to estimate the expected energy of tentative cluster heads. Sensor nodes with higher energy are trained with the center location of the base station to select energy aware cluster heads. Fuzzy rules are used in the fuzzy logic part, whose inputs are used to form clusters. NFEACS is designed for WSNs and handles node mobility. The proposed scheme NFEACS is compared with related clustering schemes, cluster-head election mechanism using fuzzy logic, and energy aware fuzzy unequal clustering. The experiment results show that NFEACS performs better than the other related schemes.

  10. Development of Energy Efficient Clustering Protocol in Wireless Sensor Network Using Neuro-Fuzzy Approach

    Science.gov (United States)

    Julie, E. Golden; Selvi, S. Tamil

    2016-01-01

    Wireless sensor networks (WSNs) consist of sensor nodes with limited processing capability and limited nonrechargeable battery power. Energy consumption in a WSN is a significant issue for network lifetime. It is essential to develop an energy aware clustering protocol in WSN to reduce energy consumption and increase network lifetime. In this paper, a neuro-fuzzy energy aware clustering scheme (NFEACS) is proposed to form optimum and energy aware clusters. NFEACS consists of two parts: a fuzzy subsystem and a neural network system, which together achieve energy efficiency in forming clusters and cluster heads in the WSN. NFEACS uses a neural network that provides an effective training set, related to the energy and received signal strength of all nodes, to estimate the expected energy of tentative cluster heads. Sensor nodes with higher energy are trained with the center location of the base station to select energy aware cluster heads. Fuzzy rules are used in the fuzzy logic part, whose inputs are used to form clusters. NFEACS is designed for WSNs and handles node mobility. The proposed scheme NFEACS is compared with related clustering schemes, cluster-head election mechanism using fuzzy logic, and energy aware fuzzy unequal clustering. The experiment results show that NFEACS performs better than the other related schemes. PMID:26881269

  11. Equivalence of Conventional and Modified Network of Generalized Neural Elements

    Directory of Open Access Journals (Sweden)

    E. V. Konovalov

    2016-01-01

    Full Text Available The article is devoted to the analysis of neural networks consisting of generalized neural elements. The first part of the article proposes a new neural network model — a modified network of generalized neural elements (MGNE-network). This network develops the model of the generalized neural element, whose formal description contains some flaws. In the model of the MGNE-network these drawbacks are overcome. The neural network is introduced all at once, without a preliminary description of the model of a single neural element and of the method of interaction of such elements. The description of the neural network mathematical model is simplified, which makes it relatively easy to construct a simulation model on its basis and to conduct numerical experiments. The model of the MGNE-network is universal, uniting properties of networks consisting of neurons-oscillators and neurons-detectors. In the second part of the article we prove the equivalence of the dynamics of the two considered neural networks: the network consisting of classical generalized neural elements, and the MGNE-network. We introduce the definition of equivalence in the functioning of the generalized neural element and the MGNE-network consisting of a single element. Then we introduce the definition of the equivalence of the dynamics of the two neural networks in general. The correlation of the different parameters of the two considered neural network models is determined. We discuss the issue of matching the initial conditions of the two considered neural network models. We prove the theorem about the equivalence of the dynamics of the two considered neural networks. This theorem allows us to apply all results previously obtained for networks consisting of classical generalized neural elements to the MGNE-network.

  12. Neural networks and particle physics

    CERN Document Server

    Peterson, Carsten

    1993-01-01

    1. Introduction: Structure of the Central Nervous System, Generics. 2. Feed-forward networks, Perceptrons, Function approximators. 3. Self-organisation, Feature Maps. 4. Feed-back networks, The Hopfield model, Optimization problems, Deformable templates, Graph bisection.

  13. Cotton genotypes selection through artificial neural networks.

    Science.gov (United States)

    Júnior, E G Silva; Cardoso, D B O; Reis, M C; Nascimento, A F O; Bortolin, D I; Martins, M R; Sousa, L B

    2017-09-27

    Breeding programs currently use statistical analysis to assist in the identification of superior genotypes at various stages of a cultivar's development. Unlike these analyses, the computational intelligence approach has been little explored in the genetic improvement of cotton. Thus, this study was carried out with the objective of presenting the use of artificial neural networks as auxiliary tools in cotton breeding for improved fiber quality. To demonstrate the applicability of this approach, the research was carried out using the evaluation data of 40 genotypes. In order to classify the genotypes for fiber quality, the artificial neural networks were trained with replicate data of 20 cotton genotypes evaluated in the 2013/14 and 2014/15 harvests, regarding fiber length, uniformity of length, fiber strength, micronaire index, elongation, short fiber index, maturity index, reflectance degree, and fiber quality index. This quality index was estimated by means of a weighted average of the score (1 to 5) determined for each HVI characteristic, according to industry standards. The artificial neural networks presented a high capacity for correct classification of the 20 selected genotypes based on the fiber quality index; when fiber length was used together with the short fiber index, fiber maturity, and micronaire index, the artificial neural networks presented better results than when only fiber length or the previous associations were used. It was also observed that submitting mean data of new genotypes to neural networks trained with replicate data provides better genotype classification results. The results obtained in the present study indicate that artificial neural networks have great potential to be used in the different stages of a cotton genetic improvement program aimed at improving the fiber quality of future cultivars.
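
    The classification step can be sketched with scikit-learn; the data below are synthetic placeholders standing in for the HVI fiber measurements (the feature combination is the one reported to work best, but the values and labels are not the study's dataset):

```python
# Sketch: classifying genotypes by fiber quality with a small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical replicate-level data: columns stand for fiber length, short fiber
# index, maturity index and micronaire.
X = rng.normal(size=(200, 4))
quality = X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] - 0.2 * X[:, 3]
y = (quality > np.median(quality)).astype(int)       # superior vs. inferior genotypes

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```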

  14. Neural network approaches for noisy language modeling.

    Science.gov (United States)

    Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid

    2013-11-01

    Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with the computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habits, and typing performance. In particular, these features are obvious in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory and applies neural networks to one of its specific applications, the typing stream. This paper experimentally uses a neural network approach to analyze disabled users' typing streams, both in general and in specific ways, to identify their typing behaviors and subsequently to make typing predictions and typing corrections. In this paper, a focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network (PNN) model are developed. A 38% first hitting rate (HR) and a 53% first-three HR in symbol prediction are obtained based on the analysis of a user's typing history through FTDNN language modeling, while the modeling results using the time gap prediction model and the PNN model demonstrate that the correction rates lie predominantly between 65% and 90% with the current testing samples, with 70% of all test scores above the basic correction rates. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool for analyzing the noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.
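
    The time-delay idea behind the FTDNN language model can be sketched with a sliding window over a typing stream (a generic illustration, not the authors' network): the previous few symbols form the input vector and the network predicts the next symbol.

```python
# Sketch: next-symbol prediction from a typing stream with a time-delay window.
import numpy as np
from sklearn.neural_network import MLPClassifier

text = "the quick brown fox jumps over the lazy dog " * 20   # stand-in typing stream
symbols = sorted(set(text))
index = {ch: i for i, ch in enumerate(symbols)}
delay = 5                                                    # number of delayed inputs

def one_hot(ch):
    v = np.zeros(len(symbols))
    v[index[ch]] = 1.0
    return v

# Each training example is the one-hot encoding of the previous `delay` symbols.
X = np.array([np.concatenate([one_hot(c) for c in text[i:i + delay]])
              for i in range(len(text) - delay)])
y = np.array([index[text[i + delay]] for i in range(len(text) - delay)])

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X, y)
print("training hit rate:", model.score(X, y))               # analogue of the first-hit rate
```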

  15. Regional frequency analysis using Growing Neural Gas network

    Science.gov (United States)

    Abdi, Amin; Hassanzadeh, Yousef; Ouarda, Taha B. M. J.

    2017-07-01

    The delineation of hydrologically homogeneous regions is an important issue in regional hydrological frequency analysis. In the present study, an application of the Growing Neural Gas (GNG) network for hydrological data clustering is presented. The GNG is an incremental and unsupervised neural network, which is able to adapt its structure during the training procedure without using prior knowledge of the size and shape of the network. In the GNG algorithm, the Minimum Description Length (MDL) measure is utilized as the cluster validity index for determining the optimal number of clusters (sub-regions). The capability of the proposed algorithm is illustrated by regionalizing drought severities for 40 synoptic weather stations in Iran. To fulfill this aim, first a clustering method is applied to form the sub-regions and then a heterogeneity measure is used to test the degree of heterogeneity of the delineated sub-regions. According to the MDL measure, and considering two different indices, namely CS and Davies-Bouldin (DB), in the GNG network, the entire study area is subdivided into two sub-regions located in the eastern and western sides of Iran. In order to evaluate the performance of the GNG algorithm, a number of other commonly used clustering methods, like K-means, fuzzy C-means, the self-organizing map and the Ward method, are utilized in this study. The results of the heterogeneity measure based on the L-moments approach reveal that only the GNG algorithm successfully yields homogeneous sub-regions in comparison to the other methods.

  16. Artificial neural network in cosmic landscape

    Science.gov (United States)

    Liu, Junyu

    2017-12-01

    In this paper we propose that artificial neural networks, the basis of machine learning, are useful for generating the inflationary landscape from a cosmological point of view. Traditional numerical simulations of a global cosmic landscape typically require exponential complexity when the number of fields is large. However, a basic application of artificial neural networks could solve the problem, based on the universal approximation theorem for the multilayer perceptron. A toy model of inflation with multiple light fields is investigated numerically as an example of such an application.

  17. Top tagging with deep neural networks [Vidyo

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Recent literature on deep neural networks for top tagging has focussed on image based techniques or multivariate approaches using high level jet substructure variables. Here, we take a sequential approach to this task by using an ordered sequence of energy deposits as training inputs. Unlike previous approaches, this strategy does not result in a loss of information during pixelization or the calculation of high level features. We also propose new preprocessing methods that do not alter key physical quantities such as jet mass. We compare the performance of this approach to standard tagging techniques and present results evaluating the robustness of the neural network to pileup.

  18. Automatic identification of species with neural networks.

    Science.gov (United States)

    Hernández-Serna, Andrés; Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification.

  19. Automatic identification of species with neural networks

    Directory of Open Access Journals (Sweden)

    Andrés Hernández-Serna

    2014-11-01

    Full Text Available A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification.

  20. Pulse image recognition using fuzzy neural network.

    Science.gov (United States)

    Xu, L S; Meng, Max Q -H; Wang, K Q

    2007-01-01

    The automatic recognition of pulse images is key to research on computerized pulse diagnosis. In order to automatically differentiate pulse patterns using small samples, a fuzzy neural network was designed to classify pulse images based on the knowledge of experts in traditional Chinese pulse diagnosis. The designed classifier can make hard and soft decisions for identifying 18 patterns of pulse images with an accuracy of 91%, which is better than the results achieved by a back-propagation neural network.

  1. Assessing Landslide Hazard Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, Farzad; Choobbasti, Asskar Janalizadeh; Barari, Amin

    2011-01-01

    failure" which is main concentration of the current research and "liquefaction failure". Shear failures along shear planes occur when the shear stress along the sliding surfaces exceed the effective shear strength. These slides have been referred to as landslide. An expert system based on artificial...... neural network has been developed for use in the stability evaluation of slopes under various geological conditions and engineering requirements. The Artificial neural network model of this research uses slope characteristics as input and leads to the output in form of the probability of failure...

  2. Neural networks advances and applications 2

    CERN Document Server

    Gelenbe, E

    1992-01-01

    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  3. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. Kernel windows created from the weights of the trained neural networks are used for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University, which contains quite a high degree of variability in expression, pose, and facial details.

  4. SAR ATR Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Tian Zhuangzhuang

    2016-06-01

    Full Text Available This study presents a new method of Synthetic Aperture Radar (SAR image target recognition based on a convolutional neural network. First, we introduce a class separability measure into the cost function to improve this network’s ability to distinguish between categories. Then, we extract SAR image features using the improved convolutional neural network and classify these features using a support vector machine. Experimental results using moving and stationary target acquisition and recognition SAR datasets prove the validity of this method.
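
    The two-stage pipeline (CNN feature extraction followed by an SVM classifier) can be sketched as follows, with random tensors standing in for SAR image chips and without the class-separability term added to the cost function:

```python
# Sketch: CNN feature extraction followed by an SVM classifier (random stand-in data).
import torch
import torch.nn as nn
from sklearn.svm import SVC

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):                        # returns a flat feature vector per image
        return self.features(x).flatten(1)

torch.manual_seed(0)
images = torch.rand(100, 1, 64, 64)              # stand-ins for SAR image chips
labels = torch.randint(0, 3, (100,)).numpy()     # stand-ins for three target classes

cnn = SmallCNN().eval()                          # in practice the CNN is trained first
with torch.no_grad():
    feats = cnn(images).numpy()

svm = SVC(kernel="rbf").fit(feats, labels)       # classify the extracted features
print("training accuracy:", svm.score(feats, labels))
```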

  5. Tumor diagnosis using the backpropagation neural network method

    Science.gov (United States)

    Ma, Lixing; Sukuta, Sydney; Bruch, Reinhard F.; Afanasyeva, Natalia I.; Looney, Carl G.

    1998-04-01

    For characterization of skin cancer, an artificial neural network method has been developed to diagnose normal tissue, benign tumor and melanoma. The pattern recognition is based on a three-layer neural network fuzzy learning system. In this study, the input neuron data set is the Fourier transform IR spectrum obtained by a new fiberoptic evanescent wave Fourier transform IR spectroscopy method in the range of 1480 to 1850 cm-1. Ten input features are extracted from the absorbance values in this region. A single hidden layer of neural nodes with sigmoid activation functions clusters the feature space into small subclasses, and the output nodes are separated into different nonconvex classes to permit nonlinear discrimination of disease states. The output is classified into three classes: normal tissue, benign tumor and melanoma. The results obtained from the neural network pattern recognition are shown to be consistent with traditional medical diagnosis. Input features have also been extracted from the absorbance spectra using chemical factor analysis. These abstract features or factors are also used in the classification.

  6. Exploiting network redundancy for low-cost neural network realizations.

    NARCIS (Netherlands)

    Keegstra, H; Jansen, WJ; Nijhuis, JAG; Spaanenburg, L; Stevens, H; Udding, JT

    1996-01-01

    A method is presented to optimize a trained neural network for physical realization styles. Target architectures are embedded microcontrollers or standard cell based ASIC designs. The approach exploits the redundancy in the network, required for successful training, to replace the synaptic weighting

  7. Removing Epistemological Bias From Empirical Observation of Neural Networks

    OpenAIRE

    Waldron, Ronan

    1994-01-01

    Also in Proceedings of the International Joint Conference on Neural Networks, Nagoya, Japan. This paper addresses the application of neural network research to a theory of autonomous systems. Neural networks, while enjoying considerable success in autonomous systems applications, have failed to provide a firm theoretical underpinning to neural systems embedded in their natural ecological context. This paper proposes a stochastic formulation of such an embedding. A neural sys...

  8. Parameter Identification by Bayes Decision and Neural Networks

    DEFF Research Database (Denmark)

    Kulczycki, P.; Schiøler, Henrik

    1994-01-01

    The problem of parameter identification by Bayes point estimation using neural networks is investigated.

  9. On The Comparison of Artificial Neural Network (ANN) and ...

    African Journals Online (AJOL)

    West African Journal of Industrial and Academic Research ... This work presented the results of an experimental comparison of two models: Multinomial Logistic Regression (MLR) and Artificial Neural Network (ANN) for ... Keywords: Multinomial Logistic Regression, Artificial Neural Network, Correct classification rate.

  10. A NEURAL OSCILLATOR-NETWORK MODEL OF TEMPORAL PATTERN GENERATION

    NARCIS (Netherlands)

    Schomaker, Lambert

    Most contemporary neural network models deal with essentially static, perceptual problems of classification and transformation. Models such as multi-layer feedforward perceptrons generally do not incorporate time as an essential dimension, whereas biological neural networks are inherently temporal

  11. Training strategy for convolutional neural networks in pedestrian gender classification

    Science.gov (United States)

    Ng, Choon-Boon; Tay, Yong-Haur; Goi, Bok-Min

    2017-06-01

    In this work, we studied a strategy for training a convolutional neural network in pedestrian gender classification with a limited amount of labeled training data. Unsupervised learning by k-means clustering on pedestrian images was used to learn the filters to initialize the first layer of the network. As a form of pre-training, supervised learning for the related task of pedestrian classification was performed. Finally, the network was fine-tuned for gender classification. We found that this strategy improved the network's generalization ability in gender classification, achieving better test results than random weight initialization and proving slightly more beneficial than merely initializing the first-layer filters by unsupervised learning. This shows that unsupervised learning followed by pre-training with pedestrian images is an effective strategy to learn useful features for pedestrian gender classification.
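
    A sketch of the k-means filter-learning step (generic, with random images standing in for pedestrian crops): random patches are clustered with k-means and the centroids are copied into the first convolutional layer as its initial filters.

```python
# Sketch: initialising first-layer conv filters from k-means centroids of image patches.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
images = rng.random((200, 48, 48)).astype(np.float32)   # stand-ins for pedestrian crops
patch, n_filters = 5, 32

# Sample random 5x5 patches and normalise each one.
patches = []
for img in images:
    for _ in range(20):
        i, j = rng.integers(0, 48 - patch, size=2)
        p = img[i:i + patch, j:j + patch].ravel()
        patches.append((p - p.mean()) / (p.std() + 1e-8))
patches = np.array(patches)

# Cluster the patches; the centroids act as unsupervised filters.
centroids = KMeans(n_clusters=n_filters, n_init=5, random_state=0).fit(patches).cluster_centers_

# Copy the centroids into the first conv layer of a network.
conv1 = nn.Conv2d(1, n_filters, kernel_size=patch)
with torch.no_grad():
    conv1.weight.copy_(torch.tensor(centroids, dtype=torch.float32).view(n_filters, 1, patch, patch))
# conv1 now holds the unsupervised filters; the rest of the network is trained as usual.
```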

  12. Neural networks of human nature and nurture

    Directory of Open Access Journals (Sweden)

    Daniel S. Levine

    2009-11-01

    Full Text Available Neural network methods have facilitated the unification of several unfortunate splits in psychology, including nature versus nurture. We review the contributions of this methodology and then discuss tentative network theories of caring behavior, of uncaring behavior, and of how the frontal lobes are involved in the choices between them. The implications of our theory are optimistic about the prospects of society to encourage the human potential for caring.

  13. Neural network for sonogram gap filling

    DEFF Research Database (Denmark)

    Klebæk, Henrik; Jensen, Jørgen Arendt; Hansen, Lars Kai

    1995-01-01

    a neural network for predicting mean frequency of the velocity signal and its variance. The neural network then predicts the evolution of the mean and variance in the gaps, and the sonogram and audio signal are reconstructed from these. The technique is applied on in-vivo data from the carotid artery...... in the sonogram and in the audio signal, rendering the audio signal useless, thus making diagnosis difficult. The current goal for ultrasound scanners is to maintain a high refresh rate for the B-mode image and at the same time attain a high maximum velocity in the sonogram display. This precludes the intermixing...... series, and is shown to yield better results, i.e., the variances of the predictions are lower. The ability of the neural predictor to reconstruct both the sonogram and the audio signal, when only 50% of the time is used for velocity data acquisition, is demonstrated for the in-vivo data...

  14. Digital Neural Networks for New Media

    Science.gov (United States)

    Spaanenburg, Lambert; Malki, Suleyman

    Neural Networks perform computationally intensive tasks offering smart solutions for many new media applications. A number of analog and mixed digital/analog implementations have been proposed to smooth the algorithmic gap. But gradually, the digital implementation has become feasible, and the dedicated neural processor is on the horizon. A notable example is the Cellular Neural Network (CNN). The analog direction has matured for low-power, smart vision sensors; the digital direction is gradually being shaped into an IP-core for algorithm acceleration, especially for use in FPGA-based high-performance systems. The chapter discusses the next step towards a flexible and scalable multi-core engine using Application-Specific Integrated Processors (ASIP). This topographic engine can serve many new media tasks, as illustrated by novel applications in Homeland Security. We conclude with a view on the CNN kaleidoscope for the year 2020.

  15. Distributed controller clustering in software defined networks.

    Directory of Open Access Journals (Sweden)

    Ahmed Abdelaziz

    Full Text Available Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method also shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.

  16. Distributed controller clustering in software defined networks.

    Science.gov (United States)

    Abdelaziz, Ahmed; Fong, Ang Tan; Gani, Abdullah; Garba, Usman; Khan, Suleman; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond

    2017-01-01

    Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method also shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.

  17. Peeking Network States with Clustered Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jinoh [Texas A & M Univ., Commerce, TX (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-10-20

    Network traffic monitoring has long been a core element for effective network management and security. However, it is still a challenging task with a high degree of complexity for comprehensive analysis when considering multiple variables and ever-increasing traffic volumes to monitor. For example, one of the widely considered approaches is to scrutinize probabilistic distributions, but it poses a scalability concern and multivariate analysis is not generally supported due to the exponential increase of the complexity. In this work, we propose a novel method for network traffic monitoring based on clustering, one of the powerful deep-learning techniques. We show that the new approach enables us to recognize clustered results as patterns representing the network states, which can then be utilized to evaluate “similarity” of network states over time. In addition, we define a new quantitative measure for the similarity between two compared network states observed in different time windows, as a supportive means for intuitive analysis. Finally, we demonstrate the clustering-based network monitoring with public traffic traces, and show that the proposed approach using the clustering method has a great opportunity for feasible, cost-effective network monitoring.
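
    The state-similarity idea can be sketched as follows (a simplification with assumed traffic features, not the paper's exact measure): records in each time window are clustered, each window is summarised by its distribution over clusters, and two windows are compared by the similarity of those distributions.

```python
# Sketch: comparing "network states" via cluster-assignment distributions per time window.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in traffic features (e.g., bytes, packets, duration) for two time windows.
window_a = rng.normal(size=(500, 3))
window_b = rng.normal(loc=0.5, size=(500, 3))

# Cluster all records together so both windows share the same pattern vocabulary.
km = KMeans(n_clusters=8, n_init=5, random_state=0).fit(np.vstack([window_a, window_b]))

def state(window):
    labels = km.predict(window)
    hist = np.bincount(labels, minlength=8).astype(float)
    return hist / hist.sum()                     # distribution over cluster "patterns"

sa, sb = state(window_a), state(window_b)
similarity = sa @ sb / (np.linalg.norm(sa) * np.linalg.norm(sb))   # cosine similarity
print("state similarity:", similarity)
```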

  18. Optimizing neural network models: motivation and case studies

    OpenAIRE

    Harp, S A; T. Samad

    2012-01-01

    Practical successes have been achieved with neural network models in a variety of domains, including energy-related industry. The large, complex design space presented by neural networks is only minimally explored in current practice. The satisfactory results that nevertheless have been obtained testify that neural networks are a robust modeling technology; at the same time, however, the lack of a systematic design approach implies that the best neural network models generally rem...

  19. Dynamic Object Identification with SOM-based neural networks

    Directory of Open Access Journals (Sweden)

    Aleksey Averkin

    2014-03-01

    Full Text Available In this article a number of neural networks based on self-organizing maps, which can be successfully used for dynamic object identification, are described. Unique SOM-based modular neural networks with vector quantized associative memory and recurrent self-organizing maps as modules are presented. The structured algorithms of learning and operation of such SOM-based neural networks are described in detail; some experimental results and a comparison with some other neural networks are also given.

  20. Stock Price Prediction Based on Procedural Neural Networks

    OpenAIRE

    Jiuzhen Liang; Wei Song; Mei Wang

    2011-01-01

    We present a spatiotemporal model, namely, procedural neural networks, for stock price prediction. Compared with some traditional models that have been successful in simulating the stock market, such as BNN (backpropagation neural network), HMM (hidden Markov model) and SVM (support vector machine), the procedural neural network model processes both spatial and temporal information synchronously without a sliding time window, which is typically used in the well-known recurrent neural networks. Two differen...

  1. Computational capabilities of graph neural networks.

    Science.gov (United States)

    Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele

    2009-01-01

    In this paper, we will consider the approximation properties of a recently introduced neural network model called the graph neural network (GNN), which can be used to process structured data inputs, e.g., acyclic graphs, cyclic graphs, and directed or undirected graphs. This class of neural networks implements a function τ(G, n) ∈ R^m that maps a graph G and one of its nodes n onto an m-dimensional Euclidean space. We characterize the functions that can be approximated by GNNs, in probability, up to any prescribed degree of precision. This set contains the maps that satisfy a property called preservation of the unfolding equivalence, and includes most of the practically useful functions on graphs; the only known exception is when the input graph contains particular patterns of symmetries for which unfolding equivalence may not be preserved. The result can be considered an extension of the universal approximation property established for the classic feedforward neural networks (FNNs). Some experimental examples are used to show the computational capabilities of the proposed model.

  2. Parameter estimation using compensatory neural networks

    Indian Academy of Sciences (India)

    Proposed here is a new neuron model, a basis for Compensatory Neural Network Architecture (CNNA), which not only reduces the total number of interconnections among neurons but also reduces the total computing time for training. The suggested model has properties of the basic neuron model as well as the higher ...

  3. Based on BP Neural Network Stock Prediction

    Science.gov (United States)

    Liu, Xiangwei; Ma, Xin

    2012-01-01

    The stock market has high-profit and high-risk features, so research on stock market analysis and prediction has received considerable attention. The stock price trend is a complex nonlinear function, so the price has a certain predictability. This article mainly uses an improved BP neural network (BPNN) to set up a stock market prediction model, and…

  4. Epileptiform spike detection via convolutional neural networks

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Jin, Jing; Maszczyk, Tomasz

    2016-01-01

    The EEG of epileptic patients often contains sharp waveforms called "spikes", occurring between seizures. Detecting such spikes is crucial for diagnosing epilepsy. In this paper, we develop a convolutional neural network (CNN) for detecting spikes in EEG of epileptic patients in an automated...

  5. Artificial neural networks and support vector mac

    Indian Academy of Sciences (India)

    Quantitative structure-property relationships of electroluminescent materials: Artificial neural networks and support vector machines to predict electroluminescence of organic molecules. ALANA FERNANDES GOLIN and RICARDO STEFANI. ∗. Laboratório de Estudos de Materiais (LEMAT), Instituto de Ciências Exatas e da ...

  6. Neural Networks for protein Structure Prediction

    DEFF Research Database (Denmark)

    Bohr, Henrik

    1998-01-01

    This is a review about neural network applications in bioinformatics. Especially the applications to protein structure prediction, e.g. prediction of secondary structures, prediction of surface structure, fold class recognition and prediction of the 3-dimensional structure of protein backbones...

  7. Towards semen quality assessment using neural networks

    DEFF Research Database (Denmark)

    Linneberg, Christian; Salamon, P.; Svarer, C.

    1994-01-01

    The paper presents the methodology and results from a neural net based classification of human sperm head morphology. The methodology uses a preprocessing scheme in which invariant Fourier descriptors are lumped into “energy” bands. The resulting networks are pruned using optimal brain damage...

  8. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  9. Convolutional Neural Networks - Generalizability and Interpretations

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David

    from data despite it being limited in amount or context representation. Within Machine Learning this thesis focuses on Convolutional Neural Networks for Computer Vision. The research aims to answer how to explore a model's generalizability to the whole population of data samples and how to interpret...

  10. Visualization of neural networks using saliency maps

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.; Kjems, Ulrik; Hansen, Lars Kai

    1995-01-01

    The saliency map is proposed as a new method for understanding and visualizing the nonlinearities embedded in feedforward neural networks, with emphasis on the ill-posed case, where the dimensionality of the input-field by far exceeds the number of examples. Several levels of approximations...

  11. Separable explanations of neural network decisions

    DEFF Research Database (Denmark)

    Rieger, Laura

    2017-01-01

    Deep Taylor Decomposition is a method used to explain neural network decisions. When applying this method to non-dominant classifications, the resulting explanation does not reflect important features for the chosen classification. We propose that this is caused by the dense layers and propose...

  12. Fast Fingerprint Classification with Deep Neural Network

    DEFF Research Database (Denmark)

    Michelsanti, Daniel; Guichi, Yanis; Ene, Andreea-Daniela

    2017-01-01

    In this work we evaluate the performance of two pre-trained convolutional neural networks fine-tuned on the NIST SD4 benchmark database. The obtained results show that this approach is comparable with other results in the literature, with the advantage of a fast feature extraction stage...

  13. Empirical generalization assessment of neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1995-01-01

    This paper addresses the assessment of generalization performance of neural network models by use of empirical techniques. We suggest to use the cross-validation scheme combined with a resampling technique to obtain an estimate of the generalization performance distribution of a specific model...

  14. Localizing Tortoise Nests by Neural Networks.

    Directory of Open Access Journals (Sweden)

    Roberto Barbuti

    Full Text Available The goal of this research is to recognize the nest digging activity of tortoises using a device mounted atop the tortoise carapace. The device classifies tortoise movements in order to discriminate between nest digging and non-digging activity (specifically walking and eating). Accelerometer data was collected from devices attached to the carapace of a number of tortoises during their two-month nesting period. Our system uses an accelerometer and an activity recognition system (ARS) which is modularly structured using an artificial neural network and an output filter. For the purpose of experiment and comparison, and with the aim of minimizing the computational cost, the artificial neural network has been modelled according to three different architectures based on the input delay neural network (IDNN). We show that the ARS can achieve very high accuracy on segments of data sequences, with an extremely small neural network that can be embedded in programmable low power devices. Given that digging is typically a long activity (up to two hours), the application of ARS on data segments can be repeated over time to set up a reliable and efficient system, called Tortoise@, for digging activity recognition.

  15. Feature to prototype transition in neural networks

    Science.gov (United States)

    Krotov, Dmitry; Hopfield, John

    Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology. Simons Center for Systems Biology.

  16. Applying Artificial Neural Networks for Face Recognition

    Directory of Open Access Journals (Sweden)

    Thai Hoang Le

    2011-01-01

    Full Text Available This paper introduces some novel models for all steps of a face recognition system. In the step of face detection, we propose a hybrid model combining AdaBoost and Artificial Neural Network (ABANN) to solve the process efficiently. In the next step, labeled faces detected by ABANN will be aligned by Active Shape Model and Multi Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on Multi Layer Perceptron. The classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving the efficiency by the association of two methods: the geometric feature based method and the Independent Component Analysis method. In the face matching step, we apply a model combining many Neural Networks for matching geometric features of the human face. The model links many Neural Networks together, so we call it Multi Artificial Neural Network. The MIT + CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on the Caltech database show the feasibility of our proposed model.

  17. drinking water treatment using artificial neural network

    African Journals Online (AJOL)

    ogwueleka

    synaptic weights are used to store the knowledge.” The neural network approach is a branch of artificial intelligence. The ANN is based on a model of the human neurological system that consists of basic computing elements (called neurons) interconnected together (Figure 1). The model used for all classification attempts.

  18. Artificial neural networks in neutron dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado, G.A.; Perales M, W.A.; Robles R, J.A. [Unidades Academicas de Estudios Nucleares, UAZ, A.P. 336, 98000 Zacatecas (Mexico); Gallego, E.; Lorente, A. [Depto. de Ingenieria Nuclear, Universidad Politecnica de Madrid, (Spain)

    2005-07-01

    An artificial neural network has been designed to obtain the neutron doses using only the Bonner spheres spectrometer's count rates. Ambient, personal and effective neutron doses were included. 187 neutron spectra were utilized to calculate the Bonner count rates and the neutron doses. The spectra were transformed from lethargy to energy distribution and were re-binned to 31 energy groups using the MCNP 4C code. Re-binned spectra, the UTA4 response matrix and fluence-to-dose coefficients were used to calculate the count rates in the Bonner spheres spectrometer and the doses. Count rates were used as input and the respective doses were used as output during neural network training. Training and testing were carried out in the Matlab environment. The artificial neural network performance was evaluated using the χ²-test, where the original and calculated doses were compared. The use of Artificial Neural Networks in neutron dosimetry is an alternative procedure that overcomes the drawbacks associated with this ill-conditioned problem. (Author)
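
    As a rough illustration of the regression task described above, the sketch below trains a small feed-forward network to map sphere count rates to a dose value. It uses scikit-learn and purely synthetic placeholder data; the 187 measured spectra, the UTA4 response matrix and the fluence-to-dose coefficients from the paper are not reproduced.

        # Hedged sketch: a small MLP regressor mapping Bonner-sphere count rates to a
        # neutron dose quantity. All data below are synthetic placeholders.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n_spectra, n_spheres = 187, 7                    # 7 spheres is a typical setup (assumption)
        counts = rng.lognormal(mean=2.0, sigma=0.5, size=(n_spectra, n_spheres))
        dose = counts @ rng.uniform(0.1, 1.0, size=n_spheres)   # fake linear "dose"

        X_tr, X_te, y_tr, y_te = train_test_split(counts, dose, random_state=0)
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
        model.fit(X_tr, y_tr)

        # Chi-square-style agreement check between predicted and reference doses
        pred = model.predict(X_te)
        chi2 = np.sum((pred - y_te) ** 2 / np.maximum(y_te, 1e-9))
        print(f"R^2 = {model.score(X_te, y_te):.3f}, chi2 = {chi2:.2f}")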

  19. Learning chaotic attractors by neural networks

    NARCIS (Netherlands)

    Bakker, R; Schouten, JC; Giles, CL; Takens, F; van den Bleek, CM

    2000-01-01

    An algorithm is introduced that trains a neural network to identify chaotic dynamics from a single measured time series. During training, the algorithm learns to make short-term predictions of the time series. At the same time a criterion, developed by Diks, van Zwet, Takens, and de Goede (1996) is monitored

  20. Nonlinear Time Series Analysis via Neural Networks

    Science.gov (United States)

    Volná, Eva; Janošek, Michal; Kocian, Václav; Kotyrba, Martin

    This article deals with a time series analysis based on neural networks in order to achieve effective forex market [Moore and Roche, J. Int. Econ. 58, 387-411 (2002)] pattern recognition. Our goal is to find and recognize important patterns which repeatedly appear in the market history and to adapt our trading system behaviour based on them.

  1. Neural networks, penalty logic and optimality theory

    NARCIS (Netherlands)

    Blutner, R.; Benz, A.; Blutner, R.

    2009-01-01

    Ever since the discovery of neural networks, there has been a controversy between two modes of information processing. On the one hand, symbolic systems have proven indispensable for our understanding of higher intelligence, especially when cognitive domains like language and reasoning are examined.

  2. Image inpainting using a neural network

    Directory of Open Access Journals (Sweden)

    Gapon Nikolay

    2017-01-01

    Full Text Available The paper describes a new method of two-dimensional signal reconstruction by restoring static images. A new method of spatial reconstruction of static images based on a geometric model using a neural network is proposed; it relies on searching for similar blocks and copying them into the region of distorted or missing pixel values.

  3. Foetal ECG recovery using dynamic neural networks.

    Science.gov (United States)

    Camps-Valls, Gustavo; Martínez-Sober, Marcelino; Soria-Olivas, Emilio; Magdalena-Benedito, Rafael; Calpe-Maravilla, Javier; Guerrero-Martínez, Juan

    2004-07-01

    Non-invasive electrocardiography has proven to be a very interesting method for obtaining information about the foetus state and thus to assure its well-being during pregnancy. One of the main applications in this field is foetal electrocardiogram (ECG) recovery by means of automatic methods. Evident problems found in the literature are the limited number of available registers, the lack of performance indicators, and the limited use of non-linear adaptive methods. In order to circumvent these problems, we first introduce the generation of synthetic registers and discuss the influence of different kinds of noise to the modelling. Second, a method which is based on numerical (correlation coefficient) and statistical (analysis of variance, ANOVA) measures allows us to select the best recovery model. Finally, finite impulse response (FIR) and gamma neural networks are included in the adaptive noise cancellation (ANC) scheme in order to provide highly non-linear, dynamic capabilities to the recovery model. Neural networks are benchmarked with classical adaptive methods such as the least mean squares (LMS) and the normalized LMS (NLMS) algorithms in simulated and real registers and some conclusions are drawn. For synthetic registers, the most determinant factor in the identification of the models is the foetal-maternal signal-to-noise ratio (SNR). In addition, as the electromyogram contribution becomes more relevant, neural networks clearly outperform the LMS-based algorithm. From the ANOVA test, we found statistical differences between LMS-based models and neural models when complex situations (high foetal-maternal and foetal-noise SNRs) were present. These conclusions were confirmed after doing robustness tests on synthetic registers, visual inspection of the recovered signals and calculation of the recognition rates of foetal R-peaks for real situations. Finally, the best compromise between model complexity and outcomes was provided by the FIR neural network. Both
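
    The LMS-based adaptive noise cancellation that the neural models are benchmarked against is easy to sketch; the fragment below cancels a synthetic "maternal" reference from a synthetic abdominal mixture. The signal shapes, sampling rate and step size are illustrative assumptions, not the registers used in the paper.

        # Hedged sketch of the classical LMS adaptive noise canceller used as a baseline.
        import numpy as np

        t = np.arange(0, 10, 1.0 / 500)                   # 10 s sampled at 500 Hz
        maternal = np.sin(2 * np.pi * 1.2 * t)            # reference input (maternal proxy)
        foetal = 0.2 * np.sin(2 * np.pi * 2.4 * t + 0.7)  # signal of interest
        primary = foetal + 0.8 * np.roll(maternal, 3)     # abdominal lead: foetal + coupled maternal

        def lms_cancel(primary, reference, order=8, mu=0.01):
            """Return the error signal, which approximates the foetal component."""
            w = np.zeros(order)
            out = np.zeros_like(primary)
            for n in range(order, len(primary)):
                x = reference[n - order:n][::-1]          # most recent reference samples first
                e = primary[n] - w @ x                    # what remains after cancellation
                w += 2 * mu * e * x                       # LMS weight update
                out[n] = e
            return out

        recovered = lms_cancel(primary, maternal)
        print("residual error power:", np.mean((recovered[1000:] - foetal[1000:]) ** 2))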

  4. Exploring biological network structure with clustered random networks

    Directory of Open Access Journals (Sweden)

    Bansal Shweta

    2009-12-01

    Full Text Available Abstract Background Complex biological systems are often modeled as networks of interacting units. Networks of biochemical interactions among proteins, epidemiological contacts among hosts, and trophic interactions in ecosystems, to name a few, have provided useful insights into the dynamical processes that shape and traverse these systems. The degrees of nodes (numbers of interactions and the extent of clustering (the tendency for a set of three nodes to be interconnected are two of many well-studied network properties that can fundamentally shape a system. Disentangling the interdependent effects of the various network properties, however, can be difficult. Simple network models can help us quantify the structure of empirical networked systems and understand the impact of various topological properties on dynamics. Results Here we develop and implement a new Markov chain simulation algorithm to generate simple, connected random graphs that have a specified degree sequence and level of clustering, but are random in all other respects. The implementation of the algorithm (ClustRNet: Clustered Random Networks provides the generation of random graphs optimized according to a local or global, and relative or absolute measure of clustering. We compare our algorithm to other similar methods and show that ours more successfully produces desired network characteristics. Finding appropriate null models is crucial in bioinformatics research, and is often difficult, particularly for biological networks. As we demonstrate, the networks generated by ClustRNet can serve as random controls when investigating the impacts of complex network features beyond the byproduct of degree and clustering in empirical networks. Conclusion ClustRNet generates ensembles of graphs of specified edge structure and clustering. These graphs allow for systematic study of the impacts of connectivity and redundancies on network function and dynamics. This process is a key step in

  5. Exploring biological network structure with clustered random networks.

    Science.gov (United States)

    Bansal, Shweta; Khandelwal, Shashank; Meyers, Lauren Ancel

    2009-12-09

    Complex biological systems are often modeled as networks of interacting units. Networks of biochemical interactions among proteins, epidemiological contacts among hosts, and trophic interactions in ecosystems, to name a few, have provided useful insights into the dynamical processes that shape and traverse these systems. The degrees of nodes (numbers of interactions) and the extent of clustering (the tendency for a set of three nodes to be interconnected) are two of many well-studied network properties that can fundamentally shape a system. Disentangling the interdependent effects of the various network properties, however, can be difficult. Simple network models can help us quantify the structure of empirical networked systems and understand the impact of various topological properties on dynamics. Here we develop and implement a new Markov chain simulation algorithm to generate simple, connected random graphs that have a specified degree sequence and level of clustering, but are random in all other respects. The implementation of the algorithm (ClustRNet: Clustered Random Networks) provides the generation of random graphs optimized according to a local or global, and relative or absolute measure of clustering. We compare our algorithm to other similar methods and show that ours more successfully produces desired network characteristics.Finding appropriate null models is crucial in bioinformatics research, and is often difficult, particularly for biological networks. As we demonstrate, the networks generated by ClustRNet can serve as random controls when investigating the impacts of complex network features beyond the byproduct of degree and clustering in empirical networks. ClustRNet generates ensembles of graphs of specified edge structure and clustering. These graphs allow for systematic study of the impacts of connectivity and redundancies on network function and dynamics. This process is a key step in unraveling the functional consequences of the structural
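
    ClustRNet itself is a dedicated Markov chain algorithm and is not reproduced here; as a loose illustration of the kind of null model involved, the sketch below builds a graph with a prescribed degree sequence, measures its clustering, and applies degree-preserving double-edge swaps using networkx.

        # Hedged illustration (not the ClustRNet algorithm): fixed degree sequence,
        # global clustering before and after degree-preserving randomisation.
        import networkx as nx

        degree_sequence = [3, 3, 3, 3, 2, 2, 2, 2, 1, 1]   # must sum to an even number
        G = nx.configuration_model(degree_sequence, seed=1)
        G = nx.Graph(G)                                     # collapse parallel edges
        G.remove_edges_from(nx.selfloop_edges(G))           # drop self-loops

        print("degrees:", sorted(d for _, d in G.degree()))
        print("transitivity:", nx.transitivity(G))

        # Degree-preserving rewiring: the degree sequence stays fixed, clustering may change
        nx.double_edge_swap(G, nswap=20, max_tries=2000, seed=1)
        print("transitivity after swaps:", nx.transitivity(G))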

  6. MBVCNN: Joint convolutional neural networks method for image recognition

    Science.gov (United States)

    Tong, Tong; Mu, Xiaodong; Zhang, Li; Yi, Zhaoxiang; Hu, Pei

    2017-05-01

    To address the mismatch between rectangular objects in images and the square inputs expected by convolutional neural networks, an object recognition model is proposed that uses the BING method for object proposal estimation and a vectorization of convolutional neural networks to accept square image inputs, yielding a joint convolutional neural network that accepts images of multiple sizes. Experiments show that the accuracy of multi-object image recognition improves by 6.70% compared with a single vectorized convolutional neural network. The joint convolutional neural network method can therefore enhance image recognition accuracy, especially for rectangular targets.

  7. Analysis of neural networks in terms of domain functions

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  8. Extracting knowledge from supervised neural networks in image processing

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Slump, Cornelis H.; Spaanenburg, Lambert; Jain, R.; Abraham, A.; Faucher, C.; van der Zwaag, B.J.

    Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a

  9. neural network based load frequency control for restructuring power

    African Journals Online (AJOL)

    2012-03-01

    In this study, an artificial neural network (ANN) application to load frequency control (LFC) of a multi-area power system using a neural network controller is presented. The comparison between a conventional Proportional Integral (PI) controller and the proposed artificial neural networks ...

  10. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    The application of neural networks to model a laboratory scale inverse fluidized bed reactor has been studied. A Radial Basis Function neural network has been successfully employed for the modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological ...

  11. Time series prediction with simple recurrent neural networks ...

    African Journals Online (AJOL)

    Simple recurrent neural networks are widely used in time series prediction. Most researchers and application developers often choose arbitrarily between Elman or Jordan simple recurrent neural networks for their applications. A hybrid of the two called Elman-Jordan (or Multi-recurrent) neural network is also being used.

  12. Application of radial basis neural network for state estimation of ...

    African Journals Online (AJOL)

    An original application of radial basis function (RBF) neural network for power system state estimation is proposed in this paper. The property of massive parallelism of neural networks is employed for this. The application of the RBF neural network for state estimation is investigated by testing its applicability on an IEEE 14-bus ...

  13. New Neural Network Methods for Forecasting Regional Employment

    NARCIS (Netherlands)

    Patuelli, R.; Reggiani, A; Nijkamp, P.; Blien, U.

    2006-01-01

    In this paper, a set of neural network (NN) models is developed to compute short-term forecasts of regional employment patterns in Germany. Neural networks are modern statistical tools based on learning algorithms that are able to process large amounts of data. Neural networks are enjoying

  14. The Artificial Neural Network as means for modeling Nonlinear Systems

    OpenAIRE

    Drábek Oldřich; Taufer Ivan

    1998-01-01

    The paper deals with nonlinear system identification based on neural networks. The topic of this publication is the simulation of training and testing a neural network. The contribution is addressed to technologists who are well versed in classical identification problems but whose knowledge of neural-network-based identification is still only theoretical.

  15. The Artificial Neural Network as means for modeling Nonlinear Systems

    Directory of Open Access Journals (Sweden)

    Drábek Oldřich

    1998-12-01

    Full Text Available The paper deals with nonlinear system identification based on neural networks. The topic of this publication is the simulation of training and testing a neural network. The contribution is addressed to technologists who are well versed in classical identification problems but whose knowledge of neural-network-based identification is still only theoretical.

  16. Algorithm For A Self-Growing Neural Network

    Science.gov (United States)

    Cios, Krzysztof J.

    1996-01-01

    CID3 algorithm simulates self-growing neural network. Constructs decision trees equivalent to hidden layers of neural network. Based on ID3 algorithm, which dynamically generates decision tree while minimizing entropy of information. CID3 algorithm generates feedforward neural network by use of either crisp or fuzzy measure of entropy.

  17. Fuzzy logic, neural networks, and soft computing

    Science.gov (United States)

    Zadeh, Lofti A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial

  18. Optical implementation of neural networks

    Science.gov (United States)

    Yu, Francis T. S.; Guo, Ruyan

    2002-12-01

    An adaptive optical neuro-computing (ONC) using inexpensive pocket size liquid crystal televisions (LCTVs) had been developed by the graduate students in the Electro-Optics Laboratory at The Pennsylvania State University. Although this neuro-computing has only 8×8=64 neurons, it can be easily extended to 16×20=320 neurons. The major advantages of this LCTV architecture as compared with other reported ONCs, are low cost and the flexibility to operate. To test the performance, several neural net models are used. These models are Interpattern Association, Hetero-association and unsupervised learning algorithms. The system design considerations and experimental demonstrations are also included.

  19. Identifying Jets Using Artificial Neural Networks

    Science.gov (United States)

    Rosand, Benjamin; Caines, Helen; Checa, Sofia

    2017-09-01

    We investigate particle jet interactions with the Quark Gluon Plasma (QGP) using artificial neural networks modeled on those used in computer image recognition. We create jet images by binning jet particles into pixels and preprocessing every image. We analyzed the jets with a Multi-layered maxout network and a convolutional network. We demonstrate each network's effectiveness in differentiating simulated quenched jets from unquenched jets, and we investigate the method that the network uses to discriminate among different quenched jet simulations. Finally, we develop a greater understanding of the physics behind quenched jets by investigating what the network learnt as well as its effectiveness in differentiating samples. Yale College Freshman Summer Research Fellowship in the Sciences and Engineering.

  20. Fuzzy stochastic neural network model for structural system identification

    Science.gov (United States)

    Jiang, Xiaomo; Mahadevan, Sankaran; Yuan, Yong

    2017-01-01

    This paper presents a dynamic fuzzy stochastic neural network model for nonparametric system identification using ambient vibration data. The model is developed to handle two types of imprecision in the sensed data: fuzzy information and measurement uncertainties. The dimension of the input vector is determined by using the false nearest neighbor approach. A Bayesian information criterion is applied to obtain the optimum number of stochastic neurons in the model. A fuzzy C-means clustering algorithm is employed as a data mining tool to divide the sensed data into clusters with common features. The fuzzy stochastic model is created by combining the fuzzy clusters of input vectors with the radial basis activation functions in the stochastic neural network. A natural gradient method is developed based on the Kullback-Leibler distance criterion for quick convergence of the model training. The model is validated using a power density pseudospectrum approach and a Bayesian hypothesis testing-based metric. The proposed methodology is investigated with numerically simulated data from a Markov Chain model and a two-story planar frame, and experimentally sensed data from ambient vibration data of a benchmark structure.
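
    The fuzzy C-means step used to partition the sensed data can be written compactly in NumPy; the sketch below runs it on synthetic 2-D points standing in for the ambient-vibration feature vectors. The cluster count, fuzzifier and data are illustrative assumptions.

        # Hedged sketch of fuzzy C-means clustering on synthetic 2-D data.
        import numpy as np

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])

        def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, tol=1e-6):
            U = rng.dirichlet(np.ones(c), size=len(X))       # random initial memberships
            for _ in range(n_iter):
                Um = U ** m
                centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
                dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                inv = dist ** (-2.0 / (m - 1.0))
                new_U = inv / inv.sum(axis=1, keepdims=True)  # standard FCM membership update
                if np.max(np.abs(new_U - U)) < tol:
                    return centers, new_U
                U = new_U
            return centers, U

        centers, U = fuzzy_c_means(X)
        print("cluster centres:\n", centers)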

  1. Hybrid discrete-time neural networks.

    Science.gov (United States)

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.
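
    As an illustration of the single-neuron building block of such models, the sketch below iterates the Rulkov map, one widely used map-based neuron model; the FTM coupling between neurons is omitted, and the parameter choices are assumptions rather than values from the paper.

        # Hedged sketch: iterate a Rulkov-type map-based neuron (coupling omitted).
        import numpy as np

        alpha, mu, sigma = 4.1, 0.001, -1.0     # illustrative parameters
        x, y = -1.0, -3.0                       # fast and slow variables
        trace = []
        for _ in range(3000):
            # both updates use the current x (simultaneous update)
            x, y = alpha / (1.0 + x * x) + y, y - mu * (x - sigma)
            trace.append(x)

        trace = np.array(trace)
        print("fast-variable range:", trace.min(), trace.max())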

  2. Computationally Efficient Neural Network Intrusion Security Awareness

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly based systems. Several test cases executed on ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced to less than 1 second from 37 seconds.

  3. Matrix representation of a Neural Network

    DEFF Research Database (Denmark)

    Christensen, Bjørn Klint

    Processing, by David Rummelhart (Rummelhart 1986) for an easy-to-read introduction. What the paper does explain is how a matrix representation of a neural net allows for a very simple implementation. The matrix representation is introduced in (Rummelhart 1986, chapter 9), but only for a two-layer linear ... network and the feedforward algorithm. This paper develops the idea further to three-layer non-linear networks and the backpropagation algorithm. Figure 1 shows the layout of a three-layer network. There are I input nodes, J hidden nodes and K output nodes all indexed from 0. Bias-node for the hidden ...
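
    A minimal NumPy sketch of the matrix view described above: the weights of a three-layer network with I input, J hidden and K output nodes are held in two matrices, and the forward pass is two matrix-vector products. The backpropagation step follows the same matrix pattern but is omitted here; the layer sizes and the sigmoid choice are assumptions.

        # Hedged sketch: three-layer network as two weight matrices, bias folded in
        # as an extra column, forward pass only.
        import numpy as np

        rng = np.random.default_rng(0)
        I, J, K = 4, 5, 3
        W1 = rng.normal(scale=0.5, size=(J, I + 1))   # hidden weights, last column = bias
        W2 = rng.normal(scale=0.5, size=(K, J + 1))   # output weights, last column = bias

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def forward(x):
            x = np.append(x, 1.0)                     # constant bias input
            hidden = sigmoid(W1 @ x)
            hidden = np.append(hidden, 1.0)           # bias node for the output layer
            return sigmoid(W2 @ hidden)

        print(forward(np.array([0.2, -0.1, 0.7, 0.05])))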

  4. Reconstruction of periodic signals using neural networks

    Directory of Open Access Journals (Sweden)

    José Danilo Rairán Antolines

    2014-01-01

    Full Text Available In this paper, we reconstruct a periodic signal by using two neural networks. The first network is trained to approximate the period of a signal, and the second network estimates the corresponding coefficients of the signal's Fourier expansion. The reconstruction strategy consists in minimizing the mean-square error via backpropagation algorithms over a single neuron with a sine transfer function. Additionally, this paper presents mathematical proof about the quality of the approximation as well as a first modification of the algorithm, which requires less data to reach the same estimation, thus making the algorithm suitable for real-time implementations.
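
    A hedged sketch of the second stage: with the period assumed already estimated (the role of the first network), a single sine unit is fitted to a noisy periodic signal by gradient descent on the mean-square error. The test signal, learning rate and iteration count are illustrative assumptions.

        # Hedged sketch: fit y = a*sin(2*pi*t/T + p) by gradient descent on the MSE,
        # with the period T treated as known.
        import numpy as np

        rng = np.random.default_rng(0)
        T = 1.0 / 3.0                                    # known period (3 Hz signal)
        t = np.linspace(0, 2.0, 400)
        signal = 1.5 * np.sin(2 * np.pi * t / T + 0.4) + 0.05 * rng.normal(size=t.size)

        a, p, lr = 1.0, 0.0, 0.1
        for _ in range(2000):
            phase = 2 * np.pi * t / T + p
            err = a * np.sin(phase) - signal
            a -= lr * 2 * np.mean(err * np.sin(phase))       # dMSE/da
            p -= lr * 2 * np.mean(err * a * np.cos(phase))   # dMSE/dp

        print(f"estimated amplitude {a:.2f} and phase {p:.2f} (true values 1.50, 0.40)")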

  5. Neural networks: Application to medical imaging

    Science.gov (United States)

    Clarke, Laurence P.

    1994-01-01

    The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.

  6. Fuzzy logic and neural network technologies

    Science.gov (United States)

    Villarreal, James A.; Lea, Robert N.; Savely, Robert T.

    1992-01-01

    Applications of fuzzy logic technologies in NASA projects are reviewed to examine their advantages in the development of neural networks for aerospace and commercial expert systems and control. Examples of fuzzy-logic applications include a 6-DOF spacecraft controller, collision-avoidance systems, and reinforcement-learning techniques. The commercial applications examined include a fuzzy autofocusing system, an air conditioning system, and an automobile transmission application. The practical use of fuzzy logic is set in the theoretical context of artificial neural systems (ANSs) to give the background for an overview of ANS research programs at NASA. The research and application programs include the Network Execution and Training Simulator and faster training algorithms such as the Difference Optimized Training Scheme. The networks are well suited for pattern-recognition applications such as predicting sunspots, controlling posture maintenance, and conducting adaptive diagnoses.

  7. A Topological Perspective of Neural Network Structure

    Science.gov (United States)

    Sizemore, Ann; Giusti, Chad; Cieslak, Matthew; Grafton, Scott; Bassett, Danielle

    The wiring patterns of white matter tracts between brain regions inform functional capabilities of the neural network. Indeed, densely connected and cyclically arranged cognitive systems may communicate and thus perform distinctly. However, previously employed graph theoretical statistics are local in nature and thus insensitive to such global structure. Here we present an investigation of the structural neural network in eight healthy individuals using persistent homology. An extension of homology to weighted networks, persistent homology records both circuits and cliques (all-to-all connected subgraphs) through a repetitive thresholding process, thus perceiving structural motifs. We report structural features found across patients and discuss brain regions responsible for these patterns, finally considering the implications of such motifs in relation to cognitive function.

  8. The relevance of network micro-structure for neural dynamics

    Directory of Open Access Journals (Sweden)

    Volker ePernice

    2013-06-01

    Full Text Available The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits.

  9. Robustness in Weighted Networks with Cluster Structure

    Directory of Open Access Journals (Sweden)

    Yi Zheng

    2014-01-01

    Full Text Available The vulnerability of complex systems induced by cascade failures revealed the comprehensive interaction of dynamics with network structure. The effect on cascade failures induced by cluster structure was investigated on three networks, small-world, scale-free, and module networks, of which the clustering coefficient is controllable by the random walk method. After analyzing the shifting process of load, we found that the betweenness centrality and the cluster structure play an important role in cascading model. Focusing on this point, properties of cascading failures were studied on model networks with adjustable clustering coefficient and fixed degree distribution. In the proposed weighting strategy, the path length of an edge is designed as the product of the clustering coefficient of its end nodes, and then the modified betweenness centrality of the edge is calculated and applied in cascade model as its weights. The optimal region of the weighting scheme and the size of the survival components were investigated by simulating the edge removing attack, under the rule of local redistribution based on edge weights. We found that the weighting scheme based on the modified betweenness centrality makes all three networks have better robustness against edge attack than the one based on the original betweenness centrality.
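
    The weighting step described above is straightforward to express with networkx: each edge is given a length equal to the product of the clustering coefficients of its end nodes, and edge betweenness is then computed over those lengths. The test graph and the small offset added to avoid zero lengths are assumptions of this sketch.

        # Hedged sketch of the clustering-based edge weighting and modified betweenness.
        import networkx as nx

        G = nx.watts_strogatz_graph(n=50, k=4, p=0.1, seed=1)   # small-world test graph
        clustering = nx.clustering(G)

        for u, v in G.edges():
            # small constant keeps zero-clustering endpoints from giving zero-length edges
            G[u][v]["length"] = clustering[u] * clustering[v] + 1e-6

        modified_bc = nx.edge_betweenness_centrality(G, weight="length")
        top = sorted(modified_bc.items(), key=lambda kv: kv[1], reverse=True)[:5]
        print("edges with highest modified betweenness:", top)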

  10. A network security situation prediction model based on wavelet neural network with optimized parameters

    Directory of Open Access Journals (Sweden)

    Haibo Zhang

    2016-08-01

    Full Text Available Security incidents on networks are sudden and uncertain, so it is very hard to precisely predict the network security situation by traditional methods. In order to improve the prediction accuracy of the network security situation, we build a network security situation prediction model based on a Wavelet Neural Network (WNN) with parameters optimized by an Improved Niche Genetic Algorithm (INGA). The proposed model adopts a WNN, which has strong nonlinear ability and fault-tolerance performance. Also, the parameters of the WNN are optimized through an adaptive genetic algorithm (GA) so that the WNN searches more effectively. Considering that the adaptive GA converges slowly and easily falls into premature convergence, we introduce a novel niche technology with a dynamic fuzzy clustering and elimination mechanism to solve the premature convergence of the GA. Our final simulation results show that the proposed INGA-WNN prediction model is more reliable and effective, and it achieves faster convergence speed and higher prediction accuracy than the Genetic Algorithm-Wavelet Neural Network (GA-WNN), Genetic Algorithm-Back Propagation Neural Network (GA-BPNN) and WNN.

  11. Phase Diagram of Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamed eSeyed-Allaei

    2015-03-01

    Full Text Available In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments and observations, but here I take a different perspective, inspired by evolution. I simulate many networks, each with a different set of parameters, and then I try to figure out what makes the common values desirable by nature. Networks which are configured according to the common values have the best dynamic range in response to an impulse, and their dynamic range is more robust with respect to synaptic weights. In fact, evolution has favored networks of best dynamic range. I present a phase diagram that shows the dynamic ranges of different networks with different parameters. This phase diagram gives an insight into the space of parameters -- excitatory to inhibitory ratio, sparseness of connections and synaptic weights. It may serve as a guideline to decide about the values of parameters in a simulation of a spiking neural network.
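
    The "common values" quoted above are easy to encode; the sketch below only builds such a random connectivity matrix (2% connection probability, 80% excitatory / 20% inhibitory) and leaves the spiking dynamics out. The network size and synaptic weight values are arbitrary assumptions.

        # Hedged sketch: random connectivity with the commonly assumed parameters.
        import numpy as np

        rng = np.random.default_rng(0)
        n, p_connect = 1000, 0.02
        n_exc = int(0.8 * n)                              # neurons 0..799 excitatory

        adjacency = rng.random((n, n)) < p_connect
        np.fill_diagonal(adjacency, False)                # no self-connections

        inhibitory_rows = np.zeros((n, n), dtype=bool)
        inhibitory_rows[n_exc:, :] = True                 # rows of inhibitory presynaptic neurons

        weights = np.zeros((n, n))
        weights[adjacency] = 0.1                          # excitatory synaptic weight (assumed)
        weights[adjacency & inhibitory_rows] = -0.4       # inhibitory synaptic weight (assumed)

        print("mean connections per neuron:", adjacency.sum() / n)
        print("fraction of inhibitory synapses:", (weights < 0).sum() / adjacency.sum())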

  12. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by a current or former Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks. Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  13. Fuzzy logic and neural networks basic concepts & application

    CERN Document Server

    Alavala, Chennakesava R

    2008-01-01

    About the Book: The primary purpose of this book is to provide the student with a comprehensive knowledge of basic concepts of fuzzy logic and neural networks. The hybridization of fuzzy logic and neural networks is also included. No previous knowledge of fuzzy logic and neural networks is required. Fuzzy logic and neural networks have been discussed in detail through illustrative examples, methods and generic applications. The extensive and carefully selected references are an invaluable resource for further study of fuzzy logic and neural networks. Each chapter is followed by a question bank

  14. Character Recognition Using Genetically Trained Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Diniz, C.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.

    1998-10-01

    Computationally intelligent recognition of characters and symbols addresses a wide range of applications including foreign language translation and chemical formula identification. The combination of intelligent learning and optimization algorithms with layered neural structures offers powerful techniques for character recognition. These techniques were originally developed by Sandia National Laboratories for pattern and spectral analysis; however, their ability to optimize vast amounts of data makes them ideal for character recognition. An adaptation of the Neural Network Designer software allows the user to create a neural network (NN) trained by a genetic algorithm (GA) that correctly identifies multiple distinct characters. The initial successful recognition of standard capital letters can be expanded to include chemical and mathematical symbols and alphabets of foreign languages, especially Arabic and Chinese. The NN model constructed for this project uses a three layer feed-forward architecture. To facilitate the input of characters and symbols, a graphic user interface (GUI) has been developed to convert the traditional representation of each character or symbol to a bitmap. The 8 x 8 bitmap representations used for these tests are mapped onto the input nodes of the feed-forward neural network (FFNN) in a one-to-one correspondence. The input nodes feed forward into a hidden layer, and the hidden layer feeds into five output nodes correlated to possible character outcomes. During the training period the GA optimizes the weights of the NN until it can successfully recognize distinct characters. Systematic deviations from the base design test the network's range of applicability. Increasing capacity, the number of letters to be recognized, requires a nonlinear increase in the number of hidden layer neurodes. Optimal character recognition performance necessitates a minimum threshold for the number of cases when genetically training the net. And, the

  15. Deep Gate Recurrent Neural Network

    Science.gov (United States)

    2016-11-22

    distribution, e.g. a particular book. In this experiment, we use a collection of writings by Nietzsche to train our network. In total, this corpus contains...

  16. A Projection Neural Network for Constrained Quadratic Minimax Optimization.

    Science.gov (United States)

    Liu, Qingshan; Wang, Jun

    2015-11-01

    This paper presents a projection neural network described by a dynamic system for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for global convergence of the proposed neural network. Compared with some of the existing neural networks for quadratic minimax optimization, the proposed neural network in this paper is capable of solving more general constrained quadratic minimax optimization problems, and the designed neural network does not include any parameter. Moreover, the neural network has lower model complexities, the number of state variables of which is equal to that of the dimension of the optimization problems. The simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.
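
    As a rough illustration of the general idea of a projection neural network (not the paper's specific minimax formulation), the sketch below Euler-integrates the projection dynamics x' = -x + P(x - alpha*grad f(x)) for a box-constrained quadratic; the problem data, step sizes and constraint set are assumptions.

        # Hedged illustration of projection dynamics for a box-constrained quadratic,
        # not the specific minimax network of the paper.
        import numpy as np

        Q = np.array([[3.0, 0.5], [0.5, 2.0]])            # positive definite
        c = np.array([-2.0, -5.0])
        lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])

        def project(x):
            return np.clip(x, lo, hi)                     # projection onto the box

        x, alpha, dt = np.array([0.5, 0.5]), 0.2, 0.05
        for _ in range(2000):
            grad = Q @ x + c                              # gradient of 0.5*x'Qx + c'x
            x = x + dt * (-x + project(x - alpha * grad)) # Euler step of the dynamics

        print("approximate constrained minimiser:", x)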

  17. Identifying Geographic Clusters: A Network Analytic Approach

    CERN Document Server

    Catini, Roberto; Penner, Orion; Riccaboni, Massimo

    2015-01-01

    In recent years there has been a growing interest in the role of networks and clusters in the global economy. Despite being a popular research topic in economics, sociology and urban studies, geographical clustering of human activity has often been studied by means of predetermined geographical units such as administrative divisions and metropolitan areas. This approach is intrinsically time invariant and it does not allow one to differentiate between different activities. Our goal in this paper is to present a new methodology for identifying clusters that can be applied to different empirical settings. We use a graph approach based on k-shell decomposition to analyze world biomedical research clusters based on PubMed scientific publications. We identify research institutions and locate their activities in geographical clusters. Leading areas of scientific production and their top performing research institutions are consistently identified at different geographic scales.

  18. Relaxation dynamics of maximally clustered networks

    Science.gov (United States)

    Klaise, Janis; Johnson, Samuel

    2018-01-01

    We study the relaxation dynamics of fully clustered networks (maximal number of triangles) to an unclustered state under two different edge dynamics—the double-edge swap, corresponding to degree-preserving randomization of the configuration model, and single edge replacement, corresponding to full randomization of the Erdős-Rényi random graph. We derive expressions for the time evolution of the degree distribution, edge multiplicity distribution and clustering coefficient. We show that under both dynamics networks undergo a continuous phase transition in which a giant connected component is formed. We calculate the position of the phase transition analytically using the Erdős-Rényi phenomenology.
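
    The double-edge-swap relaxation is simple to reproduce numerically; the sketch below starts from a graph made of disjoint cliques (so the clustering coefficient starts at 1) and tracks the transitivity as degree-preserving swaps accumulate. The starting graph and swap counts are assumptions, and the paper's analytical expressions are not reproduced.

        # Hedged sketch: clustering decay under degree-preserving double-edge swaps.
        import networkx as nx

        G = nx.caveman_graph(20, 5)            # 20 disjoint 5-cliques, transitivity = 1
        print("initial transitivity:", round(nx.transitivity(G), 3))

        for step in range(5):
            nx.double_edge_swap(G, nswap=100, max_tries=10000, seed=step)
            print(f"after {100 * (step + 1)} swaps:", round(nx.transitivity(G), 3))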

  19. Neural Networks in R Using the Stuttgart Neural Network Simulator: RSNNS

    Directory of Open Access Journals (Sweden)

    Christopher Bergmeir

    2012-01-01

    Full Text Available Neural networks are important standard machine learning procedures for classification and regression. We describe the R package RSNNS that provides a convenient interface to the popular Stuttgart Neural Network Simulator SNNS. The main features are (a) encapsulation of the relevant SNNS parts in a C++ class, for sequential and parallel usage of different networks, (b) accessibility of all of the SNNS algorithmic functionality from R using a low-level interface, and (c) a high-level interface for convenient, R-style usage of many standard neural network procedures. The package also includes functions for visualization and analysis of the models and the training procedures, as well as functions for data input/output from/to the original SNNS file formats.

  20. Investment Valuation Analysis with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Hüseyin İNCE

    2017-07-01

    Full Text Available This paper shows that discounted cash flow and net present value, which are traditional investment valuation models, can be combined with artificial neural network model forecasting. The main inputs for the valuation models, such as revenue, costs, capital expenditure, and their growth rates, are heavily related to sector dynamics and macroeconomics. The growth rates of those inputs are related to inflation and exchange rates. Therefore, predicting inflation and exchange rates is a critical issue for the valuation output. In this paper, the Turkish economy’s inflation rate and the exchange rate of USD/TRY are forecast by artificial neural networks and implemented to the discounted cash flow model. Finally, the results are benchmarked with conventional practices.
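
    The valuation step itself reduces to a standard discounted-cash-flow computation once the growth and discount assumptions are fixed; the sketch below uses placeholder numbers, with the growth rate standing in for the neural-network inflation/exchange-rate forecast described in the paper.

        # Hedged sketch of the DCF/NPV step with placeholder inputs.
        cash_flow_0 = 100.0          # current annual free cash flow (placeholder)
        growth_rate = 0.12           # forecast growth, e.g. driven by forecast inflation
        discount_rate = 0.18         # required rate of return (placeholder)
        initial_investment = 400.0
        years = 5

        npv = -initial_investment
        for year in range(1, years + 1):
            cash_flow = cash_flow_0 * (1 + growth_rate) ** year
            npv += cash_flow / (1 + discount_rate) ** year

        print(f"NPV over {years} years: {npv:.1f}")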

  1. Evaluating neural networks and artificial intelligence systems

    Science.gov (United States)

    Alberts, David S.

    1994-02-01

    Systems have no intrinsic value in and of themselves, but rather derive value from the contributions they make to the missions, decisions, and tasks they are intended to support. The estimation of the cost-effectiveness of systems is a prerequisite for rational planning, budgeting, and investment documents. Neural network and expert system applications, although similar in their incorporation of a significant amount of decision-making capability, differ from each other in ways that affect the manner in which they can be evaluated. Both these types of systems are, by definition, evolutionary systems, which also impacts their evaluation. This paper discusses key aspects of neural network and expert system applications and their impact on the evaluation process. A practical approach or methodology for evaluating a certain class of expert systems that are particularly difficult to measure using traditional evaluation approaches is presented.

  2. Artificial Neural Network for Displacement Vectors Determination

    Directory of Open Access Journals (Sweden)

    P. Bohmann

    1997-09-01

    Full Text Available An artificial neural network (NN) for displacement vector (DV) determination is presented in this paper. DVs are computed in areas which are essential for image analysis and computer vision, i.e. areas containing edges, lines, corners, etc. These special features are found by edge operators followed by filtering. The filtering is performed by a threshold function. The next step is DV computation by a 2D Hamming artificial neural network. The method of DV computation is based on full-search block matching algorithms. The pre-processing (edge finding) is the reason why the correlation function is very simple, the process of DV determination needs less computation, and the structure of the NN is simpler.
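
    The full-search block matching that the DV computation is based on can be sketched in a few lines of NumPy; the edge pre-processing and the 2D Hamming network stage are omitted, and the block size, search range and test images are assumptions.

        # Hedged sketch of full-search block matching by minimum sum of absolute differences.
        import numpy as np

        def best_displacement(ref, cur, top, left, block=8, search=4):
            """Displacement (dy, dx) of the block at (top, left) in `ref` that best matches `cur`."""
            template = ref[top:top + block, left:left + block]
            best_sad, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + block > cur.shape[0] or x + block > cur.shape[1]:
                        continue
                    sad = np.abs(cur[y:y + block, x:x + block] - template).sum()
                    if sad < best_sad:
                        best_sad, best_dv = sad, (dy, dx)
            return best_dv

        rng = np.random.default_rng(0)
        ref = rng.random((64, 64))
        cur = np.roll(ref, shift=(2, -3), axis=(0, 1))        # second frame shifted by (2, -3)
        print(best_displacement(ref, cur, top=20, left=20))   # expected: (2, -3)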

  3. Neural Network Program Package for Prosody Modeling

    Directory of Open Access Journals (Sweden)

    J. Santarius

    2004-04-01

    Full Text Available This contribution describes the programme for one part of the automatic Text-to-Speech (TTS) synthesis. Some experiments (for example [14]) documented the considerable improvement of the naturalness of synthetic speech, but this approach requires completing the input feature values by hand. This completing takes a lot of time for big files. We need to improve the prosody by other approaches which use only automatically classified features (input parameters). The artificial neural network (ANN) approach is used for the modeling of prosody parameters. The program package contains all modules necessary for the text and speech signal pre-processing, neural network training, sensitivity analysis, result processing and a module for the creation of the input data protocol for the Czech speech synthesizer ARTIC [1].

  4. Supervised Sequence Labelling with Recurrent Neural Networks

    CERN Document Server

    Graves, Alex

    2012-01-01

    Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However their role in large-scale sequence labelling systems has so far been auxiliary.    The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional...

  5. Hierarchical Neural Network Structures for Phoneme Recognition

    CERN Document Server

    Vasquez, Daniel; Minker, Wolfgang

    2013-01-01

    In this book, hierarchical structures based on neural networks are investigated for automatic speech recognition. These structures are evaluated on the phoneme recognition task where a Hybrid Hidden Markov Model/Artificial Neural Network paradigm is used. The baseline hierarchical scheme consists of two levels, each of which is based on a Multilayered Perceptron. Additionally, the output of the first level serves as a second level input. The computational speed of the phoneme recognizer can be substantially increased by removing redundant information still contained in the first level output. Several techniques based on temporal and phonetic criteria have been investigated to remove this redundant information. The computational time could be reduced by 57% whilst keeping the system accuracy comparable to the baseline hierarchical approach.

  6. Detecting Clusters/Communities in Social Networks.

    Science.gov (United States)

    Hoffman, Michaela; Steinley, Douglas; Gates, Kathleen M; Prinstein, Mitchell J; Brusco, Michael J

    2018-01-01

    Cohen's κ, originally a measure of inter-rater agreement for categorical data, has since been applied to problems in the data mining field such as cluster analysis and network link prediction. In this paper, a new application is examined: community detection in networks. A new algorithm is proposed that uses Cohen's κ as a similarity measure for each pair of nodes; subsequently, the κ values are clustered to detect the communities. This paper defines and tests this method on a variety of simulated and real networks. The results are compared with those from eight other community detection algorithms. Results show this new algorithm is consistently among the top performers in classifying data points both on simulated and real networks. Additionally, this is one of the broadest comparative simulations for comparing community detection algorithms to date.
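
    The similarity step can be sketched directly: Cohen's κ is computed between the adjacency rows of every pair of nodes, and the resulting matrix would then be clustered to obtain communities. The example graph and the use of scikit-learn's cohen_kappa_score are assumptions of this sketch, not details from the paper.

        # Hedged sketch: pairwise Cohen's kappa between node adjacency rows.
        import numpy as np
        import networkx as nx
        from sklearn.metrics import cohen_kappa_score

        G = nx.karate_club_graph()
        A = nx.to_numpy_array(G).astype(int)
        n = A.shape[0]

        kappa = np.eye(n)
        for i in range(n):
            for j in range(i + 1, n):
                kappa[i, j] = kappa[j, i] = cohen_kappa_score(A[i], A[j])

        # kappa is the similarity matrix one would feed to a clustering routine
        print("kappa similarity between nodes 0 and 1:", round(kappa[0, 1], 3))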

  7. Neural Network Solves "Traveling-Salesman" Problem

    Science.gov (United States)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.

    1990-01-01

    Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is a paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to a large number of cities (or resources), circuits of this kind are expected to solve the problem faster and more cheaply.
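    As a rough software analogue of the analog circuit described in this record, a Hopfield-Tank style network can be simulated in which unit (i, t) stands for "city i is visited at tour position t" and an energy function penalises invalid assignments and long tours. The penalty weights, the random instance and the simple hill-descent update below are illustrative assumptions, not the authors' hardware design.

      # Hedged sketch of a Hopfield-Tank style network for the traveling-salesman problem.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 5
      cities = rng.random((N, 2))
      D = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)

      A_pen, B_pen, C_pen = 500.0, 500.0, 1.0            # penalty weights (assumed values)

      def energy(V):
          row = ((V.sum(axis=1) - 1) ** 2).sum()         # each city appears exactly once
          col = ((V.sum(axis=0) - 1) ** 2).sum()         # each position holds exactly one city
          tour = sum(V[:, t] @ D @ V[:, (t + 1) % N] for t in range(N))
          return A_pen * row + B_pen * col + C_pen * tour

      V = (rng.random((N, N)) > 0.5).astype(float)       # V[i, t] ~ "city i at position t"
      for sweep in range(2000):                          # asynchronous energy descent
          i, t = rng.integers(N), rng.integers(N)
          trial = V.copy()
          trial[i, t] = 1.0 - trial[i, t]
          if energy(trial) <= energy(V):
              V = trial

      order = np.argmax(V, axis=0)                       # read out a (possibly approximate) tour
      print("visit order:", order, "final energy:", round(energy(V), 3))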

  8. Learning in Neural Networks: VLSI Implementation Strategies

    Science.gov (United States)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  9. Convolutional Neural Networks for Font Classification

    OpenAIRE

    Tensmeyer, Chris; Saunders, Daniel; Martinez, Tony

    2017-01-01

    Classifying pages or text lines into font categories aids transcription because single font Optical Character Recognition (OCR) is generally more accurate than omni-font OCR. We present a simple framework based on Convolutional Neural Networks (CNNs), where a CNN is trained to classify small patches of text into predefined font classes. To classify page or line images, we average the CNN predictions over densely extracted patches. We show that this method achieves state-of-the-art performance...
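    The patch-voting idea summarised in this record is easy to sketch: a small CNN classifies fixed-size patches of text into font classes, and a page- or line-level decision is the average of the patch-level softmax outputs. The architecture, 32x32 patch size and class count below are assumptions for illustration, not the authors' exact network.

      # Hedged sketch: patch-level CNN font classification with prediction averaging.
      import torch
      import torch.nn as nn

      class PatchFontCNN(nn.Module):
          def __init__(self, n_fonts=10):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Linear(32 * 8 * 8, n_fonts)  # assumes 32x32 input patches

          def forward(self, x):
              return self.classifier(self.features(x).flatten(1))

      def classify_page(model, patches):
          """patches: (num_patches, 1, 32, 32) tensor densely extracted from one page."""
          with torch.no_grad():
              probs = torch.softmax(model(patches), dim=1)
          return probs.mean(dim=0).argmax().item()       # average patch predictions, then decide

      model = PatchFontCNN()
      fake_patches = torch.rand(64, 1, 32, 32)           # stand-in for extracted text patches
      print("predicted font class:", classify_page(model, fake_patches))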

  10. Deep Learning in Neural Networks: An Overview

    OpenAIRE

    Schmidhuber, Juergen

    2014-01-01

    In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpr...

  11. A Dynamic Neural Network Approach to CBM

    Science.gov (United States)

    2011-03-15

    Therefore post-processing is needed to extract the time difference between corresponding events from which to calculate the crankshaft rotational speed...potentially already available from existing sensors (such as a crankshaft timing device) and a Neural Network processor to carry out the calculation. As...files are designated with the "_genmod" suffix. These files were the sources for the training and testing sets and made the extraction process easy

  12. Artificial neural network cardiopulmonary modeling and diagnosis

    Science.gov (United States)

    Kangas, Lars J.; Keller, Paul E.

    1997-01-01

    The present invention is a method of diagnosing a cardiopulmonary condition in an individual by comparing data from a progressive multi-stage test for the individual to a non-linear multi-variate model, preferably a recurrent artificial neural network having sensor fusion. The present invention relies on a cardiovascular model developed from physiological measurements of an individual. Any differences between the modeled parameters and the parameters of an individual at a given time are used for diagnosis.

  13. Identifying Tracks Duplicates via Neural Network

    CERN Document Server

    Sunjerga, Antonio; CERN. Geneva. EP Department

    2017-01-01

    The goal of the project is to study the feasibility of state-of-the-art machine learning techniques in track reconstruction. Machine learning techniques provide promising ways to speed up the pattern recognition of tracks by adding more intelligence to the algorithms. The implementation of a neural network for identifying track duplicates is discussed. Different approaches are shown and the results are compared to the method that is currently in use.

  14. Portable Rule Extraction Method for Neural Network Decisions Reasoning

    Directory of Open Access Journals (Sweden)

    Darius PLIKYNAS

    2005-08-01

    Full Text Available Neural network (NN) methods are sometimes useless in practical applications because they are not properly tailored to the particular market's needs. We focus hereinafter specifically on financial market applications, where NNs have not yet gained full acceptance. One of the main reasons is the "Black Box" problem (the lack of explanatory power of NN decisions). There are some NN decision rule extraction methods, such as decompositional, pedagogical or eclectic approaches, but they suffer from low portability of the rule extraction technique across various neural net architectures, a high level of granularity, algorithmic sophistication of the rule extraction technique, etc. The authors propose to eliminate some known drawbacks using an innovative extension of the pedagogical approach. The idea is exposed using a widespread MLP neural net (a common tool in the financial problem domain) and a SOM (for input data space clustering). The feedback of both nets' performance is related and targeted through the iteration cycle by achieving the best match between the decision space fragments and the input data space clusters. Three sets of rules are generated algorithmically or by fuzzy membership functions. Empirical validation on common financial benchmark problems is conducted with an appropriately prepared software solution.

  15. Multilingual Text Detection with Nonlinear Neural Network

    Directory of Open Access Journals (Sweden)

    Lin Li

    2015-01-01

    Full Text Available Multilingual text detection in natural scenes is still a challenging task in computer vision. In this paper, we apply an unsupervised learning algorithm to learn language-independent stroke features and combine unsupervised stroke feature learning with automatic multilayer feature extraction to improve the representational power of the text features. We also develop a novel nonlinear network based on the traditional Convolutional Neural Network that is able to detect multilingual text regions in images. The proposed method is evaluated on standard benchmarks and a multilingual dataset and demonstrates improvement over previous work.

  16. Forecasting Energy Commodity Prices Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Massimo Panella

    2012-01-01

    Full Text Available A new machine learning approach for price modeling is proposed. Neural networks, as an advanced signal processing tool, may be successfully used to model and forecast energy commodity prices, such as crude oil, coal, natural gas, and electricity prices. Energy commodities have shown explosive growth in the last decade and have become a new asset class used also for investment purposes. This creates a huge demand for better modeling, much as occurred in the stock markets in the 1970s. Their price behavior presents unique features causing complex dynamics whose prediction is regarded as a challenging task. The use of a Mixture of Gaussian neural network may provide significant improvements with respect to other well-known models. We propose a computationally efficient learning procedure for this neural network, using the maximum likelihood estimation approach to calibrate the parameters. The optimal model is identified using a hierarchical constructive procedure that progressively increases the model complexity. Extensive computer simulations validate the proposed approach and provide an accurate description of commodity price dynamics.
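    A much-simplified sketch of the core idea, maximum-likelihood calibration of a Gaussian-mixture model of returns whose complexity is grown constructively, can be written with scikit-learn; the actual Mixture of Gaussian neural network in the paper is more elaborate, so the synthetic return series, the component counts and the held-out likelihood selection rule here are assumptions.

      # Hedged sketch: constructively growing a Gaussian mixture model of commodity returns.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      # Stand-in for daily log-returns of an energy commodity price series
      returns = np.concatenate([rng.normal(0, 0.01, 800), rng.normal(0, 0.05, 200)])
      rng.shuffle(returns)
      X = returns.reshape(-1, 1)
      train, valid = X[:800], X[800:]

      best_model, best_ll = None, -np.inf
      for k in range(1, 6):                              # progressively increase model complexity
          gmm = GaussianMixture(n_components=k, random_state=0).fit(train)
          ll = gmm.score(valid)                          # mean held-out log-likelihood
          if ll > best_ll:
              best_model, best_ll = gmm, ll

      print("selected components:", best_model.n_components,
            "held-out log-likelihood per sample:", round(best_ll, 3))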

  17. Flood estimation: a neural network approach

    Energy Technology Data Exchange (ETDEWEB)

    Swain, P.C.; Seshachalam, C.; Umamahesh, N.V. [Regional Engineering Coll., Warangal (India). Water and Environment Div.

    2000-07-01

    The artificial neural network (ANN) approach described in this study aims at predicting the flood flow into a reservoir. This differs from the traditional methods of flow prediction in the sense that it belongs to a class of data-driven approaches, whereas the traditional methods are model driven. The physical processes influencing the occurrence of streamflow in a river are highly complex and very difficult to model with available statistical or deterministic models. ANNs provide model-free solutions and hence can be expected to be appropriate in these conditions. Non-linearity, adaptivity, evidential response and fault tolerance are additional properties and capabilities of neural networks. This paper highlights the applicability of neural networks for predicting daily flood flow, taking the Hirakud reservoir on the river Mahanadi in Orissa, India as the case study. The correlation between the observed and predicted flows and the relative error are used to measure the performance of the model. The correlation between the observed and the modelled flows is computed to be 0.9467 in the testing phase of the model. (orig.)
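    To make the evaluation criterion in this record concrete, a minimal sketch of a data-driven flow predictor is given below: a small multilayer perceptron maps the previous days' inflows to the next day's inflow and is scored with the correlation coefficient and relative error. The synthetic inflow series, lag structure and network size are assumptions, not the study's configuration for the Hirakud reservoir.

      # Hedged sketch: daily inflow prediction from lagged inflows, scored by correlation.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(2)
      flow = 100 + 50 * np.sin(np.arange(1200) / 30) + rng.normal(0, 5, 1200)  # synthetic inflow

      lags = 3
      X = np.column_stack([flow[i:len(flow) - lags + i] for i in range(lags)])
      y = flow[lags:]
      split = 900
      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                         random_state=0))
      model.fit(X[:split], y[:split])

      pred = model.predict(X[split:])
      corr = np.corrcoef(y[split:], pred)[0, 1]
      rel_err = np.mean(np.abs(pred - y[split:]) / y[split:])
      print(f"test correlation: {corr:.4f}, mean relative error: {rel_err:.4f}")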

  18. Identifying Broadband Rotational Spectra with Neural Networks

    Science.gov (United States)

    Zaleski, Daniel P.; Prozument, Kirill

    2017-06-01

    A typical broadband rotational spectrum may contain several thousand observable transitions, spanning many species. Identifying the individual spectra, particularly when the dynamic range reaches 1,000:1 or even 10,000:1, can be challenging. One approach is to apply automated fitting routines. In this approach, combinations of 3 transitions can be created to form a "triple", which allows fitting of the A, B, and C rotational constants in a Watson-type Hamiltonian. On a standard desktop computer, with a target molecule of interest, a typical AUTOFIT routine takes 2-12 hours depending on the spectral density. A new approach is to utilize machine learning to train a computer to recognize the patterns (frequency spacing and relative intensities) inherent in rotational spectra and to identify the individual spectra in a raw broadband rotational spectrum. Here, recurrent neural networks have been trained to identify different types of rotational spectra and classify them accordingly. Furthermore, early results in applying convolutional neural networks for spectral object recognition in broadband rotational spectra appear promising. Perez et al. "Broadband Fourier transform rotational spectroscopy for structure determination: The water heptamer." Chem. Phys. Lett., 2013, 571, 1-15. Seifert et al. "AUTOFIT, an Automated Fitting Tool for Broadband Rotational Spectra, and Applications to 1-Hexanal." J. Mol. Spectrosc., 2015, 312, 13-21. Bishop. "Neural networks for pattern recognition." Oxford university press, 1995.

  19. Artificial Neural Network Model for Predicting Compressive Strength of Concrete

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors of less than 10%. A parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
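    A minimal sketch of the kind of model this record describes, a back-propagation network mapping mix proportions, maximum aggregate size and slump to 28-day compressive strength, is shown below. The feature list, units, network size and the randomly generated stand-in data are assumptions; only the overall workflow mirrors the record.

      # Hedged sketch: back-propagation network for concrete compressive strength.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(3)
      n = 200
      # Columns: cement, water, fine agg., coarse agg. (kg/m^3), MAS (mm), slump (mm)
      X = np.column_stack([
          rng.uniform(250, 500, n), rng.uniform(140, 220, n),
          rng.uniform(600, 900, n), rng.uniform(900, 1200, n),
          rng.choice([10, 20, 40], n), rng.uniform(25, 200, n),
      ])
      w_c = X[:, 1] / X[:, 0]
      strength = 80 - 60 * w_c + rng.normal(0, 3, n)     # synthetic 28-day strength (MPa)

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                         random_state=0))
      model.fit(X[:150], strength[:150])
      pred = model.predict(X[150:])
      abs_err = np.abs(pred - strength[150:]) / np.abs(strength[150:]) * 100
      print(f"share of test predictions within 10% absolute error: {(abs_err < 10).mean():.2f}")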

  20. Artificial neural network applications in ionospheric studies

    Directory of Open Access Journals (Sweden)

    L. R. Cander

    1998-06-01

    Full Text Available The ionosphere of Earth exhibits considerable spatial changes and has large temporal variability on various timescales related to the mechanisms of creation, decay and transport of space ionospheric plasma. Many techniques for modelling electron density profiles through the entire ionosphere have been developed in order to solve the "age-old problem" of ionospheric physics, which has not yet been fully solved. A new way to address this problem is by applying artificial intelligence methodologies to the current large amounts of solar-terrestrial and ionospheric data. It is the aim of this paper to show, by the most recent examples, that modern development of numerical models for ionospheric monthly median long-term prediction and daily hourly short-term forecasting may proceed successfully by applying artificial neural networks. The performance of these techniques is illustrated with different artificial neural networks developed to model and predict the temporal and spatial variations of the ionospheric critical frequency, f0F2, and Total Electron Content (TEC). Comparisons between results obtained by the proposed approaches and measured f0F2 and TEC data provide prospects for future applications of artificial neural networks in ionospheric studies.

  1. CALIBRATION OF ONLINE ANALYZERS USING NEURAL NETWORKS

    Energy Technology Data Exchange (ETDEWEB)

    Rajive Ganguli; Daniel E. Walsh; Shaohai Yu

    2003-12-05

    Neural networks were used to calibrate an online ash analyzer at the Usibelli Coal Mine, Healy, Alaska, by relating the Americium and Cesium counts to the ash content. A total of 104 samples were collected from the mine, with 47 being from screened coal and the rest from unscreened coal. Each sample corresponded to 20 seconds of coal on the running conveyor belt. Neural network modeling used the quick stop training procedure; therefore, the samples were split into training, calibration and prediction subsets. Special techniques, using genetic algorithms, were developed to split the samples representatively into the three subsets. Two separate approaches were tried: in one, the screened and unscreened coal were modeled separately; in the other, a single model was developed for the entire dataset. No advantage was seen from modeling the two subsets separately. The neural network method performed very well on average but not individually, i.e. though each individual prediction was unreliable, the average of a few predictions was close to the true average. Thus, the method demonstrated that the analyzers were accurate at 2-3 minute intervals (averages of 6-9 samples), but not at 20 seconds (each prediction).

  2. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    Massive numbers of small unmanned aerial vehicles are envisioned to operate in the near future. While there are many research problems that need to be addressed before dense operations can happen, trajectory modeling remains one of the keys to understanding and developing policies, regulations, and requirements for safe and efficient unmanned aerial vehicle operations. The fidelity requirement of a small unmanned vehicle trajectory model is high because these vehicles are sensitive to winds due to their small size and low operational altitude. Both vehicle control systems and dynamic models are needed for trajectory modeling, which makes the modeling a great challenge, especially considering the fact that manufacturers are not willing to share their control systems. This work proposed to use a neural network approach for modelling a small unmanned vehicle's trajectory without knowing its control system and bypassing exhaustive efforts for aerodynamic parameter identification. As a proof of concept, instead of collecting data from flight tests, this work used the trajectory data generated by a mathematical vehicle model for training and testing the neural network. The results showed great promise because the trained neural network can predict 4D trajectories accurately, and prediction errors were less than 2.0 meters in both temporal and spatial dimensions.

  3. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.
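    The ensemble construction in this record can be sketched serially, leaving out the MapReduce and feature-extraction layers: 15 small back-propagation networks are trained as weak classifiers under AdaBoost-style reweighting (implemented here by resampling) and combined by weighted voting. The toy dataset, network size and resampling trick are illustrative assumptions rather than the authors' Hadoop implementation.

      # Hedged sketch: AdaBoost-style ensemble of BP neural networks (serial, no MapReduce).
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(4)
      X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                                 random_state=0)
      n_weak, n = 15, len(y)
      w = np.full(n, 1.0 / n)                            # sample weights
      models, alphas = [], []

      for m in range(n_weak):
          idx = rng.choice(n, size=n, p=w)               # resample according to weights
          clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=500, random_state=m)
          clf.fit(X[idx], y[idx])
          err = np.clip(np.average(clf.predict(X) != y, weights=w), 1e-10, 1 - 1e-10)
          alpha = 0.5 * np.log((1 - err) / err)          # weak-classifier weight
          margin = np.where(clf.predict(X) == y, 1, -1)
          w = w * np.exp(-alpha * margin)                # boost misclassified samples
          w = w / w.sum()
          models.append(clf)
          alphas.append(alpha)

      def strong_predict(Xq):
          votes = sum(a * np.where(c.predict(Xq) == 1, 1, -1) for a, c in zip(alphas, models))
          return (votes > 0).astype(int)

      print("training accuracy of the boosted ensemble:", (strong_predict(X) == y).mean())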

  4. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520

  5. A neural network model for texture discrimination.

    Science.gov (United States)

    Xing, J; Gerstein, G L

    1993-01-01

    A model of texture discrimination in visual cortex was built using a feedforward network with lateral interactions among relatively realistic spiking neural elements. The elements have various membrane currents, equilibrium potentials and time constants, with action potentials and synapses. The model is derived from the modified programs of MacGregor (1987). Gabor-like filters are applied to overlapping regions in the original image; the neural network with lateral excitatory and inhibitory interactions then compares and adjusts the Gabor amplitudes in order to produce the actual texture discrimination. Finally, a combination layer selects and groups various representations in the output of the network to form the final transformed image material. We show that both texture segmentation and detection of texture boundaries can be represented in the firing activity of such a network for a wide variety of images, from synthetic to natural. Performance details depend most strongly on the global balance of strengths of the excitatory and inhibitory lateral interconnections. The spatial distribution of lateral connective strengths has relatively little effect. Detailed temporal firing activities of single elements in the laterally connected network were examined under various stimulus conditions. Results show (as in area 17 of cortex) that a single element's response to image features local to its receptive field can be altered by changes in the global context.

  6. Categorization in neural networks and prosopagnosia

    Science.gov (United States)

    Virasoro, M. A.

    1989-12-01

    Prosopagnosia is a syndrome characterized by a generalized difficulty in visually recognizing individual patterns among those that are similar, and can therefore be said to belong to the same category. I suggest that the existence of this dysfunction may be an important clue for understanding the categorization process in the brain. In this direction, the performance of neural networks under random destruction of synapses is analysed. It is found that in almost every network that stores correlated patterns, the coding of the discriminating details between individuals inside a class is more sensitive to noise or to random destruction than the coding that distinguishes between classes. It follows that a process of death and/or deterioration at an intermediate level of intensity, even if it acts randomly on the network, may lead to a malfunctioning of the network that resembles prosopagnosia.

  7. Artificial Neural Network Analysis of Xinhui Pericarpium Citri ...

    African Journals Online (AJOL)

    Purpose: To develop an effective analytical method to distinguish old peels of Xinhui Pericarpium citri reticulatae (XPCR) stored for > 3 years from new peels stored for < 3 years. Methods: Artificial neural networks (ANN) models, including general regression neural network (GRNN) and multi-layer feedforward neural ...

  8. An efficient neural network based method for medical image segmentation.

    Science.gov (United States)

    Torbati, Nima; Ayatollahi, Ahmad; Kermani, Ali

    2014-01-01

    The aim of this research is to propose a new neural network based method for medical image segmentation. Firstly, a modified self-organizing map (SOM) network, named moving average SOM (MA-SOM), is utilized to segment medical images. After the initial segmentation stage, a merging process is designed to connect the objects of a joint cluster together. A two-dimensional (2D) discrete wavelet transform (DWT) is used to build the input feature space of the network. The experimental results show that MA-SOM is robust to noise and it determines the input image pattern properly. The segmentation results of breast ultrasound images (BUS) demonstrate that there is a significant correlation between the tumor region selected by a physician and the tumor region segmented by our proposed method. In addition, the proposed method segments X-ray computerized tomography (CT) and magnetic resonance (MR) head images much better than the incremental supervised neural network (ISNN) and SOM-based methods. © 2013 Published by Elsevier Ltd.

  9. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2015-01-01

    Full Text Available Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum from both industry and academia. To fulfil the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for facilitating data intensive applications. Three data intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster in terms of accuracy in classification and efficiency in computation.
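    The parallelization pattern this record relies on, map tasks computing partial results on data shards and a reduce step combining them, can be sketched for one training loop of a tiny single-layer network; plain Python functions over in-memory shards stand in for the Hadoop MapReduce jobs, and the data and learning rate are assumptions.

      # Hedged sketch: MapReduce-style data-parallel gradient steps for a one-layer network.
      import numpy as np

      rng = np.random.default_rng(5)
      X = rng.normal(size=(1200, 10))
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
      W = rng.normal(scale=0.1, size=(10, 1))            # single-layer (logistic) weights

      def map_task(shard):
          """Map: compute the gradient contribution of one data shard."""
          Xs, ys = shard
          p = 1.0 / (1.0 + np.exp(-Xs @ W)).ravel()      # sigmoid outputs
          return Xs.T @ (p - ys).reshape(-1, 1) / len(ys), len(ys)

      def reduce_task(partials):
          """Reduce: combine shard gradients, weighted by shard size."""
          total = sum(n for _, n in partials)
          return sum(g * n for g, n in partials) / total

      shards = [(X[i:i + 300], y[i:i + 300]) for i in range(0, 1200, 300)]
      for step in range(200):
          W -= 0.5 * reduce_task([map_task(s) for s in shards])

      acc = (((1.0 / (1.0 + np.exp(-X @ W))).ravel() > 0.5) == y.astype(bool)).mean()
      print("training accuracy after parallel-style training:", acc)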

  10. A Hybrid Neural Network and Virtual Reality System for Spatial Language Processing

    OpenAIRE

    Martinez, Guillermina; Cangelosi, Angelo; Coventry, Kenny

    2001-01-01

    This paper describes a neural network model for the study of spatial language. It deals with both geometric and functional variables, which have been shown to play an important role in the comprehension of spatial prepositions. The network is integrated with a virtual reality interface for the direct manipulation of geometric and functional factors. The training uses experimental stimuli and data. Results show that the networks reach low training and generalization errors. Cluster analyses of...

  11. Clustering context-specific gene regulatory networks.

    Science.gov (United States)

    Ramesh, Archana; Trevino, Robert; VON Hoff, Daniel D; Kim, Seungchan

    2010-01-01

    Gene regulatory networks (GRNs) learned from high throughput genomic data are often hard to visualize due to the large number of nodes and edges involved, rendering them difficult to appreciate. This becomes an important issue when modular structures are inherent in the inferred networks, such as in the recently proposed context-specific GRNs.(12) In this study, we investigate the application of graph clustering techniques to discern modularity in such highly complex graphs, focusing on context-specific GRNs. Identified modules are then associated with a subset of samples and the key pathways enriched in the module. Specifically, we study the use of Markov clustering and spectral clustering on cancer datasets to yield evidence on the possible association amongst different tumor types. Two sets of gene expression profiling data were analyzed to reveal context-specificity as well as modularity in genomic regulations.

  12. UAV Trajectory Modeling Using Neural Networks

    Science.gov (United States)

    Xue, Min

    2017-01-01

    A large number of small Unmanned Aerial Vehicles (sUAVs) is projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, which is at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrains, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in this low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insights into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown as they are usually kept by manufacturers as part of their intellectual property. That brings challenges to trajectory modeling for sUAVs: how can a vehicle's trajectory be modeled with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses under numerous conditions. Once being fully trained, given current vehicle states, winds, and desired future trajectory, the neural

  13. Computing networks from cluster to cloud computing

    CERN Document Server

    Vicat-Blanc, Pascale; Guillier, Romaric; Soudan, Sebastien

    2013-01-01

    "Computing Networks" explores the core of the new distributed computing infrastructures we are using today:  the networking systems of clusters, grids and clouds. It helps network designers and distributed-application developers and users to better understand the technologies, specificities, constraints and benefits of these different infrastructures' communication systems. Cloud Computing will give the possibility for millions of users to process data anytime, anywhere, while being eco-friendly. In order to deliver this emerging traffic in a timely, cost-efficient, energy-efficient, and

  14. Runoff Modelling in Urban Storm Drainage by Neural Networks

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Brorsen, Michael; Schaarup-Jensen, Kjeld

    1995-01-01

    A neural network is used to simulate flow and water levels in a sewer system. The calibration of the neural network is based on a few measured events, and the network is validated against measured events as well as flow simulated with the MOUSE model (Lindberg and Joergensen, 1986). The neural network is used to compute flow or water level at selected points in the sewer system, and to forecast the flow from a small residential area. The main advantages of the neural network are the built-in self-calibration procedure and high-speed performance, but the neural network cannot be used to extract knowledge of the runoff process. The neural network was found to simulate 150 times faster than e.g. the MOUSE model.

  15. Network traffic anomaly prediction using Artificial Neural Network

    Science.gov (United States)

    Ciptaningtyas, Hening Titi; Fatichah, Chastine; Sabila, Altea

    2017-03-01

    With the excessive increase in internet usage, malicious software (malware) has also increased significantly. Malware is software developed by hackers for illegal purposes, such as stealing data and identities, causing computer damage, or denying service to other users [1]. Malware which attacks a computer or server often triggers network traffic anomaly phenomena. Based on Sophos's report [2], Indonesia is the country at greatest risk of malware attack and it also has high network traffic anomaly. This research uses an Artificial Neural Network (ANN) to predict network traffic anomalies based on malware attacks in Indonesia recorded by Id-SIRTII/CC (Indonesia Security Incident Response Team on Internet Infrastructure/Coordination Center). The case study is the most frequent malware attack (SQL injection), which occurred in three consecutive years: 2012, 2013, and 2014 [4]. The data series is preprocessed first, then the network traffic anomaly is predicted using an Artificial Neural Network with two weight update algorithms: Gradient Descent and Momentum. The prediction error is calculated using the Mean Squared Error (MSE) [7]. The experimental results show that the MSE for SQL injection is 0.03856. So, this approach can be used to predict network traffic anomalies.
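    A compact illustration of the two training variants mentioned here, plain gradient descent versus gradient descent with momentum, is given below for one-step-ahead prediction of an anomaly-count series with a small feedforward network. The synthetic series, window length, network size and learning rate are assumptions standing in for the Id-SIRTII/CC data and the paper's exact setup.

      # Hedged sketch: one-step-ahead prediction with a small network, with and without momentum.
      import numpy as np

      rng = np.random.default_rng(6)
      series = np.abs(np.sin(np.arange(400) / 20)) + rng.normal(0, 0.05, 400)  # stand-in traffic counts
      win = 5
      X = np.array([series[i:i + win] for i in range(len(series) - win)])
      y = series[win:]

      def train(momentum=0.0, lr=0.02, epochs=500, hidden=8):
          W1 = rng.normal(scale=0.5, size=(win, hidden))
          W2 = rng.normal(scale=0.5, size=(hidden, 1))
          v1, v2 = np.zeros_like(W1), np.zeros_like(W2)
          for _ in range(epochs):
              h = np.tanh(X @ W1)                        # hidden layer
              err = (h @ W2).ravel() - y                 # prediction error
              g2 = h.T @ err[:, None] / len(y)
              g1 = X.T @ ((err[:, None] @ W2.T) * (1 - h ** 2)) / len(y)
              v1 = momentum * v1 - lr * g1               # momentum-smoothed steps
              v2 = momentum * v2 - lr * g2
              W1, W2 = W1 + v1, W2 + v2
          return np.mean(((np.tanh(X @ W1) @ W2).ravel() - y) ** 2)

      print("MSE, plain gradient descent:", round(train(momentum=0.0), 5))
      print("MSE, gradient descent with momentum:", round(train(momentum=0.9), 5))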

  16. Neural network controller for underwater work ROV. Suichu sagyoyo ROV no neural network controller

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Y.; Kidoshi, H.; Arahata, M.; Shoji, K.; Takahashi, Y. (Ishikawajima-Harima Heavy Industries, Co. Ltd., Tokyo (Japan))

    1993-07-01

    The previous underwater work ROV (remotely operated vehicle) has been controlled manually because its dynamic properties are changeable underwater. Ishikawajima-Harima Heavy Industries (IHI) has applied a neural network to an adaptive controller for the ROV. This paper describes the objectives of the research, the design of the control logic, and tank experiments on a model ROV. For the neural network, manual operation was used to provide the initial learning data in order to initialize the control parameters for optimization. The model ROV was designed to achieve and maintain a constant depth in normal operation. The tank experiments demonstrated that the controller can acquire the skill of operators, can further improve that acquired skill, and can construct an automatic control system autonomously even if the dynamic properties are not known. 6 refs., 8 figs.

  17. Neural Networks of Colored Sequence Synesthesia

    Science.gov (United States)

    Narayan, Manjari; Allen, Genevera I.

    2013-01-01

    Synesthesia is a condition in which normal stimuli can trigger anomalous associations. In this study, we exploit synesthesia to understand how the synesthetic experience can be explained by subtle changes in network properties. Of the many forms of synesthesia, we focus on colored sequence synesthesia, a form in which colors are associated with overlearned sequences, such as numbers and letters (graphemes). Previous studies have characterized synesthesia using resting-state connectivity or stimulus-driven analyses, but it remains unclear how network properties change as synesthetes move from one condition to another. To address this gap, we used functional MRI in humans to identify grapheme-specific brain regions, thereby constructing a functional “synesthetic” network. We then explored functional connectivity of color and grapheme regions during a synesthesia-inducing fMRI paradigm involving rest, auditory grapheme stimulation, and audiovisual grapheme stimulation. Using Markov networks to represent direct relationships between regions, we found that synesthetes had more connections during rest and auditory conditions. We then expanded the network space to include 90 anatomical regions, revealing that synesthetes tightly cluster in visual regions, whereas controls cluster in parietal and frontal regions. Together, these results suggest that synesthetes have increased connectivity between grapheme and color regions, and that synesthetes use visual regions to a greater extent than controls when presented with dynamic grapheme stimulation. These data suggest that synesthesia is better characterized by studying global network dynamics than by individual properties of a single brain region. PMID:23986245

  18. Neural networks of colored sequence synesthesia.

    Science.gov (United States)

    Tomson, Steffie N; Narayan, Manjari; Allen, Genevera I; Eagleman, David M

    2013-08-28

    Synesthesia is a condition in which normal stimuli can trigger anomalous associations. In this study, we exploit synesthesia to understand how the synesthetic experience can be explained by subtle changes in network properties. Of the many forms of synesthesia, we focus on colored sequence synesthesia, a form in which colors are associated with overlearned sequences, such as numbers and letters (graphemes). Previous studies have characterized synesthesia using resting-state connectivity or stimulus-driven analyses, but it remains unclear how network properties change as synesthetes move from one condition to another. To address this gap, we used functional MRI in humans to identify grapheme-specific brain regions, thereby constructing a functional "synesthetic" network. We then explored functional connectivity of color and grapheme regions during a synesthesia-inducing fMRI paradigm involving rest, auditory grapheme stimulation, and audiovisual grapheme stimulation. Using Markov networks to represent direct relationships between regions, we found that synesthetes had more connections during rest and auditory conditions. We then expanded the network space to include 90 anatomical regions, revealing that synesthetes tightly cluster in visual regions, whereas controls cluster in parietal and frontal regions. Together, these results suggest that synesthetes have increased connectivity between grapheme and color regions, and that synesthetes use visual regions to a greater extent than controls when presented with dynamic grapheme stimulation. These data suggest that synesthesia is better characterized by studying global network dynamics than by individual properties of a single brain region.

  19. Neural Networks that Learn Temporal Sequences by Selection

    Science.gov (United States)

    Dehaene, Stanislas; Changeux, Jean-Pierre; Nadal, Jean-Pierre

    1987-05-01

    A model for formal neural networks that learn temporal sequences by selection is proposed on the basis of observations on the acquisition of song by birds, on sequence-detecting neurons, and on allosteric receptors. The model relies on hypothetical elementary devices made up of three neurons, the synaptic triads, which yield short-term modification of synaptic efficacy through heterosynaptic interactions, and on a local Hebbian learning rule. The functional units postulated are mutually inhibiting clusters of synergic neurons and bundles of synapses. Networks formalized on this basis display capacities for passive recognition and for production of temporal sequences that may include repetitions. Introduction of the learning rule leads to the differentiation of sequence-detecting neurons and to the stabilization of ongoing temporal sequences. A network architecture composed of three layers of neuronal clusters is shown to exhibit active recognition and learning of time sequences by selection: the network spontaneously produces prerepresentations that are selected according to their resonance with the input percepts. Predictions of the model are discussed.

  20. Marginalization in Random Nonlinear Neural Networks

    Science.gov (United States)

    Vasudeva Raju, Rajkumar; Pitkow, Xaq

    2015-03-01

    Computations involved in tasks like causal reasoning in the brain require a type of probabilistic inference known as marginalization. Marginalization corresponds to averaging over irrelevant variables to obtain the probability of the variables of interest. This is a fundamental operation that arises whenever input stimuli depend on several variables, but only some are task-relevant. Animals often exhibit behavior consistent with marginalizing over some variables, but the neural substrate of this computation is unknown. It has been previously shown (Beck et al. 2011) that marginalization can be performed optimally by a deterministic nonlinear network that implements a quadratic interaction of neural activity with divisive normalization. We show that a simpler network can perform essentially the same computation. These Random Nonlinear Networks (RNN) are feedforward networks with one hidden layer, sigmoidal activation functions, and normally-distributed weights connecting the input and hidden layers. We train the output weights connecting the hidden units to an output population, such that the output model accurately represents a desired marginal probability distribution without significant information loss compared to optimal marginalization. Simulations for the case of linear coordinate transformations show that the RNN model has good marginalization performance, except for highly uncertain inputs that have low amplitude population responses. Behavioral experiments, based on these results, could then be used to identify if this model does indeed explain how the brain performs marginalization.
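    A stripped-down sketch of the Random Nonlinear Network idea described above, fixed random input-to-hidden weights, sigmoidal hidden units and only the output weights trained, is given for the linear coordinate-transformation example mentioned in the record: reading out a population code for z = x + y from noisy codes for x and y. The Poisson population-code model, the ridge-regression fit of the output weights and the peak decoder are assumptions for illustration.

      # Hedged sketch: Random Nonlinear Network (random sigmoid hidden layer, trained readout).
      import numpy as np

      rng = np.random.default_rng(7)
      N, H, n_trials = 20, 200, 2000
      pref = np.linspace(-2, 2, N)                       # preferred values of the coding neurons

      def pop_code(v, gain=10.0):
          """Noisy Gaussian-bump population response to value v."""
          rate = gain * np.exp(-(pref - v) ** 2 / 0.5)
          return rng.poisson(rate).astype(float)

      x = rng.uniform(-1, 1, n_trials)
      y = rng.uniform(-1, 1, n_trials)
      R_in = np.array([np.concatenate([pop_code(a), pop_code(b)]) for a, b in zip(x, y)])
      R_out = np.array([np.exp(-(pref - (a + b)) ** 2 / 0.5) for a, b in zip(x, y)])  # target code for x+y

      W_in = rng.normal(scale=0.1, size=(2 * N, H))      # fixed random input weights
      Hid = 1.0 / (1.0 + np.exp(-(R_in @ W_in)))         # sigmoidal hidden layer

      lam = 1e-2                                         # ridge-regularised readout fit
      W_out = np.linalg.solve(Hid.T @ Hid + lam * np.eye(H), Hid.T @ R_out)

      decoded = pref[np.argmax(Hid @ W_out, axis=1)]     # crude decoder: peak of the output code
      print("mean |decoding error| for x + y:", np.abs(decoded - (x + y)).mean().round(3))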

  1. Neural Network Model of memory retrieval

    Directory of Open Access Journals (Sweden)

    Stefano Recanatesi

    2015-12-01

    Full Text Available Human memory can store a large amount of information. Nevertheless, recalling is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding to (1) single memory representations and (2) intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predicts the distribution of time intervals required to recall new memory items observed in experiments. It shows that items having a larger number of neurons in their representation are statistically easier to recall, and reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013).
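    To make the retrieval mechanism in this record concrete, a small sketch of a binary Hopfield-style network is shown below, in which a slowly oscillating global threshold plus noise can push the state from the neighbourhood of one stored pattern toward another. The pattern statistics, the inhibition waveform and the asynchronous update rule are assumptions for illustration, not the paper's exact model.

      # Hedged sketch: Hopfield-style retrieval with oscillating inhibition and noise.
      import numpy as np

      rng = np.random.default_rng(8)
      N, P, f = 200, 5, 0.2
      patterns = (rng.random((P, N)) < f).astype(float)            # sparse stored memories
      J = (patterns - f).T @ (patterns - f) / N                    # Hebbian-style couplings
      np.fill_diagonal(J, 0.0)

      state = patterns[0].copy()                                   # start in the first memory
      visited = []
      for t in range(3000):
          theta = 0.05 + 0.04 * np.sin(2 * np.pi * t / 500)        # oscillating global inhibition
          i = rng.integers(N)
          field = J[i] @ state - theta + rng.normal(0, 0.01)       # noisy local field
          state[i] = 1.0 if field > 0 else 0.0
          overlaps = patterns @ state / patterns.sum(axis=1)       # similarity to each memory
          visited.append(int(np.argmax(overlaps)))

      print("memories visited over time (in order of first visit):", list(dict.fromkeys(visited)))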

  2. Spectral redemption in clustering sparse networks.

    Science.gov (United States)

    Krzakala, Florent; Moore, Cristopher; Mossel, Elchanan; Neeman, Joe; Sly, Allan; Zdeborová, Lenka; Zhang, Pan

    2013-12-24

    Spectral algorithms are classic approaches to clustering and community detection in networks. However, for sparse networks the standard versions of these algorithms are suboptimal, in some cases completely failing to detect communities even when other algorithms such as belief propagation can do so. Here, we present a class of spectral algorithms based on a nonbacktracking walk on the directed edges of the graph. The spectrum of this operator is much better-behaved than that of the adjacency matrix or other commonly used matrices, maintaining a strong separation between the bulk eigenvalues and the eigenvalues relevant to community structure even in the sparse case. We show that our algorithm is optimal for graphs generated by the stochastic block model, detecting communities all of the way down to the theoretical limit. We also show the spectrum of the nonbacktracking operator for some real-world networks, illustrating its advantages over traditional spectral clustering.
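    The construction in this record can be sketched with the 2n x 2n linearization whose spectrum contains the nontrivial eigenvalues of the nonbacktracking operator; the second eigenvector's sign pattern then splits the nodes into two communities. The stochastic block model parameters below are assumptions, and only the two-group case is illustrated.

      # Hedged sketch: two-community detection via the nonbacktracking linearization.
      import numpy as np

      rng = np.random.default_rng(9)
      n, p_in, p_out = 100, 0.08, 0.01                   # sparse two-block stochastic block model
      labels = np.repeat([0, 1], n // 2)
      prob = np.where(labels[:, None] == labels[None, :], p_in, p_out)
      A = (rng.random((n, n)) < prob).astype(float)
      A = np.triu(A, 1)
      A = A + A.T                                        # simple undirected graph

      D = np.diag(A.sum(axis=1))
      I = np.eye(n)
      # 2n x 2n matrix sharing the nontrivial spectrum of the nonbacktracking operator
      Bp = np.block([[A, I - D],
                     [I, np.zeros((n, n))]])

      vals, vecs = np.linalg.eig(Bp)
      order = np.argsort(-vals.real)                     # sort eigenvalues by real part
      v2 = vecs[:, order[1]].real[:n]                    # second eigenvector, node component
      pred = (v2 > 0).astype(int)

      agreement = max((pred == labels).mean(), ((1 - pred) == labels).mean())
      print("fraction of nodes assigned to the correct group:", round(agreement, 3))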

  3. Neural Network Control of Asymmetrical Multilevel Converters

    Directory of Open Access Journals (Sweden)

    Patrice WIRA

    2009-12-01

    Full Text Available This paper proposes a neural implementation of a harmonic elimination strategy (HES) to control a Uniform Step Asymmetrical Multilevel Inverter (USAMI). The mapping between the modulation rate and the required switching angles is learned and approximated with a Multi-Layer Perceptron (MLP) neural network. After learning, appropriate switching angles can be determined with the neural network, leading to a low-computational-cost neural controller which is well suited for real-time applications. This technique can be applied to multilevel inverters with any number of levels. As an example, a nine-level inverter and an eleven-level inverter are considered and the optimum switching angles are calculated on-line. Comparisons to the well-known sinusoidal pulse-width modulation (SPWM) have been carried out in order to evaluate the performance of the proposed approach. Simulation results demonstrate the technical advantages of the proposed neural implementation over the conventional method (SPWM) in eliminating harmonics while controlling a nine-level and an eleven-level USAMI. This neural approach is applied to the supply of an asynchronous machine, and results show that it ensures the highest quality torque by efficiently canceling the harmonics generated by the inverters.
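    The runtime shortcut described here, learning the modulation-rate-to-switching-angle map off-line and then evaluating the network on-line, can be sketched as follows. The synthetic angle targets stand in for angles that would really come from solving the harmonic-elimination equations, and the network size and angle count are assumptions.

      # Hedged sketch: MLP approximating the modulation-rate -> switching-angles mapping.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      m = np.linspace(0.4, 1.0, 200).reshape(-1, 1)      # modulation rates
      # Stand-in for off-line harmonic-elimination solutions (4 angles per rate, in radians)
      angles = np.column_stack([np.arcsin(np.clip(k * 0.2 / m.ravel(), 0.0, 1.0))
                                for k in range(1, 5)])

      mlp = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
      mlp.fit(m, angles)

      # On-line use: one cheap forward pass instead of solving nonlinear equations
      query = np.array([[0.85]])
      print("switching angles (rad) at m = 0.85:", np.round(mlp.predict(query)[0], 3))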

  4. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.

  5. Flood routing modelling with Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    R. Peters

    2006-01-01

    Full Text Available For the modelling of the flood routing in the lower reaches of the Freiberger Mulde river and its tributaries the one-dimensional hydrodynamic modelling system HEC-RAS has been applied. Furthermore, this model was used to generate a database to train multilayer feedforward networks. To guarantee numerical stability for the hydrodynamic modelling of some 60 km of streamcourse an adequate resolution in space requires very small calculation time steps, which are some two orders of magnitude smaller than the input data resolution. This leads to quite high computation requirements seriously restricting the application – especially when dealing with real time operations such as online flood forecasting. In order to solve this problem we tested the application of Artificial Neural Networks (ANN. First studies show the ability of adequately trained multilayer feedforward networks (MLFN to reproduce the model performance.

  6. Quantum generalisation of feedforward neural networks

    Science.gov (United States)

    Wan, Kwok Ho; Dahlsten, Oscar; Kristjánsson, Hlér; Gardner, Robert; Kim, M. S.

    2017-09-01

    We propose a quantum generalisation of a classical neural network. The classical neurons are firstly rendered reversible by adding ancillary bits. Then they are generalised to being quantum reversible, i.e., unitary (the classical networks we generalise are called feedforward, and have step-function activation functions). The quantum network can be trained efficiently using gradient descent on a cost function to perform quantum generalisations of classical tasks. We demonstrate numerically that it can: (i) compress quantum states onto a minimal number of qubits, creating a quantum autoencoder, and (ii) discover quantum communication protocols such as teleportation. Our general recipe is theoretical and implementation-independent. The quantum neuron module can naturally be implemented photonically.

  7. A general regression artificial neural network for two-phase flow regime identification

    Energy Technology Data Exchange (ETDEWEB)

    Tambouratzis, Tatiana, E-mail: tatianatambouratzis@gmail.co [Department of Industrial Management and Technology, University of Piraeus, 107 Deligiorgi St., Piraeus 185 34 (Greece); Department of Nuclear Engineering, Chalmers University of Technology, SE-41296 Goeteborg (Sweden); Pazsit, Imre, E-mail: imre@chalmers.s [Department of Nuclear Engineering, Chalmers University of Technology, SE-41296 Goeteborg (Sweden); Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI 48019 (United States)

    2010-05-15

    Supplementing the collection of artificial neural network methodologies devised for monitoring energy producing installations, a general regression artificial neural network is proposed for the identification of the two-phase flow that occurs in the coolant channels of boiling water reactors. The utilization of a limited number of image features derived from radiography images affords the proposed approach efficiency and non-invasiveness. Additionally, the application of counter-clustering to the input patterns prior to training accomplishes an 80% reduction in network size as well as in training and test time. Cross-validation tests confirm accurate on-line flow regime identification.

  8. Morphological development of Aspergillus niger in submerged citric acid fermentation as a function of the spore inoculum level. Application of neural network and cluster analysis for characterization of mycelial morphology

    Directory of Open Access Journals (Sweden)

    Mattey Michael

    2006-01-01

    Full Text Available Abstract Background Although the citric acid fermentation by Aspergillus niger is one of the most important industrial microbial processes and various aspects of the fermentation have appeared in a very large number of publications since the 1950s, the effect of the spore inoculum level on fungal morphology is a rather neglected area. The aim of the presented investigations was to quantify the effects of changing spore inoculum level on the resulting mycelial morphology and to investigate the physiology that underlies the phenomena. Batch fermentations, inoculated directly with spores in concentrations ranging from 10^4 to 10^9 spores per ml, were carried out in a stirred tank bioreactor. Morphological features, evaluated by digital image analysis, were classified using an artificial neural network (ANN), which considered four main object types: globular and elongated pellets, clumps and free mycelial trees. The significance of the particular morphological features and their combination was determined by cluster analysis. Results Cell volume fraction analysis for the various inoculum levels tested revealed that by raising the spore inoculum level from 10^4 to 10^9 spores per ml, a clear transition from pelleted to dispersed forms occurs. Glucosamine formation and release by the mycelium appears to be related to the spore inoculum level. Maximum concentrations were detected in fermentations inoculated with 10^4 and 10^5 spores/ml, where pellets predominated. At much higher inoculum levels (10^8, 10^9 spores/ml), lower dissolved oxygen levels during the early fermentation phase were associated with slower ammonium ion uptake and significantly lower glucosamine concentrations, while the mycelium developed in dispersed morphologies. A big increase in the main and total hyphal lengths and branching frequency was observed in mycelial trees as inoculum levels rose from 10^4 to 10^9 spores/ml, while in aggregated forms particle sizes and their compactness decreased

  9. Morphological development of Aspergillus niger in submerged citric acid fermentation as a function of the spore inoculum level. Application of neural network and cluster analysis for characterization of mycelial morphology

    Science.gov (United States)

    Papagianni, Maria; Mattey, Michael

    2006-01-01

    Background Although the citric acid fermentation by Aspergillus niger is one of the most important industrial microbial processes and various aspects of the fermentation have appeared in a very large number of publications since the 1950s, the effect of the spore inoculum level on fungal morphology is a rather neglected area. The aim of the presented investigations was to quantify the effects of changing spore inoculum level on the resulting mycelial morphology and to investigate the physiology that underlies the phenomena. Batch fermentations, inoculated directly with spores in concentrations ranging from 10^4 to 10^9 spores per ml, were carried out in a stirred tank bioreactor. Morphological features, evaluated by digital image analysis, were classified using an artificial neural network (ANN), which considered four main object types: globular and elongated pellets, clumps and free mycelial trees. The significance of the particular morphological features and their combination was determined by cluster analysis. Results Cell volume fraction analysis for the various inoculum levels tested revealed that by raising the spore inoculum level from 10^4 to 10^9 spores per ml, a clear transition from pelleted to dispersed forms occurs. Glucosamine formation and release by the mycelium appears to be related to the spore inoculum level. Maximum concentrations were detected in fermentations inoculated with 10^4 and 10^5 spores/ml, where pellets predominated. At much higher inoculum levels (10^8, 10^9 spores/ml), lower dissolved oxygen levels during the early fermentation phase were associated with slower ammonium ion uptake and significantly lower glucosamine concentrations, while the mycelium developed in dispersed morphologies. A big increase in the main and total hyphal lengths and branching frequency was observed in mycelial trees as inoculum levels rose from 10^4 to 10^9 spores/ml, while in aggregated forms particle sizes and their compactness decreased. Conclusion The methods used in

  10. The Usage of Neural Networks for the Medical Diagnosis

    OpenAIRE

    Malyshevska, Kateryna

    2009-01-01

    The problem of cancer diagnosis from multi-channel images using neural networks is investigated. The goal of this work is to classify the different tissue types that are used to determine the cancer risk. Radial basis function networks and backpropagation neural networks are used for classification. The results of the experiments are presented.

  11. Daily Nigerian peak load forecasting using artificial neural network ...

    African Journals Online (AJOL)

    A daily peak load forecasting technique that uses an artificial neural network with seasonal indices is presented in this paper. A neural network of relatively smaller size than the main prediction network is used to predict the daily peak load for a period of one year over which the actual daily load data are available using one ...

  12. Prediction of Parametric Roll Resonance by Multilayer Perceptron Neural Network

    DEFF Research Database (Denmark)

    Míguez González, M; López Peña, F.; Díaz Casás, V.

    2011-01-01

    acknowledged in the last few years. This work proposes a prediction system based on a multilayer perceptron (MP) neural network. The training and testing of the MP network is accomplished by feeding it with simulated data of a three degrees-of-freedom nonlinear model of a fishing vessel. The neural network...

  13. Advances in Artificial Neural Networks - Methodological Development and Application

    Science.gov (United States)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  14. Particle swarm optimization of a neural network model in a ...

    Indian Academy of Sciences (India)

    This paper presents a particle swarm optimization (PSO) technique to train an artificial neural network (ANN) for prediction of flank wear in drilling, and compares the network performance with that of the back propagation neural network (BPNN). This analysis is carried out following a series of experiments employing high ...

  15. Energy Aware Clustering Algorithms for Wireless Sensor Networks

    Science.gov (United States)

    Rakhshan, Noushin; Rafsanjani, Marjan Kuchaki; Liu, Chenglian

    2011-09-01

    The sensor nodes deployed in wireless sensor networks (WSNs) are extremely power constrained, so maximizing the lifetime of the entire networks is mainly considered in the design. In wireless sensor networks, hierarchical network structures have the advantage of providing scalable and energy efficient solutions. In this paper, we investigate different clustering algorithms for WSNs and also compare these clustering algorithms based on metrics such as clustering distribution, cluster's load balancing, Cluster Head's (CH) selection strategy, CH's role rotation, node mobility, clusters overlapping, intra-cluster communications, reliability, security and location awareness.

  16. Clustering of neural code words revealed by a first-order phase transition.

    Science.gov (United States)

    Huang, Haiping; Toyoizumi, Taro

    2016-06-01

    A network of neurons in the central nervous system collectively represents information by its spiking activity states. Typically observed states, i.e., code words, occupy only a limited portion of the state space due to constraints imposed by network interactions. Geometrical organization of code words in the state space, critical for neural information processing, is poorly understood due to its high dimensionality. Here, we explore the organization of neural code words using retinal data by computing the entropy of code words as a function of Hamming distance from a particular reference codeword. Specifically, we report that the retinal code words in the state space are divided into multiple distinct clusters separated by entropy-gaps, and that this structure is shared with well-known associative memory networks in a recallable phase. Our analysis also elucidates a special nature of the all-silent state. The all-silent state is surrounded by the densest cluster of code words and located within a reachable distance from most code words. This code-word space structure quantitatively predicts typical deviation of a state-trajectory from its initial state. Altogether, our findings reveal a non-trivial heterogeneous structure of the code-word space that shapes information representation in a biological network.
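
    A minimal numpy sketch of the kind of analysis described above, on hypothetical binarized spike data: code words are grouped by Hamming distance from a chosen reference word (e.g. the all-silent state), and the entropy of the words observed at each distance is estimated from their empirical frequencies. Function names and the toy data are illustrative, not the study's.

```python
import numpy as np

def entropy_by_hamming_distance(words, reference):
    """Entropy (bits) of the observed code words at each Hamming distance
    from `reference`. `words` is an (n_samples, n_neurons) binary array."""
    distances = (words != reference).sum(axis=1)
    result = {}
    for d in np.unique(distances):
        subset = words[distances == d]
        _, counts = np.unique(subset, axis=0, return_counts=True)  # word frequencies
        p = counts / counts.sum()
        result[int(d)] = float(-(p * np.log2(p)).sum())
    return result

# toy example: 1000 random 20-neuron "code words", all-silent reference state
rng = np.random.default_rng(0)
words = (rng.random((1000, 20)) < 0.1).astype(int)
reference = np.zeros(20, dtype=int)
print(entropy_by_hamming_distance(words, reference))
```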

  18. Survey on Neural Networks Used for Medical Image Processing.

    Science.gov (United States)

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2009-02-01

    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. Main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application to medical image processing and an outlook for future research are also discussed. With this survey, we try to answer the following two important questions: (1) What are the major applications of neural networks in medical image processing now and in the near future? (2) What are the major strengths and weaknesses of applying neural networks to medical image processing tasks? We believe that this would be very helpful to researchers who are involved in medical image processing with neural network techniques.

  19. Permeability prediction in shale gas reservoirs using Neural Network

    Science.gov (United States)

    Aliouane, Leila; Ouadfeul, Sid-Ali

    2017-04-01

    Here, we suggest the use of an artificial neural network for permeability prediction in shale gas reservoirs. Prediction of permeability in shale gas reservoirs is a complicated task that requires new models, since Darcy's fluid flow model is not suitable. The proposed idea is based on training a neural network machine using a set of well-log data as input and the measured permeability as output. In this case the multilayer perceptron neural network is used with the Levenberg-Marquardt algorithm. Application to two horizontal wells drilled in the Barnett shale formation exhibits the power of the neural network model to resolve such a problem. Keywords: artificial neural network, permeability, prediction, shale gas.

  20. Financial Time Series Prediction Using Elman Recurrent Random Neural Networks

    Science.gov (United States)

    Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli

    2016-01-01

    In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combined Elman recurrent neural networks with a stochastic time effective function. Analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing it with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research tests the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices. PMID:27293423

  1. QCD-Aware Neural Networks for Jet Physics

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Recent progress in applying machine learning for jet physics has been built upon an analogy between calorimeters and images. In this work, we present a novel class of recursive neural networks built instead upon an analogy between QCD and natural languages. In the analogy, four-momenta are like words and the clustering history of sequential recombination jet algorithms is like the parsing of a sentence. Our approach works directly with the four-momenta of a variable-length set of particles, and the jet-based neural network topology varies on an event-by-event basis. Our experiments highlight the flexibility of our method for building task-specific jet embeddings and show that recursive architectures are significantly more accurate and data efficient than previous image-based networks. We extend the analogy from individual jets (sentences) to full events (paragraphs), and show for the first time an event-level classifier operating...

  2. A hybrid neural network model for noisy data regression.

    Science.gov (United States)

    Lee, Eric W M; Lim, Chee Peng; Yuen, Richard K K; Lo, S M

    2004-04-01

    A hybrid neural network model, based on the fusion of fuzzy adaptive resonance theory (FA) and the general regression neural network (GRNN), is proposed in this paper. Both FA and the GRNN are incremental learning systems and are very fast in network training. The proposed hybrid model, denoted as GRNNFA, is able to retain these advantages and, at the same time, to reduce the computational requirements in calculating and storing information about the kernels. A clustering version of the GRNN is designed with data compression by FA for noise removal. An adaptive gradient-based kernel width optimization algorithm has also been devised. Convergence of the gradient descent algorithm can be accelerated by the geometric incremental growth of the updating factor. A series of experiments with four benchmark datasets has been conducted to assess and compare the effectiveness of GRNNFA with other approaches. The GRNNFA model is also employed in a novel application task for predicting the evacuation time of patrons at typical karaoke centers in Hong Kong in the event of fire. The results positively demonstrate the applicability of GRNNFA to noisy data regression problems.
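
    A minimal numpy sketch of the GRNN component alone, assuming Gaussian kernels centered on the training points; the fuzzy-ART data compression and the adaptive kernel-width optimization described in the abstract are omitted, and the function name and sigma value are illustrative.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.4):
    """General regression neural network (Nadaraya-Watson kernel regression):
    each prediction is a Gaussian-weighted average of the training targets,
    with one kernel per training point."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = ((X_train - x) ** 2).sum(axis=1)            # squared distances to kernels
        w = np.exp(-d2 / (2.0 * sigma ** 2))              # Gaussian kernel weights
        preds.append((w @ y_train) / (w.sum() + 1e-12))   # weighted average of targets
    return np.array(preds)

# toy noisy regression problem
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=200)
print(grnn_predict(X, y, [[0.0], [1.5]]))
```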

  3. Feedforward Backpropagation Neural Networks in Prediction of Farmer Risk Preferences

    OpenAIRE

    Kastens, Terry L.; Featherstone, Allen M.

    1996-01-01

    An out-of-sample prediction of Kansas farmers' responses to five surveyed questions involving risk is used to compare ordered multinomial logistic regression models with feedforward backpropagation neural network models. Although the logistic models often predict more accurately than the neural network models in a mean-squared error sense, the neural network models are shown to be more accommodating of loss functions associated with a desire to predict certain combinations of categorical resp...

  4. Pixel-wise Segmentation of Street with Neural Networks

    OpenAIRE

    Bittel, Sebastian; Kaiser, Vitali; Teichmann, Marvin; Thoma, Martin

    2015-01-01

    Pixel-wise street segmentation of photographs taken from a driver's perspective is important for self-driving cars and can also support other object recognition tasks. A framework called SST was developed to examine the accuracy and execution time of different neural networks. The best result, an $F_1$-score of 89.5%, was achieved with a simple feedforward neural network trained to solve a regression task.

  5. Survey on Neural Networks Used for Medical Image Processing

    OpenAIRE

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2009-01-01

    This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. Main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of neural network application to medical image processing and an outlook for future research are also discussed. With this survey, we try to answer the following two important questions: (1) Wh...

  6. One pass learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

    The generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a gradient-descent-optimized smoothing parameter value to provide efficient classification. However, the optimization consumes quite a long time, which may be a drawback. In this work, one pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method utilizes the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the proposed method tries to handle these differences by defining two functions for smoothing parameter calculation. Thresholding is applied to determine which function will be used. One of these functions is defined for datasets having different ranges of values; it provides balanced smoothing parameters for these datasets through a logarithmic function and by changing the operation range to the lower boundary. The other function calculates the smoothing parameter value for classes having a standard deviation smaller than the threshold value. The proposed method is tested on 14 datasets and the performance of the one pass learning generalized classifier neural network is compared with that of the probabilistic neural network, radial basis function neural network, extreme learning machines, and the standard and logarithmic learning generalized classifier neural network in the MATLAB environment. The one pass learning generalized classifier neural network provides more than a thousand times faster classification than the standard and logarithmic generalized classifier neural network. Due to its classification accuracy and speed, the one pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase classification performance.
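
    A hedged sketch of the idea of deriving per-class smoothing parameters in a single pass, using a PNN-style kernel classifier; the paper's exact two-function, thresholded rule is not reproduced here, and the mean-of-feature-stds heuristic, function names and toy data are assumptions of this sketch.

```python
import numpy as np

def one_pass_classify(X_train, y_train, X_query):
    """PNN-style classifier sketch: each class gets a smoothing parameter taken
    directly from its standard deviation, computed in a single pass over the
    data (no gradient-based optimization)."""
    X_train, y_train = np.asarray(X_train, float), np.asarray(y_train)
    X_query = np.atleast_2d(X_query)
    classes = np.unique(y_train)
    scores = np.zeros((len(X_query), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        sigma = max(Xc.std(axis=0).mean(), 1e-3)          # per-class smoothing parameter
        for i, x in enumerate(X_query):
            d2 = ((Xc - x) ** 2).sum(axis=1)
            scores[i, j] = np.exp(-d2 / (2 * sigma ** 2)).mean()
    return classes[scores.argmax(axis=1)]

# toy two-class problem
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(one_pass_classify(X, y, [[0.2, -0.1], [2.8, 3.1]]))   # expected: [0 1]
```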

  7. Neural networks analysis on SSME vibration simulation data

    Science.gov (United States)

    Lo, Ching F.; Wu, Kewei

    1993-01-01

    The neural networks method is applied to investigate the feasibility in detecting anomalies in turbopump vibration of SSME to supplement the statistical method utilized in the prototype system. The investigation of neural networks analysis is conducted using SSME vibration data from a NASA developed numerical simulator. The limited application of neural networks to the HPFTP has also shown the effectiveness in diagnosing the anomalies of turbopump vibrations.

  8. A Neural Network-Based Interval Pattern Matcher

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2015-07-01

    Full Text Available Classification is one of the most important tasks in machine learning, and neural networks are very important classifiers. However, traditional neural networks cannot identify intervals, let alone classify them. To improve their identification ability, we propose a neural network-based interval matcher in this paper. After summarizing the theoretical construction of the model, we present a simple and practical weather forecasting experiment, which shows that the recognizer accuracy reaches 100%, a promising result.

  9. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    Directory of Open Access Journals (Sweden)

    J. Polec

    1999-09-01

    Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From the transform point of view, the principles from [1] are modified for 1st and 2nd order interpolation. We present several new interpolation discrete orthogonal transforms. From the neural network point of view, we present the interpolation possibilities of multilayer perceptrons. We use various configurations of neural networks for 1st and 2nd order interpolation. The results are compared by means of tables.

  10. Neural Networks for Modeling and Control of Particle Accelerators

    CERN Document Server

    Edelen, A.L.; Chase, B.E.; Edstrom, D.; Milton, S.V.; Stabile, P.

    2016-01-01

    We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.

  11. Training product unit neural networks with genetic algorithms

    Science.gov (United States)

    Janson, D. J.; Frenzel, J. F.; Thelen, D. C.

    1991-01-01

    The training of product unit neural networks using genetic algorithms is discussed. Two unusual neural network techniques are combined: product units are employed instead of the traditional summing units, and genetic algorithms train the network rather than backpropagation. As an example, a neural network is trained to calculate the optimum width of transistors in a CMOS switch. It is shown how local minima affect the performance of a genetic algorithm, and one method of overcoming this is presented.

  12. Wave transmission prediction of multilayer floating breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Patil, S.G.; Hegde, A.V.

    in unison to solve a specific problem. The network learns through examples, so it requires good examples to train properly, and a trained network model can then be used for prediction purposes. In order to allow the network to learn both non-linear and linear relationships between input nodes and output nodes, multiple-layer neural networks are often used...

  13. Parameterizing Stellar Spectra Using Deep Neural Networks

    Science.gov (United States)

    Li, Xiang-Ru; Pan, Ru-Yang; Duan, Fu-Qing

    2017-03-01

    Large-scale sky surveys are observing massive amounts of stellar spectra. The large number of stellar spectra makes it necessary to automatically parameterize spectral data, which in turn helps in statistically exploring properties related to the atmospheric parameters. This work focuses on designing an automatic scheme to estimate effective temperature ($T_{\rm eff}$), surface gravity ($\log g$) and metallicity [Fe/H] from stellar spectra. A scheme based on three deep neural networks (DNNs) is proposed. This scheme consists of the following three procedures: first, the configuration of a DNN is initialized using a series of autoencoder neural networks; second, the DNN is fine-tuned using a gradient descent scheme; third, the three atmospheric parameters $T_{\rm eff}$, $\log g$ and [Fe/H] are estimated using the computed DNNs. The constructed DNN is a neural network with six layers (one input layer, one output layer and four hidden layers), for which the numbers of nodes in the six layers are 3821, 1000, 500, 100, 30 and 1, respectively. This proposed scheme was tested on both real spectra and theoretical spectra from Kurucz's new opacity distribution function models. Test errors are measured with mean absolute errors (MAEs). The errors on real spectra from the Sloan Digital Sky Survey (SDSS) are 0.1477, 0.0048 and 0.1129 dex for $\log g$, $\log T_{\rm eff}$ and [Fe/H] (64.85 K for $T_{\rm eff}$), respectively. For theoretical spectra from Kurucz's new opacity distribution function models, the MAEs of the test errors are 0.0182, 0.0011 and 0.0112 dex for $\log g$, $\log T_{\rm eff}$ and [Fe/H] (14.90 K for $T_{\rm eff}$), respectively.
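
    A PyTorch sketch of a fully connected network with the layer sizes quoted above (3821-1000-500-100-30-1), one such network per atmospheric parameter. The activation function, initialization and the autoencoder pre-training stage are not specified in this abstract, so ReLU activations and default initialization are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class SpectrumDNN(nn.Module):
    """Fully connected 3821-1000-500-100-30-1 network mapping a flux vector to
    a single atmospheric parameter; pre-training is omitted in this sketch."""
    def __init__(self, n_pixels=3821):
        super().__init__()
        sizes = [n_pixels, 1000, 500, 100, 30, 1]
        layers = []
        for n_in, n_out in zip(sizes[:-1], sizes[1:]):
            layers.append(nn.Linear(n_in, n_out))
            if n_out != 1:                      # no activation on the output node
                layers.append(nn.ReLU())
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = SpectrumDNN()
spectra = torch.randn(8, 3821)                  # dummy batch of 8 flux vectors
print(model(spectra).shape)                     # -> torch.Size([8, 1])
```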

  14. Precipitation Nowcast using Deep Recurrent Neural Network

    Science.gov (United States)

    Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.

    2016-12-01

    An accurate precipitation nowcast (0-6 hours) with a fine temporal and spatial resolution has always been an important prerequisite for flood warning, streamflow prediction and risk management. Most of the popular approaches used for forecasting precipitation can be categorized into two groups. One type of precipitation forecast relies on numerical modeling of the physical dynamics of the atmosphere, and the other is based on empirical and statistical regression models derived by local hydrologists or meteorologists. Given the recent advances in artificial intelligence, in this study a powerful deep recurrent neural network, the Long Short-Term Memory (LSTM) model, is used to extract the patterns and forecast the spatial and temporal variability of Cloud Top Brightness Temperature (CTBT) observed from the GOES satellite. Then, a 0-6 hour precipitation nowcast is produced using the Precipitation Estimation from Remote Sensing Information using Artificial Neural Network (PERSIANN) algorithm, in which the CTBT nowcast is used as the PERSIANN algorithm's raw input. Two case studies over the continental U.S. have been conducted that demonstrate the improvement of the proposed approach compared to a classical feedforward neural network and a couple of simple regression models. The advantages and disadvantages of the proposed method are summarized with regard to its capability of pattern recognition through time, handling of vanishing gradients during model learning, and working with sparse data. The studies show that the LSTM model performs better than the other methods, and it is able to learn the temporal evolution of the precipitation events over more than 1000 time lags. The uniqueness of PERSIANN's algorithm enables an alternative precipitation nowcast approach as demonstrated in this study, in which the CTBT prediction is produced and used as the input for generating the precipitation nowcast.
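
    A minimal PyTorch sketch of the nowcasting idea: an LSTM maps a history of (flattened) CTBT image patches to the field at the next time step. The patch size, hidden size and class name are illustrative, and the PERSIANN rainfall-retrieval stage that follows in the abstract is not included.

```python
import torch
import torch.nn as nn

class CTBTNowcaster(nn.Module):
    """LSTM that predicts the next (flattened) CTBT field from a history of
    past fields; only the brightness-temperature nowcast step is sketched."""
    def __init__(self, n_pixels=256, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_pixels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_pixels)

    def forward(self, x):                        # x: (batch, time, n_pixels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])             # predict the next CTBT field

model = CTBTNowcaster()
history = torch.randn(4, 12, 256)                # 4 samples, 12 past time steps each
print(model(history).shape)                      # -> torch.Size([4, 256])
```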

  15. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Full Text Available Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review on development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationship for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological

  16. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.

  17. Optical-Correlator Neural Network Based On Neocognitron

    Science.gov (United States)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.

  18. Material procedure quality forecast based on genetic BP neural network

    Science.gov (United States)

    Zheng, Bao-Hua

    2017-07-01

    Material procedure quality forecasting plays an important role in quality control. This paper proposes a prediction model based on a genetic algorithm (GA) and a back propagation (BP) neural network. The GA's global search ability is used to obtain the initial weights and thresholds of the optimized BP neural network. A material process quality prediction model with the optimized BP neural network is then adopted to predict the error of a future process as a measure of the accuracy of process quality. The results show that the proposed method has the advantages of high accuracy and fast convergence rate compared with the BP neural network.

  19. Neural network models: Insights and prescriptions from practical applications

    Energy Technology Data Exchange (ETDEWEB)

    Samad, T. [Honeywell Technology Center, Minneapolis, MN (United States)

    1995-12-31

    Neural networks are no longer just a research topic; numerous applications are now testament to their practical utility. In the course of developing these applications, researchers and practitioners have been faced with a variety of issues. This paper briefly discusses several of these, noting in particular the rich connections between neural networks and other, more conventional technologies. A more comprehensive version of this paper is under preparation that will include illustrations on real examples. Neural networks are being applied in several different ways. Our focus here is on neural networks as modeling technology. However, much of the discussion is also relevant to other types of applications such as classification, control, and optimization.

  20. Power converters and AC electrical drives with linear neural networks

    CERN Document Server

    Cirrincione, Maurizio

    2012-01-01

    The first book of its kind, Power Converters and AC Electrical Drives with Linear Neural Networks systematically explores the application of neural networks in the field of power electronics, with particular emphasis on the sensorless control of AC drives. It presents the classical theory based on space-vectors in identification, discusses control of electrical drives and power converters, and examines improvements that can be attained when using linear neural networks. The book integrates power electronics and electrical drives with artificial neural networks (ANN). Organized into four parts,

  1. Liquefaction Microzonation of Babol City Using Artificial Neural Network

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Choobbasti, A.J.; Barari, Amin

    2012-01-01

    that will be less susceptible to damage during earthquakes. The scope of present study is to prepare the liquefaction microzonation map for the Babol city based on Seed and Idriss (1983) method using artificial neural network. Artificial neural network (ANN) is one of the artificial intelligence (AI) approaches...... is proposed in this paper. To meet this objective, an effort is made to introduce a total of 30 boreholes data in an area of 7 km2 which includes the results of field tests into the neural network model and the prediction of artificial neural network is checked in some test boreholes, finally the liquefaction...

  2. A hardware implementation of neural network with modified HANNIBAL architecture

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Bum youb; Chung, Duck Jin [Inha University, Inchon (Korea, Republic of)

    1996-03-01

    A digital hardware architecture for an artificial neural network with learning capability is described in this paper. It is a modified hardware architecture known as HANNIBAL (Hardware Architecture for Neural Networks Implementing Back propagation Algorithm Learning). For implementing efficient neural network hardware, we analyzed various types of multiplier, which is the major function block of the neuro-processor cell. With this result, we design efficient digital neural network hardware using a serial/parallel multiplier, and test the operation. We also analyze the hardware efficiency with logic level simulation. (author). 14 refs., 10 figs., 3 tabs.

  3. Neural network and its application to CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nikravesh, M.; Kovscek, A.R.; Patzek, T.W. [Lawrence Berkeley National Lab., CA (United States)] [and others

    1997-02-01

    We present an integrated approach to imaging the progress of air displacement by spontaneous imbibition of oil into sandstone. We combine Computerized Tomography (CT) scanning and neural network image processing. The main aspects of our approach are (I) visualization of the distribution of oil and air saturation by CT, (II) interpretation of CT scans using neural networks, and (III) reconstruction of 3-D images of oil saturation from the CT scans with a neural network model. Excellent agreement between the actual images and the neural network predictions is found.

  4. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    to the biological neurons, works on the input and output passing through a hidden layer. The ANN used here is a data-oriented modeling technique to find relations between input and output patterns by self learning and without any fixed mathematical form assumed. The mean error is $E = \frac{1}{p}\sum_{p} E_p$ (2), where $E_p = \frac{1}{2}\sum_{k}(T_k - O_k)^2$ (3), $p$ is the total number of training patterns, $T_k$ is the actual output and $O_k$ is the predicted output at the kth output node. In the learning process of backpropagation neural network...

  5. Convolutional neural networks and face recognition task

    Science.gov (United States)

    Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.

    2017-09-01

    Computer vision tasks have remained very important over the last couple of years. One of the most complicated problems in computer vision is face recognition, which could be used in security systems to provide safety and to identify a person among others. There is a variety of approaches to solve this task, but there is still no universal solution that gives adequate results in all cases. The current paper presents the following approach. First, we extract the area containing the face, then we apply the Canny edge detector. At the next stage we use convolutional neural networks (CNN) to finally solve the face recognition and person identification task.

  6. Convolution neural networks for ship type recognition

    Science.gov (United States)

    Rainey, Katie; Reeder, John D.; Corelli, Alexander G.

    2016-05-01

    Algorithms to automatically recognize ship type from satellite imagery are desired for numerous maritime applications. This task is difficult, and example imagery accurately labeled with ship type is hard to obtain. Convolutional neural networks (CNNs) have shown promise in image recognition settings, but many of these applications rely on the availability of thousands of example images for training. This work attempts to understand for which types of ship recognition tasks CNNs might be well suited. We report the results of baseline experiments applying a CNN to several ship type classification tasks, and discuss many of the considerations that must be made in approaching this problem.

  7. Artificial Neural Network applied to lightning flashes

    Science.gov (United States)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: brightness and shape algorithms. These algorithms detect both the shape and brightness of the event, removing irrelevant events like birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm, and was trained with 42 training images containing 57 lightning events (one image can have more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files, containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the events' numbers of discharges were correctly computed. The neural network used in this project achieved a

  8. Defect detection on videos using neural network

    Directory of Open Access Journals (Sweden)

    Sizyakin Roman

    2017-01-01

    Full Text Available In this paper, we consider a method for defect detection in a video sequence, which consists of three main steps: frame compensation, preprocessing by a detector based on the ranking of pixel values, and the classification of all pixels having anomalous values using convolutional neural networks. The effectiveness of the proposed method is shown in comparison with known techniques on several frames of a video sequence damaged under natural conditions. The analysis of the obtained results indicates the high efficiency of the proposed method. The additional use of machine learning as postprocessing significantly reduces the likelihood of false alarms.

  9. Cultured Neural Networks: Optimization of Patterned Network Adhesiveness and Characterization of their Neural Activity

    Directory of Open Access Journals (Sweden)

    W. L. C. Rutten

    2006-01-01

    Full Text Available One type of future, improved neural interface is the “cultured probe”. It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA) on a planar substrate, each electrode being covered and surrounded by a local circularly confined network (“island”) of cultured neurons. The main purpose of the local networks is that they act as biofriendly intermediates for collateral sprouts from the in vivo system, thus allowing for an effective and selective neuron–electrode interface. As a secondary purpose, one may envisage future information processing applications of these intermediary networks. In this paper, first, progress is shown on how substrates can be chemically modified to confine developing networks, cultured from dissociated rat cortex cells, to “islands” surrounding an electrode site. Additional coating of neurophobic, polyimide-coated substrate by triblock-copolymer coating enhances neurophilic-neurophobic adhesion contrast. Secondly, results are given on neuronal activity in patterned, unconnected and connected, circular “island” networks. For connected islands, the larger the island diameter (50, 100 or 150 μm), the more spontaneous activity is seen. Also, activity may show a very high degree of synchronization between two islands. For unconnected islands, activity may start at 22 days in vitro (DIV), which is two weeks later than in unpatterned networks.

  10. Sonar discrimination of cylinders from different angles using neural networks

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Au, Whiwlow; Larsen, Jan

    1999-01-01

    This paper describes an underwater object discrimination system applied to recognize cylinders of various compositions from different angles. The system is based on a new combination of simulated dolphin clicks, simulated auditory filters and artificial neural networks. The model demonstrates its...

  11. Characterization of Early Cortical Neural Network ...

    Science.gov (United States)

    We examined the development of neural network activity using microelectrode array (MEA) recordings made in multi-well MEA plates (mwMEAs) over the first 12 days in vitro (DIV). In primary cortical cultures made from postnatal rats, action potential spiking activity was essentially absent on DIV 2 and developed rapidly between DIV 5 and 12. Spiking activity was primarily sporadic and unorganized at early DIV, and became progressively more organized with time in culture, with bursting parameters, synchrony and network bursting increasing between DIV 5 and 12. We selected 12 features to describe network activity and principal components analysis using these features demonstrated a general segregation of data by age at both the well and plate levels. Using a combination of random forest classifiers and Support Vector Machines, we demonstrated that 4 features (CV of within burst ISI, CV of IBI, network spike rate and burst rate) were sufficient to predict the age (either DIV 5, 7, 9 or 12) of each well recording with >65% accuracy. When restricting the classification problem to a binary decision, we found that classification improved dramatically, e.g. 95% accuracy for discriminating DIV 5 vs DIV 12 wells. Further, we present a novel resampling approach to determine the number of wells that might be needed for conducting comparisons of different treatments using mwMEA plates. Overall, these results demonstrate that network development on mwMEA plates is similar to
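
    A hedged scikit-learn sketch of the classification step described above: a random forest predicts the age (DIV) of a well recording from the four named features. The feature values and labels below are random stand-ins, not the study's data, so the score should hover around chance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature table: one row per well, columns are the four features
# named above (CV of within-burst ISI, CV of IBI, network spike rate, burst
# rate); labels are the age in days in vitro at recording.
rng = np.random.default_rng(3)
X = rng.normal(size=(120, 4))
y = rng.choice([5, 7, 9, 12], size=120)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # chance-level on random data
```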

  12. Stable architectures for deep neural networks

    Science.gov (United States)

    Haber, Eldad; Ruthotto, Lars

    2018-01-01

    Deep neural networks have become invaluable tools for supervised machine learning, e.g. classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train such that they generalize well to new data. Critical issues with deep architectures are numerical instabilities in derivative-based learning algorithms commonly called exploding or vanishing gradients. In this paper, we propose new forward propagation techniques inspired by systems of ordinary differential equations (ODE) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem of nonlinear dynamical systems. Given this formulation, we analyze stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomenon to the stability of the discrete ODE and present several strategies for stabilizing deep learning for very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
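
    A small PyTorch sketch of the forward-Euler interpretation of a residual layer, $Y_{j+1} = Y_j + h\,\tanh(K Y_j + b)$; the width, step size and the absence of the paper's stability constraints on K (e.g. antisymmetric weight matrices) are simplifications of this sketch, not the full construction.

```python
import torch
import torch.nn as nn

class EulerResidualBlock(nn.Module):
    """One forward-Euler step of the ODE dY/dt = tanh(K*Y + b):
    Y_{j+1} = Y_j + h * tanh(K*Y_j + b). A small step size h helps keep
    forward propagation stable; constraints on K are not enforced here."""
    def __init__(self, width, h=0.1):
        super().__init__()
        self.linear = nn.Linear(width, width)
        self.h = h

    def forward(self, y):
        return y + self.h * torch.tanh(self.linear(y))

# a "very deep" network is just many small time steps of the same kind of layer
net = nn.Sequential(*[EulerResidualBlock(16) for _ in range(20)])
print(net(torch.randn(5, 16)).shape)             # -> torch.Size([5, 16])
```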

  13. Phase diagram of spiking neural networks.

    Science.gov (United States)

    Seyed-Allaei, Hamed

    2015-01-01

    In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2%, and that 20% of neurons are inhibitory and 80% are excitatory. These common values are based on experiments, observations, and trial and error, but here I take a different perspective. Inspired by evolution, I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, dominant frequency of population activities, total duration of activities, maximum population rate and the occurrence time of the maximum rate. The results are organized in a phase diagram. This phase diagram gives an insight into the space of parameters - excitatory to inhibitory ratio, sparseness of connections and synaptic weights. This phase diagram can be used to decide the parameters of a model. The phase diagrams show that networks which are configured according to the common values have a good dynamic range in response to an impulse, that their dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate at α or β frequencies, independent of external stimuli.
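
    A rough numpy sketch of a network built with the "common values" quoted above (2% connection probability, 80% excitatory / 20% inhibitory neurons); the crude integrate-and-fire dynamics and all parameter values are stand-ins of this sketch, not those of the original study.

```python
import numpy as np

# Random connectivity: 2% connection probability, 80% excitatory / 20% inhibitory.
rng = np.random.default_rng(4)
n, p_conn = 1000, 0.02
n_exc = int(0.8 * n)
sign = np.ones(n)
sign[n_exc:] = -1.0                                       # +1 excitatory, -1 inhibitory
W = (rng.random((n, n)) < p_conn) * 0.1 * sign[None, :]   # W[i, j]: synapse j -> i

# Crude leaky integrate-and-fire dynamics driven by a brief input pulse.
v = np.zeros(n)
spikes = np.zeros(n)
rates = []
for t in range(300):
    drive = 1.5 if t < 10 else 0.0                        # brief stimulation pulse
    v = 0.9 * v + W @ spikes + drive + 0.1 * rng.normal(size=n)
    spikes = (v > 1.0).astype(float)
    v[spikes > 0] = 0.0                                   # reset after a spike
    rates.append(spikes.mean())
print(f"peak population rate {max(rates):.3f} at step {int(np.argmax(rates))}")
```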

  14. An efficient neural network approach to dynamic robot motion planning.

    Science.gov (United States)

    Yang, S X; Meng, M

    2000-03-01

    In this paper, a biologically inspired neural network approach to real-time collision-free motion planning of mobile robots or robot manipulators in a nonstationary environment is proposed. Each neuron in the topologically organized neural network has only local connections, whose neural dynamics is characterized by a shunting equation. Thus the computational complexity linearly depends on the neural network size. The real-time robot motion is planned through the dynamic activity landscape of the neural network without any prior knowledge of the dynamic environment, without explicitly searching over the free workspace or the collision paths, and without any learning procedures. Therefore it is computationally efficient. The global stability of the neural network is guaranteed by qualitative analysis and the Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies.
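
    A hedged numpy sketch of the shunting-equation activity landscape on a 2-D grid map, with the robot following the activity gradient toward the target; the parameter values, the 4-neighbour coupling and the greedy path extraction are simplifications of this sketch, not the paper's exact formulation.

```python
import numpy as np

def plan_path(grid, start, target, steps=400):
    """Shunting-equation activity landscape on a 2-D grid map (sketch).
    Each cell is a neuron with dx/dt = -A*x + (B - x)*(e + lateral) - (D + x)*i,
    where e is large at the target, i is large at obstacle cells, and `lateral`
    sums the positive activities of the 4-neighbours. The robot then greedily
    climbs the activity landscape from start towards the target."""
    A, B, D, dt = 10.0, 1.0, 1.0, 0.01
    e = np.zeros_like(grid, dtype=float)
    e[target] = 100.0                       # target excitation
    inhib = 100.0 * grid                    # obstacle inhibition
    x = np.zeros_like(grid, dtype=float)
    for _ in range(steps):
        xp = np.maximum(x, 0.0)
        lateral = np.zeros_like(xp)
        lateral[1:, :] += xp[:-1, :]        # activity arriving from the cell above
        lateral[:-1, :] += xp[1:, :]        # ... from below
        lateral[:, 1:] += xp[:, :-1]        # ... from the left
        lateral[:, :-1] += xp[:, 1:]        # ... from the right
        x += dt * (-A * x + (B - x) * (e + lateral) - (D + x) * inhib)
    path, pos = [start], start
    for _ in range(100):                    # greedy ascent of the activity
        r, c = pos
        nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc)
                and 0 <= r + dr < grid.shape[0] and 0 <= c + dc < grid.shape[1]]
        pos = max(nbrs, key=lambda rc: x[rc])
        path.append(pos)
        if pos == target:
            break
    return path

grid = np.zeros((10, 10))
grid[4, 2:8] = 1.0                          # a wall of obstacle cells
print(plan_path(grid, start=(0, 0), target=(8, 8)))
```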

  15. Programmable synaptic chip for electronic neural networks

    Science.gov (United States)

    Moopenn, A.; Langenbacher, H.; Thakoor, A. P.; Khanna, S. K.

    1988-01-01

    A binary synaptic matrix chip has been developed for electronic neural networks. The matrix chip contains a programmable 32X32 array of 'long channel' NMOSFET binary connection elements implemented in a 3-micron bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a 'cascadable' building block for a multi-chip synaptic network as large as 512X512 in size. As an alternative to the programmable NMOSFET (long channel) connection elements, tailored thin film resistors are deposited, in series with FET switches, on some CMOS test chips, to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area. The performance of synaptic chip in a 32-neuron breadboard system in an associative memory test application is discussed.

  16. Dynamics of macro- and microscopic neural networks

    DEFF Research Database (Denmark)

    Mikkelsen, Kaare

    2014-01-01

    GN), which is a class of signals with a non-trivial low-frequency component. It is assumed that certain characteristics of the low-frequency component can yield information about the neural processes behind the signal. The method has been used in a range of different studies over the course of the past 10...... that the method continues to find use, of which examples are presented. In the second part of the thesis, numerical simulations of networks of neurons are described. To simplify the analysis, a relatively simple neuron model - Leaky Integrate and Fire - is chosen. The strengths of the connections between...... shown that the synchronizing effect of the plasticity disappears when the strengths of the connections are frozen in time. Subsequently, the so-called ``Sisyphus'' mechanism is discussed, which is shown to cause slow fluctuations in both the network synchronization and the strengths

  17. A Convolutional Neural Network Neutrino Event Classifier

    CERN Document Server

    Aurisano, A; Rocco, D; Himmel, A; Messier, M D; Niner, E; Pawloski, G; Psihas, F; Sousa, A; Vahle, P

    2016-01-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  18. Study on Optimized Elman Neural Network Classification Algorithm Based on PLS and CA

    Science.gov (United States)

    Zhao, Dean; Shen, Tian; Zhao, Yuyan

    2014-01-01

    High-dimensional, large-sample data sets may contain correlative or repetitive factors, both between feature variables and between samples, which occupy a lot of storage space and consume much computing time. When the Elman neural network is used to deal with them, too many inputs affect the operating efficiency and recognition accuracy; too many simultaneous training samples, as well as the inability to obtain a precise neural network model, also restrict the recognition accuracy. Aiming at this series of problems, we introduce partial least squares (PLS) and cluster analysis (CA) into the Elman neural network algorithm, using PLS for dimension reduction to eliminate the correlative and repetitive factors of the features. Using CA eliminates the correlative and repetitive factors of the samples. If some subclass becomes a small sample, with high-dimensional features and few instances, PLS shows a unique advantage. Each subclass is regarded as one training sample set to train a separate, precise neural network model. Simulation samples are then discriminated and classified into the different subclasses, and the corresponding neural network is used to recognize them. An optimized Elman neural network classification algorithm based on PLS and CA (PLS-CA-Elman algorithm) is established. The new algorithm aims at improving the operating efficiency and recognition accuracy. Case analysis shows that the new algorithm has unique superiority and is worthy of further promotion. PMID:25165470
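
    A scikit-learn sketch of the PLS-CA preprocessing described above, on random stand-in data; scikit-learn has no Elman (recurrent) network, so a plain MLP classifier is used per cluster purely as a placeholder for the paper's per-subclass networks.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

# Random stand-in data: 50 features, binary labels with a non-linear structure.
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 50))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# PLS for dimension reduction of the features, K-means for grouping the samples.
X_red = PLSRegression(n_components=8).fit_transform(X, y)[0]
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_red)

# One small network per subclass (an MLP as a stand-in for an Elman network).
models = {}
for c in np.unique(clusters):
    m = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    models[c] = m.fit(X_red[clusters == c], y[clusters == c])
print({int(c): round(m.score(X_red[clusters == c], y[clusters == c]), 3)
       for c, m in models.items()})
```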

  19. Brain tumor segmentation with Deep Neural Networks.

    Science.gov (United States)

    Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo

    2017-01-01

    In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks

    Science.gov (United States)

    Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li

    2016-06-01

    Conventional artificial neural networks used to solve electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To solve these problems, we propose to use a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron to the inversion results. Then, the hyperparameter $\alpha_k$, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic data and field data suggests that the proposed method suppresses the noise in the neural network training stage and enhances the generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as the conventional least squares inversion.

  1. Artificial neural networks in pancreatic disease.

    Science.gov (United States)

    Bartosch-Härlid, A; Andersson, B; Aho, U; Nilsson, J; Andersson, R

    2008-07-01

    An artificial neural network (ANN) is a non-linear pattern recognition technique that is rapidly gaining in popularity in medical decision-making. This study investigated the use of ANNs for diagnostic and prognostic purposes in pancreatic disease, especially acute pancreatitis and pancreatic cancer. PubMed was searched for articles on the use of ANNs in pancreatic diseases using the MeSH terms 'neural networks (computer)', 'pancreatic neoplasms', 'pancreatitis' and 'pancreatic diseases'. A systematic review of the articles was performed. Eleven articles were identified, published between 1993 and 2007. The situations that lend themselves best to analysis by ANNs are complex multifactorial relationships, medical decisions when a second opinion is needed and when automated interpretation is required, for example in a situation of an inadequate number of experts. Conventional linear models have limitations in terms of diagnosis and prediction of outcome in acute pancreatitis and pancreatic cancer. Management of these disorders can be improved by applying ANNs to existing clinical parameters and newly established gene expression profiles. (c) 2008 British Journal of Surgery Society Ltd. Published by John Wiley & Sons, Ltd.

  2. BOUNDARY DEPTH INFORMATION USING HOPFIELD NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    S. Xu

    2016-06-01

    Full Text Available Depth information is widely used for the representation, reconstruction and modeling of 3D scenes. Generally, two kinds of methods can obtain depth information. One uses the distance cues from a depth camera, but the results heavily depend on the device, and the accuracy degrades greatly as the distance from the object increases. The other uses the binocular cues from matching to obtain the depth information. It has become increasingly mature and convenient to collect the depth information of different scenes by stereo matching methods. In the objective function, the data term ensures that the difference between matched pixels is small, and the smoothness term smooths the neighbors with different disparities. Nonetheless, the smoothness term blurs the boundary depth information of the object, which becomes the bottleneck of stereo matching. This paper proposes a novel energy function for the boundary to keep the discontinuities and uses the Hopfield neural network to solve the optimization. We first extract the regions of interest, which are the boundary pixels in the original images. Then, we develop the boundary energy function to calculate the matching cost. At last, we solve the optimization globally with the Hopfield neural network. The Middlebury stereo benchmark is used to test the proposed method, and results show that our boundary depth information is more accurate than that of other state-of-the-art methods and can be used to optimize the results of other stereo matching methods.

  3. Maximum Entropy Approaches to Living Neural Networks

    Directory of Open Access Journals (Sweden)

    John M. Beggs

    2010-01-01

    Full Text Available Understanding how ensembles of neurons collectively interact will be a key step in developing a mechanistic theory of cognitive processes. Recent progress in multineuron recording and analysis techniques has generated tremendous excitement over the physiology of living neural networks. One of the key developments driving this interest is a new class of models based on the principle of maximum entropy. Maximum entropy models have been reported to account for spatial correlation structure in ensembles of neurons recorded from several different types of data. Importantly, these models require only information about the firing rates of individual neurons and their pairwise correlations. If this approach is generally applicable, it would drastically simplify the problem of understanding how neural networks behave. Given the interest in this method, several groups now have worked to extend maximum entropy models to account for temporal correlations. Here, we review how maximum entropy models have been applied to neuronal ensemble data to account for spatial and temporal correlations. We also discuss criticisms of the maximum entropy approach that argue that it is not generally applicable to larger ensembles of neurons. We conclude that future maximum entropy models will need to address three issues: temporal correlations, higher-order correlations, and larger ensemble sizes. Finally, we provide a brief list of topics for future research.
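
    As a concrete reference point, a small numpy sketch of the pairwise maximum entropy (Ising-like) model $P(s) \propto \exp(\sum_i h_i s_i + \sum_{i<j} J_{ij} s_i s_j)$, evaluated exactly for a handful of neurons; in practice h and J would be fitted to the measured firing rates and pairwise correlations, whereas the values below are random placeholders.

```python
import itertools
import numpy as np

def pairwise_maxent_probs(h, J):
    """Exact probabilities of the pairwise maximum entropy model
    P(s) = exp(sum_i h_i*s_i + 0.5*sum_{i!=j} J_ij*s_i*s_j) / Z for binary
    words s in {0,1}^n. Only feasible for small n, where the partition
    function Z can be summed exactly over all 2^n states."""
    n = len(h)
    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    energies = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    weights = np.exp(energies)
    return states, weights / weights.sum()

# toy example with 5 "neurons" and random placeholder parameters
rng = np.random.default_rng(6)
h = rng.normal(-1.0, 0.5, size=5)
J = rng.normal(0.0, 0.3, size=(5, 5))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)
states, p = pairwise_maxent_probs(h, J)
print(p.sum(), states[p.argmax()])               # probabilities sum to 1
```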

  4. Neural network analysis for hazardous waste characterization

    Energy Technology Data Exchange (ETDEWEB)

    Misra, M.; Pratt, L.Y.; Farris, C. [Colorado School of Mines, Golden, CO (United States)] [and others]

    1995-12-31

    This paper is a summary of our work in developing a system for interpreting electromagnetic (EM) and magnetic sensor information from the dig face characterization experimental cell at INEL to determine the depth and nature of buried objects. This project contained three primary components: (1) development and evaluation of several geophysical interpolation schemes for correcting missing or noisy data, (2) development and evaluation of several wavelet compression schemes for removing redundancies from the data, and (3) construction of two neural networks that used the results of steps (1) and (2) to determine the depth and nature of buried objects. This work is a proof-of-concept study that demonstrates the feasibility of this approach. The resulting system was able to determine the nature of buried objects correctly 87% of the time and was able to locate a buried object to within an average error of 0.8 feet. These statistics were gathered based on a large test set and so can be considered reliable. Considering the limited nature of this study, these results strongly indicate the feasibility of this approach, and the importance of appropriate preprocessing of neural network input data.
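    The following is a loose sketch of the wavelet-compression preprocessing step described above, applied to a synthetic one-dimensional sensor trace with PyWavelets; the wavelet family, decomposition level, and threshold are illustrative placeholders rather than the settings of the INEL study, and the surviving coefficients would form the neural network input.

      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.normal(size=256)

      coeffs = pywt.wavedec(signal, 'db4', level=4)            # multi-level wavelet decomposition
      compressed = [pywt.threshold(c, 0.2, mode='soft') for c in coeffs]

      kept = sum(int(np.count_nonzero(c)) for c in compressed)
      total = sum(c.size for c in coeffs)
      print(f"kept {kept}/{total} coefficients after thresholding")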

  5. Object Classification Using Substance Based Neural Network

    Directory of Open Access Journals (Sweden)

    P. Sengottuvelan

    2014-01-01

    Full Text Available Object recognition has shown tremendous growth in the field of image analysis. The required set of image objects is identified and retrieved on the basis of object recognition. In this paper, we propose a novel classification technique called substance based image classification (SIC) using a wavelet neural network. The foremost task of SIC is to remove the surrounding regions from an image in order to reduce the misclassified portion and to reflect the shape of an object effectively. At first, the image to be extracted is processed by the SIC system through segmentation of the image. Next, in order to attain more accurate information from the extracted set of regions, the wavelet transform is applied to extract the configured set of features. Finally, using the neural network classifier model, misclassification over the given natural images is reduced and background regions are removed from the natural image using LSEG segmentation. Moreover, to increase the accuracy of object classification, the SIC system involves the removal of the surrounding regions of the image. Performance evaluation reveals that the proposed SIC system reduces the occurrence of misclassification and reflects the exact shape of an object by approximately 10–15%.

  6. Energy coding in neural network with inhibitory neurons.

    Science.gov (United States)

    Wang, Ziyin; Wang, Rubin; Fang, Ruiyan

    2015-04-01

    This paper aimed at assessing and comparing the effects of inhibitory neurons on the neural energy distribution in a neural network, as well as the network activities in the absence of inhibitory neurons, in order to understand the nature of neural energy distribution and neural energy coding. Under stimulation, synchronous oscillation differs significantly between neural networks with and without inhibitory neurons, and this difference can be quantitatively evaluated by the characteristic energy distribution. In addition, the difference in synchronous oscillation of the neural activity can be quantitatively described by the change of the energy distribution as the network parameters are gradually adjusted. Compared with the traditional method of correlation coefficient analysis, quantitative indicators based on the energy distribution characteristics are more effective in reflecting the dynamic features of the neural network activities. Meanwhile, this coding method, which takes a global perspective of neural activity, effectively avoids the current defects of neural encoding and decoding theory and the enormous difficulties they encounter. Our studies have shown that neural energy coding is a new coding theory with high efficiency and great potential.

  7. Learning of N-layers neural network

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2005-01-01

    Full Text Available In the last decade we can observe an increasing number of applications based on Artificial Intelligence that are designed to solve problems from different areas of human activity. The reason for the interest in these technologies is that classical solution methods either do not exist or are not sufficiently robust. They are often used in applications like Business Intelligence, which make it possible to obtain useful information for high-quality decision-making and to increase competitive advantage. One of the most widespread tools of Artificial Intelligence is the artificial neural network. Its great advantage is relative simplicity and the possibility of self-learning from a set of pattern situations. The algorithm most commonly used for the learning phase is back-propagation of error (BPE). The basis of BPE is the minimization of an error function representing the sum of squared errors on the outputs of the neural net over all patterns of the learning set. However, when performing BPE for the first time, one finds that the handling of the learning factor must be completed by a suitable method; the stability of the learning process and the rate of convergence depend on the selected method. In the article two functions are derived: one for managing the learning process when the error function value is relatively large, and a second for when the value of the error function approaches the global minimum. The aim of the article is to introduce the BPE algorithm in compact matrix form for multilayer neural networks, to derive the learning factor handling method, and to present the results.
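    As a concrete illustration of a matrix-form BPE update with an adaptive learning factor, the sketch below trains a small sigmoid network in NumPy and simply grows the learning rate when the sum of squared errors decreases and shrinks it otherwise. This generic rule stands in for the two management functions derived in the article, and the toy data and network size are assumptions.

      import numpy as np

      def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))

      def forward(W, x):
          acts = [x]
          for Wl in W:
              acts.append(sigmoid(acts[-1] @ Wl))
          return acts

      def backprop(W, x, t):
          acts = forward(W, x)
          delta = (acts[-1] - t) * acts[-1] * (1 - acts[-1])   # output-layer delta
          grads = []
          for l in range(len(W) - 1, -1, -1):
              grads.insert(0, acts[l].T @ delta)
              if l > 0:
                  delta = (delta @ W[l].T) * acts[l] * (1 - acts[l])
          return grads, 0.5 * np.sum((acts[-1] - t) ** 2)

      rng = np.random.default_rng(1)
      X = rng.random((8, 2)); T = (X.sum(axis=1, keepdims=True) > 1).astype(float)
      W = [rng.normal(0, 0.5, (2, 5)), rng.normal(0, 0.5, (5, 1))]

      eta, prev_err = 0.5, np.inf
      for epoch in range(2000):
          grads, err = backprop(W, X, T)
          eta = min(eta * 1.05, 2.0) if err < prev_err else eta * 0.5   # learning-factor handling
          W = [Wl - eta * g for Wl, g in zip(W, grads)]
          prev_err = err
      print("final sum of squared errors:", round(err, 4))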

  8. Performance of artificial neural networks and genetical evolved artificial neural networks unfolding techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M. [Escuela Politecnica Superior, Departamento de Electrotecnia y Electronica, Avda. Menendez Pidal s/n, Cordoba (Spain); Martinez B, M. R.; Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Calle Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Gallego D, E.; Lorente F, A. [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, ETSI Industriales, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Mendez V, R.; Los Arcos M, J. M.; Guerrero A, J. E., E-mail: morvymm@yahoo.com.m [CIEMAT, Laboratorio de Metrologia de Radiaciones Ionizantes, Avda. Complutense 22, 28040 Madrid (Spain)

    2011-02-15

    With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least-squares, and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on Artificial Intelligence have been used. Approaches based on genetic algorithms and artificial neural networks (ANNs) have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite the advantages of ANNs, some drawbacks remain, mainly in the design process of the network, e.g. the optimum selection of the architectural and learning parameters. In recent years, hybrid technologies combining ANNs and genetic algorithms have also been utilized. In this work, several ANN topologies were trained and tested, using both conventional ANNs and genetically evolved artificial neural networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. A comparative study of both procedures has been carried out. (Author)
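    A hedged sketch of the unfolding idea follows: an MLP learns the inverse mapping from Bonner sphere count rates to spectrum bins. The response matrix, training spectra, and network size below are synthetic placeholders, not the spectrometer data or the topologies compared in the study.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(42)
      n_spheres, n_bins = 7, 31
      R = rng.random((n_spheres, n_bins))              # placeholder response matrix

      spectra = rng.random((2000, n_bins))             # placeholder training spectra
      spectra /= spectra.sum(axis=1, keepdims=True)
      counts = spectra @ R.T                           # forward model: count rates

      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      net.fit(counts, spectra)                         # learn the inverse mapping

      test = rng.random(n_bins); test /= test.sum()
      unfolded = net.predict((test @ R.T).reshape(1, -1))[0]
      print("mean reconstruction error:", np.abs(unfolded - test).mean().round(4))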

  9. Network cluster detecting in associated bi-graph picture

    CERN Document Server

    He, Zhe; Xu, Rui-Jie; Wang, Bing-Hong; Ou-Yang, Zhong-Can

    2014-01-01

    We find that there is a close relationship between the associated bigraph and clustering: the embedding of the bigraph into a suitable space can identify the clusters. Thus, we propose a new method for network cluster detection through the associated bigraph, whose physical meaning is clear and whose time complexity is acceptable. These characteristics help to reveal the structure and character of networks. As examples, we uncover the clusters of several real networks in this paper. The Zachary network, which represents the structure of a karate club, is correctly partitioned into two clusters by this method, and the dolphin network is partitioned reasonably.

  10. Semantic segmentation of bioimages using convolutional neural networks

    CSIR Research Space (South Africa)

    Wiehman, S

    2016-07-01

    Full Text Available Convolutional neural networks have shown great promise in both general image segmentation problems as well as bioimage segmentation. In this paper, the application of different convolutional network architectures is explored on the C. elegans live...

  11. Artificial neural networks with an infinite number of nodes

    Science.gov (United States)

    Blekas, K.; Lagaris, I. E.

    2017-10-01

    A new class of Artificial Neural Networks is described incorporating a node density function and functional weights. This network, containing an infinite number of nodes, excels in generalizing and possesses a superior extrapolation capability.

  12. Altered Synchronizations among Neural Networks in Geriatric Depression.

    Science.gov (United States)

    Wang, Lihong; Chou, Ying-Hui; Potter, Guy G; Steffens, David C

    2015-01-01

    Although major depression has been considered as a manifestation of discoordinated activity between affective and cognitive neural networks, only a few studies have examined the relationships among neural networks directly. Because of the known disconnection theory, geriatric depression could be a useful model in studying the interactions among different networks. In the present study, using independent component analysis to identify intrinsically connected neural networks, we investigated the alterations in synchronizations among neural networks in geriatric depression to better understand the underlying neural mechanisms. Resting-state fMRI data was collected from thirty-two patients with geriatric depression and thirty-two age-matched never-depressed controls. We compared the resting-state activities between the two groups in the default-mode, central executive, attention, salience, and affective networks as well as correlations among these networks. The depression group showed stronger activity than the controls in an affective network, specifically within the orbitofrontal region. However, unlike the never-depressed controls, the geriatric depression group lacked synchronized/antisynchronized activity between the affective network and the other networks. Those depressed patients with lower executive function had greater synchronization between the salience network and the executive and affective networks. Our results demonstrate the effectiveness of the between-network analyses in examining neural models for geriatric depression.

  13. Adaptive training of feedforward neural networks by Kalman filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering; Tuerkcan, E. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands)

    1995-02-01

    Adaptive training of feedforward neural networks by Kalman filtering is described. Adaptive training is particularly important for estimation by neural networks in a real-time environment, where the trained network is used for system estimation while it continues to be trained with the information provided by ongoing operation. As a result, the neural network adapts itself to a changing environment and performs its mission without recourse to re-training. The performance of the training method is demonstrated by means of actual process signals from a nuclear power plant. (orig.).
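    A bare-bones sketch of extended-Kalman-filter weight training for a one-output feedforward network is given below: the weights form the filter state, each desired output is a measurement, and the measurement Jacobian is taken numerically. The toy signal, network size, and noise settings are illustrative assumptions, not the plant signals of the study.

      import numpy as np

      HIDDEN = 5

      def net_out(w, x):
          W1 = w[:x.size * HIDDEN].reshape(x.size, HIDDEN)
          b1 = w[x.size * HIDDEN:x.size * HIDDEN + HIDDEN]
          W2 = w[-HIDDEN:]
          return np.tanh(x @ W1 + b1) @ W2

      def jacobian(w, x, eps=1e-6):
          base, J = net_out(w, x), np.zeros_like(w)
          for i in range(w.size):                      # numerical d(output)/d(weights)
              wp = w.copy(); wp[i] += eps
              J[i] = (net_out(wp, x) - base) / eps
          return J

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, (200, 2))
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1]              # toy process signal

      n_w = 2 * HIDDEN + HIDDEN + HIDDEN
      w = rng.normal(0, 0.3, n_w)
      P = np.eye(n_w)                                  # weight covariance
      R = 0.01                                         # measurement noise variance

      for x_t, y_t in zip(X, y):                       # sample-by-sample adaptive update
          H = jacobian(w, x_t)
          S = H @ P @ H + R
          K = P @ H / S
          w = w + K * (y_t - net_out(w, x_t))
          P = P - np.outer(K, H @ P)
      print("error on last sample:", abs(net_out(w, X[-1]) - y[-1]).round(4))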

  14. Complex brain networks: From topological communities to clustered ...

    Indian Academy of Sciences (India)

    We investigate synchronisation dynamics on the corticocortical network of the cat by modelling each node of the network (cortical area) with a subnetwork of interacting excitable neurons. We find that this network of networks displays clustered synchronisation behaviour and the dynamical clusters closely coincide with the ...

  15. ANNA: A Convolutional Neural Network Code for Spectroscopic Analysis

    Science.gov (United States)

    Lee-Brown, Donald; Anthony-Twarog, Barbara J.; Twarog, Bruce A.

    2018-01-01

    We present ANNA, a Python-based convolutional neural network code for the automated analysis of stellar spectra. ANNA provides a flexible framework that allows atmospheric parameters such as temperature and metallicity to be determined with accuracies comparable to those of established but less efficient techniques. ANNA performs its parameterization extremely quickly; typically several thousand spectra can be analyzed in less than a second. Additionally, the code incorporates features which greatly speed up the training process necessary for the neural network to measure spectra accurately, resulting in a tool that can easily be run on a single desktop or laptop computer. Thus, ANNA is useful in an era when spectrographs increasingly have the capability to collect dozens to hundreds of spectra each night. This talk will cover the basic features included in ANNA and demonstrate its performance in two use cases: an open cluster abundance analysis involving several hundred spectra, and a metal-rich field star study. Applicability of the code to large survey datasets will also be discussed.

  16. Automated Modeling of Microwave Structures by Enhanced Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2006-12-01

    Full Text Available The paper describes the methodology of the automated creation of neural models of microwave structures. During the creation process, artificial neural networks are trained using the combination of particle swarm optimization and the quasi-Newton method to avoid critical training problems of conventional neural nets. In the paper, neural networks are used to approximate the behavior of a planar microwave filter (moment method, Zeland IE3D). In order to evaluate the efficiency of neural modeling, global optimizations are performed using both numerical models and neural ones, and the two approaches are compared from the viewpoint of CPU-time demands and accuracy. In the conclusions, methodological recommendations for incorporating neural networks into microwave design are formulated.
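    The hybrid training strategy can be illustrated with a generic sketch: a small particle swarm explores the weight space of a tiny network, and the best particle is then refined with a quasi-Newton (BFGS) step via SciPy. The network size, swarm settings, and toy regression data are assumptions; the paper itself trains neural models of a planar microwave filter.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      X = rng.uniform(-1, 1, (50, 1))
      y = np.sin(3 * X[:, 0])

      def loss(w):                                     # 1-5-1 tanh network, mean squared error
          W1, b1, W2, b2 = w[:5], w[5:10], w[10:15], w[15]
          return np.mean((np.tanh(X * W1 + b1) @ W2 + b2 - y) ** 2)

      n_w, n_particles = 16, 20
      pos = rng.normal(0, 1, (n_particles, n_w))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
      gbest = pbest[pbest_val.argmin()].copy()

      for _ in range(100):                             # particle swarm phase
          r1, r2 = rng.random((2, n_particles, n_w))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos += vel
          vals = np.array([loss(p) for p in pos])
          better = vals < pbest_val
          pbest[better], pbest_val[better] = pos[better], vals[better]
          gbest = pbest[pbest_val.argmin()].copy()

      refined = minimize(loss, gbest, method='BFGS')   # quasi-Newton refinement
      print("PSO loss:", round(loss(gbest), 4), "-> BFGS loss:", round(refined.fun, 4))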

  17. Neural networks to formulate special fats

    Directory of Open Access Journals (Sweden)

    Garcia, R. K.

    2012-09-01

    Full Text Available Neural networks are a branch of artificial intelligence based on the structure and development of biological systems, having as its main characteristic the ability to learn and generalize knowledge. They are used for solving complex problems for which traditional computing systems have a low efficiency. To date, applications have been proposed for different sectors and activities. In the area of fats and oils, the use of neural networks has focused mainly on two issues: the detection of adulteration and the development of fatty products. The formulation of fats for specific uses is the classic case of a complex problem where an expert or group of experts defines the proportions of each base, which, when mixed, provide the specifications for the desired product. Some conventional computer systems are currently available to assist the experts; however, these systems have some shortcomings. This article describes in detail a system for formulating fatty products, shortenings or special fats, from three or more components by using neural networks (MIX. All stages of development, including design, construction, training, evaluation, and operation of the network will be outlined.


  18. Rod-Shaped Neural Units for Aligned 3D Neural Network Connection.

    Science.gov (United States)

    Kato-Negishi, Midori; Onoe, Hiroaki; Ito, Akane; Takeuchi, Shoji

    2017-08-01

    This paper proposes neural tissue units with aligned nerve fibers (called rod-shaped neural units) that connect neural networks with aligned neurons. To make the proposed units, 3D fiber-shaped neural tissues covered with a calcium alginate hydrogel layer are prepared with a microfluidic system and are cut in an accurate and reproducible manner. These units have aligned nerve fibers inside the hydrogel layer and connectable points on both ends. By connecting the units with a poly(dimethylsiloxane) guide, 3D neural tissues can be constructed and maintained for more than two weeks of culture. In addition, neural networks can be formed between the different neural units via synaptic connections. Experimental results indicate that the proposed rod-shaped neural units are effective tools for the construction of spatially complex connections with aligned nerve fibers in vitro. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Multiple image sensor data fusion through artificial neural networks

    Science.gov (United States)

    With multisensor data fusion technology, the data from multiple sensors are fused in order to make a more accurate estimation of the environment through measurement, processing and analysis. Artificial neural networks are the computational models that mimic biological neural networks. With high per...

  20. Behaviour in O of the Neural Networks Training Cost

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1998-01-01

    We study the behaviour in zero of the derivatives of the cost function used when training non-linear neural networks. It is shown that a fair number of first, second and higher order derivatives vanish in zero, validating the belief that 0 is a peculiar and potentially harmful location. These calculations are related to practical and theoretical aspects of neural networks training.

  1. Neural network model to control an experimental chaotic pendulum

    NARCIS (Netherlands)

    Bakker, R; Schouten, JC; Takens, F; vandenBleek, CM

    1996-01-01

    A feedforward neural network was trained to predict the motion of an experimental, driven, and damped pendulum operating in a chaotic regime. The network learned the behavior of the pendulum from a time series of the pendulum's angle, the single measured variable. The validity of the neural

  2. Classification of Urinary Calculi using Feed-Forward Neural Networks

    African Journals Online (AJOL)

    In this work the results of classification of these types of calculi (using their infrared spectra in the region 1450–450 cm–1) by feed-forward neural networks are presented. Genetic algorithms were used for optimization of neural networks and for selection of the spectral regions most suitable for classification purposes.

  3. Optimal Brain Surgeon on Artificial Neural Networks in

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Job, Jonas Hultmann; Klyver, Katrine

    2012-01-01

    It is shown how the procedure known as optimal brain surgeon can be used to trim and optimize artificial neural networks in nonlinear structural dynamics. Besides optimizing the neural network, and thereby minimizing computational cost in simulation, the surgery procedure can also serve as a quick...

  4. Neural networks as a tool for unit commitment

    DEFF Research Database (Denmark)

    Rønne-Hansen, Peter; Rønne-Hansen, Jan

    1991-01-01

    Some of the fundamental problems when solving the power system unit commitment problem by means of neural networks have been attacked. It has been demonstrated for a small example that neural networks might be a viable alternative. Some of the major problems solved in this initiating phase form...

  5. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    1995-01-01

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  6. Classes of feedforward neural networks and their circuit complexity

    NARCIS (Netherlands)

    Shawe-Taylor, John S.; Anthony, Martin H.G.; Kern, Walter

    1992-01-01

    This paper aims to place neural networks in the context of boolean circuit complexity. We define appropriate classes of feedforward neural networks with specified fan-in, accuracy of computation and depth and using techniques of communication complexity proceed to show that the classes fit into a

  7. Identification of Non-Linear Structures using Recurrent Neural Networks

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Nielsen, Søren R. K.; Hansen, H. I.

    Two different partially recurrent neural networks structured as Multi Layer Perceptrons (MLP) are investigated for time domain identification of a non-linear structure.

  8. Boosted jet identification using particle candidates and deep neural networks

    CERN Document Server

    CMS Collaboration

    2017-01-01

    This note presents developments for the identification of hadronically decaying top quarks using deep neural networks in CMS. A new method that utilizes one dimensional convolutional neural networks based on jet constituent particles is proposed. Alternative methods using boosted decision trees based on jet observables are compared. The new method shows significant improvement in performance.

  9. Mapping Neural Network Derived from the Parzen Window Estimator

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Hartmann, U.

    1992-01-01

    The article presents a general theoretical basis for the construction of mapping neural networks. The theory is based on the Parzen Window estimator for...

  10. Implementation of neural network based non-linear predictive control

    DEFF Research Database (Denmark)

    Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole

    1999-01-01

    of non-linear systems. GPC is model based and in this paper we propose the use of a neural network for the modeling of the system. Based on the neural network model, a controller with extended control horizon is developed and the implementation issues are discussed, with particular emphasis...

  11. Neural Network for Optimization of Existing Control Systems

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    1995-01-01

    The purpose of this paper is to develop methods to use Neural Network based Controllers (NNC) as an optimization tool for existing control systems.

  12. Using Neural Networks to Predict MBA Student Success

    Science.gov (United States)

    Naik, Bijayananda; Ragothaman, Srinivasan

    2004-01-01

    Predicting MBA student performance for admission decisions is crucial for educational institutions. This paper evaluates the ability of three different models (neural networks, logit, and probit) to predict MBA student performance in graduate programs. The neural network technique was used to classify applicants into successful and marginal student…

  13. Artificial Neural Network Modeling of an Inverse Fluidized Bed ...

    African Journals Online (AJOL)

    MICHAEL

    modeling of the inverse fluidized bed reactor. In the proposed model, the trained neural network represents the kinetics of biological decomposition of pollutants in the reactor. The neural network has been trained with experimental data obtained from an inverse fluidized bed reactor treating the starch industry wastewater.

  14. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)

    user

    Consequently, many structures based on artificial neural networks (ANN) have been developed in the literature, the most significant ... Keywords: Artificial Neural Networks (ANN), p-q theory, (SAPF), Harmonics, Total Harmonic Distortion.

  15. Application of Neural Networks to House Pricing and Bond Rating

    NARCIS (Netherlands)

    Daniëls, H.A.M.; Kamp, B.; Verkooijen, W.J.H.

    1997-01-01

    Feed forward neural networks receive growing attention as a data modelling tool in economic classification problems. It is well known that controlling the design of a neural network can be cumbersome. Inaccuracies may lead to a manifold of problems in the application, such as higher errors due to

  16. Neural Networks for Language Identification: A Comparative Study.

    Science.gov (United States)

    MacNamara, Shane; Cunningham, Padraig; Byrne, John

    1998-01-01

    Analyzes a neural network for its ability to perform a task involving identification of the language entries in a 19th-century library catalog containing entries in 14 different languages. Compares the neural network's performance with that of trigrams and a suffix/morphology analysis; the trigrams prove to be superior. (AEF)

  17. Multilayer perceptron neural network for downscaling rainfall in arid ...

    Indian Academy of Sciences (India)

    Multilayer perceptron neural network for downscaling rainfall in arid region: A case study of Baluchistan, Pakistan ... A multilayer perceptron (MLP) neural network has been proposed in the present study for the downscaling of rainfall in the data scarce arid region of Baluchistan province of Pakistan, which is considered as ...

  18. Application of a neural network for reflectance spectrum classification

    Science.gov (United States)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum anywhere from the ultraviolet to the thermal infrared regions. These methods analyze reflectance on a pixel by pixel basis. Inspired by the high performance that convolutional neural networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the 4-dimensional quantity into 2 dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples for improving the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods that utilize spatial features. Although the training process typically has a large computational cost, simple classifiers work well when subsequently using neural network generated features. Currently, most popular neural networks such as VGG, GoogLeNet and AlexNet are trained on RGB spatial image data. Our approach aims to build a directional reflectance spectrum based neural network to provide understanding from another perspective. At the end of this paper, we compare the differences among several classifiers and analyze the trade-offs among neural network parameters.

  19. Artificial neural networks in predicting current in electric arc furnaces

    Science.gov (United States)

    Panoiu, M.; Panoiu, C.; Iordan, A.; Ghiormez, L.

    2014-03-01

    The paper presents a study of the possibility of using artificial neural networks for the prediction of the current and the voltage of Electric Arc Furnaces. Multi-layer perceptron and radial based functions Artificial Neural Networks implemented in Matlab were used. The study is based on measured data items from an Electric Arc Furnace in an industrial plant in Romania.

  20. Parameter estimation of an aeroelastic aircraft using neural networks

    Indian Academy of Sciences (India)

    Application of neural networks to the problem of aerodynamic modelling and parameter estimation for aeroelastic aircraft is addressed. A neural model capable of ... of the network in terms of the number of neurons in the hidden layer, the learning rate, the momentum rate etc. is not an exact ...

  1. Artificial neural networks for prediction of percentage of water ...

    Indian Academy of Sciences (India)

    According to these input parameters, in the neural networks model, the percentage of water absorption of each specimen was predicted. The training and testing results in the neural networks model have shown a strong potential for predicting the percentage of water absorption of the geopolymer specimens.

  2. Comparative performance of some popular artificial neural network ...

    Indian Academy of Sciences (India)

    Comparative performance of some popular artificial neural network algorithms on benchmark and function approximation problems ... dynamic range of these functions, it is suggested that these functions can also be considered as standard benchmark problems for function approximation using artificial neural networks.

  3. Spacecraft Neural Network Control System Design using FPGA

    OpenAIRE

    Hanaa T. El-Madany; Faten H. Fahmy; Ninet M. A. El-Rahman; Hassen T. Dorrah

    2011-01-01

    Designing and implementing intelligent systems has become a crucial factor for the innovation and development of better products of space technologies. A neural network is a parallel system, capable of resolving paradigms that linear computing cannot. Field programmable gate array (FPGA) is a digital device that owns reprogrammable properties and robust flexibility. For the neural network based instrument prototype in real time application, conventional specific VLSI neural chip design suffer...

  4. IDENTIFICATION AND CONTROL OF AN ASYNCHRONOUS MACHINE USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A ZERGAOUI

    2000-06-01

    Full Text Available In this work, we present the application of artificial neural networks to the identification and control of the asynchronous motor, which is a complex nonlinear system with variable internal dynamics.  We show that neural networks can be applied to control the stator currents of the induction motor.  The results of the different simulations are presented to evaluate the performance of the neural controller proposed.

  5. Efficiently passing messages in distributed spiking neural network simulation.

    Science.gov (United States)

    Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
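    One of the simplest exchange patterns in this setting, a collective gather of the neuron ids that fired on each rank during a time step, can be sketched with mpi4py as follows; neuron counts and firing probabilities are placeholder values, and the point-to-point and hybrid mechanisms benchmarked in the paper are not shown.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      local_neurons = 1000                             # neurons simulated on this rank
      rng = np.random.default_rng(rank)

      for step in range(10):
          fired = np.nonzero(rng.random(local_neurons) < 0.02)[0]
          global_ids = fired + rank * local_neurons    # map to global neuron ids
          all_spikes = comm.allgather(global_ids)      # every rank receives every list
          incoming = np.concatenate(all_spikes)
          # ... deliver `incoming` spikes to local synapses here ...

      if rank == 0:
          print("last step delivered", incoming.size, "spikes across", size, "ranks")

    Run, for example, with "mpirun -n 4 python spike_exchange.py"; the script name is hypothetical.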

  6. A Novel Neural Network for Generally Constrained Variational Inequalities.

    Science.gov (United States)

    Gao, Xingbao; Liao, Li-Zhi

    2017-09-01

    This paper presents a novel neural network for solving generally constrained variational inequality problems by constructing a system of double projection equations. By defining proper convex energy functions, the proposed neural network is proved to be stable in the sense of Lyapunov and converges to an exact solution of the original problem for any starting point under the weaker cocoercivity condition or the monotonicity condition of the gradient mapping on the linear equation set. Furthermore, two sufficient conditions are provided to ensure the stability of the proposed neural network for a special case. The proposed model overcomes some shortcomings of existing continuous-time neural networks for constrained variational inequality, and its stability only requires some monotonicity conditions of the underlying mapping and the concavity of nonlinear inequality constraints on the equation set. The validity and transient behavior of the proposed neural network are demonstrated by some simulation results.

  7. Neural network for constrained nonsmooth optimization using Tikhonov regularization.

    Science.gov (United States)

    Qin, Sitian; Fan, Dejun; Wu, Guangxi; Zhao, Lijun

    2015-03-01

    This paper presents a one-layer neural network to solve nonsmooth convex optimization problems based on the Tikhonov regularization method. Firstly, it is shown that the optimal solution of the original problem can be approximated by the optimal solution of a strongly convex optimization problems. Then, it is proved that for any initial point, the state of the proposed neural network enters the equality feasible region in finite time, and is globally convergent to the unique optimal solution of the related strongly convex optimization problems. Compared with the existing neural networks, the proposed neural network has lower model complexity and does not need penalty parameters. In the end, some numerical examples and application are given to illustrate the effectiveness and improvement of the proposed neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Obstacle avoidance for power wheelchair using bayesian neural network.

    Science.gov (United States)

    Trieu, Hoang T; Nguyen, Hung T; Willey, Keith

    2007-01-01

    In this paper we present a real-time obstacle avoidance algorithm using a Bayesian neural network for a laser based wheelchair system. The raw laser data is modified to accommodate the wheelchair dimensions, allowing the free-space to be determined accurately in real-time. Data acquisition is performed to collect the patterns required for training the neural network. A Bayesian framework is applied to determine the optimal neural network structure for the training data. This neural network is trained under the supervision of the Bayesian rule and the obstacle avoidance task is then implemented for the wheelchair system. Initial results suggest this approach provides an effective solution for autonomous tasks, suggesting Bayesian neural networks may be useful for wider assistive technology applications.

  9. 23rd Workshop of the Italian Neural Networks Society (SIREN)

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2014-01-01

    This volume collects a selection of contributions which have been presented at the 23rd Italian Workshop on Neural Networks, the yearly meeting of the Italian Society for Neural Networks (SIREN). The conference was held in Vietri sul Mare, Salerno, Italy during May 23-24, 2013. The annual meeting of SIREN is sponsored by the International Neural Network Society (INNS), the European Neural Network Society (ENNS) and the IEEE Computational Intelligence Society (CIS). The book, as well as the workshop, is organized in two main components: a special session and a group of regular sessions featuring different aspects and points of view of artificial neural networks, artificial and natural intelligence, as well as psychological and cognitive theories for modeling human behaviors and human machine interactions, including Information Communication applications of compelling interest.

  10. Investigation of efficient features for image recognition by neural networks.

    Science.gov (United States)

    Goltsev, Alexander; Gritsenko, Vladimir

    2012-04-01

    In the paper, effective and simple features for image recognition (named LiRA-features) are investigated in the task of handwritten digit recognition. Two neural network classifiers are considered: a modified 3-layer perceptron LiRA and a modular assembly neural network. A method of feature selection is proposed that analyses connection weights formed in the preliminary learning process of a neural network classifier. In the experiments using the MNIST database of handwritten digits, the feature selection procedure allows reduction of feature number (from 60 000 to 7000) preserving comparable recognition capability while accelerating computations. Experimental comparison between the LiRA perceptron and the modular assembly neural network is accomplished, which shows that recognition capability of the modular assembly neural network is somewhat better. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Issues in the use of neural networks in information retrieval

    CERN Document Server

    Iatan, Iuliana F

    2017-01-01

    This book highlights the ability of neural networks (NNs) to be excellent pattern matchers and their importance in information retrieval (IR), which is based on index term matching. The book defines a new NN-based method for learning image similarity and describes how to use fuzzy Gaussian neural networks to predict personality. It introduces the fuzzy Clifford Gaussian network, and two concurrent neural models: (1) concurrent fuzzy nonlinear perceptron modules, and (2) concurrent fuzzy Gaussian neural network modules. Furthermore, it explains the design of a new model of fuzzy nonlinear perceptron based on alpha level sets and describes a recurrent fuzzy neural network model with a learning algorithm based on the improved particle swarm optimization method.

  12. Neural Networks for Synthesis and Optimization of Antenna Arrays

    Directory of Open Access Journals (Sweden)

    S. A. Djennas

    2007-04-01

    Full Text Available This paper describes a typical application of back-propagation neural networks for the synthesis and optimization of antenna arrays. The neural network is able to model and to optimize antenna arrays by acting on radioelectric or geometric parameters and by taking into account predetermined general criteria. The neural network not only establishes important analytical equations for the optimization step, but also provides great flexibility between the input and output system parameters. This optimization step then becomes possible due to the explicit relation given by the neural network. According to different formulations of the synthesis problem, such as acting on the feed law (amplitude and/or phase) and/or the spatial position of the radiating sources, results on antenna array synthesis and optimization by neural networks are presented and discussed. Moreover, the ANN is able to generate the synthesis results very quickly compared to other approaches.

  13. On Learning Cluster Coefficient of Private Networks.

    Science.gov (United States)

    Wang, Yue; Wu, Xintao; Zhu, Jun; Xiang, Yang

    2012-01-01

    Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as clustering coefficient or modularity often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we treat a graph statistic as a function f and develop a divide and conquer approach to enforce differential privacy. The basic procedure of this approach is to first decompose the target computation f into several less complex unit computations f1, …, fm connected by basic mathematical operations (e.g., addition, subtraction, multiplication, division), then perturb the output of each fi with Laplace noise derived from its own sensitivity value and the distributed privacy threshold ε_i, and finally combine those perturbed fi as the perturbed output of computation f. We examine how various operations affect the accuracy of complex computations. When unit computations have large global sensitivity values, we enforce differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We illustrate our approach by using the clustering coefficient, which is a popular statistic used in social network analysis. Empirical evaluations on five real social networks and various synthetic graphs generated from three random graph models show that the developed divide and conquer approach outperforms the direct approach.
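    As a simple illustration of the underlying mechanism (not the paper's divide-and-conquer calibration), the snippet below releases a graph's average clustering coefficient with Laplace noise scaled by a sensitivity estimate; the sensitivity value here is a placeholder, whereas the paper derives tighter smooth-sensitivity bounds.

      import numpy as np
      import networkx as nx

      G = nx.karate_club_graph()
      true_cc = nx.average_clustering(G)

      epsilon = 1.0                                    # privacy budget
      sensitivity = 0.1                                # placeholder sensitivity bound
      noise = np.random.default_rng(0).laplace(scale=sensitivity / epsilon)

      print(f"true: {true_cc:.3f}, differentially private release: {true_cc + noise:.3f}")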

  14. CytoCluster: A Cytoscape Plugin for Cluster Analysis and Visualization of Biological Networks.

    Science.gov (United States)

    Li, Min; Li, Dongyan; Tang, Yu; Wu, Fangxiang; Wang, Jianxin

    2017-08-31

    Nowadays, cluster analysis of biological networks has become one of the most important approaches to identifying functional modules as well as predicting protein complexes and network biomarkers. Furthermore, the visualization of clustering results is crucial to display the structure of biological networks. Here we present CytoCluster, a cytoscape plugin integrating six clustering algorithms, HC-PIN (Hierarchical Clustering algorithm in Protein Interaction Networks), OH-PIN (identifying Overlapping and Hierarchical modules in Protein Interaction Networks), IPCA (Identifying Protein Complex Algorithm), ClusterONE (Clustering with Overlapping Neighborhood Expansion), DCU (Detecting Complexes based on Uncertain graph model), IPC-MCE (Identifying Protein Complexes based on Maximal Complex Extension), and BinGO (the Biological networks Gene Ontology) function. Users can select different clustering algorithms according to their requirements. The main function of these six clustering algorithms is to detect protein complexes or functional modules. In addition, BinGO is used to determine which Gene Ontology (GO) categories are statistically overrepresented in a set of genes or a subgraph of a biological network. CytoCluster can be easily expanded, so that more clustering algorithms and functions can be added to this plugin. Since it was created in July 2013, CytoCluster has been downloaded more than 9700 times in the Cytoscape App store and has already been applied to the analysis of different biological networks. CytoCluster is available from http://apps.cytoscape.org/apps/cytocluster.

  15. Piecewise convexity of artificial neural networks.

    Science.gov (United States)

    Rister, Blaine; Rubin, Daniel L

    2017-10-01

    Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Using the NeAT toolbox to compare networks to networks, clusters to clusters, and network to clusters.

    Science.gov (United States)

    Brohée, Sylvain

    2012-01-01

    In this chapter, we present and interpret some operations on biological networks that can easily be performed with NeAT, a set of Web tools aimed at studying biological networks (or graphs) and classifications. These approaches are of particular interest for biologists and scientists who need to assess the reliability of new datasets (either experimental or predicted) by comparing them to established references. Firstly, we describe the steps that will allow a nonspecialist user to compare two networks to compute their union and the statistical significance of their intersection. Next, we show how to map functional classes (e.g., GO categories, sets of regulons or complexes) onto a biological network. A third protocol explains how to compare two sets of functional classes, e.g., to assess statistically the biological relevance of some computationally returned groups of genes (clustering). The metrics as well as the results obtained by following the different protocols are extensively described and explained. NeAT is available at the following URL: http://rsat.bigre.ulb.ac.be/rsat/index_neat.html.

  17. A neural network for noise correlation classification

    Science.gov (United States)

    Paitz, Patrick; Gokhberg, Alexey; Fichtner, Andreas

    2018-02-01

    We present an artificial neural network (ANN) for the classification of ambient seismic noise correlations into two categories, suitable and unsuitable for noise tomography. By using only a small manually classified data subset for network training, the ANN allows us to classify large data volumes with low human effort and to encode the valuable subjective experience of data analysts that cannot be captured by a deterministic algorithm. Based on a new feature extraction procedure that exploits the wavelet-like nature of seismic time-series, we efficiently reduce the dimensionality of noise correlation data, still keeping relevant features needed for automated classification. Using global- and regional-scale data sets, we show that classification errors of 20 per cent or less can be achieved when the network training is performed with as little as 3.5 per cent and 16 per cent of the data sets, respectively. Furthermore, the ANN trained on the regional data can be applied to the global data, and vice versa, without a significant increase of the classification error. An experiment where four students manually classified the data, revealed that the classification error they would assign to each other is substantially larger than the classification error of the ANN (>35 per cent). This indicates that reproducibility would be hampered more by human subjectivity than by imperfections of the ANN.

  18. Antagonistic neural networks underlying differentiated leadership roles.

    Science.gov (United States)

    Boyatzis, Richard E; Rochford, Kylie; Jack, Anthony I

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks - the task-positive network (TPN) and the default mode network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions, and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success.

  19. Antagonistic Neural Networks Underlying Differentiated Leadership Roles

    Directory of Open Access Journals (Sweden)

    Richard Eleftherios Boyatzis

    2014-03-01

    Full Text Available The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950’s. Recent research in neuroscience suggests that the division between task oriented and socio-emotional oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks -- the Task Positive Network (TPN and the Default Mode Network (DMN. Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success.

  20. Antagonistic neural networks underlying differentiated leadership roles

    Science.gov (United States)

    Boyatzis, Richard E.; Rochford, Kylie; Jack, Anthony I.

    2014-01-01

    The emergence of two distinct leadership roles, the task leader and the socio-emotional leader, has been documented in the leadership literature since the 1950s. Recent research in neuroscience suggests that the division between task-oriented and socio-emotional-oriented roles derives from a fundamental feature of our neurobiology: an antagonistic relationship between two large-scale cortical networks – the task-positive network (TPN) and the default mode network (DMN). Neural activity in TPN tends to inhibit activity in the DMN, and vice versa. The TPN is important for problem solving, focusing of attention, making decisions, and control of action. The DMN plays a central role in emotional self-awareness, social cognition, and ethical decision making. It is also strongly linked to creativity and openness to new ideas. Because activation of the TPN tends to suppress activity in the DMN, an over-emphasis on task-oriented leadership may prove deleterious to social and emotional aspects of leadership. Similarly, an overemphasis on the DMN would result in difficulty focusing attention, making decisions, and solving known problems. In this paper, we will review major streams of theory and research on leadership roles in the context of recent findings from neuroscience and psychology. We conclude by suggesting that emerging research challenges the assumption that role differentiation is both natural and necessary, in particular when openness to new ideas, people, emotions, and ethical concerns are important to success. PMID:24624074

  1. DATA CLASSIFICATION WITH NEURAL CLASSIFIER USING RADIAL BASIS FUNCTION WITH DATA REDUCTION USING HIERARCHICAL CLUSTERING

    Directory of Open Access Journals (Sweden)

    M. Safish Mary

    2012-04-01

    Full Text Available Classification of a large amount of data is a time consuming process but crucial for analysis and decision making. Radial basis function networks are widely used for classification and regression analysis. In this paper, we have studied the performance of RBF neural networks in classifying the sales of cars based on demand, using a kernel density estimation algorithm which produces classification accuracy comparable to that provided by support vector machines. We have also proposed a new instance based data selection method where redundant instances are removed with the help of a threshold, thus improving the time complexity with improved classification accuracy. The instance based selection of the data set helps reduce the number of clusters formed, thereby reducing the number of centers considered for building the RBF network. Further, the efficiency of the training is improved by applying a hierarchical clustering technique to reduce the number of clusters formed at every step. The paper explains the algorithm used for classification and for conditioning the data. It also explains the complexities involved in classification of sales data for analysis and decision-making.
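    A minimal sketch of the center-selection idea is shown below: hierarchical (agglomerative) clustering picks the RBF centers, a Gaussian design matrix is built, and the output weights are fitted by least squares. The data set, number of centers, and kernel width are illustrative assumptions rather than the car-sales data of the paper.

      import numpy as np
      from sklearn.cluster import AgglomerativeClustering
      from sklearn.datasets import make_classification

      X, y = make_classification(n_samples=300, n_features=4, random_state=0)

      n_centers, width = 10, 1.0
      labels = AgglomerativeClustering(n_clusters=n_centers).fit_predict(X)
      centers = np.array([X[labels == k].mean(axis=0) for k in range(n_centers)])

      def design(A):                                   # Gaussian RBF design matrix
          d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * width ** 2))

      W, *_ = np.linalg.lstsq(design(X), y.astype(float), rcond=None)
      pred = (design(X) @ W > 0.5).astype(int)
      print("training accuracy:", (pred == y).mean())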

  2. New Heterogeneous Clustering Protocol for Prolonging Wireless Sensor Networks Lifetime

    Directory of Open Access Journals (Sweden)

    Md. Golam Rashed

    2014-06-01

    Full Text Available Clustering in wireless sensor networks is one of the crucial methods for increasing network lifetime. The network characteristics of existing classical clustering protocols for wireless sensor networks are homogeneous, and such protocols fail to maintain the stability of the system, especially when nodes are heterogeneous. We have seen that the behavior of the Heterogeneous-Hierarchical Energy Aware Routing Protocol (H-HEARP) becomes very unstable once the first node dies, especially in the presence of node heterogeneity. In this paper we propose a new clustering protocol with heterogeneous network characteristics for prolonging network lifetime. The computer simulation results demonstrate that the proposed clustering algorithm outperforms other clustering algorithms in terms of the time interval before the death of the first node (which we refer to as the stability period). The simulation results also show the high performance of the proposed clustering algorithm for higher values of extra energy brought by more powerful nodes.

  3. An image segmentation method based on network clustering model

    Science.gov (United States)

    Jiao, Yang; Wu, Jianshe; Jiao, Licheng

    2018-01-01

    Network clustering phenomena are ubiquitous in nature and human society. In this paper, a method involving a network clustering model is proposed for mass segmentation in mammograms. First, the watershed transform is used to divide an image into regions, and features of the image are computed. Then a graph is constructed from the obtained regions and features. The network clustering model is applied to realize clustering of nodes in the graph. Compared with two classic methods, the algorithm based on the network clustering model performs more effectively in experiments.
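
    The abstract does not spell out the specific network clustering model, so the sketch below substitutes greedy modularity maximisation from networkx and uses scikit-image for the watershed step; it illustrates the pipeline (over-segmentation, region graph, node clustering) rather than the authors' exact model.

```python
import numpy as np
import networkx as nx
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.measure import regionprops
from skimage.segmentation import watershed
from networkx.algorithms.community import greedy_modularity_communities

def segment(image):
    # 1. Watershed on the gradient image gives an over-segmentation into regions.
    gradient = sobel(image)
    markers, _ = ndi.label(gradient < gradient.mean())
    regions = watershed(gradient, markers)

    # 2. Region-adjacency graph with edges weighted by mean-intensity similarity.
    mean = {r.label: r.mean_intensity
            for r in regionprops(regions, intensity_image=image)}
    pairs = np.vstack([
        np.stack([regions[:, :-1].ravel(), regions[:, 1:].ravel()], axis=1),
        np.stack([regions[:-1, :].ravel(), regions[1:, :].ravel()], axis=1)])
    graph = nx.Graph()
    for a, b in {tuple(p) for p in pairs[pairs[:, 0] != pairs[:, 1]].tolist()}:
        graph.add_edge(a, b, weight=1.0 / (1.0 + abs(mean[a] - mean[b])))

    # 3. Cluster the graph nodes and map every region to its community index.
    communities = greedy_modularity_communities(graph, weight="weight")
    label_of = {lab: i for i, comm in enumerate(communities) for lab in comm}
    return np.vectorize(lambda lab: label_of.get(lab, -1))(regions)
```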

  4. Hybrid Fuzzy Wavelet Neural Networks Architecture Based on Polynomial Neural Networks and Fuzzy Set/Relation Inference-Based Wavelet Neurons.

    Science.gov (United States)

    Huang, Wei; Oh, Sung-Kwun; Pedrycz, Witold

    2017-08-11

    This paper presents a hybrid fuzzy wavelet neural network (HFWNN) realized with the aid of polynomial neural networks (PNNs) and fuzzy inference-based wavelet neurons (FIWNs). Two types of FIWNs are proposed: fuzzy set inference-based wavelet neurons (FSIWNs) and fuzzy relation inference-based wavelet neurons (FRIWNs). In particular, an FIWN without any fuzzy set component (viz., the premise part of a fuzzy rule) becomes a wavelet neuron (WN). To alleviate the limitations of conventional wavelet neural networks or fuzzy wavelet neural networks, whose parameters are determined on a purely random basis, the parameters of the wavelet functions in FIWNs or WNs are initialized using the C-means clustering method. The overall architecture of the HFWNN is similar to that of typical PNNs. The main strategies in the design of the HFWNN are as follows. First, the first layer of the network consists of FIWNs (e.g., FSIWNs or FRIWNs) that are used to reflect the uncertainty of the data, while the second and higher layers consist of WNs, which exhibit a high level of flexibility and realize a linear combination of wavelet functions. Second, the parameters used in the design of the HFWNN are adjusted through genetic optimization. To evaluate the performance of the proposed HFWNN, several publicly available data sets are considered, and a thorough comparative analysis is provided.
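
    As a rough illustration of the initialization idea only, assuming the Mexican-hat function as the mother wavelet and using k-means in place of the paper's C-means: cluster centres seed the translation parameters of the wavelet functions, and cluster spreads seed the dilations, instead of drawing both at random.

```python
import numpy as np
from sklearn.cluster import KMeans

def mexican_hat(z):
    # Mexican-hat (Ricker) mother wavelet; an assumed choice of wavelet.
    return (1.0 - z ** 2) * np.exp(-0.5 * z ** 2)

def init_wavelet_neurons(X, n_neurons=5):
    # Translations come from cluster centres rather than random draws.
    km = KMeans(n_clusters=n_neurons, n_init=10, random_state=0).fit(X)
    translations = km.cluster_centers_
    # Dilations start from the spread of each cluster (small floor avoids /0).
    dilations = np.array([X[km.labels_ == k].std() + 1e-6 for k in range(n_neurons)])
    return translations, dilations

def wavelet_layer(X, translations, dilations):
    # Each neuron outputs a product of 1-D wavelets over the input dimensions.
    z = (X[:, None, :] - translations[None, :, :]) / dilations[None, :, None]
    return mexican_hat(z).prod(axis=2)

X = np.random.default_rng(0).normal(size=(200, 3))
t, d = init_wavelet_neurons(X)
print(wavelet_layer(X, t, d).shape)   # (200, 5)
```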

  5. Combining neural networks for protein secondary structure prediction

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric

    1995-01-01

    In this paper structured neural networks are applied to the problem of predicting the secondary structure of proteins. A hierarchical approach is used where specialized neural networks are designed for each structural class and then combined using another neural network. The submodels are designed...... by using a priori knowledge of the mapping between protein building blocks and the secondary structure and by using weight sharing. Since none of the individual networks have more than 600 adjustable weights over-fitting is avoided. When ensembles of specialized experts are combined the performance...

  6. Ship Benchmark Shaft and Engine Gain FDI Using Neural Network

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Izadi-Zamanabadi, Roozbeh

    2002-01-01

    This paper concerns fault detection and isolation based on neural network modeling. A neural network is trained to recognize the input-output behavior of a nonlinear plant, and faults are detected if the output estimated by the network differs from the measured plant output by more than a specified...... threshold value. In the paper a method for determining this threshold based on the neural network model is proposed, which can be used for a design strategy to handle residual sensitivity to input variations. The proposed method is used for successful FDI of a diesel engine gain fault in a ship propulsion...
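
    The detection logic itself is simple enough to sketch: compare the measured plant output with the neural model's estimate and flag a fault when the residual exceeds a threshold. Here the trained network is stubbed out with an arbitrary function, and a constant threshold stands in for the model-based threshold the paper proposes.

```python
import numpy as np

def detect_faults(model, inputs, measured_outputs, threshold):
    # Flag samples where the plant deviates from its learned input-output model.
    predicted = model(inputs)                 # neural-network estimate of the output
    residual = np.abs(measured_outputs - predicted)
    return residual > threshold               # True marks a suspected fault

# Toy stand-in: the "trained network" is just a known linear map of the input.
model = lambda u: 2.0 * u
u = np.linspace(0.0, 1.0, 50)
y = 2.0 * u
y[30:] += 0.4                                 # simulated gain fault from sample 30 on
print(np.flatnonzero(detect_faults(model, u, y, threshold=0.2)))
```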

  7. Forex Market Prediction Using NARX Neural Network with Bagging

    Directory of Open Access Journals (Sweden)

    Shahbazi Nima

    2016-01-01

    We propose a new method for predicting movements in the Forex market based on a NARX neural network with a time-shifting bagging technique and financial indicators, such as the relative strength index and stochastic indicators. Neural networks have prominent learning ability but often exhibit poor and unpredictable performance on noisy data. Compared with static neural networks, our method significantly reduces the error rate of the response and improves the performance of the prediction. We tested three different types of architecture for predicting the response and determined the best network approach. We applied our method to predicting hourly foreign exchange rates and found remarkable predictability in comprehensive experiments with two different foreign exchange rates (GBPUSD and EURUSD).
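
    A small sketch of the general recipe (lagged autoregressive inputs plus bootstrap-aggregated networks), with scikit-learn's MLPRegressor standing in for the NARX network; the number of lags, the omission of the indicator inputs, and the synthetic random-walk series are all assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged_matrix(series, n_lags):
    # Rows are [y(t-n_lags), ..., y(t-1)]; targets are y(t).
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

def bagged_forecaster(series, n_lags=8, n_models=10, seed=0):
    X, y = lagged_matrix(series, n_lags)
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))         # bootstrap resample
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        models.append(net.fit(X[idx], y[idx]))
    # The ensemble forecast is the average over the bagged networks.
    return lambda X_new: np.mean([m.predict(X_new) for m in models], axis=0)

# Synthetic "exchange rate": a noisy random walk stands in for GBPUSD data.
rng = np.random.default_rng(1)
rate = np.cumsum(rng.normal(0, 1e-3, 2000)) + 1.3
forecast = bagged_forecaster(rate)
print("next-step forecast:", forecast(rate[-8:].reshape(1, -1))[0])
```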

  8. Sign Language Recognition using Neural Networks

    Directory of Open Access Journals (Sweden)

    Sabaheta Djogic

    2014-11-01

    Sign language plays a great role as a communication medium for people with hearing difficulties. In developed countries, systems have been built to overcome this communication barrier with deaf people. This encouraged us to develop a system for the Bosnian sign language, since there is a need for such a system. The work is done with the use of digital image processing methods, providing a system that trains a multilayer neural network using a backpropagation algorithm. Images are processed by feature extraction methods, and the data set has been created by a masking method. Training is done using the cross-validation method for better performance; thus, an accuracy of 84% is achieved.
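
    Assuming the masked images have already been turned into fixed-length feature vectors, the training-and-evaluation loop can be sketched with scikit-learn: an MLPClassifier (trained by backpropagation) scored with k-fold cross-validation stands in for the paper's network and its cross-validation protocol. The feature dimension, class count, and random data below are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Stand-in features: 300 samples of 64-dimensional vectors for 10 hand signs.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))
y = rng.integers(0, 10, 300)

# Multilayer perceptron trained by backpropagation, evaluated with 5-fold CV.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```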

  9. Recognition of Gestures using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Marcel MORE

    2013-12-01

    Sensors for motion measurement are now becoming more widespread. Thanks to their parameters and affordability, they are already used not only in the professional sector but also in devices intended for daily use or entertainment. One of their applications is the control of devices by gestures. Systems that can determine the type of gesture from measured motion have many uses, for example in medical practice, but they are still more often used in devices such as cell phones, where they serve as a non-standard form of input. Today there are already several approaches for solving this problem, but building a sufficiently reliable system is still a challenging task. In our project we are developing a solution based on an artificial neural network. Unlike other solutions, this one does not require building a model for each measuring system, and thus it can be used in combination with various sensors with only minimal changes to its structure.

  10. Spatial Dynamics of Multilayer Cellular Neural Networks

    Science.gov (United States)

    Wu, Shi-Liang; Hsu, Cheng-Hsiung

    2018-02-01

    The purpose of this work is to study the spatial dynamics of one-dimensional multilayer cellular neural networks. We first establish the existence of rightward and leftward spreading speeds of the model. Then we show that the spreading speeds coincide with the minimum wave speeds of the traveling wave fronts in the right and left directions. Moreover, we obtain the asymptotic behavior of the traveling wave fronts when the wave speeds are positive and greater than the spreading speeds. According to the asymptotic behavior and using various kinds of comparison theorems, some front-like entire solutions are constructed by combining the rightward and leftward traveling wave fronts with different speeds and a spatially homogeneous solution of the model. Finally, various qualitative features of such entire solutions are investigated.

  11. Enhancing Hohlraum Design with Artificial Neural Networks

    Science.gov (United States)

    Peterson, J. L.; Berzak Hopkins, L. F.; Humbird, K. D.; Brandon, S. T.; Field, J. E.; Langer, S. H.; Nora, R. C.; Spears, B. K.

    2017-10-01

    A primary goal of hohlraum design is to efficiently convert available laser power and energy to capsule drive, compression and ultimately fusion neutron yield. However, a major challenge of this multi-dimensional optimization problem is the relative computational expense of hohlraum simulations. In this work, we explore overcoming this obstacle with the use of artificial neural networks built off ensembles of hohlraum simulations. These machine learning systems emulate the behavior of full simulations in a fraction of the time, thereby enabling the rapid exploration of design parameters. We will demonstrate this technology with a search for modifications to existing high-yield designs that can maximize neutron production within NIF's current laser power and energy constraints. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-734401.
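
    A toy version of the surrogate idea, with an assumed smooth scalar "yield" function standing in for the hohlraum simulations: train a small network on an ensemble of (design parameters, output) pairs, then search the cheap emulator over the design space. Dimensions, sample counts, and the objective are illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(x):
    # Stand-in for a hohlraum simulation: a smooth scalar "yield" surface.
    return np.exp(-((x - 0.3) ** 2).sum(axis=1) / 0.1)

rng = np.random.default_rng(0)
designs = rng.uniform(0, 1, size=(500, 4))            # ensemble of design points
yields = expensive_simulation(designs)

# The neural-network emulator is trained once on the ensemble ...
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                         random_state=0).fit(designs, yields)

# ... and can then be queried far more cheaply than the simulator itself.
candidates = rng.uniform(0, 1, size=(100000, 4))
best = candidates[surrogate.predict(candidates).argmax()]
print("surrogate-optimal design:", best)
```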

  12. Tropical Timber Identification using Backpropagation Neural Network

    Science.gov (United States)

    Siregar, B.; Andayani, U.; Fatihah, N.; Hakim, L.; Fahmi, F.

    2017-01-01

    Each type of wood has different characteristics. Identifying the type of wood properly is important, especially for industries that need to know the type of timber specifically. However, identification requires expertise, and only a limited number of experts are available. In addition, manual identification even by experts is rather inefficient, because it requires a lot of time and carries the possibility of human error. To overcome these problems, a digital image based method to identify the type of timber automatically is needed. In this study, a backpropagation neural network is used as the artificial intelligence component. Several stages were developed: microscope image acquisition, pre-processing, feature extraction using the gray level co-occurrence matrix, and normalization of the extracted features using decimal scaling. The results showed that the proposed method was able to identify the timber with an accuracy of 94%.
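
    The decimal-scaling step is simple enough to show directly; the GLCM texture features are assumed to be precomputed here, and scikit-learn's MLPClassifier stands in for the backpropagation network. The feature values, class count, and layer size are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def decimal_scaling(features):
    # Divide each column by 10**j, with j the smallest integer that brings
    # every scaled value's absolute value below 1.
    j = np.ceil(np.log10(np.abs(features).max(axis=0) + 1e-12))
    return features / (10.0 ** np.maximum(j, 0))

# Stand-in GLCM features (contrast, correlation, energy, homogeneity, ...) per image.
rng = np.random.default_rng(0)
glcm_features = rng.uniform(0, 500, size=(120, 8))
species = rng.integers(0, 4, 120)            # four assumed timber classes

X = decimal_scaling(glcm_features)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, species)
print("training accuracy:", clf.score(X, species))
```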

  13. Thrips (Thysanoptera) identification using artificial neural networks.

    Science.gov (United States)

    Fedor, P; Malenovský, I; Vanhara, J; Sierka, W; Havel, J

    2008-10-01

    We studied the use of a supervised artificial neural network (ANN) model for semi-automated identification of 18 common European species of Thysanoptera from four genera: Aeolothrips Haliday (Aeolothripidae), Chirothrips Haliday, Dendrothrips Uzel, and Limothrips Haliday (all Thripidae). As input data, we entered 17 continuous morphometric and two qualitative two-state characters measured or determined on different parts of the thrips body (head, pronotum, forewing and ovipositor) and the sex. Our experimental data set included 498 thrips specimens. A relatively simple ANN architecture (multilayer perceptrons with a single hidden layer) enabled a 97% correct simultaneous identification of both males and females of all the 18 species in an independent test. This high reliability of classification is promising for a wider application of ANN in the practice of Thysanoptera identification.

  14. Neural network training as a dissipative process.

    Science.gov (United States)

    Gori, Marco; Maggini, Marco; Rossi, Alessandro

    2016-09-01

    This paper analyzes the practical issues and reports some results on a theory in which learning is modeled as a continuous temporal process driven by laws describing the interactions of intelligent agents with their own environment. The classic regularization framework is paired with the idea of temporal manifolds by introducing the principle of least cognitive action, which is inspired by the related principle of mechanics. The introduction of the counterparts of the kinetic and potential energy leads to an interpretation of learning as a dissipative process. As an example, we apply the theory to supervised learning in neural networks and show that the corresponding Euler-Lagrange differential equations can be connected to the classic gradient descent algorithm on the supervised pairs. We give preliminary experiments to confirm the soundness of the theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Neural networks in support of manned space

    Science.gov (United States)

    Werbos, Paul J.

    1989-01-01

    Many lobbyists in Washington have argued that artificial intelligence (AI) is an alternative to manned space activity. In actuality, this is the opposite of the truth, especially as regards artificial neural networks (ANNs), the form of AI with the greatest hope of mimicking human abilities in learning, interfacing with sensors and actuators, flexibility, and balanced judgement. ANNs and their relation to expert systems (the more traditional form of AI), and the limitations of both technologies, are briefly reviewed. A few highlights of recent work on ANNs, including an NSF-sponsored workshop on ANNs for control applications, are given. Current thinking on ANNs for use in certain key areas (the National Aerospace Plane, teleoperation, the control of large structures, fault diagnostics, and docking) which may be crucial to the long term future of man in space is discussed.

  16. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Directory of Open Access Journals (Sweden)

    Lukas Falat

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models to the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process.
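
    A compact sketch of the error-correction idea behind the hybrid, assuming a k-means-plus-linear-readout approximation of the RBF network and omitting the genetic-algorithm tuning entirely: a moving average of recent forecast errors is added back onto each new forecast. Window sizes, widths, and the synthetic series are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X, y, n_centers=10, width=0.02):
    # RBF-style regressor: k-means centers plus a least-squares linear readout.
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X).cluster_centers_
    phi = lambda Z: np.exp(-((Z[:, None] - centers[None]) ** 2).sum(-1) / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(phi(X), y, rcond=None)
    return lambda Z: phi(Z).dot(w)

# Synthetic high-frequency series standing in for USD/CAD rates.
rng = np.random.default_rng(0)
rate = np.cumsum(rng.normal(0, 1e-3, 1200)) + 1.3
lags = 5
X = np.column_stack([rate[i:len(rate) - lags + i] for i in range(lags)])
y = rate[lags:]

rbf = fit_rbf(X[:1000], y[:1000])
pred = rbf(X[1000:])
errors = y[1000:] - pred
window = 10
# Moving average of already-observed errors corrects each later forecast.
correction = np.convolve(errors, np.ones(window) / window, "valid")[:-1]
corrected = pred[window:] + correction
print("RMSE plain    :", np.sqrt(np.mean((y[1000 + window:] - pred[window:]) ** 2)))
print("RMSE corrected:", np.sqrt(np.mean((y[1000 + window:] - corrected) ** 2)))
```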

  17. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    Science.gov (United States)

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models to the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process.

  18. Financial time series prediction using spiking neural networks.

    Directory of Open Access Journals (Sweden)

    David Reid

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, is presented for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks (a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-to-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting, and this in turn indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments.

  19. Analog neural network-based helicopter gearbox health monitoring system.

    Science.gov (United States)

    Monsen, P T; Dzwonczyk, M; Manolakos, E S

    1995-12-01

    The development of a reliable helicopter gearbox health monitoring system (HMS) has been the subject of considerable research over the past 15 years. The deployment of such a system could lead to a significant saving in lives and vehicles as well as dramatically reduce the cost of helicopter maintenance. Recent research results indicate that a neural network-based system could provide a viable solution to the problem. This paper presents two neural network-based realizations of an HMS system. A hybrid (digital/analog) neural system is proposed as an extremely accurate off-line monitoring tool used to reduce helicopter gearbox maintenance costs. In addition, an all analog neural network is proposed as a real-time helicopter gearbox fault monitor that can exploit the ability of an analog neural network to directly compute the discrete Fourier transform (DFT) as a sum of weighted samples. Hardware performance results are obtained using the Integrated Neural Computing Architecture (INCA/1) analog neural network platform that was designed and developed at The Charles Stark Draper Laboratory. The results indicate that it is possible to achieve a 100% fault detection rate with 0% false alarm rate by performing a DFT directly on the first layer of INCA/1 followed by a small-size two-layer feed-forward neural network and a simple post-processing majority voting stage.
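
    The "DFT as a sum of weighted samples" observation amounts to fixing the first layer's weights to the Fourier basis, so that each neuron computes one weighted sum of the raw vibration samples. A NumPy sketch of just that layer follows; the gearbox data, the downstream two-layer classifier, and the INCA/1 hardware specifics are not reproduced.

```python
import numpy as np

def dft_layer_weights(n):
    # Real and imaginary parts of the DFT matrix, used as fixed first-layer weights.
    k = np.arange(n)
    angles = -2.0 * np.pi * np.outer(k, k) / n
    return np.cos(angles), np.sin(angles)

def spectral_features(signal):
    W_re, W_im = dft_layer_weights(len(signal))
    re, im = W_re @ signal, W_im @ signal
    return np.sqrt(re ** 2 + im ** 2)    # magnitude spectrum fed to the next layers

# Sanity check against NumPy's FFT on a toy vibration signal.
t = np.linspace(0, 1, 256, endpoint=False)
sig = np.sin(2 * np.pi * 25 * t) + 0.1 * np.random.default_rng(0).normal(size=256)
print(np.allclose(spectral_features(sig), np.abs(np.fft.fft(sig))))   # True
```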

  20. Architecture Analysis of an FPGA-Based Hopfield Neural Network

    Directory of Open Access Journals (Sweden)

    Miguel Angelo de Abreu de Sousa

    2014-01-01

    Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field, in order to address several practical requirements, including decreasing training and operation times in high-performance applications and reducing cost, size, and energy consumption for autonomous or embedded developments. Field programmable gate array (FPGA) hardware shows some inherent features typically associated with neural networks, such as parallel processing, modular execution, and dynamic adaptation, and works on different types of FPGA-based neural networks have been presented in recent years. This paper addresses different aspects of an architectural characteristics analysis of a Hopfield Neural Network implemented in FPGA, such as maximum operating frequency and chip-area occupancy according to the network capacity. The FPGA implementation methodology, which does not employ multipliers in the architecture developed for the Hopfield neural model, is also presented in detail.
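
    For reference, a plain software Hopfield network (Hebbian storage plus asynchronous updates) showing the computation that such an FPGA architecture parallelises; the pattern size and contents below are arbitrary, and nothing here reflects the paper's specific multiplier-free design.

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian rule: W is the scaled sum of outer products, with a zero diagonal.
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, n_sweeps=10, seed=0):
    state = state.copy()
    rng = np.random.default_rng(seed)
    for _ in range(n_sweeps):
        for i in rng.permutation(len(state)):     # asynchronous neuron updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store two bipolar patterns and recover one of them from a corrupted cue.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]])
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[:2] *= -1                                   # flip two bits
print(recall(W, noisy))                           # expected: the first stored pattern
```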