WorldWideScience

Sample records for neural computing methodology

  1. Multiscale methodology for bone remodelling simulation using coupled finite element and neural network computation.

    Science.gov (United States)

    Hambli, Ridha; Katerchi, Houda; Benhamou, Claude-Laurent

    2011-02-01

    The aim of this paper is to develop a multiscale hierarchical hybrid model based on finite element analysis and neural network computation to link the mesoscopic scale (trabecular network level) and the macroscopic scale (whole bone level) in order to simulate the process of bone remodelling. As whole bone simulation, including the 3D reconstruction of trabecular-level bone, is time consuming, finite element calculation is only performed at the macroscopic level, whilst trained neural networks are employed as numerical substitutes for the finite element code needed for the mesoscale prediction. The bone mechanical properties are updated at the macroscopic scale depending on the morphological and mechanical adaptation at the mesoscopic scale computed by the trained neural network. The digital image-based modelling technique using μ-CT and voxel finite element analysis is used to capture representative volume elements of 2 mm³ at the mesoscale level of the femoral head. The input data for the artificial neural network are a set of bone material parameters, boundary conditions and the applied stress. The output data are the updated bone properties and some trabecular bone factors. The current approach is the first model, to our knowledge, that incorporates both finite element analysis and neural network computation to rapidly simulate multilevel bone adaptation.
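
    A minimal sketch of such an FE/neural-network coupling, with a toy stand-in `fe_mesoscale_response` for the expensive voxel FE code and synthetic data; this is an illustration of the scheme, not the authors' implementation:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def fe_mesoscale_response(x):
        # Placeholder for the mesoscale FE computation: maps material
        # parameters, boundary conditions and applied stress to updated
        # bone properties. Here: an arbitrary smooth function.
        return np.column_stack([np.tanh(x @ [0.5, -0.2, 0.1]),
                                np.sin(x @ [0.1, 0.3, -0.4])])

    # Offline stage: sample the mesoscale model and train the surrogate.
    X_train = rng.uniform(-1, 1, size=(500, 3))
    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                             random_state=0).fit(X_train,
                                                 fe_mesoscale_response(X_train))

    # Online stage: the macroscopic loop queries the cheap surrogate
    # instead of re-running the mesoscale FE code at every increment.
    state = np.array([[0.2, -0.1, 0.4]])
    for step in range(10):
        updated_props = surrogate.predict(state)   # shape (1, 2)
        state = 0.9 * state + 0.1 * np.hstack([updated_props, [[0.0]]])  # toy update
    ```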

  2. Neural Networks Methodology and Applications

    CERN Document Server

    Dreyfus, Gérard

    2005-01-01

    Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data, in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and seamlessly edited to present a coherent and comprehensive, yet not redundant, practically-oriented...

  3. Optics in neural computation

    Science.gov (United States)

    Levene, Michael John

    In all attempts to emulate the considerable powers of the brain, one is struck by its immense size, parallelism, and complexity. While the fields of neural networks, artificial intelligence, and neuromorphic engineering have all adopted drastic simplifications of this complexity, all three can benefit from the inherent scalability and parallelism of optics. This thesis looks at specific aspects of three modes in which optics, and particularly volume holography, can play a part in neural computation. First, holography serves as the basis of highly-parallel correlators, which are the foundation of optical neural networks. The huge input capability of optical neural networks makes them most useful for image processing and for image recognition and tracking. These tasks benefit from the shift invariance of optical correlators. In this thesis, I analyze the capacity of correlators, and then present several techniques for controlling the amount of shift invariance. Of particular interest is the Fresnel correlator, in which the hologram is displaced from the Fourier plane. In this case, the amount of shift invariance is limited not just by the thickness of the hologram, but by the distance of the hologram from the Fourier plane. Second, volume holography can provide the huge storage capacity and high-speed, parallel read-out necessary to support large artificial intelligence systems. However, previous methods for storing data in volume holograms have relied on awkward beam-steering or on as-yet nonexistent cheap, wide-bandwidth, tunable laser sources. This thesis presents a new technique, shift multiplexing, which is capable of very high densities, but which has the advantage of a very simple implementation. In shift multiplexing, the reference wave consists of a focused spot a few millimeters in front of the hologram. Multiplexing is achieved by simply translating the hologram a few tens of microns or less. This thesis describes the theory for how shift
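
    The shift invariance the thesis exploits follows from the correlation theorem; here is a digital sketch of a Fourier-plane correlator, with toy arrays standing in for the hologram template and the input scene:

    ```python
    # By the correlation theorem, correlation is a pointwise product in the
    # Fourier domain, and shifting the input only shifts the correlation peak.
    import numpy as np

    rng = np.random.default_rng(1)
    template = rng.standard_normal((64, 64))
    scene = np.roll(template, shift=(5, -3), axis=(0, 1))  # shifted copy

    F = np.fft.fft2(scene)
    G = np.conj(np.fft.fft2(template))
    corr = np.fft.ifft2(F * G).real

    peak = np.unravel_index(np.argmax(corr), corr.shape)
    print(peak)  # (5, 61): the peak position encodes the (5, -3) shift
    ```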

  4. Neural Computations in Binaural Hearing

    Science.gov (United States)

    Wagner, Hermann

    Binaural hearing helps humans and animals to localize and unmask sounds. Here, binaural computations in the barn owl's auditory system are discussed. Barn owls use the interaural time difference (ITD) for azimuthal sound localization, and they use the interaural level difference (ILD) for elevational sound localization. ITD and ILD and their precursors are processed in separate neural pathways, the time pathway and the intensity pathway, respectively. Representation of ITD involves four main computational steps, while the representation of ILD is accomplished in three steps. In the discussion, neural processing in the owl's auditory system is compared with the neural computations found in mammals.
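
    A toy illustration of the first step of such an ITD computation, recovering the interaural delay as the lag that maximizes the cross-correlation of the two ear signals (synthetic broadband noise, illustrative values only):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    fs = 48_000                                  # sample rate, Hz
    left = rng.standard_normal(fs // 50)         # 20 ms broadband noise burst
    itd_samples = 12                             # true ITD: 12 samples = 250 us
    right = np.roll(left, itd_samples)           # delayed copy at the other ear

    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)      # lag of the correlation peak
    print(lag / fs * 1e6)                        # ~250 microseconds
    ```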

  5. Methodology of Neural Design: Applications in Microwave Engineering

    OpenAIRE

    Z. Raida; P. Pomenka

    2006-01-01

    In the paper, an original methodology for the automatic creation of neural models of microwave structures is proposed and verified. Following the methodology, neural models of the prescribed accuracy are built within the minimum CPU time. Validity of the proposed methodology is verified by developing neural models of selected microwave structures. Functionality of neural models is verified in a design - a neural model is joined with a genetic algorithm to find a global minimum of a formulat...

  6. Air quality estimation by computational intelligence methodologies

    Directory of Open Access Journals (Sweden)

    Ćirić Ivan T.

    2012-01-01

    The subject of this study is to compare different computational intelligence methodologies based on artificial neural networks used for forecasting an air quality parameter, the emission of CO2, in the city of Niš. Firstly, the inputs of the CO2 emission estimator are analyzed and their measurement is explained. It is known that traffic is the single largest emitter of CO2 in Europe. Therefore, proper treatment of this component of pollution is very important for precise estimation of emission levels. With this in mind, measurements of traffic frequency and CO2 concentration were carried out at critical intersections in the city, as well as monitoring of vehicle directions at the crossroads. Finally, based on the experimental data, different soft computing estimators were developed, such as a feedforward neural network, a recurrent neural network, and a hybrid neuro-fuzzy estimator of CO2 emission levels. Test data for some characteristic cases, presented at the end of the paper, show good agreement of the developed estimators' outputs with the experimental data. The presented results are a true indicator of the implemented method's usability. [Projekat Ministarstva nauke Republike Srbije, br. III42008-2/2011: Evaluation of Energy Performances and br. TR35016/2011: Indoor Environment Quality of Educational Buildings in Serbia with Impact to Health and Research of MHD Flows around the Bodies, in the Tip Clearances and Channels and Application in the MHD Pumps Development]

  7. Methodology of Neural Design: Applications in Microwave Engineering

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2006-06-01

    In the paper, an original methodology for the automatic creation of neural models of microwave structures is proposed and verified. Following the methodology, neural models of the prescribed accuracy are built within the minimum CPU time. Validity of the proposed methodology is verified by developing neural models of selected microwave structures. Functionality of neural models is verified in a design: a neural model is joined with a genetic algorithm to find a global minimum of a formulated objective function. The objective function is minimized using different versions of genetic algorithms, and their mutual combinations. The verified methodology of the automated creation of accurate neural models of microwave structures, and their association with global optimization routines, are the most important original features of the paper.
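
    A small sketch of the "neural model plus genetic algorithm" loop the paper verifies, with an illustrative objective standing in for a microwave structure response (settings and objective are ours, not the authors'):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)

    def response(x):                               # toy objective; global min at (0, 0)
        return np.sum(x**2 - np.cos(3 * np.pi * x), axis=1)

    X = rng.uniform(-1, 1, (2000, 2))
    model = MLPRegressor((64, 64), max_iter=3000,
                         random_state=0).fit(X, response(X))   # neural model

    # Genetic algorithm on the *model*: truncation selection, arithmetic
    # (blend) crossover, Gaussian mutation.
    pop = rng.uniform(-1, 1, (60, 2))
    for gen in range(80):
        parents = pop[np.argsort(model.predict(pop))[:30]]
        kids = 0.5 * (parents[rng.integers(0, 30, 60)] +
                      parents[rng.integers(0, 30, 60)])
        pop = np.clip(kids + rng.normal(0, 0.05, kids.shape), -1, 1)

    print(pop[np.argmin(model.predict(pop))])      # near the global minimum (0, 0)
    ```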

  8. Neural Computation and the Computational Theory of Cognition

    Science.gov (United States)

    Piccinini, Gualtiero; Bahar, Sonya

    2013-01-01

    We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism--neural processes are computations in the…

  9. Computational neural learning formalisms for manipulator inverse kinematics

    Science.gov (United States)

    Gulati, Sandeep; Barhen, Jacob; Iyengar, S. Sitharama

    1989-01-01

    An efficient, adaptive neural learning paradigm for addressing the inverse kinematics of redundant manipulators is presented. The proposed methodology exploits the infinite local stability of terminal attractors - a new class of mathematical constructs which provide unique information processing capabilities to artificial neural systems. For robotic applications, synaptic elements of such networks can rapidly acquire the kinematic invariances embedded within the presented samples. Subsequently, joint-space configurations, required to follow arbitrary end-effector trajectories, can readily be computed. In a significant departure from prior neuromorphic learning algorithms, this methodology provides mechanisms for incorporating an in-training skew to handle kinematics and environmental constraints.
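
    The finite-time convergence that distinguishes terminal attractors from ordinary attractors can be seen in one line of dynamics; a worked sketch integrating dx/dt = -x^(1/3), which reaches x = 0 at t* = (3/2)·x0^(2/3) rather than only asymptotically:

    ```python
    import numpy as np

    def cube_root(x):
        return np.sign(x) * np.abs(x) ** (1.0 / 3.0)

    # Euler integration of dx/dt = -x**(1/3); contrast with dx/dt = -x,
    # which decays exponentially and never exactly reaches zero.
    x, dt, t = 1.0, 1e-4, 0.0
    while abs(x) > 1e-4:          # stop when numerically at the attractor
        x -= cube_root(x) * dt
        t += dt
    print(t)                      # ~1.5, matching t* = 3/2 for x0 = 1
    ```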

  10. Advances in Artificial Neural Networks - Methodological Development and Application

    Science.gov (United States)

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other ne...

  11. Soft computing integrating evolutionary, neural, and fuzzy systems

    CERN Document Server

    Tettamanzi, Andrea

    2001-01-01

    Soft computing encompasses various computational methodologies, which, unlike conventional algorithms, are tolerant of imprecision, uncertainty, and partial truth. Soft computing technologies offer adaptability as a characteristic feature and thus permit the tracking of a problem through a changing environment. Besides some recent developments in areas like rough sets and probabilistic networks, fuzzy logic, evolutionary algorithms, and artificial neural networks are core ingredients of soft computing, which are all bio-inspired and can easily be combined synergetically. This book presents a well-balanced integration of fuzzy logic, evolutionary computing, and neural information processing. The three constituents are introduced to the reader systematically and brought together in differentiated combinations step by step. The text was developed from courses given by the authors and offers numerous illustrations as

  12. Optimal neural computations require analog processors

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-31

    This paper discusses some of the limitations of hardware implementations of neural networks. The authors start by presenting neural structures and their biological inspirations, while mentioning the simplifications leading to artificial neural networks. Further, the focus will be on hardware-imposed constraints. They will present recent results for three different alternatives of parallel implementations of neural networks: digital circuits, threshold gate circuits, and analog circuits. The area and the delay will be related to the neurons' fan-in and to the precision of their synaptic weights. The main conclusion is that hardware-efficient solutions require analog computations, and two alternatives are suggested: (1) cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; (2) investigate solutions which would allow the use of the third dimension (e.g. using optical interconnections).

  13. A Design Methodology for Computer Security Testing

    OpenAIRE

    Ramilli, Marco

    2013-01-01

    The field of "computer security" is often considered something in between Art and Science. This is partly due to the lack of widely agreed and standardized methodologies to evaluate the degree of security of a system. This dissertation intends to contribute to this area by investigating the most common security testing strategies applied nowadays and by proposing an enhanced methodology that may be effectively applied to different threat scenarios with the same degree of effectiveness. ...

  14. Advances in neural networks computational and theoretical issues

    CERN Document Server

    Esposito, Anna; Morabito, Francesco

    2015-01-01

    This book collects research works that exploit neural networks and machine learning techniques from a multidisciplinary perspective. Subjects covered include theoretical, methodological and computational topics, grouped into chapters devoted to the discussion of novelties and innovations related to the field of Artificial Neural Networks, as well as the use of neural networks for applications, pattern recognition, signal processing, and special topics such as the detection and recognition of multimodal emotional expressions and daily cognitive functions, and bio-inspired memristor-based networks. Providing insights into the latest research interests of a pool of international experts from different research fields, the volume is valuable to all those with any interest in a holistic approach to implement believable, autonomous, adaptive, and context-aware Information Communication Technologies.

  15. Advances in Artificial Neural Networks – Methodological Development and Application

    Directory of Open Access Journals (Sweden)

    Yanbo Huang

    2009-08-01

    Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review of the development history of artificial neural networks is presented and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced along with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationships for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological
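
    A toy side-by-side of the two model families the review compares, a multilayer perceptron (trained by backpropagation) and a support vector regressor on the same synthetic task; all settings are illustrative:

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(3)
    X = rng.uniform(-3, 3, (400, 1))
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(400)   # noisy nonlinear data
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    mlp = MLPRegressor((32,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
    svr = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
    print("MLP R^2:", mlp.score(X_te, y_te))
    print("SVR R^2:", svr.score(X_te, y_te))
    ```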

  16. Cortical Neural Computation by Discrete Results Hypothesis.

    Science.gov (United States)

    Castejon, Carlos; Nuñez, Angel

    2016-01-01

    One of the most challenging problems we face in neuroscience is to understand how the cortex performs computations. There is increasing evidence that the power of the cortical processing is produced by populations of neurons forming dynamic neuronal ensembles. Theoretical proposals and multineuronal experimental studies have revealed that ensembles of neurons can form emergent functional units. However, how these ensembles are implicated in cortical computations is still a mystery. Although cell ensembles have been associated with brain rhythms, the functional interaction remains largely unclear. It is still unknown how spatially distributed neuronal activity can be temporally integrated to contribute to cortical computations. A theoretical explanation integrating spatial and temporal aspects of cortical processing is still lacking. In this Hypothesis and Theory article, we propose a new functional theoretical framework to explain the computational roles of these ensembles in cortical processing. We suggest that complex neural computations underlying cortical processing could be temporally discrete and that sensory information would need to be quantized to be computed by the cerebral cortex. Accordingly, we propose that cortical processing is produced by the computation of discrete spatio-temporal functional units that we have called "Discrete Results" (Discrete Results Hypothesis). This hypothesis represents a novel functional mechanism by which information processing is computed in the cortex. Furthermore, we propose that precise dynamic sequences of "Discrete Results" are the mechanism used by the cortex to extract, code, memorize and transmit neural information. The novel "Discrete Results" concept has the ability to match the spatial and temporal aspects of cortical processing. We discuss the possible neural underpinnings of these functional computational units and describe the empirical evidence supporting our hypothesis. We propose that fast-spiking (FS

  17. Computational capabilities of graph neural networks.

    Science.gov (United States)

    Scarselli, Franco; Gori, Marco; Tsoi, Ah Chung; Hagenbuchner, Markus; Monfardini, Gabriele

    2009-01-01

    In this paper, we will consider the approximation properties of a recently introduced neural network model called graph neural network (GNN), which can be used to process structured data inputs, e.g., acyclic graphs, cyclic graphs, and directed or undirected graphs. This class of neural networks implements a function tau(G,n) ∈ IR^m that maps a graph G and one of its nodes n onto an m-dimensional Euclidean space. We characterize the functions that can be approximated by GNNs, in probability, up to any prescribed degree of precision. This set contains the maps that satisfy a property called preservation of the unfolding equivalence, and includes most of the practically useful functions on graphs; the only known exception is when the input graph contains particular patterns of symmetries when unfolding equivalence may not be preserved. The result can be considered an extension of the universal approximation property established for the classic feedforward neural networks (FNNs). Some experimental examples are used to show the computational capabilities of the proposed model.
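
    A minimal message-passing sketch of the GNN idea: node states are iterated to a fixed point from neighbour aggregation, then an output map produces tau(G, n) in IR^m for every node. Weights here are random, whereas a real GNN learns them:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)      # adjacency of a 4-node cycle
    X = rng.standard_normal((4, 3))                # node feature vectors
    W = 0.1 * rng.standard_normal((8, 8))          # small weights -> contraction
    U = 0.1 * rng.standard_normal((3, 8))
    V = rng.standard_normal((8, 2))                # output map to IR^m, m = 2

    h = np.zeros((4, 8))                           # node states
    for _ in range(50):                            # fixed-point iteration
        h = np.tanh(A @ h @ W + X @ U)             # aggregate neighbours + input
    tau = h @ V                                    # tau(G, n) for every node n
    print(tau.shape)                               # (4, 2)
    ```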

  18. Fuzzy logic, neural networks, and soft computing

    Science.gov (United States)

    Zadeh, Lofti A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial

  19. Emerging trends in neuro engineering and neural computation

    CERN Document Server

    Lee, Kendall; Garmestani, Hamid; Lim, Chee

    2017-01-01

    This book focuses on neuro-engineering and neural computing, a multi-disciplinary field of research attracting considerable attention from engineers, neuroscientists, microbiologists and material scientists. It explores a range of topics concerning the design and development of innovative neural and brain interfacing technologies, as well as novel information acquisition and processing algorithms to make sense of the acquired data. The book also highlights emerging trends and advances regarding the applications of neuro-engineering in real-world scenarios, such as neural prostheses, diagnosis of neural degenerative diseases, deep brain stimulation, biosensors, real neural network-inspired artificial neural networks (ANNs) and the predictive modeling of information flows in neuronal networks. The book is broadly divided into three main sections including: current trends in technological developments, neural computation techniques to make sense of the neural behavioral data, and application of these technologie...

  20. Computational intelligence synergies of fuzzy logic, neural networks and evolutionary computing

    CERN Document Server

    Siddique, Nazmul

    2013-01-01

    Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing presents an introduction to some of the cutting edge technological paradigms under the umbrella of computational intelligence. Computational intelligence schemes are investigated with the development of a suitable framework for fuzzy logic, neural networks and evolutionary computing, neuro-fuzzy systems, evolutionary-fuzzy systems and evolutionary neural systems. Applications to linear and non-linear systems are discussed with examples. Key features: Covers all the aspect

  1. Thermal sensation prediction by soft computing methodology.

    Science.gov (United States)

    Jović, Srđan; Arsić, Nebojša; Vilimonović, Jovana; Petković, Dalibor

    2016-12-01

    Thermal comfort in open urban areas is a very important factor from an environmental point of view. Therefore, there is a need to fulfill demands for suitable thermal comfort during urban planning and design. Thermal comfort can be modeled based on climatic parameters and other factors. These factors vary throughout the year and the day. Therefore, there is a need to establish an algorithm for thermal comfort prediction according to the input variables. The prediction results could be used for planning the times of usage of urban areas. Since this is a highly nonlinear task, a soft computing methodology was applied in this investigation to predict thermal comfort. The main goal was to apply an extreme learning machine (ELM) for forecasting physiological equivalent temperature (PET) values. Temperature, pressure, wind speed and irradiance were used as inputs. The prediction results are compared with some benchmark models. Based on the results, ELM can be used effectively for forecasting PET. Copyright © 2016 Elsevier Ltd. All rights reserved.
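
    A minimal extreme learning machine, the model class used in the paper: the hidden layer is random and fixed, and only the linear output weights are solved in closed form. The data below are synthetic stand-ins for the meteorological inputs:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.uniform(0, 1, (300, 4))                 # 4 inputs, as in the paper
    y = X @ [2.0, -1.0, 0.5, 1.5] + np.sin(3 * X[:, 0])  # synthetic PET stand-in

    W = rng.standard_normal((4, 50))                # random input weights (never trained)
    b = rng.standard_normal(50)
    H = np.tanh(X @ W + b)                          # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights, closed form

    y_hat = H @ beta
    print(np.corrcoef(y, y_hat)[0, 1])              # close to 1 on training data
    ```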

  2. Management of health care expenditure by soft computing methodology

    Science.gov (United States)

    Maksimović, Goran; Jović, Srđan; Jovanović, Radomir; Aničić, Obrad

    2017-01-01

    In this study, health care expenditure was managed by a soft computing methodology. The main goal was to predict the gross domestic product (GDP) according to several factors of health care expenditure. Soft computing methodologies were applied since GDP prediction is a very complex task. The performances of the proposed predictors were confirmed by the simulation results. According to the results, support vector regression (SVR) has better prediction accuracy than the other soft computing methodologies. The soft computing methods benefit from global optimization capabilities, which help to avoid local minimum issues.

  3. Effective Methodology for Security Risk Assessment of Computer Systems

    OpenAIRE

    Daniel F. García; Adrián Fernández

    2013-01-01

    Today, computer systems are more and more complex and face growing security risks. Security managers need effective security risk assessment methodologies that model the increasing complexity of current computer systems well while keeping the complexity of the assessment procedure low. This paper provides a brief analysis of common security risk assessment methodologies, leading to the selection of a proper methodology to fulfill these requirements. Then, a detai...

  4. Brains--Computers--Machines: Neural Engineering in Science Classrooms

    Science.gov (United States)

    Chudler, Eric H.; Bergsman, Kristen Clapper

    2016-01-01

    Neural engineering is an emerging field of high relevance to students, teachers, and the general public. This feature presents online resources that educators and scientists can use to introduce students to neural engineering and to integrate core ideas from the life sciences, physical sciences, social sciences, computer science, and engineering…

  5. Computational modeling of neural plasticity for self-organization of neural networks.

    Science.gov (United States)

    Chrol-Cannon, Joseph; Jin, Yaochu

    2014-11-01

    Self-organization in biological nervous systems during the lifetime is known to largely occur through a process of plasticity that is dependent upon the spike-timing activity in connected neurons. In the field of computational neuroscience, much effort has been dedicated to building computational models of neural plasticity that replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks for accomplishing machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models of neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those along the line of combining findings in computational neuroscience and systems biology and their synergetic roles in understanding learning, memory and cognition, thereby bridging the gap between computational neuroscience, systems biology and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
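
    As a concrete instance of the spike-timing-dependent plasticity models such reviews survey, here is a standard pair-based STDP rule (parameter values illustrative): a synapse is potentiated when the presynaptic spike precedes the postsynaptic one, and depressed otherwise, with exponentially decaying influence of the spike-time difference.

    ```python
    import numpy as np

    def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Weight change for delta_t = t_post - t_pre (milliseconds)."""
        return np.where(delta_t > 0,
                        a_plus * np.exp(-delta_t / tau),    # pre before post: LTP
                        -a_minus * np.exp(delta_t / tau))   # post before pre: LTD

    print(stdp_dw(np.array([5.0, -5.0])))  # [ 0.0078 -0.0093]: LTP vs LTD
    ```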

  6. Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); S.M. Bohte (Sander)

    2016-01-01

    Biological neurons communicate with a sparing exchange of pulses - spikes. It is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks, using only so very few spikes to communicate. Building on

  7. Computer Presentation Programs and Teaching Research Methodologies

    Directory of Open Access Journals (Sweden)

    Vahid Motamedi

    2015-05-01

    Supplementing traditional chalk-and-board instruction with computer delivery has been viewed positively by students, who have reported increased understanding and more interaction with the instructor when computer presentations are used in the classroom. Some problems contributing to student errors while taking class notes, such as the transcription of numbers to the board and the instructor's handwriting, can be resolved by careful construction of computer presentations. The use of computer presentation programs promises to increase the effectiveness of learning by making content more readily available, by reducing the cost and effort of producing quality content, and by allowing content to be more easily shared. This paper describes how these problems can be overcome by using presentation packages for instruction.

  8. Methodological testing: Are fast quantum computers illusions?

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Steven [Tachyon Design Automation, San Francisco, CA (United States)

    2013-07-01

    Popularity of the idea of computers constructed from the principles of QM started with Feynman's 'Lectures On Computation', but he called the idea crazy and dependent on statistical mechanics. In 1987, Feynman published a paper in 'Quantum Implications - Essays in Honor of David Bohm' on negative probabilities, which he said gave him cultural shock. The problem with imagined fast quantum computers (QC) is that speed requires both statistical behavior and truth of the mathematical formalism. The Swedish Royal Academy 2012 Nobel Prize in physics press release touted the discovery of methods to control "individual quantum systems" and to "measure and control very fragile quantum states", which enables "first steps towards building a new type of super fast computer based on quantum physics." A number of examples where widely accepted mathematical descriptions have turned out to be problematic are examined: problems with the use of oracles in P=NP computational complexity, Paul Finsler's proof of the continuum hypothesis, and Turing's Enigma code breaking versus William Tutte's Colossus. I view QC research as faith in computational oracles with wished-for properties. Arthur Fine's interpretation, in 'The Shaky Game', of Einstein's skepticism toward QM is discussed. If Einstein's reality as space-time curvature is correct, then space-time computers will be the next type of super fast computer.

  9. Artificial Neural Network Metamodels of Stochastic Computer Simulations

    Science.gov (United States)

    1994-08-10

  10. A Methodological Review of Computer Science Education Research

    Science.gov (United States)

    Randolph, Justus; Julnes, George; Sutinen, Erkki; Lehman, Steve

    2008-01-01

    Methodological reviews have been used successfully to identify research trends and improve research practice in a variety of academic fields. Although there have been three methodological reviews of the emerging field of computer science education research, they lacked reliability or generalizability. Therefore, because of the capacity for a…

  11. Handwritten Digits Recognition Using Neural Computing

    Directory of Open Access Journals (Sweden)

    Călin Enăchescu

    2009-12-01

    In this paper we present a method for the recognition of handwritten digits and a practical implementation of this method for real-time recognition. A theoretical framework for the neural networks used to classify the handwritten digits is also presented. The classification task is performed using a Convolutional Neural Network (CNN). A CNN is a special type of multi-layer neural network, trained here with an optimized version of the back-propagation learning algorithm. CNNs are designed to recognize visual patterns directly from pixel images with minimal preprocessing, and are capable of recognizing patterns with extreme variability (such as handwritten characters) and with robustness to distortions and simple geometric transformations. The main contributions of this paper are the original methods for increasing the efficiency of the learning algorithm by preprocessing the images before the learning process, and a method for increasing the precision and performance for real-time applications by removing non-useful information from the background. By combining these strategies we have obtained an accuracy of 96.76%, using the NIST (National Institute of Standards and Technology) database as the training set.
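
    A minimal convolutional network of the general kind described, written in PyTorch; the layer counts and sizes are illustrative, not the authors' architecture:

    ```python
    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                       # 28x28 -> 14x14
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                       # 14x14 -> 7x7
            )
            self.classifier = nn.Linear(16 * 7 * 7, n_classes)

        def forward(self, x):
            h = self.features(x)                       # convolutional features
            return self.classifier(h.flatten(1))       # class scores

    logits = TinyCNN()(torch.randn(1, 1, 28, 28))      # one 28x28 digit image
    print(logits.shape)                                # torch.Size([1, 10])
    ```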

  12. Earth Station Neural Network Control Methodology and Simulation

    OpenAIRE

    Hanaa T. El-Madany; Faten H. Fahmy; Ninet M. A. El-Rahman; Hassen T. Dorrah

    2012-01-01

    Renewable energy resources are inexhaustible and clean compared with conventional resources. They can also supply regions with no grid, no telephone lines, and often difficult accessibility by common transport. Satellite earth stations located in remote areas are the most important application of renewable energy. Neural control is a branch of the general field of intelligent control, which is based on the concept of artificial intelligence. This paper presents the mathematic...

  13. Computationally Efficient Neural Network Intrusion Security Awareness

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Milos Manic

    2009-08-01

    An enhanced version of an algorithm to provide anomaly-based intrusion detection alerts for cyber security state awareness is detailed. A unique aspect is the training of an error back-propagation neural network with intrusion detection rule features to provide a recognition basis. Network packet details are subsequently provided to the trained network to produce a classification. This leverages rule knowledge sets to produce classifications for anomaly-based systems. Several test cases executed on the ICMP protocol revealed a 60% identification rate of true positives. This rate matched the previous work, but 70% less memory was used and the run time was reduced from 37 seconds to less than 1 second.

  14. Reversible logic synthesis methodologies with application to quantum computing

    CERN Document Server

    Taha, Saleem Mohammed Ridha

    2016-01-01

    This book opens the door to the new, interesting and ambitious world of reversible and quantum computing research. It presents the state of the art required to travel around that world safely. Top world universities, companies and government institutions are in a race to develop new methodologies, algorithms and circuits on reversible logic, quantum logic, reversible and quantum computing and nano-technologies. In this book, twelve reversible logic synthesis methodologies are presented for the first time in a single volume, with some new proposals. Also, sequential reversible logic circuitry is discussed for the first time in a book. Reversible logic plays an important role in quantum computing. Any progress in the domain of reversible logic can be directly applied to quantum logic. One of the goals of this book is to show the application of reversible logic in quantum computing. A new implementation of wavelet and multiwavelet transforms using quantum computing is performed for this purpose. Rese...

  15. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan

    2015-04-01

    Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.
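
    A toy escape-noise spike response model (SRM) neuron of the kind the memristor circuit is shown to emulate, with illustrative constants of our choosing: the instantaneous firing probability grows exponentially with the membrane potential, making spiking inherently stochastic.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    dt, tau, u_th = 1e-3, 0.02, 1.0          # step (s), membrane time constant, threshold
    u, spikes = 0.0, []
    for step in range(1000):
        u += dt / tau * (1.5 - u)            # leaky drive toward u = 1.5
        rho = 50.0 * np.exp(u - u_th)        # escape rate (Hz), exponential in u
        if rng.random() < rho * dt:          # stochastic firing decision
            spikes.append(step * dt)
            u = 0.0                          # reset after a spike
    print(len(spikes))
    ```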

  16. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by a former Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks. Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  17. Neural Computations in a Dynamical System with Multiple Time Scales

    Science.gov (United States)

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what computational benefit the brain derives from such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions. PMID:27679569

  18. Fish species recognition using computer vision and a neural network

    NARCIS (Netherlands)

    Storbeck, F.; Daan, B.

    2001-01-01

    A system is described to recognize fish species by computer vision and a neural network program. The vision system measures a number of features of fish as seen by a camera perpendicular to a conveyor belt. The features used here are the widths and heights at various locations along the fish. First

  19. A neural circuit for angular velocity computation

    Directory of Open Access Journals (Sweden)

    Samuel B Snider

    2010-12-01

    In one of the most remarkable feats of motor control in the animal world, some Diptera, such as the housefly, can accurately execute corrective flight maneuvers in tens of milliseconds. These reflexive movements are achieved by the halteres, gyroscopic force sensors, in conjunction with rapidly-tunable wing-steering muscles. Specifically, the mechanosensory campaniform sensilla located at the base of the halteres transduce and transform rotation-induced gyroscopic forces into information about the angular velocity of the fly's body. But how exactly does the fly's neural architecture generate the angular velocity from the lateral strain forces on the left and right halteres? To explore potential algorithms, we built a neuro-mechanical model of the rotation detection circuit. We propose a neurobiologically plausible method by which the fly could accurately separate and measure the three-dimensional components of an imposed angular velocity. Our model assumes a single sign-inverting synapse and formally resembles some models of directional selectivity by the retina. Using multidimensional error analysis, we demonstrate the robustness of our model under a variety of input conditions. Our analysis reveals the maximum information available to the fly given its physical architecture and the mathematics governing the rotation-induced forces at the haltere's end knob.

  20. A neural circuit for angular velocity computation.

    Science.gov (United States)

    Snider, Samuel B; Yuste, Rafael; Packer, Adam M

    2010-01-01

    In one of the most remarkable feats of motor control in the animal world, some Diptera, such as the housefly, can accurately execute corrective flight maneuvers in tens of milliseconds. These reflexive movements are achieved by the halteres, gyroscopic force sensors, in conjunction with rapidly tunable wing steering muscles. Specifically, the mechanosensory campaniform sensilla located at the base of the halteres transduce and transform rotation-induced gyroscopic forces into information about the angular velocity of the fly's body. But how exactly does the fly's neural architecture generate the angular velocity from the lateral strain forces on the left and right halteres? To explore potential algorithms, we built a neuromechanical model of the rotation detection circuit. We propose a neurobiologically plausible method by which the fly could accurately separate and measure the three-dimensional components of an imposed angular velocity. Our model assumes a single sign-inverting synapse and formally resembles some models of directional selectivity by the retina. Using multidimensional error analysis, we demonstrate the robustness of our model under a variety of input conditions. Our analysis reveals the maximum information available to the fly given its physical architecture and the mathematics governing the rotation-induced forces at the haltere's end knob.

  1. Computation and control with neural nets

    Energy Technology Data Exchange (ETDEWEB)

    Corneliusen, A.; Terdal, P.; Knight, T.; Spencer, J.

    1989-10-04

    As energies have increased exponentially with time, so have the size and complexity of accelerators and control systems. NNs may offer the kinds of improvements in computation and control that are needed to maintain acceptable functionality. For control, their associative characteristics could provide signal conversion or data translation. Because they can do any computation, such as least squares, they can close feedback loops autonomously to provide intelligent control at the point of action rather than at a central location that requires transfers, conversions, hand-shaking and other costly repetitions like input protection. Both computation and control can be integrated on a single chip, printed circuit or an optical equivalent that is also inherently faster through full parallel operation. For such reasons one expects lower costs and better results. Such systems could be optimized by integrating sensor and signal processing functions. Distributed nets of such hardware could communicate and provide global monitoring and multiprocessing in various ways, e.g. via token, slotted or parallel rings (or Steiner trees) for compatibility with existing systems. Problems and advantages of this approach, such as an optimal, real-time Turing machine, are discussed. Simple examples are simulated and hardware implemented using discrete elements that demonstrate some basic characteristics of learning and parallelism. Future microprocessors are predicted and requested on this basis. 19 refs., 18 figs.

  2. Internal models and neural computation in the vestibular system.

    Science.gov (United States)

    Green, Andrea M; Angelaki, Dora E

    2010-01-01

    The vestibular system is vital for motor control and spatial self-motion perception. Afferents from the otolith organs and the semicircular canals converge with optokinetic, somatosensory and motor-related signals in the vestibular nuclei, which are reciprocally interconnected with the vestibulocerebellar cortex and deep cerebellar nuclei. Here, we review the properties of the many cell types in the vestibular nuclei, as well as some fundamental computations implemented within this brainstem-cerebellar circuitry. These include the sensorimotor transformations for reflex generation, the neural computations for inertial motion estimation, the distinction between active and passive head movements, as well as the integration of vestibular and proprioceptive information for body motion estimation. A common theme in the solution to such computational problems is the concept of internal models and their neural implementation. Recent studies have shed new insights into important organizational principles that closely resemble those proposed for other sensorimotor systems, where their neural basis has often been more difficult to identify. As such, the vestibular system provides an excellent model to explore common neural processing strategies relevant both for reflexive and for goal-directed, voluntary movement as well as perception.

  3. Neural Network Methodology for Earthquake Early Warning - first applications

    Science.gov (United States)

    Wenzel, F.; Koehler, N.; Cua, G.; Boese, M.

    2007-12-01

    PreSEIS is a method for earthquake early warning for finite faults (Böse, 2006) that is based on Artificial Neural Networks (ANNs), which are used for the mapping of seismic observations onto likely source parameters, including the moment magnitude and the location of an earthquake. PreSEIS integrates all available information on ground shaking at different sensors in a seismic network and updates the estimates of seismic source parameters regularly with proceeding time. PreSEIS has been developed and tested with synthetic waveform data using the example of Istanbul, Turkey (Böse, 2006). We will present first results of the application of PreSEIS to real data from Southern California, recorded at stations from the Southern California Seismic Network. The dataset consists of 69 shallow local earthquakes with moment magnitudes ranging between 1.96 and 7.1. The data come from broadband (20 or 40 Hz) or high-broadband (80 or 100 Hz), high-gain, 3-component channels. The Southern California dataset will allow a comparison of our results with those of the Virtual Seismologist (Cua, 2004). We used the envelopes of the waveforms defined by Cua (2004) as input for the ANNs. The envelopes were obtained by taking the maximum absolute amplitude value of the recorded ground motion time history over a 1-second time window. Due to the fact that not all of the considered stations recorded each earthquake, the missing records were replaced by synthetic envelopes, calculated by envelope attenuation relationships developed by Cua (2004).
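
    The envelope construction used as ANN input, in code: the maximum absolute amplitude over consecutive 1-second windows of the ground-motion record (the trace below is a synthetic stand-in):

    ```python
    import numpy as np

    def envelope(waveform, fs):
        """Max |amplitude| per 1-second window; fs = samples per second."""
        n_win = len(waveform) // fs
        return np.abs(waveform[:n_win * fs]).reshape(n_win, fs).max(axis=1)

    rng = np.random.default_rng(7)
    trace = rng.standard_normal(40 * 60)       # 60 s of 40 Hz data
    print(envelope(trace, fs=40).shape)        # (60,) one value per second
    ```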

  4. A Neural Computational Model of Incentive Salience

    Science.gov (United States)

    Zhang, Jun; Berridge, Kent C.; Tindell, Amy J.; Smith, Kyle S.; Aldridge, J. Wayne

    2009-01-01

    Incentive salience is a motivational property with ‘magnet-like’ qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of ‘wanting’ and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered ‘wanting’ only by
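
    One compact reading of the model's core claim (our paraphrase, not the authors' code): the cue's cached learned value is modulated on the fly by a physiological gain kappa, so incentive salience can change in a novel state without relearning.

    ```python
    def incentive_salience(cached_value, kappa, gamma=1.0):
        """Cached value r scaled by the current physiological gain kappa."""
        return cached_value * kappa ** gamma

    r = 0.8                                   # value learned under a normal state
    print(incentive_salience(r, kappa=1.0))   # normal state: 0.8
    print(incentive_salience(r, kappa=4.0))   # e.g. salt appetite or sensitization: 3.2
    ```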

  5. Neural computing architectures: The design of brain-like machines

    Energy Technology Data Exchange (ETDEWEB)

    Aleksander, I.

    1989-01-01

    Theoretical and applications aspects of neural-network (NN) computers are discussed in chapters contributed by European experts. Topics addressed include speech recognition based on topology-preserving neural maps, neural-map applications, backpropagation in nonfeedforward NNs, a parallel-distributed-processing learning approach to natural language, the learning capabilities of Boolean NNs, the logic of connectionist systems, and a probabilistic-logic NN for associative learning. Consideration is given to N-tuple sampling and genetic algorithms for speech recognition; the dynamic behavior of Boolean NNs; statistical mechanics and NNs; digital NNs, matched filters, and optical implementations; heteroassociative NNs using cabling vs link-disabling local modification rules; and the generation of movement trajectories in primates and robots. Also provided is an overview of parallel distributed processing.

  6. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    Energy Technology Data Exchange (ETDEWEB)

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  7. SHIPBUILDING PRODUCTION PROCESS DESIGN METHODOLOGY USING COMPUTER SIMULATION

    Directory of Open Access Journals (Sweden)

    Marko Hadjina

    2015-06-01

    In this research, a shipbuilding production process design methodology using computer simulation is suggested. The suggested methodology is expected to provide a better and more efficient tool for the design of complex shipbuilding production processes. In the first part of this research, existing practice for production process design in shipbuilding is discussed, and its shortcomings and problems are emphasized. Next, the discrete event simulation modelling method, as the basis of the suggested methodology, is investigated and described with regard to its special characteristics, advantages and reasons for application, especially in shipbuilding production processes. Furthermore, simulation modelling basics are described, as well as the suggested methodology for the production process design procedure. A case study applying the suggested methodology to the design of a robotized profile fabrication production line is demonstrated. The selected design solution, obtained with the suggested methodology, was evaluated through comparison with a robotized profile cutting line installed in a specific shipyard production process. Based on data obtained from real production, the simulation model was further enhanced. Finally, on the grounds of this research, its results and drawn conclusions, directions for further research are suggested.
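
    A bare-bones discrete event simulation of the kind such methodologies build on, for a made-up two-station profile line with fixed process times; real studies would use a simulation package and measured shipyard data:

    ```python
    import heapq

    process_time = {"cutting": 4.0, "welding": 6.0}   # minutes per profile
    events = [(0.0, i, "cutting") for i in range(5)]  # 5 profiles arrive at t = 0
    heapq.heapify(events)                             # event list ordered by time
    free_at = {"cutting": 0.0, "welding": 0.0}        # when each station frees up

    while events:
        t, job, station = heapq.heappop(events)
        start = max(t, free_at[station])              # wait if the station is busy
        end = start + process_time[station]
        free_at[station] = end
        print(f"job {job}: {station} {start:5.1f} -> {end:5.1f}")
        if station == "cutting":                      # route to the next station
            heapq.heappush(events, (end, job, "welding"))
    ```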

  8. A neural algorithm for a fundamental computing problem.

    Science.gov (United States)

    Dasgupta, Sanjoy; Stevens, Charles F; Navlakha, Saket

    2017-11-10

    Similarity search (for example, identifying similar images in a database or similar documents on the web) is a fundamental computing problem faced by large-scale information retrieval systems. We discovered that the fruit fly olfactory circuit solves this problem with a variant of a computer science algorithm (called locality-sensitive hashing). The fly circuit assigns similar neural activity patterns to similar odors, so that behaviors learned from one odor can be applied when a similar odor is experienced. The fly algorithm, however, uses three computational strategies that depart from traditional approaches. These strategies can be translated to improve the performance of computational similarity searches. This perspective helps illuminate the logic supporting an important sensory function and provides a conceptually new algorithm for solving a fundamental computational problem. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
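
    A small rendition of the fly algorithm as reported in the paper: expand the input with a sparse random projection, keep only the top-k winners (winner-take-all), and use the resulting sparse code as the hash. Dimensions and sparsity here are toy values, not the biological numbers:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    d, m, k = 50, 2000, 40                          # input dim, expansion dim, tag size
    M = (rng.random((m, d)) < 0.1).astype(float)    # sparse binary projection

    def fly_hash(x):
        x = x - x.mean()                            # center (flies normalize intensities)
        a = M @ x                                   # expand 50 -> 2000 "Kenyon cells"
        tag = np.zeros(m)
        tag[np.argsort(a)[-k:]] = 1.0               # winner-take-all: keep top-k cells
        return tag

    x = rng.random(d)
    noisy = x + 0.01 * rng.standard_normal(d)       # a similar "odor"
    far = rng.random(d)                             # an unrelated "odor"
    print(fly_hash(x) @ fly_hash(noisy))            # large tag overlap
    print(fly_hash(x) @ fly_hash(far))              # near-chance overlap (~k*k/m)
    ```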

  9. Methodology for computer-assisted optimization of waste flow

    Directory of Open Access Journals (Sweden)

    Popa Cicerone Laurentiu

    2017-01-01

    Full Text Available The paper reports the development of a methodology based on computer simulations with the purpose to support decisions in designing the optimal architecture of different types of selective waste collection systems and recycling systems. The design of such systems is a complex task which involves both a very good knowledge of selective waste collection system equipment characteristics and of recycling processes, and the correct placing of the equipment along the flow so that to avoid underutilization of the structural elements and to avoid bottlenecks which generate low productivity or even blockages. The methodology is applied for three case studies in which different types of waste flow models are investigated: hybrid waste flows (windshields recycling, discrete waste flows (waste electric and electronic equipment collection and continuous flows (industrial and automotive used oil collection and recycling. The architectures of these systems are optimized using the developed methodology in order to increase usage degree and productivity.

  10. An efficient hysteresis modeling methodology and its implementation in field computation applications

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A., E-mail: adlyamr@gmail.com [Electrical Power and Machines Dept., Faculty of Engineering, Cairo University, Giza 12613 (Egypt); Abd-El-Hafiz, S.K. [Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza 12613 (Egypt)

    2017-07-15

    Highlights: • An approach to simulate hysteresis while taking shape anisotropy into consideration. • Utilizing an ensemble of triangular sub-region hysteresis models in field computation. • A novel tool capable of carrying out field computation while keeping track of hysteresis losses. • The approach may be extended to 3D tetrahedral sub-volumes. - Abstract: Field computation in media exhibiting hysteresis is crucial to a variety of applications such as magnetic recording processes and the accurate determination of core losses in power devices. Recently, Hopfield neural networks (HNNs) have been successfully configured to construct scalar and vector hysteresis models. This paper presents an efficient hysteresis modeling methodology and its implementation in field computation applications. The methodology is based on applying the integral equation approach to discretized triangular magnetic sub-regions. Within every triangular sub-region, hysteresis properties are realized using a 3-node HNN. Details of the approach and sample computation results are given in the paper.
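
    For readers unfamiliar with the building block, the sketch below shows a generic discrete Hopfield update of the kind the authors configure (as 3-node networks per triangular sub-region); the weights and biases are illustrative, not the paper's hysteresis model. The history dependence (the settled state depends on the starting state) is what makes such networks natural carriers of hysteresis.

        import numpy as np

        W = np.array([[0.0, 1.0, -1.0],
                      [1.0, 0.0, 1.0],
                      [-1.0, 1.0, 0.0]])   # symmetric coupling, zero diagonal (assumed)
        b = np.array([0.2, -0.1, 0.0])     # node biases (assumed)
        s = np.sign(np.random.default_rng(2).standard_normal(3))  # random +/-1 start

        for _ in range(10):                # asynchronous updates until settled
            for i in range(3):
                s[i] = 1.0 if W[i] @ s + b[i] >= 0 else -1.0
        print(s)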

  11. Advances in neural networks computational intelligence for ICT

    CERN Document Server

    Esposito, Anna; Morabito, Francesco; Pasero, Eros

    2016-01-01

    This carefully edited book places emphasis on computational and artificial intelligence methods for learning and their applications in robotics, embedded systems, and ICT interfaces for psychological and neurological diseases. The book is a follow-up to the scientific workshop on Neural Networks (WIRN 2015) held in Vietri sul Mare, Italy, from the 20th to the 22nd of May 2015. The workshop, now at its 27th edition, has become a traditional scientific event bringing together scientists from many countries and several scientific disciplines. Each chapter is an extended version of the original contribution presented at the workshop, benefiting from the reviewers' peer revisions and the live discussion during the presentation. The content of the book is organized in the following sections: 1. Introduction, 2. Machine Learning, 3. Artificial Neural Networks: Algorithms and models, 4. Intelligent Cyberphysical and Embedded System, 5. Computational Intelligence Methods for Biomedical ICT in...

  12. Computer aided die design: A new open-source methodology

    Science.gov (United States)

    Carneiro, Olga Sousa; Rajkumar, Ananth; Ferrás, Luís Lima; Fernandes, Célio; Sacramento, Alberto; Nóbrega, João Miguel

    2017-05-01

    In this work we present a detailed description of how to use open-source computer codes to aid the design of complex profile extrusion dies, aiming to improve their flow distribution. The work encompasses a description of the overall open-source die design methodology; the implementation of the energy conservation equation in an existing OpenFOAM® solver, which then becomes capable of simulating the steady non-isothermal flow of an incompressible generalized Newtonian fluid; and two case studies that illustrate the capabilities and practical usefulness of the developed methodology. The results obtained in these case studies, used to solve real industrial problems, demonstrate that computational design aid is an excellent alternative, from both economical and technical points of view, to the experimental trial-and-error procedure commonly used in industry.
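
    For reference, one common form of the energy equation for steady, incompressible, non-isothermal generalized Newtonian flow (assuming constant thermal conductivity; the exact form implemented in the solver may differ) is

        \rho c_p\, \mathbf{u} \cdot \nabla T
          \;=\; k\, \nabla^{2} T \;+\; \boldsymbol{\tau} : \nabla \mathbf{u},
        \qquad
        \boldsymbol{\tau} \;=\; \eta(\dot{\gamma})
          \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} \right),

    where the last term is viscous dissipation and the viscosity η depends on the shear rate, as is characteristic of generalized Newtonian fluids.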

  13. Large Scale Evolution of Convolutional Neural Networks Using Volunteer Computing

    OpenAIRE

    Desell, Travis

    2017-01-01

    This work presents a new algorithm called evolutionary exploration of augmenting convolutional topologies (EXACT), which is capable of evolving the structure of convolutional neural networks (CNNs). EXACT is in part modeled after the neuroevolution of augmenting topologies (NEAT) algorithm, with notable exceptions to allow it to scale to large scale distributed computing environments and evolve networks with convolutional filters. In addition to multithreaded and MPI versions, EXACT has been ...

  14. Memristor-Based Computing Architecture: Design Methodologies and Circuit Techniques

    Science.gov (United States)

    2013-03-01

    [Only report-form fragments survive extraction for this record: title page fields from the Polytechnic Institute of New York University, project/task identifiers, and partial references, including P. Krzysteczko, G. Reiss, and A. Thomas, "Memristive switching of MgO based magnetic tunnel junctions".]

  15. Regional Computation of TEC Using a Neural Network Model

    Science.gov (United States)

    Leandro, R. F.; Santos, M. C.

    2004-05-01

    One of the main sources of error in GPS measurements is ionospheric refraction. Because the ionosphere is a dispersive medium, its influence can be computed using dual-frequency receivers. With single-frequency receivers it is necessary to use models that estimate the magnitude of the ionospheric refraction. The GPS broadcast message carries the parameters of one such model, namely the Klobuchar model. Dual-frequency receivers allow the influence of the ionosphere on the GPS signal to be estimated through the computation of TEC (Total Electron Content) values, which are directly related to the magnitude of the delay caused by the ionosphere. One alternative is to create a regional model based on a network of dual-frequency receivers, in which the regional behaviour of the ionosphere is modelled so that TEC values can be estimated within or near this region. Such a regional model can be based on polynomials, for example. In this work we present a neural network-based model for the regional computation of TEC. The advantage of using a neural network is that deep knowledge of the behaviour of the modelled surface is not required, thanks to the adaptation capability of the training process, an iterative adjustment of the synaptic weights as a function of the residuals. Nevertheless, prior knowledge of the modelled phenomenon remains important for deciding what kind of, and how many, parameters are needed to train the network so that reasonable estimates are obtained. We used data from the GPS tracking network in Brazil and tested the accuracy of the new model at every station location, assessing the efficiency of the model everywhere. TEC values were computed for each station of the network. After that, the training data set for the test station was formed from the TEC values of all the others (all stations except the test one). The Neural Network was
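
    A minimal sketch of the regional-TEC idea using scikit-learn (the abstract does not specify the network architecture, and all data below are synthetic stand-ins): learn TEC as a function of station coordinates and time of day, then evaluate at held-out points standing in for a test station.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        lat = rng.uniform(-30, 5, 500)
        lon = rng.uniform(-70, -35, 500)
        hour = rng.uniform(0, 24, 500)
        # synthetic TEC with a diurnal cycle and a latitude trend
        tec = 20 + 10 * np.sin(2 * np.pi * hour / 24) + 0.1 * lat \
              + rng.normal(0, 1, 500)

        X = np.column_stack([lat, lon,
                             np.sin(2 * np.pi * hour / 24),
                             np.cos(2 * np.pi * hour / 24)])
        model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                             random_state=0)
        model.fit(X[:-50], tec[:-50])            # train on all but one "station"
        print(model.score(X[-50:], tec[-50:]))   # R^2 at the held-out site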

  16. A modular architecture for transparent computation in recurrent neural networks.

    Science.gov (United States)

    Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim

    2017-01-01

    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Evolutionary Computation and Its Applications in Neural and Fuzzy Systems

    Directory of Open Access Journals (Sweden)

    Biaobiao Zhang

    2011-01-01

    Neural networks and fuzzy systems are two soft-computing paradigms for system modelling. Adapting a neural or fuzzy system requires solving two optimization problems: structural optimization and parametric optimization. Structural optimization is a discrete optimization problem that is very hard to solve using conventional optimization techniques. Parametric optimization can be solved using conventional optimization techniques, but the solution may easily be trapped at a bad local optimum. Evolutionary computation is a general-purpose stochastic global optimization approach under the universally accepted neo-Darwinian paradigm, a combination of classical Darwinian evolutionary theory, the selectionism of Weismann, and the genetics of Mendel. Evolutionary algorithms are a major approach to adaptation and optimization. In this paper, we first introduce evolutionary algorithms with emphasis on genetic algorithms and evolution strategies. Other evolutionary algorithms such as genetic programming, evolutionary programming, particle swarm optimization, immune algorithms, and ant colony optimization are also described. Some topics pertaining to evolutionary algorithms are discussed, and a comparison between evolutionary algorithms and simulated annealing is made. Finally, the application of EAs to the learning of neural networks as well as to the structural and parametric adaptation of fuzzy systems is detailed.
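
    A minimal genetic-algorithm sketch of the parametric-optimization use case discussed above, evolving a real-valued parameter vector (standing in for network weights) against a toy fitness function; the population size, operators and settings are illustrative.

        import numpy as np

        rng = np.random.default_rng(4)

        def fitness(w):                     # toy objective standing in for NN error
            return -np.sum((w - 0.5) ** 2)

        pop = rng.random((30, 8))           # 30 candidates, 8 parameters each
        for gen in range(100):
            scores = np.array([fitness(w) for w in pop])
            parents = pop[np.argsort(scores)[-10:]]       # truncation selection
            children = []
            for _ in range(len(pop)):
                a, b = parents[rng.integers(10, size=2)]
                mask = rng.random(8) < 0.5                # uniform crossover
                child = np.where(mask, a, b)
                child += rng.normal(0, 0.02, 8)           # Gaussian mutation
                children.append(child)
            pop = np.array(children)
        print(max(fitness(w) for w in pop))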

  18. Review On Applications Of Neural Network To Computer Vision

    Science.gov (United States)

    Li, Wei; Nasrabadi, Nasser M.

    1989-03-01

    Neural network models have many potential applications in computer vision due to their parallel structure, learnability, implicit representation of domain knowledge, fault tolerance, and ability to handle statistical data. This paper demonstrates the basic principles, typical models and their applications in this field. A variety of neural models, such as associative memories, the multilayer back-propagation perceptron, the self-stabilized adaptive resonance network, the hierarchically structured neocognitron, high-order correlators, networks with gating control and other models, can be applied to visual signal recognition, reinforcement, recall, stereo vision, motion, object tracking and other vision processes. Most of the algorithms have been simulated on computers; some have been implemented with special hardware. Some systems use image features, such as edges and profiles, as the input data; other systems feed raw data directly to the networks. We present some novel ideas contained in these approaches and provide a comparison of the methods. Some unsolved problems are mentioned, such as extracting the intrinsic properties of the input information, integrating low-level functions into a high-level cognitive system, and achieving invariances. Prospects for applications of some human vision models and neural network models are analyzed.

  19. Application of artificial neural networks in computer-aided diagnosis.

    Science.gov (United States)

    Liu, Bei

    2015-01-01

    Computer-aided diagnosis is a diagnostic procedure in which a radiologist uses the outputs of computer analysis of medical images as a second opinion in the interpretation of medical images, either to help with lesion detection or to help determine if the lesion is benign or malignant. Artificial neural networks (ANNs) are usually employed to formulate the statistical models for computer analysis. Receiver operating characteristic curves are used to evaluate the performance of the ANN alone, as well as the diagnostic performance of radiologists who take into account the ANN output as a second opinion. In this chapter, we use mammograms to illustrate how an ANN model is trained, tested, and evaluated, and how a radiologist should use the ANN output as a second opinion in CAD.
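
    The ROC evaluation step mentioned here can be made concrete with a few lines of scikit-learn; the labels and ANN output scores below are synthetic stand-ins.

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(5)
        labels = rng.integers(0, 2, 200)          # 0 = benign, 1 = malignant (synthetic)
        # synthetic ANN outputs, higher on average for malignant cases
        scores = np.clip(labels * 0.3 + rng.normal(0.4, 0.2, 200), 0, 1)

        fpr, tpr, thresholds = roc_curve(labels, scores)
        print("AUC:", roc_auc_score(labels, scores))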

  20. Computational modeling of neural activities for statistical inference

    CERN Document Server

    Kolossa, Antonio

    2016-01-01

    This authored monograph supplies empirical evidence for the Bayesian brain hypothesis by modeling event-related potentials (ERPs) of the human electroencephalogram (EEG) during successive trials in cognitive tasks. The employed observer models are useful for computing probability distributions over observable events and hidden states, depending on which are present in the respective tasks. Bayesian model selection is then used to choose the model which best explains the ERP amplitude fluctuations. Thus, this book constitutes a decisive step towards a better understanding of the neural coding and computing of probabilities following Bayesian rules. The target audience primarily comprises research experts in the field of computational neuroscience, but the book may also be beneficial for graduate students who want to specialize in this field.

  1. A computer simulator for development of engineering system design methodologies

    Science.gov (United States)

    Padula, S. L.; Sobieszczanski-Sobieski, J.

    1987-01-01

    A computer program designed to simulate and improve engineering system design methodology is described. The simulator mimics the qualitative behavior and data couplings occurring among the subsystems of a complex engineering system. It eliminates the engineering analyses in the subsystems by replacing them with judiciously chosen analytical functions. With the cost of analysis eliminated, the simulator is used for experimentation with a large variety of candidate algorithms for multilevel design optimization to choose the best ones for the actual application. Thus, the simulator serves as a development tool for multilevel design optimization strategy. The simulator concept, implementation, and status are described and illustrated with examples.

  2. A computational analysis of the neural bases of Bayesian inference.

    Science.gov (United States)

    Kolossa, Antonio; Kopp, Bruno; Fingscheidt, Tim

    2015-02-01

    Empirical support for the Bayesian brain hypothesis, although of major theoretical importance for cognitive neuroscience, is surprisingly scarce. This hypothesis posits simply that neural activities code and compute Bayesian probabilities. Here, we introduce an urn-ball paradigm to relate event-related potentials (ERPs) such as the P300 wave to Bayesian inference. Bayesian model comparison is conducted to compare various models in terms of their ability to explain trial-by-trial variation in ERP responses at different points in time and over different regions of the scalp. Specifically, we are interested in dissociating specific ERP responses in terms of Bayesian updating and predictive surprise. Bayesian updating refers to changes in probability distributions given new observations, while predictive surprise equals the surprise about observations under current probability distributions. Components of the late positive complex (P3a, P3b, Slow Wave) provide dissociable measures of Bayesian updating and predictive surprise. Specifically, the updating of beliefs about hidden states yields the best fit for the anteriorly distributed P3a, whereas the updating of predictions of observations accounts best for the posteriorly distributed Slow Wave. In addition, parietally distributed P3b responses are best fit by predictive surprise. These results indicate that the three components of the late positive complex reflect distinct neural computations. As such they are consistent with the Bayesian brain hypothesis, but these neural computations seem to be subject to nonlinear probability weighting. We integrate these findings with the free-energy principle that instantiates the Bayesian brain hypothesis. Copyright © 2014 Elsevier Inc. All rights reserved.
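
    The two quantities dissociated here can be made concrete in a toy urn-ball setting (the urn contents below are invented): Bayesian updating measured as the divergence between posterior and prior beliefs over hidden urns, and predictive surprise as the negative log-probability of the observation under the current beliefs.

        import numpy as np

        likelihood = np.array([[0.9, 0.1],    # p(ball colour | urn 0)
                               [0.3, 0.7]])   # p(ball colour | urn 1)
        prior = np.array([0.5, 0.5])          # initial beliefs over urns

        for ball in [0, 0, 1]:                # observed ball colours
            p_obs = prior @ likelihood[:, ball]            # predictive probability
            posterior = prior * likelihood[:, ball] / p_obs
            updating = np.sum(posterior * np.log2(posterior / prior))  # KL, bits
            surprise = -np.log2(p_obs)                     # predictive surprise, bits
            print(f"ball={ball}  updating={updating:.3f}  surprise={surprise:.3f}")
            prior = posterior                              # beliefs carry over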

  3. Statistical Methodologies to Integrate Experimental and Computational Research

    Science.gov (United States)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with the statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.

  4. Development of a computational methodology for internal dose calculations

    CERN Document Server

    Yoriyaz, H

    2000-01-01

    A new approach to calculating internal dose estimates was developed through the use of a more realistic computational model of the human body and a more precise tool for radiation transport simulation. The present technique demonstrates the capability to build a patient-specific phantom from tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as the MCNP-4B code. In order to use the segmented human anatomy as a computational model for the simulation of radiation transport, an interface program, SCMS, was developed to build the geometric configurations for the phantom from tomographic images. This procedure makes it possible to calculate not only average dose values but also the spatial distribution of dose in regions of interest. With the present methodology, absorbed fractions for photons and electrons in various organs of the Zubal segmented phantom were calculated and compared to those reported for the mathematical phanto...

  5. Computational methodology for ChIP-seq analysis

    Science.gov (United States)

    Shin, Hyunjin; Liu, Tao; Duan, Xikun; Zhang, Yong; Liu, X. Shirley

    2015-01-01

    Chromatin immunoprecipitation coupled with massive parallel sequencing (ChIP-seq) is a powerful technology to identify the genome-wide locations of DNA binding proteins such as transcription factors or modified histones. As more and more experimental laboratories are adopting ChIP-seq to unravel the transcriptional and epigenetic regulatory mechanisms, computational analyses of ChIP-seq also become increasingly comprehensive and sophisticated. In this article, we review current computational methodology for ChIP-seq analysis, recommend useful algorithms and workflows, and introduce quality control measures at different analytical steps. We also discuss how ChIP-seq could be integrated with other types of genomic assays, such as gene expression profiling and genome-wide association studies, to provide a more comprehensive view of gene regulatory mechanisms in important physiological and pathological processes. PMID:25741452

  6. The neural and computational bases of semantic cognition.

    Science.gov (United States)

    Ralph, Matthew A Lambon; Jefferies, Elizabeth; Patterson, Karalyn; Rogers, Timothy T

    2017-01-01

    Semantic cognition refers to our ability to use, manipulate and generalize knowledge that is acquired over the lifespan to support innumerable verbal and non-verbal behaviours. This Review summarizes key findings and issues arising from a decade of research into the neurocognitive and neurocomputational underpinnings of this ability, leading to a new framework that we term controlled semantic cognition (CSC). CSC offers solutions to long-standing queries in philosophy and cognitive science, and yields a convergent framework for understanding the neural and computational bases of healthy semantic cognition and its dysfunction in brain disorders.

  7. A design methodology for portable software on parallel computers

    Science.gov (United States)

    Nicol, David M.; Miller, Keith W.; Chrisman, Dan A.

    1993-01-01

    This final report for research supported by grant number NAG-1-995 documents our progress in addressing two difficulties in parallel programming. The first difficulty is developing software that executes quickly on a parallel computer. The second is transporting software between dissimilar parallel computers. In general, we expect more hardware-specific information to be included in software designs for parallel computers than in designs for sequential computers; this inclusion is an instance of portability being sacrificed for high performance. New parallel computers are introduced frequently, so a developer trying to keep software on the current high-performance hardware almost continually faces yet another expensive software transport. The goal of the proposed research is to create a design methodology that helps designers more precisely control both portability and hardware-specific programming details. The proposed research emphasizes programming for scientific applications. We completed our study of the parallelizability of a subsystem of the NASA Earth Radiation Budget Experiment (ERBE) data processing system. This work is summarized in section two; a more detailed description is provided in Appendix A ('Programming Practices to Support Eventual Parallelism'). Mr. Chrisman, a graduate student, wrote and successfully defended a Ph.D. dissertation proposal describing our research on the issues of software portability and high performance; the research tasks are specified in the proposal. The proposal, 'A Design Methodology for Portable Software on Parallel Computers', is summarized in section three and provided in its entirety in Appendix B. We are currently studying a proposed subsystem of the NASA Clouds and the Earth's Radiant Energy System (CERES) data processing system. This software is the proof of concept for the Ph.D. dissertation. We have implemented and measured

  8. Eye tracking using artificial neural networks for human computer interaction.

    Science.gov (United States)

    Demjén, E; Aboši, V; Tomori, Z

    2011-01-01

    This paper describes an ongoing project that aims to develop a low-cost application to replace a computer mouse for people with physical impairment. The application is based on an eye tracking algorithm and assumes that the camera and the head position are fixed. Color tracking and template matching methods are used for pupil detection. Calibration is provided by neural networks as well as by parametric interpolation methods. The neural networks use back-propagation for learning, and the bipolar sigmoid function is chosen as the activation function. The user's eye is scanned with a simple web camera with backlight compensation, which is attached to a head-fixation device. Neural networks significantly outperform the parametric interpolation techniques: 1) the calibration procedure is faster, as they require fewer calibration marks, and 2) cursor control is more precise. The system in its current stage of development is able to distinguish regions at least at the level of desktop icons. The main limitations of the proposed method are the lack of head-pose invariance and its relative sensitivity to illumination (especially to incidental pupil reflections).
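
    A minimal sketch of the calibration idea, assuming synthetic data throughout: a small network with a bipolar-sigmoid (tanh-shaped) hidden layer maps pupil-centre coordinates to screen coordinates and is trained by plain back-propagation on a grid of calibration marks.

        import numpy as np

        rng = np.random.default_rng(6)
        pupil = rng.uniform(-1, 1, (9, 2))               # 9 calibration marks
        screen = 0.8 * pupil + 0.05 * pupil ** 2         # synthetic true mapping

        W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
        W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)
        bipolar = np.tanh                                # bipolar sigmoid activation

        for _ in range(5000):                            # plain gradient descent
            h = bipolar(pupil @ W1 + b1)
            out = h @ W2 + b2
            err = out - screen
            gW2 = h.T @ err; gb2 = err.sum(0)
            dh = (err @ W2.T) * (1 - h ** 2)             # tanh derivative
            gW1 = pupil.T @ dh; gb1 = dh.sum(0)
            for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
                p -= 0.01 * g
        print(np.abs(out - screen).max())                # residual calibration error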

  9. Computational models of the neural control of breathing.

    Science.gov (United States)

    Molkov, Yaroslav I; Rubin, Jonathan E; Rybak, Ilya A; Smith, Jeffrey C

    2017-03-01

    The ongoing process of breathing underlies the gas exchange essential for mammalian life. Each respiratory cycle ensues from the activity of rhythmic neural circuits in the brainstem, shaped by various modulatory signals, including mechanoreceptor feedback sensitive to lung inflation and chemoreceptor feedback dependent on gas composition in blood and tissues. This paper reviews a variety of computational models designed to reproduce experimental findings related to the neural control of breathing and generate predictions for future experimental testing. The review starts from the description of the core respiratory network in the brainstem, representing the central pattern generator (CPG) responsible for producing rhythmic respiratory activity, and progresses to encompass additional complexities needed to simulate different metabolic challenges, closed-loop feedback control including the lungs, and interactions between the respiratory and autonomic nervous systems. The integrated models considered in this review share a common framework including a distributed CPG core network responsible for generating the baseline three-phase pattern of rhythmic neural activity underlying normal breathing. WIREs Syst Biol Med 2017, 9:e1371. doi: 10.1002/wsbm.1371 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.

  10. A taxonomy of Deep Convolutional Neural Nets for Computer Vision

    Directory of Open Access Journals (Sweden)

    Suraj eSrinivas

    2016-01-01

    Traditional architectures for solving computer vision problems, and the degree of success they enjoyed, have been heavily reliant on hand-crafted features. Of late, however, deep learning techniques have offered a compelling alternative: automatically learning problem-specific features. With this new paradigm, every problem in computer vision is now being re-examined from a deep learning perspective. It has therefore become important to understand what kinds of deep networks are suitable for a given problem. Although general surveys of this fast-moving paradigm (i.e., deep networks) exist, a survey specific to computer vision is missing. We specifically consider one form of deep network widely used in computer vision: convolutional neural networks (CNNs). We start with 'AlexNet' as our base CNN and then examine the broad variations proposed over time to suit different applications. We hope that our recipe-style survey will serve as a guide, particularly for novice practitioners intending to use deep learning techniques for computer vision.
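
    To make the "base CNN" pattern concrete, below is a minimal AlexNet-style skeleton in PyTorch (convolution-pooling stages followed by a classifier); the layer sizes are illustrative and far smaller than AlexNet's actual configuration.

        import torch
        import torch.nn as nn

        class TinyCNN(nn.Module):
            def __init__(self, n_classes=10):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # for 32x32 inputs

            def forward(self, x):
                x = self.features(x)
                return self.classifier(x.flatten(1))

        net = TinyCNN()
        print(net(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 10])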

  11. Artificial intelligence in pharmaceutical product formulation: neural computing

    Directory of Open Access Journals (Sweden)

    Svetlana Ibrić

    2009-10-01

    The properties of a formulation are determined not only by the ratios in which the ingredients are combined but also by the processing conditions. Although the relationships between ingredient levels, processing conditions, and product performance may be known anecdotally, they can rarely be quantified. In the past, formulators tended to use statistical techniques to model their formulations, relying on response surfaces to provide a mechanism for optimization. However, optimization by such methods can be misleading, especially if the formulation is complex. More recently, advances in mathematics and computer science have led to the development of alternative modeling and data mining techniques that work with a wider range of data sources: neural networks (an attempt to mimic the processing of the human brain); genetic algorithms (an attempt to mimic the evolutionary process by which biological systems self-organize and adapt); and fuzzy logic (an attempt to mimic the ability of the human brain to draw conclusions and generate responses based on incomplete or imprecise information). In this review the current technology is examined, as well as its application in pharmaceutical formulation and processing. The challenges, benefits and future possibilities of neural computing are discussed.

  12. Utilizing neural networks in magnetic media modeling and field computation: A review.

    Science.gov (United States)

    Adly, Amr A; Abd-El-Hafiz, Salwa K

    2014-11-01

    Magnetic materials are considered as crucial components for a wide range of products and devices. Usually, complexity of such materials is defined by their permeability classification and coupling extent to non-magnetic properties. Hence, development of models that could accurately simulate the complex nature of these materials becomes crucial to the multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as field computation involving complex magnetic materials. Mostly used ANN types in magnetics, advantages of this usage, detailed implementation methodologies as well as numerical examples are given in the paper.

  13. Utilizing neural networks in magnetic media modeling and field computation: A review

    Directory of Open Access Journals (Sweden)

    Amr A. Adly

    2014-11-01

    Magnetic materials are considered as crucial components for a wide range of products and devices. Usually, complexity of such materials is defined by their permeability classification and coupling extent to non-magnetic properties. Hence, development of models that could accurately simulate the complex nature of these materials becomes crucial to the multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as field computation involving complex magnetic materials. Mostly used ANN types in magnetics, advantages of this usage, detailed implementation methodologies as well as numerical examples are given in the paper.

  14. A computational methodology to screen activities of enzyme variants.

    Directory of Open Access Journals (Sweden)

    Martin R Hediger

    We present a fast computational method to efficiently screen enzyme activity. In the presented method, the effect of mutations on the barrier height of an enzyme-catalysed reaction can be computed within 24 hours on roughly 10 processors. The methodology is based on the PM6 and MOZYME methods as implemented in MOPAC2009, and is tested on the first step of the amide hydrolysis reaction catalyzed by the Candida antarctica lipase B (CalB) enzyme. The barrier heights are estimated using adiabatic mapping and shown to give barrier heights within 3 kcal/mol of B3LYP/6-31G(d)//RHF/3-21G results for a small model system. Relatively strict convergence criteria (0.5 kcal/(mol·Å)), long NDDO cutoff distances within the MOZYME method (15 Å) and single-point evaluations using conventional PM6 are needed for reliable results. The generation of mutant structures and the subsequent setup of the semiempirical calculations are automated so that the effect on barrier heights can be estimated for hundreds of mutants in a matter of weeks using high performance computing.

  15. Electricity market price forecasting by grid computing optimizing artificial neural networks

    OpenAIRE

    Niimura, T.; Ozawa, K.; Sakamoto, N.

    2007-01-01

    This paper presents a grid computing approach to parallel-process a neural network time-series model for forecasting electricity market prices. A grid computing environment introduced in a university computing laboratory provides access to otherwise underused computing resources. The grid computing of the neural network model not only processes several times faster than a single iterative process, but also provides chances of improving forecasting accuracy. Results of numerical tests using re...

  16. Instrumentation for Scientific Computing in Neural Networks, Information Science, Artificial Intelligence, and Applied Mathematics.

    Science.gov (United States)

    1987-10-01

    [Only report-form fragments survive extraction for this record.] Instrumentation grant to purchase equipment for support of research in neural networks, information science, artificial intelligence, and applied mathematics. Contract AFOSR 86-0282. Principal Investigator: Stephen

  17. A Computational Methodology to Screen Activities of Enzyme Variants

    CERN Document Server

    Hediger, Martin R; Svendsen, Allan; Besenmatter, Werner; Jensen, Jan H

    2012-01-01

    We present a fast computational method to efficiently screen enzyme activity. In the presented method, the effect of mutations on the barrier height of an enzyme-catalysed reaction can be computed within 24 hours on roughly 10 processors. The methodology is based on the PM6 and MOZYME methods as implemented in MOPAC2009, and is tested on the first step of the amide hydrolysis reaction catalyzed by Candida antarctica lipase B (CalB) enzyme. The barrier heights are estimated using adiabatic mapping and are shown to give barrier heights to within 3 kcal/mol of B3LYP/6-31G(d)//RHF/3-21G results for a small model system. Relatively strict convergence criteria (0.5 kcal/(mol·Å)), long NDDO cutoff distances within the MOZYME method (15 Å) and single point evaluations using conventional PM6 are needed for reliable results. The generation of mutant structures and subsequent setup of the semiempirical calculations are automated so that the effect on barrier heights can be estimated for hundreds of mutants in a matte...

  18. Neural Cognition and Affective Computing on Cyber Language.

    Science.gov (United States)

    Huang, Shuang; Zhou, Xuan; Xue, Ke; Wan, Xiqiong; Yang, Zhenyi; Xu, Duo; Ivanović, Mirjana; Yu, Xueer

    2015-01-01

    Characterized by its customary symbol system and simple and vivid expression patterns, cyber language acts not only as a tool for convenient communication but also as a carrier of abundant emotions, and it attracts considerable attention in public opinion analysis, internet marketing, service feedback monitoring, and social emergency management. Based on our multidisciplinary research, this paper presents a classification of the emotional symbols in cyber language, analyzes the cognitive characteristics of different symbols, and puts forward a mechanism model to show the dominant neural activities in that process. Through the comparative study of Chinese, English, and Spanish, which are used by the largest population in the world, this paper discusses the expressive patterns of emotions in international cyber languages and proposes an intelligent method for affective computing on cyber language in a unified PAD (Pleasure-Arousal-Dominance) emotional space.

  19. Neural Cognition and Affective Computing on Cyber Language

    Science.gov (United States)

    Huang, Shuang; Zhou, Xuan; Xue, Ke; Wan, Xiqiong; Yang, Zhenyi; Xu, Duo; Ivanović, Mirjana

    2015-01-01

    Characterized by its customary symbol system and simple and vivid expression patterns, cyber language acts not only as a tool for convenient communication but also as a carrier of abundant emotions, and it attracts considerable attention in public opinion analysis, internet marketing, service feedback monitoring, and social emergency management. Based on our multidisciplinary research, this paper presents a classification of the emotional symbols in cyber language, analyzes the cognitive characteristics of different symbols, and puts forward a mechanism model to show the dominant neural activities in that process. Through the comparative study of Chinese, English, and Spanish, which are used by the largest population in the world, this paper discusses the expressive patterns of emotions in international cyber languages and proposes an intelligent method for affective computing on cyber language in a unified PAD (Pleasure-Arousal-Dominance) emotional space. PMID:26491431

  20. Neural Cognition and Affective Computing on Cyber Language

    Directory of Open Access Journals (Sweden)

    Shuang Huang

    2015-01-01

    Characterized by its customary symbol system and simple and vivid expression patterns, cyber language acts not only as a tool for convenient communication but also as a carrier of abundant emotions, and it attracts considerable attention in public opinion analysis, internet marketing, service feedback monitoring, and social emergency management. Based on our multidisciplinary research, this paper presents a classification of the emotional symbols in cyber language, analyzes the cognitive characteristics of different symbols, and puts forward a mechanism model to show the dominant neural activities in that process. Through the comparative study of Chinese, English, and Spanish, which are used by the largest population in the world, this paper discusses the expressive patterns of emotions in international cyber languages and proposes an intelligent method for affective computing on cyber language in a unified PAD (Pleasure-Arousal-Dominance) emotional space.

  1. Neural computation and particle accelerators research, technology and applications

    CERN Document Server

    D'Arras, Horace

    2010-01-01

    This book discusses neural computation (networks or circuits of biological neurons) and, relatedly, particle accelerators (scientific instruments that accelerate charged particles such as protons, electrons and deuterons). Accelerators have a very broad range of applications in many industrial fields, from high energy physics to medical isotope production. Nuclear technology is one of the fields discussed in this book. The development reached by particle accelerators in energy and particle intensity has opened the possibility of a wide number of new applications in nuclear technology. This book reviews the applications in the nuclear energy field and explains the design features of high power neutron sources. Surface treatments of niobium flat samples and superconducting radio frequency cavities by a new technique called gas cluster ion beam are also studied in detail, as is the process of electropolishing. Furthermore, magnetic devices such as solenoids, dipoles and undulators, which ...

  2. Evolution of Neural Computations: Mantis Shrimp and Human Color Decoding

    Directory of Open Access Journals (Sweden)

    Qasim Zaidi

    2014-10-01

    Mantis shrimp and primates both possess good color vision, but the neural implementation in the two species is very different, a reflection of the largely unrelated evolutionary lineages of these creatures. Mantis shrimp have scanning compound eyes with 12 classes of photoreceptors, and have evolved a system to decode color information at the front end of the sensory stream. Primates have image-focusing eyes with three classes of cones, and decode color further along the visual-processing hierarchy. Despite these differences, we report a fascinating parallel between the computational strategies at the color-decoding stage in the brains of stomatopods and primates. Both species appear to use narrowly tuned cells that support interval decoding for color identification.

  3. Computer simulations of neural mechanisms explaining upper and lower limb excitatory neural coupling

    Directory of Open Access Journals (Sweden)

    Ferris Daniel P

    2010-12-01

    Background: When humans perform rhythmic upper and lower limb locomotor-like movements, there is an excitatory effect of upper limb exertion on lower limb muscle recruitment. To investigate potential neural mechanisms for this behavioral observation, we developed computer simulations modeling interlimb neural pathways among central pattern generators. We hypothesized that enhancement of muscle recruitment from interlimb spinal mechanisms was not sufficient to explain the muscle enhancement levels observed in experimental data. Methods: We used Matsuoka oscillators for the central pattern generators (CPGs) and determined parameters that enhanced the amplitudes of rhythmic steady-state bursts. Potential mechanisms for output enhancement were excitatory and inhibitory sensory feedback gains, excitatory and inhibitory interlimb coupling gains, and coupling geometry. We first simulated the simplest case, a single CPG, then expanded the model to two CPGs and finally four CPGs. In the two- and four-CPG models, the lower limb CPGs did not receive supraspinal input, so that the only mechanisms available for enhancing output were interlimb coupling gains and sensory feedback gains. Results: In a two-CPG model with inhibitory sensory feedback gains, only excitatory gains of ipsilateral flexor-extensor/extensor-flexor coupling produced reciprocal upper-lower limb bursts and enhanced output, by up to 26%. In a two-CPG model with excitatory sensory feedback gains, excitatory gains of contralateral flexor-flexor/extensor-extensor coupling produced reciprocal upper-lower limb bursts and enhanced output by up to 100%. However, for a given excitatory sensory feedback gain, enhancement due to excitatory interlimb gains could only reach levels up to 20%. Interconnecting four CPGs to have ipsilateral flexor-extensor/extensor-flexor coupling, contralateral flexor-flexor/extensor-extensor coupling, and bilateral flexor-extensor/extensor-flexor coupling could enhance
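
    A minimal Euler-integration sketch of a single Matsuoka half-centre oscillator, the CPG unit named above: two mutually inhibiting neurons with adaptation. The parameter values are typical choices from the literature, not those of the paper.

        import numpy as np

        tau, tau_a, beta, w, u = 0.25, 0.5, 2.5, 2.5, 1.0   # typical values (assumed)
        dt, steps = 0.001, 5000
        x = np.array([0.1, 0.0])          # membrane states (slightly asymmetric start)
        v = np.zeros(2)                   # adaptation states

        ys = []
        for _ in range(steps):
            y = np.maximum(x, 0.0)        # firing rates (rectified states)
            dx = (-x - beta * v - w * y[::-1] + u) / tau    # mutual inhibition
            dv = (-v + y) / tau_a                           # adaptation dynamics
            x += dt * dx
            v += dt * dv
            ys.append(y.copy())
        ys = np.array(ys)                 # alternating flexor/extensor bursts
        print(ys[-1000:].argmax(axis=1).mean())  # ~0.5 -> the two neurons alternate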

  4. On the Computational Power of Spiking Neural P Systems with Self-Organization

    Science.gov (United States)

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-06-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. Communication among neurons is essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the systems is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing-computable natural numbers. Moreover, with 87 neurons the system can compute any Turing-computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun.

  5. Computer-Aided Methodology for Syndromic Strabismus Diagnosis.

    Science.gov (United States)

    Sousa de Almeida, João Dallyson; Silva, Aristófanes Corrêa; Teixeira, Jorge Antonio Meireles; Paiva, Anselmo Cardoso; Gattass, Marcelo

    2015-08-01

    Strabismus is a pathology that affects approximately 4% of the population, causing aesthetic problems, reversible at any age, and irreversible sensory alterations that modify the vision mechanism. The Hirschberg test is one type of examination for detecting this pathology. Computer-aided detection/diagnosis is being used with relative success to aid health professionals. Nevertheless, the routine use of high-tech devices for aiding ophthalmological diagnosis and therapy is not yet a reality within the subspecialty of strabismus. This work therefore presents a methodology to aid in the diagnosis of syndromic strabismus through digital imaging. Two hundred images belonging to 40 patients previously diagnosed by a specialist were tested. The method was demonstrated to be 88% accurate in identifying esotropias (ET), 100% for exotropias (XT), 80.33% for hypertropias (HT), and 83.33% for hypotropias (HoT). The overall average error was 5.6Δ for horizontal and 3.83Δ for vertical deviations, against the measures presented by the specialist.

  6. Hardware Neural Networks Modeling for Computing Different Performance Parameters of Rectangular, Circular, and Triangular Microstrip Antennas

    Directory of Open Access Journals (Sweden)

    Taimoor Khan

    2014-01-01

    In the last decade, neural network-based modeling has been used for computing different performance parameters of microstrip antennas because of its learning and generalization features. Most of the neural models created so far are based on software simulation. Since neural networks are inherently massively parallel, parallel hardware can be used to build faster computing machines that exploit this parallelism. This paper demonstrates a generalized neural network model created on a field programmable gate array (FPGA)-based reconfigurable hardware platform for computing different performance parameters of rectangular, circular, and triangular microstrip antennas. The proposed approach thus provides a platform for developing low-cost neural network-based FPGA simulators for microwave applications. The results obtained by this approach are in very good agreement with the measured results available in the literature.

  7. The neural correlates of problem states: testing FMRI predictions of a computational model of multitasking.

    Directory of Open Access Journals (Sweden)

    Jelmer P Borst

    BACKGROUND: It has been shown that people can only maintain one problem state, or intermediate mental representation, at a time. When more than one problem state is required, for example in multitasking, performance decreases considerably. This effect has been explained in terms of a problem state bottleneck. METHODOLOGY: In the current study we use the complementary methodologies of computational cognitive modeling and neuroimaging to investigate the neural correlates of this problem state bottleneck. In particular, an existing computational cognitive model was used to generate a priori fMRI predictions for a multitasking experiment in which the problem state bottleneck plays a major role. Hemodynamic responses were predicted for five brain regions, corresponding to five cognitive resources in the model. Most importantly, we predicted the intraparietal sulcus to show a strong effect of the problem state manipulations. CONCLUSIONS: Some of the predictions were confirmed by a subsequent fMRI experiment, while others were not matched by the data. The experiment supported the hypothesis that the problem state bottleneck is a plausible cause of the interference in the experiment and that it could be located in the intraparietal sulcus.

  8. Independent Neural Computation of Value from Other People's Confidence.

    Science.gov (United States)

    Campbell-Meiklejohn, Daniel; Simonsen, Arndis; Frith, Chris D; Daw, Nathaniel D

    2017-01-18

    Expectation of reward can be shaped by the observation of actions and expressions of other people in one's environment. A person's apparent confidence in the likely reward of an action, for instance, makes qualities of their evidence, not observed directly, socially accessible. This strategy is computationally distinguished from associative learning methods that rely on direct observation, by its use of inference from indirect evidence. In twenty-three healthy human subjects, we isolated effects of first-hand experience, other people's choices, and the mediating effect of their confidence, on decision-making and neural correlates of value within ventromedial prefrontal cortex (vmPFC). Value derived from first-hand experience and other people's choices (regardless of confidence) were indiscriminately represented across vmPFC. However, value computed from agent choices weighted by their associated confidence was represented with specificity for ventromedial area 10. This pattern corresponds to shifts of connectivity and overlapping cognitive processes along a posterior-anterior vmPFC axis. Task behavior and self-reported self-reliance for decision-making in other social contexts correlated. The tendency to conform in other social contexts corresponded to increased activation in cortical regions previously shown to respond to social conflict in proportion to subsequent conformity (Campbell-Meiklejohn et al., 2010). The tendency to self-monitor predicted a selectively enhanced response to accordance with others in the right temporoparietal junction (rTPJ). The findings anatomically decompose vmPFC value representations according to computational requirements and provide biological insight into the social transmission of preference and reassurance gained from the confidence of others. Decades of research have provided evidence that the ventromedial prefrontal cortex (vmPFC) signals the satisfaction we expect from imminent actions. However, we have a surprisingly modest

  9. Honey characterization using computer vision system and artificial neural networks.

    Science.gov (United States)

    Shafiee, Sahameh; Minaei, Saeid; Moghaddam-Charkari, Nasrollah; Barzegar, Mohsen

    2014-09-15

    This paper reports the development of a computer vision system (CVS) for non-destructive characterization of honey based on colour and its correlated chemical attributes including ash content (AC), antioxidant activity (AA), and total phenolic content (TPC). Artificial neural network (ANN) models were applied to transform RGB values of images to CIE L*a*b* colourimetric measurements and to predict AC, TPC and AA from colour features of images. The developed ANN models were able to convert RGB values to CIE L*a*b* colourimetric parameters with a low generalization error of 1.01±0.99. In addition, the developed models for prediction of AC, TPC and AA showed high performance based on colour parameters of honey images, as the R² values for prediction were 0.99, 0.98, and 0.87 for AC, AA and TPC, respectively. The experimental results show the effectiveness and possibility of applying CVS for non-destructive honey characterization by the industry. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Advanced neural network-based computational schemes for robust fault diagnosis

    CERN Document Server

    Mrugalski, Marcin

    2014-01-01

    The present book is devoted to problems of adapting artificial neural networks to robust fault diagnosis schemes. It presents neural network-based modelling and estimation techniques used for designing robust fault diagnosis schemes for non-linear dynamic systems. Part of the book focuses on fundamental issues such as architectures of dynamic neural networks, methods for designing neural networks and fault diagnosis schemes, and the importance of robustness. The book has tutorial value and can be perceived as a good starting point for newcomers to this field. The book is also devoted to advanced schemes for describing neural model uncertainty. In particular, methods for computing neural network uncertainty with robust parameter estimation are presented. Moreover, a novel approach for system identification with the state-space GMDH neural network is delivered. All the concepts described in this book are illustrated by both simple academic examples and practica...

  11. 3-D components of a biological neural network visualized in computer generated imagery. II - Macular neural network organization

    Science.gov (United States)

    Ross, Muriel D.; Meyer, Glenn; Lam, Tony; Cutler, Lynn; Vaziri, Parshaw

    1990-01-01

    Computer-assisted reconstructions of small parts of the macular neural network show how the nerve terminals and receptive fields are organized in 3-dimensional space. This biological neural network is anatomically organized for parallel distributed processing of information. Processing appears to be more complex than in computer-based neural networks, because spatiotemporal factors figure into synaptic weighting. Serial reconstruction data show anatomical arrangements which suggest that (1) assemblies of cells analyze and distribute information with inbuilt redundancy, to improve reliability; (2) feedforward/feedback loops provide the capacity for presynaptic modulation of output during processing; (3) constrained randomness in connectivities contributes to adaptability; and (4) local variations in network complexity permit differing analyses of incoming signals to take place simultaneously. The last inference suggests that there may be segregation of information flow to central stations subserving particular functions.

  12. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNNs), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles distinguishing them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degree of neuronal excitability is regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better-reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information with its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
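
    The pair-based STDP rule referred to above can be sketched in a few lines (the amplitudes and time constants are illustrative, not the paper's settings): the weight change depends exponentially on the interval between pre- and postsynaptic spikes.

        import numpy as np

        A_plus, A_minus = 0.01, 0.012        # potentiation/depression amplitudes (assumed)
        tau_plus, tau_minus = 20.0, 20.0     # time constants in ms (assumed)

        def stdp_dw(delta_t):
            """Weight change for delta_t = t_post - t_pre (ms)."""
            if delta_t > 0:                  # pre before post -> potentiation
                return A_plus * np.exp(-delta_t / tau_plus)
            return -A_minus * np.exp(delta_t / tau_minus)  # otherwise depression

        for dt in (-40, -10, 10, 40):
            print(dt, round(stdp_dw(dt), 5))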

  13. Neural Computations for Biosonar Imaging in the Big Brown Bat

    Science.gov (United States)

    Saillant, Prestor Augusto

    1995-11-01

    The study of the intimate relationship between space and time has taken many forms, ranging from the Theory of Relativity down to the problem of avoiding traffic jams. However, nowhere has this relationship been more fully developed and exploited than in dolphins and bats, which have the ability to utilize biosonar. This thesis describes research on the behavioral and computational basis of echolocation carried out in order to explore the neural mechanisms which may account for the space-time constructs which are of psychological importance to the big brown bat. The SCAT (Spectrogram Correlation and Transformation) computational model was developed to provide a framework for understanding the computational requirements of FM echolocation as determined from psychophysical experiments (i.e., high resolution imaging) and neurobiological constraints (Saillant et al., 1993). The second part of the thesis consisted of developing a new behavioral paradigm for simultaneously studying the acoustic behavior and flight behavior of big brown bats in pursuit of stationary or moving targets. In the third part of the thesis a complete acoustic "artificial bat" was constructed, making use of the SCAT process. The development of the artificial bat allowed us to begin experimentation with real-world echoes from various targets, in order to gain a better appreciation for the additional complexities and sources of information encountered by bats in flight. Finally, the continued development of the SCAT model has allowed a deeper understanding of the phenomenon of "time expansion" and of the phenomenon of phase sensitivity in the ultrasonic range. Time expansion, first predicted through the use of the SCAT model, and later found in auditory local evoked potential recordings, opens up a new realm of information processing and representation in the brain which has not yet been considered. It seems possible, from the work in the auditory system, that time expansion may provide a novel

  14. Teaching and Learning Methodologies Supported by ICT Applied in Computer Science

    Science.gov (United States)

    Capacho, Jose

    2016-01-01

    The main objective of this paper is to show a set of new methodologies applied in the teaching of Computer Science using ICT. The methodologies are framed in the conceptual basis of the following sciences: Psychology, Education and Computer Science. The theoretical framework of the research is supported by Behavioral Theory, Gestalt Theory.…

  15. A Neural Information Field Approach to Computational Cognition

    Science.gov (United States)

    2016-11-18

    of irrelevant information) during the task. The spiking neural model accounts for the probability of first recall, recency effects, primacy effects ...neuron models, allowing the simulated testing of drug effects on cognitive performance; demonstrated a scalable neural model of motor planning... effects of distraction in working memory; shown a hippocampal model able to perform context-sensitive sequence encoding and retrieval; proposed what is

  16. Fractality and a wavelet-chaos-neural network methodology for EEG-based diagnosis of autistic spectrum disorder.

    Science.gov (United States)

    Ahmadlou, Mehran; Adeli, Hojjat; Adeli, Amir

    2010-10-01

    A method is presented for investigating the EEG of children with autistic spectrum disorder using complexity and chaos theory, with the goal of discovering a nonlinear feature space. Fractal dimension is proposed for investigating complexity and dynamical changes in the autistic brain. Two methods are investigated for computing the fractal dimension: Higuchi's fractal dimension and Katz's fractal dimension. A wavelet-chaos-neural network methodology is presented for automated EEG-based diagnosis of autistic spectrum disorder. The model is tested on a database of eyes-closed EEG data obtained from two groups: nine children with autistic spectrum disorder, 6 to 13 years old, and eight children without, 7 to 13 years old. Using a radial basis function classifier, an accuracy of 90% was achieved based on the most significant features discovered via an analysis of variance (ANOVA) statistical test: three Katz fractal dimensions in the delta (loci Fp2 and C3) and gamma (locus T6) EEG sub-bands, with P < 0.001.
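
    The Katz fractal dimension used as a feature above has a simple closed form. A minimal Python sketch follows (the wavelet sub-band decomposition and the radial basis function classifier are omitted); the formula is the standard Katz estimator, FD = log10(n) / (log10(n) + log10(d/L)).

        import numpy as np

        def katz_fd(x):
            """Katz's fractal dimension of a 1-D signal, e.g. one EEG sub-band."""
            x = np.asarray(x, dtype=float)
            L = np.sqrt(1.0 + np.diff(x) ** 2).sum()     # curve length, unit sampling
            d = np.sqrt(np.arange(1, x.size) ** 2 +
                        (x[1:] - x[0]) ** 2).max()       # max distance from first point
            n = x.size - 1                               # number of steps
            return np.log10(n) / (np.log10(n) + np.log10(d / L))

        # toy check: a noisier signal should yield a higher dimension
        rng = np.random.default_rng(1)
        smooth = np.sin(np.linspace(0, 4 * np.pi, 512))
        print(katz_fd(smooth), katz_fd(smooth + 0.5 * rng.standard_normal(512)))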

  17. Neural network computation for the evaluation of process rendering: application to thermally sprayed coatings

    Directory of Open Access Journals (Sweden)

    Guessasma Sofiane

    2017-01-01

    In this work, neural network computation is applied to relate alumina and titania phase changes in a coating microstructure to the energetic parameters of the atmospheric plasma spraying (APS) process. Experimental results were analysed using standard fitting routines and neural computation to quantify the effect of arc current, hydrogen ratio and total plasma flow rate. Over a large parameter domain, phase changes were 10% for alumina and 8% for titania, with significant control of the titania phase.

  18. A Methodology for Teaching Computer Programming: first year students’ perspective

    OpenAIRE

    Bassey Isong

    2014-01-01

    The teaching of computer programming is one of the greatest challenges that have remained for years in Computer Science education. A particular case is the computer programming course for beginners. While traditional objectivist lecture-based approaches do not actively engage students to achieve their learning outcomes, we believe that integrating cutting-edge processes and practices such as agile methods into the teaching approach will provide leverage. Agile software development has gained...

  19. Biological modelling of a computational spiking neural network with neuronal avalanches

    Science.gov (United States)

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-05-01

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.
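
    The critical state discussed above is commonly quantified by the branching parameter: the expected number of events in the next time bin per event in the current bin, with values near 1 indicating criticality. The Python sketch below estimates it from binned activity; the surrogate branching process is an illustrative assumption, not the paper's network model.

        import numpy as np

        def branching_ratio(activity):
            """Average events in bin t+1 per event in bin t (sigma ~ 1: critical,
            < 1: subcritical, > 1: supercritical)."""
            a = np.asarray(activity, dtype=float)
            prev, nxt = a[:-1], a[1:]
            mask = prev > 0
            return (nxt[mask] / prev[mask]).mean()

        # toy surrogate: a driven branching process tuned close to criticality
        rng = np.random.default_rng(0)
        a = [10]
        for _ in range(5000):
            a.append(rng.poisson(0.98 * a[-1]) + rng.poisson(0.2))
        print(branching_ratio(a))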

  20. Experiences with Efficient Methodologies for Teaching Computer Programming to Geoscientists

    Science.gov (United States)

    Jacobs, Christian T.; Gorman, Gerard J.; Rees, Huw E.; Craig, Lorraine E.

    2016-01-01

    Computer programming was once thought of as a skill required only by professional software developers. But today, given the ubiquitous nature of computation and data science it is quickly becoming necessary for all scientists and engineers to have at least a basic knowledge of how to program. Teaching how to program, particularly to those students…

  1. A methodological review of computer science education research

    National Research Council Canada - National Science Library

    Randolph, Justus; Sutinen, Erkki; Julnes, George; Lehman, Steve

    2008-01-01

    ..., Guzdial, & Petre, 2005). (In this methodological review, we use the term behavioral research as a synonym for what Guzdial, in Almstrum et al. (2005, p. 192), calls "education, cognitive science, and learning sciences research.") Addressing this lack of connection with behavioral research, Guzdial, in Almstrum and colleagues (2005) wrote, The real challeng...

  2. Artificial Neural Network and Response Surface Methodology Modeling in Ionic Conductivity Predictions of Phthaloylchitosan-Based Gel Polymer Electrolyte

    Directory of Open Access Journals (Sweden)

    Ahmad Danial Azzahari

    2016-01-01

    A gel polymer electrolyte system based on phthaloylchitosan was prepared. The effects of process variables, such as lithium iodide, caesium iodide, and 1-butyl-3-methylimidazolium iodide, were investigated using a distance-based ternary mixture experimental design. A comparative approach was made between response surface methodology (RSM) and an artificial neural network (ANN) to predict the ionic conductivity. The predictive capabilities of the two methodologies were compared in terms of the coefficient of determination R2 based on the validation data set. It was shown that the developed ANN model had a better predictive outcome than the RSM model.
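
    As a sketch of the RSM-versus-ANN comparison workflow described above, the Python code below fits a quadratic response surface and a small multilayer perceptron to surrogate ternary-mixture data and compares validation R2. The data-generating function, sample size, and network size are assumptions for illustration, not the study's design.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures

        # surrogate stand-in for the ternary salt mixture design vs. conductivity
        rng = np.random.default_rng(0)
        X = rng.dirichlet(np.ones(3), size=120)          # ternary mixture points
        y = (-3 + 2 * X[:, 0] + 1.5 * X[:, 2] - 4 * (X[:, 1] - 0.3) ** 2
             + 0.05 * rng.standard_normal(120))
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

        rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
        ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        for name, model in (("RSM", rsm), ("ANN", ann)):
            model.fit(X_tr, y_tr)
            print(name, "validation R2:", r2_score(y_va, model.predict(X_va)))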

  3. Railroad classification yard technology : computer system methodology : case study : Potomac Yard

    Science.gov (United States)

    1981-08-01

    This report documents the application of the railroad classification yard computer system methodology to Potomac Yard of the Richmond, Fredericksburg, and Potomac Railroad Company (RF&P). This case study entailed evaluation of the yard traffic capaci...

  4. Toward a computer-aided methodology for discourse analysis

    African Journals Online (AJOL)

    Abstract. This paper describes and outlines a new project entitled “Applying computer-aided methods to discourse analysis”. This project aims to develop an e-learning environment dedicated to documenting, evaluating and teaching the use of corpus linguistic tools suitable for interpretative text analysis. Even though its ...

  5. A New Computational Methodology for Structural Dynamics Problems

    Science.gov (United States)

    2008-04-01

    with p- or h-refinements, as should be expected. Actually, an exact study on the asymptotic behaviour of mixed finite elements based on least-squares...1986. [13] Duan, H.-Y. and Liang, G.-P., Mixed and nonconforming finite element approximations of Reissner-Mindlin plates, Computer Methods in Applied

  6. Validation of artificial neural networks as a methodology for donor-recipient matching for liver transplantation.

    Science.gov (United States)

    Ayllón, María Dolores; Ciria, Rubén; Cruz-Ramírez, Manuel; Pérez-Ortiz, María; Valente, Roberto; O'Grady, John; de la Mata, Manuel; Hervás-Martínez, César; Heaton, Nigel D; Briceño, Javier

    2017-09-16

    In 2014, we reported a model for donor-recipient (D-R) matching in liver transplantation (LT) based on artificial neural networks (ANN) from a Spanish multicentre study (MADR-E: Model for Allocation of Donor and Recipient in España). The aim is to test the ANN-based methodology in a different European healthcare system in order to validate it. An ANN model was designed using a cohort of patients from King's College Hospital (KCH) (N=822). The ANN was trained and tested using KCH pairs for both 3- and 12-month survival models. Endpoints were the probability of graft survival (CCR) and non-survival (MS). The final model is a rule-based system for facilitating the decision about the most appropriate D-R matching. Models designed for KCH had excellent prediction capabilities for both 3 months (CCR-AUC=0.94; MS-AUC=0.94) and 12 months (CCR-AUC=0.78; MS-AUC=0.82), almost 15% higher than the best obtained by other known scores such as MELD and BAR. Moreover, these results improve the previously reported ones in the multicentric MADR-E database. The use of ANN for D-R matching in LT in other healthcare systems achieved excellent prediction capabilities, supporting the validation of these tools. It should be considered the most advanced, objective and useful tool to date for the management of waiting lists.

  7. Modeling and optimization of ethanol fermentation using Saccharomyces cerevisiae: Response surface methodology and artificial neural network

    Directory of Open Access Journals (Sweden)

    Esfahanian Mehri

    2013-01-01

    In this study, the capabilities of response surface methodology (RSM) and artificial neural networks (ANN) for modeling and optimization of ethanol production from glucose using Saccharomyces cerevisiae in a batch fermentation process were investigated. The effect of three independent variables in a defined range of pH (4.2-5.8), temperature (20-40 °C) and glucose concentration (20-60 g/l) on cell growth and ethanol production was evaluated. Results showed that the prediction accuracy of ANN was similar to that of RSM. At the optimum condition of temperature (32 °C), pH (5.2) and glucose concentration (50 g/l) suggested by the statistical methods, the maximum cell dry weight and ethanol concentration obtained from RSM were 12.06 and 16.2 g/l, whereas the experimental values were 12.09 and 16.53 g/l, respectively. Using ANN as the fitness function, the maximum cell dry weight and ethanol concentration were 12.05 and 16.16 g/l, respectively. Also, the coefficients of determination for biomass and ethanol concentration obtained from RSM were 0.9965 and 0.9853 and from ANN were 0.9975 and 0.9936, respectively. The process parameter optimization was successfully conducted using RSM and ANN; however, prediction by ANN was slightly more precise than by RSM. Based on experimental data, a maximum ethanol yield of 0.5 g ethanol/g substrate (97% of theoretical yield) was obtained.
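
    The optimization step reported above amounts to maximizing a fitted quadratic response surface within the experimental ranges. A hedged Python sketch follows; the coefficients are invented, chosen only so that the optimum lands near the reported condition (about pH 5.2, 32 °C, 50 g/l), and the response is in arbitrary units rather than the paper's regression output.

        import numpy as np
        from scipy.optimize import minimize

        def response(x):
            """Hypothetical quadratic response surface in (pH, temperature, glucose)."""
            pH, T, S = x
            return (-120 + 29.1 * pH - 2.8 * pH ** 2 + 4.5 * T - 0.07 * T ** 2
                    + 0.9 * S - 0.009 * S ** 2)

        # maximize the fitted surface inside the experimental ranges
        res = minimize(lambda x: -response(x), x0=[5.0, 30.0, 40.0],
                       bounds=[(4.2, 5.8), (20.0, 40.0), (20.0, 60.0)])
        print("optimum (pH, T, glucose):", res.x.round(2))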

  8. A Methodology for the Analysis of Patterns of Participation within Computer Mediated Communication Courses.

    Science.gov (United States)

    Howell-Richardson, Christina; Mellar, Harvey

    1996-01-01

    Proposes a methodology for the analysis of text-based interchanges on computer-mediated conferences used in distance education courses which is based on speech act theory and which takes the illocutionary act as its unit of analysis. This methodology is used to compare messages from two conferences and to show differing patterns of interaction.…

  9. Computationally efficient locally-recurrent neural networks for online signal processing

    CERN Document Server

    Hussain, A; Shim, I

    1999-01-01

    A general class of computationally efficient locally recurrent networks (CERN) is described for real-time adaptive signal processing. The structure of the CERN is based on linear-in-the-parameters single-hidden-layer feedforward neural networks such as the radial basis function (RBF) network, the Volterra neural network (VNN) and the functionally expanded neural network (FENN), adapted to employ local output feedback. The corresponding learning algorithms are derived and key structural and computational complexity comparisons are made between the CERN and conventional recurrent neural networks. Two case studies are performed involving the real-time adaptive nonlinear prediction of real-world chaotic, highly non-stationary laser time series and an actual speech signal, which show that a recurrent FENN based adaptive CERN predictor can significantly outperform the corresponding feedforward FENN and conventionally employed linear adaptive filtering models. (13 refs).

  10. Quantum perceptron over a field and neural network architecture selection in a quantum computer.

    Science.gov (United States)

    da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa

    2016-04-01

    In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator.

  11. Space-time system architecture for the neural optical computing

    Science.gov (United States)

    Lo, Yee-Man V.

    1991-02-01

    The brain can perform the tasks of associative recall, detection, recognition and optimization. In this paper, space-time system field models of the brain are introduced. They are called the space-time maximum likelihood associative memory system (ST-ML-AMS) and the space-time adaptive learning system (ST-ALS). Performance of the system is analyzed using the probability of error in memory recall (PEMR) and the space-time neural capacity (ST-NC).

  12. Cellular computational platform and neurally inspired elements thereof

    Energy Technology Data Exchange (ETDEWEB)

    Okandan, Murat

    2016-11-22

    A cellular computational platform is disclosed that includes a multiplicity of functionally identical, repeating computational hardware units that are interconnected electrically and optically. Each computational hardware unit includes a reprogrammable local memory and has interconnections to other such units that have reconfigurable weights. Each computational hardware unit is configured to transmit signals into the network for broadcast in a protocol-less manner to other such units in the network, and to respond to protocol-less broadcast messages that it receives from the network. Each computational hardware unit is further configured to reprogram the local memory in response to incoming electrical and/or optical signals.

  13. Analysis of Introducing Active Learning Methodologies in a Basic Computer Architecture Course

    Science.gov (United States)

    Arbelaitz, Olatz; José I. Martín; Muguerza, Javier

    2015-01-01

    This paper presents an analysis of introducing active methodologies in the Computer Architecture course taught in the second year of the Computer Engineering Bachelor's degree program at the University of the Basque Country (UPV/EHU), Spain. The paper reports the experience from three academic years, 2011-2012, 2012-2013, and 2013-2014, in which…

  14. Discrete Differential Forms: A Novel Methodology for Robust Computational Electromagnetics

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, P; Koning, J; Rieben, R; Stowell, M; White, D A

    2003-01-17

    This is the final report for the LLNL LDRD 01-LW-068. The Principal Investigator was Daniel White of the Center for Applied Scientific Computing (CASC). Collaborators included Paul Castillo and Mark Stowell of CASC, and Ph.D students Joe Koning and Rob Rieben of UC Davis. Some of the simulation results in this report were partially funded by a Defense Advanced Research Projects Agency research grant, and the two Ph.D. students were supported by the LLNL Student-Employee Graduate Research Fellow program. We begin with a short Administrative Overview which describes the motivation, scope, and deliverables of this research effort. Then follows the Technical section, which introduces the theory behind our Discrete Differential Forms approach, provides an overview of our FEMSTER C++ class library, and concludes with example simulations.

  15. The super-Turing computational power of plastic recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

    We study the computational capabilities of a biologically inspired neural model in which the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is left unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  16. Localizing Protein in 3D Neural Stem Cell Culture: a Hybrid Visualization Methodology

    Science.gov (United States)

    Fai, Stephen; Bennett, Steffany A.L.

    2010-01-01

    The importance of 3-dimensional (3D) topography in influencing neural stem and progenitor cell (NPC) phenotype is widely acknowledged yet challenging to study. When dissociated from embryonic or post-natal brain, single NPCs will proliferate in suspension to form neurospheres. Daughter cells within these cultures spontaneously adopt distinct developmental lineages (neurons, oligodendrocytes, and astrocytes) over the course of expansion despite being exposed to the same extracellular milieu. This progression recapitulates many of the stages observed over the course of neurogenesis and gliogenesis in post-natal brain and is often used to study basic NPC biology within a controlled environment. Assessing the full impact of 3D topography and cellular positioning within these cultures on NPC fate is, however, difficult. To localize target proteins and identify NPC lineages by immunocytochemistry, free-floating neurospheres must be plated on a substrate or serially sectioned. This processing is required to ensure equivalent cell permeabilization and antibody access throughout the sphere. As a result, 2D epifluorescent images of cryosections or confocal reconstructions of 3D Z-stacks can only provide spatial information about cell position within discrete physical or digital 3D slices and do not visualize cellular position in the intact sphere. Here, to reiterate the topography of the neurosphere culture and permit spatial analysis of protein expression throughout the entire culture, we present a protocol for isolation, expansion, and serial sectioning of post-natal hippocampal neurospheres suitable for epifluorescent or confocal immunodetection of target proteins. Connexin29 (Cx29) is analyzed as an example. Next, using a hybrid of graphic-editing and 3D-modelling software rigorously applied to maintain biological detail, we describe how to re-assemble the 3D structural positioning of these images and digitally map labelled cells within the complete neurosphere. This

  17. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    Energy Technology Data Exchange (ETDEWEB)

    Vineyard, Craig Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    As high performance computing architectures pursue more computational power, there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an open challenge, and in this research we sought to investigate whether neurally inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis-inspired resource allocation, and were able to show that a neurally inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  18. Modelling and simulation of information systems on computer: methodological advantages.

    Science.gov (United States)

    Huet, B; Martin, J

    1980-01-01

    Modelling and simulation of information systems by means of miniatures on computer serve two general objectives: (a) aiding the design and realization of information systems; and (b) providing a tool to improve the dialogue between the designer and the users. An operational information system has two components bound by a dynamic relationship: an information system and a behavioural system. Thanks to the behavioural system, modelling and simulation allow the designer to integrate a large proportion of the system's implicit specification into the project. The advantages of modelling for the information system relate to: (a) The conceptual phase: initial objectives are compared with the results of simulation and sometimes modified. (b) The external specifications: simulation is particularly useful for personalising man-machine relationships in each application. (c) The internal specifications: if the miniatures are built on the concept of process, the global design and the software are tested; the simulation also refines the configuration and directs the choice of hardware. (d) The implementation: simulation reduces costs and time and allows testing. Progress in modelling techniques will undoubtedly lead to better information systems.

  19. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    Directory of Open Access Journals (Sweden)

    Daniel Durstewitz

    2017-06-01

    The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover

  20. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    Science.gov (United States)

    Durstewitz, Daniel

    2017-06-01

    The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover relevant aspects
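
    A minimal Python sketch of the PLRNN generative model just described (forward simulation only; the EM/Laplace inference scheme that is the paper's contribution is omitted): latent states evolve as z_t = A z_{t-1} + W max(0, z_{t-1}) + h + noise and are read out linearly. All dimensions and parameter scales are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        dz, dx, T = 3, 10, 200
        A = np.diag(rng.uniform(0.6, 0.9, dz))      # diagonal linear dynamics
        W = 0.3 * rng.standard_normal((dz, dz))     # piecewise-linear coupling
        np.fill_diagonal(W, 0.0)                    # off-diagonal, as in the PLRNN
        h = 0.1 * rng.standard_normal(dz)           # constant input term
        B = rng.standard_normal((dx, dz))           # observation weights

        z, x = np.zeros((T, dz)), np.zeros((T, dx))
        for t in range(1, T):
            z[t] = (A @ z[t - 1] + W @ np.maximum(z[t - 1], 0.0) + h
                    + 0.05 * rng.standard_normal(dz))    # latent process noise
            x[t] = B @ z[t] + 0.1 * rng.standard_normal(dx)  # noisy observations
        # the estimation scheme would now infer z and (A, W, h, B) from x alone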

  1. Computing with Biologically Inspired Neural Oscillators: Application to Colour Image Segmentation

    Directory of Open Access Journals (Sweden)

    Ammar Belatreche

    2010-01-01

    This paper investigates the computing capabilities and potential applications of neural oscillators, a biologically inspired neural model, to grey scale and colour image segmentation, an important task in image understanding and object recognition. A proposed neural system that exploits the synergy between neural oscillators and Kohonen self-organising maps (SOMs) is presented. It consists of a two-dimensional grid of neural oscillators which are locally connected through excitatory connections and globally connected to a common inhibitor. Each neuron is mapped to a pixel of the input image and existing objects, represented by homogenous areas, are temporally segmented through synchronisation of the activity of neural oscillators that are mapped to pixels of the same object. Self-organising maps form the basis of a colour reduction system whose output is fed to a 2D grid of neural oscillators for temporal correlation-based object segmentation. Both chromatic and local spatial features are used. The system is simulated in Matlab and its demonstration on real world colour images shows promising results and the emergence of a new bioinspired approach for colour image segmentation. The paper concludes with a discussion of the performance of the proposed system and its comparison with traditional image segmentation approaches.
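
    A much-reduced Python sketch of the oscillatory-correlation idea above: phase oscillators assigned to pixels, coupled only between neighbours of similar intensity, synchronise within homogeneous regions and drift apart across region boundaries. The global inhibitor and the SOM colour-reduction stage of the full system are omitted, and all constants are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        img = np.array([0.1] * 10 + [0.9] * 10)    # toy 1-D "image", two regions
        n = img.size
        phase = rng.uniform(0.0, 2.0 * np.pi, n)
        K, steps, dt = 2.0, 400, 0.1
        for _ in range(steps):
            dphi = np.zeros(n)
            for i in range(n):
                for j in (i - 1, i + 1):           # local excitatory coupling
                    if 0 <= j < n and abs(img[i] - img[j]) < 0.2:  # feature gate
                        dphi[i] += K * np.sin(phase[j] - phase[i])
            phase += dt * dphi + 0.01 * rng.standard_normal(n)
        # phases cluster into one group per homogeneous region
        print(np.mod(phase, 2.0 * np.pi).round(2))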

  2. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    Science.gov (United States)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer-assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in the geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.

  3. Optimization of extraction of linarin from Flos chrysanthemi indici by response surface methodology and artificial neural network.

    Science.gov (United States)

    Pan, Hongye; Zhang, Qing; Cui, Keke; Chen, Guoquan; Liu, Xuesong; Wang, Longhu

    2017-05-01

    The extraction of linarin from Flos chrysanthemi indici by ethanol was investigated. Two modeling techniques, response surface methodology and artificial neural network, were adopted to optimize the process parameters: ethanol concentration, extraction period, extraction frequency, and solvent-to-material ratio. We showed that both methods provided good predictions, but the artificial neural network provided a better and more accurate result. The optimum process parameters were an ethanol concentration of 74%, an extraction period of 2 h, three extractions, and a solvent-to-material ratio of 12 mL/g. The experimental yield of linarin was 90.5%, deviating less than 1.6% from the predicted result.

  4. Hybrid response surface methodology-artificial neural network optimization of drying process of banana slices in a forced convective dryer.

    Science.gov (United States)

    Taheri-Garavand, Amin; Karimi, Fatemeh; Karimi, Mahmoud; Lotfi, Valiullah; Khoobbakht, Golmohammad

    2017-01-01

    The aim of this study was to fit predictive models using response surface methodology and an artificial neural network, and to optimize the hot-air drying of banana slices for maximum acceptability using the desirability function methodology. Drying air temperature, air velocity, and drying time were chosen as independent factors, and moisture content, drying rate, energy efficiency, and exergy efficiency were the dependent variables (responses) of the drying process. A rotatable central composite design was used to develop models for the responses in the response surface methodology. Moreover, iso-response contour plots were useful for predicting results while performing only a limited set of experiments. The optimum operating conditions obtained from the artificial neural network models were moisture content 0.14 g/g, drying rate 1.03 g water/g h, energy efficiency 0.61, and exergy efficiency 0.91, when the air temperature, air velocity, and drying time were equal to -0.42 (74.2 °C), 1.00 (1.50 m/s), and -0.17 (2.50 h) in coded units, respectively.
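
    The "maximum acceptability" criterion above is typically implemented with Derringer-type desirability functions combined by a geometric mean. In the hedged Python sketch below, the response bounds are invented for illustration; only the response values echo those reported above.

        import numpy as np

        def d_max(y, lo, hi):
            """Larger-is-better desirability: 0 at lo, rising linearly to 1 at hi."""
            return (np.clip(y, lo, hi) - lo) / (hi - lo)

        def d_min(y, lo, hi):
            """Smaller-is-better desirability: 1 at lo, falling linearly to 0 at hi."""
            return (hi - np.clip(y, lo, hi)) / (hi - lo)

        # predicted responses at one candidate drying condition
        moisture, rate, energy_eff, exergy_eff = 0.14, 1.03, 0.61, 0.91
        D = (d_min(moisture, 0.10, 0.40) * d_max(rate, 0.5, 1.2) *
             d_max(energy_eff, 0.2, 0.7) * d_max(exergy_eff, 0.5, 1.0)) ** 0.25
        print("overall desirability:", round(float(D), 3))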

  5. Social influence modulates the neural computation of value.

    Science.gov (United States)

    Zaki, Jamil; Schirmer, Jessica; Mitchell, Jason P

    2011-07-01

    Social influence--individuals' tendency to conform to the beliefs and attitudes of others--has interested psychologists for decades. However, it has traditionally been difficult to distinguish true modification of attitudes from mere public compliance with social norms; this study addressed this challenge using functional neuroimaging. Participants rated the attractiveness of faces and subsequently learned how their peers ostensibly rated each face. Participants were then scanned using functional MRI while they rated each face a second time. The second ratings were influenced by social norms: Participants changed their ratings to conform to those of their peers. This social influence was accompanied by modulated engagement of two brain regions associated with coding subjective value--the nucleus accumbens and orbitofrontal cortex--a finding suggesting that exposure to social norms affected participants' neural representations of value assigned to stimuli. These findings document the utility of neuroimaging to demonstrate the private acceptance of social norms.

  6. Human Inspired Self-developmental Model of Neural Network (HIM): Introducing Content/Form Computing

    Science.gov (United States)

    Krajíček, Jiří

    This paper presents cross-disciplinary research between medical/psychological evidence on human abilities and informatics needs to update current models in computer science to support alternative methods for computation and communication. In [10] we have already proposed a hypothesis introducing the concept of a human information model (HIM) as a cooperative system. Here we continue the HIM design in detail. In our design, we first introduce the Content/Form computing system, which is a new principle extending present methods in evolutionary computing (genetic algorithms, genetic programming). We then apply this system to the HIM (a type of artificial neural network) model as its basic network self-developmental paradigm. The main inspiration for our natural/human design comes from the well-known concept of artificial neural networks, medical/psychological evidence, and Sheldrake's theory of "Nature as Alive" [22].

  7. Object-oriented analysis and design: a methodology for modeling the computer-based patient record.

    Science.gov (United States)

    Egyhazy, C J; Eyestone, S M; Martino, J; Hodgson, C L

    1998-08-01

    The article highlights the importance of an object-oriented analysis and design (OOAD) methodology for the computer-based patient record (CPR) in the military environment. Many OOAD methodologies do not adequately scale up, allow for efficient reuse of their products, or accommodate legacy systems. A methodology that addresses these issues is formulated and used to demonstrate its applicability in a large-scale health care service system. During a period of 6 months, a team of object modelers and domain experts formulated an OOAD methodology tailored to the Department of Defense Military Health System and used it to produce components of an object model for simple order processing. This methodology and the lessons learned during its implementation are described. This approach is necessary to achieve broad interoperability among heterogeneous automated information systems.

  8. Minimalist social-affective value for use in joint action: A neural-computational hypothesis

    DEFF Research Database (Denmark)

    Lowe, Robert; Almér, Alexander; Lindblad, Gustaf

    2016-01-01

    Joint Action is typically described as social interaction that requires coordination among two or more co-actors in order to achieve a common goal. In this article, we put forward a hypothesis for the existence of a neural-computational mechanism of affective valuation that may be critically expl...

  9. Artificial neural networks and support vector machine in banking computer systems

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2013-12-01

    In this paper, some artificial neural networks as well as support vector machines are studied in the context of bank computer system development. Combined with contactless microprocessor technologies, these approaches can increase a bank's competitiveness by adding new functionalities. Moreover, some effects of financial crises can be reduced.

  10. Distributed dynamical computation in neural circuits with propagating coherent activity patterns.

    Directory of Open Access Journals (Sweden)

    Pulin Gong

    2009-12-01

    Activity in neural circuits is spatiotemporally organized. Its spatial organization consists of multiple, localized coherent patterns, or patchy clusters. These patterns propagate across the circuits over time. This type of collective behavior has ubiquitously been observed, both in spontaneous activity and evoked responses; its function, however, has remained unclear. We construct a spatially extended, spiking neural circuit that generates emergent spatiotemporal activity patterns, thereby capturing some of the complexities of the patterns observed empirically. We elucidate what kind of fundamental function these patterns can serve by showing how they process information. As self-sustained objects, localized coherent patterns can signal information by propagating across the neural circuit. Computational operations occur when these emergent patterns interact, or collide with each other. The ongoing behaviors of these patterns naturally embody both distributed, parallel computation and cascaded logical operations. Such distributed computations enable the system to work in an inherently flexible and efficient way. Our work leads us to propose that propagating coherent activity patterns are the underlying primitives with which neural circuits carry out distributed dynamical computation.

  11. Computer Aided Methodology for Simultaneous Synthesis, Design & Analysis of Chemical Products-Processes

    DEFF Research Database (Denmark)

    d'Anterroches, Loïc; Gani, Rafiqul

    2006-01-01

    A new combined methodology for computer aided molecular design and process flowsheet design is presented. The methodology is based on the group contribution approach for prediction of molecular properties and design of molecules. Using the same principles, process groups have been developed together with their corresponding flowsheet property models. To represent the process flowsheets in the same way as molecules, a unique but simple notation system has been developed. The methodology has been converted into prototype software, which has been tested with several case studies covering...

  12. Neural computation of visual imaging based on Kronecker product in the primary visual cortex

    Directory of Open Access Journals (Sweden)

    Guozheng Yao

    2010-03-01

    Background: What kind of neural computation is actually performed by the primary visual cortex, and how is this represented mathematically at the system level? This is an important problem in visual information processing that has not been well answered. In this paper, according to our understanding of retinal organization and the parallel multi-channel topographical mapping between the retina and primary visual cortex V1, we divide an image into an orthogonal and orderly array of image primitives (or patches), in which each patch will evoke activities of simple cells in V1. From the viewpoint of information processing, this activation process essentially involves optimal detection and optimal matching of the receptive fields of simple cells with features contained in image patches. For the reconstruction of the visual image in visual cortex V1 based on the principle of minimum mean-square error, it is natural to use the inner-product expression in neural computation, which is then transformed into matrix form. Results: The inner product is carried out using the Kronecker product between patches and the functional architecture (or functional column) in localized and oriented neural computing. Compared with the Fourier transform, the mathematical description of the Kronecker product is simple and intuitive, so the algorithm is more suitable for neural computation in visual cortex V1. Results of computer simulation based on two-dimensional Gabor pyramid wavelets show that the theoretical analysis and the proposed model are reasonable. Conclusions: Our results are: 1. The neural computation of the retinal image in cortex V1 can be expressed as a Kronecker product operation and its matrix form; this algorithm is implemented by the inner-product operation between retinal image primitives and the primary visual cortex's columns. It has simple, efficient and robust features and is, therefore, a neural algorithm that could be completed by biological vision. 2. It is more suitable
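
    To make the patch/column inner-product stage concrete, the Python sketch below computes Gabor-filter responses to image patches and shows how a single (patch, filter) response can be picked out with a Kronecker product of indicator vectors. The filter parameters, patch size, and random image are illustrative assumptions.

        import numpy as np

        def gabor(theta, size=8, lam=4.0, sigma=2.0):
            """A unit-norm Gabor patch, standing in for a simple-cell receptive field."""
            half = size // 2
            y, x = np.mgrid[-half:half, -half:half] + 0.5
            xr = x * np.cos(theta) + y * np.sin(theta)   # oriented carrier axis
            g = (np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
                 * np.cos(2 * np.pi * xr / lam))
            return g / np.linalg.norm(g)

        # an "orientation column": four preferred orientations
        filters = np.stack([gabor(t).ravel()
                            for t in np.linspace(0, np.pi, 4, endpoint=False)])
        img = np.random.default_rng(0).random((32, 32))
        patches = (img.reshape(4, 8, 4, 8).transpose(0, 2, 1, 3)
                      .reshape(16, 64))         # 4 x 4 grid of 8 x 8 patches
        responses = patches @ filters.T         # inner products: (16 patches, 4 filters)
        # Kronecker form: indicator vectors select one (patch, filter) response
        p_sel, f_sel = np.eye(16)[3], np.eye(4)[1]
        assert np.isclose(np.kron(p_sel, f_sel) @ responses.ravel(), responses[3, 1])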

  13. Artificial neural network and response surface methodology modeling in mass transfer parameters predictions during osmotic dehydration of Carica papaya L.

    Directory of Open Access Journals (Sweden)

    J. Prakash Maran

    2013-09-01

    In this study, a comparative approach was made between an artificial neural network (ANN) and response surface methodology (RSM) to predict the mass transfer parameters of osmotic dehydration of papaya. The effects of process variables such as temperature, osmotic solution concentration and agitation speed on water loss, weight reduction, and solid gain during osmotic dehydration were investigated using a three-level three-factor Box-Behnken experimental design. The same design was utilized to train a feed-forward multilayered perceptron (MLP) ANN with the back-propagation algorithm. The predictive capabilities of the two methodologies were compared in terms of root mean square error (RMSE), mean absolute error (MAE), standard error of prediction (SEP), model predictive error (MPE), chi-square statistic (χ2), and coefficient of determination (R2) based on the validation data set. The results showed that a properly trained ANN model is more accurate in prediction than the RSM model.
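
    The accuracy statistics listed above have simple closed forms; a Python sketch follows. Exact definitions of SEP, MPE, and χ2 vary somewhat across papers of this kind, so the forms below are common conventions rather than this study's exact prescriptions.

        import numpy as np

        def prediction_metrics(y_obs, y_pred):
            """Validation statistics often used to compare RSM and ANN models."""
            y_obs = np.asarray(y_obs, dtype=float)
            y_pred = np.asarray(y_pred, dtype=float)
            e = y_obs - y_pred
            rmse = np.sqrt(np.mean(e ** 2))
            return {
                "RMSE": rmse,
                "MAE": np.mean(np.abs(e)),
                "SEP%": 100.0 * rmse / y_obs.mean(),         # standard error of prediction
                "MPE%": 100.0 * np.mean(np.abs(e) / y_obs),  # mean predictive error
                "chi2": np.sum(e ** 2 / y_pred),             # chi-square statistic
                "R2": 1.0 - e @ e / np.sum((y_obs - y_obs.mean()) ** 2),
            }

        print(prediction_metrics([10.2, 12.5, 14.9], [10.0, 12.8, 15.1]))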

  14. Application of response surface methodology and artificial neural network methods in modelling and optimization of biosorption process.

    Science.gov (United States)

    Witek-Krowiak, Anna; Chojnacka, Katarzyna; Podstawczyk, Daria; Dawiec, Anna; Pokomeda, Karol

    2014-05-01

    A review of the application of response surface methodology (RSM) and artificial neural networks (ANN) in biosorption modelling and optimization is presented. The theoretical background of the discussed methods is explained, along with the application procedure. The paper describes the most frequently used experimental designs, their limitations and typical applications. The paper also presents ways to determine the accuracy and the significance of model fitting for both methodologies described herein. Furthermore, recent references on biosorption modelling and optimization with the use of RSM and the ANN approach are shown. Special attention is paid to the selection of factors and responses, as well as to statistical analysis of the modelling results.

  15. Biological neural networks as model systems for designing future parallel processing computers

    Science.gov (United States)

    Ross, Muriel D.

    1991-01-01

    One of the more interesting debates of the present day centers on whether human intelligence can be simulated by computer. The author works under the premise that neurons individually are not smart at all. Rather, they are physical units which are impinged upon continuously by other matter that influences the direction of voltage shifts across the units' membranes. It is only from the action of a great many neurons, billions in the case of the human nervous system, that intelligent behavior emerges. What is required to understand even the simplest neural system is painstaking analysis, bit by bit, of the architecture and the physiological functioning of its various parts. The biological neural networks studied, the vestibular utricular and saccular maculas of the inner ear, are among the simplest of the mammalian neural networks to understand and model. While there is still a long way to go to understand even this most simple neural network in sufficient detail for extrapolation to computers and robots, a start was made. Moreover, the insights obtained and the technologies developed help advance the understanding of the more complex neural networks that underlie human intelligence.

  16. Neuromechanic: a computational platform for simulation and analysis of the neural control of movement

    Science.gov (United States)

    Bunderson, Nathan E.; Bingham, Jeffrey T.; Sohn, M. Hongchul; Ting, Lena H.; Burkholder, Thomas J.

    2015-01-01

    Neuromusculoskeletal models solve the basic problem of determining how the body moves under the influence of external and internal forces. Existing biomechanical modeling programs often emphasize dynamics with the goal of finding a feed-forward neural program to replicate experimental data or of estimating force contributions of individual muscles. The computation of rigid-body dynamics, muscle forces, and activation of the muscles is often performed separately. We have developed an intrinsically forward computational platform (Neuromechanic, www.neuromechanic.com) that explicitly represents the interdependencies among rigid body dynamics, frictional contact, muscle mechanics, and neural control modules. This formulation has significant advantages for optimization and forward simulation, particularly with application to neural controllers with feedback or regulatory features. Explicit inclusion of all state dependencies allows calculation of system derivatives with respect to kinematic states as well as muscle and neural control states, thus affording a wealth of analytical tools, including linearization, stability analyses and calculation of initial conditions for forward simulations. In this review, we describe our algorithm for generating state equations and explain how they may be used in integration, linearization and stability analysis tools to provide structural insights into the neural control of movement. PMID:23027632

  17. Association of Computers and Research Methodology: An Inference of Google Forms in Context of Behavioral Finance

    OpenAIRE

    Thakral, Charu; Dosajh, Dr. Babita; Aggarwal, Dr. Vimal

    2014-01-01

    Computers are rapidly becoming an integral part of our lives. Human survival in today's world is hard without the use of technology, which greatly reduces the human effort needed in a particular area. Likewise, in the area of research methodology, researchers are using the latest, regularly updated software, which makes their work easier and faster. Most complex situations require a combination of human and computer control, where humans provide intellig...

  18. Just-in-Time Compilation-Inspired Methodology for Parallelization of Compute Intensive Java Code

    OpenAIRE

    GHULAM MUSTAFA; WAQAR MAHMOOD; MUHAMMAD USMAN GHANI

    2017-01-01

    Compute-intensive programs generally consume a significant fraction of execution time in a small amount of repetitive code. Such repetitive code is commonly known as hotspot code. We observed that compute-intensive hotspots often possess exploitable loop-level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots. Hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots ...

  19. Internal models and neural computation in the vestibular system

    OpenAIRE

    Green, Andrea M.; Dora E. Angelaki

    2010-01-01

    The vestibular system is vital for motor control and spatial self-motion perception. Afferents from the otolith organs and the semicircular canals converge with optokinetic, somatosensory and motor-related signals in the vestibular nuclei, which are reciprocally interconnected with the vestibulocerebellar cortex and deep cerebellar nuclei. Here, we review the properties of the many cell types in the vestibular nuclei, as well as some fundamental computations implemented within this brainstem–...

  20. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Lars Buesing

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.

  1. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
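
    As a baseline illustration of the sampling interpretation above, the Python sketch below runs Gibbs updates of binary "neurons" so that network states sample a Boltzmann distribution, with each neuron's firing probability a sigmoid of its synaptic input. Note that the paper explicitly replaces Gibbs sampling with a non-reversible chain better matched to spiking dynamics, so this sketch only conveys the starting idea the authors build on; all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 8
        W = 0.5 * rng.standard_normal((n, n))
        W = (W + W.T) / 2.0                      # symmetric coupling
        np.fill_diagonal(W, 0.0)
        b = 0.2 * rng.standard_normal(n)         # biases (intrinsic excitabilities)
        s = rng.integers(0, 2, n).astype(float)  # binary network state

        samples = []
        for sweep in range(5000):
            for i in range(n):                   # one Gibbs update per neuron
                u = W[i] @ s + b[i]              # local field = summed synaptic input
                s[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
            samples.append(s.copy())
        # long-run state frequencies approximate p(s) proportional to
        # exp(s.W.s / 2 + b.s)
        print("mean activity per neuron:", np.mean(samples, axis=0).round(2))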

  2. Depth perception in frogs and toads: a study in neural computing

    CERN Document Server

    House, Donald

    1989-01-01

    Depth Perception in Frogs and Toads provides a comprehensive exploration of the phenomenon of depth perception in frogs and toads, as seen from a neuro-computational point of view. Perhaps the most important feature of the book is the development and presentation of two neurally realizable depth perception algorithms that utilize both monocular and binocular depth cues in a cooperative fashion. One of these algorithms is specialized for computation of depth maps for navigation, and the other for the selection and localization of a single prey for prey catching. The book is also unique in that it thoroughly reviews the known neuroanatomical, neurophysiological and behavioral data, and then synthesizes, organizes and interprets that information to explain a complex sensory-motor task. The book will be of special interest to that segment of the neural computing community interested in understanding natural neurocomputational structures, particularly to those working in perception and sensory-motor coordination. ...

  3. Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Ruchi D. Chande

    2017-01-01

    Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.

  4. Behavioral, Neural, and Computational Principles of Bodily Self-Consciousness.

    Science.gov (United States)

    Blanke, Olaf; Slater, Mel; Serino, Andrea

    2015-10-07

    Recent work in human cognitive neuroscience has linked self-consciousness to the processing of multisensory bodily signals (bodily self-consciousness [BSC]) in fronto-parietal cortex and more posterior temporo-parietal regions. We highlight the behavioral, neurophysiological, neuroimaging, and computational laws that subtend BSC in humans and non-human primates. We propose that BSC includes body-centered perception (hand, face, and trunk), based on the integration of proprioceptive, vestibular, and visual bodily inputs, and involves spatio-temporal mechanisms integrating multisensory bodily stimuli within peripersonal space (PPS). We develop four major constraints of BSC (proprioception, body-related visual information, PPS, and embodiment) and argue that the fronto-parietal and temporo-parietal processing of trunk-centered multisensory signals in PPS is of particular relevance for theoretical models and simulations of BSC and eventually of self-consciousness.

  5. Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task

    Science.gov (United States)

    Revechkis, Boris; Aflalo, Tyson NS; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A.

    2014-12-01

    Objective. To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. Approach. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like ‘Face in a Crowd’ task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the ‘Crowd’) using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a ‘Crowd Off’ condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Main results. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Significance. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.

  6. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    Directory of Open Access Journals (Sweden)

    Lukas Falat

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help reduce the risk of making bad decisions in the decision-making process.

  7. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    Science.gov (United States)

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help reduce the risk of making bad decisions in the decision-making process.
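
    A hedged sketch of the hybrid idea described above: an RBF regression over lagged values produces the base one-day-ahead forecast, and a moving average of recent residuals is added as the error-correction term. The synthetic series, the number of centers, and the residual window are illustrative assumptions, and the GA-based parameter adaptation is omitted:

```python
# Hedged sketch of the described hybrid: RBF regression on lagged values, with
# a moving average of recent residuals added as an error-correction term.
import numpy as np

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(0, 0.01, 500)) + 1.3   # stand-in for USD/CAD

lag = 5
X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]

centers = X[rng.choice(len(X), 20, replace=False)]   # RBF centers
width = np.median(np.linalg.norm(X[:, None] - centers[None], axis=2))

def phi(A):  # Gaussian RBF design matrix
    d = np.linalg.norm(A[:, None] - centers[None], axis=2)
    return np.exp(-(d / width) ** 2)

ntr = 400
w, *_ = np.linalg.lstsq(phi(X[:ntr]), y[:ntr], rcond=None)  # output weights

pred = phi(X) @ w
resid = y[:ntr] - pred[:ntr]
ma = resid[-10:].mean()            # moving average of the latest residuals
corrected = pred[ntr:] + ma        # error-enhanced forecasts (a rolling window
                                   # would update ma as new errors arrive)
print("RMSE plain  :", np.sqrt(np.mean((y[ntr:] - pred[ntr:]) ** 2)))
print("RMSE hybrid :", np.sqrt(np.mean((y[ntr:] - corrected) ** 2)))
```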

  8. Computing single step operators of logic programming in radial basis function neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)

    2014-07-10

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of a logic program is defined as a function T_P: I → I. Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
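
    The single-step (immediate consequence) operator itself is easy to state concretely. A minimal sketch, assuming a clause representation of (head, positive body, negated body), that iterates T_P from the empty interpretation to a fixed point:

```python
# Minimal sketch of the single-step operator T_P for a normal logic program,
# iterated to a fixed point. Assumed clause format:
# (head, positive_body_atoms, negated_body_atoms).
program = [
    ("b", set(),  set()),       # b.
    ("c", {"b"},  set()),       # c :- b.
    ("d", {"c"},  {"e"}),       # d :- c, not e.
]

def T_P(I):
    """Map interpretation I (set of true ground atoms) to T_P(I)."""
    return {head for head, pos, neg in program
            if pos <= I and not (neg & I)}

# Iterate T_P from the empty interpretation; for this program the iteration
# converges (in general, normal programs need not converge monotonically).
I = set()
while True:
    nxt = T_P(I)
    if nxt == I:               # fixed point reached: a supported model
        break
    I = nxt

print("fixed point of T_P:", sorted(I))   # ['b', 'c', 'd']
```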

  9. The application of a neural network methodology to the analysis of a dyeing operation

    Energy Technology Data Exchange (ETDEWEB)

    Hench, K.W.; Al-Ghanim, A.M. [New Mexico State Univ., Las Cruces, NM (United States). Dept. of Industrial Engineering

    1995-09-01

    The purpose of a dyeing process is to impart to a fiber, through the use of a bath solution, a color that has desirable qualities. The success of an operation depends on a variety of factors including fiber content, dye composition, dyebath pH, time, and temperature. In the event of a failed run, determining the correct amount of each dye to add so as to move the dye run from a fail condition to a pass condition is a subjective judgement made by an experienced operator. This paper presents a neural network approach for analyzing the dyeing process. Predictions of dye additions were obtained with promising results.

  10. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing

    Science.gov (United States)

    Rusakov, Dmitri A.; Savtchenko, Leonid P.

    2017-01-01

    Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward a newly developed simulation environment, ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877

  11. Computational Methodologies for Developing Structure–Morphology–Performance Relationships in Organic Solar Cells: A Protocol Review

    KAUST Repository

    Do, Khanh

    2016-09-08

    We outline a step-by-step protocol that incorporates a number of theoretical and computational methodologies to evaluate the structural and electronic properties of pi-conjugated semiconducting materials in the condensed phase. Our focus is on methodologies appropriate for the characterization, at the molecular level, of the morphology in blend systems consisting of an electron donor and electron acceptor, of importance for understanding the performance properties of bulk-heterojunction organic solar cells. The protocol is formulated as an introductory manual for investigators who aim to study the bulk-heterojunction morphology in molecular detail, thereby facilitating the development of structure-morphology-property relationships when used in tandem with experimental results.

  12. An experimental study on nonlinear function computation for neural/fuzzy hardware design.

    Science.gov (United States)

    Basterretxea, Koldo; Tarela, José Manuel; del Campo, Inés; Bosque, Guillermo

    2007-01-01

    An experimental study on the influence of the computation of basic nodal nonlinear functions on the performance of neurofuzzy systems (NFSs) is described in this paper. Systems' architecture size, their approximation capability, and the smoothness of provided mappings are used as performance indexes in this comparative study. Two widely used kernel functions, the sigmoid-logistic function and the Gaussian function, are analyzed by their computation through an accuracy-controllable approximation algorithm designed for hardware implementation. Two artificial neural network (ANN) paradigms are selected for the analysis: backpropagation neural networks (BPNNs) with one hidden layer and radial basis function (RBF) networks. Extensive simulation of simple benchmark approximation problems is used in order to achieve generalizable conclusions. For the performance analysis of fuzzy systems, a functional equivalence theorem is used to extend the obtained results to fuzzy inference systems (FISs). Finally, the adaptive neurofuzzy inference system (ANFIS) paradigm is used to observe the behavior of neurofuzzy systems with learning capabilities.

  13. Modeling and computing of stock index forecasting based on neural network and Markov chain.

    Science.gov (United States)

    Dai, Yonghui; Han, Dongmei; Dai, Weihui

    2014-01-01

    The stock index reflects the fluctuation of the stock market. For a long time, there has been a great deal of research on forecasting the stock index. However, traditional methods are limited in achieving ideal precision in the dynamic market due to the influence of many factors such as the economic situation, policy changes, and emergency events. Therefore, approaches based on adaptive modeling and conditional probability transfer have attracted new attention from researchers. This paper presents a new forecast method combining an improved back-propagation (BP) neural network and a Markov chain, as well as its modeling and computing technology. This method includes initial forecasting by the improved BP neural network, division of Markov state regions, computing of the state transition probability matrix, and the prediction adjustment. Results of the empirical study show that this method can achieve high accuracy in stock index prediction, and it could provide a good reference for investment in the stock market.
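
    A hedged sketch of the Markov-chain adjustment step: residuals of an initial forecast are discretized into state regions, a transition matrix is estimated from the state sequence, and the next raw forecast is shifted by the expected relative error. The initial predictions and all numbers are synthetic stand-ins for the improved BP network's output:

```python
# Hedged sketch of a Markov-chain forecast correction: discretize residuals
# into states, estimate the state-transition matrix, and adjust the next
# forecast by the expected relative error of the predicted state.
import numpy as np

rng = np.random.default_rng(2)
actual = np.cumsum(rng.normal(0, 1, 200)) + 100
pred = actual + rng.normal(0, 2, 200)        # stand-in for BP network output

rel_err = (actual - pred) / actual
edges = np.quantile(rel_err, [0.0, 0.25, 0.5, 0.75, 1.0])  # 4 state regions
state = np.clip(np.searchsorted(edges, rel_err, side="right") - 1, 0, 3)

P = np.zeros((4, 4))
for s, t in zip(state[:-1], state[1:]):      # count state transitions
    P[s, t] += 1
P /= P.sum(axis=1, keepdims=True)

mid = [(edges[i] + edges[i + 1]) / 2 for i in range(4)]  # error per state
exp_err = P[state[-1]] @ mid                 # expected next relative error
next_pred = 101.0                            # next raw forecast (illustrative)
print("adjusted forecast:", next_pred * (1 + exp_err))
```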

  14. A Computational Estimation of Cyclic Material Properties Using Artificial Neural Networks

    OpenAIRE

    Tomasella, A.; el Dsoki, C.; Hanselka, H.; Kaufmann, H.

    2011-01-01

    The structural durability design of components requires the knowledge of cyclic material properties. These parameters are strongly dependent on environmental conditions and manufacturing processes, and require many experimental tests to be correctly determined. Considering time and costs, it is not possible to include in the tests all the variables that influence the material behaviour. For this reason, the computational method of the Artificial Neural Network (ANN) can be implemented to supp...

  15. Utilizing neural networks in magnetic media modeling and field computation: A review

    OpenAIRE

    Amr A. Adly; Abd-El-Hafiz, Salwa K.

    2013-01-01

    Magnetic materials are considered as crucial components for a wide range of products and devices. Usually, complexity of such materials is defined by their permeability classification and coupling extent to non-magnetic properties. Hence, development of models that could accurately simulate the complex nature of these materials becomes crucial to the multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in ma...

  16. Parametric optimization for floating drum anaerobic bio-digester using Response Surface Methodology and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    S. Sathish

    2016-12-01

    The main purpose of this study was to determine the optimal conditions for biogas yield from the anaerobic digestion of agricultural waste (rice straw) using Response Surface Methodology (RSM) and an Artificial Neural Network (ANN). In the development of the predictive models, temperature, pH, substrate concentration and agitation time are conceived as model variables. The experimental results show that the linear model terms of temperature, substrate concentration, pH and agitation time have significant interactive effects (p < 0.05). The results show that the ANN model predicts the increase in biogas yield from the optimum process parameters better than the RSM model, indicating that the ANN model is much more accurate in reckoning the values of maximum biogas yield.

  17. Comparison of response surface methodology and artificial neural network approach towards efficient ultrasound-assisted biodiesel production from muskmelon oil.

    Science.gov (United States)

    Maran, J Prakash; Priya, B

    2015-03-01

    The present study evaluates and compares the prediction and simulation efficiencies of response surface methodology (RSM) and artificial neural network (ANN) based models for the fatty acid methyl ester (FAME) yield achieved from muskmelon oil (MMO) under ultrasonication by a two-step in situ process. In the first step, the free fatty acid content of MMO was reduced from 6.43% to 0.91% using H2SO4 as an acid catalyst; the organic phase was then subjected to a second reaction by adding KOH in methanol as a basic catalyst. The influence of the process variables (methanol to oil molar ratio, catalyst concentration, reaction temperature and reaction time) on the conversion of FAME (second step) was investigated by a central composite rotatable design (CCRD) of RSM and a Multi-Layer Perceptron (MLP) neural network with a 4-7-1 topology. Both models were statistically compared by the coefficient of determination, root mean square error and absolute average deviation, based on the validation data set. The coefficients of determination (R²) calculated from the validation data for the RSM and ANN models were 0.869 and 0.991, respectively. While both models showed good predictions in this study, the ANN model was more precise than the RSM model, showing ANN to be a powerful tool for modeling and optimizing FAME production.
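
    A hedged sketch of such a comparison on synthetic data: a full second-order polynomial stands in for the CCRD/RSM model and a scikit-learn MLP with the 4-7-1 topology stands in for the ANN, both scored by R², RMSE and absolute average deviation (AAD) on a validation split:

```python
# Hedged sketch of the RSM-vs-ANN comparison: a quadratic response-surface
# model and a 4-7-1 MLP fitted to the same synthetic data, compared by
# R², RMSE and AAD on a held-out validation set.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (60, 4))        # molar ratio, catalyst, temp, time (scaled)
y = 80 + 10*X[:, 0] - 8*(X[:, 1] - 0.5)**2 + 5*X[:, 2]*X[:, 3] \
    + rng.normal(0, 0.5, 60)

Xtr, Xva, ytr, yva = X[:45], X[45:], y[:45], y[45:]

quad = PolynomialFeatures(degree=2)    # full second-order model, as in RSM
rsm = LinearRegression().fit(quad.fit_transform(Xtr), ytr)
ann = MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000,
                   random_state=0).fit(Xtr, ytr)

for name, yp in [("RSM", rsm.predict(quad.transform(Xva))),
                 ("ANN", ann.predict(Xva))]:
    aad = np.mean(np.abs((yva - yp) / yva)) * 100
    print(name, "R2=%.3f RMSE=%.3f AAD=%.2f%%"
          % (r2_score(yva, yp), np.sqrt(mean_squared_error(yva, yp)), aad))
```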

  18. Low-cost autonomous perceptron neural network inspired by quantum computation

    Science.gov (United States)

    Zidan, Mohammed; Abdel-Aty, Abdel-Haleem; El-Sadek, Alaa; Zanaty, E. A.; Abdel-Aty, Mahmoud

    2017-11-01

    Achieving low-cost learning with reliable accuracy is an important goal in building intelligent machines that save time and energy and can learn on devices with limited computational resources. In this paper, we propose an efficient algorithm for a perceptron neural network inspired by quantum computing, composed of a single neuron, that classifies linearly separable applications after a single training iteration, O(1). The algorithm is applied to a real-world data set, and the results outperform the other state-of-the-art algorithms.

  19. DEVELOPMENT OF A COMPUTER SYSTEM FOR IDENTITY AUTHENTICATION USING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Timur Kartbayev

    2017-03-01

    The aim of the study is to increase the effectiveness of automated face recognition for identity authentication, considering how the parameters of a face change over time. The improvement of recognition accuracy, as well as consideration of temporal changes in a human face, can be based on the methodology of artificial neural networks. Hybrid neural networks, combining the advantages of classical neural networks and fuzzy logic systems, allow using the network's learnability along with the explanation of the findings. The structural scheme of an intelligent system for identification based on artificial neural networks is proposed in this work. It realizes the principles of digital information processing and identity recognition taking into account the forecast of key characteristics' changes over time (e.g., due to aging). The structural scheme has a three-tier architecture and implements preliminary processing, recognition and identification of images obtained as a result of monitoring. On the basis of expert knowledge, a fuzzy production rule base is designed. It allows assessing possible changes in the key characteristics used to authenticate identity based on the image. To take this possibility into consideration, a neuro-fuzzy network of ANFIS type was used, which implements the Takagi-Sugeno algorithm. The conducted experiments showed high efficiency of the developed neural network and a low value of learning errors, which allows recommending this approach for practical implementation. Application of the developed system of fuzzy production rules, which allows predicting changes in individuals over time, will improve recognition accuracy, reduce the number of authentication failures and improve the efficiency of information processing and decision-making in applications such as authentication of bank customers, users of mobile applications, or video monitoring systems of sensitive sites.

  20. A neutron spectrum unfolding computer code based on artificial neural networks

    Science.gov (United States)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2014-02-01

    The Bonner Spheres Spectrometer consists of a thermal neutron sensor placed at the center of a number of moderating polyethylene spheres of different diameters. From the measured readings, information can be derived about the spectrum of the neutron field where measurements were made. Disadvantages of the Bonner system are the weight associated with each sphere and the need to sequentially irradiate the spheres, requiring long exposure periods. Provided a well-established response matrix and adequate irradiation conditions, the most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Intelligence, mainly Artificial Neural Networks, have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This code, called the Neutron Spectrometry and Dosimetry with Artificial Neural networks unfolding code, was designed with a graphical interface and is easy to use, friendly and intuitive for the user. The core of the code is an embedded neural network architecture previously optimized using the robust design of artificial neural networks methodology. The code was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. A key feature of the code is that only seven count rates, measured with seven Bonner spheres, are required as input data for unfolding the neutron spectrum; simultaneously the code calculates 15 dosimetric quantities as well as the total flux for radiation protection purposes. The code generates a full report with all information of the unfolding...
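
    A hedged sketch of the unfolding idea, assuming synthetic stand-ins for the response matrix and training spectra: an MLP learns the inverse mapping from seven sphere count rates to a 60-bin spectrum:

```python
# Hedged sketch of ANN-based unfolding: an MLP learns the inverse mapping from
# 7 Bonner-sphere count rates to a 60-bin spectrum. The response matrix and
# training spectra are synthetic stand-ins for the IAEA compilation data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
R = rng.uniform(0, 1, (7, 60))                 # stand-in response matrix

def random_spectrum():
    """Smooth positive 60-bin spectrum, normalized to unit fluence."""
    s = np.convolve(rng.random(60), np.ones(7) / 7, mode="same")
    return s / s.sum()

spectra = np.array([random_spectrum() for _ in range(2000)])
counts = spectra @ R.T                          # simulated sphere readings

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(counts[:1800], spectra[:1800])          # train the unfolding network

test = net.predict(counts[1800:])
print("mean absolute bin error:", np.mean(np.abs(test - spectra[1800:])))
```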

  1. An Intelligent Gear Fault Diagnosis Methodology Using a Complex Wavelet Enhanced Convolutional Neural Network.

    Science.gov (United States)

    Sun, Weifang; Yao, Bin; Zeng, Nianyin; Chen, Binqiang; He, Yuchao; Cao, Xincheng; He, Wangpeng

    2017-07-12

    As a typical example of large and complex mechanical systems, rotating machinery is prone to diversified sorts of mechanical faults. Among these faults, one of the prominent causes of malfunction arises in gear transmission chains. Although they can be collected via vibration signals, the fault signatures are always submerged in overwhelming interfering content. Therefore, identifying the critical fault characteristic signal is far from an easy task. In order to improve the recognition accuracy of a fault's characteristic signal, a novel intelligent fault diagnosis method is presented. In this method, a dual-tree complex wavelet transform (DTCWT) is employed to acquire the multiscale signal features. In addition, a convolutional neural network (CNN) approach is utilized to automatically recognise a fault feature from the multiscale signal features. The experimental results of the recognition for gear faults show the feasibility and effectiveness of the proposed method, especially for the gear's weak fault features.
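
    A hedged sketch of the pipeline's spirit on synthetic vibration signals: a plain discrete wavelet transform (PyWavelets) stands in for the dual-tree complex wavelet transform, subband energies serve as multiscale features, and a small MLP stands in for the CNN:

```python
# Hedged sketch: multiscale wavelet features from vibration signals feeding a
# classifier. An ordinary DWT stands in for the paper's DTCWT, and an MLP
# stands in for the CNN; the signals are synthetic.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1024)

def vibration(faulty):
    sig = np.sin(2*np.pi*50*t) + 0.5*rng.normal(size=t.size)
    if faulty:                    # weak periodic impacts from a gear fault
        sig += 0.8*np.sin(2*np.pi*200*t)*(np.sin(2*np.pi*8*t) > 0.9)
    return sig

def band_energies(sig):
    """Energy of each DWT subband as a multiscale feature vector."""
    coeffs = pywt.wavedec(sig, "db4", level=5)
    return np.array([np.sum(c**2) for c in coeffs])

X = np.array([band_energies(vibration(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[:150], y[:150])
print("test accuracy:", clf.score(X[150:], y[150:]))
```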

  2. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit

    Directory of Open Access Journals (Sweden)

    Shi Qiang Liu

    2016-01-01

    Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing elements. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of the three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively.

  3. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit.

    Science.gov (United States)

    Liu, Shi Qiang; Zhu, Rong

    2016-01-29

    Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing elements. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of the three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively.
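
    A hedged sketch of neural-network error compensation, assuming a simulated calibration set: raw six-axis outputs corrupted by a cross-coupling matrix and a mild nonlinearity are mapped back to reference values by an MLP:

```python
# Hedged sketch of NN error compensation: calibration pairs of raw six-axis
# MIMU outputs (with simulated cross-coupling and misalignment) and reference
# values train an MLP that inverts the deterministic errors.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
true = rng.uniform(-1, 1, (3000, 6))         # 3 angular rates + 3 accels (scaled)

C = np.eye(6) + rng.normal(0, 0.05, (6, 6))  # cross-coupling / misalignment
raw = true @ C.T + 0.02 * true**2            # plus a mild nonlinearity

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(raw[:2500], true[:2500])             # comprehensive calibration

before = np.mean(np.abs(raw[2500:] - true[2500:]))
after = np.mean(np.abs(net.predict(raw[2500:]) - true[2500:]))
print("mean error before/after compensation: %.4f / %.4f" % (before, after))
```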

  4. Computational Intelligence Applications in Smart Grids: Enabling Methodologies for Proactive and Self Organizing Power Systems

    OpenAIRE

    Zobaa, AF; Vaccaro, A.

    2015-01-01

    This book considers the emerging technologies and methodologies of the application of computational intelligence to smart grids. From a conceptual point of view, the smart grid is the convergence of information and operational technologies applied to the electric grid, offering customers sustainable options and improved levels of security. Smart grid technologies include advanced sensing systems, two-way high-speed communications, monitoring and enterprise analysis software, and relate...

  5. Characterizing Deep Brain Stimulation effects in computationally efficient neural network models.

    Science.gov (United States)

    Latteri, Alberta; Arena, Paolo; Mazzone, Paolo

    2011-04-15

    Recent studies on the medical treatment of Parkinson's disease (PD) led to the introduction of the so-called Deep Brain Stimulation (DBS) technique. This particular therapy makes it possible to actively counteract the pathological activity of various deep brain structures responsible for the well-known PD symptoms. This technique, frequently joined with dopaminergic drug administration, replaces the surgical interventions implemented to suppress the activity of specific brain nuclei, called Basal Ganglia (BG). This clinical protocol made it possible to analyse and inspect signals measured from the electrodes implanted in the deep brain regions. The analysis of these signals led to the possibility of studying PD as a specific case of dynamical synchronization in biological neural networks, with the advantage of applying the theoretical analysis developed in that scientific field to find efficient treatments for this important disease. Experimental results in fact show that the PD neurological diseases are characterized by a pathological signal synchronization in the BG. Parkinsonian tremor, for example, is ascribed to neuron populations of the thalamic and striatal structures that undergo an abnormal synchronization. On the contrary, in normal conditions, the activity of the same neuron populations does not appear to be correlated and synchronized. To study in detail the effect of the stimulation signal on a pathological neural medium, efficient models of these neural structures were built, which are able to show, without any external input, the intrinsic properties of a pathological neural tissue, mimicking the BG synchronized dynamics. We start by considering a model already introduced in the literature to investigate the effects of electrical stimulation on pathologically synchronized clusters of neurons. This model used Morris-Lecar-type neurons. This neuron model, although having a high level of biological plausibility, requires a large computational effort.
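
    The synchronization phenomenon itself can be illustrated far more cheaply than with Morris-Lecar neurons. A hedged sketch using a Kuramoto phase-oscillator population (a deliberate simplification, not the model described above), where the order parameter r rises from near zero to near one as the coupling crosses threshold:

```python
# Hedged illustration of pathological synchronization: a Kuramoto phase-
# oscillator population (a much simpler stand-in for the Morris-Lecar
# network), with the order parameter r quantifying synchrony.
import numpy as np

rng = np.random.default_rng(7)
N, dt, steps = 200, 0.01, 3000
omega = rng.normal(0, 1, N)                  # natural frequencies
theta0 = rng.uniform(0, 2*np.pi, N)

def order_parameter(th):
    return abs(np.mean(np.exp(1j * th)))     # r = 1: full synchrony, r ~ 0: none

for K in (0.5, 4.0):                         # weak vs strong ("pathological") coupling
    th = theta0.copy()
    for _ in range(steps):
        r = np.mean(np.exp(1j * th))         # mean field
        th += dt * (omega + K * abs(r) * np.sin(np.angle(r) - th))
    print("K=%.1f  r=%.2f" % (K, order_parameter(th)))
```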

  6. A methodology for the design of experiments in computational intelligence with multiple regression models

    Directory of Open Access Journals (Sweden)

    Carlos Fernandez-Lozano

    2016-12-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence, and especially on a correct comparison between the different results provided by different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as in other fields, such as bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.

  7. A methodology for the design of experiments in computational intelligence with multiple regression models

    Science.gov (United States)

    Gestal, Marcos; Munteanu, Cristian R.; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence, and especially on a correct comparison between the different results provided by different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as in other fields, such as bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable. PMID:27920952

  8. A methodology for the design of experiments in computational intelligence with multiple regression models.

    Science.gov (United States)

    Fernandez-Lozano, Carlos; Gestal, Marcos; Munteanu, Cristian R; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence, and especially on a correct comparison between the different results provided by different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant using this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as in other fields, such as bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.

  9. Nutrients interaction investigation to improve Monascus purpureus FTC5391 growth rate using Response Surface Methodology and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Mohamad, R.

    2013-01-01

    Aims: Two vital factors, certain environmental conditions and nutrients as a source of energy, are required for the successful growth and reproduction of microorganisms. Manipulation of nutritional requirements is the simplest and most effective strategy to stimulate and enhance the activity of microorganisms. Methodology and Results: In this study, response surface methodology (RSM) and an artificial neural network (ANN) were employed to optimize the carbon and nitrogen sources in order to improve the growth rate of Monascus purpureus FTC5391, a new local isolate. The best models for optimization of growth rate were a multilayer full feed-forward incremental back propagation network and a modified response surface model using backward elimination. The optimum condition for cell mass production was: sucrose 2.5%, yeast extract 0.045%, casamino acid 0.275%, sodium nitrate 0.48%, potato starch 0.045%, dextrose 1%, potassium nitrate 0.57%. The experimental cell mass production using this optimal condition was 21 mg/plate/12 days, which was 2.2-fold higher than under the standard condition (sucrose 5%, yeast extract 0.15%, casamino acid 0.25%, sodium nitrate 0.3%, potato starch 0.2%, dextrose 1%, potassium nitrate 0.3%). Conclusion, significance and impact of study: The results of RSM and ANN showed that all carbon and nitrogen sources tested had a significant effect on growth rate (P-value < 0.05). In addition, the use of RSM and ANN alongside each other provided a proper growth prediction model.

  10. Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images

    Directory of Open Access Journals (Sweden)

    Wei Li

    2016-01-01

    Computer aided detection (CAD) systems can assist radiologists by offering a second opinion on early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural networks method for nodule classification, which has an advantage of autolearning representation and strong generalization ability. A specified network structure for nodule images is proposed to solve the recognition of three types of nodules, that is, solid, semisolid, and ground glass opacity (GGO). Deep convolutional neural networks are trained by 62,492 regions-of-interest (ROIs) samples including 40,772 nodules and 21,720 nonnodules from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy and that it consistently outperforms the competing methods.

  11. Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images

    Science.gov (United States)

    Li, Wei; Zhao, Dazhe; Wang, Junbo

    2016-01-01

    Computer aided detection (CAD) systems can assist radiologists by offering a second opinion on early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural networks method for nodule classification, which has an advantage of autolearning representation and strong generalization ability. A specified network structure for nodule images is proposed to solve the recognition of three types of nodules, that is, solid, semisolid, and ground glass opacity (GGO). Deep convolutional neural networks are trained by 62,492 regions-of-interest (ROIs) samples including 40,772 nodules and 21,720 nonnodules from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy and that it consistently outperforms the competing methods. PMID:28070212
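
    A minimal PyTorch sketch of a three-way nodule classifier (solid / semisolid / GGO) on small ROI patches; the architecture and the random tensors standing in for LIDC data are illustrative, not the authors' network:

```python
# Minimal sketch (not the paper's architecture) of a CNN for three-way nodule
# classification on small CT ROI patches, in PyTorch.
import torch
import torch.nn as nn

class NoduleCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        h = self.features(x)              # (B, 32, 8, 8) for 32x32 inputs
        return self.classifier(h.flatten(1))

model = NoduleCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 32, 32)             # batch of ROI patches (placeholder)
y = torch.randint(0, 3, (8,))             # nodule-type labels (placeholder)
for _ in range(5):                        # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("loss:", float(loss))
```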

  12. Convolutional Neural Network on Embedded Linux System-on-Chip: A Methodology and Performance Benchmark

    Science.gov (United States)

    2016-05-01

    CNNs operate in two phases or modes: training and testing. The goal of training is to determine the optimal parameter values for accuracy performance on a specific dataset, with minimal concern for compute resources. The research question that follows is mostly unanswered: what are the Pareto-optimal points and trade-offs for an energy-efficient CNN?

  13. The prediction in computer color matching of dentistry based on GA+BP neural network.

    Science.gov (United States)

    Li, Haisheng; Lai, Long; Chen, Li; Lu, Cheng; Cai, Qiang

    2015-01-01

    Although the use of computer color matching can reduce the influence of subjective factors introduced by technicians, matching the color of a natural tooth with a ceramic restoration is still one of the most challenging topics in esthetic prosthodontics. The back propagation neural network (BPNN) has already been introduced into computer color matching in dentistry, but it has disadvantages such as instability and low accuracy. In our study, we adopt a genetic algorithm (GA) to optimize the initial weights and threshold values in the BPNN to improve the matching precision. To our knowledge, this is the first study to combine the BPNN with a GA for computer color matching in dentistry. Extensive experiments demonstrate that the proposed method improves the precision and prediction robustness of color matching in restorative dentistry.
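
    A hedged sketch of the GA + BP combination on synthetic data: a small genetic algorithm searches over initial weight vectors of a 3-4-1 network, scoring each by its loss after a short backpropagation run, and the best individual is then trained further by plain BP. The network size, GA operators, and data are all illustrative choices:

```python
# Hedged sketch of GA-optimized initial weights for a BP network.
import numpy as np

rng = np.random.default_rng(8)
X = rng.uniform(0, 1, (100, 3))                    # e.g. colorimetric inputs
y = (X @ [0.5, -0.3, 0.8] + 0.2).reshape(-1, 1)    # target shade value

def unpack(w):                                     # 3-4-1 network parameters
    return w[:12].reshape(3, 4), w[12:16], w[16:20].reshape(4, 1), w[20]

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

def bp_steps(w, n, lr=0.1):
    w = w.copy()
    for _ in range(n):                             # manual backprop, squared loss
        W1, b1, W2, b2 = unpack(w)
        H, out = forward(w, X)
        d2 = 2 * (out - y) / len(X)
        dH = (d2 @ W2.T) * (1 - H**2)
        grad = np.concatenate([(X.T @ dH).ravel(), dH.sum(0),
                               (H.T @ d2).ravel(), d2.sum(0)])
        w -= lr * grad
    return w

def fitness(w):                                    # loss after a short BP run
    _, out = forward(bp_steps(w, 20), X)
    return np.mean((out - y) ** 2)

pop = rng.normal(0, 1, (30, 21))                   # GA population of weights
for gen in range(15):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[:10]]           # selection
    kids = elite[rng.integers(0, 10, (20, 2))].mean(axis=1)  # crossover
    pop = np.vstack([elite, kids + rng.normal(0, 0.1, kids.shape)])  # mutation

best = pop[np.argmin([fitness(w) for w in pop])]
_, out = forward(bp_steps(best, 500), X)           # final BP training
print("final MSE:", np.mean((out - y) ** 2))
```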

  14. Artificial Neural Networks for Reducing Computational Effort in Active Truncated Model Testing of Mooring Lines

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Høgsberg, Jan Becker

    2015-01-01

    Hence, in principle it is possible to achieve reliable experimental data for much larger water depths than what the actual depth of the test basin would suggest. However, since the computations must be faster than real time, as the numerical simulations and the physical experiment run simultaneously, this method is very demanding in terms of numerical efficiency and computational power. Therefore, this method has not yet proved to be feasible. It has recently been shown how a hybrid method combining classical numerical models and artificial neural networks (ANN) can provide a dramatic reduction in computational effort when performing time domain simulation of mooring lines. The hybrid method uses a classical numerical model to generate simulation data, which are then subsequently used to train the ANN. After successful training the ANN is able to take over the simulation at a speed two...

  15. Just-in-Time Compilation-Inspired Methodology for Parallelization of Compute Intensive Java Code

    Directory of Open Access Journals (Sweden)

    GHULAM MUSTAFA

    2017-01-01

    Compute intensive programs generally consume a significant fraction of execution time in a small amount of repetitive code. Such repetitive code is commonly known as hotspot code. We observed that compute intensive hotspots often possess exploitable loop level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots. Hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots and exploit their parallelization potential on multicore systems. The proposed methodology selects and parallelizes each DOALL loop that is either contained in a hotspot method or calls a hotspot method. The methodology could be integrated into the front-end of a JIT compiler to parallelize sequential code just before native translation. However, compilation to native code is out of the scope of this work. As a case study, we analyze eighteen JGF (Java Grande Forum) benchmarks to determine the parallelization potential of hotspots. Eight benchmarks demonstrate a speedup of up to 7.6x on an 8-core system.

  16. Optimization of total flavonoid compound extraction from Camellia sinensis using the artificial neural network and response surface methodology

    Directory of Open Access Journals (Sweden)

    Savić Ivan M.

    2013-01-01

    The aim of this paper was to model and optimize the process of total flavonoid extraction from green tea using an artificial neural network and response surface methodology, as well as to compare these optimization techniques. The extraction time, ethanol concentration and solid-to-liquid ratio were identified as the independent variables, while the yield of total flavonoids was selected as the dependent variable. A central composite design (CCD), using a second-order polynomial model, and a multilayer perceptron (MLP) were used for fitting the obtained experimental data. The values of the root mean square error, cross-validated correlation coefficient and normal correlation coefficient for both models indicate that the artificial neural network is better at predicting total flavonoid yield than the CCD. The optimal conditions using the desirability function on the CCD model were achieved for an extraction time of 32.5 min, an ethanol concentration of 100% (v/v) and a solid-to-liquid ratio of 1:32.5 (m/v). The predicted yield at these conditions was 2.11 g/100 g of the dried extract (d.e.), while the experimentally obtained value was 2.39 g/100 g d.e. The extraction process was optimized by the use of the simplex method on the MLP model. The optimal value of total flavonoid yield (2.80 g/100 g d.e.) was achieved after an extraction time of 27.2 min using an ethanol concentration of 100% (v/v) at a solid-to-liquid ratio of 1:20.7 (m/v). The predicted value of the response under optimal conditions for the MLP model was also experimentally confirmed (2.71 g/100 g d.e.).
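
    A hedged sketch of the MLP-plus-simplex step: a small network fitted to synthetic extraction data acts as the response model, which the Nelder-Mead simplex method then maximizes over scaled time, ethanol concentration and solid-to-liquid ratio:

```python
# Hedged sketch: an MLP response model maximized by the Nelder-Mead simplex
# method to locate optimal (scaled) extraction conditions. Data are synthetic.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
X = rng.uniform(0, 1, (80, 3))                 # scaled time, ethanol %, ratio
y = 2 + 0.8*X[:, 0] - 1.5*(X[:, 1] - 0.9)**2 - (X[:, 2] - 0.4)**2 \
    + rng.normal(0, 0.05, 80)

mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
mlp.fit(X, y)                                  # surrogate of flavonoid yield

res = minimize(lambda v: -mlp.predict(v.reshape(1, -1))[0],  # maximize yield
               x0=np.full(3, 0.5), method="Nelder-Mead",
               bounds=[(0, 1)] * 3)
print("optimal scaled conditions:", res.x, "predicted yield:", -res.fun)
```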

  17. Computational benefits using artificial intelligent methodologies for the solution of an environmental design problem: saltwater intrusion.

    Science.gov (United States)

    Papadopoulou, Maria P; Nikolos, Ioannis K; Karatzas, George P

    2010-01-01

    Artificial Neural Networks (ANNs) comprise a powerful tool to approximate the complicated behavior and response of physical systems allowing considerable reduction in computation time during time-consuming optimization runs. In this work, a Radial Basis Function Artificial Neural Network (RBFN) is combined with a Differential Evolution (DE) algorithm to solve a water resources management problem, using an optimization procedure. The objective of the optimization scheme is to cover the daily water demand on the coastal aquifer east of the city of Heraklion, Crete, without reducing the subsurface water quality due to seawater intrusion. The RBFN is utilized as an on-line surrogate model to approximate the behavior of the aquifer and to replace some of the costly evaluations of an accurate numerical simulation model which solves the subsurface water flow differential equations. The RBFN is used as a local approximation model in such a way as to maintain the robustness of the DE algorithm. The results of this procedure are compared to the corresponding results obtained by using the Simplex method and by using the DE procedure without the surrogate model. As it is demonstrated, the use of the surrogate model accelerates the convergence of the DE optimization procedure and additionally provides a better solution at the same number of exact evaluations, compared to the original DE algorithm.
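
    A hedged sketch of surrogate-assisted differential evolution: an RBF interpolant over past exact evaluations pre-screens DE trial vectors, and only trials the surrogate deems promising reach the "expensive" objective (a toy function here, standing in for the subsurface flow simulation):

```python
# Hedged sketch of surrogate-assisted DE: an RBF model of archived exact
# evaluations pre-screens trial vectors before costly evaluation.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(10)

def expensive(x):                     # stand-in for the aquifer simulation
    return np.sum(100*(x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)

dim, NP = 4, 20
pop = rng.uniform(-2, 2, (NP, dim))
fit = np.array([expensive(p) for p in pop])
archive_X, archive_y = list(pop), list(fit)

for gen in range(60):
    surrogate = RBFInterpolator(np.array(archive_X), np.array(archive_y))
    for i in range(NP):
        a, b, c = pop[rng.choice(NP, 3, replace=False)]
        trial = np.where(rng.random(dim) < 0.9, a + 0.8*(b - c), pop[i])
        if surrogate(trial.reshape(1, -1))[0] < fit[i]:   # cheap pre-screen
            f = expensive(trial)                          # exact evaluation
            archive_X.append(trial); archive_y.append(f)
            if f < fit[i]:
                pop[i], fit[i] = trial, f

print("best objective:", fit.min())
```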

  18. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia

    Science.gov (United States)

    Kim, Sung-Phil; Simeral, John D.; Hochberg, Leigh R.; Donoghue, John P.; Black, Michael J.

    2008-12-01

    Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding.
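
    A hedged sketch of Kalman-filter velocity decoding, assuming a linear tuning model in which population firing rates are a noisy linear function of 2-D cursor velocity; all parameters and the simulated data are illustrative:

```python
# Hedged sketch of Kalman-filter velocity decoding from simulated population
# firing rates (rates = H @ velocity + noise).
import numpy as np

rng = np.random.default_rng(11)
n_neurons, T = 20, 300
H = rng.normal(0, 1, (n_neurons, 2))          # tuning matrix
A = 0.95 * np.eye(2)                          # smooth velocity dynamics
Q, R = 0.01 * np.eye(2), 0.5 * np.eye(n_neurons)

vel = np.zeros((T, 2))
for t in range(1, T):                         # simulate intended velocities
    vel[t] = A @ vel[t-1] + rng.multivariate_normal(np.zeros(2), Q)
rates = vel @ H.T + rng.multivariate_normal(np.zeros(n_neurons), R, T)

x, P = np.zeros(2), np.eye(2)                 # Kalman filter state
decoded = np.zeros_like(vel)
for t in range(T):
    x, P = A @ x, A @ P @ A.T + Q                       # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                      # Kalman gain
    x = x + K @ (rates[t] - H @ x)                      # update with rates
    P = (np.eye(2) - K @ H) @ P
    decoded[t] = x

print("velocity decoding correlation:",
      np.corrcoef(vel[:, 0], decoded[:, 0])[0, 1])
```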

  19. A computational methodology for formulating gasoline surrogate fuels with accurate physical and chemical kinetic properties

    KAUST Repository

    Ahmed, Ahfaz

    2015-03-01

    Gasoline is the most widely used fuel for light duty automobile transportation, but its molecular complexity makes it intractable to experimentally and computationally study the fundamental combustion properties. Therefore, surrogate fuels with a simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating surrogates for FACE (fuels for advanced combustion engines) gasolines A and C by combining regression modeling with physical and chemical kinetics simulations. The computational methodology integrates simulation tools executed across different software platforms. Initially, the palette of surrogate species and carbon types for the target fuels were determined from a detailed hydrocarbon analysis (DHA). A regression algorithm implemented in MATLAB was linked to REFPROP for simulation of distillation curves and calculation of physical properties of surrogate compositions. The MATLAB code generates a surrogate composition at each iteration, which is then used to automatically generate CHEMKIN input files that are submitted to homogeneous batch reactor simulations for prediction of research octane number (RON). The regression algorithm determines the optimal surrogate composition to match the fuel properties of FACE A and C gasoline, specifically hydrogen/carbon (H/C) ratio, density, distillation characteristics, carbon types, and RON. The optimal surrogate fuel compositions obtained using the present computational approach were compared to the real fuel properties, as well as with surrogate compositions available in the literature. Experiments were conducted within a Cooperative Fuels Research (CFR) engine operating under controlled autoignition (CAI) mode to compare the formulated surrogates against the real fuels. Carbon monoxide measurements indicated that the proposed surrogates...
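
    A hedged sketch of the regression step's core: choosing palette fractions whose mixture properties best match the target fuel, assuming (purely for illustration) linear blending of H/C ratio, density and RON. The species properties and target values are placeholders, not FACE gasoline data and not the authors' REFPROP/CHEMKIN pipeline:

```python
# Hedged sketch: constrained least squares over palette fractions to match
# target fuel properties, assuming linear mixing (illustrative only).
import numpy as np
from scipy.optimize import minimize

# columns: H/C ratio, density (g/mL), RON -- one row per palette species
props = np.array([
    [2.29, 0.68, 0.0],     # n-heptane-like
    [2.25, 0.69, 100.0],   # iso-octane-like
    [1.14, 0.87, 121.0],   # toluene-like
    [2.40, 0.62, 93.0],    # 2-methylbutane-like
])
target = np.array([1.96, 0.70, 84.0])          # placeholder target properties
weights = np.array([1.0, 5.0, 0.05])           # scale the property residuals

def objective(x):
    mix = x @ props                            # linear mixing assumption
    return np.sum((weights * (mix - target)) ** 2)

res = minimize(objective, x0=np.full(4, 0.25), method="SLSQP",
               bounds=[(0, 1)] * 4,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1}])
print("surrogate composition:", res.x.round(3))
print("mixture properties  :", (res.x @ props).round(2))
```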

  20. Quantification of aortic annulus in computed tomography angiography: Validation of a fully automatic methodology.

    Science.gov (United States)

    Gao, Xinpei; Boccalini, Sara; Kitslaar, Pieter H; Budde, Ricardo P J; Attrach, Mohamed; Tu, Shengxian; de Graaf, Michiel A; Ondrus, Tomas; Penicka, Martin; Scholte, Arthur J H A; Lelieveldt, Boudewijn P F; Dijkstra, Jouke; Reiber, Johan H C

    2017-08-01

    Automatic accurate measuring of the aortic annulus and determination of the optimal angulation of X-ray projection are important for the trans-catheter aortic valve replacement (TAVR) procedure. The objective of this study was to present a novel fully automatic methodology for the quantification of the aortic annulus in computed tomography angiography (CTA) images. CTA datasets of 26 patients were analyzed retrospectively with the proposed methodology, which consists of a knowledge-based segmentation of the aortic root and detection of the orientation and size of the aortic annulus. The accuracy of the methodology was determined by comparing the automatically derived results with the reference standard obtained by semi-automatic delineation of the aortic root and manual definition of the annulus plane. The difference between the automatic annulus diameter and the reference standard by observer 1 was 0.2±1.0mm, with an inter-observer variability of 1.2±0.6mm. The Pearson correlation coefficient for the diameter was good (0.92 for observer 1). For the first time, a fully automatic tool to assess the optimal projection curves was presented and validated. The mean difference between the optimal projection curves calculated based on the automatically defined annulus plane and the reference standard was 6.4° in the cranial/caudal (CRA/CAU) direction. The mean computation time was short with around 60s per dataset. The new fully automatic and fast methodology described in this manuscript not only provided precise measurements about the aortic annulus size with results comparable to experienced observers, but also predicted optimal X-ray projection curves from CTA images. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Complexity, chaos and human physiology: the justification for non-linear neural computational analysis.

    Science.gov (United States)

    Baxt, W G

    1994-03-15

    Background is presented to suggest that a great many biologic processes are chaotic. It is well known that chaotic processes can be accurately characterized by non-linear technologies. Evidence is presented that an artificial neural network, which is a known method for the application of non-linear statistics, is able to perform more accurately in identifying patients with and without myocardial infarction than either physicians or other computer paradigms. It is suggested that the improved performance may be due to the network's better ability to characterize what is a chaotic process embedded in the problem of the clinical diagnosis of this entity.

  2. Simulation of Neurocomputing Based on Photophobic Reactions of Euglena: Toward Microbe-Based Neural Network Computing

    Science.gov (United States)

    Ozasa, Kazunari; Aono, Masashi; Maeda, Mizuo; Hara, Masahiko

    In order to develop an adaptive computing system, we investigate microscopic optical feedback to a group of microbes (Euglena gracilis in this study) with a neural network algorithm, expecting that the unique characteristics of microbes, especially their strategies to survive/adapt against unfavorable environmental stimuli, will explicitly determine the temporal evolution of the microbe-based feedback system. The photophobic reactions of Euglena are extracted from experiments and built into a Monte Carlo simulation of microbe-based neurocomputing. The simulation revealed good performance of Euglena-based neurocomputing. Dynamic transition among the solutions is discussed from the viewpoint of feedback instability.

  3. Smart learning objects for smart education in computer science theory, methodology and robot-based implementation

    CERN Document Server

    Stuikys, Vytautas

    2015-01-01

    This monograph presents the challenges, vision and context to design smart learning objects (SLOs) through Computer Science (CS) education modelling and feature model transformations. It presents the latest research on the meta-programming-based generative learning objects (the latter with advanced features are treated as SLOs) and the use of educational robots in teaching CS topics. The introduced methodology includes the overall processes to develop SLO and smart educational environment (SEE) and integrates both into the real education setting to provide teaching in CS using constructivist a

  4. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    Science.gov (United States)

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose from studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo-Zernike and Zernike color moments, and their corresponding invariants, are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of the color moment invariants.

  5. Convective drying of regular mint leaves: analysis based on fitting empirical correlations, response surface methodology and neural networks

    Directory of Open Access Journals (Sweden)

    Ariany Binda Silva Costa

    2014-04-01

    In the present work, an analysis of the drying of peppermint (Mentha x villosa H.) leaves has been made using empirical correlations, response surface models and a neural network model. The main goal was to apply different modeling approaches to predict moisture content and drying rates in the drying of leaves, and to obtain an overview of the subject. Experiments were carried out in a convective horizontal flow dryer in which samples were placed parallel to the air stream under operating conditions of air temperatures from 36 to 64°C, air velocities from 1.0 to 2.0 m s-1 and sample loads from 18 to 42 g, corresponding to sample heights of 1.4, 1.7 and 3.5 cm, respectively. A complete 3³ experimental design was used. Results have shown that the three methodologies employed in this work were complementary, in the sense that they simultaneously provided a better understanding of the drying of leaves.

  6. Growth Characteristics Modeling of Mixed Culture of Bifidobacterium bifidum and Lactobacillus acidophilus using Response Surface Methodology and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Ganga Sahay Meena

    2014-12-01

    Different culture conditions, viz. additional carbon and nitrogen content, inoculum size and age, temperature and pH, of the mixed culture of Bifidobacterium bifidum and Lactobacillus acidophilus were optimized using response surface methodology (RSM) and an artificial neural network (ANN). Kinetic growth models were fitted for the cultivations using Fractional Factorial (FF) design experiments for the different variables. This novel concept of combining optimization and modeling presented different optimal conditions for the growth of the B. bifidum and L. acidophilus mixture than their one-variable-at-a-time (OVAT) optimization study. Through these statistical tools, the product yield (cell mass) of the mixture of B. bifidum and L. acidophilus was increased. The regression coefficients (R²) of both statistical tools indicated that ANN was better than RSM, and the regression equation was solved with the help of genetic algorithms (GA). The normalized percentage mean squared errors obtained from the ANN and RSM models were 0.08 and 0.3%, respectively. The optimum conditions for maximum biomass yield were temperature 38°C, pH 6.5, inoculum volume 1.60 mL, inoculum age 30 h, carbon content 42.31% (w/v), and nitrogen content 14.20% (w/v). The results demonstrated a higher prediction accuracy of ANN compared to RSM.
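
    As a rough illustration of the RSM side of such studies, the snippet below fits a second-order response surface to synthetic data by least squares; the two factors, the coefficients and the noise level are invented, not the paper's culture data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(30, 2))          # two coded factors, e.g. pH, T
    y = 5 + 2*X[:, 0] - X[:, 1] + 1.5*X[:, 0]*X[:, 1] - 3*X[:, 0]**2 \
        + rng.normal(scale=0.2, size=30)          # noisy quadratic response

    # Design matrix: intercept, linear, interaction, and pure quadratic terms.
    D = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0]*X[:, 1], X[:, 0]**2, X[:, 1]**2])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    r2 = 1 - np.sum((y - D @ beta)**2) / np.sum((y - y.mean())**2)
    print("coefficients:", np.round(beta, 2), " R2:", round(r2, 3))
    ```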

  7. Optimization of controlled release nanoparticle formulation of verapamil hydrochloride using artificial neural networks with genetic algorithm and response surface methodology.

    Science.gov (United States)

    Li, Yongqiang; Abbaspour, Mohammadreza R; Grootendorst, Paul V; Rauth, Andrew M; Wu, Xiao Yu

    2015-08-01

    This study was performed to optimize the formulation of polymer-lipid hybrid nanoparticles (PLN) for the delivery of an ionic water-soluble drug, verapamil hydrochloride (VRP) and to investigate the roles of formulation factors. Modeling and optimization were conducted based on a spherical central composite design. Three formulation factors, i.e., weight ratio of drug to lipid (X1), and concentrations of Tween 80 (X2) and Pluronic F68 (X3), were chosen as independent variables. Drug loading efficiency (Y1) and mean particle size (Y2) of PLN were selected as dependent variables. The predictive performance of artificial neural networks (ANN) and the response surface methodology (RSM) were compared. As ANN was found to exhibit better recognition and generalization capability over RSM, multi-objective optimization of PLN was then conducted based upon the validated ANN models and continuous genetic algorithms (GA). The optimal PLN possess a high drug loading efficiency (92.4%, w/w) and a small mean particle size (∼100nm). The predicted response variables matched well with the observed results. The three formulation factors exhibited different effects on the properties of PLN. ANN in coordination with continuous GA represent an effective and efficient approach to optimize the PLN formulation of VRP with desired properties. Copyright © 2015 Elsevier B.V. All rights reserved.
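
    The ANN-plus-GA pattern described above can be sketched as follows: fit small networks to design-point data, then search the factor space with a simple genetic algorithm. Everything below (the toy response surfaces, the 0.5 trade-off weight, the GA settings) is an invented stand-in for the published formulation study, not its data or code.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(60, 3))               # coded X1, X2, X3
    loading = 80 + 10*X[:, 0] - 25*(X[:, 0] - 0.6)**2 + 5*X[:, 1]   # toy surface
    size = 150 - 40*X[:, 1] - 30*X[:, 2] + 20*X[:, 1]*X[:, 2]       # toy surface

    model_l = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                           random_state=0).fit(X, loading)
    model_s = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                           random_state=0).fit(X, size)

    def fitness(pop):   # maximize loading while penalizing particle size
        return model_l.predict(pop) - 0.5 * model_s.predict(pop)

    pop = rng.uniform(0, 1, size=(40, 3))
    for _ in range(30):                               # GA main loop
        f = fitness(pop)
        parents = pop[np.argsort(-f)[:20]]            # truncation selection
        children = np.clip(parents + rng.normal(scale=0.05, size=parents.shape),
                           0, 1)                      # Gaussian mutation
        pop = np.vstack([parents, children])          # elitism + offspring
    best = pop[np.argmax(fitness(pop))]
    print("suggested factors:", np.round(best, 3))
    ```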

  8. A methodology to develop computational phantoms with adjustable posture for WBC calibration.

    Science.gov (United States)

    Fonseca, T C Ferreira; Bogaerts, R; Hunt, John; Vanhavere, F

    2014-11-21

    A Whole Body Counter (WBC) is a facility used to routinely assess the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done using anthropomorphic physical phantoms representing the human body. Because constructing representative physical phantoms is challenging, virtual calibration has been introduced: the use of computational phantoms together with Monte Carlo simulation of radiation transport has been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations, which in turn helps to study the major source of uncertainty associated with the in vivo measurement routine: the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps to optimize the counting measurement. Open-source packages such as MakeHuman and Blender have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces, and in-house software was developed to convert the binary 3D voxel grid into an MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms, called MaMP and FeMP (Male and Female Mesh Phantoms), to create sets of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and the AGM laboratory of SCK-CEN in Mol, Belgium.
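
    As a toy illustration of the voxel-to-input-file step, the sketch below run-length encodes a 3D material array using the "nR" repeat shorthand accepted in MCNP lattice FILL cards. It emits only a fill fragment with invented material IDs; a real MCNPX deck additionally needs cell, surface and data cards, and the in-house converter described in the abstract is certainly more elaborate.

    ```python
    import numpy as np

    def rle_line(values):
        """Compress a flat list of material IDs, e.g. [1,1,1,2] -> '1 2R 2'."""
        out, run_val, run_len = [], values[0], 1
        for v in values[1:]:
            if v == run_val:
                run_len += 1
            else:
                out.append(f"{run_val} {run_len - 1}R" if run_len > 1 else f"{run_val}")
                run_val, run_len = v, 1
        out.append(f"{run_val} {run_len - 1}R" if run_len > 1 else f"{run_val}")
        return " ".join(out)

    phantom = np.zeros((4, 4, 4), dtype=int)   # 0 = air everywhere ...
    phantom[1:3, 1:3, 1:3] = 1                 # ... with a block of "tissue"

    with open("fill_fragment.txt", "w") as f:
        for z in range(phantom.shape[2]):      # one RLE line per axial slice
            f.write(rle_line(list(phantom[:, :, z].ravel())) + "\n")
    ```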

  9. Software development methodology for computer based I&C systems of prototype fast breeder reactor

    Energy Technology Data Exchange (ETDEWEB)

    Manimaran, M., E-mail: maran@igcar.gov.in; Shanmugam, A.; Parimalam, P.; Murali, N.; Satya Murty, S.A.V.

    2015-10-15

    Highlights: • Software development methodology adopted for computer based I&C systems of PFBR is detailed. • Constraints imposed as part of the software requirements and coding phases are elaborated. • Compliance with safety and security requirements is described. • Usage of CASE (Computer Aided Software Engineering) tools during the software design, analysis and testing phases is explained. - Abstract: Prototype Fast Breeder Reactor (PFBR) is a sodium-cooled reactor in an advanced stage of construction at Kalpakkam, India. Versa Module Europa bus based Real Time Computer (RTC) systems are deployed for the Instrumentation & Control of PFBR. RTC systems have to perform safety functions within stipulated times, which calls for highly dependable software. Hence, a well defined software development methodology is adopted for RTC systems, starting from the requirement capture phase up to the final validation of the software product. The V-model is used for software development, and the IEC 60880 standard and the AERB SG D-25 guideline are followed at each phase. Requirements documents and design documents are prepared as per IEEE standards. Defensive programming strategies are followed for software development in the C language. Verification and validation (V&V) of documents and software are carried out at each phase by an independent V&V committee. Computer aided software engineering tools are used for software modelling, checking MISRA C compliance, and carrying out static and dynamic analysis. Various software metrics such as cyclomatic complexity, nesting depth and comment-to-code ratio are checked. Test cases are generated using equivalence class partitioning, boundary value analysis and cause-and-effect graphing techniques. System integration testing is carried out wherein the functional and performance requirements of the system are monitored.
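
    The boundary value analysis mentioned for test-case generation is easy to illustrate. The sketch below emits the classic min-1/min/min+1/nominal/max-1/max/max+1 points for hypothetical integer parameter ranges; the parameter names are invented examples, not PFBR signals.

    ```python
    # For each input parameter range, emit the standard boundary-value points.
    def boundary_values(lo, hi):
        return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

    params = {"sensor_count": (1, 64), "scan_period_ms": (10, 1000)}
    for name, (lo, hi) in params.items():
        for v in boundary_values(lo, hi):
            expected = "accept" if lo <= v <= hi else "reject"
            print(f"test {name}={v}: expect {expected}")
    ```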

  10. Neural Computation via Neural Geometry: A Place Code for Inter-whisker Timing in the Barrel Cortex?

    Science.gov (United States)

    Wilson, Stuart P.; Bednar, James A.; Prescott, Tony J.; Mitchinson, Ben

    2011-01-01

    The place theory proposed by Jeffress (1948) is still the dominant model of how the brain represents the movement of sensory stimuli between sensory receptors. According to the place theory, delays in signalling between neurons, dependent on the distances between them, compensate for time differences in the stimulation of sensory receptors. Hence the location of neurons, activated by the coincident arrival of multiple signals, reports the stimulus movement velocity. Despite its generality, most evidence for the place theory has been provided by studies of the auditory system of auditory specialists like the barn owl, but in the study of mammalian auditory systems the evidence is inconclusive. We ask to what extent the somatosensory systems of tactile specialists like rats and mice use distance dependent delays between neurons to compute the motion of tactile stimuli between the facial whiskers (or ‘vibrissae’). We present a model in which synaptic inputs evoked by whisker deflections arrive at neurons in layer 2/3 (L2/3) somatosensory ‘barrel’ cortex at different times. The timing of synaptic inputs to each neuron depends on its location relative to sources of input in layer 4 (L4) that represent stimulation of each whisker. Constrained by the geometry and timing of projections from L4 to L2/3, the model can account for a range of experimentally measured responses to two-whisker stimuli. Consistent with that data, responses of model neurons located between the barrels to paired stimulation of two whiskers are greater than the sum of the responses to either whisker input alone. The model predicts that for neurons located closer to either barrel these supralinear responses are tuned for longer inter-whisker stimulation intervals, yielding a topographic map for the inter-whisker deflection interval across the surface of L2/3. This map constitutes a neural place code for the relative timing of sensory stimuli. PMID:22022245
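
    The core mechanism (conduction delays proportional to distance turning an inter-whisker interval into a place of peak coincidence) can be caricatured in a few lines. The sketch below is a toy model with arbitrary units and an invented Gaussian coincidence window, not the paper's constrained L4-to-L2/3 geometry.

    ```python
    import numpy as np

    positions = np.linspace(0.0, 1.0, 101)   # neurons between two "barrels"
    v = 2.0                                  # conduction speed (arbitrary units)
    sigma = 0.05                             # coincidence-detection window

    def response(interval):
        # whisker A drives from x=0 at t=0; whisker B from x=1 at t=interval
        t_a = positions / v
        t_b = interval + (1.0 - positions) / v
        return np.exp(-((t_a - t_b) ** 2) / (2 * sigma ** 2))  # supralinear boost

    for dt in [-0.2, 0.0, 0.2]:
        best = positions[np.argmax(response(dt))]
        print(f"interval {dt:+.1f} -> peak response at x = {best:.2f}")
    ```

    The peak location shifts linearly with the interval, which is exactly the topographic map for inter-whisker deflection interval the abstract describes.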

  11. Neural computation via neural geometry: a place code for inter-whisker timing in the barrel cortex?

    Directory of Open Access Journals (Sweden)

    Stuart P Wilson

    2011-10-01

    The place theory proposed by Jeffress (1948) is still the dominant model of how the brain represents the movement of sensory stimuli between sensory receptors. According to the place theory, delays in signalling between neurons, dependent on the distances between them, compensate for time differences in the stimulation of sensory receptors. Hence the location of neurons, activated by the coincident arrival of multiple signals, reports the stimulus movement velocity. Despite its generality, most evidence for the place theory has been provided by studies of the auditory system of auditory specialists like the barn owl, but in the study of mammalian auditory systems the evidence is inconclusive. We ask to what extent the somatosensory systems of tactile specialists like rats and mice use distance dependent delays between neurons to compute the motion of tactile stimuli between the facial whiskers (or 'vibrissae'). We present a model in which synaptic inputs evoked by whisker deflections arrive at neurons in layer 2/3 (L2/3) somatosensory 'barrel' cortex at different times. The timing of synaptic inputs to each neuron depends on its location relative to sources of input in layer 4 (L4) that represent stimulation of each whisker. Constrained by the geometry and timing of projections from L4 to L2/3, the model can account for a range of experimentally measured responses to two-whisker stimuli. Consistent with that data, responses of model neurons located between the barrels to paired stimulation of two whiskers are greater than the sum of the responses to either whisker input alone. The model predicts that for neurons located closer to either barrel these supralinear responses are tuned for longer inter-whisker stimulation intervals, yielding a topographic map for the inter-whisker deflection interval across the surface of L2/3. This map constitutes a neural place code for the relative timing of sensory stimuli.

  12. Computer vision system for egg volume prediction using backpropagation neural network

    Science.gov (United States)

    Siswantoro, J.; Hilman, M. Y.; Widiasri, M.

    2017-11-01

    Volume is one of the aspects considered in the egg sorting process. A rapid and accurate volume measurement method is needed to develop an egg sorting system. A computer vision system (CVS) provides a promising solution to the volume measurement problem. Artificial neural networks (ANN) have been used to predict the volume of eggs in several CVSs. However, volume prediction from an ANN can be less accurate due to inappropriate input features or an inappropriate ANN structure. This paper proposes a CVS for predicting the volume of an egg using an ANN. The CVS acquires an image of the egg from the top view and then processes the image to extract its 1D and 2D size features. The features are used as input for the ANN in predicting the volume of the egg. The experimental results show that the proposed CVS can predict the volume of an egg with good accuracy and low computation time.

  13. Characterization of physiological networks in sleep apnea patients using artificial neural networks for Granger causality computation

    Science.gov (United States)

    Cárdenas, Jhon; Orjuela-Cañón, Alvaro D.; Cerquera, Alexander; Ravelo, Antonio

    2017-11-01

    Different studies have used Transfer Entropy (TE) and Granger Causality (GC) computation to quantify the interconnection between physiological systems. These methods have disadvantages in their parametrization and in the availability of analytic formulas to evaluate the significance of the results. Another inconvenience relates to the assumptions about the distribution of the models generated from the data. In this document, the authors present a way to measure the causality connecting the Central Nervous System (CNS) and the Cardiac System (CS) in people diagnosed with obstructive sleep apnea syndrome (OSA), before and during treatment with continuous positive airway pressure (CPAP). For this purpose, artificial neural networks were used to obtain models for GC computation, based on time series of normalized powers calculated from electrocardiography (EKG) and electroencephalography (EEG) signals recorded in polysomnography (PSG) studies.
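
    A common way to realize ANN-based Granger causality is to compare the prediction error of a model given only the target's past with that of a model also given the driver's past. The sketch below applies that generic recipe to synthetic series; the network size, lag order and coupling are illustrative stand-ins for the EEG/EKG band-power series used in the study.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n, p = 2000, 3                      # samples, model order (past lags)
    y = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):               # X depends on its own past and on Y
        x[t] = 0.5 * x[t - 1] + 0.4 * np.tanh(y[t - 1]) + 0.1 * rng.normal()

    def lagged(series, p):
        # column k holds the series delayed by k+1 steps
        return np.column_stack([series[p - 1 - k:-1 - k] for k in range(p)])

    X_own = lagged(x, p)
    X_both = np.column_stack([X_own, lagged(y, p)])
    target = x[p:]

    def mse(feats):
        m = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
        m.fit(feats[:1500], target[:1500])
        return np.mean((m.predict(feats[1500:]) - target[1500:]) ** 2)

    gc = np.log(mse(X_own) / mse(X_both))   # > 0 suggests Y Granger-causes X
    print(f"GC index (Y -> X): {gc:.3f}")
    ```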

  14. Adaptiveness in monotone pseudo-Boolean optimization and stochastic neural computation.

    Science.gov (United States)

    Grossi, Giuliano

    2009-08-01

    Hopfield neural network (HNN) is a nonlinear computational model successfully applied in finding near-optimal solutions of several difficult combinatorial problems. In many cases, the network energy function is obtained through a learning procedure so that its minima are states falling into a proper subspace (feasible region) of the search space. However, because of the network nonlinearity, a number of undesirable local energy minima emerge from the learning procedure, significantly affecting the network performance. In the neural model analyzed here, we combine both a penalty and a stochastic process in order to enhance the performance of a binary HNN. The penalty strategy allows us to gradually lead the search towards states representing feasible solutions, thus avoiding oscillatory behaviors or asymptotically unstable convergence. The presence of stochastic dynamics potentially prevents the network from falling into shallow local minima of the energy function, i.e., minima quite far from the global optimum. Hence, for a given fixed network topology, the desired final distribution on the states can be reached by carefully modulating this process. The model uses pseudo-Boolean functions to express both the problem constraints and the cost function; a combination of these two functions is then interpreted as the energy of the neural network. A wide variety of NP-hard problems fall in the class of problems that can be solved by the model at hand, particularly those having a monotonic quadratic pseudo-Boolean function as constraint function, that is, functions easily derived from closed algebraic expressions representing the constraint structure and easy (polynomial time) to maximize. We show the asymptotic convergence properties of this model, characterizing its state space distribution at thermal equilibrium in terms of a Markov chain, and give evidence of its ability to find high quality solutions on benchmarks and randomly generated instances of two specific problems taken from the computational graph
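
    A minimal cartoon of the combined penalty/stochastic idea is a binary network with sigmoid (Glauber-style) updates on a quadratic pseudo-Boolean energy, with gradual cooling so the dynamics can escape shallow minima. The random problem instance and cooling schedule below are placeholders, not the paper's learned energies.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 30
    W = rng.normal(size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
    b = rng.normal(size=n)
    s = rng.integers(0, 2, size=n).astype(float)

    def energy(s):                              # E(s) = -0.5 s'Ws - b's
        return -0.5 * s @ W @ s - b @ s

    T = 2.0
    for sweep in range(500):
        for i in rng.permutation(n):
            field = W[i] @ s + b[i]                  # local input to unit i
            p_on = 1.0 / (1.0 + np.exp(-field / T))  # stochastic activation
            s[i] = 1.0 if rng.random() < p_on else 0.0
        T = max(0.01, T * 0.99)                      # gradual cooling
    print("final energy:", round(energy(s), 3))
    ```

    In the paper's setting the quadratic form would encode both the penalty (constraint) terms and the cost function rather than random coefficients.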

  15. A proposed methodology for computational fluid dynamics code verification, calibration, and validation

    Science.gov (United States)

    Aeschliman, D. P.; Oberkampf, W. L.; Blottner, F. G.

    Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.

  16. A proposed methodology for computational fluid dynamics code verification, calibration, and validation

    Energy Technology Data Exchange (ETDEWEB)

    Aeschliman, D.P.; Oberkampf, W.L.; Blottner, F.G.

    1995-07-01

    Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.

  17. Lexical organization and competition in first and second languages: computational and neural mechanisms.

    Science.gov (United States)

    Li, Ping

    2009-06-01

    How does a child rapidly acquire and develop a structured mental organization for the vast number of words learned in the first years of life? How does a bilingual individual deal with the even more complicated task of learning and organizing two lexicons? Only recently have we started to examine the lexicon as a dynamical system with regard to its acquisition, representation, and organization. In this article, I outline a proposal based on our research that takes the dynamical approach to the lexicon, and I discuss how this proposal can be applied to account for lexical organization, structural representation, and competition within and between languages. In particular, I provide computational evidence based on the DevLex model, a self-organizing neural network model, and neuroimaging evidence based on functional magnetic resonance imaging (fMRI) studies, to illustrate how children and adults learn and represent the lexicon in their first and second languages. In the computational research, our goal has been to identify, through linguistically and developmentally realistic models, detailed cognitive mechanisms underlying the dynamic self-organizing processes in monolingual and bilingual lexical development; in the neuroimaging research, our goal has been to identify the neural substrates that subserve lexical organization and competition in the monolingual and the bilingual brain. In both cases, our findings lead to a better understanding of the interactive dynamics involved in the acquisition and representation of one or multiple languages. Copyright © 2009 Cognitive Science Society, Inc.

  18. Quantum neural network-based EEG filtering for a brain-computer interface.

    Science.gov (United States)

    Gandhi, Vaibhav; Prasad, Girijesh; Coyle, Damien; Behera, Laxmidhar; McGinnity, Thomas Martin

    2014-02-01

    A novel neural information processing architecture inspired by quantum mechanics and incorporating the well-known Schrodinger wave equation is proposed in this paper. The proposed architecture referred to as recurrent quantum neural network (RQNN) can characterize a nonstationary stochastic signal as time-varying wave packets. A robust unsupervised learning algorithm enables the RQNN to effectively capture the statistical behavior of the input signal and facilitates the estimation of signal embedded in noise with unknown characteristics. The results from a number of benchmark tests show that simple signals such as dc, staircase dc, and sinusoidal signals embedded within high noise can be accurately filtered and particle swarm optimization can be employed to select model parameters. The RQNN filtering procedure is applied in a two-class motor imagery-based brain-computer interface where the objective was to filter electroencephalogram (EEG) signals before feature extraction and classification to increase signal separability. A two-step inner-outer fivefold cross-validation approach is utilized to select the algorithm parameters subject-specifically for nine subjects. It is shown that the subject-specific RQNN EEG filtering significantly improves brain-computer interface performance compared to using only the raw EEG or Savitzky-Golay filtered EEG across multiple sessions.

  19. Modeling and Computing of Stock Index Forecasting Based on Neural Network and Markov Chain

    Directory of Open Access Journals (Sweden)

    Yonghui Dai

    2014-01-01

    The stock index reflects the fluctuation of the stock market. For a long time, there has been a great deal of research on stock index forecasting. However, traditional methods are limited in achieving ideal precision in a dynamic market due to the influence of many factors such as the economic situation, policy changes, and emergency events. Therefore, approaches based on adaptive modeling and conditional probability transfer have attracted new attention from researchers. This paper presents a new forecast method combining an improved back-propagation (BP) neural network and a Markov chain, together with its modeling and computing technology. The method includes initial forecasting by the improved BP neural network, division of the Markov state region, computation of the state transition probability matrix, and adjustment of the prediction. Results of the empirical study show that this method can achieve high accuracy in stock index prediction, and it could provide a good reference for investment in the stock market.
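
    The two-stage scheme can be sketched generically: make a first-pass forecast, discretize its relative errors into Markov states, estimate the state transition matrix, and shift the next forecast by the expected error of the predicted state. In the sketch below a naive persistence forecast stands in for the improved BP network, and the index series is synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    index = 100 + np.cumsum(rng.normal(0.1, 1.0, size=400))   # synthetic index
    forecast = index[:-1]                      # placeholder one-step forecast
    err = (index[1:] - forecast) / forecast    # relative errors to be modeled

    # Divide the error range into Markov states and count transitions.
    edges = np.quantile(err, [0.25, 0.5, 0.75])
    states = np.digitize(err, edges)           # 4 states: 0..3
    P = np.zeros((4, 4))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    P /= P.sum(axis=1, keepdims=True)

    # Adjust the next forecast by the expected error given the current state.
    centers = [err[states == k].mean() for k in range(4)]
    next_state_dist = P[states[-1]]
    next_forecast = index[-1]                  # persistence forecast for t+1
    adjusted = next_forecast * (1 + next_state_dist @ centers)
    print(f"raw: {next_forecast:.2f}  adjusted: {adjusted:.2f}")
    ```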

  20. Computational implementation of a systems prioritization methodology for the Waste Isolation Pilot Plant: A preliminary example

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C. [Arizona State Univ., Tempe, AZ (United States). Dept. of Mathematics; Anderson, D.R. [Sandia National Labs., Albuquerque, NM (United States). WIPP Performance Assessments Departments; Baker, B.L. [Technadyne Engineering Consultants, Albuquerque, NM (United States)] [and others]

    1996-04-01

    A systems prioritization methodology (SPM) is under development to provide guidance to the US DOE on experimental programs and design modifications to be supported in the development of a successful licensing application for the Waste Isolation Pilot Plant (WIPP) for the geologic disposal of transuranic (TRU) waste. The purpose of the SPM is to determine the probabilities that the implementation of different combinations of experimental programs and design modifications, referred to as activity sets, will lead to compliance. Appropriate tradeoffs between compliance probability, implementation cost and implementation time can then be made in the selection of the activity set to be supported in the development of a licensing application. Descriptions are given for the conceptual structure of the SPM and the manner in which this structure determines the computational implementation of an example SPM application. Due to the sophisticated structure of the SPM and the computational demands of many of its components, the overall computational structure must be organized carefully to provide the compliance probabilities for the large number of activity sets under consideration at an acceptable computational cost. Conceptually, the determination of each compliance probability is equivalent to a large numerical integration problem. 96 refs., 31 figs., 36 tabs.

  1. Computational methodology to determine fluid-related parameters of non-regular three-dimensional scaffolds.

    Science.gov (United States)

    Acosta Santamaría, Víctor Andrés; Malvè, M; Duizabo, A; Mena Tobar, A; Gallego Ferrer, G; García Aznar, J M; Doblaré, M; Ochoa, I

    2013-11-01

    The application of three-dimensional (3D) biomaterials to facilitate the adhesion, proliferation, and differentiation of cells has been widely studied for tissue engineering purposes. The fabrication methods used to improve the mechanical response of the scaffold produce complex and non-regular structures. Apart from the mechanical aspect, the fluid behavior in the inner part of the scaffold should also be considered. Parameters such as permeability (k) or wall shear stress (WSS) are important aspects in the provision of nutrients, the removal of metabolic waste products and the mechanically-induced differentiation of cells attached to the trabecular network of the scaffolds. Experimental measurements of these parameters are not available in all labs, yet fluid parameters should be known prior to other types of experiments. The present work compares an experimental study with a computational fluid dynamics (CFD) methodology to determine the related fluid parameters (k and WSS) of complex non-regular poly(L-lactic acid) scaffolds based only on the treatment of microphotographic images obtained with a microCT (μCT). The CFD analysis shows similar tendencies and results, with low relative difference compared to those of the experimental study, for high flow rates. For low flow rates the accuracy of this prediction decreases. The correlation between the computational and experimental results validates the robustness of the proposed methodology.

  2. COMPUTATIONAL ANALYSIS BASED ON ARTIFICIAL NEURAL NETWORKS FOR AIDING IN DIAGNOSING OSTEOARTHRITIS OF THE LUMBAR SPINE.

    Science.gov (United States)

    Veronezi, Carlos Cassiano Denipotti; de Azevedo Simões, Priscyla Waleska Targino; Dos Santos, Robson Luiz; da Rocha, Edroaldo Lummertz; Meláo, Suelen; de Mattos, Merisandra Côrtes; Cechinel, Cristian

    2011-01-01

    To ascertain the advantages of applying artificial neural networks to recognize patterns on lumbar spine radiographs in order to aid in the process of diagnosing primary osteoarthritis. This was a cross-sectional descriptive analytical study with a quantitative approach and an emphasis on diagnosis. The training set was composed of images collected between January and July 2009 from patients who had undergone lateral-view digital radiographs of the lumbar spine, which were provided by a radiology clinic located in the municipality of Criciúma (SC). Out of the total of 260 images gathered, those with distortions, those presenting pathological conditions that altered the architecture of the lumbar spine and those with patterns that were difficult to characterize were discarded, resulting in 206 images. The image database (n = 206) was then subdivided, resulting in 68 radiographs for the training stage, 68 images for tests and 70 for validation. A hybrid neural network based on Kohonen self-organizing maps and on Multilayer Perceptron networks was used. After 90 cycles, the validation was carried out on the best results, achieving accuracy of 62.85%, sensitivity of 65.71% and specificity of 60%. Even though the effectiveness shown was moderate, this study is still innovative. The values show that the technique used has a promising future, pointing towards further studies on image and cycle processing methodology with a larger quantity of radiographs.

  3. Computer aided decision making for heart disease detection using hybrid neural network-Genetic algorithm.

    Science.gov (United States)

    Arabasadi, Zeinab; Alizadehsani, Roohallah; Roshanzamir, Mohamad; Moosaei, Hossein; Yarifard, Ali Asghar

    2017-04-01

    Cardiovascular disease is one of the most rampant causes of death around the world and is a major illness in middle and old age. Coronary artery disease, in particular, is a widespread cardiovascular malady entailing high mortality rates. Angiography is, more often than not, regarded as the best method for the diagnosis of coronary artery disease; on the other hand, it is associated with high costs and major side effects. Much research has therefore been conducted using machine learning and data mining to seek alternative modalities. Accordingly, we herein propose a highly accurate hybrid method for the diagnosis of coronary artery disease. The proposed method is able to increase the performance of a neural network by approximately 10% by enhancing its initial weights with a genetic algorithm, which suggests better starting weights for the neural network. Using this methodology, we achieved accuracy, sensitivity and specificity rates of 93.85%, 97% and 92%, respectively, on the Z-Alizadeh Sani dataset. Copyright © 2017 Elsevier B.V. All rights reserved.
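
    One way to realize GA-enhanced initial weights, sketched below on invented data: each GA individual is an initial weight vector for a tiny one-hidden-layer network trained on log-loss, its fitness is the accuracy reached after a short gradient-descent run, and the best initializations are kept and mutated. This is an illustration of the idea, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Toy binary-classification data standing in for the clinical features.
    X = rng.normal(size=(200, 8))
    y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)

    def unpack(theta):                       # 8-6-1 network, 61 parameters
        W1 = theta[:48].reshape(8, 6).copy()
        b1 = theta[48:54].copy()
        W2 = theta[54:60].copy()
        b2 = float(theta[60])
        return W1, b1, W2, b2

    def accuracy_after_training(theta, steps=50, lr=0.1):
        W1, b1, W2, b2 = unpack(theta)
        for _ in range(steps):               # plain batch gradient descent
            H = np.tanh(X @ W1 + b1)
            p = 1 / (1 + np.exp(-(H @ W2 + b2)))
            g = (p - y) / len(y)             # dL/dlogit for log-loss
            W2 -= lr * H.T @ g; b2 -= lr * g.sum()
            dH = np.outer(g, W2) * (1 - H ** 2)
            W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)
        H = np.tanh(X @ W1 + b1)
        p = 1 / (1 + np.exp(-(H @ W2 + b2)))
        return ((p > 0.5) == y).mean()

    # GA over initial weight vectors: truncation selection + Gaussian mutation.
    pop = rng.normal(scale=0.5, size=(20, 61))
    for gen in range(10):
        fit = np.array([accuracy_after_training(t) for t in pop])
        parents = pop[np.argsort(-fit)[:10]]
        children = parents[rng.integers(0, 10, size=10)] \
            + rng.normal(scale=0.1, size=(10, 61))
        pop = np.vstack([parents, children])
    print("best accuracy:", max(accuracy_after_training(t) for t in pop))
    ```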

  4. An original methodology to compute SWE of mountainous regions: insight from the Italian Eastern Alps

    Science.gov (United States)

    Cianfarra, Paola; Valt, Mauro

    2013-04-01

    In this work we present an original methodology for the evaluation of the Snow Water Equivalent (SWE) for regions covering an area of about 5000 km2. The methodology has been tuned and set up for the Italian Eastern Alps using MODIS satellite images (http://rapidfire.sci.gsfc.nasa.gov/realtime/) and data derived from the monitoring networks of the local Snow Avalanche Services. The methodology includes: i) the identification of the Snow Covered Area (SCA) from satellite images; ii) the near real-time computation of mean snow depth (Hs) values from the available monitoring networks; iii) the derivation of the mean snow density by season and by depth interval. The satellite image processing for the computation of the SCA has been tuned specifically for the Eastern Alps and includes the computation of the Normalised Difference Snow Index (NDSI) with a threshold value chosen ad hoc for the investigated area, and the use of a decision tree. The identification of the most effective threshold value is the most sensitive part of the image processing, because this threshold depends on many factors such as the local physiographic setting, the altitude intervals, the shadows, and the vegetation. By comparing the obtained SCA map with the digital elevation model of the investigated region it is possible to derive the snow covered area by altitude interval. The Italian Snow Avalanche Services maintain networks for monitoring Hs over their areas of competence. These networks are based on real-time automatic measurement systems or on snow fields where manual measurements are performed every morning. From these measurements, mean Hs values are derived by altitude interval (every 300 m, starting from 600 m elevation in the Eastern Italian Alps). The altitude intervals are chosen based on the physiographic setting and the local climate of the investigated region. Snow density values are derived from a long time-series database where measurements from the Italian Alps are
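
    The SCA step rests on the Normalised Difference Snow Index, NDSI = (green - SWIR)/(green + SWIR), thresholded per region. A minimal numpy illustration with toy reflectances follows; the 0.4 cut-off is the commonly quoted generic value, whereas the abstract stresses that the threshold must be tuned ad hoc for the investigated area.

    ```python
    import numpy as np

    green = np.array([[0.55, 0.30], [0.60, 0.10]])   # reflectances (toy values)
    swir  = np.array([[0.10, 0.25], [0.05, 0.09]])

    ndsi = (green - swir) / (green + swir + 1e-9)    # avoid division by zero
    snow_mask = ndsi > 0.4                           # generic threshold
    print(ndsi.round(2))
    print("snow-covered fraction:", snow_mask.mean())
    ```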

  5. Forecast and restoration of geomagnetic activity indices by using the software-computational neural network complex

    Science.gov (United States)

    Barkhatov, Nikolay; Revunov, Sergey

    2010-05-01

    It is known that the currently used indices of geomagnetic activity reflect, to some extent, the physical processes occurring when the perturbed solar wind interacts with the Earth's magnetosphere. They are therefore connected to each other and to the parameters of near-Earth space, and establishing such nonlinear connections is of interest. For problems of this kind, which are complex and involve many parameters, the technology of artificial neural networks is applied. This approach was used to develop an automated method for forecasting and restoring geomagnetic activity indices, built around a software-computational neural network complex. Each neural network experiment carried out with this complex aims to find a specific nonlinear relation between the analyzed indices and parameters. At the core of the program is a scheme combining artificial neural networks (ANN) of different types: a back-propagation Elman network, a feed-forward network, a fuzzy logic network and a Kohonen classification layer. The settings available in the main window of the application allow the user to change the number of hidden layers, the number of neurons per layer, the input and target data, and the number of training cycles. The training process and its quality are monitored through a dynamic plot of the training error, and the result of training is a plot comparing the network response with the test sequence. The last-trained neural network, with its established nonlinear connection, can be rerun for repeated numerical experiments; in that case no additional training is performed, and the previously trained network acts as a filter through which the input parameters are passed and the outputs are compared with the test event. To support large numbers of different experiments, the program can also be run in a "batch" mode. For this purpose the user

  6. Multiscale approach including microfibril scale to assess elastic constants of cortical bone based on neural network computation and homogenization method

    CERN Document Server

    Barkaoui, Abdelwahed; Tarek, Merzouki; Hambli, Ridha; Ali, Mkaddem

    2014-01-01

    The complexity and heterogeneity of bone tissue require a multiscale modelling to understand its mechanical behaviour and its remodelling mechanisms. In this paper, a novel multiscale hierarchical approach including microfibril scale based on hybrid neural network computation and homogenisation equations was developed to link nanoscopic and macroscopic scales to estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained neural network simulation. Finite element (FE) calculation is performed at nanoscopic levels to provide a database to train an in-house neural network program; (iii) in steps 2 to 10 from fibril to continuum cortical bone tissue, homogenisation equations are used to perform the computation at the higher s...

  7. Encoding neural and synaptic functionalities in electron spin: A pathway to efficient neuromorphic computing

    Science.gov (United States)

    Sengupta, Abhronil; Roy, Kaushik

    2017-12-01

    Present day computers expend orders of magnitude more computational resources to perform various cognitive and perception related tasks that humans routinely perform every day. This has recently resulted in a seismic shift in the field of computation where research efforts are being directed to develop a neurocomputer that attempts to mimic the human brain by nanoelectronic components and thereby harness its efficiency in recognition problems. Bridging the gap between neuroscience and nanoelectronics, this paper attempts to provide a review of the recent developments in the field of spintronic device based neuromorphic computing. Description of various spin-transfer torque mechanisms that can be potentially utilized for realizing device structures mimicking neural and synaptic functionalities is provided. A cross-layer perspective extending from the device to the circuit and system level is presented to envision the design of an All-Spin neuromorphic processor enabled with on-chip learning functionalities. Device-circuit-algorithm co-simulation framework calibrated to experimental results suggest that such All-Spin neuromorphic systems can potentially achieve almost two orders of magnitude energy improvement in comparison to state-of-the-art CMOS implementations.

  8. Evolution of teaching and evaluation methodologies: The experience in the computer programming course at the Universidad Nacional de Colombia

    Directory of Open Access Journals (Sweden)

    Jonatan Gomez Perdomo

    2014-05-01

    In this paper, we present the evolution of a computer-programming course at the Universidad Nacional de Colombia (UNAL). The teaching methodology has evolved from a linear and non-standardized methodology to a flexible, non-linear and student-centered one. Our methodology uses an e-learning platform that supports the learning process by offering students and professors custom navigation between the content and material (book chapters, exercises, videos) in an interactive way. Moreover, the platform is open access, and approximately 900 students from the university take this course each term. Our evaluation methodology, in turn, has evolved from static evaluations based on paper tests to an online process based on computer adaptive testing (CAT) that chooses the questions to ask a student and assigns the student a grade according to the student's ability.

  9. Brain without mind: Computer simulation of neural networks with modifiable neuronal interactions

    Science.gov (United States)

    Clark, John W.; Rafelski, Johann; Winston, Jeffrey V.

    1985-07-01

    Aspects of brain function are examined in terms of a nonlinear dynamical system of highly interconnected neuron-like binary decision elements. The model neurons operate synchronously in discrete time, according to deterministic or probabilistic equations of motion. Plasticity of the nervous system, which underlies such cognitive collective phenomena as adaptive development, learning, and memory, is represented by temporal modification of interneuronal connection strengths depending on momentary or recent neural activity. A formal basis is presented for the construction of local plasticity algorithms, or connection-modification routines, spanning a large class. To build an intuitive understanding of the behavior of discrete-time network models, extensive computer simulations have been carried out (a) for nets with fixed, quasirandom connectivity and (b) for nets with connections that evolve under one or another choice of plasticity algorithm. From the former experiments, insights are gained concerning the spontaneous emergence of order in the form of cyclic modes of neuronal activity. In the course of the latter experiments, a simple plasticity routine (“brainwashing,” or “anti-learning”) was identified which, applied to nets with initially quasirandom connectivity, creates model networks which provide more felicitous starting points for computer experiments on the engramming of content-addressable memories and on learning more generally. The potential relevance of this algorithm to developmental neurobiology and to sleep states is discussed. The model considered is at the same time a synthesis of earlier synchronous neural-network models and an elaboration upon them; accordingly, the present article offers both a focused review of the dynamical properties of such systems and a selection of new findings derived from computer simulation.

  10. A Solution Methodology and Computer Program to Efficiently Model Thermodynamic and Transport Coefficients of Mixtures

    Science.gov (United States)

    Ferlemann, Paul G.

    2000-01-01

    A solution methodology has been developed to efficiently model multi-species, chemically frozen, thermally perfect gas mixtures. The method relies on the ability to generate a single (composite) set of thermodynamic and transport coefficients prior to beginning a CFD solution. While not fundamentally a new concept, many applied CFD users are not aware of this capability nor have a mechanism to easily and confidently generate new coefficients. A database of individual species property coefficients has been created for 48 species. The seven-coefficient form of the thermodynamic functions is currently used rather than the ten-coefficient form due to the similarity of the calculated properties, the low-temperature behavior and reduced CPU requirements. Sutherland laminar viscosity and thermal conductivity coefficients were computed in a consistent manner from available reference curves. A computer program has been written to provide CFD users with a convenient method to generate composite species coefficients for any mixture. Mach 7 forebody/inlet calculations demonstrated nearly equivalent results and significant CPU time savings compared to a multi-species solution approach. Results from high-speed combustor analysis also illustrate the ability to model inert test gas contaminants without additional computational expense.
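
    The two property forms named here are standard and easy to state in code. The sketch below evaluates cp(T) from a seven-coefficient thermodynamic polynomial (cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4, with a6 and a7 fixing enthalpy and entropy) and viscosity from the Sutherland law. The polynomial coefficients are made up for illustration; the Sutherland constants are the familiar air values.

    ```python
    R = 8.314462618  # J/(mol K)

    def cp_nasa7(T, a):
        """cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4 (a6, a7 fix h and s)."""
        return R * (a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4)

    def mu_sutherland(T, mu_ref, T_ref, S):
        """mu = mu_ref * (T/T_ref)^1.5 * (T_ref + S) / (T + S)."""
        return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

    a_demo = [3.5, 1e-4, 0.0, 0.0, 0.0, 0.0, 0.0]        # made-up coefficients
    print("cp(1000 K) =", round(cp_nasa7(1000.0, a_demo), 2), "J/(mol K)")
    print("mu(1000 K) =", mu_sutherland(1000.0, 1.716e-5, 273.15, 110.4), "Pa s")
    ```

    A composite mixture coefficient set of the kind the abstract describes could then be formed by mole-fraction weighting of the individual species coefficients.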

  11. Methodology based on neural networks for earth resistivity interpretation in congested urban areas

    Directory of Open Access Journals (Sweden)

    Miguel Martínez Lozano

    2015-04-01

    One of the main problems faced during the design of a grounding system in congested urban areas is obtaining the electrical parameters of the soil, since the traditional resistivity measurement techniques cannot be applied due to the limited space. In the present work, an alternative procedure is presented, based on introducing a driven vertical rod into the soil and registering how the ground resistance of the electrode varies with depth. With the field measurements obtained, a methodology based on neural networks was developed to estimate the soil parameters for a simplified two-layer soil model, using a trained neural network to minimize the effort and time required to obtain the results. The difficulties of measuring and estimating electrical soil properties in congested urban areas are thus addressed with the detailed methodology presented, based on non-conventional measurement techniques and computational processing. The results obtained, both in digital simulation and in field tests, demonstrate the validity of the proposed approach.

  12. Applications of response surface methodology and artificial neural network for decolorization of distillery spent wash by using activated Piper nigrum.

    Science.gov (United States)

    Arulmathi, P; Elangovan, G

    2016-11-01

    Ethanol production from sugarcane molasses yields a large volume of highly colored spent wash as effluent. This color is imparted by the recalcitrant melanoidin pigment produced by the Maillard reaction. In the present work, decolourization of melanoidin was carried out using activated carbon prepared from pepper stem (Piper nigrum). The interaction effects between parameters were studied by response surface methodology using a central composite design, and maximum decolourization of 75% was obtained at pH 7.5 and a melanoidin concentration of 32.5 mg l-1 with 1.63 g 100 ml-1 of adsorbent for 2 hr 75 min. Artificial neural networks were also used to optimize the process parameters, giving 74% decolourization for the same parameters. The Langmuir and Freundlich isotherms were applied to describe the biosorption equilibrium; the process was best represented by the Langmuir isotherm, with a correlation coefficient of 0.94. Pseudo-first-order and pseudo-second-order models were applied to describe the biosorption mechanism, and the pseudo-second-order kinetics fitted the experimental data best. The estimated enthalpy change (ΔH) and entropy change (ΔS) of adsorption were 32.195 kJ mol-1 and 115.44 J mol-1 K-1, which indicates that the adsorption of melanoidin was an endothermic process. Continuous adsorption studies were conducted under the optimized conditions, and the breakthrough curve was determined using the experimental data obtained from continuous adsorption: the column gave a breakthrough at 182 min and 176 ml. It was concluded that a column packed with Piper nigrum based activated carbon can be used to remove color from distillery spent wash.
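
    The Langmuir fit reported here follows the standard form qe = qm*KL*Ce/(1 + KL*Ce). A small scipy sketch with invented equilibrium points, not the study's measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(Ce, qm, KL):
        return qm * KL * Ce / (1.0 + KL * Ce)

    Ce = np.array([2.0, 5.0, 10.0, 20.0, 32.5])        # mg/L at equilibrium (toy)
    qe = np.array([4.1, 8.0, 11.9, 15.2, 16.8])        # mg/g adsorbed (toy)

    (qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[20.0, 0.1])
    resid = qe - langmuir(Ce, qm, KL)
    r2 = 1 - np.sum(resid**2) / np.sum((qe - qe.mean())**2)
    print(f"qm = {qm:.2f} mg/g, KL = {KL:.3f} L/mg, R2 = {r2:.3f}")
    ```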

  13. Estimation of design space for an extrusion-spheronization process using response surface methodology and artificial neural network modelling.

    Science.gov (United States)

    Sovány, Tamás; Tislér, Zsófia; Kristó, Katalin; Kelemen, András; Regdon, Géza

    2016-09-01

    The application of the Quality by Design principles is one of the key issues of recent pharmaceutical developments. In the past decade a lot of knowledge has been collected about the practical realization of the concept, but there are still many unanswered questions. The key requirement of the concept is the mathematical description of the effect of the critical factors and their interactions on the critical quality attributes (CQAs) of the product. The process design space (PDS) is usually determined by the use of design of experiment (DoE) based response surface methodologies (RSM), but inaccuracies in the applied polynomial models often result in over- or underestimation of the real trends and changes, making the calculations uncertain, especially in the edge regions of the PDS. Completing RSM with artificial neural network (ANN) based models is therefore a commonly used method to reduce these uncertainties. Nevertheless, since different studies each focus on the use of a given DoE, there is a lack of comparative studies on different experimental layouts. Therefore, the aim of the present study was to investigate the effect of different DoE layouts (2-level full factorial, Central Composite, Box-Behnken, 3-level fractional and 3-level full factorial designs) on model predictability and to compare model sensitivities according to the organization of the experimental data set. It was revealed that the size of the design space could differ by more than 40% when calculated with different polynomial models, which was associated with a considerable shift in its position when higher-level layouts were applied. The shift was more considerable when the calculation was based on RSM. The model predictability was also better with ANN based models. Nevertheless, both modelling methods exhibit considerable sensitivity to the organization of the experimental data set, and the use of design layouts where the extreme values of the factors are better represented is recommended.

  14. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics.Each chapter is self-contained and focuses on

  15. Dual Coding Theory Explains Biphasic Collective Computation in Neural Decision-Making.

    Science.gov (United States)

    Daniels, Bryan C; Flack, Jessica C; Krakauer, David C

    2017-01-01

    A central question in cognitive neuroscience is how unitary, coherent decisions at the whole organism level can arise from the distributed behavior of a large population of neurons with only partially overlapping information. We address this issue by studying neural spiking behavior recorded from a multielectrode array with 169 channels during a visual motion direction discrimination task. It is well known that in this task there are two distinct phases in neural spiking behavior. Here we show Phase I is a distributed or incompressible phase in which uncertainty about the decision is substantially reduced by pooling information from many cells. Phase II is a redundant or compressible phase in which numerous single cells contain all the information present at the population level in Phase I, such that the firing behavior of a single cell is enough to predict the subject's decision. Using an empirically grounded dynamical modeling framework, we show that in Phase I large cell populations with low redundancy produce a slow timescale of information aggregation through critical slowing down near a symmetry-breaking transition. Our model indicates that increasing collective amplification in Phase II leads naturally to a faster timescale of information pooling and consensus formation. Based on our results and others in the literature, we propose that a general feature of collective computation is a "coding duality" in which there are accumulation and consensus formation processes distinguished by different timescales.

  16. Computer-Aided Diagnosis of Parkinson's Disease Using Enhanced Probabilistic Neural Network.

    Science.gov (United States)

    Hirschauer, Thomas J; Adeli, Hojjat; Buford, John A

    2015-11-01

    Early and accurate diagnosis of Parkinson's disease (PD) remains challenging. Neuropathological studies using brain bank specimens have estimated that a large percentage of clinical diagnoses of PD may be incorrect, especially in the early stages. In this paper, a comprehensive computer model is presented for the diagnosis of PD based on motor, non-motor, and neuroimaging features using the recently-developed enhanced probabilistic neural network (EPNN). The model is tested for differentiating PD patients from those with scans without evidence of dopaminergic deficit (SWEDDs) using the Parkinson's Progression Markers Initiative (PPMI) database, an observational, multi-center study designed to identify PD biomarkers for diagnosis and disease progression. The results are compared to four other commonly-used machine learning algorithms: the probabilistic neural network (PNN), support vector machine (SVM), k-nearest neighbors (k-NN) algorithm, and classification tree (CT). The EPNN had the highest classification accuracy at 92.5%, followed by the PNN (91.6%), k-NN (90.8%) and CT (90.2%). The EPNN exhibited an accuracy of 98.6% when classifying healthy controls (HC) versus PD, higher than in any previous studies.

  17. Computational and methodological aspects of terrestrial surface analysis based on point clouds

    Science.gov (United States)

    Rychkov, Igor; Brasington, James; Vericat, Damià

    2012-05-01

    Processing of high-resolution terrestrial laser scanning (TLS) point clouds presents methodological and computational challenges before a geomorphological analysis can be carried out. We present a software library that effectively deals with billions of points and implements a simple methodology to study the surface profile and roughness. Adequate performance and scalability were achieved through the use of 64-bit memory-mapped files, regular 2D grid sorting, and parallel processing. The plethora of spatial scales found in a TLS dataset was grouped into a "ground" model at the grid scale and per-cell, sub-grid surface roughness. We used centroid-thinning to build a piecewise linear ground model, and studied the "detrended" standard deviation of relative elevations as a measure of surface roughness. Two applications to point clouds from gravel river bed surveys are described. Empirically linking the standard deviation to grain size allowed us to retrieve morphological and sedimentological models of channel topology evolution and gravel movement, with richer quantitative results and deeper insights than previous survey techniques.
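
    The ground-model and roughness computation described here reduces, per grid cell, to removing a least-squares plane and taking the standard deviation of the residual elevations. A minimal sketch under that reading follows; the function name and synthetic data are our assumptions, not the authors' library.

    ```python
    import numpy as np

    def cell_roughness(points):
        """Detrended standard deviation of elevations for one grid cell.

        points: (N, 3) array of x, y, z coordinates falling in the cell.
        A least-squares plane z = a*x + b*y + c is removed first, so the
        returned value measures sub-grid roughness rather than slope.
        """
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        A = np.column_stack([x, y, np.ones_like(x)])
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        residuals = z - A @ coeffs
        return residuals.std()

    # Synthetic cell: a tilted plane plus 5 cm of random "gravel" texture.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 2, size=(1000, 2))
    z = 0.3 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0, 0.05, size=1000)
    print(cell_roughness(np.column_stack([xy, z])))  # ~0.05
    ```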

  18. Attack Methodology Analysis: Emerging Trends in Computer-Based Attack Methodologies and Their Applicability to Control System Networks

    Energy Technology Data Exchange (ETDEWEB)

    Bri Rolston

    2005-06-01

    Threat characterization is a key component in evaluating the threat faced by control systems. Without a thorough understanding of the threat faced by critical infrastructure networks, adequate resources cannot be allocated or directed effectively to the defense of these systems. Traditional methods of threat analysis focus on identifying the capabilities and motivations of a specific attacker, assessing the value the adversary would place on targeted systems, and deploying defenses according to the threat posed by the potential adversary. However, so many effective exploits and tools are readily accessible to anyone with an Internet connection, minimal technical skill, and modest motivation that the field of potential adversaries can no longer be narrowed effectively. Understanding how hackers evaluate new IT security research and incorporate significant new ideas into their own tools provides a means of anticipating how IT systems are most likely to be attacked in the future. This research, Attack Methodology Analysis (AMA), could supply pertinent information on how to detect and stop new types of attacks. Since the exploit methodologies and attack vectors developed in the general Information Technology (IT) arena can be converted for use against control system environments, assessing areas in which cutting-edge exploit development and remediation techniques are occurring can provide significant intelligence for control system network exploitation and defense, and a means of assessing threat without identifying the specific capabilities of individual opponents. Attack Methodology Analysis begins with the study of the exploit technology and attack methodologies being developed in the Information Technology (IT) security research community, both black hat and white hat. Once a solid understanding of cutting-edge security research is established, emerging trends in attack methodology can be identified and the gap between

  19. Selection of meteorological parameters affecting rainfall estimation using neuro-fuzzy computing methodology

    Science.gov (United States)

    Hashim, Roslan; Roy, Chandrabhushan; Motamedi, Shervin; Shamshirband, Shahaboddin; Petković, Dalibor; Gocic, Milan; Lee, Siew Cheng

    2016-05-01

    Rainfall is a complex atmospheric process that varies over time and space. Researchers have used various empirical and numerical methods to enhance estimation of rainfall intensity. In this study, we developed a novel prediction model, with emphasis on accuracy, to identify the meteorological parameters that most significantly affect rainfall. For this, we used five input parameters: wet day frequency (dwet), vapor pressure (e̅a), maximum and minimum air temperatures (Tmax and Tmin), and cloud cover (cc). The data were obtained from the Indian Meteorological Department for the city of Patna, Bihar, India. Further, a type of soft-computing method known as the adaptive neuro-fuzzy inference system (ANFIS) was applied to the available data. In this respect, observation data from 1901 to 2000 were employed for testing, validating, and estimating monthly rainfall via the simulated model. In addition, the ANFIS process for variable selection was implemented to detect the predominant variables affecting rainfall prediction. Finally, the performance of the model was compared to other soft-computing approaches, including the artificial neural network (ANN), support vector machine (SVM), extreme learning machine (ELM), and genetic programming (GP). The results revealed that ANN, ELM, ANFIS, SVM, and GP had R2 values of 0.9531, 0.9572, 0.9764, 0.9525, and 0.9526, respectively. We therefore conclude that ANFIS is the best of these methods for predicting monthly rainfall. Moreover, dwet was found to be the most influential parameter for rainfall prediction and the best single predictor. This study also identified the sets of two and three meteorological parameters that give the best predictions.

  20. Learning to minimize efforts versus maximizing rewards: computational principles and neural correlates.

    Science.gov (United States)

    Skvortsova, Vasilisa; Palminteri, Stefano; Pessiglione, Mathias

    2014-11-19

    The mechanisms of reward maximization have been extensively studied at both the computational and neural levels. By contrast, little is known about how the brain learns to choose the options that minimize action cost. In principle, the brain could have evolved a general mechanism that applies the same learning rule to the different dimensions of choice options. To test this hypothesis, we scanned healthy human volunteers while they performed a probabilistic instrumental learning task that varied in both the physical effort and the monetary outcome associated with choice options. Behavioral data showed that the same computational rule, using prediction errors to update expectations, could account for both reward maximization and effort minimization. However, these learning-related variables were encoded in partially dissociable brain areas. In line with previous findings, the ventromedial prefrontal cortex was found to positively represent expected and actual rewards, regardless of effort. A separate network, encompassing the anterior insula, the dorsal anterior cingulate, and the posterior parietal cortex, correlated positively with expected and actual efforts. These findings suggest that the same computational rule is applied by distinct brain systems, depending on the choice dimension (cost or benefit) that has to be learned. Copyright © 2014 the authors.
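
    The shared learning rule the abstract describes is a standard prediction-error (delta-rule) update. A sketch applying the identical rule to a reward dimension and an effort dimension is shown below; the task probabilities, learning rate and softmax temperature are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    alpha, beta = 0.2, 5.0          # learning rate, softmax inverse temperature
    Q_reward = np.zeros(2)          # expected reward per option
    Q_effort = np.zeros(2)          # expected effort (cost) per option
    p_reward = [0.8, 0.2]           # option 0 pays off more often...
    p_effort = [0.8, 0.2]           # ...but also demands high effort more often

    for trial in range(500):
        # Net value = expected benefit minus expected cost.
        value = Q_reward - Q_effort
        probs = np.exp(beta * value) / np.exp(beta * value).sum()
        choice = rng.choice(2, p=probs)

        reward = float(rng.random() < p_reward[choice])
        effort = float(rng.random() < p_effort[choice])

        # The identical prediction-error rule on both dimensions.
        Q_reward[choice] += alpha * (reward - Q_reward[choice])
        Q_effort[choice] += alpha * (effort - Q_effort[choice])

    print(Q_reward, Q_effort)
    ```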

  1. Convolutional neural networks for P300 detection with application to brain-computer interfaces.

    Science.gov (United States)

    Cecotti, Hubert; Gräser, Axel

    2011-03-01

    A Brain-Computer Interface (BCI) is a specific type of human-computer interface that enables direct communication between humans and computers by analyzing brain measurements. Oddball paradigms are used in BCI to generate event-related potentials (ERPs), like the P300 wave, on targets selected by the user. A P300 speller is based on this principle, where the detection of P300 waves allows the user to write characters. The P300 speller involves two classification problems. The first is to detect the presence of a P300 in the electroencephalogram (EEG). The second is to combine different P300 responses to determine the right character to spell. A new method for the detection of P300 waves is presented. This model is based on a convolutional neural network (CNN). The topology of the network is adapted to the detection of P300 waves in the time domain. Seven classifiers based on the CNN are proposed: four single classifiers with different feature sets and three multiclassifiers. These models are tested and compared on Data set II of the third BCI competition. The best result is obtained with a multiclassifier solution, with a recognition rate of 95.5 percent without channel selection before classification. The proposed approach also provides a new way of analyzing brain activity through the receptive fields of the CNN models.
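
    A minimal PyTorch sketch in the spirit of the detector described here: one convolution across electrodes (spatial filtering), one across time with subsampling, then a small classifier. The layer sizes are illustrative assumptions; the paper's seven classifiers use different topologies and feature sets.

    ```python
    import torch
    import torch.nn as nn

    class P300CNN(nn.Module):
        """Illustrative P300 detector: spatial conv, temporal conv, classifier."""

        def __init__(self, n_channels=64, n_samples=78):
            super().__init__()
            self.spatial = nn.Conv2d(1, 10, kernel_size=(n_channels, 1))
            self.temporal = nn.Conv2d(10, 50, kernel_size=(1, 13), stride=(1, 13))
            self.classify = nn.Sequential(
                nn.Flatten(),
                nn.Linear(50 * (n_samples // 13), 100), nn.Sigmoid(),
                nn.Linear(100, 2),                      # P300 vs. non-P300
            )

        def forward(self, x):            # x: (batch, 1, channels, samples)
            x = torch.tanh(self.spatial(x))
            x = torch.tanh(self.temporal(x))
            return self.classify(x)

    model = P300CNN()
    epoch = torch.randn(8, 1, 64, 78)    # 8 dummy EEG epochs
    print(model(epoch).shape)            # torch.Size([8, 2])
    ```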

  2. Can computational efficiency alone drive the evolution of modularity in neural networks?

    Science.gov (United States)

    Tosh, Colin R

    2016-08-30

    Some biologists have abandoned the idea that computational efficiency in processing multipart tasks or input sets alone drives the evolution of modularity in biological networks. A recent study confirmed that small modular (neural) networks are relatively computationally inefficient, but that large modular networks are slightly more efficient than non-modular ones. The present study determines whether these efficiency advantages with network size can drive the evolution of modularity in networks whose connective architecture can evolve. The answer is no, but the reason why is interesting. All simulations (run in a wide variety of parameter states) involving gradualistic connective evolution end in non-modular local attractors. Thus, while a high-performance modular attractor exists, it cannot be reached by gradualistic evolution. Non-gradualistic evolutionary simulations, in which multi-modularity is obtained through duplication of existing architecture, appear viable. Fundamentally, this study indicates that computational efficiency alone does not drive the evolution of modularity, even in large biological networks, but it may still be a viable mechanism when networks evolve by non-gradualistic means.

  3. Computer-Aided Cobb Measurement Based on Automatic Detection of Vertebral Slopes Using Deep Neural Network.

    Science.gov (United States)

    Zhang, Junhua; Li, Hongjian; Lv, Liang; Zhang, Yufeng

    2017-01-01

    To develop a computer-aided method that reduces the variability of Cobb angle measurement for scoliosis assessment. A deep neural network (DNN) was trained with vertebral patches extracted from spinal model radiographs. The Cobb angle of the spinal curve was calculated automatically from the vertebral slopes predicted by the DNN. Sixty-five in vivo radiographs and 40 model radiographs were analyzed. An experienced surgeon performed manual measurements on these radiographs, and two examiners analyzed them using both the proposed and the manual measurement methods. For model radiographs, the intraclass correlation coefficients were greater than 0.98 and the mean absolute differences were less than 3°, indicating that the proposed system showed high repeatability for measurements of model radiographs. For the in vivo radiographs, the reliabilities were lower than those for the model radiographs, and the differences between the computer-aided measurement and the surgeon's manual measurement were higher than 5°. The variability of Cobb angle measurements can be reduced if the DNN system is trained with enough vertebral patches; training data from in vivo radiographs must be included to improve the performance of the DNN. Vertebral slopes can be predicted by the DNN, and the computer-aided system can be used to perform automatic measurements of the Cobb angle, supporting reliable and objective assessments of scoliosis.
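
    Given per-vertebra slopes predicted by the DNN, the Cobb angle is the angle between the most-tilted superior and inferior end vertebrae, i.e. the spread of the slopes. A small sketch with made-up slope values:

    ```python
    import numpy as np

    def cobb_angle(slopes_deg):
        """Cobb angle (degrees) from per-vertebra slopes: the difference
        between the extreme slopes along the curve."""
        slopes = np.asarray(slopes_deg, dtype=float)
        return slopes.max() - slopes.min()

    # Hypothetical slopes for ten vertebrae along a scoliotic curve.
    print(cobb_angle([-12.0, -9.5, -5.0, -1.0, 3.5, 8.0, 12.5, 15.0, 13.0, 10.0]))  # 27.0
    ```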

  4. Neural and computational processes underlying dynamic changes in self-esteem

    Science.gov (United States)

    Rutledge, Robb B; Moutoussis, Michael; Dolan, Raymond J

    2017-01-01

    Self-esteem is shaped by the appraisals we receive from others. Here, we characterize neural and computational mechanisms underlying this form of social influence. We introduce a computational model that captures fluctuations in self-esteem engendered by prediction errors that quantify the difference between expected and received social feedback. Using functional MRI, we show these social prediction errors correlate with activity in ventral striatum/subgenual anterior cingulate cortex, while updates in self-esteem resulting from these errors co-varied with activity in ventromedial prefrontal cortex (vmPFC). We linked computational parameters to psychiatric symptoms using canonical correlation analysis to identify an ‘interpersonal vulnerability’ dimension. Vulnerability modulated the expression of prediction error responses in anterior insula and insula-vmPFC connectivity during self-esteem updates. Our findings indicate that updating of self-evaluative beliefs relies on learning mechanisms akin to those used in learning about others. Enhanced insula-vmPFC connectivity during updating of those beliefs may represent a marker for psychiatric vulnerability. PMID:29061228
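
    The model described here can be summarized by a standard prediction-error scheme. The notation below is ours, a sketch rather than the paper's exact equations: f_t is the received social feedback, E_t the expected feedback, S_t self-esteem, and η and w free parameters.

    ```latex
    \begin{align*}
      \delta_t &= f_t - E_t            && \text{(social prediction error)} \\
      E_{t+1}  &= E_t + \eta\,\delta_t && \text{(updated feedback expectation)} \\
      S_{t+1}  &= S_t + w\,\delta_t    && \text{(self-esteem update)}
    \end{align*}
    ```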

  5. Exact computation of the Maximum Entropy Potential of spiking neural networks models

    CERN Document Server

    Cofre, Rodrigo

    2014-01-01

    Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The Maximum Entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. But, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuro-mimetic models) provide a probabilistic mapping between stimulus, network architecture and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuro-mimetic and Maximum Entropy models.

  6. Application of artificial neural networks to identify equilibration in computer simulations

    Science.gov (United States)

    Leibowitz, Mitchell H.; Miller, Evan D.; Henry, Michael M.; Jankowski, Eric

    2017-11-01

    Determining which microstates generated by a thermodynamic simulation are representative of the ensemble for which sampling is desired is a ubiquitous, underspecified problem. Artificial neural networks are one type of machine learning algorithm that can provide a reproducible way to apply pattern recognition heuristics to underspecified problems. Here we use the open-source TensorFlow machine learning library and apply it to the problem of identifying which hypothetical observation sequences from a computer simulation are “equilibrated” and which are not. We generate training and test populations of observation sequences with embedded linear and exponential correlations. We train a two-neuron artificial network to distinguish the correlated and uncorrelated sequences. We find that this simple network is sufficient for >98% accuracy in identifying exponentially decaying energy trajectories from molecular simulations.
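
    A minimal TensorFlow/Keras sketch of the idea: a two-neuron hidden layer trained to separate decaying ("unequilibrated") from stationary ("equilibrated") observation sequences. The synthetic data generation here is our assumption for illustration; the study's training populations were constructed differently.

    ```python
    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(0)
    T, N = 50, 2000

    # "Unequilibrated" sequences decay exponentially toward a plateau;
    # "equilibrated" ones fluctuate around a constant mean.
    t = np.linspace(0, 5, T)
    decaying = np.exp(-t) + rng.normal(0, 0.05, (N // 2, T))
    flat = rng.normal(0, 0.05, (N // 2, T))
    X = np.vstack([decaying, flat]).astype("float32")
    y = np.concatenate([np.ones(N // 2), np.zeros(N // 2)]).astype("float32")

    # A two-neuron hidden layer, as in the abstract, is already sufficient.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(T,)),
        tf.keras.layers.Dense(2, activation="tanh"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=20, validation_split=0.2, verbose=0)
    print(model.evaluate(X, y, verbose=0))  # loss, accuracy
    ```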

  7. Recurrent neural networks in computer-based clinical decision support for laryngopathies: an experimental study.

    Science.gov (United States)

    Szkoła, Jarosław; Pancerz, Krzysztof; Warchoł, Jan

    2011-01-01

    The main goal of this paper is to give the basis for creating a computer-based clinical decision support (CDS) system for laryngopathies. One of the approaches that can be used in the proposed CDS is speech signal analysis using recurrent neural networks (RNNs). RNNs can be used for pattern recognition in time-series data due to their ability to memorize information from the past. The Elman networks (ENs) are a classical representative of RNNs. To improve the learning ability of ENs, we may modify them and combine them with another kind of RNN, namely the Jordan networks. The modified Elman-Jordan networks (EJNs) converge faster and more accurately to the target pattern. Validation experiments were carried out on speech signals of patients from the control group and patients with two kinds of laryngopathies.
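
    A numerical sketch of the hybrid architecture: the hidden layer receives the input, its own previous state (Elman context) and the previous output (Jordan context). Shapes and initialization are illustrative assumptions, and training (e.g. backpropagation through time) is omitted.

    ```python
    import numpy as np

    def elman_jordan_step(x_t, h_prev, y_prev, W):
        """One step of a modified Elman-Jordan network."""
        h = np.tanh(W["xh"] @ x_t + W["hh"] @ h_prev + W["yh"] @ y_prev + W["bh"])
        y = np.tanh(W["hy"] @ h + W["by"])
        return h, y

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 4, 8, 2
    W = {
        "xh": rng.normal(0, 0.3, (n_hid, n_in)),
        "hh": rng.normal(0, 0.3, (n_hid, n_hid)),   # Elman context weights
        "yh": rng.normal(0, 0.3, (n_hid, n_out)),   # Jordan context weights
        "bh": np.zeros(n_hid),
        "hy": rng.normal(0, 0.3, (n_out, n_hid)),
        "by": np.zeros(n_out),
    }
    h, y = np.zeros(n_hid), np.zeros(n_out)
    for x_t in rng.normal(size=(10, n_in)):   # a short dummy signal
        h, y = elman_jordan_step(x_t, h, y, W)
    print(y)
    ```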

  8. Assessment of locomotion in chlorine exposed mice by computer vision and neural networks.

    Science.gov (United States)

    Filippidis, Aristotelis S; Zarogiannis, Sotirios G; Randich, Alan; Ness, Timothy J; Matalon, Sadis

    2012-03-01

    Assessment of locomotion following exposure of animals to noxious or painful stimuli can offer significant insights into underlying mechanisms of injury and the effectiveness of various treatments. We developed a novel method to track the movement of mice in two dimensions using computer vision and neural network algorithms. Using this system, we demonstrated that mice exposed to chlorine (Cl₂) gas developed impaired locomotion and increased immobility for up to 9 h postexposure. Postexposure administration of buprenorphine, a common analgesic agent, increased locomotion and decreased immobility times in Cl₂- but not air-exposed mice, most likely by decreasing Cl₂-induced pain. This method can be adapted to assess the effectiveness of various therapies following exposure to a variety of chemical and behavioral noxious stimuli.

  9. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography.

    Science.gov (United States)

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Dogˇa

    2017-03-01

    This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy from the evaluation of synthetic data with various noise ratios. It is further validated with experimental data of four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNN has also great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.

  10. Goal-Directed Behavior and Instrumental Devaluation: A Neural System-Level Computational Model.

    Science.gov (United States)

    Mannella, Francesco; Mirolli, Marco; Baldassarre, Gianluca

    2016-01-01

    Devaluation is the key experimental paradigm used to demonstrate the presence of instrumental behaviors guided by goals in mammals. We propose a neural system-level computational model to address the question of which brain mechanisms allow the current value of rewards to control instrumental actions. The model pivots on, and shows the computational soundness of, the hypothesis that the internal representations of instrumental manipulanda (e.g., levers) activate the representations of rewards (or "action-outcomes", e.g., foods) while attributing to them a value that depends on the current internal state of the animal (e.g., satiation for some but not all foods). The model also proposes an initial hypothesis of the integrated system of key brain components supporting this process and allowing the recalled outcomes to bias action selection: (a) the sub-system formed by the basolateral amygdala and insular cortex, which acquires the manipulanda-outcome associations and attributes the current value to the outcomes; (b) three basal ganglia-cortical loops, which select respectively goals, associative sensory representations, and actions; (c) the cortico-cortical and striato-nigro-striatal neural pathways, which support the selection, and selection learning, of actions based on habits and goals. The model reproduces and explains the results of several devaluation experiments carried out with control rats and rats with pre- and post-training lesions of the basolateral amygdala, the nucleus accumbens core, the prelimbic cortex, and the dorso-medial striatum. The results support the soundness of the hypotheses of the model and show its capacity to integrate, at the system level, the operations of the key brain structures underlying devaluation. Based on its hypotheses and predictions, the model also represents an operational framework to support the design and analysis of new experiments on the motivational aspects of goal-directed behavior.

  11. Is Neural Activity Detected by ERP-Based Brain-Computer Interfaces Task Specific?

    Directory of Open Access Journals (Sweden)

    Markus A Wenzel

    Full Text Available Brain-computer interfaces (BCIs) that are based on event-related potentials (ERPs) can estimate to which stimulus a user pays particular attention. In typical BCIs, the user silently counts the selected stimulus (which is repeatedly presented among other stimuli) in order to focus the attention. The stimulus of interest is then inferred from the electroencephalogram (EEG). Detecting attention allocation implicitly could also be beneficial for human-computer interaction (HCI), because it would allow software to adapt to the user's interest. However, a counting task would be inappropriate for the envisaged implicit application in HCI. Therefore, the question was addressed whether the detectable neural activity is specific to silent counting, or whether it can also be evoked by other tasks that direct the attention to certain stimuli. Thirteen people performed a silent counting, an arithmetic and a memory task. The tasks required the subjects to pay particular attention to target stimuli of a random color. The stimulus presentation was the same in all three tasks, which allowed a direct comparison of the experimental conditions. Classifiers that were trained to detect the targets in one task, according to patterns present in the EEG signal, could detect targets in all other tasks (irrespective of some task-related differences in the EEG). The neural activity detected by the classifiers is not strictly task specific but can be generalized over tasks and is presumably a result of the attention allocation or of the augmented workload. The results may hold promise for the transfer of classification algorithms from BCI research to implicit relevance detection in HCI.

  12. The Study of Learners' Preference for Visual Complexity on Small Screens of Mobile Computers Using Neural Networks

    Science.gov (United States)

    Wang, Lan-Ting; Lee, Kun-Chou

    2014-01-01

    Vision plays an important role in educational technologies because it serves quite important functions in teaching and learning. In this paper, learners' preference for visual complexity on the small screens of mobile computers is studied using neural networks. The visual complexity in this study is divided into five…

  13. Data systems and computer science: Neural networks base R/T program overview

    Science.gov (United States)

    Gulati, Sandeep

    1991-01-01

    The research base, in the U.S. and abroad, for the development of neural network technology is discussed. The technical objectives are to develop and demonstrate adaptive, neural information processing concepts. The leveraging of external funding is also discussed.

  14. A hybrid fuzzy-neural system for computer-aided diagnosis of ultrasound kidney images using prominent features.

    Science.gov (United States)

    Bommanna Raja, K; Madheswaran, M; Thyagarajah, K

    2008-02-01

    The objective of this work is to develop and implement a computer-aided decision support system for the automated diagnosis and classification of ultrasound kidney images. The proposed method distinguishes three kidney categories, namely normal, medical renal diseases and cortical cyst. For each pre-processed ultrasound kidney image, 36 features are extracted. Two types of decision support systems, an optimized multi-layer back propagation network and a hybrid fuzzy-neural system, have been developed with these features for classifying the kidney categories. The performance of the hybrid fuzzy-neural system is compared with the optimized multi-layer back propagation network in terms of classification efficiency and training and testing time. The results obtained show that the fuzzy-neural system provides higher classification efficiency with minimum training and testing time. It has also been found that, instead of using all 36 features, ranking the features enhances classification efficiency. The outputs of the decision support systems are validated with a medical expert to measure the actual efficiency. The overall discriminating capability of the systems is assessed with the performance evaluation measure f-score. It has been observed that the performance of the fuzzy-neural system is superior compared to the optimized multi-layer back propagation network. Such a hybrid fuzzy-neural system, with feature extraction algorithms and a pre-processing scheme, helps in developing computer-aided diagnosis systems for ultrasound kidney images and can be used as a secondary observer in clinical decision making.
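
    The f-score referred to here is, in its usual form, the harmonic mean of precision and recall; the abstract does not give the formula, so the standard definition is reproduced below for reference.

    ```latex
    F = 2\,\frac{\mathrm{precision}\cdot\mathrm{recall}}
                {\mathrm{precision}+\mathrm{recall}},
    \qquad
    \mathrm{precision} = \frac{TP}{TP+FP}, \quad
    \mathrm{recall} = \frac{TP}{TP+FN}.
    ```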

  15. Corticostriatal response selection in sentence production: Insights from neural network simulation with reservoir computing.

    Science.gov (United States)

    Hinaut, Xavier; Lance, Florian; Droin, Colas; Petit, Maxime; Pointeau, Gregoire; Dominey, Peter Ford

    2015-11-01

    Language production requires selection of the appropriate sentence structure to accommodate the communication goal of the speaker - the transmission of a particular meaning. Here we consider event meanings, in terms of predicates and thematic roles, and we address the problem that a given event can be described from multiple perspectives, which poses a problem of response selection. We present a model of response selection in sentence production that is inspired by the primate corticostriatal system. The model is implemented in the context of reservoir computing, where the reservoir - a recurrent neural network with fixed connections - corresponds to cortex, and the readout corresponds to the striatum. We demonstrate robust learning and generalization properties of the model, and demonstrate its cross-linguistic capabilities in English and Japanese. The results contribute to the argument that the corticostriatal system plays a role in response selection in language production, and to the stance that reservoir computing is a valid potential model of corticostriatal processing. Copyright © 2015 Elsevier Inc. All rights reserved.
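
    A minimal echo state network sketch of the reservoir computing setup described: a fixed random recurrent "cortex" and a trained linear "striatum" readout. The toy delayed-copy task and all parameter values are our assumptions, not the sentence-production model itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_res, n_in = 300, 1

    # Fixed random reservoir ("cortex"): only the readout ("striatum") learns.
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0, 1, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # spectral radius < 1

    def run_reservoir(u):
        states = np.zeros((len(u), n_res))
        x = np.zeros(n_res)
        for t, u_t in enumerate(u):
            x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
            states[t] = x
        return states

    # Toy task: reproduce a delayed copy of the input signal.
    u = rng.uniform(-1, 1, 2000)
    target = np.roll(u, 5)
    S = run_reservoir(u)[100:]                    # drop the initial transient
    y = target[100:]

    # Ridge-regression readout (the only trained component).
    ridge = 1e-6
    W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
    pred = S @ W_out
    print("MSE on the training signal:", np.mean((pred - y) ** 2))
    ```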

  16. Computational connectionism within neurons: A model of cytoskeletal automata subserving neural networks

    Science.gov (United States)

    Rasmussen, Steen; Karampurwala, Hasnain; Vaidyanath, Rajesh; Jensen, Klaus S.; Hameroff, Stuart

    1990-06-01

    “Neural network” models of brain function assume neurons and their synaptic connections to be the fundamental units of information processing, somewhat like switches within computers. However, neurons and synapses are extremely complex and resemble entire computers rather than switches. The interiors of the neurons (and other eucaryotic cells) are now known to contain highly ordered parallel networks of filamentous protein polymers collectively termed the cytoskeleton. Originally assumed to provide merely structural “bone-like” support, cytoskeletal structures such as microtubules are now recognized to organize cell interiors dynamically. The cytoskeleton is the internal communication network for the eucaryotic cell, both by means of simple transport and by means of coordinating extremely complicated events like cell division, growth and differentiation. The cytoskeleton may therefore be viewed as the cell's “nervous system”. Consequently the neuronal cytoskeleton may be involved in molecular-level information processing which subserves higher, collective neuronal functions ultimately relating to cognition. Numerous models of information processing within the cytoskeleton (in particular, microtubules) have been proposed. We have utilized cellular automata as a means to model and demonstrate the potential for information processing in cytoskeletal microtubules. In this paper, we extend previous work and simulate associative learning in a cytoskeletal network as well as assembly and disassembly of microtubules. We also discuss the possible relevance and implications of cytoskeletal information processing to cognition.

  17. Computer vision-based method for classification of wheat grains using artificial neural network.

    Science.gov (United States)

    Sabanci, Kadir; Kayabasi, Ahmet; Toktas, Abdurrahim

    2017-06-01

    A simplified computer vision-based application using an artificial neural network (ANN) based on a multilayer perceptron (MLP) for accurately classifying wheat grains into bread or durum is presented. The images of 100 bread and 100 durum wheat grains are taken via a high-resolution camera and subjected to pre-processing. The main visual features of four dimensions, three colors and five textures are acquired using image-processing techniques (IPTs). A total of 21 visual features are reproduced from the 12 main features to diversify the input population for training and testing the ANN model. The data sets of visual features are considered as input parameters of the ANN model. The ANN with four different input data subsets is modelled to classify the wheat grains into bread or durum. The ANN model is trained with 180 grains and its accuracy tested with 20 grains from a total of 200 wheat grains. The seven input parameters that most affect the classification results are determined using the correlation-based CfsSubsetEval algorithm to simplify the ANN model. The results of the ANN models are compared in terms of accuracy rate. The best result is achieved with a mean absolute error (MAE) of 9.8 × 10⁻⁶ by the simplified ANN model. This shows that the proposed classifier based on computer vision can be successfully exploited to automatically classify a variety of grains. © 2016 Society of Chemical Industry.

  18. Classification of dried vegetables using computer image analysis and artificial neural networks

    Science.gov (United States)

    Koszela, K.; Łukomski, M.; Mueller, W.; Górna, K.; Okoń, P.; Boniecki, P.; Zaborowicz, M.; Wojcieszak, D.

    2017-07-01

    In recent years, there has been a continuously increasing demand for vegetables and dried vegetables. This trend is driving the growth of the dehydration industry in Poland, helping to exploit excess production. Dried vegetables are used more and more often in various sectors of the food industry, both for their high nutritional qualities and because of changes in consumers' food preferences. As consumer awareness of healthy lifestyles grows and health food booms, consumption of such food increases, which means that production and crop area can increase further. Among dried vegetables, dried carrots play a strategic role due to their wide application range and high nutritional value. They contain high concentrations of carotene, and their sugar is present in the form of crystals. Carrots are also the vegetables most often subjected to a wide range of dehydration processes, which makes a reliable qualitative assessment and classification of the dried product difficult. Among the many qualitative properties that determine a positive or negative quality assessment of dried carrots are colour and shape. The aim of the research was to develop and implement a computer system for the recognition and classification of freeze-dried, convection-dried and microwave vacuum-dried products using the methods of computer image analysis and artificial neural networks.

  19. Ensemble of Neural Network Conditional Random Fields for Self-Paced Brain Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Hossein Bashashati

    2017-07-01

    Full Text Available Classification of EEG signals in self-paced Brain Computer Interfaces (BCI) is an extremely challenging task. The main difficulty stems from the fact that the start time of a control task is not defined. Therefore it is imperative to exploit the characteristics of the EEG data to the extent possible. In sensory motor self-paced BCIs, while performing the mental task, the user's brain goes through several well-defined internal state changes. Applying appropriate classifiers that can capture these state changes and exploit the temporal correlation in EEG data can enhance the performance of the BCI. In this paper, we propose an ensemble learning approach for self-paced BCIs. We use Bayesian optimization to train several different classifiers on different parts of the BCI hyperparameter space. We call each of these classifiers a Neural Network Conditional Random Field (NNCRF). An NNCRF is a combination of a neural network and a conditional random field (CRF). As in the standard CRF, the NNCRF is able to model the correlation between adjacent EEG samples. However, the NNCRF can also model the nonlinear dependencies between the input and the output, which makes it more powerful than the standard CRF. We compare the performance of our algorithm to those of three popular sequence labeling algorithms (Hidden Markov Models, Hidden Markov Support Vector Machines and CRF), and to two classical classifiers (Logistic Regression and Support Vector Machines). The classifiers are compared for two cases: when the ensemble learning approach is not used and when it is. The data used in our studies are those from BCI competition IV and the SM2 dataset. We show that our algorithm is considerably superior to the other approaches in terms of the Area Under the Curve (AUC) of the BCI system.

  20. Autocatalytic loop, amplification and diffusion: a mathematical and computational model of cell polarization in neural chemotaxis.

    Directory of Open Access Journals (Sweden)

    Paola Causin

    2009-08-01

    Full Text Available The chemotactic response of cells to graded fields of chemical cues is a complex process that requires the coordination of several intracellular activities. Fundamental steps in obtaining front-vs.-back differentiation in the cell are the localized distribution of internal molecules and the amplification of the external signal. The goal of this work is to develop a mathematical and computational model for the quantitative study of such phenomena in the context of axon chemotactic pathfinding in neural development. In order to perform turning decisions, axons develop front-back polarization in their distal structure, the growth cone. Starting from recent experimental findings on the biased redistribution of receptors on the growth cone membrane, driven by interaction with the cytoskeleton, we propose a model to investigate the significance of this process. Our main contribution is to demonstrate quantitatively that the autocatalytic loop involving receptors, cytoplasmic species and the cytoskeleton is adequate to give rise to the chemotactic behavior of neural cells. We establish that spatial bias in receptors is a key precursory event for the chemotactic response, establishing the necessity of a tight link between upstream gradient sensing and downstream cytoskeleton dynamics. We further analyze crosslinked effects and, among others, the contribution to polarization of internal enzymatic reactions, which produce molecules with a one-to-many amplification factor. The model shows that the enzymatic efficiency of such reactions must overcome a threshold in order to give rise to sufficient amplification, another fundamental precursory step for obtaining polarization. Finally, we address the characteristic attraction/repulsion behavior of axons subjected to the same cue, providing a quantitative indicator of the parameters which most critically determine this nontrivial chemotactic response.

  1. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning

    Science.gov (United States)

    Shin, Hoo-Chang; Roth, Holger R.; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel

    2016-01-01

    Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet) and the revival of deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, layered hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on a natural image dataset (supervised) for medical image tasks (although domain transfer between two medical image datasets is also possible). In this paper, we exploit three important, but previously understudied, factors of employing deep convolutional neural networks for computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in number of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection, with 85% sensitivity at 3 false positives per patient, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance
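
    A short sketch of the transfer-learning recipe discussed: freeze an ImageNet-pretrained backbone and fine-tune a new classification head for a two-class CADe task. The paper evaluated architectures such as CifarNet, AlexNet and GoogLeNet; ResNet-18 here is simply a convenient stand-in, and the weights API shown assumes torchvision >= 0.13 (downloading the weights requires network access).

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    # Load an ImageNet-pretrained backbone (torchvision >= 0.13 API).
    net = models.resnet18(weights="IMAGENET1K_V1")

    # Freeze the pretrained feature extractor...
    for p in net.parameters():
        p.requires_grad = False

    # ...and replace the final layer for a 2-class CADe task
    # (e.g. lymph node vs. background patch; labels are hypothetical).
    net.fc = nn.Linear(net.fc.in_features, 2)

    optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One dummy fine-tuning step on random "patches".
    x = torch.randn(4, 3, 224, 224)
    labels = torch.tensor([0, 1, 0, 1])
    loss = criterion(net(x), labels)
    loss.backward()
    optimizer.step()
    print(float(loss))
    ```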

  2. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    Science.gov (United States)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to index the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state-space vector where fields in the vector correspond to ordered component objects and to relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition-by-components. It also seems to support Marr's notions

  3. Computational Cognition and Robust Decision Making

    Science.gov (United States)

    2013-03-06

    Methodology for identifying sensory neural circuits of the fruit fly brain. Technical approach: dynamic signal processing systems; convex optimization. Bio-inspired computation: J. Wiles (U. Queensland, ITEE); iRAT: Neurorobotic

  4. Computer-Aided Diagnosis Systems for Lung Cancer: Challenges and Methodologies

    Science.gov (United States)

    El-Baz, Ayman; Beache, Garth M.; Gimel'farb, Georgy; Suzuki, Kenji; Okada, Kazunori; Elnakib, Ahmed; Soliman, Ahmed; Abdollahi, Behnoush

    2013-01-01

    This paper overviews one of the most important, interesting, and challenging problems in oncology, the problem of lung cancer diagnosis. Developing an effective computer-aided diagnosis (CAD) system for lung cancer is of great clinical importance and can increase the patient's chance of survival. For this reason, CAD systems for lung cancer have been investigated in a huge number of research studies. A typical CAD system for lung cancer diagnosis is composed of four main processing steps: segmentation of the lung fields, detection of nodules inside the lung fields, segmentation of the detected nodules, and diagnosis of the nodules as benign or malignant. This paper overviews the current state-of-the-art techniques that have been developed to implement each of these CAD processing steps. For each technique, various aspects of technical issues, implemented methodologies, training and testing databases, and validation methods, as well as achieved performances, are described. In addition, the paper addresses several challenges that researchers face in each implementation step and outlines the strengths and drawbacks of the existing approaches for lung cancer CAD systems. PMID:23431282

  5. Baseline Computational Fluid Dynamics Methodology for Longitudinal-Mode Liquid-Propellant Rocket Combustion Instability

    Science.gov (United States)

    Litchford, R. J.

    2005-01-01

    A computational method for the analysis of longitudinal-mode liquid rocket combustion instability has been developed based on the unsteady, quasi-one-dimensional Euler equations where the combustion process source terms were introduced through the incorporation of a two-zone, linearized representation: (1) a two-parameter collapsed combustion zone at the injector face, and (2) a two-parameter distributed combustion zone based on a Lagrangian treatment of the propellant spray. The unsteady Euler equations in inhomogeneous form retain full hyperbolicity and are integrated implicitly in time using second-order, high-resolution, characteristic-based, flux-differencing spatial discretization with Roe-averaging of the Jacobian matrix. This method was initially validated against an analytical solution for nonreacting, isentropic duct acoustics with specified admittances at the inflow and outflow boundaries. For small amplitude perturbations, numerical predictions for the amplification coefficient and oscillation period were found to compare favorably with predictions from linearized small-disturbance theory as long as the grid exceeded a critical density (100 nodes/wavelength). The numerical methodology was then exercised on a generic combustor configuration using both collapsed and distributed combustion zone models with a short nozzle admittance approximation for the outflow boundary. In these cases, the response parameters were varied to determine stability limits defining resonant coupling onset.
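
    For reference, the unsteady quasi-one-dimensional Euler equations in inhomogeneous (source-term) form read as follows, with A(x) the duct cross-section; the grouping of the combustion source terms S_m, S_u, S_e is our notation, not necessarily the report's.

    ```latex
    \frac{\partial}{\partial t}\!
    \begin{pmatrix} \rho A \\ \rho u A \\ \rho e_0 A \end{pmatrix}
    + \frac{\partial}{\partial x}\!
    \begin{pmatrix} \rho u A \\ (\rho u^2 + p)\,A \\ (\rho e_0 + p)\,u A \end{pmatrix}
    =
    \begin{pmatrix} S_m \\ p\,\dfrac{dA}{dx} + S_u \\ S_e \end{pmatrix}
    ```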

  6. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    Science.gov (United States)

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a fruitful method for studying the relative importance of predictor variables and (3) the relationships among the variables involved in the development of burnout and its consequences are non-linear to varying degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse them. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.

  7. Analytical and computational methodology to assess the over pressures generated by a potential catastrophic failure of a cryogenic pressure vessel

    Energy Technology Data Exchange (ETDEWEB)

    Zamora, I.; Fradera, J.; Jaskiewicz, F.; Lopez, D.; Hermosa, B.; Aleman, A.; Izquierdo, J.; Buskop, J.

    2014-07-01

    Idom has participated in the risk evaluation of Safety Important Class (SIC) structures subjected to overpressures generated by a catastrophic failure of a cryogenic pressure vessel at the ITER plant site. The evaluation implemented both analytical and computational methodologies, achieving consistent and robust results. (Author)

  8. The Structural Communication Methodology as a Means of Teaching George Orwell's "Animal Farm": Paper and Computer-Based Instruction.

    Science.gov (United States)

    Romiszowski, Alex; Abrahamson, Andrew

    1994-01-01

    Provides brief history of Structural Communication (SC) methodology and discusses its use for teaching eighth-grade literature. Examines strengths and weaknesses of SC in print and hypermedia formats, and discusses the computer-based implementation of this participatory method of instruction. (JKP)

  9. Computational Assessment of Neural Probe and Brain Tissue Interface under Transient Motion

    Directory of Open Access Journals (Sweden)

    Michael Polanco

    2016-06-01

    Full Text Available The functional longevity of a neural probe is dependent upon its ability to minimize injury risk during the insertion and recording period in vivo, which could be related to motion-induced strain between the probe and the surrounding tissue. A series of finite element analyses was conducted to study the extent of the strain induced within the brain in the area around a neural probe. This study focuses on the transient behavior of the neural probe and brain tissue interface using a viscoelastic model. Different stages of the interface, from initial insertion of the neural probe to full bonding of the probe by astro-glial sheath formation, are simulated utilizing analytical tools to investigate the effects of relative motion between the neural probe and the brain while friction coefficients and kinematic frequencies are varied. The analyses can provide an in-depth look at the quantitative benefits of using soft materials for neural probes.

  10. Development of a computational model on the neural activity patterns of a visual working memory in a hierarchical feedforward Network

    Science.gov (United States)

    An, Soyoung; Choi, Woochul; Paik, Se-Bum

    2015-11-01

    Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously-observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment for a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.

  11. Learning Pitch with STDP: A Computational Model of Place and Temporal Pitch Perception Using Spiking Neural Networks.

    Directory of Open Access Journals (Sweden)

    Nafise Erfanian Saeedi

    2016-04-01

    Full Text Available Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons' action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
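
    The learning rule at the heart of this model is pair-based spike-timing-dependent plasticity (STDP). A sketch with typical textbook time constants follows; the paper's exact parameters may differ.

    ```python
    import numpy as np

    def stdp(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        """Weight change for one pre/post spike pair with timing difference
        dt = t_post - t_pre (milliseconds). Pre-before-post (dt > 0)
        potentiates; post-before-pre depresses. Parameter values are
        common textbook choices, not the paper's."""
        if dt > 0:
            return a_plus * np.exp(-dt / tau_plus)
        return -a_minus * np.exp(dt / tau_minus)

    # Synchronous (small positive dt) input consistently gains weight,
    # which is how such models pick out pitch-related spike synchrony.
    for dt in (-40, -10, -1, 1, 10, 40):
        print(f"dt = {dt:+4d} ms  ->  dw = {stdp(dt):+.5f}")
    ```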

  12. Computer vision-based limestone rock-type classification using probabilistic neural network

    Directory of Open Access Journals (Sweden)

    Ashok Kumar Patel

    2016-01-01

    Full Text Available Proper quality planning of limestone raw materials is an essential job in maintaining the desired feed in a cement plant. Rock-type identification is an integral part of quality planning for a limestone mine. In this paper, a computer vision-based rock-type classification algorithm is proposed for fast and reliable identification without human intervention. A laboratory-scale vision-based model was developed using a probabilistic neural network (PNN), where color histogram features are used as input. The color image histogram-based features, which include the weighted mean, skewness and kurtosis, are extracted for all three color channels: red, green, and blue. A total of nine features are used as input for the PNN classification model. The smoothing parameter for the PNN model is selected judiciously to develop an optimal, or close to optimal, classification model. The developed PNN is validated using the test data set, and the results reveal that the proposed vision-based model performs satisfactorily for classifying limestone rock-types. Overall, the misclassification error is below 6%. Compared with three other classification algorithms, the proposed method performs substantially better than all three.
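
    A sketch of the nine-feature extraction step described (mean, skewness and kurtosis per RGB channel); computing the moments directly from pixel values is equivalent to weighting the histogram bins. The function name and dummy image are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def color_features(image):
        """Nine histogram features as described in the abstract: mean,
        skewness and kurtosis of each of the R, G, B channels of an
        (H, W, 3) uint8 image."""
        feats = []
        for ch in range(3):
            values = image[..., ch].ravel().astype(float)
            feats += [values.mean(), stats.skew(values), stats.kurtosis(values)]
        return np.array(feats)

    # Dummy "rock" image; in practice these vectors feed the PNN classifier.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    print(color_features(img).round(3))
    ```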

  13. Minimalist Social-Affective Value for Use in Joint Action: A Neural-Computational Hypothesis

    Science.gov (United States)

    Lowe, Robert; Almér, Alexander; Lindblad, Gustaf; Gander, Pierre; Michael, John; Vesper, Cordula

    2016-01-01

    Joint Action is typically described as social interaction that requires coordination among two or more co-actors in order to achieve a common goal. In this article, we put forward a hypothesis for the existence of a neural-computational mechanism of affective valuation that may be critically exploited in Joint Action. Such a mechanism would serve to facilitate coordination between co-actors permitting a reduction of required information. Our hypothesized affective mechanism provides a value function based implementation of Associative Two-Process (ATP) theory that entails the classification of external stimuli according to outcome expectancies. This approach has been used to describe animal and human action that concerns differential outcome expectancies. Until now it has not been applied to social interaction. We describe our Affective ATP model as applied to social learning consistent with an “extended common currency” perspective in the social neuroscience literature. We contrast this to an alternative mechanism that provides an example implementation of the so-called social-specific value perspective. In brief, our Social-Affective ATP mechanism builds upon established formalisms for reinforcement learning (temporal difference learning models) nuanced to accommodate expectations (consistent with ATP theory) and extended to integrate non-social and social cues for use in Joint Action. PMID:27601989

  14. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Dogˇa

    2017-01-24

    This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy from the evaluation of synthetic data with various noise ratios. It is further validated with experimental data of four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNN has also great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.

  15. Brain-Computer Interface for Control of Wheelchair Using Fuzzy Neural Networks

    Directory of Open Access Journals (Sweden)

    Rahib H. Abiyev

    2016-01-01

    Full Text Available The design of a brain-computer interface for a wheelchair for physically disabled people is presented. The design of the proposed system is based on receiving, processing, and classifying the electroencephalographic (EEG) signals and then performing the control of the wheelchair. A number of experimental measurements of brain activity were made while humans issued control commands for the wheelchair. Based on the mental activity of the user and the control commands of the wheelchair, a classification system based on fuzzy neural networks (FNN) is designed. The FNN-based algorithm is used for brain-actuated control. The training data are used to design the system, and then test data are applied to measure the performance of the control system. The control of the wheelchair is performed under real conditions using direction and speed control commands of the wheelchair. The approach used in the paper reduces the probability of misclassification and improves the control accuracy of the wheelchair.

  16. Minimalist Social-Affective Value for Use in Joint Action: A Neural-Computational Hypothesis

    Directory of Open Access Journals (Sweden)

    Robert J Lowe

    2016-08-01

    Full Text Available Joint Action is typically described as social interaction that requires coordination among two or more co-actors in order to achieve a common goal. In this article, we put forward a hypothesis for the existence of a neural-computational mechanism of affective valuation that may be critically exploited in Joint Action. Such a mechanism would serve to facilitate coordination between co-actors, permitting a reduction in the information required. Our hypothesized affective mechanism provides a value-function-based implementation of Associative Two-Process theory that entails the classification of external stimuli according to outcome expectancies. This approach has been used to describe animal and human action that concerns differential outcome expectancies. Until now it has not been applied to social interaction. We describe our Affective Associative Two-Process (ATP) model as applied to social learning consistent with an ‘extended common currency’ perspective in the social neuroscience literature. We contrast this to an alternative mechanism that provides an example implementation of the so-called social-specific value perspective. In brief, our Social-Affective ATP mechanism builds upon established formalisms for reinforcement learning (temporal difference learning models), nuanced to accommodate expectations (consistent with ATP theory) and extended to integrate non-social and social cues for use in Joint Action.
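
    The temporal-difference formalism the abstract builds on can be illustrated in a few lines. This is a minimal tabular sketch of a TD(0) value update plus an ATP-style tagging of stimuli by the outcome they have come to predict; the names are illustrative and this is not the authors' model.

```python
# Tabular TD(0): V(s) <- V(s) + alpha * [r + gamma * V(s') - V(s)]
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference step on a dict-based value table."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta

def outcome_expectancy(V, s):
    """ATP-flavored classification of a stimulus by its learned value."""
    v = V.get(s, 0.0)
    return "appetitive" if v > 0 else ("aversive" if v < 0 else "neutral")
```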

  17. Neural and cortisol responses during play with human and computer partners in children with autism.

    Science.gov (United States)

    Edmiston, Elliot Kale; Merkle, Kristen; Corbett, Blythe A

    2015-08-01

    Children with autism spectrum disorder (ASD) exhibit impairment in reciprocal social interactions, including play, which can manifest as failure to show social preference or discrimination between social and nonsocial stimuli. To explore mechanisms underlying these deficits, we collected salivary cortisol from 42 children aged 8-12 years with ASD or typical development (TD) during a playground interaction with a confederate child. Participants underwent functional MRI during a prisoner's dilemma game requiring cooperation or defection with a human (confederate) or computer partner. Search region-of-interest analyses were based on previous research (e.g., insula, amygdala, temporal parietal junction [TPJ]). There were significant group differences in neural activation based on partner and response pattern. When playing with a human partner, children with ASD showed limited engagement of a social salience brain circuit during defection. Reduced insula activation during defection in the ASD children relative to TD children, regardless of partner type, was also a prominent finding. Insula and TPJ BOLD responses during defection were also associated with stress responsivity and behavior in the ASD group under playground conditions. Children with ASD engage social salience networks less than TD children during conditions of social salience, supporting a fundamental disturbance of social engagement. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  18. Abstract Computation in Schizophrenia Detection through Artificial Neural Network Based Systems

    Directory of Open Access Journals (Sweden)

    L. Cardoso

    2015-01-01

    Full Text Available Schizophrenia stands for a long-lasting state of mental uncertainty that may bring to an end the relation among behavior, thought, and emotion; that is, it may lead to unreliable perception, unsuitable actions and feelings, and a sense of mental fragmentation. Indeed, its diagnosis is made over a long period of time; continuous signs of the disturbance must persist for at least 6 (six) months. Once detected, the psychiatric diagnosis is made through a clinical interview and a series of psychic tests, addressed mainly at ruling out other mental states or diseases. Undeniably, the main problem in identifying schizophrenia is the difficulty of distinguishing its symptoms from those associated with other mental disorders or conditions. Therefore, this work focuses on the development of a diagnostic support system, in terms of its knowledge representation and reasoning procedures, based on a blend of Logic Programming and Artificial Neural Network approaches to computing, taking advantage of a novel approach to knowledge representation and reasoning that aims to solve the problems associated with handling (i.e., representing and reasoning about) defective information.

  19. Enabling functional neural circuit simulations with distributed computing of neuromodulated plasticity

    Directory of Open Access Journals (Sweden)

    Wiebke ePotjans

    2010-11-01

    Full Text Available A major puzzle in the field of computational neuroscience is how to relate system-level learning in higher organisms to synaptic plasticity. Recently, plasticity rules depending not only on pre- and post-synaptic activity but also on a third, non-local neuromodulatory signal have emerged as key candidates to bridge the gap between the macroscopic and the microscopic level of learning. Crucial insights into this topic are expected to be gained from simulations of neural systems, as these allow the simultaneous study of the multiple spatial and temporal scales that are involved in the problem. In particular, synaptic plasticity can be studied during the whole learning process, i.e. on a time scale of minutes to hours and across multiple brain areas. Implementing neuromodulated plasticity in large-scale network simulations where the neuromodulatory signal is dynamically generated by the network itself is challenging, because the network structure is commonly defined purely by the connectivity graph without explicit reference to the embedding of the nodes in physical space. Furthermore, the simulation of networks with realistic connectivity entails the use of distributed computing. A neuromodulated synapse must therefore be informed in an efficient way about the neuromodulatory signal, which is typically generated by a population of neurons located on different machines than either the pre- or post-synaptic neuron. Here, we develop a general framework to solve the problem of implementing neuromodulated plasticity in a time-driven distributed simulation, without reference to a particular implementation language, neuromodulator or neuromodulated plasticity mechanism. We implement our framework in the simulator NEST and demonstrate excellent scaling up to 1024 processors for simulations of a recurrent network incorporating neuromodulated spike-timing dependent plasticity.
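
    The computational core of the problem described above can be stated as a three-factor plasticity rule: a synapse-local eligibility trace, built from pre/post coincidences, is gated by a network-generated neuromodulatory signal that must reach synapses hosted on remote machines. The single-synapse sketch below illustrates that rule with generic names; it is not NEST's API.

```python
import math

def three_factor_step(w, trace, coincidence, neuromod, dt=1.0,
                      tau_c=200.0, lr=0.01):
    """One time step of a generic neuromodulated plasticity rule.

    coincidence: pre/post spike-pairing term from an STDP-like kernel
    neuromod:    global modulatory signal (e.g., a dopamine concentration)
    """
    trace = trace * math.exp(-dt / tau_c) + coincidence  # eligibility decay
    w = w + lr * neuromod * trace                        # gated weight change
    return w, trace
```

    In a distributed simulation, the hard part is delivering `neuromod` efficiently to every such synapse; the framework described in the abstract addresses exactly that communication problem.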

  20. Building bridges between perceptual and economic decision-making: neural and computational mechanisms

    Directory of Open Access Journals (Sweden)

    Christopher eSummerfield

    2012-05-01

    Full Text Available Investigation into the neural and computational bases of decision-making has proceeded in two parallel but distinct streams. Perceptual decision making (PDM) is concerned with how observers detect, discriminate and categorise noisy sensory information. Economic decision making (EDM) explores how options are selected on the basis of their reinforcement history. Traditionally, the subfields of PDM and EDM have employed different paradigms, proposed different mechanistic models, explored different brain regions, and disagreed about whether decisions approach optimality. Nevertheless, we argue that there is a common framework for understanding decisions made in both domains, under which an agent has to combine sensory information (what is the stimulus) with value information (what is it worth). We review computational models of the decision process typically used in PDM, based around the idea that decisions involve a serial integration of evidence, and assess their applicability to decisions between goods and gambles. Subsequently, we consider the contribution of three key brain regions, the parietal cortex, the basal ganglia, and the orbitofrontal cortex, to perceptual and economic decision-making, with a focus on the mechanisms by which sensory and reward information are integrated during choice. We find that although the parietal cortex is often implicated in the integration of sensory evidence, there is evidence for its role in encoding the expected value of a decision. Similarly, although much research has emphasised the role of the striatum and orbitofrontal cortex in value-guided choices, they may play an important role in categorisation of perceptual information. In conclusion, we consider how findings from the two fields might be brought together, in order to move towards a general framework for understanding decision-making in humans and other primates.
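
    As a concrete anchor for the "serial integration of evidence" idea reviewed above, the following is a textbook drift-diffusion trial simulation, not any specific model from the article: evidence with drift is integrated with noise until a bound is hit, jointly producing choice and response time.

```python
import random

def ddm_trial(drift, bound=1.0, noise=0.1, dt=0.001):
    """Simulate one drift-diffusion trial; returns (choice, response time)."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0, 1)
        t += dt
    return (1 if x > 0 else 0), t
```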

  1. Unsupervised statistical learning underpins computational, behavioural, and neural manifestations of musical expectation.

    Science.gov (United States)

    Pearce, Marcus T; Ruiz, María Herrojo; Kapasi, Selina; Wiggins, Geraint A; Bhattacharya, Joydeep

    2010-03-01

    The ability to anticipate forthcoming events has clear evolutionary advantages, and predictive successes or failures often entail significant psychological and physiological consequences. In music perception, the confirmation and violation of expectations are critical to the communication of emotion and aesthetic effects of a composition. Neuroscientific research on musical expectations has focused on harmony. Although harmony is important in Western tonal styles, other musical traditions, emphasizing pitch and melody, have been rather neglected. In this study, we investigated melodic pitch expectations elicited by ecologically valid musical stimuli by drawing together computational, behavioural, and electrophysiological evidence. Unlike rule-based models, our computational model acquires knowledge through unsupervised statistical learning of sequential structure in music and uses this knowledge to estimate the conditional probability (and information content) of musical notes. Unlike previous behavioural paradigms that interrupt a stimulus, we devised a new paradigm for studying auditory expectation without compromising ecological validity. A strong negative correlation was found between the probability of notes predicted by our model and the subjectively perceived degree of expectedness. Our electrophysiological results showed that low-probability notes, as compared to high-probability notes, elicited a larger (i) negative ERP component at a late time period (400-450 ms), (ii) beta band (14-30 Hz) oscillation over the parietal lobe, and (iii) long-range phase synchronization between multiple brain regions. Altogether, the study demonstrated that statistical learning produces information-theoretic descriptions of musical notes that are proportional to their perceived expectedness and are associated with characteristic patterns of neural activity. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  2. Engineering Applications of Neural Computing: A State-of-the-Art Survey

    Science.gov (United States)

    1991-05-01

    Papachristou, C. A., "Training of a Neural Network for Pattern Classification Based on Entropy Measure," Proceedings of 1988 IEEE International Conference...diction and System Modeling," Technical Report LA-UR-87-2662, Los Alamos National Laboratory, 1987. 10. Levy, W. B., "Maximum Entropy Prediction in Neural...Lawrence Erlbaum, Hillsdale, NJ, 1990. 53. MacGregor, R. J., Neural and Brain Modeling, The Academic Press, New York, 1987. 54. Marr, D., Vision, San

  3. Predicting Diameter Distributions of Longleaf Pine Plantations: A Comparison Between Artificial Neural Networks and Other Accepted Methodologies

    Science.gov (United States)

    Daniel J. Leduc; Thomas G. Matney; Keith L. Belli; V. Clark Baldwin

    2001-01-01

    Artificial neural networks (NN) are becoming a popular estimation tool. Because they require no assumptions about the form of a fitting function, they can free the modeler from reliance on parametric approximating functions that may or may not satisfactorily fit the observed data. To date there have been few applications in forestry science, but as better NN software...

  4. The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction.

    Science.gov (United States)

    Casey, M

    1996-08-15

    Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts activation space dynamics, which allows one to understand RNN computation dynamics in spite of complexity in activation dynamics. This theory provides a theoretical framework for understanding finite state machine (FSM) extraction techniques and can be used to improve training methods for RNNs performing FSM computations. This provides an example of a successful approach to understanding a general class of complex systems that has not been explicitly designed, e.g., systems that have evolved or learned their internal structure.
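
    The practical upshot for FSM extraction can be sketched as follows: cluster the RNN's hidden-state trajectory, then read transitions between clusters off the observed sequences. This is a generic sketch of the extraction idea the theory justifies, not Casey's construction.

```python
from collections import defaultdict

def extract_fsm(cluster_ids, inputs):
    """Build a transition table from a clustered hidden-state trajectory.

    cluster_ids[t] labels the hidden state after consuming inputs[:t], so
    len(cluster_ids) == len(inputs) + 1 and cluster_ids[0] is the start state.
    """
    transitions = defaultdict(dict)
    for t, symbol in enumerate(inputs):
        transitions[cluster_ids[t]][symbol] = cluster_ids[t + 1]
    return dict(transitions)  # state -> {input symbol -> next state}
```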

  5. ANALYSIS OF EFFECTIVENESS OF METHODOLOGICAL SYSTEM FOR PROBABILITY AND STOCHASTIC PROCESSES COMPUTER-BASED LEARNING FOR PRE-SERVICE ENGINEERS

    Directory of Open Access Journals (Sweden)

    E. Chumak

    2015-04-01

    Full Text Available The author substantiates that only methodological training systems for mathematical disciplines that implement information and communication technologies (ICT) can meet the requirements of the modern educational paradigm and make it possible to increase educational efficiency. For this reason, the paper underlines the necessity of developing a methodology for computer-based learning of probability theory and stochastic processes for pre-service engineers. The results of an experimental study analysing the efficiency of this methodological system are shown. The analysis includes three main stages: ascertaining, searching and forming. The key criteria of the efficiency of the designed methodological system are the level of students' probabilistic and stochastic skills and their learning motivation. The effect of implementing the methodological system on the level of students' IT literacy is shown in the paper, and the expansion of the range of purposes for which students apply ICT is described. The level of students' learning motivation at the ascertaining and forming stages of the experiment is analysed, and the level of intrinsic learning motivation of the pre-service engineers is determined at these stages. For this purpose, a methodology for testing students' learning motivation in their chosen specialty is presented. An increase in the intrinsic learning motivation of the experimental group students (E group) relative to the control group students (C group) is demonstrated.

  6. COMPUTER-SIMULATED NEURAL NETWORKS - AN APPROPRIATE MODEL FOR MOTOR DEVELOPMENT

    NARCIS (Netherlands)

    VOS, JE; SCHEEPSTRA, KA

    The idea of an artificial neural network is introduced in a historical context, and the essential aspect of it, viz., the modifiable synapse, is compared to the aspect of plasticity in the natural nervous system. Based on such an artificial neural network, a model is presented for the way in which

  7. Dificultades en los métodos de estudio de exposiciones ambientales y defectos del tubo neural Methodological challenges to assess environmental exposures related to neural tube defects

    Directory of Open Access Journals (Sweden)

    Víctor Hugo Borja-Aburto

    1999-11-01

    Full Text Available Objective. To discuss approaches to the assessment of environmental exposures as risk factors for neural tube defects, while presenting the main factors studied to date. Results. Environmental exposures are very often cited as a cause of congenital malformations; however, it has been difficult to establish this association in studies of human populations, owing to problems in their design and conduct. This is particularly marked in the study of neural tube defects (NTDs), which are among the principal malformations and include anencephaly, spina bifida and encephalocele, and their association with environmental exposures. The methodological difficulties arise from: (a) the frequency measure used for spatio-temporal comparisons; (b) the classification and heterogeneity of the malformations; (c) the joint consideration of factors related to the mother, the father and the offspring; and (d) the assessment of environmental exposures. Conclusions. Hypothetically, environmental exposures of both the father and the mother can produce genetic damage before and/or after conception, through direct action on the embryo or on the fetoplacental complex, such that in the assessment of environmental exposures: (a) maternal and paternal exposures must be taken into account; (b) the critical period of exposure must be considered, namely the three months before conception for the father and the month around conception for the mother; (c) insofar as possible, exposure assessment should be quantitative, avoiding classifying groups merely as exposed and unexposed; and (d) it is advisable to use biological markers of exposure whenever possible, as well as biological markers that allow classifying the population into groups with different

  8. Methodology of Computer-Aided Design of Variable Guide Vanes of Aircraft Engines

    Science.gov (United States)

    Falaleev, Sergei V.; Melentjev, Vladimir S.; Gvozdev, Alexander S.

    2016-01-01

    The paper presents a methodology which helps to avoid a great amount of costly experimental research. This methodology includes thermo-gas dynamic design of an engine and its mounts, the profiling of compressor flow path and cascade design of guide vanes. Employing a method elaborated by Howell, we provide a theoretical solution to the task of…

  9. MODELING AND STRUCTURING OF ENTERPRISE MANAGEMENT SYSTEM RESORT SPHERE BASED ON ELEMENTS OF NEURAL NETWORK THEORY: THE METHODOLOGICAL BASIS

    Directory of Open Access Journals (Sweden)

    Rena R. Timirualeeva

    2015-01-01

    Full Text Available The article describes a methodology for modeling and structuring an enterprise management system for the resort sphere based on elements of neural network theory. It accounts for environmental factors at the mega-, macro- and mesolevels, the internal state of the managed system, and errors in the execution of management commands by the control system. The proposed methodology can improve the quality of management of resort complex enterprises through a more flexible response to changes in the parameters of the internal and external environments.

  10. A computational neural model of orientation detection based on multiple guesses: comparison of geometrical and algebraic models.

    Science.gov (United States)

    Wei, Hui; Ren, Yuan; Wang, Zi Yan

    2013-10-01

    The implementation of the Hubel-Wiesel hypothesis, that the orientation selectivity of a simple cell is based on the ordered arrangement of its afferent cells, has some difficulties. It requires the receptive fields (RFs) of those ganglion cells (GCs) and LGN cells to be similar in size and sub-structure and arranged in a perfect order. It also requires an adequate number of regularly distributed simple cells to match ubiquitous edges. However, the anatomical and electrophysiological evidence is not strong enough to support this geometry-based model. These strict regularities also make the model very uneconomical in both evolution and neural computation. We propose a new neural model based on an algebraic method to estimate orientations. This approach synthesizes the guesses made by multiple GCs or LGN cells and calculates local orientation information subject to a group of constraints. This algebraic model need not obey the constraints of the Hubel-Wiesel hypothesis, and is easily implemented with a neural network. By using the idea of a satisfiability problem with constraints, we also prove that the precision and efficiency of this model are mathematically practicable. The proposed model clarifies several major questions which the Hubel-Wiesel model does not account for. Image-rebuilding experiments were conducted to check whether this model misses any important boundary in the visual field because of the estimation strategy. This study is significant in terms of explaining the neural mechanism of orientation detection, and finding the circuit structure and computational route in neural networks. For engineering applications, our model can be used in orientation detection and as a simulation platform for cell-to-cell communications to develop bio-inspired eye chips.
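
    One standard algebraic way to synthesize multiple noisy orientation "guesses", shown below only to give the flavor of vote combination and not as the authors' exact method, is a weighted circular mean on the doubled-angle circle (orientation is periodic with period pi).

```python
import math

def combine_orientations(guesses_rad, weights=None):
    """Combine noisy orientation estimates (radians, pi-periodic) into one."""
    weights = weights or [1.0] * len(guesses_rad)
    s = sum(w * math.sin(2 * g) for g, w in zip(guesses_rad, weights))
    c = sum(w * math.cos(2 * g) for g, w in zip(guesses_rad, weights))
    return 0.5 * math.atan2(s, c) % math.pi  # estimated local orientation
```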

  11. Computer-Aided Diagnosis Based on Convolutional Neural Network System for Colorectal Polyp Classification: Preliminary Experience.

    Science.gov (United States)

    Komeda, Yoriaki; Handa, Hisashi; Watanabe, Tomohiro; Nomura, Takanobu; Kitahashi, Misaki; Sakurai, Toshiharu; Okamoto, Ayana; Minami, Tomohiro; Kono, Masashi; Arizumi, Tadaaki; Takenaka, Mamoru; Hagiwara, Satoru; Matsui, Shigenaga; Nishida, Naoshi; Kashida, Hiroshi; Kudo, Masatoshi

    2017-01-01

    Computer-aided diagnosis (CAD) is becoming a next-generation tool for the diagnosis of human disease. CAD for colon polyps has been suggested as a particularly useful tool for trainee colonoscopists, as the use of a CAD system avoids the complications associated with endoscopic resections. In addition to conventional CAD, a convolutional neural network (CNN) system utilizing artificial intelligence (AI) has been developing rapidly over the past 5 years. We attempted to generate a unique CNN-CAD system with an AI function that studied endoscopic images extracted from movies obtained with colonoscopes used in routine examinations. Here, we report our preliminary results of this novel CNN-CAD system for the diagnosis of colon polyps. A total of 1,200 images from cases of colonoscopy performed between January 2010 and December 2016 at Kindai University Hospital were used. These images were extracted from the video of actual endoscopic examinations. Additional video images from 10 cases of unlearned processes were retrospectively assessed in a pilot study. Polyps were simply diagnosed as either adenomatous or nonadenomatous. The ratio of images used by the AI to learn to distinguish adenomatous from nonadenomatous was 1,200:600. The size of each image was adjusted to 256 × 256 pixels. A 10-fold cross-validation was carried out. The accuracy of the 10-fold cross-validation is 0.751, where the accuracy is the ratio of the number of correct answers over the number of all the answers produced by the CNN. The decisions by the CNN were correct in 7 of 10 cases. A CNN-CAD system using routine colonoscopy might be useful for the rapid diagnosis of colorectal polyp classification. Further prospective studies in an in vivo setting are required to confirm the effectiveness of a CNN-CAD system in routine colonoscopy. © 2017 S. Karger AG, Basel.
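
    For readers unfamiliar with the validation scheme behind the 0.751 figure, the following is a generic sketch of k-fold cross-validated accuracy (illustrative structure only, not the study's code): each fold is held out once, a model is trained on the rest, and accuracy is pooled over all held-out predictions.

```python
def kfold_accuracy(xs, ys, train_fn, k=10):
    """train_fn(train_xs, train_ys) must return a callable model(x) -> label."""
    n, correct = len(xs), 0
    for i in range(k):
        lo, hi = i * n // k, (i + 1) * n // k          # held-out slice
        model = train_fn(xs[:lo] + xs[hi:], ys[:lo] + ys[hi:])
        correct += sum(model(x) == y for x, y in zip(xs[lo:hi], ys[lo:hi]))
    return correct / n
```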

  12. Deep neural network-based computer-assisted detection of cerebral aneurysms in MR angiography.

    Science.gov (United States)

    Nakao, Takahiro; Hanaoka, Shouhei; Nomura, Yukihiro; Sato, Issei; Nemoto, Mitsutaka; Miki, Soichiro; Maeda, Eriko; Yoshikawa, Takeharu; Hayashi, Naoto; Abe, Osamu

    2017-08-24

    The usefulness of computer-assisted detection (CAD) for detecting cerebral aneurysms has been reported; improved CAD performance will therefore aid the detection of cerebral aneurysms. To develop a CAD system for intracranial aneurysms on unenhanced magnetic resonance angiography (MRA) images based on a deep convolutional neural network (CNN) and a maximum intensity projection (MIP) algorithm, and to demonstrate the usefulness of the system by training and evaluating it using a large dataset. Retrospective study. There were 450 cases with intracranial aneurysms. The diagnoses of brain aneurysms were made on the basis of MRA, which was performed as part of a brain screening program. Noncontrast-enhanced 3D time-of-flight (TOF) MRA on 3T MR scanners. In our CAD, we used a CNN classifier that predicts whether each voxel is inside or outside aneurysms by inputting MIP images generated from a volume of interest (VOI) around the voxel. The CNN was trained in advance using manually inputted labels. We evaluated our method using 450 cases with intracranial aneurysms, 300 of which were used for training, 50 for parameter tuning, and 100 for the final evaluation. Free-response receiver operating characteristic (FROC) analysis. Our CAD system detected 94.2% (98/104) of aneurysms with 2.9 false positives per case (FPs/case). At a sensitivity of 70%, the number of FPs/case was 0.26. We showed that the combination of a CNN and an MIP algorithm is useful for the detection of intracranial aneurysms. Level of Evidence: 4. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.
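
    The MIP step itself is simple to state: each 2D input to the CNN is the maximum of the VOI's voxel intensities along a viewing axis. A hedged NumPy sketch (random data stands in for a real VOI):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection of a 3D array along the given axis."""
    return volume.max(axis=axis)

# e.g., feed the three orthogonal MIPs of each VOI to the CNN classifier
voi = np.random.rand(32, 32, 32)          # placeholder volume of interest
projections = [mip(voi, a) for a in range(3)]
```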

  13. Deciding not to decide: computational and neural evidence for hidden behavior in sequential choice.

    Directory of Open Access Journals (Sweden)

    Sebastian Gluth

    2013-10-01

    Full Text Available Understanding the cognitive and neural processes that underlie human decision making requires the successful prediction of how, but also of when, people choose. Sequential sampling models (SSMs) have greatly advanced the decision sciences by assuming decisions to emerge from a bounded evidence accumulation process so that response times (RTs) become predictable. Here, we demonstrate a difficulty of SSMs that occurs when people are not forced to respond at once but are allowed to sample information sequentially: the decision maker might decide to delay the choice and terminate the accumulation process temporarily, a scenario not accounted for by the standard SSM approach. We developed several SSMs for predicting RTs from two independent samples of an electroencephalography (EEG) and a functional magnetic resonance imaging (fMRI) study. In these studies, participants bought or rejected fictitious stocks based on sequentially presented cues and were free to respond at any time. Standard SSM implementations did not describe RT distributions adequately. However, by adding a mechanism for postponing decisions to the model we obtained an accurate fit to the data. Time-frequency analysis of EEG data revealed alternating states of decreasing and increasing oscillatory power in beta-band frequencies (14-30 Hz), indicating that responses were repeatedly prepared and inhibited, thus lending further support for the existence of a decision not to decide. Finally, the extended model accounted for the results of an adapted version of our paradigm in which participants had to press a button to sample more information. Our results show how computational modeling of decisions and RTs supports a deeper understanding of the hidden dynamics in cognition.

  14. A methodology for sunlight urban planning: a computer-based solar and sky vault obstruction analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Fernando Oscar Ruttkay; Silva, Carlos Alejandro Nome [Federal Univ. of Santa Catarina (UFSC), Dept. of Architecture and Urbanism, Florianopolis, SC (Brazil); Turkienikz, Benamy [Federal Univ. of Rio Grande do Sul (UFRGS), Faculty of Architecture, Porto Alegre, RS (Brazil)

    2001-07-01

    The main purpose of the present study is to describe a planning methodology to improve the quality of the built environment based on the rational control of solar radiation and the view of the sky vault. The main criterion used to control the access and obstruction of solar radiation was the concept of desirability and undesirability of solar radiation. A case study for implementing the proposed methodology is developed. Although needing further developments to find its way into regulations and practical applications, the methodology has shown a strong potential to deal with an aspect that otherwise would be almost impossible. (Author)

  15. Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening.

    Science.gov (United States)

    O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C

    2015-05-06

    The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene. Copyright © 2015 the authors 0270-6474/15/357256-08$15.00/0.

  16. Methodological study of computational approaches to address the problem of strong correlations

    Science.gov (United States)

    Lee, Juho

    The main focus of this thesis is the detailed investigation of computational methods to tackle strongly correlated materials in which a rich variety of exotic phenomena are found. A many-body problem with sizable electronic correlations can no longer be explained by independent-particle approximations such as density functional theory (DFT) or tight-binding approaches. The influence of one electron on the others is too strong for each electron to be treated as an independent quasiparticle, and consequently those standard band-structure methods fail even at a qualitative level. One of the most powerful approaches for strong correlations is the dynamical mean-field theory (DMFT), which has enlightened the understanding of the Mott transition based on the Hubbard model. For realistic applications, the dynamical mean-field theory is combined with various independent-particle approaches. The most widely used is DMFT combined with DFT in the local density approximation (LDA), the so-called LDA+DMFT. In this approach, the electrons in the weakly correlated orbitals are calculated by LDA while those in the strongly correlated orbitals are treated by DMFT. Recently, a method combining DMFT with Hedin's GW approximation was also developed, in which a momentum-dependent self-energy is added. In this thesis, we discuss the application of these DMFT-based methodologies. First, we apply the dynamical mean-field theory to solve the 3-dimensional Hubbard model in Chap. 3. In this application, we model the interface between the thermodynamically coexisting metal and Mott insulator. We show how to model the required slab geometry and extract the electronic spectra. We construct an effective Landau free energy and compute the variation of its parameters across the phase diagram. Finally, using a linear mixture of the density and double-occupancy, we identify a natural Ising order parameter which unifies the treatment of the bandwidth and filling controlled Mott
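
    For reference, the single-band Hubbard model underlying the DMFT study has the standard textbook form, with nearest-neighbor hopping t and on-site repulsion U (generic notation, not the thesis's specific parameterization):

```latex
\[
  H \;=\; -t \sum_{\langle i,j \rangle,\sigma}
        \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
      \;+\; U \sum_{i} n_{i\uparrow}\, n_{i\downarrow}
\]
```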

  17. A computational model incorporating neural stem cell dynamics reproduces glioma incidence across the lifespan in the human population.

    Directory of Open Access Journals (Sweden)

    Roman Bauer

    Full Text Available Glioma is the most common form of primary brain tumor. Demographically, the risk of occurrence increases until old age. Here we present a novel computational model to reproduce the probability of glioma incidence across the lifespan. Previous mathematical models explaining glioma incidence are framed in a rather abstract way, and do not directly relate to empirical findings. To decrease this gap between theory and experimental observations, we incorporate recent data on cellular and molecular factors underlying gliomagenesis. Since evidence implicates the adult neural stem cell as the likely cell-of-origin of glioma, we have incorporated empirically-determined estimates of neural stem cell number, cell division rate, mutation rate and oncogenic potential into our model. We demonstrate that our model yields results which match actual demographic data in the human population. In particular, this model accounts for the observed peak incidence of glioma at approximately 80 years of age, without the need to assert differential susceptibility throughout the population. Overall, our model supports the hypothesis that glioma is caused by randomly-occurring oncogenic mutations within the neural stem cell population. Based on this model, we assess the influence of the (experimentally indicated) decrease in the number of neural stem cells and increase of cell division rate during aging. Our model provides multiple testable predictions, and suggests that different temporal sequences of oncogenic mutations can lead to tumorigenesis. Finally, we conclude that four or five oncogenic mutations are sufficient for the formation of glioma.

  18. Development of methodology and computer programs for the ground response spectrum and the probabilistic seismic hazard analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Joon Kyoung [Semyung Univ., Research Institute of Industrial Science and Technol , Jecheon (Korea, Republic of)

    1996-12-15

    The objective of this study is to investigate and develop methodologies and corresponding computer codes, compatible with the domestic seismological and geological environments, for estimating the ground response spectrum and probabilistic seismic hazard. Using the PSHA computer program, the Cumulative Probability Functions (CPDF) and Probability Functions (PDF) of the annual exceedance have been investigated for the analysis of the uncertainty space of the annual probability at ten seismic hazard levels of interest (0.1 g to 0.99 g). The cumulative probability functions and probability functions of the annual exceedance have also been compared to results from different input parameter spaces.
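
    The quantity such a PSHA code tabulates can be sketched generically: the annual rate of exceeding each ground-motion level is summed over seismic sources, and a Poisson assumption converts rates to annual probabilities. This is an illustrative structure, not the report's code; all names are assumptions.

```python
import math

def hazard_curve(levels, sources):
    """sources: list of (annual_rate, prob_exceed) pairs, where
    prob_exceed(a) = P(ground motion > a | an event occurs)."""
    return {a: sum(rate * p(a) for rate, p in sources) for a in levels}

def annual_prob(rate):
    """Poisson probability of at least one exceedance in a year."""
    return 1.0 - math.exp(-rate)
```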

  19. Cat Swarm Optimization Based Functional Link Artificial Neural Network Filter for Gaussian Noise Removal from Computed Tomography Images

    Directory of Open Access Journals (Sweden)

    M. Kumar

    2016-01-01

    Full Text Available Gaussian noise is one of the dominant noises, which degrades the quality of acquired Computed Tomography (CT) image data. It creates difficulties in pathological identification or diagnosis of any disease. Gaussian noise elimination is desirable to improve the clarity of a CT image for clinical, diagnostic, and postprocessing applications. This paper proposes an evolutionary nonlinear adaptive filter approach, using a Cat Swarm Functional Link Artificial Neural Network (CS-FLANN) to remove the unwanted noise. The structure of the proposed filter is based on the Functional Link Artificial Neural Network (FLANN) and the Cat Swarm Optimization (CSO) is utilized for the selection of optimum weights of the neural network filter. The applied filter has been compared with existing linear filters, like the mean filter and the adaptive Wiener filter. Performance indices, such as the peak signal to noise ratio (PSNR), have been computed for the quantitative analysis of the proposed filter. The experimental evaluation established the superiority of the proposed filtering technique over existing methods.
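
    The PSNR figure of merit used above has a standard definition; a short NumPy sketch, assuming 8-bit images (peak value 255):

```python
import numpy as np

def psnr(reference, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(float) - denoised.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```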

  20. Implementation of a Computational Model for Information Processing and Signaling from a Biological Neural Network of Neostriatum Nucleus

    Directory of Open Access Journals (Sweden)

    C. Sanchez-Vazquez

    2014-06-01

    Full Text Available Recently, several mathematical models have been developed to study and explain the way information is processed in the brain. The published models account for a myriad of perspectives, from single neuron segments to neural networks and, lately, with the use of supercomputing facilities, to the study of whole environments of nuclei interacting under massive stimuli and processing. Some of the most complex neural structures, and also among the most studied, are the basal ganglia nuclei of the brain, amongst which we can find the Neostriatum. Currently, only a few papers on large-scale biologically based computational modeling of this region have been published. It has been demonstrated that the basal ganglia region contains functions related to learning and decision making based on rules of the action-selection type, which are of particular interest for the machine autonomous-learning field. This knowledge could be clearly transferred between areas of research. The present work proposes a model of information processing, integrating knowledge generated from widely accepted experiments in both morphology and biophysics, through theories such as the compartmental electrical model, Rall's cable equation, and the Hodgkin-Huxley particle potential regulations, among others. Additionally, the leaky integrator framework is incorporated in an adapted function. This was accomplished through a computational environment prepared for high-scale neural simulation which delivers data output equivalent to that from the original model, and that can not only be analyzed as a Bayesian problem, but also successfully compared to the biological specimen.
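
    The leaky-integrator framework mentioned above reduces, in its simplest form, to a one-compartment membrane equation. A forward-Euler sketch with illustrative parameter values (not the authors' adapted function):

```python
def leaky_step(v, i_syn, dt=0.1, tau=10.0, v_rest=-65.0, r_m=1.0):
    """One Euler step of dv/dt = (-(v - v_rest) + r_m * i_syn) / tau.

    v: membrane potential (mV), i_syn: synaptic input current,
    tau: membrane time constant (ms), r_m: membrane resistance.
    """
    return v + dt * (-(v - v_rest) + r_m * i_syn) / tau
```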

  1. A Reconfigurable and Biologically Inspired Paradigm for Computation Using Network-On-Chip and Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Jim Harkin

    2009-01-01

    Full Text Available FPGA devices have emerged as a popular platform for the rapid prototyping of biological Spiking Neural Network (SNN) applications, offering the key requirement of reconfigurability. However, FPGAs do not efficiently realise the biologically plausible neuron and synaptic models of SNNs, and current FPGA routing structures cannot accommodate the high levels of interneuron connectivity inherent in complex SNNs. This paper highlights and discusses the current challenges of implementing scalable SNNs on reconfigurable FPGAs. The paper proposes a novel field programmable neural network architecture (EMBRACE), incorporating low-power analogue spiking neurons, interconnected using a Network-on-Chip architecture. Results on the evaluation of the EMBRACE architecture using the XOR benchmark problem are presented, and the performance of the architecture is discussed. The paper also discusses the adaptability of the EMBRACE architecture in supporting fault tolerant computing.

  2. Mathematical Modelling and Optimization of Cutting Force, Tool Wear and Surface Roughness by Using Artificial Neural Network and Response Surface Methodology in Milling of Ti-6242S

    Directory of Open Access Journals (Sweden)

    Erol Kilickap

    2017-10-01

    Full Text Available In this paper, an experimental study was conducted to determine the effect of different cutting parameters, such as cutting speed, feed rate, and depth of cut, on cutting force, surface roughness, and tool wear in the milling of Ti-6242S alloy using cemented carbide (WC) end mills with a 10 mm diameter. Data obtained from the experiments were modeled with both an Artificial Neural Network (ANN) and Response Surface Methodology (RSM). The ANN was trained using the Levenberg-Marquardt (LM) algorithm. The mathematical models in RSM were created by applying a Box-Behnken design. Values obtained from the ANN and the RSM were found to be very close to the data obtained from the experimental studies. The lowest cutting force and surface roughness were obtained at high cutting speeds and low feed rate and depth of cut. The minimum tool wear was obtained at low cutting speed, feed rate, and depth of cut.

  3. Establishing a standard calibration methodology for MOSFET detectors in computed tomography dosimetry.

    Science.gov (United States)

    Brady, S L; Kaufman, R A

    2012-06-01

    The use of metal-oxide-semiconductor field-effect transistor (MOSFET) detectors for patient dosimetry has increased by ~25% since 2005. Despite this increase, no standard calibration methodology has been identified nor calibration uncertainty quantified for the use of MOSFET dosimetry in CT. This work compares three MOSFET calibration methodologies proposed in the literature, and additionally investigates questions relating to optimal time for signal equilibration and exposure levels for maximum calibration precision. The calibration methodologies tested were (1) free in-air (FIA) with radiographic x-ray tube, (2) FIA with stationary CT x-ray tube, and (3) within scatter phantom with rotational CT x-ray tube. Each calibration was performed at absorbed dose levels of 10, 23, and 35 mGy. Times of 0 min or 5 min were investigated for signal equilibration before or after signal read out. Calibration precision was measured to be better than 5%-7%, 3%-5%, and 2%-4% for the 10, 23, and 35 mGy respective dose levels, and independent of calibration methodology. No correlation was demonstrated for precision and signal equilibration time when allowing 5 min before or after signal read out. Differences in average calibration coefficients were demonstrated between the FIA with CT calibration methodology 26.7 ± 1.1 mV cGy(-1) versus the CT scatter phantom 29.2 ± 1.0 mV cGy(-1) and FIA with x-ray 29.9 ± 1.1 mV cGy(-1) methodologies. A decrease in MOSFET sensitivity was seen at an average change in read out voltage of ~3000 mV. The best measured calibration precision was obtained by exposing the MOSFET detectors to 23 mGy. No signal equilibration time is necessary to improve calibration precision. A significant difference between calibration outcomes was demonstrated for FIA with CT compared to the other two methodologies. If the FIA with a CT calibration methodology was used to create calibration coefficients for the eventual use for phantom dosimetry, a measurement error ~12

  4. Computational Models of Financial Price Prediction: A Survey of Neural Networks, Kernel Machines and Evolutionary Computation Approaches

    Directory of Open Access Journals (Sweden)

    Javier Sandoval

    2011-12-01

    Full Text Available A review of the representative models of machine learning research applied to the foreign exchange rate and stock price prediction problem is conducted.  The article is organized as follows: The first section provides a context on the definitions and importance of foreign exchange rate and stock markets.  The second section reviews machine learning models for financial prediction focusing on neural networks, SVM and evolutionary methods. Lastly, the third section draws some conclusions.

  5. Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images

    Directory of Open Access Journals (Sweden)

    Kuo Men

    2017-12-01

    Full Text Available Background: Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires exact delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. Methods: The proposed DDNN is an end-to-end architecture enabling fast training and testing. It consists of two important components: an encoder network and a decoder network. The encoder network was used to extract the visual features of a medical image and the decoder network was used to recover the original resolution by deploying deconvolution. A total of 230 patients diagnosed with NPC stage I or stage II were included in this study. Data from 184 patients were chosen randomly as a training set to adjust the parameters of DDNN, and the remaining 46 patients formed the test set to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results of the GTVnx, GTVnd, and CTV. In addition, the performance of DDNN was compared with the VGG-16 model. Results: The proposed DDNN method outperformed VGG-16 in all segmentations. The mean DSC values of DDNN were 80.9% for GTVnx, 62.3% for GTVnd, and 82.6% for CTV, whereas VGG-16 obtained DSC values of 72.3, 33.7, and 73.7%, respectively. Conclusion: DDNN can be used to segment the GTVnx and CTV accurately. The accuracy for the GTVnd segmentation was relatively low due to the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and the combination of MR images. In conclusion, DDNN has the potential to improve the consistency of contouring and streamline radiotherapy
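
    The DSC reported above is twice the overlap of the predicted and reference masks divided by their total size; a minimal NumPy sketch of its standard definition:

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks (1.0 = perfect)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom
```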

  6. Genome-wide identification of specific oligonucleotides using artificial neural network and computational genomic analysis

    Directory of Open Access Journals (Sweden)

    Chen Jiun-Ching

    2007-05-01

    Full Text Available Background: Genome-wide identification of specific oligonucleotides (oligos) is a computationally-intensive task and is a requirement for designing microarray probes, primers, and siRNAs. An artificial neural network (ANN) is a machine learning technique that can effectively process complex and high noise data. Here, ANNs are applied to process the unique subsequence distribution for prediction of specific oligos. Results: We present a novel and efficient algorithm, named the integration of ANN and BLAST (IAB) algorithm, to identify specific oligos. We establish the unique marker database for human and rat gene index databases using the hash table algorithm. We then create the input vectors, via the unique marker database, to train and test the ANN. The trained ANN predicted the specific oligos with high efficiency, and these oligos were subsequently verified by BLAST. To improve the prediction performance, the ANN over-fitting issue was avoided by early stopping with the best observed error and a k-fold validation was also applied. The performance of the IAB algorithm was about 5.2, 7.1, and 6.7 times faster than the BLAST search without ANN for experimental results of 70-mer, 50-mer, and 25-mer specific oligos, respectively. In addition, the results of polymerase chain reactions showed that the primers predicted by the IAB algorithm could specifically amplify the corresponding genes. The IAB algorithm has been integrated into a previously published comprehensive web server to support microarray analysis and genome-wide iterative enrichment analysis, through which users can identify a group of desired genes and then discover the specific oligos of these genes. Conclusion: The IAB algorithm has been developed to construct SpecificDB, a web server that provides a specific and valid oligo database of the probe, siRNA, and primer design for the human genome. We also demonstrate the ability of the IAB algorithm to predict specific oligos through

  7. Genome-wide identification of specific oligonucleotides using artificial neural network and computational genomic analysis.

    Science.gov (United States)

    Liu, Chun-Chi; Lin, Chin-Chung; Li, Ker-Chau; Chen, Wen-Shyen E; Chen, Jiun-Ching; Yang, Ming-Te; Yang, Pan-Chyr; Chang, Pei-Chun; Chen, Jeremy J W

    2007-05-22

    Genome-wide identification of specific oligonucleotides (oligos) is a computationally-intensive task and is a requirement for designing microarray probes, primers, and siRNAs. An artificial neural network (ANN) is a machine learning technique that can effectively process complex and high noise data. Here, ANNs are applied to process the unique subsequence distribution for prediction of specific oligos. We present a novel and efficient algorithm, named the integration of ANN and BLAST (IAB) algorithm, to identify specific oligos. We establish the unique marker database for human and rat gene index databases using the hash table algorithm. We then create the input vectors, via the unique marker database, to train and test the ANN. The trained ANN predicted the specific oligos with high efficiency, and these oligos were subsequently verified by BLAST. To improve the prediction performance, the ANN over-fitting issue was avoided by early stopping with the best observed error and a k-fold validation was also applied. The performance of the IAB algorithm was about 5.2, 7.1, and 6.7 times faster than the BLAST search without ANN for experimental results of 70-mer, 50-mer, and 25-mer specific oligos, respectively. In addition, the results of polymerase chain reactions showed that the primers predicted by the IAB algorithm could specifically amplify the corresponding genes. The IAB algorithm has been integrated into a previously published comprehensive web server to support microarray analysis and genome-wide iterative enrichment analysis, through which users can identify a group of desired genes and then discover the specific oligos of these genes. The IAB algorithm has been developed to construct SpecificDB, a web server that provides a specific and valid oligo database of the probe, siRNA, and primer design for the human genome. We also demonstrate the ability of the IAB algorithm to predict specific oligos through polymerase chain reaction experiments. Specific

  8. Response surface methodology and artificial neural network modeling of reactive red 33 decolorization by O3/UV in a bubble column reactor

    Directory of Open Access Journals (Sweden)

    Jamshid Behin

    2016-12-01

    Full Text Available In this work, response surface methodology (RSM) and artificial neural network (ANN) were used to predict the decolorization efficiency of Reactive Red 33 (RR 33) by applying the O3/UV process in a bubble column reactor. The effects of four independent variables, including time (20-60 min), superficial gas velocity (0.06-0.18 cm/s), initial concentration of dye (50-150 ppm), and pH (3-11), were investigated using a 3-level 4-factor central composite experimental design. This design was utilized to train a feed-forward multilayered perceptron artificial neural network with a back-propagation algorithm. A comparison between the models' results and experimental data gave high correlation coefficients and showed that the two models were able to predict Reactive Red 33 removal by employing the O3/UV process. Considering the results of the yield of dye removal and the response surface-generated model, the optimum conditions for dye removal were found to be a retention time of 59.87 min, a superficial gas velocity of 0.18 cm/s, an initial concentration of 96.33 ppm, and a pH of 7.99.

  9. Methodology for Benefit Analysis of CAD/CAM (Computer-Aided Design/Computer-Aided Manufacturing) in USN Shipyards.

    Science.gov (United States)

    1984-03-01

    This relationship can be described mathematically and, once determined, can be used to compute the desired efficiency correction.

  10. General design methodology applied to the research domain of physical programming for computer illiterate

    CSIR Research Space (South Africa)

    Smith, Andrew C

    2011-09-01

    Full Text Available The authors discuss the application of the 'general design methodology' in the context of a physical computing project. The aim of the project was to design and develop physical objects that could serve as metaphors for computer programming elements...

  11. Designing with Space Syntax : A configurative approach to architectural layout, proposing a computational methodology

    NARCIS (Netherlands)

    Nourian, P.; Rezvani, S.; Sariyildiz, I.S.

    2013-01-01

    This paper introduces a design methodology and a toolkit developed as a parametric CAD program for configurative design of architectural plan layouts. Using this toolkit, designers can start the plan layout process by sketching the way functional spaces need to connect to each other. A tool draws an

  12. Neural networks advances and applications 2

    CERN Document Server

    Gelenbe, E

    1992-01-01

    The present volume is a natural follow-up to Neural Networks: Advances and Applications which appeared one year previously. As the title indicates, it combines the presentation of recent methodological results concerning computational models and results inspired by neural networks, and of well-documented applications which illustrate the use of such models in the solution of difficult problems. The volume is balanced with respect to these two orientations: it contains six papers concerning methodological developments and five papers concerning applications and examples illustrating the theoret

  13. 3-D components of a biological neural network visualized in computer generated imagery. I - Macular receptive field organization

    Science.gov (United States)

    Ross, Muriel D.; Cutler, Lynn; Meyer, Glenn; Lam, Tony; Vaziri, Parshaw

    1990-01-01

    Computer-assisted, 3-dimensional reconstructions of macular receptive fields and of their linkages into a neural network have revealed new information about macular functional organization. Both type I and type II hair cells are included in the receptive fields. The fields are rounded, oblong, or elongated, but gradations between categories are common. Cell polarizations are divergent. Morphologically, each calyx of oblong and elongated fields appears to be an information processing site. Intrinsic modulation of information processing is extensive and varies with the kind of field. Each reconstructed field differs in detail from every other, suggesting that an element of randomness is introduced developmentally and contributes to endorgan adaptability.

  14. Improving Error Resilience Analysis Methodology of Iterative Workloads for Approximate Computing

    NARCIS (Netherlands)

    Gillani, G.A.; Kokkeler, Andre B.J.

    2017-01-01

    Assessing error resilience inherent to the digital processing workloads provides application-specific insights towards approximate computing strategies for improving power efficiency and/or performance. With the case study of radio astronomy calibration, our contributions for improving the error

  15. Modeling of biosorption of Cu(II) by alkali-modified spent tea leaves using response surface methodology (RSM) and artificial neural network (ANN)

    Science.gov (United States)

    Ghosh, Arpita; Das, Papita; Sinha, Keka

    2015-06-01

    In the present work, spent tea leaves were modified with Ca(OH)2 and used as a new, non-conventional and low-cost biosorbent for the removal of Cu(II) from aqueous solution. Response surface methodology (RSM) and artificial neural network (ANN) were used to develop predictive models for simulation and optimization of the biosorption process. The influence of process parameters (pH, biosorbent dose and reaction time) on the biosorption efficiency was investigated through a two-level three-factor (2³) full factorial central composite design with the help of Design Expert. The same design was also used to obtain a training set for the ANN. Finally, both modeling methodologies were statistically compared by the root mean square error and absolute average deviation based on the validation data set. Results suggest that RSM has better prediction performance than ANN. The biosorption followed the Langmuir adsorption isotherm and pseudo-second-order kinetics. The optimum removal efficiency of the adsorbent was found to be 96.12%.
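
    For reference, the two fitted relations named above in their standard textbook forms: the Langmuir isotherm for equilibrium uptake q_e at equilibrium concentration C_e, and the linearized pseudo-second-order rate law (parameters here are generic symbols, not the study's fitted values):

```latex
\[
  q_e \;=\; \frac{q_{\max}\, K_L\, C_e}{1 + K_L\, C_e},
  \qquad
  \frac{t}{q_t} \;=\; \frac{1}{k_2\, q_e^{2}} \;+\; \frac{t}{q_e}
\]
```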

  16. Modeling and optimization of anaerobic codigestion of potato waste and aquatic weed by response surface methodology and artificial neural network coupled genetic algorithm.

    Science.gov (United States)

    Jacob, Samuel; Banerjee, Rintu

    2016-08-01

    A novel approach to overcoming the acidification problem has been attempted in the present study by codigesting industrial potato waste (PW) with Pistia stratiotes (PS, an aquatic weed). The effectiveness of codigestion of the weed and PW was tested in an equal (1:1) proportion by weight with a substrate concentration of 5 g total solid (TS)/L (2.5 g PW + 2.5 g PS), which resulted in an enhancement of methane yield by 76.45% compared to monodigestion of PW, with a positive synergistic effect. Optimization of process parameters was conducted using a central composite design (CCD) based response surface methodology (RSM) and an artificial neural network (ANN) coupled genetic algorithm (GA) model. Upon comparison of these two optimization techniques, the ANN-GA model obtained through feed-forward back-propagation methodology was found to be efficient and yielded 447.4 ± 21.43 L CH4/kg VS fed (0.279 g CH4/kg CODvs), which is 6% higher than the CCD-RSM based approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
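
    The following sketch illustrates the ANN-GA coupling this record describes: a genetic algorithm searches the input space of a trained surrogate for the settings that maximize the predicted yield. The surrogate function, bounds, and GA settings are stand-in assumptions; in practice the surrogate would be the trained feed-forward network.

```python
# Minimal sketch of a GA optimizing over a trained surrogate model; the
# surrogate below is a placeholder standing in for ann.predict.
import numpy as np

rng = np.random.default_rng(1)
LO = np.array([4.0, 20.0, 0.5])   # e.g. TS (g/L), temperature, PW:PS ratio
HI = np.array([8.0, 45.0, 2.0])   # bounds are illustrative assumptions

def surrogate(x):                 # stand-in for the trained ANN's prediction
    return -np.sum((x - (LO + HI) / 2) ** 2, axis=1)

pop = rng.uniform(LO, HI, size=(40, 3))
for gen in range(100):
    fit = surrogate(pop)
    order = np.argsort(fit)[::-1]
    parents = pop[order[:20]]                       # truncation selection
    a = parents[rng.integers(0, 20, 40)]
    b = parents[rng.integers(0, 20, 40)]
    w = rng.random((40, 1))
    children = w * a + (1 - w) * b                  # blend crossover
    children += rng.normal(0, 0.05, children.shape) * (HI - LO)  # mutation
    pop = np.clip(children, LO, HI)

best = pop[np.argmax(surrogate(pop))]
print("GA-optimal settings:", np.round(best, 3))
```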

  17. Methodologies for the assessment of earthquake-triggered landslides hazard. A comparison of Logistic Regression and Artificial Neural Network models.

    Science.gov (United States)

    García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.

    2009-04-01

    In recent years, interest in landslide hazard assessment studies has increased substantially. They are appropriate for evaluation and mitigation plan development in landslide-prone areas. There are several techniques available for landslide hazard research at a regional scale. Generally, they can be classified in two groups: qualitative and quantitative methods. Most qualitative methods tend to be subjective, since they depend on expert opinions and represent hazard levels in descriptive terms. On the other hand, quantitative methods are objective and are commonly used due to the correlation between the instability factors and the location of the landslides. Within this group, statistical approaches and new heuristic techniques based on artificial intelligence (artificial neural networks (ANN), fuzzy logic, etc.) provide rigorous analysis to assess landslide hazard over large regions. However, they depend on the qualitative and quantitative data, scale, types of movements and characteristic factors used. We analysed and compared an approach for assessing earthquake-triggered landslide hazard using logistic regression (LR) and artificial neural networks (ANN) with a back-propagation learning algorithm. One application has been developed in El Salvador, a country of Central America where earthquake-triggered landslides are a usual phenomenon. In a first phase, we analysed the susceptibility and hazard associated with the seismic scenario of the 13 January 2001 earthquake. We calibrated the models using data from the landslide inventory for this scenario. These analyses require input variables representing physical parameters that contribute to the initiation of slope instability, for example, slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness, while the occurrence or non-occurrence of landslides is considered as the dependent variable. The results of the landslide susceptibility analysis are checked using landslide

  18. An integrated impact assessment and weighting methodology: evaluation of the environmental consequences of computer display technology substitution.

    Science.gov (United States)

    Zhou, Xiaoying; Schoenung, Julie M

    2007-04-01

    Computer display technology is currently in a state of transition, as the traditional technology of cathode ray tubes is being replaced by liquid crystal display flat-panel technology. Technology substitution and process innovation require the evaluation of the trade-offs among environmental impact, cost, and engineering performance attributes. General impact assessment methodologies, decision analysis and management tools, and optimization methods commonly used in engineering cannot efficiently address the issues needed for such evaluation. The conventional Life Cycle Assessment (LCA) process often generates results that can be subject to multiple interpretations, although the advantages of the LCA concept and framework are widely recognized. In the present work, the LCA concept is integrated with Quality Function Deployment (QFD), a popular industrial quality management tool, which is used as the framework for the development of our integrated model. The problem of weighting is addressed by using pairwise comparison of stakeholder preferences. Thus, this paper presents a new integrated analytical approach, Integrated Industrial Ecology Function Deployment (I2-EFD), to assess the environmental behavior of alternative technologies in correlation with their performance and economic characteristics. Computer display technology is used as the case study to further develop our methodology through the modification and integration of various quality management tools (e.g., process mapping, prioritization matrix) and statistical methods (e.g., multi-attribute analysis, cluster analysis). Life cycle thinking provides the foundation for our methodology, as we utilize a published LCA report, which stopped at the characterization step, as our starting point. Further, we evaluate the validity and feasibility of our methodology by considering uncertainty and conducting sensitivity analysis.

  19. Comparison of different computer models of the neural control system of the lower urinary tract

    NARCIS (Netherlands)

    van Duin, F.; Rosier, P. F.; Bemelmans, B. L.; Wijkstra, H.; Debruyne, F. M.; van Oosterom, A.

    2000-01-01

    This paper presents a series of five models that were formulated for describing the neural control of the lower urinary tract in humans. A parsimonious formulation of the effect of the sympathetic system, the pre-optic area, and urethral afferents on the simulated behavior is included. In spite of

  20. A methodology to urban air quality assessment during large time periods of winter using computational fluid dynamic models

    Science.gov (United States)

    Parra, M. A.; Santiago, J. L.; Martín, F.; Martilli, A.; Santamaría, J. M.

    2010-06-01

    The representativeness of point measurements in urban areas is limited due to the strong heterogeneity of the atmospheric flows in cities. To get information on air quality in the gaps between measurement points, and have a 3D field of pollutant concentration, Computational Fluid Dynamic (CFD) models can be used. However, unsteady simulations during time periods of the order of months, often required for regulatory purposes, are not possible for computational reasons. The main objective of this study is to develop a methodology to evaluate the air quality in a real urban area during large time periods by means of steady CFD simulations. One steady simulation for each inlet wind direction was performed and factors like the number of cars inside each street, the length of streets and the wind speed and direction were taken into account to compute the pollutant concentration. This approach is only valid in winter time when the pollutant concentrations are less affected by atmospheric chemistry. A model based on the steady-state Reynolds-Averaged Navier-Stokes equations (RANS) and standard k-ɛ turbulence model was used to simulate a set of 16 different inlet wind directions over a real urban area (downtown Pamplona, Spain). The temporal series of NOx and PM10 and the spatial differences in pollutant concentration of NO2 and BTEX obtained were in agreement with experimental data. Inside the urban canopy, an important influence of urban boundary layer dynamics on the pollutant concentration patterns was observed. Large concentration differences between different zones of the same square were found. This showed that concentration levels measured by an automatic monitoring station depend on its location in the street or square, and a modelling methodology like this is useful to complement the experimental information. On the other hand, this methodology can also be applied to evaluate abatement strategies by redistributing traffic emissions.
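
    A minimal sketch of the reconstruction idea in this record follows: one steady concentration field is stored per inlet wind direction, and an hourly series at a receptor is assembled by selecting the field for each hour's wind sector and rescaling it. The linear scaling with emission rate and inverse wind speed, and all numbers, are assumptions for illustration.

```python
# Sketch: hourly concentrations assembled from 16 steady CFD solutions,
# one per inlet wind direction; values and scaling rule are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_dirs = 16
# Reference concentration at one receptor for each steady run, computed
# at reference emission E0 and wind speed U0 (synthetic values).
c_ref = rng.uniform(20, 80, n_dirs)
E0, U0 = 1.0, 3.0

def hourly_concentration(wind_dir_deg, wind_speed, emission):
    sector = int(round(wind_dir_deg / (360 / n_dirs))) % n_dirs
    return c_ref[sector] * (emission / E0) * (U0 / max(wind_speed, 0.5))

# Example: one winter day of hourly meteorology and traffic emissions.
hours = np.arange(24)
wd = rng.uniform(0, 360, 24)                       # wind direction, deg
ws = rng.uniform(1, 6, 24)                         # wind speed, m/s
em = 0.5 + 0.5 * np.sin(np.pi * hours / 24) ** 2   # traffic cycle proxy
series = [hourly_concentration(d, s, e) for d, s, e in zip(wd, ws, em)]
print(np.round(series, 1))
```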

  1. Optimality in Microwave-Assisted Drying of Aloe Vera ( Aloe barbadensis Miller) Gel using Response Surface Methodology and Artificial Neural Network Modeling

    Science.gov (United States)

    Das, Chandan; Das, Arijit; Kumar Golder, Animes

    2016-10-01

    The present work illustrates the Microwave-Assisted Drying (MWAD) characteristics of aloe vera gel combined with process optimization and artificial neural network modeling. The influence of microwave power (160-480 W), gel quantity (4-8 g) and drying time (1-9 min) on the moisture ratio was investigated. The drying of aloe gel exhibited typical diffusion-controlled characteristics with a predominant interaction between input power and drying time. A falling-rate period was observed for the entire MWAD of aloe gel. A Face-centered Central Composite Design (FCCD) was used to develop a regression model to evaluate the effects of these variables on the moisture ratio. The optimal MWAD conditions were established as a microwave power of 227.9 W, sample amount of 4.47 g and 5.78 min drying time, corresponding to a moisture ratio of 0.15. A computer-simulated Artificial Neural Network (ANN) model was generated for mapping between process variables and the desired response. The 'Levenberg-Marquardt Back Propagation' algorithm with a 3-5-1 architecture gave the best prediction, and it showed a clear superiority over FCCD.

  2. A Computationally Inexpensive Optimal Guidance via Radial-Basis-Function Neural Network for Autonomous Soft Landing on Asteroids.

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    Full Text Available Optimal guidance is essential for the soft landing task. However, due to its high computational complexity, it is rarely applied in autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The optimization problem of the trajectory for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining the database of initial states with the relative initial co-states, an RBFNN is trained offline. The optimal trajectory of the soft landing is determined rapidly by applying the trained network in the online guidance. Monte Carlo simulations of soft landing on Eros433 are performed to demonstrate the effectiveness of the proposed guidance algorithm.

  3. A Computationally Inexpensive Optimal Guidance via Radial-Basis-Function Neural Network for Autonomous Soft Landing on Asteroids.

    Science.gov (United States)

    Zhang, Peng; Liu, Keping; Zhao, Bo; Li, Yuanchun

    2015-01-01

    Optimal guidance is essential for the soft landing task. However, due to its high computational complexity, it is rarely applied in autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The optimization problem of the trajectory for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining the database of initial states with the relative initial co-states, an RBFNN is trained offline. The optimal trajectory of the soft landing is determined rapidly by applying the trained network in the online guidance. Monte Carlo simulations of soft landing on Eros433 are performed to demonstrate the effectiveness of the proposed guidance algorithm.
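
    A minimal sketch of the offline RBFNN surrogate described in these two records, assuming radial basis features over stored initial states fitted by linear least squares to the corresponding optimal initial co-states; the state/co-state table is synthetic, standing in for a database produced by an offline TPBVP solver.

```python
# Sketch of an RBF network mapping initial states to initial co-states;
# the training table is synthetic, not solver output.
import numpy as np

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, (200, 6))            # initial states (pos+vel), scaled
Y = np.tanh(X @ rng.normal(size=(6, 6)))    # stand-in optimal co-states

centers = X[rng.choice(200, 30, replace=False)]
width = 0.5

def rbf_features(x):
    # Gaussian radial basis features over the stored centers.
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

W, *_ = np.linalg.lstsq(rbf_features(X), Y, rcond=None)  # output weights

x_new = rng.uniform(-1, 1, (1, 6))
lam0 = rbf_features(x_new) @ W              # fast online co-state estimate
print(np.round(lam0, 3))
```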

  4. Convolutional Neural Network on Embedded Linux(trademark) System-on-Chip: A Methodology and Performance Benchmark

    Science.gov (United States)

    2016-05-01

    on a specific dataset with minimal concern for compute resources. What are the Pareto-optimal points and trade-offs for an energy-efficient CNN, when ... system function calls. CNNs operate in two phases or modes: training and testing. The goal of training is to determine the optimal parameter values in ... accuracy performance. The research question that follows is mostly unanswered: what are the Pareto-optimal points and trade-offs for an energy

  5. A Methodology to Reduce the Computational Effort in the Evaluation of the Lightning Performance of Distribution Networks

    Directory of Open Access Journals (Sweden)

    Ilaria Bendato

    2016-11-01

    Full Text Available The estimation of the lightning performance of a power distribution network is of great importance to design its protection system against lightning. An accurate evaluation of the number of lightning events that can create dangerous overvoltages requires a huge computational effort, as it implies the adoption of a Monte Carlo procedure. Such a procedure consists of generating many different random lightning events and calculating the corresponding overvoltages. The paper proposes a methodology to deal with the problem in two computationally efficient ways: (i) finding out the minimum number of Monte Carlo runs that lead to reliable results; and (ii) setting up a procedure that bypasses the lightning field-to-line coupling problem for each Monte Carlo run. The proposed approach is shown to provide results consistent with existing approaches while exhibiting superior Central Processing Unit (CPU) time performance.
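
    Item (i) above can be sketched as a sequential stopping rule: random lightning events are generated until the relative standard error of the estimated dangerous-event probability falls below a tolerance. The event model and thresholds below are placeholders for the actual field-to-line coupling computation.

```python
# Sketch of a minimum-run Monte Carlo stopping rule; the overvoltage
# check is a placeholder, not the paper's coupling model.
import numpy as np

rng = np.random.default_rng(3)

def dangerous(event):                 # placeholder overvoltage criterion
    current, distance = event
    return current / (1.0 + distance) > 8.0

tol, n_min = 0.05, 1000               # relative-error tolerance, warm-up runs
hits, n = 0, 0
while True:
    event = (rng.lognormal(np.log(30), 0.5), rng.uniform(0, 500))
    hits += dangerous(event)
    n += 1
    if n >= n_min and hits > 0:
        p = hits / n
        rel_se = np.sqrt((1 - p) / (p * n))   # relative std. error of p-hat
        if rel_se < tol:
            break
print(f"stopped after {n} runs, P(dangerous) = {hits/n:.4f}")
```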

  6. Holistic Development of Computer Engineering Curricula Using Y-Chart Methodology

    Science.gov (United States)

    Rashid, Muhammad; Tasadduq, Imran A.

    2014-01-01

    The exponential growth of advancing technologies is pushing curriculum designers in computer engineering (CpE) education to compress more and more content into the typical 4-year program, without necessarily paying much attention to the cohesiveness of those contents. The result has been highly fragmented curricula consisting of various…

  7. Oil Well Blowout 3D computational modeling: review of methodology and environmental requirements

    Directory of Open Access Journals (Sweden)

    Pedro Mello Paiva

    2016-12-01

    Full Text Available This literature review aims to present the different methodologies used in the three-dimensional modeling of hydrocarbon dispersion originating from an oil well blowout. It presents the concepts of coastal environmental sensitivity and vulnerability, their importance for prioritizing the most vulnerable areas in case of contingency, and the relevant legislation. We also discuss some limitations of the methodology currently used in environmental studies of oil drift, which assumes a simplified spill at the surface, even in the well blowout scenario. Efforts to better understand the behavior of oil and gas in the water column and three-dimensional modeling of the trajectory gained strength after the Deepwater Horizon spill in 2010 in the Gulf of Mexico. The data collected and the observations made during the accident were widely used for adjustment of the models, incorporating various factors related to hydrodynamic forcing and weathering processes to which the hydrocarbons are subjected during subsurface leaks. The difficulties are even greater in the case of blowouts in deep waters, where the uncertainties are still larger. The studies addressed different variables to make adjustments of oil and gas dispersion models along the upward trajectory. Factors that exert strong influences include: speed of the subsurface currents; gas separation from the main plume; hydrate formation; dissolution of oil and gas droplets; variations in droplet diameter; intrusion of the droplets at intermediate depths; biodegradation; and appropriate parametrization of the density, salinity and temperature profiles of the water column.

  8. Computational methodology for solubility prediction: Application to the sparingly soluble solutes.

    Science.gov (United States)

    Li, Lunna; Totton, Tim; Frenkel, Daan

    2017-06-07

    The solubility of a crystalline substance in the solution can be estimated from its absolute solid free energy and excess solvation free energy. Here, we present a numerical method, which enables convenient solubility estimation of general molecular crystals at arbitrary thermodynamic conditions where solid and solution can coexist. The methodology is based on standard alchemical free energy methods, such as thermodynamic integration and free energy perturbation, and consists of two parts: (1) systematic extension of the Einstein crystal method to calculate the absolute solid free energies of molecular crystals at arbitrary temperatures and pressures and (2) a flexible cavity method that can yield accurate estimates of the excess solvation free energies. As an illustration, via classical Molecular Dynamics simulations, we show that our approach can predict the solubility of OPLS-AA-based (Optimized Potentials for Liquid Simulations All Atomic) naphthalene in SPC (Simple Point Charge) water in good agreement with experimental data at various temperatures and pressures. Because the procedure is simple and general and only makes use of readily available open-source software, the methodology should provide a powerful tool for universal solubility prediction.
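
    The thermodynamic-integration step underlying this workflow can be sketched as the integral over the coupling parameter λ of the ensemble average ⟨dU/dλ⟩; the per-window averages below are placeholder numbers standing in for molecular dynamics output.

```python
# Sketch of a thermodynamic-integration free-energy estimate; the
# <dU/dlambda> averages are placeholders for MD sampling results.
import numpy as np

lam = np.linspace(0.0, 1.0, 11)               # coupling-parameter windows
dU_dlam = np.array([52.1, 44.8, 38.0, 31.7, 25.9, 20.6,
                    15.8, 11.5, 7.7, 4.4, 1.6])  # <dU/dl> per window, kJ/mol

delta_F = np.trapz(dU_dlam, lam)              # trapezoidal TI estimate
print(f"Delta F = {delta_F:.2f} kJ/mol")
```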

  9. Grid cells generate an analog error-correcting code for singularly precise neural computation.

    Science.gov (United States)

    Sreenivasan, Sameet; Fiete, Ila

    2011-09-11

    Entorhinal grid cells in mammals fire as a function of animal location, with spatially periodic response patterns. This nonlocal periodic representation of location, a local variable, is unlike other neural codes. There is no theoretical explanation for why such a code should exist. We examined how accurately the grid code with noisy neurons allows an ideal observer to estimate location and found this code to be a previously unknown type of population code with unprecedented robustness to noise. In particular, the representational accuracy attained by grid cells over the coding range was in a qualitatively different class from what is possible with observed sensory and motor population codes. We found that a simple neural network can effectively correct the grid code. To the best of our knowledge, these results are the first demonstration that the brain contains, and may exploit, powerful error-correcting codes for analog variables. © 2011 Nature America, Inc. All rights reserved.

  10. Better Computer Go Player with Neural Network and Long-term Prediction

    OpenAIRE

    Tian, Yuandong; Zhu, Yan

    2015-01-01

    Competing with top human players in the ancient game of Go has been a long-term goal of artificial intelligence. Go's high branching factor makes traditional search techniques ineffective, even on leading-edge hardware, and Go's evaluation function could change drastically with one stone change. Recent works [Maddison et al. (2015); Clark & Storkey (2015)] show that search is not strictly necessary for machine Go players. A pure pattern-matching approach, based on a Deep Convolutional Neural ...

  11. Spatial consistency of neural firing regulates long-range local field potential synchronization: a computational study.

    Science.gov (United States)

    Sato, Naoyuki

    2015-02-01

    Local field potentials (LFPs) are thought to integrate neuronal processes within a radius of a few millimeters, which corresponds to the scale of multiple columns. In this study, the model of LFP in the visual cortex proposed by Mazzoni et al. (2008) was adapted to organize a network of two cortical areas, in which pyramidal neurons were divided into two sub-populations modeling columns with spatially organized connections to neurons in other areas. Using the model enabled the relationship between neural firing and LFP to be evaluated, in addition to the LFP coherence between the two areas. Results showed that: (1) neurons in a particular sub-population generated the LFP in the area; (2) the spatial consistency of neural firing in the two areas was strongly correlated with LFP coherence; and (3) this consistency was capable of regulating LFP coherence in a lower frequency band, which was originally introduced to neurons in a particular sub-population. These results were derived from a winner-take-all operation in the columnar structure; thus, they are expected to be common in the cortex. It is suggested that the spatial consistency of neural firing is essential for regulating long-range LFP synchronization, which would facilitate neuronal integration processes over multiple cortical areas. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Computing elastic‐rebound‐motivated earthquake probabilities in unsegmented fault models: a new methodology supported by physics‐based simulators

    Science.gov (United States)

    Field, Edward H.

    2015-01-01

    A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
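
    A minimal sketch of the conditional-probability computation with along-fault-averaged renewal parameters, assuming the Brownian passage time (inverse Gaussian) renewal model, one common choice in this setting; the recurrence intervals, aperiodicity, and open interval below are made-up values.

```python
# Sketch of an elastic-rebound conditional rupture probability using a
# BPT renewal model with along-fault-averaged parameters (synthetic data).
import numpy as np
from scipy.stats import invgauss

# Along-fault average of renewal parameters over the subsections a rupture
# would span (values are illustrative).
mean_ri = np.mean([180.0, 220.0, 205.0])    # mean recurrence interval, years
alpha = 0.3                                  # aperiodicity (COV)

# BPT with mean m and aperiodicity a corresponds to
# invgauss(mu=a**2, scale=m/a**2) in scipy's parametrization.
bpt = invgauss(mu=alpha**2, scale=mean_ri / alpha**2)

t_open, dt = 150.0, 30.0                     # open interval and horizon, years
p_cond = (bpt.cdf(t_open + dt) - bpt.cdf(t_open)) / bpt.sf(t_open)
print(f"P(rupture in next {dt:.0f} yr | open {t_open:.0f} yr) = {p_cond:.3f}")
```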

  13. Computational evaluation of some indenopyrazole derivatives as anticancer compounds; application of QSAR and docking methodologies.

    Science.gov (United States)

    Shahlaei, Mohsen; Fassihi, Afshin; Saghaie, Lotfollah; Arkan, Elham; Madadkar-Sobhani, Armin; Pourhossein, Alireza

    2013-02-01

    A computational procedure was performed on some indenopyrazole derivatives. Two important procedures in computational drug discovery, namely docking for modeling ligand-receptor interactions and quantitative structure-activity relationships, were employed. MIA-QSAR analysis of the studied derivatives produced a model with high predictability. The developed model was then used to evaluate the bioactivity of 54 proposed indenopyrazole derivatives. In order to confirm the results obtained through this ligand-based method, docking was performed on the selected compounds. An ADME-Tox evaluation was also carried out to search for more suitable compounds. Satisfactory bioactivities and ADME-Tox profiles for two of the compounds, namely 62 and S13, suggest that further studies should be performed on these chemical structures.

  14. Multiscale Computational Fluid Dynamics: Methodology and Application to PECVD of Thin Film Solar Cells

    Directory of Open Access Journals (Sweden)

    Marquis Crose

    2017-02-01

    Full Text Available This work focuses on the development of a multiscale computational fluid dynamics (CFD simulation framework with application to plasma-enhanced chemical vapor deposition of thin film solar cells. A macroscopic, CFD model is proposed which is capable of accurately reproducing plasma chemistry and transport phenomena within a 2D axisymmetric reactor geometry. Additionally, the complex interactions that take place on the surface of a-Si:H thin films are coupled with the CFD simulation using a novel kinetic Monte Carlo scheme which describes the thin film growth, leading to a multiscale CFD model. Due to the significant computational challenges imposed by this multiscale CFD model, a parallel computation strategy is presented which allows for reduced processing time via the discretization of both the gas-phase mesh and microscopic thin film growth processes. Finally, the multiscale CFD model has been applied to the PECVD process at industrially relevant operating conditions revealing non-uniformities greater than 20% in the growth rate of amorphous silicon films across the radius of the wafer.

  15. Development of dynamic control rod reactivity measurement methodology and computer code system for PWR

    Energy Technology Data Exchange (ETDEWEB)

    Zee, Sung Quun; Lee, Chung Chan; Song, Jae Seung [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-09-01

    In order to apply the dynamic control rod reactivity measurement (DCRM) method to domestic nuclear power reactors, the EPRC methodology 'Dynamic Reactivity Measurement of Rod Worth' was reviewed. Items requiring improvement in the three-dimensional kinetics code MASTER, developed by the Korea Atomic Energy Research Institute, for use in DCRM were also reviewed. The validity of the DORT two-dimensional synthesis method for calculating excore detector weighting factors was benchmarked against a Yonggwang Unit 3 three-dimensional TORT calculation. The consistency of MASTER static core calculation results using neutron cross sections generated by the commercial design tools PHENIX/ANC and DIT/ROCS was also verified via rodded and unrodded radial power distributions and control rod worth comparisons. 14 refs., 28 figs., 3 tabs. (Author)

  16. Neural network technologies

    Science.gov (United States)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  17. Evolutionary and Neural Computing Based Decision Support System for Disease Diagnosis from Clinical Data Sets in Medical Practice.

    Science.gov (United States)

    Sudha, M

    2017-09-27

    As a recent trend, various computational intelligence and machine learning approaches have been used for mining inferences hidden in large clinical databases to assist the clinician in strategic decision making. In any target data, irrelevant information may be detrimental, confusing the mining algorithm and degrading the prediction outcome. To address this issue, this study attempts to identify an intelligent approach to assist the disease diagnostic procedure using an optimal set of attributes instead of all attributes present in the clinical data set. In this proposed Application Specific Intelligent Computing (ASIC) decision support system, a rough set based genetic algorithm is employed in the pre-processing phase and a back propagation neural network is applied in the training and testing phase. ASIC has two phases: the first phase handles outliers, noisy data, and missing values to obtain qualitative target data and to generate appropriate attribute reduct sets from the input data, using a rough computing based genetic algorithm centred on a relative fitness function measure. The succeeding phase of this system involves both training and testing of a back propagation neural network classifier on the selected reducts. The model performance is evaluated against widely adopted existing classifiers. The proposed ASIC system for clinical decision support has been tested with breast cancer, fertility diagnosis and heart disease data sets from the University of California at Irvine (UCI) machine learning repository. The proposed system outperformed the existing approaches, attaining accuracy rates of 95.33%, 97.61%, and 93.04% for breast cancer, fertility issue and heart disease diagnosis, respectively.

  18. HRP's Healthcare Spin-Offs Through Computational Modeling and Simulation Practice Methodologies

    Science.gov (United States)

    Mulugeta, Lealem; Walton, Marlei; Nelson, Emily; Peng, Grace; Morrison, Tina; Erdemir, Ahmet; Myers, Jerry

    2014-01-01

    Spaceflight missions expose astronauts to novel operational and environmental conditions that pose health risks that are currently not well understood, and perhaps unanticipated. Furthermore, given the limited number of humans that have flown in long duration missions and beyond low Earth orbit, the amount of research and clinical data necessary to predict and mitigate these health and performance risks is limited. Consequently, NASA's Human Research Program (HRP) conducts research and develops advanced methods and tools to predict, assess, and mitigate potential hazards to the health of astronauts. In this light, NASA has explored the possibility of leveraging computational modeling since the 1970s as a means to elucidate the physiologic risks of spaceflight and develop countermeasures. Since that time, substantial progress has been realized in this arena through a number of HRP-funded activities such as the Digital Astronaut Project (DAP) and the Integrated Medical Model (IMM). Much of this success can be attributed to HRP's endeavor to establish rigorous verification, validation, and credibility (VV&C) processes that ensure computational models and simulations (M&S) are sufficiently credible to address issues within their intended scope. This presentation summarizes HRP's activities in credibility of modeling and simulation, in particular through its outreach to the community of modeling and simulation practitioners. METHODS: The HRP requires that all M&S that can have moderate to high impact on crew health or mission success be vetted in accordance with the NASA Standard for Models and Simulations, NASA-STD-7009 (7009) [5]. As this standard mostly focuses on engineering systems, the IMM and DAP have invested substantial efforts to adapt the processes established in this standard for their application to biological M&S, which is more prevalent in human health and performance (HHP) and space biomedical research and operations [6,7]. These methods have also generated

  19. A methodology for selecting an optimal experimental design for the computer analysis of a complex system

    Energy Technology Data Exchange (ETDEWEB)

    RUTHERFORD,BRIAN M.

    2000-02-03

    Investigation and evaluation of a complex system is often accomplished through the use of performance measures based on system response models. The response models are constructed using computer-generated responses supported where possible by physical test results. The general problem considered is one where resources and system complexity together restrict the number of simulations that can be performed. The levels of input variables used in defining environmental scenarios, initial and boundary conditions and for setting system parameters must be selected in an efficient way. This report describes an algorithmic approach for performing this selection.

  20. Computational Efficient Upscaling Methodology for Predicting Thermal Conductivity of Nuclear Waste forms

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2011-09-28

    This study evaluated different upscaling methods to predict thermal conductivity in loaded nuclear waste form, a heterogeneous material system. The efficiency and accuracy of these methods were compared. Thermal conductivity in loaded nuclear waste form is an important property of specific interest to scientific researchers and to the waste form Integrated Performance and Safety Code (IPSC). The effective thermal conductivity obtained from microstructure information and the local thermal conductivity of the different components is critical in predicting the life and performance of waste form during storage. The heat generated during storage is directly related to thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance and aging performance. Several methods, including the Taylor model, Sachs model, self-consistent model, and statistical upscaling models, were developed and implemented. Due to the absence of experimental data, prediction results from the finite element method (FEM) were used as a reference to determine the accuracy of the different upscaling models. Micrographs from different loadings of nuclear waste were used in the prediction of thermal conductivity. Prediction results demonstrated that, in terms of efficiency, the boundary models (Taylor and Sachs) are better than the self-consistent model, the statistical upscaling method and FEM. Balancing computational resources and accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste forms.
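
    The two boundary models named in this record can be sketched directly: the Taylor (Voigt-type) estimate is the volume-weighted arithmetic mean of the phase conductivities and the Sachs (Reuss-type) estimate is the harmonic mean. The phase data below are illustrative, not waste-form measurements.

```python
# Sketch of Taylor (arithmetic) and Sachs (harmonic) bounds on the
# effective thermal conductivity of a two-phase composite (toy values).
import numpy as np

k = np.array([1.1, 4.5])   # phase conductivities, W/(m.K): glass, ceramic
f = np.array([0.7, 0.3])   # volume fractions from the micrograph

k_taylor = np.sum(f * k)           # upper (uniform-gradient) bound
k_sachs = 1.0 / np.sum(f / k)      # lower (uniform-flux) bound
print(f"Taylor: {k_taylor:.3f}  Sachs: {k_sachs:.3f} W/(m.K)")
```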

  1. Substrate tunnels in enzymes: structure-function relationships and computational methodology.

    Science.gov (United States)

    Kingsley, Laura J; Lill, Markus A

    2015-04-01

    In enzymes, the active site is the location where incoming substrates are chemically converted to products. In some enzymes, this site is deeply buried within the core of the protein, and, in order to access the active site, substrates must pass through the body of the protein via a tunnel. In many systems, these tunnels act as filters and have been found to influence both substrate specificity and catalytic mechanism. Identifying and understanding how these tunnels exert such control has been of growing interest over the past several years because of implications in fields such as protein engineering and drug design. This growing interest has spurred the development of several computational methods to identify and analyze tunnels and how ligands migrate through these tunnels. The goal of this review is to outline how tunnels influence substrate specificity and catalytic efficiency in enzymes with buried active sites and to provide a brief summary of the computational tools used to identify and evaluate these tunnels. © 2015 Wiley Periodicals, Inc.

  2. A new methodology to compute the gravitational contribution of a spherical tesseroid based on the analytical solution of a sector of a spherical zonal band

    Science.gov (United States)

    Marotta, Anna Maria; Barzaghi, Riccardo

    2017-10-01

    A new methodology for computing the gravitational effect of a spherical tesseroid has been devised and implemented. The methodology is based on the rotation from the global Earth-Centred Rotational reference frame to the local Earth-Centred P-Rotational reference frame, referred to the computation point P, and it requires knowledge of the height and the angular extension of each topographic column. After rotation, the gravitational effect of the tesseroid is computed via the effect of a sector of the spherical zonal band. In this respect, two possible procedures for handling the rotated tesseroids have been proposed and tested. The results obtained with the devised methodology are in good agreement with those derived by applying other existing methodologies.

  3. The mechanical design of a transfemoral prosthesis using computational tools and design methodology

    Directory of Open Access Journals (Sweden)

    John Sánchez Otero

    2012-12-01

    Full Text Available Artificial limb replacement with lower limb prostheses has been widely reported in the current scientific literature. There are many lower limb prosthetic designs, ranging from a single-axis knee mechanism to complex mechanisms involving microcontrollers, made from many materials ranging from lightweight, high specific strength ones (e.g., carbon fibre) to traditional forms (e.g., stainless steel). However, the challenge is to design prostheses whose movement resembles the human body's natural movement as closely as possible. Advances in prosthetics have enabled many amputees to return to their everyday activities; however, such prostheses are expensive, some costing as much as $60,000. Many of the affected population in Colombia have scarce economic resources; there is therefore a need to develop affordable functional prostheses. The Universidad del Norte's Materials, Processes and Design Research Group and the Robotics and Intelligent Systems Group have been working on this line of research to develop modular prostheses which can be adjusted to each patient's requirements. This research represents an initial methodological approach to developing a prosthesis in which software tools (the finite element method) have been used with a criteria relationship matrix for selecting the best alternative while considering different aspects such as modularity, cost, stiffness and weight.

  4. Matching Behavior as a Tradeoff Between Reward Maximization and Demands on Neural Computation [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Jan Kubanek

    2015-10-01

    Full Text Available When faced with a choice, humans and animals commonly distribute their behavior in proportion to the frequency of payoff of each option. Such behavior is referred to as matching and has been captured by the matching law. However, matching is not a general law of economic choice. Matching in its strict sense seems to be specifically observed in tasks whose properties make matching an optimal or a near-optimal strategy. We engaged monkeys in a foraging task in which matching was not the optimal strategy. Over-matching the proportions of the mean offered reward magnitudes would yield more reward than matching, yet, surprisingly, the animals almost exactly matched them. To gain insight into this phenomenon, we modeled the animals' decision-making using a mechanistic model. The model accounted for the animals' macroscopic and microscopic choice behavior. When the model's three parameters were not constrained to mimic the monkeys' behavior, the model over-matched the reward proportions and in doing so, harvested substantially more reward than the monkeys. This optimized model revealed a marked bottleneck in the monkeys' choice function that compares the value of the two options. The model featured a very steep value comparison function relative to that of the monkeys. The steepness of the value comparison function had a profound effect on the earned reward and on the level of matching. We implemented this value comparison function through responses of simulated biological neurons. We found that due to the presence of neural noise, steepening the value comparison requires an exponential increase in the number of value-coding neurons. Matching may be a compromise between harvesting satisfactory reward and the high demands placed by neural noise on optimal neural computation.
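
    The steepness effect this record describes can be sketched with a logistic value comparison corrupted by noise: a shallow comparison yields choice fractions near the magnitude fractions (matching), while a steep one pushes behavior toward maximization (over-matching). The task structure and all numbers are illustrative assumptions, not the study's model.

```python
# Toy simulation of a noisy logistic value comparison at three steepness
# levels; choice fractions are compared against the magnitude fraction.
import numpy as np

rng = np.random.default_rng(4)
m = np.array([3.0, 1.0])                   # mean offered magnitudes
frac_m = m[0] / m.sum()                    # magnitude proportion = 0.75

for beta in [0.5, 2.0, 10.0]:              # steepness of value comparison
    noise = rng.normal(0, 1.0, 10000)      # noise on the comparison signal
    p_choose_0 = 1 / (1 + np.exp(-beta * (m[0] - m[1]) + noise))
    choices = rng.random(10000) < p_choose_0
    print(f"beta={beta:4.1f}: choice fraction {choices.mean():.2f} "
          f"vs magnitude fraction {frac_m:.2f}")
```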

  5. Separation and determination of honokiol and magnolol in Chinese traditional medicines by capillary electrophoresis with the application of response surface methodology and radial basis function neural network.

    Science.gov (United States)

    Han, Ping; Luan, Feng; Yan, Xizu; Gao, Yuan; Liu, Huitao

    2012-01-01

    A method for the separation and determination of honokiol and magnolol in Magnolia officinalis and its medicinal preparation is developed by capillary zone electrophoresis and response surface methodology. The concentration of borate, content of organic modifier, and applied voltage are selected as variables. The optimized conditions (i.e., 16 mmol/L sodium tetraborate at pH 10.0, 11% methanol, applied voltage of 25 kV and UV detection at 210 nm) are obtained and successfully applied to the analysis of honokiol and magnolol in Magnolia officinalis and Huoxiang Zhengqi Liquid. Good separation is achieved within 6 min. The limits of detection are 1.67 µg/mL for honokiol and 0.83 µg/mL for magnolol, respectively. In addition, an artificial neural network with "3-7-1" structure based on the ratio of peak resolution to the migration time of the later component (R(s)/t) given by Box-Behnken design is also reported, and the predicted results are in good agreement with the values given by the mathematic software and the experimental results. © The Author [2011]. Published by Oxford University Press. All rights reserved.

  6. Experimental Optimization and Modeling of Sodium Sulfide Production from H2S-Rich Off-Gas via Response Surface Methodology and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Bashipour Fatemeh

    2017-03-01

    Full Text Available The existence of hydrogen sulfide (H2S) in the gas effluents of oil, gas and petrochemical industries causes environmental pollution and equipment corrosion. These gas streams, called off-gas, have high H2S concentration, which can be used to produce sodium sulfide (Na2S) by H2S reactive absorption. Na2S has a wide variety of applications in chemical industries. In this study, the reactive absorption process was performed using a spray column. Response Surface Methodology (RSM) was applied to design and optimize experiments based on Central Composite Design (CCD). The individual and interactive effects of three independent operating conditions on the weight percent of the produced Na2S (Y) were investigated by RSM: initial NaOH concentration (10-20% w/w), scrubbing solution temperature (40-60 °C) and liquid-to-gas volumetric ratio (15 × 10−3 to 25 × 10−3). Furthermore, an Artificial Neural Network (ANN) model was used to predict Y. The results from RSM and ANN models were compared with experimental data by the regression analysis method. The optimum operating conditions specified by RSM resulted in Y of 15.5% at initial NaOH concentration of 19.3% w/w, scrubbing solution temperature of 40 °C and liquid-to-gas volumetric ratio of 24.6 × 10−3 v/v.

  7. Application of response surface methodology and artificial neural network: modeling and optimization of Cr(VI) adsorption process using Dowex 1X8 anion exchange resin.

    Science.gov (United States)

    Harbi, Soumaya; Guesmi, Fatma; Tabassi, Dorra; Hannachi, Chiraz; Hamrouni, Bechir

    2016-01-01

    We report the adsorption efficiency of Cr(VI) on a strong anionic resin, Dowex 1X8. The Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD) analysis of this adsorbent were investigated. Response surface methodology was applied to evaluate the main effects and interactions among initial pH, initial Cr(VI) concentration, adsorbent dose and temperature. Analysis of variance depicted that resin dose and initial pH were the most significant factors. The desirability function (DF) showed that the maximum Cr(VI) removal of 95.96% was obtained at initial pH 5, initial Cr(VI) concentration of 100 mg/L, resin dose of 2 g and temperature of 283 K. Additionally, a simulated industrial wastewater containing 14.95 mg/L of Cr(VI) was treated successfully by Dowex 1X8 at optimum conditions. The same experimental design was employed to develop the artificial neural network. Both models gave a high correlation coefficient (R² = 0.932 for RSM, R² = 0.996 for ANN).

  8. Medium optimization for pyrroloquinoline quinone (PQQ) production by Methylobacillus sp. zju323 using response surface methodology and artificial neural network-genetic algorithm.

    Science.gov (United States)

    Wei, Peilian; Si, Zhenjun; Lu, Yao; Yu, Qingfei; Huang, Lei; Xu, Zhinan

    2017-08-09

    Methylobacillus sp. zju323 was adopted to improve the biosynthesis of pyrroloquinoline quinone (PQQ) by systematic optimization of the fermentation medium. The Plackett-Burman design was implemented to screen for the key medium components for the PQQ production. CoCl2 · 6H2O, p-aminobenzoic acid, and MgSO4 · 7H2O were found capable of enhancing the PQQ production most significantly. A five-level three-factor central composite design was used to investigate the direct and interactive effects of these variables. Both response surface methodology (RSM) and artificial neural network-genetic algorithm (ANN-GA) were used to predict the PQQ production and to optimize the medium composition. The results showed that the medium optimized by ANN-GA was better than that by RSM in maximizing PQQ production, and the experimental PQQ concentration in the ANN-GA-optimized medium was improved by 44.3% compared with that in the unoptimized medium. Further study showed that this ANN-GA-optimized medium was also effective in improving PQQ production in fed-batch mode, reaching the highest PQQ accumulation of 232.0 mg/L, an increase of about 47.6% relative to that in the original medium. The present work provided an optimized medium and developed a fed-batch strategy which might be potentially applicable in industrial PQQ production.
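
    The Plackett-Burman screening step in this record can be sketched as follows, assuming the classical 12-run design built by cyclic shifts of the standard generator row; the mapping of columns to medium components and the response vector are placeholders.

```python
# Sketch of a 12-run Plackett-Burman screening design and main-effect
# estimates; factor assignment and responses are placeholders.
import numpy as np

# Classical 12-run Plackett-Burman generator row (+ = high, - = low).
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
rows = [np.roll(gen, i) for i in range(11)]
design = np.vstack(rows + [-np.ones(11, dtype=int)])  # 12 runs x 11 factors

# Hypothetical assignment of the first three columns to medium components.
factors = ["CoCl2.6H2O", "p-aminobenzoic acid", "MgSO4.7H2O"]

# Main effect of a factor = mean(y at +1) - mean(y at -1); with +1/-1
# coding and six runs at each level, this is the dot product divided by 6.
y = np.array([8.2, 7.9, 9.5, 6.8, 10.1, 7.4,
              6.9, 8.8, 7.1, 9.0, 6.5, 7.0])   # placeholder PQQ titers
effects = design.T @ y / 6
for name, eff in zip(factors, effects[:3]):
    print(f"{name}: effect = {eff:+.2f}")
```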

  9. Measurements of air kerma index in computed tomography: a comparison among methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, T. C.; Mourao, A. P.; Da Silva, T. A., E-mail: alonso@cdtn.br [Universidade Federal de Minas Gerais, Programa de Ciencia y Tecnicas Nucleares, Av. Pres. Antonio Carlos 6627, Pampulha, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2016-10-15

    Computed tomography (CT) has become the most important and widely used technique for diagnostic purposes. As CT exams impart high doses to patients in comparison with other radiological techniques, reliable dosimetry is required. Dosimetry in CT is done in terms of the air kerma index in air or in a phantom, measured by a pencil ionization chamber during a single X-ray tube rotation. In this work, a comparison among CT dosimetric quantities measured by an UNFORS pencil ionization chamber, MTS-N RADOS thermoluminescent dosimeters and GAFCHROMIC XR-CT radiochromic film was done. The three dosimetric systems were properly calibrated in X-ray reference radiations in a calibration laboratory. CT dosimetric quantities were measured with a Bright Speed scanner (GE Medical Systems Inc.) in a PMMA trunk phantom, and a comparison among the three dosimetric techniques was done. (Author)

  10. Neural Correlates of Racial Ingroup Bias in Observing Computer-Animated Social Encounters

    Directory of Open Access Journals (Sweden)

    Yuta Katsumi

    2018-01-01

    Full Text Available Despite evidence for the role of group membership in the neural correlates of social cognition, the mechanisms associated with processing non-verbal behaviors displayed by racially ingroup vs. outgroup members remain unclear. Here, 20 Caucasian participants underwent fMRI recording while observing social encounters with ingroup and outgroup characters displaying dynamic and static non-verbal behaviors. Dynamic behaviors included approach and avoidance behaviors, preceded or not by a handshake; both dynamic and static behaviors were followed by participants’ ratings. Behaviorally, participants showed bias toward their ingroup members, demonstrated by faster/slower reaction times for evaluating ingroup static/approach behaviors, respectively. At the neural level, despite overall similar responses in the action observation network to ingroup and outgroup encounters, the medial prefrontal cortex showed dissociable activation, possibly reflecting spontaneous processing of ingroup static behaviors and positive evaluations of ingroup approach behaviors. The anterior cingulate and superior frontal cortices also showed sensitivity to race, reflected in coordinated and reduced activation for observing ingroup static behaviors. Finally, the posterior superior temporal sulcus showed uniquely increased activity to observing ingroup handshakes. These findings shed light on the mechanisms of racial ingroup bias in observing social encounters, and have implications for understanding factors related to successful interactions with individuals from diverse backgrounds.

  11. The Prediction of Bandwidth On Need Computer Network Through Artificial Neural Network Method of Backpropagation

    Directory of Open Access Journals (Sweden)

    Ikhthison Mekongga

    2014-02-01

    Full Text Available The need for bandwidth has been increasing recently. This is because the development of internet infrastructure is also increasing, so we need an economic and efficient provider system. This can be achieved through good planning and a proper system. The prediction of bandwidth consumption is one of the factors that support the planning for an efficient internet service provider system. Bandwidth consumption is predicted using ANN. ANN is an information processing system which has characteristics similar to those of a biological neural network. ANN is chosen to predict the consumption of the bandwidth because ANN approximates non-linear behavior well. The variable used in ANN is the historical load data. A bandwidth consumption information system was built using neural networks with a backpropagation algorithm to make the use of bandwidth more efficient in the future, both in the rental rate of the bandwidth and in the usage of the bandwidth. Keywords: Forecasting, Bandwidth, Backpropagation

  12. Introduction to neural networks

    CERN Document Server

    James, Frederick E

    1994-02-02

    1. Introduction and overview of Artificial Neural Networks. 2, 3. The Feed-forward Network as an Inverse Problem, and results on the computational complexity of network training. 4. Physics applications of neural networks.

  13. Development of artificial neural network models based on experimental data of response surface methodology to establish the nutritional requirements of digestible lysine, methionine, and threonine in broiler chicks.

    Science.gov (United States)

    Mehri, M

    2012-12-01

    An artificial neural network (ANN) approach was used to develop feed-forward multilayer perceptron models to estimate the nutritional requirements of digestible lysine (dLys), methionine (dMet), and threonine (dThr) in broiler chicks. Sixty data lines representing the response of broiler chicks during 3 to 16 d of age to dietary levels of dLys (0.88-1.32%), dMet (0.42-0.58%), and dThr (0.53-0.87%) were obtained from the literature and used to train the networks. The prediction values of ANN were compared with those of response surface methodology to evaluate the fitness of these 2 methods. The models were tested using R(2), mean absolute deviation, mean absolute percentage error, and absolute average deviation. The random search algorithm was used to optimize the developed ANN models to estimate the optimal values of dietary dLys, dMet, and dThr. The ANN models were used to assess the relative importance of each dietary input on the bird performance using sensitivity analysis. The statistical evaluations revealed the higher accuracy of ANN in predicting the bird performance compared with response surface methodology models. The optimization results showed that the maximum BW gain may be obtained with dietary levels of 1.11, 0.51, and 0.78% of dLys, dMet, and dThr, respectively. Minimum feed conversion ratio may be achieved with dietary levels of 1.13, 0.54, 0.78% of dLys, dMet, and dThr, respectively. The sensitivity analysis on the models indicated that dietary Lys is the most important variable in the growth performance of the broiler chicks, followed by dietary Thr and Met. The results of this research revealed that the experimental data of a response-surface-methodology design could be successfully used to develop a well-designed ANN for pattern recognition of bird growth and optimization of nutritional requirements. The comparison between the 2 methods also showed that the statistical methods may have little effect on the ideal ratios of dMet and dThr to dLys in

  14. An Application of Artificial Neural Network to Compute the Resonant Frequency of E-Shaped Compact Microstrip Antennas

    Science.gov (United States)

    Akdagli, Ali; Toktas, Abdurrahim; Kayabasi, Ahmet; Develi, Ibrahim

    2013-09-01

    An application of artificial neural networks (ANN) based on multilayer perceptrons (MLP) to compute the resonant frequency of E-shaped compact microstrip antennas (ECMAs) is presented in this paper. The resonant frequencies of 144 ECMAs with different dimensions and electrical parameters were first determined by using IE3D(tm) software based on the method of moments (MoM); the ANN model for computing the resonant frequency was then built from the simulation data. The parameters and respective resonant frequency values of 130 simulated ECMAs were employed for training, and the remaining 14 ECMAs were used for testing the model. The resonant frequencies computed by the ANN for training and testing were obtained with average percentage errors (APE) of 0.257% and 0.523%, respectively. The validity and accuracy of the present approach were verified using the measurement results of an ECMA fabricated in this study. Furthermore, the effects of the slot loading method on the resonant frequency were investigated to explain the relationship between the slots and the resonant frequency.
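
    A short sketch of the average percentage error (APE) figure of merit used in this record to score the predictions; the frequency arrays are synthetic stand-ins for the simulated and ANN-computed values.

```python
# Sketch of the APE metric; input arrays are synthetic placeholders.
import numpy as np

def ape(f_target, f_computed):
    """Average percentage error, in percent."""
    f_target = np.asarray(f_target, dtype=float)
    f_computed = np.asarray(f_computed, dtype=float)
    return float(np.mean(np.abs(f_target - f_computed) / f_target) * 100)

f_sim = np.array([2.45, 3.10, 5.80])   # GHz, e.g. from a MoM simulation
f_ann = np.array([2.44, 3.12, 5.77])   # GHz, from the trained model
print(f"APE = {ape(f_sim, f_ann):.3f}%")
```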

  15. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    Directory of Open Access Journals (Sweden)

    WenBo Xiao

    Full Text Available In this article, we introduced an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data and are also influenced by the number of hidden neurons. The order of the solar generation power output influenced by the external conditions, from smallest to biggest, is: multi-, mono-, and amor- crystalline silicon cells. In addition, the dependence of power prediction on the number of hidden neurons was studied. For multi- and amorphous crystalline cells, three or four hidden layer units resulted in high correlation coefficients and low MSEs. For the mono-crystalline cell, the best results were achieved with 8 hidden layer units.

  16. Neural Correlates of Phrase Quadrature Perception in Harmonic Rhythm: An EEG Study (Using a Brain-Computer Interface).

    Science.gov (United States)

    Fernández-Sotos, Alicia; Martínez-Rodrigo, Arturo; Moncho-Bogani, José; Latorre, José Miguel; Fernández-Caballero, Antonio

    2017-11-13

    To establish the neural correlates of phrase quadrature perception in harmonic rhythm, a musical experiment was designed to induce music-evoked stimuli related to one important aspect of harmonic rhythm, namely phrase quadrature. Brain activity is translated into action through electroencephalography (EEG) by using a brain-computer interface. The power spectral value of each EEG channel is estimated to determine how power variance is distributed as a function of frequency. The results of processing the acquired signals are in line with previous studies that use different musical parameters to induce emotions. Indeed, our experiment shows statistical differences in the theta and alpha bands between the fulfillment and the break of phrase quadrature, an important cue of harmonic rhythm, in two classical sonatas.
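
    A minimal sketch of the band-power analysis this record implies, assuming Welch spectral estimates reduced to theta (4-8 Hz) and alpha (8-13 Hz) band powers for the two stimulus conditions; the signals are synthetic stand-ins for recorded EEG.

```python
# Sketch of per-condition theta/alpha band power from Welch PSDs;
# the "EEG" traces below are synthetic placeholders.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)
fs = 256                                   # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)

def band_power(x, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=512)
    sel = (f >= lo) & (f < hi)
    return np.trapz(pxx[sel], f[sel])      # integrate PSD over the band

# Two conditions: quadrature fulfilled vs. broken (alpha amplitude differs).
fulfilled = np.sin(2*np.pi*10*t) * 1.0 + rng.normal(0, 1, t.size)
broken = np.sin(2*np.pi*10*t) * 0.5 + rng.normal(0, 1, t.size)

for name, x in [("fulfilled", fulfilled), ("broken", broken)]:
    print(name, f"theta={band_power(x, 4, 8):.3f}",
          f"alpha={band_power(x, 8, 13):.3f}")
```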

  17. A neural network based computational model to predict the output power of different types of photovoltaic cells.

    Science.gov (United States)

    Xiao, WenBo; Nazario, Gina; Wu, HuaMing; Zhang, HuaMing; Cheng, Feng

    2017-01-01

    In this article, we introduced an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells, mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-) crystalline. The prediction results are very close to the experimental data, and were also influenced by numbers of hidden neurons. The order of the solar generation power output influenced by the external conditions from smallest to biggest is: multi-, mono-, and amor- crystalline silicon cells. In addition, the dependences of power prediction on the number of hidden neurons were studied. For multi- and amorphous crystalline cell, three or four hidden layer units resulted in the high correlation coefficient and low MSEs. For mono-crystalline cell, the best results were achieved at the hidden layer unit of 8.

  18. Foundation and methodologies in computer-aided diagnosis systems for breast cancer detection.

    Science.gov (United States)

    Jalalian, Afsaneh; Mashohor, Syamsiah; Mahmud, Rozi; Karasfi, Babak; Saripan, M Iqbal B; Ramli, Abdul Rahman B

    2017-01-01

    Breast cancer is the most prevalent cancer that affects women all over the world. Early detection and treatment of breast cancer can reduce the mortality rate. Factors such as imaging quality and human error increase the misdiagnosis of breast cancer by radiologists. Computer-aided detection (CAD) systems have been developed to overcome these limitations and have been studied in many imaging modalities for breast cancer detection in recent years. CAD systems improve radiologists' performance in finding and discriminating between normal and abnormal tissues. These systems serve only as a second reader; the final decision is made by the radiologist. In this study, recent CAD systems for breast cancer detection in different modalities, such as mammography, ultrasound, MRI, and biopsy histopathological images, are introduced. The foundation of CAD systems generally consists of four stages: pre-processing, segmentation, feature extraction, and classification. The approaches applied to design the different stages of a CAD system are summarised. Advantages and disadvantages of different segmentation, feature extraction, and classification techniques are listed. In addition, the impact of imbalanced datasets on classification outcomes and appropriate methods to address this issue are discussed, and performance evaluation metrics for the various stages of breast cancer detection CAD systems are reviewed.
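
    An illustrative skeleton of the four-stage pipeline named above (pre-processing, segmentation, feature extraction, classification) on synthetic patches; the filters, thresholds, and features are assumptions, not taken from the review:

```python
# Sketch: the four generic CAD stages on toy 2D patches with a fake "mass".
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

def make_patch(abnormal):
    img = rng.normal(0.3, 0.05, (64, 64))
    if abnormal:                       # inject a bright blob as a toy "mass"
        y, x = np.ogrid[:64, :64]
        img += 0.4 * np.exp(-((y - 32) ** 2 + (x - 32) ** 2) / 50)
    return img

def preprocess(img):                   # stage 1: denoise
    return ndimage.gaussian_filter(img, sigma=1)

def segment(img):                      # stage 2: global threshold
    return img > img.mean() + 2 * img.std()

def extract_features(img, mask):       # stage 3: simple region descriptors
    area = mask.sum()
    mean_intensity = img[mask].mean() if area else 0.0
    return [area, mean_intensity, img.std()]

X, y = [], []
for label in [0, 1] * 50:              # stage 4 needs a labelled set
    img = preprocess(make_patch(label))
    X.append(extract_features(img, segment(img)))
    y.append(label)

print("CV accuracy:", cross_val_score(SVC(), np.array(X), np.array(y), cv=5).mean())
```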

  19. Foundation and methodologies in computer-aided diagnosis systems for breast cancer detection

    Science.gov (United States)

    Jalalian, Afsaneh; Mashohor, Syamsiah; Mahmud, Rozi; Karasfi, Babak; Saripan, M. Iqbal B.; Ramli, Abdul Rahman B.

    2017-01-01

    Breast cancer is the most prevalent cancer that affects women all over the world. Early detection and treatment of breast cancer can reduce the mortality rate. Factors such as imaging quality and human error increase the misdiagnosis of breast cancer by radiologists. Computer-aided detection (CAD) systems have been developed to overcome these limitations and have been studied in many imaging modalities for breast cancer detection in recent years. CAD systems improve radiologists' performance in finding and discriminating between normal and abnormal tissues. These systems serve only as a second reader; the final decision is made by the radiologist. In this study, recent CAD systems for breast cancer detection in different modalities, such as mammography, ultrasound, MRI, and biopsy histopathological images, are introduced. The foundation of CAD systems generally consists of four stages: pre-processing, segmentation, feature extraction, and classification. The approaches applied to design the different stages of a CAD system are summarised. Advantages and disadvantages of different segmentation, feature extraction, and classification techniques are listed. In addition, the impact of imbalanced datasets on classification outcomes and appropriate methods to address this issue are discussed, and performance evaluation metrics for the various stages of breast cancer detection CAD systems are reviewed. PMID:28435432

  20. BIGHORN Computational Fluid Dynamics Theory, Methodology, and Code Verification & Validation Benchmark Problems

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Yidong [Idaho National Lab. (INL), Idaho Falls, ID (United States); Andrs, David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Martineau, Richard Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next, along with a Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration. A multi-fluid formulation is under development; although it is not yet complete, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility of the underlying MOOSE framework, BIGHORN is quite extensible and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent of this suite is to provide baseline comparison data that demonstrate the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using BIGHORN.
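
    For readers unfamiliar with the finite-volume idea at the core of such solvers, here is a deliberately minimal first-order sketch for 1D linear advection; it is far simpler than BIGHORN's second-order compressible formulation, and every parameter is illustrative:

```python
# Sketch: first-order upwind finite-volume update (explicit Euler) for
# 1D linear advection with periodic boundaries. Not BIGHORN's scheme.
import numpy as np

nx, L, a = 200, 1.0, 1.0           # cells, domain length, advection speed
dx = L / nx
dt = 0.5 * dx / a                  # CFL number = 0.5
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-200 * (x - 0.3) ** 2)  # initial Gaussian pulse

for _ in range(200):
    flux = a * u                   # upwind face flux for a > 0
    u -= dt / dx * (flux - np.roll(flux, 1))  # flux balance per cell

print(f"pulse peak after advection: {u.max():.3f} at x = {x[u.argmax()]:.3f}")
```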

  1. Computer Class Role Playing Games, an innovative teaching methodology based on STEM and ICT: first experimental results

    Science.gov (United States)

    Maraffi, S.

    2016-12-01

    Context/Purpose: We tested a new teaching and learning technology: a Computer Class Role Playing Game (RPG) that delivers educational activity in classrooms through an interactive game. This approach is new; there are some experiences with educational games, but mainly individual rather than class-based. Playing together as a class, with a single goal for the whole class, enhances peer collaboration, cooperative problem solving, and friendship. Methods: To perform the research we tested the games in several classes of different grades, collecting specific questionnaires from teachers and pupils. Results: Experimental results were outstanding: the RPG, our interactive activity, exceeded the overall satisfaction of traditional lessons or PowerPoint-supported teaching by 50%. Interpretation: The appreciation of the RPG was in agreement with the class-level outcome identified by the teacher after the experimentation. Our work received excellent feedback from teachers on the efficacy of this new teaching methodology and on the achieved results. Using a new methodology closer to the students' point of view improves the innovation and creative capacities of learners, and it supports the new role of the teacher as the learners' "coach". Conclusion: This paper presents the first experimental results on the application of this new technology, based on a computer game that projects onto a wall in the class an adventure lived by the students. The plots of the adventures are designed for deeper learning of Science, Technology, Engineering, Mathematics (STEM) and Social Sciences & Humanities (SSH). The pupils participate by interacting with the game through their own tablets or smartphones. The game is based on a mixed-reality learning environment, giving the students the feeling of being "IN the adventure".

  2. Detection and diagnosis of colitis on computed tomography using deep convolutional neural networks.

    Science.gov (United States)

    Liu, Jiamin; Wang, David; Lu, Le; Wei, Zhuoshi; Kim, Lauren; Turkbey, Evrim B; Sahiner, Berkman; Petrick, Nicholas A; Summers, Ronald M

    2017-09-01

    Colitis refers to inflammation of the inner lining of the colon and is frequently associated with infection and allergic reactions. In this paper, we propose deep convolutional neural network methods for lesion-level colitis detection and a support vector machine (SVM) classifier for patient-level colitis diagnosis on routine abdominal CT scans. The recently developed Faster Region-based Convolutional Neural Network (Faster RCNN) is utilized for lesion-level colitis detection. For each 2D slice, rectangular region proposals are generated by region proposal networks (RPN). Then, each region proposal is jointly classified and refined by a softmax classifier and bounding-box regressor. Two convolutional neural networks, the eight-layer ZF net and the 16-layer VGG net, are compared for colitis detection. Finally, for each patient, the detections on all 2D slices are collected and an SVM classifier is applied to produce a patient-level diagnosis. We trained and evaluated our method with 80 colitis patients and 80 normal cases using 4 × 4-fold cross validation. For lesion-level colitis detection with the ZF net, the mean of average precisions (mAP) was 48.7% and 50.9% for RCNN and Faster RCNN, respectively. The detection system achieved sensitivities of 51.4% and 54.0% at two false positives per patient for RCNN and Faster RCNN, respectively. With the VGG net, Faster RCNN increased the mAP to 56.9% and the sensitivity to 58.4% at two false positives per patient. For patient-level colitis diagnosis with the ZF net, the average areas under the ROC curve (AUC) were 0.978 ± 0.009 and 0.984 ± 0.008 for the RCNN and Faster RCNN methods, respectively. The difference was not statistically significant (P = 0.18). At the optimal operating point, the RCNN method correctly identified 90.4% (72.3/80) of the colitis patients and 94.0% (75.2/80) of normal cases. The sensitivity improved to 91.6% (73.3/80) and the specificity improved to 95.0% (76.0/80) for the Faster RCNN method.
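
    The patient-level step can be sketched as follows, with a mocked-up slice detector standing in for Faster RCNN and assumed aggregate features (max, mean, count); this is not the authors' exact feature construction:

```python
# Sketch: aggregate per-slice detection confidences into one patient-level
# feature vector and classify colitis vs. normal with an RBF SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

def mock_slice_scores(has_colitis, n_slices=60):
    base = rng.uniform(0.0, 0.4, n_slices)          # background detections
    if has_colitis:                                  # a few confident hits
        base[rng.choice(n_slices, 5, replace=False)] += rng.uniform(0.3, 0.6, 5)
    return base

def patient_features(scores):                        # assumed aggregates
    return [scores.max(), scores.mean(), (scores > 0.5).sum()]

labels = np.array([0] * 80 + [1] * 80)               # 80 normal, 80 colitis
X = np.array([patient_features(mock_slice_scores(l)) for l in labels])

acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=4).mean()
print(f"4-fold patient-level accuracy (toy data): {acc:.3f}")
```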

  3. ESTABLISHING A METHODOLOGY FOR BENCHMARKING SPEECH SYNTHESIS FOR COMPUTER-ASSISTED LANGUAGE LEARNING (CALL

    Directory of Open Access Journals (Sweden)

    Zöe Handley

    2005-09-01

    Despite the new possibilities that speech synthesis brings about, few Computer-Assisted Language Learning (CALL) applications integrating speech synthesis have found their way onto the market. One potential reason is that the suitability and benefits of the use of speech synthesis in CALL have not been proven. One way to do this is through evaluation. Yet very few formal evaluations of speech synthesis for CALL purposes have been conducted. One possible reason for the neglect of evaluation in this context is the fact that it is expensive in terms of time and resources, an important concern given that there are several levels of evaluation from which such applications would benefit. Benchmarking, the comparison of the score obtained by a system with that obtained by one which is known to guarantee user satisfaction in a standard task or set of tasks, is introduced as a potential solution to this problem. In this article, we report on our progress towards the development of one of these benchmarks, namely a benchmark for determining the adequacy of speech synthesis systems for use in CALL. We do so by presenting the results of a case study which aimed to identify the criteria that determine the adequacy of the output of speech synthesis systems for use in its various roles in CALL, with a view to the selection of benchmark tests that will address these criteria. These roles (reading machine, pronunciation model, and conversational partner) are also discussed here. An agenda for further research and evaluation is proposed in the conclusion.

  4. Exploring methodological frameworks for a mental task-based near-infrared spectroscopy brain-computer interface.

    Science.gov (United States)

    Weyand, Sabine; Takehara-Nishiuchi, Kaori; Chau, Tom

    2015-10-30

    Near-infrared spectroscopy (NIRS) brain-computer interfaces (BCIs) enable users to interact with their environment using only cognitive activities. This paper presents the results of a comparison of four methodological frameworks used to select a pair of tasks to control a binary NIRS-BCI; specifically, three novel personalized task paradigms and the state-of-the-art prescribed task framework were explored. Three types of personalized task selection approaches were compared, including: user-selected mental tasks using weighted slope scores (WS-scores), user-selected mental tasks using pair-wise accuracy rankings (PWAR), and researcher-selected mental tasks using PWAR. These paradigms, along with the state-of-the-art prescribed mental task framework, where mental tasks are selected based on the most commonly used tasks in literature, were tested by ten able-bodied participants who took part in five NIRS-BCI sessions. The frameworks were compared in terms of their accuracy, perceived ease-of-use, computational time, user preference, and length of training. Most notably, researcher-selected personalized tasks resulted in significantly higher accuracies, while user-selected personalized tasks resulted in significantly higher perceived ease-of-use. It was also concluded that PWAR minimized the amount of data that needed to be collected; while, WS-scores maximized user satisfaction and minimized computational time. In comparison to the state-of-the-art prescribed mental tasks, our findings show that overall, personalized tasks appear to be superior to prescribed tasks with respect to accuracy and perceived ease-of-use. The deployment of personalized rather than prescribed mental tasks ought to be considered and further investigated in future NIRS-BCI studies. Copyright © 2015 Elsevier B.V. All rights reserved.
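
    A hedged sketch of the pair-wise accuracy ranking (PWAR) idea described above, with synthetic feature vectors standing in for NIRS recordings and invented task names:

```python
# Sketch: rank every pair of candidate mental tasks by how well a simple
# classifier separates their (here synthetic) NIRS feature vectors.
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
tasks = ["mental math", "word generation", "motor imagery", "music imagery"]
# 30 trials x 10 features per task; separability varies by task (invented).
data = {t: rng.normal(i * 0.3, 1.0, (30, 10)) for i, t in enumerate(tasks)}

rankings = []
for t1, t2 in combinations(tasks, 2):
    X = np.vstack([data[t1], data[t2]])
    y = np.array([0] * 30 + [1] * 30)
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    rankings.append((acc, t1, t2))

for acc, t1, t2 in sorted(rankings, reverse=True):
    print(f"{t1} vs {t2}: {acc:.3f}")
```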

  5. A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Science.gov (United States)

    Browning, N. Andrew; Grossberg, Stephen; Mingolla, Ennio

    2009-01-01

    Visually-based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either to explain data or to demonstrate navigational competence in real-world environments without regard…

  6. Goal-Directed Decision Making as Probabilistic Inference: A Computational Framework and Potential Neural Correlates

    Science.gov (United States)

    Solway, Alec; Botvinick, Matthew M.

    2012-01-01

    Recent work has given rise to the view that reward-based decision making is governed by two key controllers: a habit system, which stores stimulus-response associations shaped by past reward, and a goal-oriented system that selects actions based on their anticipated outcomes. The current literature provides a rich body of computational theory…

  7. BNCI Horizon 2020 - Towards a Roadmap for Brain/Neural Computer Interaction

    NARCIS (Netherlands)

    Brunner, Clemens; Blankertz, Benjamin; Cincotti, Febo; Kübler, Andrea; Mattia, Donatella; Miralles, Felip; Nijholt, Antinus; Otal, Begonya; Salomon, Patric; Müller-Putz, Gernot R.; Stephanidis, C; Antona, M.

    2014-01-01

    In this paper, we present BNCI Horizon 2020, an EU Coordination and Support Action (CSA) that will provide a roadmap for brain-computer interaction research for the next years, starting in 2013, and aiming at research efforts until 2020 and beyond. The project is a successor of the earlier EU-funded

  8. Morphological neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-31

    The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron, or in performing the next layer of a neural network computation, involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
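
    A minimal sketch contrasting the classical multiply-add neuron with the morphological add-max neuron described above; weights, inputs, and thresholds are illustrative:

```python
# Sketch: replace multiply-add with add-then-max, so the computation is
# nonlinear even before thresholding.
import numpy as np

def classical_neuron(x, w, theta):
    return float(np.dot(w, x) >= theta)      # multiply-add, then threshold

def morphological_neuron(x, w, theta):
    return float(np.max(x + w) >= theta)     # add, then maximum, then threshold

x = np.array([0.2, 0.9, 0.4])
w = np.array([0.5, -0.1, 0.3])
print("classical:    ", classical_neuron(x, w, 0.5))
print("morphological:", morphological_neuron(x, w, 0.5))
```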

  9. How attention and contrast gain control interact to regulate lightness contrast and assimilation: a computational neural model.

    Science.gov (United States)

    Rudd, Michael E

    2010-12-31

    Recent theories of lightness perception assume that lightness (perceived reflectance) is computed by a process that contrasts the target's luminance with that of one or more regions in its spatial surround. A challenge for any such theory is the phenomenon of lightness assimilation, which occurs when increasing the luminance of a surround region increases the target lightness: the opposite of contrast. Here contrast and assimilation are studied quantitatively in lightness matching experiments utilizing concentric disk-and-ring displays. Whether contrast or assimilation is seen depends on a number of factors including: the luminance relations of the target, surround, and background; surround size; and matching instructions. When assimilation occurs, it is always part of a larger pattern in which assimilation and contrast both occur over different ranges of surround luminance. These findings are quantitatively modeled by a theory that assumes lightness is computed from a weighted sum of responses of edge detector neurons in visual cortex. The magnitude of the neural response to an edge is regulated by a combination of contrast gain control acting between neighboring edge detectors and a top-down attentional gain control that selectively weights the response to stimulus edges according to their task relevance.

  10. Neural computations underlying arbitration between model-based and model-free learning

    Science.gov (United States)

    Lee, Sang Wan; Shimojo, Shinsuke; O’Doherty, John P.

    2014-01-01

    There is accumulating neural evidence to support the existence of two distinct systems for guiding action-selection in the brain, a deliberative “model-based” and a reflexive “model-free” system. However, little is known about how the brain determines which of these systems controls behavior at one moment in time. We provide evidence for an arbitration mechanism that allocates the degree of control over behavior by model-based and model-free systems as a function of the reliability of their respective predictions. We show that inferior lateral prefrontal and frontopolar cortex encode both reliability signals and the output of a comparison between those signals, implicating these regions in the arbitration process. Moreover, connectivity between these regions and model-free valuation areas is negatively modulated by the degree of model-based control in the arbitrator, suggesting that arbitration may work through modulation of the model-free valuation system when the arbitrator deems that the model-based system should drive behavior. PMID:24507199
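
    A toy sketch of such reliability-based arbitration; the sigmoid weighting and its temperature are assumptions for illustration, not the authors' fitted model:

```python
# Sketch: the weight on the model-based (MB) controller grows with its
# prediction reliability relative to the model-free (MF) controller.
import numpy as np

def arbitration_weight(rel_mb, rel_mf, temperature=0.1):
    """Probability that the model-based system controls behavior."""
    return 1.0 / (1.0 + np.exp(-(rel_mb - rel_mf) / temperature))

for rel_mb, rel_mf in [(0.9, 0.3), (0.5, 0.5), (0.2, 0.8)]:
    w = arbitration_weight(rel_mb, rel_mf)
    print(f"MB reliability {rel_mb:.1f} vs MF {rel_mf:.1f} -> "
          f"P(MB control) = {w:.2f}")
```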

  11. A neural computation for visual acuity in the presence of eye movements.

    Directory of Open Access Journals (Sweden)

    Xaq Pitkow

    2007-12-01

    Humans can distinguish visual stimuli that differ by features the size of only a few photoreceptors. This is possible despite the incessant image motion due to fixational eye movements, which can be many times larger than the features to be distinguished. To perform well, the brain must identify the retinal firing patterns induced by the stimulus while discounting similar patterns caused by spontaneous retinal activity. This is a challenge since the trajectory of the eye movements, and consequently, the stimulus position, are unknown. We derive a decision rule for using retinal spike trains to discriminate between two stimuli, given that their retinal image moves with an unknown random walk trajectory. This algorithm dynamically estimates the probability of the stimulus at different retinal locations, and uses this to modulate the influence of retinal spikes acquired later. Applied to a simple orientation-discrimination task, the algorithm performance is consistent with human acuity, whereas naive strategies that neglect eye movements perform much worse. We then show how a simple, biologically plausible neural network could implement this algorithm using a local, activity-dependent gain and lateral interactions approximately matched to the statistics of eye movements. Finally, we discuss evidence that such a network could be operating in the primary visual cortex.

  12. Experimental and Computational Studies of Cortical Neural Network Properties Through Signal Processing

    Science.gov (United States)

    Clawson, Wesley Patrick

    Previous studies, both theoretical and experimental, of network-level dynamics in the cerebral cortex show evidence for a statistical phenomenon called criticality: a phenomenon originally studied in the context of phase transitions in physical systems and associated with favorable information processing in the context of the brain. The focus of this thesis is to expand upon past results with new experimentation and modeling to show a relationship between criticality and the ability to detect and discriminate sensory input. A line of theoretical work predicts maximal sensory discrimination as a functional benefit of criticality, which can be characterized using the mutual information between the sensory input (visual stimulus) and the neural response. The primary finding of our experiments in the turtle visual cortex and of our neuronal network modeling confirms this theoretical prediction: we show that sensory discrimination is maximized when the visual cortex operates near criticality. In addition to presenting this primary finding in detail, this thesis also addresses our preliminary results on change-point detection in experimentally measured cortical dynamics.

  13. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    Science.gov (United States)

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Modeling daily discharge responses of a large karstic aquifer using soft computing methods: Artificial neural network and neuro-fuzzy

    Science.gov (United States)

    Kurtulus, Bedri; Razack, Moumtaz

    2010-02-01

    This paper compares two methods for modeling karst aquifers, which are heterogeneous, highly non-linear, hierarchical systems. There is a clear need to model these systems given the crucial role they play in water supply in many countries. In recent years, the main components of soft computing (fuzzy logic (FL) and artificial neural networks (ANNs)) have come to prevail in the modeling of complex non-linear systems in different scientific and technological disciplines. In this study, artificial neural network (ANN) and Adaptive Neuro-Fuzzy Inference System (ANFIS) methods were used for the prediction of the daily discharge of karstic aquifers and their capabilities were compared. The approach was applied to 7 years of daily data from the La Rochefoucauld karst system in south-western France. In order to predict the karst daily discharges, single-input (rainfall, piezometric level) vs. multiple-input (rainfall and piezometric level) series were used. In addition to these inputs, all models used measured or simulated discharges from the previous days with a specified delay. The models were designed in a Matlab™ environment. An automatic procedure was used to select the best calibrated models. Daily discharge predictions were then performed using the calibrated models. Comparing predicted and observed hydrographs indicates that both models (ANN and ANFIS) provide close predictions of the karst daily discharges. The summary statistics of both series (observed and predicted daily discharges) are comparable. The performance of both models improves when the number of inputs is increased from one to two. The root mean square error between the observed and predicted series reaches a minimum for two-input models. However, the ANFIS model performs better than the ANN model at predicting peak flow. The ANFIS approach demonstrates a better generalization capability and slightly higher performance than the ANN, especially for peak discharges.
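
    A sketch of the lagged, two-input configuration described above, with synthetic rainfall, piezometric level, and discharge series standing in for the La Rochefoucauld data:

```python
# Sketch: rainfall, piezometric level, and the previous day's discharge
# feed an ANN that predicts the next day's karst spring discharge.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(6)
n_days = 2555                                  # ~7 years of daily data
rain = rng.gamma(0.4, 5.0, n_days)             # mm/day, synthetic
piezo = 100 + np.cumsum(0.01 * rain - 0.02)    # toy piezometric level
discharge = np.convolve(rain, np.exp(-np.arange(30) / 7), "full")[:n_days]

# One-day-lag design matrix: [rain(t-1), piezo(t-1), Q(t-1)] -> Q(t)
X = np.column_stack([rain[:-1], piezo[:-1], discharge[:-1]])
y = discharge[1:]
split = int(0.8 * len(y))

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
ann.fit(X[:split], y[:split])
rmse = mean_squared_error(y[split:], ann.predict(X[split:])) ** 0.5
print(f"test RMSE: {rmse:.3f}")
```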

  15. Xenopus laevis: an ideal experimental model for studying the developmental dynamics of neural network assembly and sensory-motor computations.

    Science.gov (United States)

    Straka, Hans; Simmers, John

    2012-04-01

    The amphibian Xenopus laevis represents a highly amenable model system for exploring the ontogeny of central neural networks, the functional establishment of sensory-motor transformations, and the generation of effective motor commands for complex behaviors. Specifically, the ability to employ a range of semi-intact and isolated preparations for in vitro morphophysiological experimentation has provided new insights into the developmental and integrative processes associated with the generation of locomotory behavior during changing life styles. In vitro electrophysiological studies have begun to explore the functional assembly, disassembly and dynamic plasticity of spinal pattern generating circuits as Xenopus undergoes the developmental switch from larval tail-based swimming to adult limb-based locomotion. Major advances have also been made in understanding the developmental onset of multisensory signal processing for reactive gaze and posture stabilizing reflexes during self-motion. Additionally, recent evidence from semi-intact animal and isolated CNS experiments has provided compelling evidence that in Xenopus tadpoles, predictive feed-forward signaling from the spinal locomotor pattern generator is engaged in minimizing visual disturbances during tail-based swimming. This new concept questions the traditional view of retinal image stabilization, which in vertebrates has been exclusively attributed to sensory-motor transformations of body/head motion-detecting signals. Moreover, changes in visuomotor demands associated with the developmental transition in propulsive strategy from tail- to limb-based locomotion during metamorphosis presumably necessitate corresponding adaptive alterations in the intrinsic spinoextraocular coupling mechanism. Consequently, Xenopus provides a unique opportunity to address basic questions on the developmental dynamics of neural network assembly and sensory-motor computations for vertebrate motor behavior in general.

  16. Neural Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — As part of the Electrical and Computer Engineering Department and The Institute for System Research, the Neural Systems Laboratory studies the functionality of the...

  17. Performance Driven Design and Design Information Exchange : Establishing a computational design methodology for parametric and performance-driven design of structures via topology optimization for rough structurally informed design models

    NARCIS (Netherlands)

    Mostafavi, S.; Morales Beltran, M.G.; Biloria, N.M.

    2013-01-01

    This paper presents a performance-driven computational design methodology by introducing a case on parametric structural design. The paper describes the process of design technology development and frames a design methodology through which engineering, in this case structural, aspects of

  18. The integration of social influence and reward: Computational approaches and neural evidence.

    Science.gov (United States)

    Tomlin, Damon; Nedic, Andrea; Prentice, Deborah A; Holmes, Philip; Cohen, Jonathan D

    2017-05-24

    Decades of research have established that decision-making is dramatically impacted by both the rewards an individual receives and the behavior of others. How do these distinct influences exert their effects on an individual's actions, and can the resulting behavior be effectively captured in a computational model? To address this question, we employed a novel spatial foraging game in which groups of three participants sought to find the most rewarding location in an unfamiliar two-dimensional space. As the game transitioned from one block to the next, the availability of information regarding other group members was varied systematically, revealing the relative impacts of feedback from the environment and information from other group members on individual decision-making. Both reward-based and socially-based sources of information exerted a significant influence on behavior, and a computational model incorporating these effects was able to recapitulate several key trends in the behavioral data. In addition, our findings suggest how these sources were processed and combined during decision-making. Analysis of reaction time, location of gaze, and functional magnetic resonance imaging (fMRI) data indicated that these distinct sources of information were integrated simultaneously for each decision, rather than exerting their influence in a separate, all-or-none fashion across separate subsets of trials. These findings add to our understanding of how the separate influences of reward from the environment and information derived from other social agents are combined to produce decisions.

  19. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications [computational fluid dynamics].

    Science.gov (United States)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two-dimensional thin-layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy based on the finite element method and an elastic membrane representation of the computational domain is successfully tested; it circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems: (1) internal flow through a double-throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having significantly improved performance in the aerodynamic response of interest.

  20. Seismic activity prediction using computational intelligence techniques in northern Pakistan

    Science.gov (United States)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat

    2017-10-01

    An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology includes an interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based upon past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions along with an accuracy of 75% and a positive predictive value of 78% in the context of northern Pakistan.
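
    A hedged sketch of the model-comparison step named above: an exact McNemar's test on the disagreement counts of two classifiers over the same test set (the counts below are invented):

```python
# Sketch: exact McNemar's test. b = cases only model A got right,
# c = cases only model B got right; ties drop out of the test.
from scipy.stats import binom

def mcnemar_exact(b, c):
    """Two-sided exact McNemar's test on the discordant pair counts."""
    n = b + c
    p = 2 * sum(binom.pmf(k, n, 0.5) for k in range(0, min(b, c) + 1))
    return min(p, 1.0)

# e.g. feed-forward net vs. SVM: 18 cases only the net got right, 6 only the SVM
p_value = mcnemar_exact(18, 6)
print(f"McNemar exact p-value: {p_value:.4f}")
```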

  1. Seismic activity prediction using computational intelligence techniques in northern Pakistan

    Science.gov (United States)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat

    2017-09-01

    An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology includes an interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based upon past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions along with an accuracy of 75% and a positive predictive value of 78% in the context of northern Pakistan.

  2. Neural correlates of learning in an electrocorticographic motor-imagery brain-computer interface

    Science.gov (United States)

    Blakely, Tim M.; Miller, Kai J.; Rao, Rajesh P. N.; Ojemann, Jeffrey G.

    2014-01-01

    Human subjects can learn to control a one-dimensional electrocorticographic (ECoG) brain-computer interface (BCI) using modulation of primary motor (M1) high-gamma activity (signal power in the 75–200 Hz range). However, the stability and dynamics of the signals over the course of new BCI skill acquisition have not been investigated. In this study, we report three characteristic periods in the evolution of the high-gamma control signal during BCI training: initial, low task accuracy with correspondingly low power modulation in the gamma spectrum; a second period of improved task accuracy with increasing average power separation between activity and rest; and a final period of high task accuracy with stable (or decreasing) power separation and decreasing trial-to-trial variance. These findings may have implications in the design and implementation of BCI control algorithms. PMID:25599079

  3. Computational Modelling of the Neural Representation of Object Shape in the Primate Ventral Visual System

    Directory of Open Access Journals (Sweden)

    Akihiro eEguchi

    2015-08-01

    Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, and provides an essential foundation from which the brain is subsequently able to recognise the whole object.

  4. Neural correlates of user-initiated motor success and failure - A brain-computer interface perspective.

    Science.gov (United States)

    Yazmir, Boris; Reiner, Miriam

    2016-11-02

    Any motor action is, by nature, potentially accompanied by human errors. In order to facilitate the development of error-tailored Brain-Computer Interface (BCI) correction systems, we focused on internal, human-initiated errors, and investigated EEG correlates of user outcome successes and errors during a continuous 3D virtual tennis game against a computer player. We used a multisensory, 3D, highly immersive environment. Missing and repelling the tennis ball were considered as 'error' (miss) and 'success' (repel), respectively. Unlike most previous studies, where the environment "encouraged" the participant to make a mistake, here errors happened naturally, resulting from motor-perceptual-cognitive processes of incorrect estimation of the ball kinematics, and can be regarded as user-internal, self-initiated errors. Results show distinct and well-defined Event-Related Potentials (ERPs), embedded in the ongoing EEG, that differ across conditions in waveforms, scalp signal distribution maps, source estimation results (sLORETA) and time-frequency patterns, establishing a series of typical features that allow valid discrimination between user-internal outcome success and error. The significant delay in latency between positive peaks of error- and success-related ERPs suggests a cross-talk between top-down and bottom-up processing, represented by an outcome recognition process, in the context of the game world. Success-related ERPs had a central scalp distribution, while error-related ERPs were centro-parietal. The unique characteristics and sharp differences between EEG correlates of error/success provide the crucial components for an improved BCI system. The features of the EEG waveform can be used to detect user action outcome, to be fed into the BCI correction system. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Adsorptive removal of arsenic by novel iron/olivine composite: Insights into preparation and adsorption process by response surface methodology and artificial neural network.

    Science.gov (United States)

    Ghosal, Partha S; Kattil, Krishna V; Yadav, Manoj K; Gupta, Ashok K

    2017-12-28

    Olivine, a low-cost natural material, impregnated with iron is introduced for the adsorptive removal of arsenic. A wet impregnation method and subsequent calcination were employed for the preparation of the iron/olivine composite. The major preparation process parameters, viz., iron loading and calcination temperature, were optimized through response surface methodology coupled with a factorial design. A significant variation of the adsorption capacity for arsenic (measured as total arsenic), i.e., 63.15 to 310.85 mg/kg for arsenite [As(III)T] and 76.46 to 329.72 mg/kg for arsenate [As(V)T], was observed, which demonstrates the significant effect of the preparation process parameters on the adsorption potential. The iron loading exhibited optima at the central points, whereas a monotonically decreasing trend of adsorption capacity for both As(III)T and As(V)T was observed with increasing calcination temperature. The variation of adsorption capacity with increased iron loading is greater at lower calcination temperatures, showing the interactive effect between the factors. The adsorbent prepared at the optimized iron loading and calcination temperature, i.e., 10% and 200 °C, effectively removed As(III)T and As(V)T by more than 96 and 99%, respectively. Characterization of the adsorbent showed the formation of the iron compound in the olivine and a roughly tenfold increase in specific surface area compared to the base material, which is conducive to the enhancement of the adsorption capacity. An artificial neural network was applied for the multivariate optimization of the adsorption process using the experimental data of the univariate optimization study, and the optimized model showed low values of the error functions and high R² values of more than 0.99 for As(III)T and As(V)T. The adsorption isotherm and kinetics followed the Langmuir model and the pseudo-second-order model, respectively, demonstrating chemisorption in this study.
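
    A sketch of the isotherm fit mentioned above: a nonlinear least-squares fit of the Langmuir model q = q_max·K·C / (1 + K·C); the equilibrium data points are invented, only the model form comes from the text:

```python
# Sketch: fit the Langmuir isotherm to (invented) equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k):
    return q_max * k * c / (1 + k * c)

c_eq = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0])          # mg/L (assumed)
q_eq = np.array([45.0, 80.0, 150.0, 210.0, 265.0, 300.0])  # mg/kg (assumed)

(q_max, k), _ = curve_fit(langmuir, c_eq, q_eq, p0=[300.0, 1.0])
print(f"fitted q_max = {q_max:.1f} mg/kg, K = {k:.2f} L/mg")
```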

  6. Cognitive and Neural Plasticity in Older Adults’ Prospective Memory Following Training with the Virtual Week Computer Game

    Directory of Open Access Journals (Sweden)

    Nathan S Rose

    2015-10-01

    Prospective memory (PM) – the ability to remember and successfully execute our intentions and planned activities – is critical for functional independence and declines with age, yet few studies have attempted to train PM in older adults. We developed a PM training program using the Virtual Week computer game. Trained participants played the game in twelve 1-hour sessions over one month. Measures of neuropsychological functions, lab-based PM, event-related potentials (ERPs) during performance on a lab-based PM task, instrumental activities of daily living, and real-world PM were assessed before and after training. Performance was compared to both no-contact and active (music training) control groups. PM on the Virtual Week game dramatically improved following training relative to controls, suggesting PM plasticity is preserved in older adults. Relative to control participants, training did not produce reliable transfer to laboratory-based tasks, but was associated with a reduction of an ERP component (sustained negativity over occipito-parietal cortex) associated with processing PM cues, indicative of more automatic PM retrieval. Most importantly, training produced far transfer to real-world outcomes including improvements in performance on real-world PM and activities of daily living. Real-world gains were not observed in either control group. Our findings demonstrate that short-term training with the Virtual Week game produces cognitive and neural plasticity that may result in real-world benefits to supporting functional independence in older adulthood.

  7. Cognitive and neural plasticity in older adults' prospective memory following training with the Virtual Week computer game.

    Science.gov (United States)

    Rose, Nathan S; Rendell, Peter G; Hering, Alexandra; Kliegel, Matthias; Bidelman, Gavin M; Craik, Fergus I M

    2015-01-01

    Prospective memory (PM) - the ability to remember and successfully execute our intentions and planned activities - is critical for functional independence and declines with age, yet few studies have attempted to train PM in older adults. We developed a PM training program using the Virtual Week computer game. Trained participants played the game in 12, 1-h sessions over 1 month. Measures of neuropsychological functions, lab-based PM, event-related potentials (ERPs) during performance on a lab-based PM task, instrumental activities of daily living, and real-world PM were assessed before and after training. Performance was compared to both no-contact and active (music training) control groups. PM on the Virtual Week game dramatically improved following training relative to controls, suggesting PM plasticity is preserved in older adults. Relative to control participants, training did not produce reliable transfer to laboratory-based tasks, but was associated with a reduction of an ERP component (sustained negativity over occipito-parietal cortex) associated with processing PM cues, indicative of more automatic PM retrieval. Most importantly, training produced far transfer to real-world outcomes including improvements in performance on real-world PM and activities of daily living. Real-world gains were not observed in either control group. Our findings demonstrate that short-term training with the Virtual Week game produces cognitive and neural plasticity that may result in real-world benefits to supporting functional independence in older adulthood.

  8. Cognitive and neural plasticity in older adults’ prospective memory following training with the Virtual Week computer game

    Science.gov (United States)

    Rose, Nathan S.; Rendell, Peter G.; Hering, Alexandra; Kliegel, Matthias; Bidelman, Gavin M.; Craik, Fergus I. M.

    2015-01-01

    Prospective memory (PM) – the ability to remember and successfully execute our intentions and planned activities – is critical for functional independence and declines with age, yet few studies have attempted to train PM in older adults. We developed a PM training program using the Virtual Week computer game. Trained participants played the game in 12, 1-h sessions over 1 month. Measures of neuropsychological functions, lab-based PM, event-related potentials (ERPs) during performance on a lab-based PM task, instrumental activities of daily living, and real-world PM were assessed before and after training. Performance was compared to both no-contact and active (music training) control groups. PM on the Virtual Week game dramatically improved following training relative to controls, suggesting PM plasticity is preserved in older adults. Relative to control participants, training did not produce reliable transfer to laboratory-based tasks, but was associated with a reduction of an ERP component (sustained negativity over occipito-parietal cortex) associated with processing PM cues, indicative of more automatic PM retrieval. Most importantly, training produced far transfer to real-world outcomes including improvements in performance on real-world PM and activities of daily living. Real-world gains were not observed in either control group. Our findings demonstrate that short-term training with the Virtual Week game produces cognitive and neural plasticity that may result in real-world benefits to supporting functional independence in older adulthood. PMID:26578936

  9. Computer-aided diagnosis of mammography using an artificial neural network: predicting the invasiveness of breast cancers from image features

    Science.gov (United States)

    Lo, Joseph Y.; Kim, Jeffrey; Baker, Jay A.; Floyd, Carey E., Jr.

    1996-04-01

    The study aim is to develop an artificial neural network (ANN) for computer-aided diagnosis of mammography. Using 9 mammographic image features and patient age, the ANN predicted whether breast lesions were benign, invasive malignant, or noninvasive malignant. Given only 97 malignant patients, the 3-layer backpropagation ANN successfully predicted the invasiveness of those breast cancers, performing with an Az of 0.88 ± 0.03. To determine more generalized clinical performance, a different ANN was developed using 266 consecutive patients (97 malignant, 169 benign). This ANN predicted whether those patients were benign or noninvasive malignant vs. invasive malignant with an Az of 0.86 ± 0.03. This study is unique because it is the first to predict the invasiveness of breast cancers using mammographic features and age. This knowledge, which was previously available only through surgical biopsy, may assist in the planning of surgical procedures for patients with breast lesions, and may help reduce the cost and morbidity associated with unnecessary surgical biopsies.

  10. Characterizing cartilage microarchitecture on phase-contrast x-ray computed tomography using deep learning with convolutional neural networks

    Science.gov (United States)

    Deng, Botao; Abidin, Anas Z.; D'Souza, Adora M.; Nagarajan, Mahesh B.; Coan, Paola; Wismüller, Axel

    2017-03-01

    The effectiveness of phase contrast X-ray computed tomography (PCI-CT) in visualizing human patellar cartilage matrix has been demonstrated due to its ability to capture soft tissue contrast on a micrometer resolution scale. Recent studies have shown that off-the-shelf Convolutional Neural Network (CNN) features learned from a nonmedical data set can be used for medical image classification. In this paper, we investigate the ability of features extracted from two different CNNs for characterizing chondrocyte patterns in the cartilage matrix. We obtained features from 842 regions of interest annotated on PCI-CT images of human patellar cartilage using CaffeNet and Inception-v3 Network, which were then used in a machine learning task involving support vector machines with radial basis function kernel to classify the ROIs as healthy or osteoarthritic. Classification performance was evaluated using the area (AUC) under the Receiver Operating Characteristic (ROC) curve. The best classification performance was observed with features from Inception-v3 network (AUC = 0.95), which outperforms features extracted from CaffeNet (AUC = 0.91). These results suggest that such characterization of chondrocyte patterns using features from internal layers of CNNs can be used to distinguish between healthy and osteoarthritic tissue with high accuracy.
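
    A sketch of the transfer-learning recipe described above, assuming the torchvision implementation of Inception-v3 (pretrained weights are downloaded on first use) and random tensors standing in for the annotated PCI-CT regions of interest:

```python
# Sketch: frozen pretrained CNN as a feature extractor + RBF-kernel SVM.
import torch
from torchvision.models import inception_v3, Inception_V3_Weights
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

model = inception_v3(weights=Inception_V3_Weights.DEFAULT)
model.fc = torch.nn.Identity()          # expose the 2048-d penultimate layer
model.eval()

# Stand-ins for the annotated ROIs: fake 3-channel 299x299 patches + labels.
rois = torch.rand(40, 3, 299, 299)
labels = [i % 2 for i in range(40)]     # 0 = healthy, 1 = osteoarthritic (toy)

with torch.no_grad():
    features = model(rois).numpy()      # (40, 2048) off-the-shelf CNN features

auc = cross_val_score(SVC(kernel="rbf"), features, labels,
                      cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC (toy labels): {auc:.3f}")
```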

  11. Evaluation of a Novel Computer Color Matching System Based on the Improved Back-Propagation Neural Network Model.

    Science.gov (United States)

    Wei, Jiaqiang; Peng, Mengdong; Li, Qing; Wang, Yining

    2016-11-09

    To explore the feasibility of a novel computer color-matching (CCM) system based on an improved back-propagation neural network (BPNN) model by comparing it with the traditional visual method. Forty-three metal-ceramic specimens were fabricated by proportionally mixing porcelain powders. Thirty-nine specimens were randomly selected to train the BPNN model, while the remaining four specimens were used to test and calibrate the model. A CCM system based on the improved BPNN model was constructed using MATLAB software. A comparison of the novel CCM system and the traditional visual method was conducted by evaluating the color reproduction results of 10 maxillary central incisors. Metal-ceramic specimens were fabricated using the two color reproduction approaches. Color distributions (L*, a*, and b*) of the target teeth and of the corresponding metal-ceramic specimens were measured using a spectroradiometer. Color differences (ΔE) and color distributions (ΔL*, Δa*, and Δb*) between the teeth and their corresponding specimens were calculated. The average ΔE value of the CCM system was 1.89 ± 0.75, which was lower than that of the visual approach (3.54 ± 1.11, p < 0.05). Color distribution differences between the two systems were significant except for ΔL* (p > 0.05). The novel CCM system produced greater accuracy in color reproduction within the given color space than the traditional visual approach. © 2016 by the American College of Prosthodontists.

  12. Application of Reinforcement Learning Algorithms for the Adaptive Computation of the Smoothing Parameter for Probabilistic Neural Network.

    Science.gov (United States)

    Kusy, Maciej; Zajdel, Roman

    2015-09-01

    In this paper, we propose new methods for the choice and adaptation of the smoothing parameter of the probabilistic neural network (PNN). These methods are based on three reinforcement learning algorithms: Q(0)-learning, Q(λ)-learning, and stateless Q-learning. We consider three types of PNN classifiers: the model that uses a single smoothing parameter for the whole network, the model that utilizes a single smoothing parameter for each data attribute, and the model that possesses a matrix of smoothing parameters different for each data variable and data class. Reinforcement learning is applied as the method for finding a value of the smoothing parameter that maximizes the prediction ability. PNN models with smoothing parameters computed according to the proposed algorithms are tested on eight databases by calculating the test error with the use of the cross validation procedure. The results are compared with state-of-the-art methods for PNN training published in the literature to date and, additionally, with a PNN whose sigma is determined by means of the conjugate gradient approach. The results demonstrate that the proposed approaches can be used as alternative PNN training procedures.
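
    A hedged sketch of the stateless Q-learning variant described above, with a simple Parzen-window classifier standing in for the full PNN and invented actions that shrink or grow the smoothing parameter:

```python
# Sketch: stateless Q-learning over two actions (shrink/grow sigma); the
# reward is the change in validation accuracy. Not the authors' exact setup.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X_tr, X_va, y_tr, y_va = train_test_split(*load_iris(return_X_y=True),
                                          random_state=0)

def pnn_accuracy(sigma):
    """Parzen-window (PNN-style) classification accuracy on validation data."""
    d2 = ((X_va[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    kern = np.exp(-d2 / (2 * sigma ** 2))
    scores = np.stack([kern[:, y_tr == c].sum(1) for c in np.unique(y_tr)], 1)
    return (scores.argmax(1) == y_va).mean()

sigma, eps, alpha = 1.0, 0.2, 0.5
q = np.zeros(2)                          # action values: 0 = shrink, 1 = grow
acc = pnn_accuracy(sigma)
rng = np.random.default_rng(7)
for _ in range(50):
    a = rng.integers(2) if rng.random() < eps else int(q.argmax())
    sigma = max(1e-3, sigma * (0.9 if a == 0 else 1.1))
    new_acc = pnn_accuracy(sigma)
    q[a] += alpha * ((new_acc - acc) - q[a])   # stateless Q-update
    acc = new_acc
print(f"adapted sigma = {sigma:.4f}, validation accuracy = {acc:.3f}")
```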

  13. Neural computational modeling reveals a major role of corticospinal gating of central oscillations in the generation of essential tremor

    Directory of Open Access Journals (Sweden)

    Hong-en Qu

    2017-01-01

    Essential tremor, also referred to as familial tremor, is an autosomal dominant genetic disease and the most common movement disorder. It typically involves a postural and motor tremor of the hands, head or other part of the body. Essential tremor is driven by a central oscillation signal in the brain. However, the corticospinal mechanisms involved in the generation of essential tremor are unclear. Therefore, in this study, we used a neural computational model that includes both monosynaptic and multisynaptic corticospinal pathways interacting with a propriospinal neuronal network. A virtual arm model is driven by the central oscillation signal to simulate tremor activity behavior. Cortical descending commands are classified as alpha or gamma through monosynaptic or multisynaptic corticospinal pathways, which converge respectively on alpha or gamma motoneurons in the spinal cord. Several scenarios are evaluated based on the central oscillation signal passing down to the spinal motoneurons via each descending pathway. The simulated behaviors are compared with clinical essential tremor characteristics to identify the corticospinal pathways responsible for transmitting the central oscillation signal. A propriospinal neuron with strong cortical inhibition performs a gating function in the generation of essential tremor. Our results indicate that the propriospinal neuronal network is essential for relaying the central oscillation signal and the production of essential tremor.

  14. A Computational Model of Torque Generation: Neural, Contractile, Metabolic and Musculoskeletal Components

    Science.gov (United States)

    Callahan, Damien M.; Umberger, Brian R.; Kent-Braun, Jane A.

    2013-01-01

    The pathway of voluntary joint torque production includes motor neuron recruitment and rate-coding, sarcolemmal depolarization and calcium release by the sarcoplasmic reticulum, force generation by motor proteins within skeletal muscle, and force transmission by tendon across the joint. The direct source of energetic support for this process is ATP hydrolysis. It is possible to examine portions of this physiologic pathway using various in vivo and in vitro techniques, but an integrated view of the multiple processes that ultimately impact joint torque remains elusive. To address this gap, we present a comprehensive computational model of the combined neuromuscular and musculoskeletal systems that includes novel components related to intracellular bioenergetic function. Components representing excitatory drive, muscle activation, force generation, metabolic perturbations, and torque production during voluntary human ankle dorsiflexion were constructed, using a combination of experimentally-derived data and literature values. Simulation results were validated by comparison with torque and metabolic data obtained in vivo. The model successfully predicted peak and submaximal voluntary and electrically-elicited torque output, and accurately simulated the metabolic perturbations associated with voluntary contractions. This novel, comprehensive model could be used to better understand the impact of global effectors such as age and disease on various components of the neuromuscular system, and ultimately, voluntary torque output. PMID:23405245

  15. Neural computations mediating one-shot learning in the human brain.

    Science.gov (United States)

    Lee, Sang Wan; O'Doherty, John P; Shimojo, Shinsuke

    2015-04-01

    Incremental learning, in which new knowledge is acquired gradually through trial and error, can be distinguished from one-shot learning, in which the brain learns rapidly from only a single pairing of a stimulus and a consequence. Very little is known about how the brain transitions between these two fundamentally different forms of learning. Here we test a computational hypothesis that uncertainty about the causal relationship between a stimulus and an outcome induces rapid changes in the rate of learning, which in turn mediates the transition between incremental and one-shot learning. By using a novel behavioral task in combination with functional magnetic resonance imaging (fMRI) data from human volunteers, we found evidence implicating the ventrolateral prefrontal cortex and hippocampus in this process. The hippocampus was selectively "switched" on when one-shot learning was predicted to occur, while the ventrolateral prefrontal cortex was found to encode uncertainty about the causal association, exhibiting increased coupling with the hippocampus for high learning rates, suggesting this region may act as a "switch," turning on and off one-shot learning as required.
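
    As a toy illustration of the computational hypothesis, a delta-rule update whose learning rate is gated by an uncertainty estimate interpolates between incremental and one-shot learning. This is a minimal sketch, not the authors' model, and all parameter values are invented.

    ```python
    def update_value(v, reward, uncertainty, base_alpha=0.1):
        """Delta-rule update whose learning rate is gated by causal uncertainty.

        uncertainty in [0, 1]: near 1 the update approaches one-shot learning
        (the value jumps to the observed outcome); near 0 learning stays
        incremental.
        """
        alpha = base_alpha + (1.0 - base_alpha) * uncertainty  # gate in [base, 1]
        return v + alpha * (reward - v)

    v = 0.0
    # Low uncertainty: slow, incremental learning
    print(update_value(v, 1.0, uncertainty=0.05))   # ~0.15
    # High uncertainty: a single pairing nearly overwrites the old value
    print(update_value(v, 1.0, uncertainty=0.95))   # ~0.95
    ```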

  16. Spike-timing computation properties of a feed-forward neural network model

    Directory of Open Access Journals (Sweden)

    Drew Benjamin Sinha

    2014-01-01

    Full Text Available Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales, ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g. serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike-timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints on polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.
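
    The abstract does not specify the exact plasticity rule; the sketch below shows the standard pairwise exponential STDP window as one common instantiation (amplitudes and time constants are illustrative, not the paper's).

    ```python
    import numpy as np

    def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Pairwise STDP weight change for dt = t_post - t_pre (ms).

        Pre-before-post (dt > 0) potentiates; post-before-pre depresses,
        each decaying exponentially with the spike-time difference.
        """
        if dt > 0:
            return a_plus * np.exp(-dt / tau)
        return -a_minus * np.exp(dt / tau)

    # Update one synapse from observed spike pairs, clipping to [0, w_max]
    w, w_max = 0.5, 1.0
    for dt in (5.0, -3.0, 12.0):
        w = np.clip(w + stdp_dw(dt), 0.0, w_max)
    print(w)
    ```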

  17. Architecture Analysis of an FPGA-Based Hopfield Neural Network

    Directory of Open Access Journals (Sweden)

    Miguel Angelo de Abreu de Sousa

    2014-01-01

    Full Text Available Interconnections between electronic circuits and neural computation have been a strongly researched topic in the machine learning field, addressing several practical requirements, including decreasing training and operation times in high-performance applications and reducing cost, size, and energy consumption for autonomous or embedded developments. Field programmable gate array (FPGA) hardware shows some inherent features typically associated with neural networks, such as parallel processing, modular execution, and dynamic adaptation, and works on different types of FPGA-based neural networks have been presented in recent years. This paper addresses different aspects of the architectural characteristics of a Hopfield Neural Network implemented in FPGA, such as maximum operating frequency and chip-area occupancy according to the network capacity. The FPGA implementation methodology, which does not employ multipliers in the architecture developed for the Hopfield neural model, is also presented in detail.
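
    One reason such designs can avoid multipliers is that with bipolar (+1/-1) states, each term of the weighted sum reduces to an addition or subtraction of a stored weight. The sketch below is a generic software Hopfield network (Hebbian storage plus thresholded recall), not the paper's FPGA architecture.

    ```python
    import numpy as np

    def train_hopfield(patterns):
        """Hebbian outer-product storage of bipolar (+1/-1) patterns."""
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)
        return W

    def recall(W, state, steps=10):
        """Synchronous recall with a sign() threshold; with +/-1 states the
        weighted sum needs only additions/subtractions in hardware."""
        for _ in range(steps):
            new = np.sign(W @ state)
            new[new == 0] = 1
            if np.array_equal(new, state):
                break
            state = new
        return state

    stored = np.array([[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]])
    W = train_hopfield(stored)
    noisy = np.array([1, -1, 1, -1, 1, 1])   # first pattern with one flipped bit
    print(recall(W, noisy))                  # recovers the first stored pattern
    ```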

  18. A methodology for extracting knowledge rules from artificial neural networks applied to forecast demand for electric power; Uma metodologia para extracao de regras de conhecimento a partir de redes neurais artificiais aplicadas para previsao de demanda por energia eletrica

    Energy Technology Data Exchange (ETDEWEB)

    Steinmetz, Tarcisio; Souza, Glauber; Ferreira, Sandro; Santos, Jose V. Canto dos; Valiati, Joao [Universidade do Vale do Rio dos Sinos (PIPCA/UNISINOS), Sao Leopoldo, RS (Brazil). Programa de Pos-Graduacao em Computacao Aplicada], Emails: trsteinmetz@unisinos.br, gsouza@unisinos.br, sferreira, jvcanto@unisinos.br, jfvaliati@unisinos.br

    2009-07-01

    We present a methodology for the extraction of rules from Artificial Neural Networks (ANN) trained to forecast electric load demand. The rules express the knowledge about the behavior of load demand acquired by the ANN during the training process. The rules are presented to the user in an easy-to-read format, such as IF premise THEN consequence, where the premise relates to the input data submitted to the ANN (mapped as fuzzy sets) and the consequence is a linear equation describing the output the ANN should present if the premise holds true. Experimentation demonstrates the method's capacity for acquiring and presenting high-quality rules from neural networks trained to forecast electric load demand over several forecasting horizons. (author)
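
    A rule in the described format can be represented directly in code. The sketch below shows one hypothetical extracted rule, in the style of a Takagi-Sugeno rule, with triangular fuzzy membership functions for the premise and a linear equation for the consequence; all membership shapes and coefficients are invented for illustration.

    ```python
    import numpy as np

    def triangular(x, a, b, c):
        """Triangular fuzzy membership: rises from a to b, falls from b to c."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    # One hypothetical extracted rule:
    # IF load(t-1) is "high" AND temperature is "mild"
    # THEN demand = 0.8*load(t-1) + 12.5*temperature + 40.0
    def rule_fire(load_prev, temp):
        strength = min(triangular(load_prev, 600, 800, 1000),   # "high" load
                       triangular(temp, 10, 18, 26))            # "mild" temp
        consequence = 0.8 * load_prev + 12.5 * temp + 40.0
        return strength, consequence

    s, y = rule_fire(load_prev=750.0, temp=20.0)
    print(f"firing strength {s:.2f} -> predicted demand {y:.1f} MW")
    ```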

  19. Hafnium transistor process design for neural interfacing.

    Science.gov (United States)

    Parent, David W; Basham, Eric J

    2009-01-01

    A design methodology is presented that uses 1-D process simulations of Metal Insulator Semiconductor (MIS) structures to design the threshold voltage of hafnium oxide based transistors used for neural recording. The methodology comprises 1-D analytical equations for threshold voltage specification and doping profiles, and 1-D MIS Technology Computer Aided Design (TCAD) to design a process implementing a specific threshold voltage, which minimizes simulation time. The process was then verified with a 2-D process/electrical TCAD simulation. Hafnium oxide films (HfO) were grown and characterized for dielectric constant and fixed oxide charge at various annealing temperatures, two important design variables in threshold voltage design.

  20. Automated segmentation of synchrotron radiation micro-computed tomography biomedical images using Graph Cuts and neural networks

    Science.gov (United States)

    Alvarenga de Moura Meneses, Anderson; Giusti, Alessandro; de Almeida, André Pereira; Parreira Nogueira, Liebert; Braz, Delson; Cely Barroso, Regina; deAlmeida, Carlos Eduardo

    2011-12-01

    Synchrotron Radiation (SR) X-ray micro-Computed Tomography (μCT) enables magnified images to be used as a non-invasive and non-destructive technique with high spatial resolution for the qualitative and quantitative analyses of biomedical samples. The application of segmentation algorithms to SR-μCT remains an open problem, due to the interesting and well-known characteristics of SR images for visualization, such as the high resolution and the phase contrast effect. In this article, we describe and assess the application of the Energy Minimization via Graph Cuts (EMvGC) algorithm for the segmentation of SR-μCT biomedical images acquired at the Synchrotron Radiation for MEdical Physics (SYRMEP) beam line at the Elettra Laboratory (Trieste, Italy). We also propose a method using EMvGC with Artificial Neural Networks (EMANNs) for correcting misclassifications due to the intensity variation of phase contrast, an effect that is important and sometimes indispensable in certain biomedical applications, although it impairs the segmentation provided by conventional techniques. Results demonstrate considerable success in the segmentation of SR-μCT biomedical images, with an average Dice Similarity Coefficient of 99.88% for bony tissue in Wistar rat rib samples (EMvGC), as well as 98.95% and 98.02% for scans of Rhodnius prolixus insect samples (a Chagas disease vector) with EMANNs, relative to manual segmentation. EMvGC and EMANNs cope with segmentation in images with intensity variation due to phase contrast effects, outperforming conventional segmentation techniques based on thresholding and linear/nonlinear image filtering, as discussed in the present article.
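
    The Dice Similarity Coefficient used to score these segmentations is defined as DSC = 2|A ∩ B| / (|A| + |B|) for a segmentation mask A and a reference mask B; a minimal implementation for binary masks:

    ```python
    import numpy as np

    def dice(seg, gt):
        """Dice Similarity Coefficient between two binary masks:
        DSC = 2|A ∩ B| / (|A| + |B|), with 1.0 meaning perfect overlap."""
        seg, gt = seg.astype(bool), gt.astype(bool)
        denom = seg.sum() + gt.sum()
        return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0

    a = np.array([[0, 1, 1], [0, 1, 0]])
    b = np.array([[0, 1, 1], [1, 1, 0]])
    print(dice(a, b))   # 0.857...
    ```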

  1. Quality assessment of microwave-vacuum dried material with the use of computer image analysis and neural model

    Science.gov (United States)

    Koszela, K.; Otrząsek, J.; Zaborowicz, M.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.

    2014-04-01

    The vegetable farming area in Poland changes constantly; each year the cultivation structure of particular vegetables is different. However, it is the cultivation of carrots that plays a significant role among vegetables. According to the Main Statistical Office (GUS), in 2012 carrot held second position among the cultivated root vegetables, estimated at 835 thousand tons. Globally, Poland ranks fourth among carrot producers and is the largest producer of this vegetable in the EU [1]. It is also noteworthy that the demand for dried vegetables is still increasing. This tendency affects the development of the drying industry in Poland, contributing to the utilization of product surpluses. Dried vegetables are used increasingly often in various sectors of the food products industry, due to their high nutritional value as well as to changing alimentary preferences of consumers [2-3]. Dried carrot plays a crucial role among dried vegetables because of its wide scope of use and high nutritional value; it contains a lot of carotene and sugar present in the form of crystals. Carrot also undergoes many different drying processes, which makes it difficult to perform a reliable quality assessment and classification of the dried material. Among the qualitative properties of dried carrot that strongly influence the outcome of quality assessment are its color and shape. The aim of the research project was to develop a method for the analysis of microwave-vacuum dried carrot images and to apply it to the classification of individual fractions in the sample studied for quality assessment. During the research, digital photographs of dried carrot were taken, which constituted the basis for assessment performed by a dedicated computer programme developed as part of the research. Consequently, using a neural model, the dried material was classified [4-6].

  2. Automated segmentation of synchrotron radiation micro-computed tomography biomedical images using Graph Cuts and neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Alvarenga de Moura Meneses, Anderson, E-mail: ameneses@ieee.org [Radiological Sciences Laboratory, Rio de Janeiro State University, Rua Sao Francisco Xavier 524, CEP 20550-900, RJ (Brazil); Giusti, Alessandro [IDSIA (Dalle Molle Institute for Artificial Intelligence), University of Lugano (Switzerland); Pereira de Almeida, Andre; Parreira Nogueira, Liebert; Braz, Delson [Nuclear Engineering Program, Federal University of Rio de Janeiro, RJ (Brazil); Cely Barroso, Regina [Laboratory of Applied Physics on Biomedical Sciences, Physics Department, Rio de Janeiro State University, RJ (Brazil); Almeida, Carlos Eduardo de [Radiological Sciences Laboratory, Rio de Janeiro State University, Rua Sao Francisco Xavier 524, CEP 20550-900, RJ (Brazil)

    2011-12-21

    Synchrotron Radiation (SR) X-ray micro-Computed Tomography (μCT) enables magnified images to be used as a non-invasive and non-destructive technique with high spatial resolution for the qualitative and quantitative analyses of biomedical samples. The application of segmentation algorithms to SR-μCT remains an open problem, due to the interesting and well-known characteristics of SR images for visualization, such as the high resolution and the phase contrast effect. In this article, we describe and assess the application of the Energy Minimization via Graph Cuts (EMvGC) algorithm for the segmentation of SR-μCT biomedical images acquired at the Synchrotron Radiation for MEdical Physics (SYRMEP) beam line at the Elettra Laboratory (Trieste, Italy). We also propose a method using EMvGC with Artificial Neural Networks (EMANNs) for correcting misclassifications due to the intensity variation of phase contrast, an effect that is important and sometimes indispensable in certain biomedical applications, although it impairs the segmentation provided by conventional techniques. Results demonstrate considerable success in the segmentation of SR-μCT biomedical images, with an average Dice Similarity Coefficient of 99.88% for bony tissue in Wistar rat rib samples (EMvGC), as well as 98.95% and 98.02% for scans of Rhodnius prolixus insect samples (a Chagas disease vector) with EMANNs, relative to manual segmentation. EMvGC and EMANNs cope with segmentation in images with intensity variation due to phase contrast effects, outperforming conventional segmentation techniques based on thresholding and linear/nonlinear image filtering, as discussed in the present article.

  3. Fish and chips: implementation of a neural network model into computer chips to maximize swimming efficiency in autonomous underwater vehicles.

    Science.gov (United States)

    Blake, R W; Ng, H; Chan, K H S; Li, J

    2008-09-01

    Recent developments in the design and propulsion of biomimetic autonomous underwater vehicles (AUVs) have focused on boxfish as models (e.g. Deng and Avadhanula 2005 Biomimetic micro underwater vehicle with oscillating fin propulsion: system design and force measurement Proc. 2005 IEEE Int. Conf. Robot. Auto. (Barcelona, Spain) pp 3312-7). Whilst such vehicles have many potential advantages in operating in complex environments (e.g. high manoeuvrability and stability), limited battery life and payload capacity are likely functional disadvantages. Boxfish employ undulatory median and paired fins during routine swimming which are characterized by high hydromechanical Froude efficiencies (approximately 0.9) at low forward speeds. Current boxfish-inspired vehicles are propelled by a low aspect ratio, 'plate-like' caudal fin (ostraciiform tail) which can be shown to operate at a relatively low maximum Froude efficiency (approximately 0.5) and is mainly employed as a rudder for steering and in rapid swimming bouts (e.g. escape responses). Given this, and the fact that bioinspired engineering designs are not obligated to wholly duplicate a biological model, computer chips were developed using a multilayer perceptron neural network model of undulatory fin propulsion in the knifefish Xenomystus nigri that would potentially allow an AUV to achieve high optimum values of propulsive efficiency at any given forward velocity, giving a minimum energy drain on the battery. We envisage that externally monitored information on flow velocity (sensory system) would be conveyed to the chips residing in the vehicle's control unit, which in turn would signal the locomotor unit to adopt kinematics (e.g. fin frequency, amplitude) associated with optimal propulsive efficiency. Power savings could protract vehicle operational life and/or provide more power to other functions (e.g. communications).

  4. An Expedient Study on Back-Propagation (BPN) Neural Networks for Modeling Automated Evaluation of the Answers and Progress of Deaf Students' That Possess Basic Knowledge of the English Language and Computer Skills

    Science.gov (United States)

    Vrettaros, John; Vouros, George; Drigas, Athanasios S.

    This article studies the expediency of using neural networks technology and the development of back-propagation network (BPN) models for the automated evaluation of the answers and progress of deaf students who possess basic knowledge of the English language and computer skills, within a virtual e-learning environment. The performance of the developed neural models is evaluated using the correlation factor between the networks' response values and the real data, as well as the percentage error between the networks' estimates and the real data, both during the training process and afterwards with unknown data that were not used in training.

  5. Real-time decision fusion for multimodal neural prosthetic devices.

    Directory of Open Access Journals (Sweden)

    James Robert White

    Full Text Available BACKGROUND: The field of neural prosthetics aims to develop prosthetic limbs with a brain-computer interface (BCI through which neural activity is decoded into movements. A natural extension of current research is the incorporation of neural activity from multiple modalities to more accurately estimate the user's intent. The challenge remains how to appropriately combine this information in real-time for a neural prosthetic device. METHODOLOGY/PRINCIPAL FINDINGS: Here we propose a framework based on decision fusion, i.e., fusing predictions from several single-modality decoders to produce a more accurate device state estimate. We examine two algorithms for continuous variable decision fusion: the Kalman filter and artificial neural networks (ANNs. Using simulated cortical neural spike signals, we implemented several successful individual neural decoding algorithms, and tested the capabilities of each fusion method in the context of decoding 2-dimensional endpoint trajectories of a neural prosthetic arm. Extensively testing these methods on random trajectories, we find that on average both the Kalman filter and ANNs successfully fuse the individual decoder estimates to produce more accurate predictions. CONCLUSIONS: Our results reveal that a fusion-based approach has the potential to improve prediction accuracy over individual decoders of varying quality, and we hope that this work will encourage multimodal neural prosthetics experiments in the future.
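
    The intuition behind such decision fusion can be seen in the simplest static case: for independent, unbiased decoder estimates, weighting each by its inverse error variance gives the minimum-variance combination (the Kalman filter extends this idea to sequential state estimates). A sketch with made-up numbers, not the study's decoders:

    ```python
    import numpy as np

    def fuse(estimates, variances):
        """Inverse-variance fusion of independent, unbiased decoder estimates.

        Each decoder i reports an estimate with error variance v_i; the
        minimum-variance unbiased combination weights it by 1/v_i.
        """
        est = np.asarray(estimates, dtype=float)      # shape (n_decoders, dim)
        w = 1.0 / np.asarray(variances, dtype=float)  # shape (n_decoders,)
        return (w[:, None] * est).sum(axis=0) / w.sum()

    # Two decoders reporting a 2-D endpoint position, the second noisier:
    spike_decoder = np.array([0.52, 1.10])   # error variance 0.01 per axis
    lfp_decoder   = np.array([0.40, 1.30])   # error variance 0.04 per axis
    fused = fuse([spike_decoder, lfp_decoder], [0.01, 0.04])
    print(fused)   # pulled mostly toward the lower-variance decoder
    ```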

  6. Application of a computational neural network to optimize the fluorescence signal from a receptor-ligand interaction on a microfluidic chip.

    Science.gov (United States)

    Ortega, Maria; Hanrahan, Grady; Arceo, Marilyn; Gomez, Frank A

    2015-02-01

    We describe the use of a computational neural network platform to optimize the fluorescence upon binding of 5-carboxyfluorescein-d-Ala-d-Ala-d-Ala (5-FAM(DA)3) (1) to the antibiotic teicoplanin covalently attached to a glass slide. A three-level response surface experimental design was used as the first stage of investigation. Subsequently, three defined experimental parameters were examined by the neural network approach: (i) the concentration of teicoplanin used to derivatize a glass platform on the microfluidic device, (ii) the time required for the immobilization of teicoplanin on the platform, and (iii) the length of time 1 is allowed to equilibrate with teicoplanin in the microfluidic channel. The optimal neural structure provided a best-fit model for both the training set (r(2) = 0.961) and test set (r(2) = 0.934) data. Model-simulated results were experimentally validated, with excellent agreement (% difference) between experimental and predicted fluorescence, demonstrating the efficiency of the neural network approach. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Modular representation of layered neural networks.

    Science.gov (United States)

    Watanabe, Chihiro; Hiramatsu, Kaoru; Kashino, Kunio

    2018-01-01

    Layered neural networks have greatly improved the performance of various applications including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation has many nonlinear and complex parameters embedded in hierarchical layers. Therefore, it becomes important to establish a new methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global and simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities or clusters of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: it can decompose a trained neural network into multiple small independent networks, thus dividing the problem and reducing the computation time. (2) Training assessment: the appropriateness of a trained result with a given hyperparameter or randomly chosen initial parameters can be evaluated by using a modularity index. (3) Data analysis: in practical data, it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network. Copyright © 2017 Elsevier Ltd. All rights reserved.
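
    The paper detects communities of units via network analysis; as a rough stand-in, one can describe each hidden unit by its incoming and outgoing weight vectors and cluster those profiles. A sketch with random weights and k-means (not the authors' algorithm; all sizes are invented):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Hypothetical trained layer weights: 8 inputs -> 12 hidden -> 3 outputs
    W_in = rng.normal(size=(8, 12))
    W_out = rng.normal(size=(12, 3))

    # Describe each hidden unit by its incoming and outgoing connection pattern
    profiles = np.hstack([W_in.T, W_out])            # shape (12, 8 + 3)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
    for c in range(3):
        print(f"community {c}: hidden units {np.where(labels == c)[0].tolist()}")
    ```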

  8. Towards a Conceptual Framework and an Empirical Methodology in Research on Artistic Human-Computer and Human-Robot Interaction

    OpenAIRE

    Seifert, Uwe; Kim, Jin Hyun

    2008-01-01

    In order to develop a new approach to the scientific study of the musical mind, cognitive musicology has to be complemented by research on human-computer and human-robot interaction. Within the computational approach to mind, interactionism or embodied cognitive science using robots for modeling cognitive and behavioral processes provides an adequate framework for modeling internal processes underlying artistic and aesthetic experiences. The computational framework provided by cognitive scien...

  9. The neural processing of voluntary completed, real and virtual violent and nonviolent computer game scenarios displaying predefined actions in gamers and nongamers.

    Science.gov (United States)

    Regenbogen, Christina; Herrmann, Manfred; Fehr, Thorsten

    2010-01-01

    Studies investigating the effects of violent computer and video game playing have produced heterogeneous outcomes. It has been assumed that people who play these games intensively show a decreased ability to differentiate between virtuality and reality. fMRI data of a group of young males with (gamers) and without (controls) a history of long-term violent computer game playing experience were obtained during the presentation of computer game and realistic video sequences. In gamers, the processing of real violence in contrast to nonviolence produced activation clusters in right inferior frontal, left lingual, and superior temporal brain regions. Virtual violence activated a network comprising bilateral inferior frontal, occipital, postcentral, right middle temporal, and left fusiform regions. Control participants showed extended left frontal, insula, and superior frontal activations during the processing of real, and posterior activations during the processing of virtual, violent scenarios. The data suggest that the ability to differentiate automatically between real and virtual violence has not been diminished by a long-term history of violent video game play, nor have gamers' neural responses to real violence in particular been subject to desensitization processes. However, analyses of individual data indicated that group-related analyses reflect only a small part of the actual individual differences in neural network involvement, suggesting that the consideration of individual learning history is sufficient for the present discussion.

  10. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi

    2016-01-01

    This book explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real-world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented, including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds, and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t...

  11. Implications of the Integration of Computing Methodologies into Conventional Marketing Research upon the Quality of Students' Understanding of the Concept

    Science.gov (United States)

    Ayman, Umut; Serim, Mehmet Cenk

    2004-01-01

    It has been an ongoing concern among academicians teaching social sciences to develop a better methodology to ease students' understanding. Since verbal emphasis is at the core of the concepts within such disciplines, it has been observed that the adequate or desired level of conceptual understanding of the students to transform the theories…

  12. Implementing the flipped classroom methodology to the subject "Applied computing" of the chemical engineering degree at the University of Barcelona

    Directory of Open Access Journals (Sweden)

    Montserrat Iborra

    2017-06-01

    Full Text Available This work focuses on the implementation, development, documentation, analysis, and assessment of the flipped classroom methodology, by means of a just-in-time teaching strategy, in a pilot group (1 of 6) of the subject "Applied Computing" of the Chemical Engineering Undergraduate Degree of the University of Barcelona. The results show that this technique promotes self-learning, autonomy, and time management, as well as an increase in the effectiveness of classroom hours.

  13. A methodology for incorporating web technologies into a computer-based patient record, with contributions from cognitive science.

    Science.gov (United States)

    Webster, Charles

    2002-12-18

    Cognitive science is a rich source of insight for creative use of new Web technologies by medical informatics workers. I outline a project to Web-enable an existing computer-based patient record (CPR) in the context of ideas from philosophy, linguistics, artificial intelligence, and cognitive psychology. Web prototypes play an important role (a) because Web technology lends itself to rapid prototype development, and (b) because prototypes help team members bridge among disparate medical, computing, and business ontologies. Six Web-enabled CPR prototypes were created and ranked. User scenarios were generated using a user communication matrix. The resulting prototypes were compared according to the degree to which they satisfied medical, computing, and business constraints. In a different organization, or at a different time, the candidate prototypes and their ranking might have been different. However, prototype generation and comparison are fundamentally influenced by factors usefully understood in a cognitive science framework.

  14. Recent developments in methodologies for calculating the entropy and free energy of biological systems by computer simulation.

    Science.gov (United States)

    Meirovitch, Hagai

    2007-04-01

    The Helmholtz free energy, F, plays an important role in proteins because of their rugged potential energy surface, which is 'decorated' with a tremendous number of local wells (denoted microstates, m). F governs protein folding, whereas differences ΔF_mn determine the relative populations of microstates that are visited by a flexible cyclic peptide or a flexible protein segment (e.g. a surface loop). Recently developed methodologies for calculating ΔF_mn (and entropy differences, ΔS_mn) mainly use thermodynamic integration and calculation of the absolute F; interesting new approaches in these categories are the adaptive integration method and the hypothetical scanning molecular dynamics method, respectively.
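
    For reference, these quantities obey the standard statistical-mechanics relations (not specific to the reviewed methods):

    ```latex
    F = -k_B T \ln Z, \qquad
    \Delta F_{mn} = F_m - F_n = -k_B T \ln \frac{Z_m}{Z_n}, \qquad
    \frac{p_m}{p_n} = e^{-\Delta F_{mn}/k_B T},
    ```

    where Z_m is the configurational partition function restricted to microstate m and p_m is its equilibrium population.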

  15. Real-time ocular artifact suppression using recurrent neural network for electro-encephalogram based brain-computer interface.

    Science.gov (United States)

    Erfanian, A; Mahmoudi, B

    2005-03-01

    The paper presents an adaptive noise canceller (ANC) filter using an artificial neural network for real-time removal of electro-oculogram (EOG) interference from electro-encephalogram (EEG) signals. Conventional ANC filters are based on linear models of interference, and such linear models provide poor predictions for biomedical signals. In this work, a recurrent neural network was employed for modelling the interference signals. The eye movement and eye blink artifacts were recorded by placing one electrode on the forehead above the left eye and another on the left temple. The reference signal was then generated by adding the data collected from the forehead electrode to the data recorded from the temple electrode. The reference signal was also contaminated by the EEG. To reduce the EEG interference, the reference signal was first low-pass filtered by a moving-average filter and then applied to the ANC. Matlab Simulink was used for real-time data acquisition, filtering, and ocular artifact suppression. Simulation results show the validity and effectiveness of the technique with different signal-to-noise ratios (SNRs) of the primary signal. On average, a significant improvement in SNR of up to 27 dB was achieved with the recurrent neural network. The results from real data demonstrate that the proposed scheme removes ocular artifacts from contaminated EEG signals and is suitable for real-time and short-time EEG recordings.
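
    The classical adaptive-noise-cancelling structure the paper builds on predicts the interference from the reference channel and subtracts that prediction from the primary channel; the reviewed work replaces the usual linear filter with a recurrent neural network. A minimal linear (LMS) version, with synthetic stand-in signals, for orientation:

    ```python
    import numpy as np

    def lms_anc(primary, reference, n_taps=8, mu=0.01):
        """Adaptive noise canceller: predict the interference in `primary`
        from `reference` with an LMS filter and subtract the prediction.
        (The reviewed method replaces this linear filter with a recurrent
        neural network to capture nonlinear EOG coupling.)"""
        w = np.zeros(n_taps)
        cleaned = np.zeros_like(primary)
        for n in range(n_taps, len(primary)):
            x = reference[n - n_taps:n][::-1]      # recent reference samples
            e = primary[n] - w @ x                 # error = cleaned EEG sample
            w += mu * e * x                        # LMS weight update
            cleaned[n] = e
        return cleaned

    t = np.arange(0, 2, 1 / 250)                   # 2 s at 250 Hz
    eeg = 0.5 * np.sin(2 * np.pi * 10 * t)         # stand-in "EEG"
    eog = np.sin(2 * np.pi * 1 * t)                # stand-in ocular artifact
    cleaned = lms_anc(eeg + eog, eog)
    ```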

  16. New methodology for computing tsunami generation by subaerial landslides: Application to the 2015 Tyndall Glacier landslide, Alaska

    Science.gov (United States)

    George, D. L.; Iverson, R. M.; Cannon, C. M.

    2017-07-01

    Landslide-generated tsunamis pose significant hazards and involve complex, multiphase physics that are challenging to model. We present a new methodology in which our depth-averaged two-phase model D-Claw is used to seamlessly simulate all stages of landslide dynamics as well as tsunami generation, propagation, and inundation. Because the model describes the evolution of solid and fluid volume fractions, it treats both landslides and tsunamis as special cases of a more general class of phenomena. Therefore, the landslide and tsunami can be efficiently simulated as a single-layer continuum with evolving solid-grain concentrations, and with wave generation via direct longitudinal momentum transfer—a dominant physical mechanism that has not been previously addressed in this manner. To test our methodology, we used D-Claw to model a large subaerial landslide and resulting tsunami that occurred on 17 October 2015, in Taan Fjord near the terminus of Tyndall Glacier, Alaska. Modeled shoreline inundation patterns compare well with those observed in satellite imagery.

  17. Computational design of new peptide inhibitors for amyloid beta (Aβ) aggregation in Alzheimer's disease: application of a novel methodology.

    Directory of Open Access Journals (Sweden)

    Gözde Eskici

    Full Text Available Alzheimer's disease is the most common form of dementia. It is a neurodegenerative and incurable disease that is associated with the tight packing of amyloid fibrils. This packing is facilitated by the compatibility of the ridges and grooves on the amyloid surface. The GxMxG motif is the major factor creating the compatibility between two amyloid surfaces, making it an important target for the design of amyloid aggregation inhibitors. In this study, a peptide experimentally proven to bind Aβ40 fibrils at the GxMxG motif was mutated by a novel methodology that systematically replaces amino acids with residues that share similar chemical characteristics and subsequently assesses the energetic favorability of these mutations by docking. Successive mutations are combined and reassessed via docking to a desired level of refinement. This methodology is both fast and efficient in providing potential inhibitors. Its efficiency lies in the fact that it does not perform all possible combinations of mutations, thereby drastically decreasing computational time. The binding free energies of the experimentally studied reference peptide and its three top-scoring derivatives were evaluated as a final assessment. The potentials of mean force (PMFs) were calculated by applying Jarzynski's equality to the results of steered molecular dynamics simulations. For all of the top-scoring derivatives, the PMFs showed higher binding free energies than the reference peptide, substantiating the use of the introduced methodology in drug design.
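
    Jarzynski's equality, which the authors apply to the steered-MD work values to obtain the PMFs, relates an equilibrium free energy difference to an average over nonequilibrium work measurements:

    ```latex
    e^{-\beta \Delta F} = \left\langle e^{-\beta W} \right\rangle
    \quad \Rightarrow \quad
    \Delta F \approx -\frac{1}{\beta} \ln \frac{1}{N} \sum_{i=1}^{N} e^{-\beta W_i},
    \qquad \beta = \frac{1}{k_B T},
    ```

    where W_i is the work recorded along the i-th pulling trajectory; the estimate becomes exact as N grows.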

  18. A new decomposition-based computer-aided molecular/mixture design methodology for the design of optimal solvents and solvent mixtures

    DEFF Research Database (Denmark)

    Karunanithi, A.T.; Achenie, L.E.K.; Gani, Rafiqul

    2005-01-01

    This paper presents a novel computer-aided molecular/mixture design (CAMD) methodology for the design of optimal solvents and solvent mixtures. The molecular/mixture design problem is formulated as a mixed integer nonlinear programming (MINLP) model in which a performance objective is to be optimized subject to structural, property, and process constraints. The general molecular/mixture design problem is divided into two parts. For optimal single-compound design, the first part is solved. For mixture design, the single-compound design is first carried out to identify candidates and then the second part is solved to determine the optimal mixture. The decomposition of the CAMD MINLP model into relatively easy-to-solve subproblems is essentially a partitioning of the constraints from the original set. This approach is illustrated through two case studies. The first case study involves...

  19. Computational simulation: astrocyte-induced depolarization of neighboring neurons mediates synchronous UP states in a neural network.

    Science.gov (United States)

    Kuriu, Takayuki; Kakimoto, Yuta; Araki, Osamu

    2015-09-01

    Although recent reports have suggested that synchronous neuronal UP states are mediated by astrocytic activity, the mechanism responsible for this remains unknown. Astrocytic glutamate release synchronously depolarizes adjacent neurons, while synaptic transmissions are blocked. The purpose of this study was to confirm that astrocytic depolarization, propagated through synaptic connections, can lead to synchronous neuronal UP states. We applied astrocytic currents to local neurons in a neural network consisting of model cortical neurons. Our results show that astrocytic depolarization may generate synchronous UP states for hundreds of milliseconds in neurons even if they do not directly receive glutamate release from the activated astrocyte.

  20. Trace determination of safranin O dye using ultrasound assisted dispersive solid-phase micro extraction: Artificial neural network-genetic algorithm and response surface methodology.

    Science.gov (United States)

    Dil, Ebrahim Alipanahpour; Ghaedi, Mehrorang; Asfaram, Arash; Mehrabi, Fatemeh; Bazrafshan, Ali Akbar; Ghaedi, Abdol Mohammad

    2016-11-01

    In this study, an ultrasound assisted dispersive solid-phase micro extraction combined with spectrophotometry (USA-DSPME-UV) method based on activated carbon modified with Fe2O3 nanoparticles (Fe2O3-NPs-AC) was developed for the pre-concentration and determination of safranin O (SO). The efficiency of the USA-DSPME-UV method may be affected by pH, amount of adsorbent, ultrasound time, and eluent volume; the extent and magnitude of their contributions to the response (in terms of main and interaction effects) were studied using central composite design (CCD) and artificial neural network-genetic algorithms (ANN-GA). Accordingly, the experimental conditions suggested by ANN-GA (pH 6.5, 1.1 mg of adsorbent, 10 min of ultrasound, and 150 μL of eluent) led to the best operating performance, with a low LOD (6.3 ng mL(-1)) and LOQ (17.5 ng mL(-1)) over the range of 25-3500 ng mL(-1). Subsequently, the SO content in real water and wastewater samples was successfully determined, with recoveries between 93.27 and 99.41% and RSD lower than 3%. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Strategies and methodologies to develop techniques for computer-assisted analysis of gas phase formation during altitude decompression

    Science.gov (United States)

    Powell, Michael R.; Hall, W. A.

    1993-01-01

    It would be of operational significance if one possessed a device that would indicate the presence of gas phase formation in the body during hypobaric decompression. Automated analysis of Doppler gas bubble signals has been attempted for two decades, but with generally unfavorable results except with surgically implanted transducers. Recently, efforts have intensified with the introduction of low-cost computer programs. Current NASA work is directed towards the development of a computer-assisted method specifically targeted to EVA, and we are most interested in Spencer Grade 4. We note that in Spencer Doppler Grades 1 to 3 the FFT sonogram and spectrogram show increases in the amplitude domain, and the frequency domain is sometimes elevated above that created by the normal blood flow envelope. The amplitude perturbations are of very short duration, in both systole and diastole, and at random temporal positions. Grade 4 is characteristic in the amplitude domain, but with modest increases in the FFT sonogram and spectral frequency power from 2 kHz to 4 kHz over the whole cardiac cycle. Heart valve motion appears to display characteristic signals: (1) the demodulated Doppler signal amplitude is considerably above the Doppler-shifted blood flow signal (even Grade 4); and (2) demodulated Doppler frequency shifts are considerably greater (often several kHz) than the upper edge of the blood flow envelope. Knowledge of these facts will aid in the construction of a real-time, computer-assisted discriminator to eliminate cardiac motion artifacts. There could also exist perturbations in the following: (1) modifications of the pattern of blood flow in accordance with Poiseuille's Law, (2) flow changes with a change in the Reynolds number, (3) an increase in the pulsatility index, and/or (4) diminished diastolic flow or 'runoff.' Doppler ultrasound devices have been constructed with a three-transducer array and a pulsed frequency generator.

  2. Computer aided vertebral visualization and analysis: a methodology using the sand rat, a small animal model of disc degeneration

    Directory of Open Access Journals (Sweden)

    Hanley Edward N

    2003-03-01

    Full Text Available Background: The purpose of this study is to present an automated system that analyzes digitized x-ray images of small animal spines, identifying the effects of disc degeneration. The age-related disc and spine degeneration that occurs in the sand rat (Psammomys obesus) has previously been documented radiologically; selected representative radiographs with age-related changes were used here to develop computer-assisted vertebral visualization/analysis techniques. The techniques presented here have the potential to produce quantitative algorithms that yield more accurate and informative measurements in a time-efficient manner. Methods: Signal and image processing techniques were applied to digitized spine x-ray images; the spine was segmented, and its orientation and curvature determined. The image was segmented based on orientation changes of the spine, and edge detection was performed to define vertebral boundaries. Once vertebrae were identified, a number of measures were introduced and calculated to retrieve information on vertebral separation/orientation and sclerosis. Results: A method is described which produces computer-generated quantitative measurements of vertebrae and disc spaces. Six sand rat spine radiographs illustrate applications of this technique. Results showed that this method can successfully automate the calculation and analysis of vertebral length, vertebral spacing, and vertebral angle, and can score sclerosis. The techniques also provide a quantitative means to explore the relation between age and vertebral shape. Conclusions: This method provides a computationally efficient system to analyze spinal changes during aging. The techniques can be used to automate the quantitative processing of vertebral radiographic images and may be applicable to human and other animal radiologic models of the aging/degenerating spine.

  3. Combining two open source tools for neural computation (BioPatRec and Netlab) improves movement classification for prosthetic control.

    Science.gov (United States)

    Prahm, Cosima; Eckstein, Korbinian; Ortiz-Catalan, Max; Dorffner, Georg; Kaniusas, Eugenijus; Aszmann, Oskar C

    2016-08-31

    Controlling a myoelectric prosthesis for upper limbs is increasingly challenging for the user as more electrodes and joints become available. Motion classification based on pattern recognition with a multi-electrode array allows multiple joints to be controlled simultaneously. Previous pattern recognition studies are difficult to compare because individual research groups use their own data sets. To resolve this shortcoming and to facilitate comparisons, open access data sets were analysed using components of BioPatRec and Netlab pattern recognition models. Performances of the artificial neural networks, linear models, and training program components were compared. Evaluation took place within the BioPatRec environment, a Matlab-based open source platform that provides feature extraction, processing, and motion classification algorithms for prosthetic control. The algorithms were applied to myoelectric signals for individual and simultaneous classification of movements, with the aim of finding the best performing algorithm and network model. Evaluation criteria included classification accuracy and training time. Results for both the linear and the artificial neural network models demonstrated that Netlab's implementation, using the scaled conjugate gradient training algorithm, reached significantly higher accuracies than BioPatRec's. It is concluded that the best movement classification performance would be achieved by integrating Netlab training algorithms into the BioPatRec environment, so that future prosthesis training can be shortened and control made more reliable. Netlab was therefore included in the newest release of BioPatRec (v4.0).

  4. Methodology used to compute maximum potential doses from ingestion of edible plants and wildlife found on the Hanford Site

    Energy Technology Data Exchange (ETDEWEB)

    Soldat, J.K.; Price, K.R.; Rickard, W.H.

    1990-10-01

    The purpose of this report is to summarize the assumptions, dose factors, consumption rates, and methodology used to evaluate potential radiation doses to persons who may eat contaminated wildlife or contaminated plants collected from the Hanford Site. This report includes a description of the number and variety of wildlife and edible plants on the Hanford Site, methods for estimating the quantities of these items consumed and for converting intake of radionuclides to radiation doses, and example calculations of radiation doses from consumption of plants and wildlife. Edible plants on the publicly accessible margins of the shoreline of the Hanford Site and wildlife that move offsite are potential sources of contaminated food for the general public. Calculations of potential radiation doses from consumption of agricultural plants and farm animal products are made routinely and reported annually for those produced offsite, using information about concentrations of radionuclides, consumption rates, and factors for converting radionuclide intake into dose. Dose calculations for onsite plants and wildlife are made intermittently when appropriate samples become available for analysis or when special studies are conducted. Consumption rates are inferred from the normal intake rates of similar food types raised offsite and from the edible weight of the onsite product that is actually available for harvest. 19 refs., 4 tabs.

  5. Symbolic processing in neural networks

    OpenAIRE

    Neto, João Pedro; Siegelmann, Hava T.; Costa, J. Félix

    2003-01-01

    In this paper we show that programming languages can be translated into recurrent (analog, rational-weighted) neural nets. Implementation of programming languages in neural nets turns out to be not only theoretically exciting, but also to have practical implications for the recent efforts to merge symbolic and subsymbolic computation. To be of some use, it should be carried out in a context of bounded resources. Herein, we show how to use resource bounds to speed up computations over neural nets, thro...

  6. Convolutional neural network architecture and input volume matrix design for ERP classifications in a tactile P300-based Brain-Computer Interface.

    Science.gov (United States)

    Kodama, Takumi; Makino, Shoji

    2017-07-01

    In the presented study we conduct off-line ERP classification using a convolutional neural network (CNN) classifier for somatosensory ERP intervals acquired in the full-body tactile P300-based Brain-Computer Interface paradigm (fbBCI). The main objective of the study is to enhance fbBCI stimulus pattern classification accuracy by applying the CNN classifier. A 60 × 60 square input volume, formed from the one-dimensional somatosensory ERP intervals of each electrode channel, is input to the convolutional architecture for filter training. The flattened activation maps are evaluated by a multilayer perceptron with one hidden layer in order to calculate classification accuracy results. The proposed method reveals that the CNN classifier model can achieve non-personal-training ERP classification with the fbBCI paradigm, scoring 100% classification accuracy for all ten participating users.
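
    A minimal CNN with the described input and readout shape (a 60 × 60 single-channel input volume, convolutional filters, and a one-hidden-layer perceptron over the flattened activation maps) can be sketched in PyTorch; the layer counts and sizes here are illustrative, not the paper's exact architecture.

    ```python
    import torch
    import torch.nn as nn

    class ErpCnn(nn.Module):
        """Minimal CNN for a 60x60 ERP input volume, two classes
        (target / non-target). Layer sizes are illustrative."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            )
            # One hidden layer evaluates the flattened activation maps,
            # mirroring the multilayer-perceptron readout described above.
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(16 * 12 * 12, 64), nn.ReLU(),
                nn.Linear(64, 2),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    x = torch.randn(4, 1, 60, 60)      # batch of four ERP input volumes
    print(ErpCnn()(x).shape)           # torch.Size([4, 2])
    ```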

  7. Methodology of problem-based learning engineering and technology and of its implementation with modern computer resources

    Science.gov (United States)

    Lebedev, A. A.; Ivanova, E. G.; Komleva, V. A.; Klokov, N. M.; Komlev, A. A.

    2017-01-01

    The considered method of learning the basics of microelectronic circuits and amplifier systems enables one to understand electrical processes more deeply, to grasp the relationship between static and dynamic characteristics and, finally, to bring the learning process closer to the cognitive process. The scheme of problem-based learning can be represented by the following sequence of procedures: the contradiction is perceived and revealed; cognitive motivation is provided by creating a problematic situation (the mental state of the student) that drives the desire to solve the problem and to raise the question "why?"; a hypothesis is made; searches for solutions are carried out; an answer is sought. Due to the complexity of the architectural schemes, modern methods of computer analysis and synthesis are considered in the work. Examples are given of students, within the framework of student research work, engineering analog circuits with improved performance based on standard software and software developed at the Department of Microelectronics, MEPhI.

  8. Logging into the Field—Methodological Reflections on Ethnographic Research in a Pluri-Local and Computer-Mediated Field

    Directory of Open Access Journals (Sweden)

    Heike Mónika Greschke

    2007-09-01

    Full Text Available This article aims to introduce an ethnic group inhabiting a common virtual space in the World Wide Web (WWW, while being physically located in different socio-geographical contexts. Potentially global in its geographical extent, this social formation is constituted by means of interrelating virtual-global dimensions with physically grounded parts of the actors' lifeworlds. In addition, the communities' social life relies on specific communicative practices joining mediated forms of communication with co-presence based encounters. Ethnographic research in a pluri-local and computer-mediated field poses a set of problems which demand thorough reflection as well as a search for creative solutions. How can the boundaries of the field be determined? What does "being there" signify in such a case? Is it possible to enter the field while sitting at my own desk, just by visiting the respective site in the WWW, simply observing the communication going on without even being noticed by the subjects in the field? Or does "being in the field" imply that I ought to turn into a member of the studied community? Am I supposed to effectively live with the others for a while? And then, what can "living together" actually mean in that case? Will I learn enough about the field simply by participating in its virtual activities? Or do I have to account for the physically grounded dimensions of the actors' lifeworlds, as well? Ethnographic research in a pluri-local and computer-mediated field in practice raises a lot of questions regarding the ways of entering the field and being in the field. Some of them will be discussed in this paper by means of reflecting research experiences gained in the context of a recently concluded case study. URN: urn:nbn:de:0114-fqs0703321

  9. EFFICIENT LANE DETECTION BASED ON ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    F. Arce

    2017-09-01

    Full Text Available Lane detection is a problem that has attracted the attention of the computer vision community in recent years. Most approaches used to date to address this problem combine conventional image processing, image analysis, and pattern classification techniques. In this paper, we propose a methodology based on so-called Ellipsoidal Neural Networks with Dendritic Processing (ENNDPs) as a new approach to this important problem. The functioning and performance of the proposed methodology are validated with a real video taken by a camera mounted on a car circulating on an urban highway in Mexico City.

  10. Efficient Lane Detection Based on Artificial Neural Networks

    Science.gov (United States)

    Arce, F.; Zamora, E.; Hernández, G.; Sossa, H.

    2017-09-01

    Lane detection is a problem that has attracted the attention of the computer vision community in recent years. Most approaches used to date to address this problem combine conventional image processing, image analysis, and pattern classification techniques. In this paper, we propose a methodology based on so-called Ellipsoidal Neural Networks with Dendritic Processing (ENNDPs) as a new approach to this important problem. The functioning and performance of the proposed methodology are validated with a real video taken by a camera mounted on a car circulating on an urban highway in Mexico City.
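
    The core operation of an ellipsoidal dendritic unit is a quadratic-form membership test: a pattern x is accepted when (x - c)^T A (x - c) <= 1 for a learned center c and positive-definite matrix A. A minimal sketch with invented values, not the authors' full network:

    ```python
    import numpy as np

    def inside_ellipsoid(x, center, A):
        """Ellipsoidal dendritic unit: accept x if
        (x - c)^T A (x - c) <= 1, where A is positive definite."""
        d = x - center
        return float(d @ A @ d) <= 1.0

    # Hypothetical 2-D lane-pixel feature (e.g., normalized position/intensity)
    c = np.array([0.5, 0.3])
    A = np.diag([1 / 0.2**2, 1 / 0.1**2])   # axis-aligned ellipse, radii 0.2, 0.1
    print(inside_ellipsoid(np.array([0.55, 0.32]), c, A))  # True
    print(inside_ellipsoid(np.array([0.9, 0.8]), c, A))    # False
    ```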

  11. The potential of computer vision, optical backscattering parameters and artificial neural network modelling in monitoring the shrinkage of sweet potato (Ipomoea batatas L.) during drying.

    Science.gov (United States)

    Onwude, Daniel I; Hashim, Norhashila; Abdan, Khalina; Janius, Rimfiel; Chen, Guangnan

    2018-03-01

    Drying is a method used to preserve agricultural crops. During the drying of products with high moisture content, structural changes in shape, volume, area, density, and porosity occur. These changes could affect the final quality of the dried product and also the effective design of drying equipment. Therefore, this study investigated a novel approach for monitoring and predicting the shrinkage of sweet potato during drying. Drying experiments were conducted at temperatures of 50-70 °C and sample thicknesses of 2-6 mm. The volume and surface area obtained from camera vision, and the perimeter and illuminated area from backscattered optical images, were analysed and used to evaluate the shrinkage of sweet potato during drying. The relationship between dimensionless moisture content and the shrinkage of sweet potato in terms of volume, surface area, perimeter, and illuminated area was found to be linear. The results also demonstrated that the shrinkage of sweet potato based on computer vision and backscattered optical parameters is affected by the product thickness, drying temperature, and drying time. A multilayer perceptron (MLP) artificial neural network, with an input layer containing three cells, two hidden layers (18 neurons), and five cells in the output layer, was used to develop a model that can monitor, control, and predict the shrinkage parameters and moisture content of sweet potato slices under different drying conditions. The developed ANN model satisfactorily predicted the shrinkage and dimensionless moisture content of sweet potato, with a correlation coefficient greater than 0.95. Combined computer vision, laser light backscattering imaging, and artificial neural networks can be used as a non-destructive, rapid, and easily adaptable technique for in-line monitoring, predicting, and controlling the shrinkage and moisture changes of food and agricultural crops during drying. © 2017 Society of Chemical Industry.
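
    A scaled-down stand-in for the described MLP (three inputs, two hidden layers of 18 neurons) can be put together with scikit-learn; the data below are fabricated solely to exercise the pipeline, and the single output here is a volume-shrinkage ratio rather than the paper's five outputs.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    # Synthetic stand-in data: [temperature C, thickness mm, drying time min]
    X = np.column_stack([rng.uniform(50, 70, 200),
                         rng.uniform(2, 6, 200),
                         rng.uniform(0, 300, 200)])
    # Fabricated monotone "shrinkage ratio" target just to exercise the model
    y = np.exp(-X[:, 2] * X[:, 0] / (8000 * X[:, 1])) + rng.normal(0, 0.01, 200)

    # Two hidden layers as in the reviewed architecture (sizes illustrative)
    model = MLPRegressor(hidden_layer_sizes=(18, 18), max_iter=5000,
                         random_state=0).fit(X, y)
    print(model.predict([[60.0, 4.0, 120.0]]))   # predicted shrinkage ratio
    ```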

  12. The Wellcome Prize Lecture. A map of auditory space in the mammalian brain: neural computation and development.

    Science.gov (United States)

    King, A J

    1993-09-01

    The experiments described in this review have demonstrated that the SC contains a two-dimensional map of auditory space, which is synthesized within the brain using a combination of monaural and binaural localization cues. There is also an adaptive fusion of auditory and visual space in this midbrain nucleus, providing common access to the motor pathways that control orientation behaviour. This necessitates a highly plastic relationship between the visual and auditory systems, both during postnatal development and in adult life. Because of the independent mobility of different sense organs, gating mechanisms are incorporated into the auditory representation to provide up-to-date information about the spatial orientation of the eyes and ears. The SC therefore provides a valuable model system for studying a number of important issues in brain function, including the neural coding of sound location, the co-ordination of spatial information between different sensory systems, and the integration of sensory signals with motor outputs.

  13. Application of computational neural networks in predicting atmospheric pollutant concentrations due to fossil-fired electric power generation

    Energy Technology Data Exchange (ETDEWEB)

    El-Hawary, F. [BH Engineering Systems & Technical Univ. of Nova Scotia (Canada)

    1995-12-31

    The ability to accurately predict the behavior of a dynamic system is of essential importance in the monitoring and control of complex processes. In this regard, recent advances in neural-net-based system identification represent a significant step toward the development and design of a new generation of control tools for increased system performance and reliability. The enabling functionality is the accurate representation of a model of a nonlinear and nonstationary dynamic system. This functionality provides valuable new opportunities, including: (1) the ability to predict future system behavior on the basis of actual system observations, (2) on-line evaluation and display of system performance and the design of early warning systems, and (3) controller optimization for improved system performance. In this presentation, we discuss the issues involved in the definition and design of learning control systems and their impact on power system control. Several numerical examples are provided for illustrative purposes.

  14. Computer-Aided Diagnosis of Parkinson's Disease Using Complex-Valued Neural Networks and mRMR Feature Selection Algorithm.

    Science.gov (United States)

    Peker, Musa; Sen, Baha; Delen, Dursun

    2015-01-01

    Parkinson's disease (PD) is a neurological disorder which has a significant social and economic impact. PD is diagnosed by clinical observation and evaluations, coupled with a PD rating scale. However, these methods may be insufficient, especially in the initial phase of the disease. The processes are tedious and time-consuming, and hence systems that can automatically offer a diagnosis are needed. In this study, a novel method for the diagnosis of PD is proposed. Biomedical sound measurements obtained from continuous phonation samples were used as attributes. First, a minimum redundancy maximum relevance (mRMR) attribute selection algorithm was applied for the identification of the effective attributes. After conversion to complex numbers, the selected attributes were presented as input data to a complex-valued artificial neural network (CVANN). The proposed novel system might be a powerful tool for effective diagnosis of PD.
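
    A minimal sketch of the two preprocessing stages named in the abstract is given below: a simplified mRMR-style greedy selection (mutual information for relevance, absolute correlation as a redundancy proxy) followed by a phase encoding that maps each real attribute onto the unit circle, one common way of feeding real data to a complex-valued network. Both simplifications are assumptions, not the authors' exact procedure.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        def mrmr(X, y, k):
            # Greedy mRMR-style selection: relevance minus mean redundancy.
            relevance = mutual_info_classif(X, y, random_state=0)
            selected = [int(np.argmax(relevance))]
            while len(selected) < k:
                scores = []
                for j in range(X.shape[1]):
                    if j in selected:
                        scores.append(-np.inf)
                        continue
                    redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                          for s in selected])
                    scores.append(relevance[j] - redundancy)
                selected.append(int(np.argmax(scores)))
            return selected

        def to_complex(X01):
            # Phase encoding: x in [0, 1] -> exp(i * pi * x) on the unit circle.
            return np.exp(1j * np.pi * X01)

        rng = np.random.default_rng(1)
        X = rng.random((60, 10))                   # placeholder phonation features
        y = (X[:, 0] + X[:, 3] > 1).astype(int)    # placeholder PD / healthy labels
        cols = mrmr(X, y, 4)
        print(cols, to_complex(X[:, cols]).shape)  # the CVANN would consume this input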

  15. Neural network-based estimates of Southern Ocean net community production from in situ O2 / Ar and satellite observation: a methodological study

    Science.gov (United States)

    Chang, C.-H.; Johnson, N. C.; Cassar, N.

    2014-06-01

    Southern Ocean organic carbon export plays an important role in the global carbon cycle, yet its basin-scale climatology and variability are uncertain due to limited coverage of in situ observations. In this study, a neural network approach based on the self-organizing map (SOM) is adopted to construct weekly gridded (1° × 1°) maps of organic carbon export for the Southern Ocean from 1998 to 2009. The SOM is trained with in situ measurements of O2 / Ar-derived net community production (NCP) that are tightly linked to the carbon export in the mixed layer on timescales of one to two weeks and with six potential NCP predictors: photosynthetically available radiation (PAR), particulate organic carbon (POC), chlorophyll (Chl), sea surface temperature (SST), sea surface height (SSH), and mixed layer depth (MLD). This nonparametric approach is based entirely on the observed statistical relationships between NCP and the predictors and, therefore, is strongly constrained by observations. A thorough cross-validation yields three retained NCP predictors, Chl, PAR, and MLD. Our constructed NCP is further validated by good agreement with previously published, independent in situ derived NCP of weekly or longer temporal resolution through real-time and climatological comparisons at various sampling sites. The resulting November-March NCP climatology reveals a pronounced zonal band of high NCP that roughly follows the Subtropical Front in the Atlantic, Indian, and western Pacific sectors and turns southeastward shortly after the dateline. Other regions of elevated NCP include the upwelling zones off Chile and Namibia, the Patagonian Shelf, the Antarctic coast, and areas surrounding the Islands of Kerguelen, South Georgia, and Crozet. This basin-scale NCP climatology closely resembles that of the satellite POC field and observed air-sea CO2 flux. The long-term mean area-integrated NCP south of 50° S from our dataset, 17.9 mmol C m-2 d-1, falls within the range of 8.3 to 24 mmol
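
    A compact sketch of the SOM lookup idea is shown below, using the third-party minisom package: the map is trained on the three retained predictors (Chl, PAR, MLD), each map node stores the mean in situ NCP of the samples it wins, and a satellite pixel is then converted to NCP via its best-matching unit. Map size, training length and the random placeholder data are all assumptions.

        import numpy as np
        from minisom import MiniSom  # third-party package: pip install minisom

        rng = np.random.default_rng(0)
        X = rng.random((300, 3))     # normalized Chl, PAR, MLD (placeholders)
        ncp = 10 * X[:, 0] + 5 * X[:, 1] - 3 * X[:, 2] + rng.normal(0, 0.5, 300)

        som = MiniSom(8, 8, 3, sigma=1.5, learning_rate=0.5, random_seed=0)
        som.train(X, 5000, random_order=True)

        # Attach the mean in situ NCP to each best-matching unit (BMU).
        node_values = {}
        for xi, yi in zip(X, ncp):
            node_values.setdefault(som.winner(xi), []).append(yi)
        node_mean = {k: np.mean(v) for k, v in node_values.items()}

        # Estimate NCP for a new satellite pixel from its BMU's stored mean.
        pixel = np.array([0.4, 0.7, 0.2])
        print(node_mean.get(som.winner(pixel), np.nan))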

  16. Neural network-based estimates of Southern Ocean net community production from in-situ O2 / Ar and satellite observation: a methodological study

    Science.gov (United States)

    Chang, C.-H.; Johnson, N. C.; Cassar, N.

    2013-10-01

    Southern Ocean organic carbon export plays an important role in the global carbon cycle, yet its basin-scale climatology and variability are uncertain due to limited coverage of in situ observations. In this study, a neural network approach based on the self-organizing map (SOM) is adopted to construct weekly gridded (1° × 1°) maps of organic carbon export for the Southern Ocean from 1998 to 2009. The SOM is trained with in situ measurements of O2 / Ar-derived net community production (NCP) that are tightly linked to the carbon export in the mixed layer on timescales of 1-2 weeks, and six potential NCP predictors: photosynthetically available radiation (PAR), particulate organic carbon (POC), chlorophyll (Chl), sea surface temperature (SST), sea surface height (SSH), and mixed layer depth (MLD). This non-parametric approach is based entirely on the observed statistical relationships between NCP and the predictors, and therefore is strongly constrained by observations. A thorough cross-validation yields three retained NCP predictors, Chl, PAR, and MLD. Our constructed NCP is further validated by good agreement with previously published independent in situ derived NCP of weekly or longer temporal resolution through real-time and climatological comparisons at various sampling sites. The resulting November-March NCP climatology reveals a pronounced zonal band of high NCP that roughly follows the subtropical front in the Atlantic, Indian and western Pacific sectors and turns southeastward shortly after the dateline. Other regions of elevated NCP include the upwelling zones off Chile and Namibia, the Patagonian Shelf, the Antarctic coast, and areas surrounding the Islands of Kerguelen, South Georgia, and Crozet. This basin-scale NCP climatology closely resembles that of the satellite POC field and observed air-sea CO2 flux. The long-term mean area-integrated NCP south of 50° S from our dataset, 14 mmol C m-2 d-1, falls within the range of 8.3-24 mmol C m-2 d-1 from other model

  17. Predicting Motivation: Computational Models of PFC Can Explain Neural Coding of Motivation and Effort-based Decision-making in Health and Disease.

    Science.gov (United States)

    Vassena, Eliana; Deraeve, James; Alexander, William H

    2017-10-01

    Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO
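
    The PRO and HER models themselves are considerably richer than can be shown here, but the underlying prediction/prediction-error principle, reframed for effort-based choice as the review proposes, reduces to a delta rule. The sketch below is a toy illustration with invented payoffs and learning rate, not the published models.

        import numpy as np

        alpha = 0.2                       # learning rate (assumed)
        V = np.zeros(2)                   # predicted net value: 0 = low effort, 1 = high effort
        reward = np.array([0.3, 1.0])     # hypothetical payoffs
        effort_cost = np.array([0.1, 0.5])

        rng = np.random.default_rng(0)
        for trial in range(200):
            a = rng.integers(2)                   # sample an action
            outcome = reward[a] - effort_cost[a]  # motivational outcome
            pe = outcome - V[a]                   # prediction error ~ simulated MPFC signal
            V[a] += alpha * pe                    # expectations track reward minus effort

        print(V)  # approaches [0.2, 0.5]: the high-effort option remains worth choosing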

  18. Human Computer Interactions in Next-Generation of Aircraft Smart Navigation Management Systems: Task Analysis and Architecture under an Agent-Oriented Methodological Approach

    Directory of Open Access Journals (Sweden)

    José M. Canino-Rodríguez

    2015-03-01

    Full Text Available The limited efficiency of current air traffic systems will require a next-generation Smart Air Traffic System (SATS) that relies on current technological advances. This challenge means a transition toward a new navigation and air-traffic procedures paradigm, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI) for performing these activities is a key element of SATS. However, efforts to develop such tools need to be informed by a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper focuses on airborne HCI within SATS, where cockpit inputs come from aircraft navigation systems, the surrounding traffic situation, controllers’ indications, etc. The HCI is thus intended to enhance situation awareness and decision-making in the cockpit. This work considers SATS as a large-scale distributed system operating with uncertainty in a dynamic environment. Therefore, a multi-agent-systems-based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications.

  19. The methodology of designing and optimizing microalgal unit in BLSS based on system dynamics and computer simulation

    Science.gov (United States)

    Hu, Dawei; Liu, Hong; Hu, Enzhu; Li, Ming

    Ensuring that the Bioregenerative Life Support System (BLSS) and its components work stably and robustly at a prescribed point is significant for both the safety and the reliability of BLSS. In this article, the objectives are to design and optimize an important subsystem of BLSS, the microalgal unit, so that it gradually stabilizes at a required working point and has the best response specifications. The method comprises the following steps. First, mathematical models of the subunits of the microalgal unit were developed as state-space equations consisting of first-order nonlinear ordinary differential equations; based on these mathematical models and on Matlab/Simulink simulation models of the subunits, the structure of the microalgal unit was designed by connecting the different subunits in series, in parallel and in feedback into a whole model dx/dt = f(p, x), where p and x are the parameter vector and the state-variable vector, respectively. Second, f(p, x) = 0 was solved for p, and linearization was carried out at the prescribed point to obtain the state matrix. According to system dynamics and the Hurwitz criterion, the linearized system is asymptotically stable, and has ideal dynamic performance, if all its poles (i.e., the eigenvalues of the state matrix) are located in the left half of the complex plane and there exists a pair of dominant complex-conjugate poles that can provide an appropriate damping ratio determining the characteristics of the transient response; its nonlinear counterpart must then have similar dynamic characteristics at the prescribed point. The systematic parameters were optimized and solved by computer simulation using a combination of the quasi-Newton method, genetic algorithms, the root-locus method, the Nyquist stability criterion, and so on. If the desired parameters could not be found, the unit structure was tentatively changed or a compensating design was carried out. The results show that after the system and parameters have been appropriately designed and optimized, which could make the linear system
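
    The stability test described above can be sketched numerically: find an equilibrium of dx/dt = f(p, x), build the Jacobian (state matrix) by finite differences, and check that all eigenvalues lie in the left half of the complex plane. The two-state model below is an invented placeholder for the microalgal-unit equations, not the authors' model.

        import numpy as np
        from scipy.optimize import fsolve

        def f(x, p):
            # Placeholder biomass/nutrient dynamics standing in for the real subunits.
            growth, dilution = p
            biomass, nutrient = x
            return np.array([growth * biomass * nutrient - dilution * biomass,
                             dilution * (1.0 - nutrient) - growth * biomass * nutrient])

        def jacobian(x, p, eps=1e-6):
            # Central-difference approximation of the state matrix at point x.
            n = len(x)
            J = np.zeros((n, n))
            for j in range(n):
                dx = np.zeros(n); dx[j] = eps
                J[:, j] = (f(x + dx, p) - f(x - dx, p)) / (2 * eps)
            return J

        p = (1.2, 0.4)
        x_eq = fsolve(lambda x: f(x, p), x0=[0.7, 0.3])   # guess near the working point
        eigvals = np.linalg.eigvals(jacobian(x_eq, p))
        print(x_eq, eigvals, "stable" if np.all(eigvals.real < 0) else "unstable")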

  20. Potential and limitations of X-Ray micro-computed tomography in arthropod neuroanatomy: A methodological and comparative survey

    Science.gov (United States)

    Sombke, Andy; Lipke, Elisabeth; Michalik, Peter; Uhl, Gabriele; Harzsch, Steffen

    2015-01-01

    Classical histology or immunohistochemistry combined with fluorescence or confocal laser scanning microscopy are common techniques in arthropod neuroanatomy, and these methods often require time-consuming and difficult dissections and sample preparations. Moreover, these methods are prone to artifacts due to compression and distortion of tissues, which often result in information loss and especially affect the spatial relationships of the examined parts of the nervous system in their natural anatomical context. Noninvasive approaches such as X-ray micro-computed tomography (micro-CT) can overcome such limitations and have been shown to be a valuable tool for understanding and visualizing internal anatomy and structural complexity. Nevertheless, knowledge about the potential of this method for analyzing the anatomy and organization of nervous systems, especially of taxa with smaller body size (e.g., many arthropods), is limited. This study set out to analyze the brains of selected arthropods with micro-CT, and to compare these results with available histological and immunohistochemical data. Specifically, we explored the influence of different sample preparation procedures. Our study shows that micro-CT is highly suitable for analyzing arthropod neuroarchitecture in situ and allows specific neuropils to be distinguished within the brain to extract quantitative data such as neuropil volumes. Moreover, data acquisition is considerably faster compared with many classical histological techniques. Thus, we conclude that micro-CT is highly suitable for targeting neuroanatomy, as it reduces the risk of artifacts and is faster than classical techniques. J. Comp. Neurol. 523:1281–1295, 2015. © 2015 Wiley Periodicals, Inc. PMID:25728683

  1. Application of Artificial Neural Network to Computer-Aided Diagnosis of Coronary Artery Disease in Myocardial SPECT Bull's-eye Images

    National Research Council Canada - National Science Library

    Fujita, Hiroshi; Katafuchi, Tetsuro; Uehara, Toshiisa; Nishimura, Tsunehiko

    1992-01-01

    .... The technique employs a neural network to analyze 201 Tl myocardial SPECT bull's-eye images. This multi-layer feed-forward neural network with a backpropagation algorithm has 256 input units (pattern...

  2. Forecasting the EMU inflation rate: Linear econometric vs. non-linear computational models using genetic neural fuzzy systems

    DEFF Research Database (Denmark)

    Kooths, Stefan; Mitze, Timo Friedel; Ringhut, Eric

    2004-01-01

    This paper compares the predictive power of linear econometric and non-linear computational models for forecasting the inflation rate in the European Monetary Union (EMU). Various models of both types are developed using different monetary and real activity indicators. They are compared according...

  3. Neural Oscillators Programming Simplified

    Directory of Open Access Journals (Sweden)

    Patrick McDowell

    2012-01-01

    Full Text Available The neurological mechanism used for generating rhythmic patterns for functions such as swallowing, walking, and chewing has been modeled computationally by the neural oscillator. It has been widely studied by biologists to model various aspects of organisms and by computer scientists and robotics engineers as a method for controlling and coordinating the gaits of walking robots. Although there has been significant study in this area, it is difficult to find basic guidelines for programming neural oscillators. In this paper, the authors approach neural oscillators from a programmer’s point of view, providing background and examples for developing neural oscillators to generate rhythmic patterns that can be used in biological modeling and robotics applications.
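
    For readers who want a concrete starting point, a minimal Matsuoka two-neuron oscillator, the standard model behind such rhythmic pattern generators, is sketched below; the parameter values are common textbook choices rather than values from the paper.

        import numpy as np

        tau, tau_a = 0.25, 0.5           # rise and adaptation time constants
        beta, w, c = 2.5, 2.5, 1.0       # adaptation gain, mutual inhibition, tonic drive
        x = np.array([0.1, 0.0])         # membrane states (asymmetric start breaks symmetry)
        v = np.zeros(2)                  # adaptation (fatigue) states
        dt, steps = 0.005, 4000
        out = np.zeros((steps, 2))

        for t in range(steps):
            y = np.maximum(0.0, x)                         # firing rates
            dx = (-x - beta * v - w * y[::-1] + c) / tau   # each neuron inhibits the other
            dv = (y - v) / tau_a
            x += dt * dx                                   # forward-Euler integration
            v += dt * dv
            out[t] = y

        # out[:, 0] and out[:, 1] oscillate in anti-phase; their difference can drive
        # a joint or other rhythmic actuator.
        print(out[-200:, 0].max(), out[-200:, 1].max())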

  4. Multi-task transfer learning deep convolutional neural network: application to computer-aided diagnosis of breast cancer on mammograms

    Science.gov (United States)

    Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Helvie, Mark A.; Cha, Kenny H.; Richter, Caleb D.

    2017-12-01

    Transfer learning in deep convolutional neural networks (DCNNs) is an important step in its application to medical imaging tasks. We propose a multi-task transfer learning DCNN with the aim of translating the ‘knowledge’ learned from non-medical images to medical diagnostic tasks through supervised training and increasing the generalization capabilities of DCNNs by simultaneously learning auxiliary tasks. We studied this approach in an important application: classification of malignant and benign breast masses. With Institutional Review Board (IRB) approval, digitized screen-film mammograms (SFMs) and digital mammograms (DMs) were collected from our patient files and additional SFMs were obtained from the Digital Database for Screening Mammography. The data set consisted of 2242 views with 2454 masses (1057 malignant, 1397 benign). In single-task transfer learning, the DCNN was trained and tested on SFMs. In multi-task transfer learning, SFMs and DMs were used to train the DCNN, which was then tested on SFMs. N-fold cross-validation with the training set was used for training and parameter optimization. On the independent test set, the multi-task transfer learning DCNN was found to have significantly (p  =  0.007) higher performance compared to the single-task transfer learning DCNN. This study demonstrates that multi-task transfer learning may be an effective approach for training DCNN in medical imaging applications when training samples from a single modality are limited.
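
    Architecturally, the setup amounts to a shared convolutional backbone with one classification head per task. The Keras sketch below uses an ImageNet-pretrained ResNet50 purely as a stand-in for the paper's DCNN, and the head names, input size and loss weighting are assumptions.

        import tensorflow as tf

        # Shared pretrained backbone: the 'knowledge' transferred from non-medical images.
        base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                              pooling="avg", input_shape=(224, 224, 3))
        features = base.output
        # One sigmoid head per modality/task (malignant vs benign).
        sfm_head = tf.keras.layers.Dense(1, activation="sigmoid", name="sfm")(features)
        dm_head = tf.keras.layers.Dense(1, activation="sigmoid", name="dm")(features)

        model = tf.keras.Model(base.input, [sfm_head, dm_head])
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss={"sfm": "binary_crossentropy", "dm": "binary_crossentropy"},
                      loss_weights={"sfm": 1.0, "dm": 0.5})  # auxiliary task down-weighted (assumption)
        model.summary()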

  5. Multi-task transfer learning deep convolutional neural network: application to computer-aided diagnosis of breast cancer on mammograms.

    Science.gov (United States)

    Samala, Ravi K; Chan, Heang-Ping; Hadjiiski, Lubomir M; Helvie, Mark A; Cha, Kenny; Richter, Caleb

    2017-10-16

    Transfer learning in deep convolutional neural networks (DCNNs) is an important step in its application to medical imaging tasks. We propose a multi-task transfer learning DCNN with the aims of translating the 'knowledge' learned from non-medical images to medical diagnostic tasks through supervised training and increasing the generalization capabilities of DCNNs by simultaneously learning auxiliary tasks. We studied this approach in an important application: classification of malignant and benign breast masses. With IRB approval, digitized screen-film mammograms (SFMs) and digital mammograms (DMs) were collected from our patient files and additional SFMs were obtained from the Digital Database for Screening Mammography. The data set consisted of 2,242 views with 2,454 masses (1,057 malignant, 1,397 benign). In single-task transfer learning, the DCNN was trained and tested on SFMs. In multi-task transfer learning, SFMs and DMs were used to train the DCNN, which was then tested on SFMs. N-fold cross-validation with the training set was used for training and parameter optimization. On the independent test set, the multi-task transfer learning DCNN was found to have significantly (p=0.007) higher performance compared to the single-task transfer learning DCNN. This study demonstrates that multi-task transfer learning may be an effective approach for training DCNN in medical imaging applications when training samples from a single modality are limited. © 2017 Institute of Physics and Engineering in Medicine.

  6. Computation of the Speed of Four In-Wheel Motors of an Electric Vehicle Using a Radial Basis Neural Network

    Directory of Open Access Journals (Sweden)

    M. Yildirim

    2016-12-01

    Full Text Available This paper presents the design and speed estimation of an Electric Vehicle (EV) with four in-wheel motors using a Radial Basis Neural Network (RBNN). According to the steering angle and the speed of the EV, the speeds of all wheels are calculated by equations derived from the Ackermann-Jeantand model using the CoDeSys Software Package. The Electronic Differential System (EDS) is also simulated in Matlab/Simulink using the mathematical equations. The RBNN is used to estimate the wheel speeds based on the steering angle and EV speed. Further, different levels of noise are added to the steering angle and the EV speed. The speeds of the front wheels calculated by CoDeSys are sent to two Induction Motor (IM) drives via a Controller Area Network bus (CAN-Bus). These speed values are measured experimentally with a tachometer while changing the steering angle and EV speed. The RBNN results are verified against the CoDeSys, Simulink, and experimental results. As a result, it is observed that the RBNN is a good estimator for the EDS of an EV with in-wheel motors, owing to its robustness to different levels of sensor noise.
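
    The Ackermann-style geometry behind such an electronic differential can be sketched as follows: given the EV speed and steering angle, each wheel's reference speed scales with the radius it traces about the common turn centre. The vehicle dimensions are illustrative, and this plain-geometry version is the kind of mapping the RBNN would be trained to approximate, not the paper's exact equations.

        import numpy as np

        L, d = 2.5, 1.5                       # wheelbase and track width in metres (assumed)

        def wheel_speeds(v, delta):
            # v: vehicle speed (m/s); delta: steering angle (rad), positive = right turn here.
            if abs(delta) < 1e-6:
                return dict(fl=v, fr=v, rl=v, rr=v)
            R = L / np.tan(delta)             # turning radius of the rear-axle centre
            omega = v / R                     # yaw rate about the turn centre
            rl = omega * (R - d / 2)          # rear wheels: in-plane radii
            rr = omega * (R + d / 2)
            fl = omega * np.hypot(R - d / 2, L)   # front wheels also span the wheelbase
            fr = omega * np.hypot(R + d / 2, L)
            return dict(fl=fl, fr=fr, rl=rl, rr=rr)

        print(wheel_speeds(v=10.0, delta=np.radians(15)))  # m/s for each in-wheel motor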

  7. Optimization of solid-phase extraction using artificial neural networks and response surface methodology in combination with experimental design for determination of gold by atomic absorption spectrometry in industrial wastewater samples.

    Science.gov (United States)

    Ebrahimzadeh, H; Tavassoli, N; Sadeghi, O; Amini, M M

    2012-08-15

    Solid-phase extraction (SPE) is often used for the preconcentration and determination of metal ions from industrial and natural samples. A traditional single-variable approach (SVA) is still often used for optimization in analytical chemistry. Since there is always a risk of not finding the real optimum with the single-variable method, more advanced optimization approaches such as the multivariable approach (MVA) should be applied. Applying MVA optimization can save both time and chemical materials, and consequently decrease analytical costs. Nowadays, the use of artificial neural networks (ANN) and response surface methodology (RSM) in combination with experimental design (MVA) is developing rapidly. After prediction of the model equation in RSM and training of the artificial neurons in ANNs, the resulting models were used to estimate the responses of the 27 experimental runs. In the present work, the optimization of SPE by the single-variable method is compared with optimization by ANN and RSM in combination with central composite design (CCD), and the latter approach is practically illustrated. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Optimization of microwave-assisted extraction of total extract, stevioside and rebaudioside-A from Stevia rebaudiana (Bertoni) leaves, using response surface methodology (RSM) and artificial neural network (ANN) modelling.

    Science.gov (United States)

    Ameer, Kashif; Bae, Seong-Woo; Jo, Yunhee; Lee, Hyun-Gyu; Ameer, Asif; Kwon, Joong-Ho

    2017-08-15

    Stevia rebaudiana (Bertoni) leaves contain stevioside and rebaudioside-A (Reb-A). We compared response surface methodology (RSM) and artificial neural network (ANN) modelling for their estimation and predictive capabilities in building effective models with maximum responses. A 5-level, 3-factor central composite design was used to optimize microwave-assisted extraction (MAE) to obtain maximum yields of the target responses as a function of extraction time (X1: 1-5 min), ethanol concentration (X2: 0-100%) and microwave power (X3: 40-200 W). Maximum values of the three output parameters (7.67% total extract yield, 19.58 mg/g stevioside yield, and 15.3 mg/g Reb-A yield) were obtained under the optimum extraction conditions of 4 min (X1), 75% (X2), and 160 W (X3). The ANN model demonstrated higher efficiency than the RSM model. Hence, RSM can demonstrate the interaction effects of the inherent MAE parameters on the target responses, whereas ANN can reliably model the MAE process with better predictive and estimation capabilities. Copyright © 2017. Published by Elsevier Ltd.
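
    At its core, the RSM side of such a comparison is a second-order polynomial fitted over the design factors; the sketch below fits one with scikit-learn. The design rows stand in for central-composite-design runs and the yield values are placeholders, not the paper's measurements.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression

        # Columns: extraction time (min), ethanol (%), microwave power (W).
        X = np.array([[1, 0, 40], [5, 100, 200], [3, 50, 120], [1, 100, 40],
                      [5, 0, 200], [3, 50, 40], [3, 50, 200], [1, 50, 120],
                      [5, 50, 120], [3, 0, 120], [3, 100, 120], [4, 75, 160]], dtype=float)
        yield_pct = np.array([3.1, 5.9, 6.8, 3.4, 5.2, 5.0, 6.3, 4.4, 6.5, 4.1, 5.6, 7.7])

        quad = PolynomialFeatures(degree=2, include_bias=False)  # linear, interaction, squared terms
        rsm = LinearRegression().fit(quad.fit_transform(X), yield_pct)

        # Predicted total-extract yield at the reported optimum (4 min, 75%, 160 W).
        print(rsm.predict(quad.transform([[4, 75, 160]])))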

  9. Building Neural Net Software

    OpenAIRE

    Neto, João Pedro; Costa, José Félix

    1999-01-01

    In a recent paper [Neto et al. 97] we showed that programming languages can be translated onto recurrent (analog, rational-weighted) neural nets. The goal was not efficiency but simplicity. Indeed, we used a number-theoretic approach to machine programming, where (integer) numbers were coded in a unary fashion, introducing an exponential slowdown in the computations with respect to a two-symbol-tape Turing machine. Implementation of programming languages in neural nets turns out to be not only theo...

  10. Neural fields theory and applications

    CERN Document Server

    Graben, Peter; Potthast, Roland; Wright, James

    2014-01-01

    With this book, the editors present the first comprehensive collection in neural field studies, authored by leading scientists in the field - among them are two of the founding-fathers of neural field theory. Up to now, research results in the field have been disseminated across a number of distinct journals from mathematics, computational neuroscience, biophysics, cognitive science and others. Starting with a tutorial for novices in neural field studies, the book comprises chapters on emergent patterns, their phase transitions and evolution, on stochastic approaches, cortical development, cognition, robotics and computation, large-scale numerical simulations, the coupling of neural fields to the electroencephalogram and phase transitions in anesthesia. The intended readership are students and scientists in applied mathematics, theoretical physics, theoretical biology, and computational neuroscience. Neural field theory and its applications have a long-standing tradition in the mathematical and computational ...

  11. Training and Validating a Deep Convolutional Neural Network for Computer-Aided Detection and Classification of Abnormalities on Frontal Chest Radiographs.

    Science.gov (United States)

    Cicero, Mark; Bilbily, Alexander; Colak, Errol; Dowdell, Tim; Gray, Bruce; Perampaladas, Kuhan; Barfett, Joseph

    2017-05-01

    Convolutional neural networks (CNNs) are a subtype of artificial neural network that have shown strong performance in computer vision tasks including image classification. To date, there has been limited application of CNNs to chest radiographs, the most frequently performed medical imaging study. We hypothesize CNNs can learn to classify frontal chest radiographs according to common findings from a sufficiently large data set. Our institution's research ethics board approved a single-center retrospective review of 35,038 adult posterior-anterior chest radiographs and final reports performed between 2005 and 2015 (56% men, average age of 56, patient type: 24% inpatient, 39% outpatient, 37% emergency department) with a waiver for informed consent. The GoogLeNet CNN was trained using 3 graphics processing units to automatically classify radiographs as normal (n = 11,702) or into 1 or more of cardiomegaly (n = 9240), consolidation (n = 6788), pleural effusion (n = 7786), pulmonary edema (n = 1286), or pneumothorax (n = 1299). The network's performance was evaluated using receiver operating curve analysis on a test set of 2443 radiographs with the criterion standard being board-certified radiologist interpretation. Using 256 × 256-pixel images as input, the network achieved an overall sensitivity and specificity of 91% with an area under the curve of 0.964 for classifying a study as normal (n = 1203). For the abnormal categories, the sensitivity, specificity, and area under the curve, respectively, were 91%, 91%, and 0.962 for pleural effusion (n = 782), 82%, 82%, and 0.868 for pulmonary edema (n = 356), 74%, 75%, and 0.850 for consolidation (n = 214), 81%, 80%, and 0.875 for cardiomegaly (n = 482), and 78%, 78%, and 0.861 for pneumothorax (n = 167). Current deep CNN architectures can be trained with modest-sized medical data sets to achieve clinically useful performance at detecting and excluding common pathology on chest radiographs.
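
    Because a radiograph can be normal or carry several findings at once, the task is multi-label: one sigmoid output per category trained with binary cross-entropy, rather than a softmax over mutually exclusive classes. The tiny Keras network below illustrates only that output design; the backbone is deliberately minimal and is not the GoogLeNet used in the study.

        import tensorflow as tf

        CLASSES = ["normal", "cardiomegaly", "consolidation", "pleural_effusion",
                   "pulmonary_edema", "pneumothorax"]

        model = tf.keras.Sequential([
            tf.keras.Input(shape=(256, 256, 1)),               # 256 x 256 inputs as in the paper
            tf.keras.layers.Conv2D(32, 3, activation="relu"),  # toy backbone (assumption)
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(len(CLASSES), activation="sigmoid"),  # independent per-label scores
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC(multi_label=True)])
        model.summary()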

  12. Improvement of reliability of molecular DNA computing: solution of inverse problem of Raman spectroscopy using artificial neural networks

    Science.gov (United States)

    Dolenko, T. A.; Burikov, S. A.; Vervald, E. N.; Efitorov, A. O.; Laptinskiy, K. A.; Sarmanova, O. E.; Dolenko, S. A.

    2017-02-01

    Elaboration of methods for the control of biochemical reactions with deoxyribonucleic acid (DNA) strands is necessary for the solution of one of the basic problems in the creation of biocomputers—improvement in the reliability of molecular DNA computing. In this paper, the results of the solution of the four-parameter inverse problem of laser Raman spectroscopy—the determination of the type and concentration of each of the DNA nitrogenous bases in multi-component solutions—are presented.

  13. Multidimensional Space-Time Methodology for Development of Planetary and Space Sciences, S-T Data Management and S-T Computational Tomography

    Science.gov (United States)

    Andonov, Zdravko

    This R&D represents an innovative multidimensional 6D-N(6n)D Space-Time (S-T) Methodology, 6D-6nD Coordinate Systems, 6D Equations, and a new 6D strategy and technology for the development of Planetary and Space Sciences, S-T Data Management and S-T Computational Tomography. The Methodology is relevant to brand-new RS Microwave Satellites and Computational Tomography Systems development, aimed at defending the sustainable evolution of the Earth, Moon and Sun System. Especially and extremely important are innovations for the monitoring and protection of the strategic trilateral system H-OH-H2O (Hydrogen, Hydroxyl and Water), corresponding to RS VHRS (Very High Resolution Systems) of 1.420-1.657-22.089 GHz microwaves. One of the greatest paradoxes and challenges of world science is the "transformation" of the J. L. Lagrange 4D Space-Time (S-T) System into the H. Minkowski 4D S-T System (O-X, Y, Z, icT) for Einstein's Theory of Relativity. As a global result, contemporary advanced space sciences have no real, adequate 4D-6D Space-Time Coordinate System and no 6D Advanced Cosmos Strategy & Methodology for multidimensional and multitemporal Space-Time Data Management and Tomography. That is one of the top actual S-T problems. The discovery of a simple and optimal nD S-T methodology is extremely important for all universities' space sciences' education programs, for advances in space research and especially for all young space scientists' R&D. The top ten 21st-century challenges ahead of Planetary and Space Sciences, Space Data Management and Computational Space Tomography, important for the successful development of young scientist generations, are the following: 1. R&D of W. R. Hamilton's general idea for the transformation of all space sciences to time sciences, beginning with the 6D eikonal for 6D anisotropic mediums and velocities; development of IERS Earth & Space Systems (VLBI, LLR, GPS, SLR, DORIS, etc.) for Planetary-Space Data Management & Computational Planetary & Space Tomography. 2. R&D of S. W. Hawking's paradigm for 2D

  14. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

    Directory of Open Access Journals (Sweden)

    Arianna eLaCroix

    2015-08-01

    Full Text Available The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel’s Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch’s neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music versus speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music.

  15. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study

    Science.gov (United States)

    LaCroix, Arianna N.; Diaz, Alvaro F.; Rogalsky, Corianne

    2015-01-01

    The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music. PMID:26321976

  16. The emergence of two anti-phase oscillatory neural populations in a computational model of the Parkinsonian globus pallidus

    Directory of Open Access Journals (Sweden)

    Robert John Merrison-Hort

    2013-12-01

    Full Text Available Experiments in rodent models of Parkinson's Disease have demonstrated a prominent increase of oscillatory firing patterns in neurons within the Parkinsonian globus pallidus (GP) which may underlie some of the motor symptoms of the disease. There are two main pathways from the cortex to GP: via the striatum and via the subthalamic nucleus (STN), but it is not known how these inputs sculpt the pathological pallidal firing patterns. To study this we developed a novel neural network model of conductance-based spiking pallidal neurons with cortex-modulated input from STN neurons. Our results support the hypothesis that entrainment occurs primarily via the subthalamic pathway. We find that as a result of the interplay between excitatory input from the STN and mutual inhibitory coupling between GP neurons, a homogeneous population of GP neurons demonstrates a self-organising dynamical behaviour where two groups of neurons emerge: one spiking in-phase with the cortical rhythm and the other in anti-phase. This finding mirrors what is seen in recordings from the GP of rodents that have had Parkinsonism induced via brain lesions. Our model also includes downregulation of Hyperpolarization-activated Cyclic Nucleotide-gated (HCN) channels in response to burst firing of GP neurons, since this has been suggested as a possible mechanism for the emergence of Parkinsonian activity. We found that the downregulation of HCN channels provides even better correspondence with experimental data but that it is not essential in order for the two groups of oscillatory neurons to appear. We discuss how the influence of inhibitory striatal input will strengthen our results.

  17. A new paradigm of knowledge engineering by soft computing

    CERN Document Server

    Ding, Liya

    2001-01-01

    Soft computing (SC) consists of several computing paradigms, including neural networks, fuzzy set theory, approximate reasoning, and derivative-free optimization methods such as genetic algorithms. The integration of those constituent methodologies forms the core of SC. In addition, the synergy allows SC to incorporate human knowledge effectively, deal with imprecision and uncertainty, and learn to adapt to unknown or changing environments for better performance. Together with other modern technologies, SC and its applications exert unprecedented influence on intelligent systems that mimic hum

  18. Temperature based daily incoming solar radiation modeling based on gene expression programming, neuro-fuzzy and neural network computing techniques.

    Science.gov (United States)

    Landeras, G.; López, J. J.; Kisi, O.; Shiri, J.

    2012-04-01

    The correct observation/estimation of surface incoming solar radiation (RS) is very important for many agricultural, meteorological and hydrological applications. While most weather stations are provided with sensors for air temperature detection, sensors for the detection of solar radiation are less common and the data quality they provide is sometimes poor. In these cases it is necessary to estimate this variable. Temperature-based modeling procedures are reported in this study for estimating daily incoming solar radiation, using Gene Expression Programming (GEP) for the first time, and other artificial intelligence models such as Artificial Neural Networks (ANNs) and the Adaptive Neuro-Fuzzy Inference System (ANFIS). Traditional temperature-based solar radiation equations were also included in this study and compared with the artificial-intelligence-based approaches. Root mean square error (RMSE), mean absolute error (MAE), RMSE-based skill score (SSRMSE), MAE-based skill score (SSMAE) and the Nash-Sutcliffe r2 criterion were used to assess the models' performances. An ANN (a four-input multilayer perceptron with ten neurons in the hidden layer) presented the best performance among the studied models (RMSE of 2.93 MJ m-2 d-1). A four-input ANFIS model was revealed as an interesting alternative to ANNs (RMSE of 3.14 MJ m-2 d-1). Very few studies have been done on the estimation of solar radiation based on ANFIS, and the present one demonstrated the ability of ANFIS to model solar radiation based on temperatures and extraterrestrial radiation. Moreover, this study demonstrated, for the first time, the ability of GEP models to model solar radiation based on daily atmospheric variables. Although the accuracy of the GEP models was slightly lower than that of the ANFIS and ANN models, genetic programming models (i.e., GEP) are superior to other artificial intelligence models in giving a simple explicit equation for the

  19. Soft computing model for optimized siRNA design by identifying off target possibilities using artificial neural network model.

    Science.gov (United States)

    Murali, Reena; John, Philips George; Peter S, David

    2015-05-15

    The ability of small interfering RNA (siRNA) to perform posttranscriptional gene regulation by knocking down targeted genes is an important research topic in functional genomics, biomedical research and cancer therapeutics. Many tools have been developed to design exogenous siRNA with high experimental inhibition. Even though a considerable amount of work has been done on designing exogenous siRNA, the design of effective siRNA sequences is still challenging, because the target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts due to sequence similarity. In some cases, siRNAs may tolerate mismatches with the target mRNA, but knockdown of genes other than the intended target could have serious consequences. Hence, to design siRNAs, two important concepts must be considered: the ability to knock down target genes and the off-target possibility on any nontarget genes. So before doing gene silencing with siRNAs, it is essential to analyze their off-target effects in addition to their inhibition efficacy against a particular target. Only a few methods have been developed that consider both the efficacy and the off-target possibility of an siRNA against a gene. In this paper we present a new neural network model with whole stacking energy (ΔG) that enables identification of the efficacy and off-target effects of siRNAs against target genes. The tool lists all siRNAs against a particular target with their inhibition efficacy and number of matches or sequence similarity with other genes in the database. We achieved an excellent performance of Pearson Correlation Coefficient (R=0.74) and Area Under Curve (AUC=0.906) when the threshold of whole stacking energy is ≥-34.6 kcal/mol. To the best of the authors' knowledge, this is one of the best scores considering the "combined efficacy and off target possibility" of siRNA for silencing a gene. The proposed model

  20. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  2. Transient stability analysis of electric energy systems via a fuzzy ART-ARTMAP neural network

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Wagner Peron; Silveira, Maria do Carmo G.; Lotufo, AnnaDiva P.; Minussi, Carlos. R. [Department of Electrical Engineering, Sao Paulo State University (UNESP), P.O. Box 31, 15385-000, Ilha Solteira, SP (Brazil)

    2006-04-15

    This work presents a methodology to analyze the transient stability (first oscillation) of electric energy systems using a neural network based on the ART architecture (adaptive resonance theory), named the fuzzy ART-ARTMAP neural network, for real-time applications. The security margin is used as the stability analysis criterion, considering three-phase short-circuit faults with a transmission line outage. The neural network operation consists of two fundamental phases: training and analysis. The training phase requires a great quantity of processing, while the analysis phase is effectuated almost without computational effort. This is, therefore, the principal motivation for using neural networks to solve complex problems that need fast solutions, such as real-time applications. ART neural networks have plasticity and stability as their primordial characteristics, which are essential qualities for the training execution and for an efficient analysis. The fuzzy ART-ARTMAP neural network is proposed seeking superior performance, in terms of precision and speed, compared to the conventional ARTMAP, and even more so compared to neural networks trained with the backpropagation algorithm, which is a benchmark in the neural network area. (author)
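
    As a point of reference for this architecture family, a minimal fuzzy ART categorizer, the building block that ART-ARTMAP schemes are assembled from, can be sketched as below: complement-coded inputs are matched against category prototypes with a fuzzy-AND choice function, and a vigilance test decides between resonance (prototype update) and creating a new category. Parameters and data are generic defaults, not the paper's.

        import numpy as np

        alpha, beta, rho = 0.001, 1.0, 0.75   # choice, learning rate, vigilance (assumed)

        def fuzzy_art(inputs):
            W, labels = [], []                # category prototypes and assignments
            for x in inputs:
                I = np.concatenate([x, 1 - x])             # complement coding
                if not W:
                    W.append(I.copy()); labels.append(0); continue
                T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in W]
                for j in np.argsort(T)[::-1]:              # search by choice value
                    match = np.minimum(I, W[j]).sum() / I.sum()
                    if match >= rho:                       # resonance: update prototype
                        W[j] = beta * np.minimum(I, W[j]) + (1 - beta) * W[j]
                        labels.append(int(j)); break
                else:                                      # no resonance: new category
                    W.append(I.copy()); labels.append(len(W) - 1)
            return labels

        data = np.random.default_rng(0).random((12, 2))    # stand-ins for stability features
        print(fuzzy_art(data))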

  3. A New Hybrid Methodology for Nonlinear Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Mehdi Khashei

    2011-01-01

    Full Text Available Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of forecasting problems with a high degree of accuracy. However, using ANNs to model linear problems has yielded mixed results, and hence it is not wise to apply them blindly to any type of data. This is the reason that hybrid methodologies combining linear models such as ARIMA and nonlinear models such as ANNs have been proposed in the time series forecasting literature. Despite all the advantages of the traditional methodologies for combining ARIMA and ANNs, they rely on assumptions that will degenerate their performance if the opposite situation occurs. In this paper, a new methodology is proposed for combining ANNs with ARIMA in order to overcome the limitations of traditional hybrid methodologies and yield more general and more accurate hybrid models. Empirical results with the Canadian lynx data set indicate that the proposed methodology can be a more effective way to combine linear and nonlinear models than traditional hybrid methodologies. Therefore, it can be applied as an appropriate alternative methodology for hybridization in the time series forecasting field, especially when higher forecasting accuracy is needed.
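
    The traditional additive hybrid that this paper generalizes can be sketched in a few lines: ARIMA captures the linear structure, an ANN models the residuals, and the two one-step-ahead forecasts are summed. The series, model orders and lag count below are placeholders; the paper's own contribution is precisely to relax these additive assumptions.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        t = np.arange(200)
        series = 10 + 0.05 * t + np.sin(t / 6.0) + rng.normal(0, 0.2, 200)  # toy series

        arima = ARIMA(series, order=(2, 1, 1)).fit()    # linear component
        resid = arima.resid

        lags = 3                                        # ANN learns resid[t] from 3 lags
        X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
        y = resid[lags:]
        ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=4000, random_state=0).fit(X, y)

        linear_part = arima.forecast(1)[0]
        nonlinear_part = ann.predict(resid[-lags:].reshape(1, -1))[0]
        print(linear_part + nonlinear_part)             # combined one-step-ahead forecast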

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact and addressing issues and solutions to the main challenges facing CMS computing. The lack of manpower is particul...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  6. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  7. Methodology of the design of an integrated telecommunications and computer network in a control information system for artillery battalion fire support

    Directory of Open Access Journals (Sweden)

    Slobodan M. Miletić

    2012-04-01

    Full Text Available A Command Information System (CIS) in a broader sense can be defined as a set of hardware and software solutions through which one achieves real-time integration of organizational structures, doctrine, technical and technological systems and facilities, and information flows and processes, for efficient and rational decision-making and functioning. The time distribution and quality of information directly affect the implementation of the decision-making process, and among the criteria for evaluating the effectiveness of the system, the most important role is played by an integrated telecommunications and computer network (ITCN), dimensioned to the spatial distribution of tactical combat units and connecting all the elements into a communications unit. The aim is to establish a design methodology for the ITCN through which the necessary analysis can be conducted and all the elements needed for modeling can be extracted; these are mapped to the elements of the network infrastructure and then analyzed from the perspective of telecommunications standards and the parameters of the layers of the OSI network model. A relevant way to verify the designed ITCN model is the development of a simulation model with which adequate results can be obtained. Conclusions on the compliance with tactical combat requirements and tactical communication requirements are drawn on the basis of these results.

  8. A Computational Analysis of Neural Mechanisms Underlying the Maturation of Multisensory Speech Integration in Neurotypical Children and Those on the Autism Spectrum.

    Science.gov (United States)

    Cuppini, Cristiano; Ursino, Mauro; Magosso, Elisa; Ross, Lars A; Foxe, John J; Molholm, Sophie

    2017-01-01

    Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late-childhood and identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation, and its delayed development in ASD. The model posits that strengthening of feedforward and cross-sensory connections, responsible for the alignment of auditory and visual speech sound representations in posterior superior temporal gyrus/sulcus, can explain behavioral data on the acquisition of MSI. This was simulated by a training phase during which the network was exposed to unisensory and multisensory stimuli, and projections were crafted by Hebbian rules of potentiation and depression. In its mature architecture, the network also reproduced the well-known multisensory McGurk speech effect. Deficits in audiovisual speech perception in ASD were well accounted for by fewer multisensory exposures, compatible with a lack of attention, but not by reduced synaptic connectivity or synaptic plasticity.
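
    The Hebbian training phase described above can be caricatured in a few lines: cross-sensory weights from visual to auditory speech-sound units are potentiated when the two are co-active and mildly depressed otherwise, so fewer multisensory exposures leave weaker cross-modal coupling. All sizes, rates and exposure counts below are illustrative, not the published model's values.

        import numpy as np

        n, exposures = 20, 500
        W = np.zeros((n, n))                 # visual -> auditory connection weights
        lr_pot, lr_dep = 0.05, 0.005

        rng = np.random.default_rng(0)
        for _ in range(exposures):
            token = rng.integers(n)          # which speech sound is presented
            a = np.zeros(n); a[token] = 1.0  # auditory representation
            v = np.zeros(n)
            if rng.random() < 0.5:           # only some exposures are audiovisual
                v[token] = 1.0               # congruent visual (lip-read) input
            W += lr_pot * np.outer(a, v)                              # potentiation
            W -= lr_dep * (np.outer(a, 1 - v) + np.outer(1 - a, v))   # depression
            np.clip(W, 0.0, 1.0, out=W)

        # Aligned (diagonal) weights dominate: same-sound auditory and visual units couple.
        print(W.diagonal().mean(), W[~np.eye(n, dtype=bool)].mean())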

  9. A Computational Analysis of Neural Mechanisms Underlying the Maturation of Multisensory Speech Integration in Neurotypical Children and Those on the Autism Spectrum

    Directory of Open Access Journals (Sweden)

    Cristiano Cuppini

    2017-10-01

    Full Text Available Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late-childhood and identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation, and its delayed development in ASD. The model posits that strengthening of feedforward and cross-sensory connections, responsible for the alignment of auditory and visual speech sound representations in posterior superior temporal gyrus/sulcus, can explain behavioral data on the acquisition of MSI. This was simulated by a training phase during which the network was exposed to unisensory and multisensory stimuli, and projections were crafted by Hebbian rules of potentiation and depression. In its mature architecture, the network also reproduced the well-known multisensory McGurk speech effect. Deficits in audiovisual speech perception in ASD were well accounted for by fewer multisensory exposures, compatible with a lack of attention, but not by reduced synaptic connectivity or synaptic plasticity.

  10. Ultrasound assisted biodiesel production from sesame (Sesamum indicum L.) oil using barium hydroxide as a heterogeneous catalyst: Comparative assessment of prediction abilities between response surface methodology (RSM) and artificial neural network (ANN).

    Science.gov (United States)

    Sarve, Antaram; Sonawane, Shriram S; Varma, Mahesh N

    2015-09-01

    The present study estimates the prediction capability of response surface methodology (RSM) and artificial neural network (ANN) models for biodiesel synthesis from sesame (Sesamum indicum L.) oil under ultrasonication (20 kHz and 1.2 kW) using barium hydroxide as a basic heterogeneous catalyst. RSM, based on a five-level, four-factor central composite design, was employed to obtain the best possible combination of catalyst concentration, methanol-to-oil molar ratio, temperature and reaction time for maximum FAME content. Experimental data were evaluated by applying RSM integrated with a desirability function approach. The importance of each independent variable on the response was investigated by sensitivity analysis. The optimum conditions were found to be a catalyst concentration of 1.79 wt%, a methanol-to-oil molar ratio of 6.69:1, a temperature of 31.92°C, and a reaction time of 40.30 min. For these conditions, an experimental FAME content of 98.6% was obtained, which was in reasonable agreement with the predicted one. The sensitivity analysis confirmed that catalyst concentration was the main factor affecting the FAME content, with a relative importance of 36.93%. The higher correlation coefficient (R(2)=0.781) and lower root mean square error (RMSE=4.81), standard error of prediction (SEP=6.03) and relative percent deviation (RPD=4.92) for ANN, compared to R(2) (0.596), RMSE (6.79), SEP (8.54) and RPD (6.48) for RSM, proved the better prediction capability of ANN in predicting the FAME content. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, notably by using opportunistic resources like the San Diego Supercomputer Center, which was made accessible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  12. Concise Neural Nonaffine Control of Air-Breathing Hypersonic Vehicles Subject to Parametric Uncertainties

    Directory of Open Access Journals (Sweden)

    Xiangwei Bu

    2017-01-01

    Full Text Available In this paper, a novel simplified neural control strategy is proposed for the longitudinal dynamics of an air-breathing hypersonic vehicle (AHV), directly using nonaffine models instead of affine ones. For the velocity dynamics, an adaptive neural controller is devised based on a minimal-learning parameter (MLP) technique for the sake of decreasing computational loads. The altitude dynamics is rewritten as a pure feedback nonaffine formulation, for which a novel concise neural control approach is achieved without backstepping. The special contributions are that the control architecture is concise and the computational cost is low. Moreover, the exploited controller possesses good practicability since there is no need for affine models. The semiglobally uniformly ultimate boundedness of all the closed-loop system signals is guaranteed via Lyapunov stability theory. Finally, simulation results are presented to validate the effectiveness of the investigated control methodology in the presence of parametric uncertainties.

  13. The Impact of Sociological Methodology on Statistical Methodology

    OpenAIRE

    Clogg, Clifford C.

    1992-01-01

    Developments in sociological methodology and in quantitative sociology have always been closely related to developments in statistical theory, methodology and computation. The same statement applies if "methodology for social research" and "quantitative social research" replace the more specific terms in this statement. Statistical methodology, including especially the battery of methods used to estimate and evaluate statistical models, has had a tremendous effect on social research in the po...

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...

  15. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  16. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    Science.gov (United States)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
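
    A minimal rate-model sketch of this setting is given below: a ring network with cosine-shaped recurrent coupling and a slow, activity-dependent adaptation variable standing in for SFA. The coupling profile, time constants and input are assumptions chosen for stability, not the paper's parameters; setting g = 0 switches the adaptation off for comparison.

        import numpy as np

        n, dt = 64, 1.0                        # units and time step (ms)
        tau, tau_a, g = 10.0, 200.0, 0.1       # membrane and adaptation time constants, gain
        theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
        W = (-1.0 + 1.5 * np.cos(theta[:, None] - theta[None, :])) / n   # ring coupling

        u = np.zeros(n)                        # population activity variable
        a = np.zeros(n)                        # spike-frequency adaptation variable
        stim = 0.5 * (1.0 + np.cos(theta))     # sustained bump-shaped input centred at 0 rad

        for _ in range(2000):                  # 2 s of simulated time
            r = np.maximum(u, 0.0)                      # rectified firing rate
            u += dt / tau * (-u + W @ r + stim - a)     # ring dynamics with SFA feedback
            a += dt / tau_a * (-a + g * r)              # slow, activity-dependent adaptation

        print("bump peak at %.2f rad, peak activity %.3f" % (theta[np.argmax(u)], u.max()))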

  17. Novel quantum inspired binary neural network algorithm

    Indian Academy of Sciences (India)

    In this paper, a quantum-based binary neural network algorithm is proposed, named the novel quantum binary neural network algorithm (NQ-BNN). It forms a neural network structure by deciding weights and a separability parameter in a quantum-based manner. The quantum computing concept represents solutions probabilistically ...

  18. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    Science.gov (United States)

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
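
    The trade-off discussed in this record can be seen in miniature for a single leaky integrate-and-fire neuron: a time-driven loop advances the state at every fixed step, while an event-driven scheme jumps between input spikes using the exact exponential solution. All constants below are arbitrary illustrative choices, not values from the simulator described in the paper.

        import numpy as np

        tau, v_th, v_reset, w = 20.0, 1.0, 0.0, 0.45   # LIF constants and synaptic weight
        spikes_in = [5.0, 6.0, 7.0, 40.0, 41.0]        # presynaptic spike times (ms)

        def time_driven(dt=0.1, t_end=60.0):
            """Fixed-step integration: cost grows with t_end/dt regardless of activity."""
            v, t, out, events = 0.0, 0.0, [], sorted(spikes_in)
            while t < t_end:
                v += dt * (-v / tau)                   # Euler step of the leak
                while events and events[0] <= t:
                    v += w
                    events.pop(0)
                if v >= v_th:
                    out.append(round(t, 1))
                    v = v_reset
                t += dt
            return out

        def event_driven():
            """Jump between events with the exact solution: cost scales with activity only."""
            v, t_last, out = 0.0, 0.0, []
            for t in sorted(spikes_in):
                v *= np.exp(-(t - t_last) / tau)       # exact decay since the last event
                v += w
                if v >= v_th:
                    out.append(t)
                    v = v_reset
                t_last = t
            return out

        print("time-driven spike times :", time_driven())
        print("event-driven spike times:", event_driven())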

  19. Use of pattern recognition and neural networks for non-metric sex diagnosis from lateral shape of calvarium: an innovative model for computer-aided diagnosis in forensic and physical anthropology.

    Science.gov (United States)

    Cavalli, Fabio; Lusnig, Luca; Trentin, Edmondo

    2017-05-01

    Sex determination on skeletal remains is one of the most important diagnoses in forensic cases and in demographic studies on ancient populations. Our purpose is to realize an automatic, operator-independent method to determine sex from bone shape, and to test an intelligent, automatic pattern recognition system in an anthropological domain. Our multiple-classifier system is based exclusively on the morphological variants of a curve that represents the sagittal profile of the calvarium, modeled via artificial neural networks, and yields an accuracy higher than 80%. The application of this system to other bone profiles is expected to further improve the sensitivity of the methodology.

  20. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful tests prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2s are associated with Physics Groups. Such associations are decided twice per ye...

  6. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s proved to be fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  7. Artificial neural networks for diagnosis and survival prediction in colon cancer

    OpenAIRE

    Ahmed, Farid E

    2005-01-01

    Abstract ANNs are nonlinear regression computational devices that have been used for over 45 years in classification and survival prediction in several biomedical systems, including colon cancer. Described in this article is the theory behind three-layer feed-forward artificial neural networks with backpropagation of error, which are widely used in biomedical fields, and a methodological approach to their application for cancer research, as exemplified by colon cancer. Review of the literature ...
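
    For readers who want the mechanics of the three-layer feed-forward network with error backpropagation that this record refers to, a compact sketch follows. The data are a synthetic stand-in; layer sizes, learning rate and epoch count are arbitrary illustrative choices.

        import numpy as np

        rng = np.random.default_rng(1)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        # synthetic stand-in for a marker-based classification task
        X = rng.normal(size=(200, 8))                        # 8 hypothetical input markers
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)[:, None]

        W1 = rng.normal(scale=0.5, size=(8, 6)); b1 = np.zeros(6)   # input -> hidden
        W2 = rng.normal(scale=0.5, size=(6, 1)); b2 = np.zeros(1)   # hidden -> output
        lr = 0.5

        for _ in range(2000):
            h = sigmoid(X @ W1 + b1)                  # forward pass
            p = sigmoid(h @ W2 + b2)
            d2 = (p - y) * p * (1 - p)                # output-layer error signal
            d1 = (d2 @ W2.T) * h * (1 - h)            # error backpropagated to the hidden layer
            W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(axis=0)
            W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(axis=0)

        print("training accuracy: %.2f" % ((p > 0.5) == (y > 0.5)).mean())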

  8. Tourism Methodologies

    DEFF Research Database (Denmark)

    This volume offers methodological discussions within the multidisciplinary field of tourism and shows how tourism researchers develop and apply new tourism methodologies. The book is presented as an anthology, giving voice to many diverse researchers who reflect on tourism methodology in different ways ... codings and analysis, and tapping into the global network of social media ... in interview and field work situations, and how do we engage with the performative aspects of tourism as a field of study? The book acknowledges that research is also performance and that it constitutes an aspect of intervention in the situations and contexts it is trying to explore. This is an issue dealt ...

  9. A supervised 'lesion-enhancement' filter by use of a massive-training artificial neural network (MTANN) in computer-aided diagnosis (CAD)

    Science.gov (United States)

    Suzuki, Kenji

    2009-09-01

    Computer-aided diagnosis (CAD) has been an active area of study in medical image analysis. A filter for the enhancement of lesions plays an important role for improving the sensitivity and specificity in CAD schemes. The filter enhances objects similar to a model employed in the filter; e.g. a blob-enhancement filter based on the Hessian matrix enhances sphere-like objects. Actual lesions, however, often differ from a simple model; e.g. a lung nodule is generally modeled as a solid sphere, but there are nodules of various shapes and with internal inhomogeneities such as a nodule with spiculations and ground-glass opacity. Thus, conventional filters often fail to enhance actual lesions. Our purpose in this study was to develop a supervised filter for the enhancement of actual lesions (as opposed to a lesion model) by use of a massive-training artificial neural network (MTANN) in a CAD scheme for detection of lung nodules in CT. The MTANN filter was trained with actual nodules in CT images to enhance actual patterns of nodules. By use of the MTANN filter, the sensitivity and specificity of our CAD scheme were improved substantially. With a database of 69 lung cancers, nodule candidate detection by the MTANN filter achieved a 97% sensitivity with 6.7 false positives (FPs) per section, whereas nodule candidate detection by a difference-image technique achieved a 96% sensitivity with 19.3 FPs per section. Classification-MTANNs were applied for further reduction of the FPs. The classification-MTANNs removed 60% of the FPs with a loss of one true positive; thus, it achieved a 96% sensitivity with 2.7 FPs per section. Overall, with our CAD scheme based on the MTANN filter and classification-MTANNs, an 84% sensitivity with 0.5 FPs per section was achieved. First presented at the Seventh International Conference on Machine Learning and Applications, San Diego, CA, USA, 11-13 December 2008.

  10. Analysis of breast CT lesions using computer-aided diagnosis: an application of neural networks on extracted morphologic and texture features

    Science.gov (United States)

    Ray, Shonket; Prionas, Nicolas D.; Lindfors, Karen K.; Boone, John M.

    2012-03-01

    Dedicated cone-beam breast CT (bCT) scanners have been developed as a potential alternative imaging modality to conventional X-ray mammography in breast cancer diagnosis. As with other modalities, quantitative imaging (QI) analysis can potentially be utilized as a tool to extract useful numeric information concerning diagnosed lesions from high quality 3D tomographic data sets. In this work, preliminary QI analysis was done by designing and implementing a computer-aided diagnosis (CADx) system consisting of image preprocessing, segmentation of object(s) of interest (i.e. masses, microcalcifications), structural analysis of the segmented object(s), and finally classification into benign or malignant disease. Image sets were acquired from bCT patient scans with diagnosed lesions. Iterative watershed segmentation (IWS), a hybridization of the watershed method using observer-set markers and a gradient vector flow (GVF) approach, was used as the lesion segmentation method in 3D. Eight morphologic parameters and six texture features based on gray level co-occurrence matrix (GLCM) calculations were obtained per segmented lesion and combined into multi-dimensional feature input data vectors. Artificial neural network (ANN) classifiers were used by performing cross validation and network parameter optimization to maximize area under the curve (AUC) values of the resulting receiver-operating characteristic (ROC) curves. Within these ANNs, biopsy-proven diagnoses of malignant and benign lesions were recorded as target data while the feature vectors were saved as raw input data. With the image data separated into post-contrast (n = 55) and pre-contrast sets (n = 39), maximum AUCs of 0.70 ± 0.02 and 0.80 ± 0.02, respectively, were achieved after ANN application.
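
    As an illustration of the GLCM-based texture features this record mentions, the sketch below computes a gray level co-occurrence matrix and four of its standard properties with scikit-image (function names assume scikit-image >= 0.19; the paper's exact six features and its lesion data are not reproduced here).

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(0)
        # toy 8-bit "lesion" patch standing in for a segmented bCT lesion
        patch = (rng.normal(0.5, 0.1, (32, 32)).clip(0, 1) * 255).astype(np.uint8)

        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        names = ("contrast", "homogeneity", "energy", "correlation")
        feats = {n: float(graycoprops(glcm, n).mean()) for n in names}
        print(feats)   # texture features to concatenate with morphologic ones for an ANN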

  11. A two-step convolutional neural network based computer-aided detection scheme for automatically segmenting adipose tissue volume depicting on CT images.

    Science.gov (United States)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; Moore, Kathleen; Liu, Hong; Zheng, Bin

    2017-06-01

    Accurate assessment of adipose tissue volume inside a human body plays an important role in predicting disease or cancer risk, diagnosis and prognosis. In order to overcome the limitation of using only one subjectively selected CT image slice to estimate the size of fat areas, this study aims to develop and test a computer-aided detection (CAD) scheme based on deep learning techniques to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) depicted on volumetric CT images. A retrospectively collected CT image dataset was divided into two independent training and testing groups. The proposed CAD framework consisted of two steps with two convolutional neural networks (CNNs), namely a Selection-CNN and a Segmentation-CNN. The first CNN was trained using 2,240 CT slices to select abdominal CT slices depicting SFA and VFA. The second CNN was trained with 84,000 pixel patches and applied to the selected CT slices to identify fat-related pixels and assign them into SFA and VFA classes. Compared to the manual CT slice selection and fat pixel segmentation results, the accuracy of CT slice selection using the Selection-CNN was 95.8%, while the accuracy of fat pixel segmentation using the Segmentation-CNN was 96.8%. This study demonstrated the feasibility of applying a new deep learning based CAD scheme to automatically recognize the abdominal section of the human body from CT scans and segment SFA and VFA from volumetric CT data with high accuracy or agreement with the manual segmentation results. Copyright © 2017 Elsevier B.V. All rights reserved.
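
    The two-step structure of the scheme (first select the relevant slices, then classify pixels within them) can be sketched with small dense networks standing in for the paper's CNNs. Everything below, including the random features and derived labels, is a structural illustration only, not the study's data or architecture.

        import numpy as np
        from sklearn.neural_network import MLPClassifier  # dense stand-ins for the two CNNs

        rng = np.random.default_rng(0)

        # step 1: "Selection" network decides whether a slice shows the abdomen
        slice_feats = rng.normal(size=(400, 16))          # toy per-slice feature vectors
        is_abdominal = (slice_feats[:, 0] > 0).astype(int)
        selector = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        selector.fit(slice_feats, is_abdominal)

        # step 2: "Segmentation" network labels pixel patches of the kept slices
        patches = rng.normal(size=(2000, 25))             # toy 5x5 patches, flattened
        classes = np.digitize(patches.mean(axis=1), [-0.1, 0.1])  # 0=background, 1=SFA, 2=VFA
        segmenter = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
        segmenter.fit(patches, classes)

        kept = selector.predict(slice_feats) == 1
        print("slices kept: %d / %d" % (kept.sum(), kept.size))
        print("patch classes:", segmenter.predict(patches[:5]))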

  12. Motor-related brain activity during action observation: a neural substrate for electrocorticographic brain-computer interfaces after spinal cord injury

    Directory of Open Access Journals (Sweden)

    Jennifer L Collinger

    2014-02-01

    Full Text Available After spinal cord injury (SCI), motor commands from the brain are unable to reach peripheral nerves and muscles below the level of the lesion. Action observation, in which a person observes someone else performing an action, has been used to augment traditional rehabilitation paradigms. Similarly, action observation can be used to derive the relationship between brain activity and movement kinematics for a motor-based brain-computer interface (BCI) even when the user cannot generate overt movements. BCIs use brain signals to control external devices to replace functions that have been lost due to SCI or other motor impairment. Previous studies have reported congruent motor cortical activity during observed and overt movements using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI). Recent single-unit studies using intracortical microelectrodes also demonstrated that a large number of motor cortical neurons had similar firing rate patterns between overt and observed movements. Given the increasing interest in electrocorticography (ECoG)-based BCIs, our goal was to identify whether action observation-related cortical activity could be recorded using ECoG during grasping tasks. Specifically, we aimed to identify congruent neural activity during observed and executed movements in both the sensorimotor rhythm (10-40 Hz) and the high-gamma band (65-115 Hz), which contains significant movement-related information. We observed significant motor-related high-gamma band activity during action observation in both able-bodied individuals and one participant with a complete C4 SCI. Furthermore, in able-bodied participants, both the low and high frequency bands demonstrated congruent activity between action execution and observation. Our results suggest that action observation could be an effective and critical procedure for deriving the mapping from ECoG signals to intended movement for an ECoG-based BCI system for individuals with
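
    The two frequency bands named in this record can be quantified with standard spectral estimation. The sketch below computes Welch band power in the sensorimotor (10-40 Hz) and high-gamma (65-115 Hz) ranges for a synthetic one-channel signal; the sampling rate and signal content are assumptions, not the study's recordings.

        import numpy as np
        from scipy.signal import welch

        fs = 1000.0                            # sampling rate in Hz (assumed)
        t = np.arange(0.0, 2.0, 1.0 / fs)
        rng = np.random.default_rng(0)
        # synthetic one-channel "ECoG": 20 Hz rhythm + weak 90 Hz component + noise
        x = (np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 90 * t)
             + rng.normal(scale=0.5, size=t.size))

        def band_power(x, fs, lo, hi):
            """Integrate the Welch PSD estimate over [lo, hi] Hz."""
            f, pxx = welch(x, fs=fs, nperseg=512)
            m = (f >= lo) & (f <= hi)
            return np.trapz(pxx[m], f[m])

        print("sensorimotor 10-40 Hz power :", band_power(x, fs, 10, 40))
        print("high-gamma 65-115 Hz power  :", band_power(x, fs, 65, 115))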

  13. Neural overlap in processing music and speech

    Science.gov (United States)

    Peretz, Isabelle; Vuvan, Dominique; Lagrois, Marie-Élaine; Armony, Jorge L.

    2015-01-01

    Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or vice versa that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing. PMID:25646513

  14. Micro- and Nanotechnologies for Optical Neural Interfaces

    Science.gov (United States)

    Pisanello, Ferruccio; Sileo, Leonardo; De Vittorio, Massimo

    2016-01-01

    In the last decade, the possibility of optically interfacing with the mammalian brain in vivo has allowed unprecedented investigation of the functional connectivity of neural circuitry. Together with new genetic and molecular techniques to optically trigger and monitor neural activity, a new generation of optical neural interfaces is being developed, mainly thanks to the exploitation of both bottom-up and top-down nanofabrication approaches. This review highlights the role of nanotechnologies for optical neural interfaces, with particular emphasis on new devices and methodologies for optogenetic control of neural activity and unconventional methods for detection and triggering of action potentials using optically-active colloidal nanoparticles. PMID:27013939

  15. Tourism Methodologies

    DEFF Research Database (Denmark)

    This volume offers methodological discussions within the multidisciplinary field of tourism and shows how tourism researchers develop and apply new tourism methodologies. The book is presented as an anthology, giving voice to many diverse researchers who reflect on tourism methodology in different ways. Several contributions draw on a critical perspective that pushes the boundaries of traditional methods and techniques for studying tourists and their experiences. In particular, the traditional qualitative interview is challenged, not only regarding the typical questions asked, but also regarding ...

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  18. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  19. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period into the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  1. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  2. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  3. Neural computations in spatial orientation

    NARCIS (Netherlands)

    Vingerhoets, R.A.A.

    2008-01-01

    This thesis describes the results of a research project that focused on how visual and vestibular signals are used by the human brain to maintain spatial orientation and visual stability. Given the limitations of the vestibular sensors in terms of bandwidth and precision, outlined in chapter 1,

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS and its components are now also deployed at CERN, in addition to the GlideInWMS factory in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  5. Fuzzy neural networks: theory and applications

    Science.gov (United States)

    Gupta, Madan M.

    1994-10-01

    During recent years, significant advances have been made in two distinct technological areas: fuzzy logic and computational neural networks. The theory of fuzzy logic provides a mathematical framework to capture the uncertainties associated with human cognitive processes, such as thinking and reasoning. It also provides a mathematical morphology to emulate certain perceptual and linguistic attributes associated with human cognition. On the other hand, the computational neural network paradigms have evolved in the process of understanding the incredible learning and adaptive features of neuronal mechanisms inherent in certain biological species. Computational neural networks replicate, on a small scale, some of the computational operations observed in biological learning and adaptation. The integration of these two fields, fuzzy logic and neural networks, has given birth to an emerging technological field: fuzzy neural networks. Fuzzy neural networks have the potential to capture the benefits of these two fascinating fields, fuzzy logic and neural networks, into a single framework. The intent of this tutorial paper is to describe the basic notions of biological and computational neuronal morphologies, and to describe the principles and architectures of fuzzy neural networks. Towards this goal, we develop a fuzzy neural architecture based upon the notion of T-norm and T-conorm connectives. An error-based learning scheme is described for this neural structure.
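
    As a concrete instance of the T-norm/T-conorm connectives this tutorial builds on, the sketch below implements a fuzzy OR neuron with min as the T-norm and max as the T-conorm. The inputs and weights are arbitrary membership degrees chosen for illustration, not values from the paper.

        import numpy as np

        def t_norm(a, b):     # min T-norm (fuzzy AND)
            return np.minimum(a, b)

        def t_conorm(a, b):   # max T-conorm (fuzzy OR)
            return np.maximum(a, b)

        def fuzzy_or_neuron(x, w):
            """OR neuron: each input is ANDed with its weight, then OR-accumulated."""
            z = t_norm(x, w)
            out = z[0]
            for zi in z[1:]:
                out = t_conorm(out, zi)
            return out

        x = np.array([0.2, 0.9, 0.4])   # fuzzy membership degrees
        w = np.array([0.8, 0.6, 0.3])   # connection weights in [0, 1]
        print("OR-neuron output:", fuzzy_or_neuron(x, w))   # max over min(x_i, w_i) = 0.6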

  6. Courseware Engineering Methodology.

    Science.gov (United States)

    Uden, Lorna

    2002-01-01

    Describes development of the Courseware Engineering Methodology (CEM), created to guide novices in designing effective courseware. Discusses CEM's four models: pedagogical (concerned with the courseware's pedagogical aspects), conceptual (dealing with software engineering), interface (relating to human-computer interaction), and hypermedia…

  7. 11th International Conference on Computer and Information Science

    CERN Document Server

    Computer and Information 2012

    2012-01-01

    The series "Studies in Computational Intelligence" (SCI) publishes new developments and advances in the various areas of computational intelligence – quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life science, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Critical to both contributors and readers are the short publication time and world-wide distribution - this permits a rapid and broad dissemination of research results.   The purpose of the 11th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2012...

  8. Application of artificial neural network with extreme learning machine for economic growth estimation

    Science.gov (United States)

    Milačić, Ljubiša; Jović, Srđan; Vujović, Tanja; Miljković, Jovica

    2017-01-01

    The purpose of this research is to develop and apply an artificial neural network (ANN) with an extreme learning machine (ELM) to forecast the gross domestic product (GDP) growth rate. The economic growth forecasting was analyzed based on agriculture, manufacturing, industry and services value added in GDP. The results were compared with an ANN using a back propagation (BP) learning approach, since BP can be considered a conventional learning methodology. The reliability of the computational models was assessed based on simulation results and several statistical indicators. Based on the results, it was shown that an ANN with the ELM learning methodology can be applied effectively in GDP forecasting applications.
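
    The ELM recipe summarized in this record (random, fixed hidden-layer weights plus a single linear solve for the output weights) fits in a few lines. The data below are a synthetic stand-in for the sector-based GDP inputs; sizes and the target function are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # toy stand-in for the GDP-growth regression: four sector inputs -> growth rate
        X = rng.normal(size=(300, 4))
        y = (0.8 * X[:, 0] + 0.3 * np.tanh(X[:, 1]) - 0.2 * X[:, 2]
             + rng.normal(scale=0.05, size=300))

        n_hidden = 50
        W = rng.normal(size=(4, n_hidden))             # random input weights, never trained
        b = rng.normal(size=n_hidden)                  # random biases, never trained

        H = np.tanh(X @ W + b)                         # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights from one linear solve

        print("training RMSE: %.3f" % np.sqrt(np.mean((y - H @ beta) ** 2)))

    Unlike backpropagation, no iterative gradient descent is involved, which is the source of the speed advantage the record alludes to.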

  9. Tourism Methodologies

    DEFF Research Database (Denmark)

    This volume offers methodological discussions within the multidisciplinary field of tourism and shows how tourism researchers develop and apply new tourism methodologies. The book is presented as an anthology, giving voice to many diverse researchers who reflect on tourism methodology in different ways ... in interview and field work situations, and how do we engage with the performative aspects of tourism as a field of study? The book acknowledges that research is also performance and that it constitutes an aspect of intervention in the situations and contexts it is trying to explore. This is an issue dealt ...

  10. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase the availability of more sites, such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  11. A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets.

    Science.gov (United States)

    Antropova, Natalia; Huynh, Benjamin Q; Giger, Maryellen L

    2017-10-01

    Deep learning methods for radiomics/computer-aided diagnosis (CADx) are often prohibited by small datasets, long computation time, and the need for extensive image preprocessing. We aim to develop a breast CADx methodology that addresses the aforementioned issues by exploiting the efficiency of pre-trained convolutional neural networks (CNNs) and using pre-existing handcrafted CADx features. We present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. Our methodology is tested on three different clinical imaging modalities (dynamic contrast-enhanced MRI [690 cases], full-field digital mammography [245 cases], and ultrasound [1125 cases]). From ROC analysis, our fusion-based method demonstrates, on all three imaging modalities, statistically significant improvements in AUC compared to previous breast cancer CADx methods in the task of distinguishing between malignant and benign lesions (DCE-MRI [AUC = 0.89 (se = 0.01)], FFDM [AUC = 0.86 (se = 0.01)], and ultrasound [AUC = 0.90 (se = 0.01)]). We proposed a novel breast CADx methodology that can be used to more effectively characterize breast lesions in comparison to existing methods. Furthermore, our proposed methodology is computationally efficient and circumvents the need for image preprocessing. © 2017 American Association of Physicists in Medicine.

  12. Metamodels for Computer-Based Engineering Design: Survey and Recommendations

    Science.gov (United States)

    Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.

    1997-01-01

    The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques, including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
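
    Of the metamodel families surveyed in this record, kriging is perhaps the least familiar; a minimal sketch follows, using a squared-exponential correlation and a noise-free interpolating predictor. The one-dimensional test function and length scale are illustrative assumptions, not the paper's examples.

        import numpy as np

        def rbf(a, b, length=0.15):
            """Squared-exponential correlation between two 1-D point sets."""
            return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

        f = lambda x: np.sin(8 * x) + x        # stand-in for an expensive analysis code
        X = np.linspace(0.0, 1.0, 8)           # design points (a tiny "experiment")
        y = f(X)

        K = rbf(X, X) + 1e-8 * np.eye(len(X))  # nugget term for numerical stability
        alpha = np.linalg.solve(K, y)

        X_new = np.linspace(0.0, 1.0, 5)
        y_hat = rbf(X_new, X) @ alpha          # kriging (GP) mean prediction
        print(np.column_stack([X_new, y_hat, f(X_new)]))

    Once fitted, evaluating the metamodel costs one small matrix-vector product per query, which is what makes it attractive as a substitute for the expensive code during optimization.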

  13. Comparing between predicted output temperature of flat-plate solar collector and experimental results: computational fluid dynamics and artificial neural network

    Directory of Open Access Journals (Sweden)

    F Nadi

    2017-05-01

    Full Text Available Introduction: Solar energy, as a clean renewable source that does no damage to the environment, is of great importance for the production of electricity and heat. Furthermore, owing to the oil crisis, as well as its potential to reduce the cost of home heating by 70%, solar energy has been a favourite of many researchers over the past two decades. Solar collectors are devices for collecting solar radiant energy, through which this energy is converted into heat and then transferred to a fluid (usually air or water). Therefore, a key component in the performance improvement of a solar heating system is optimization of the solar collector under different testing conditions. However, estimation of output parameters under different testing conditions is costly, time consuming and often impossible. As a result, the smart use of neural networks, as well as CFD (computational fluid dynamics), to predict the properties with which the desired output would have been acquired is valuable. To the best of our knowledge, there are no studies that compare experimental results with both CFD and ANN. Materials and Methods: A corrugated galvanized iron sheet 2 m long, 1 m wide and 0.5 mm thick was used as an absorber plate for absorbing the incident solar radiation (Fig. 1 and 2). Corrugations in the absorber caused turbulent air flow and improved the heat transfer coefficient. Computational fluid dynamics: The K-ε turbulence model was used for simulation. The following assumptions are made in the analysis: (1) air is a continuous, incompressible medium; (2) the flow is steady and possesses turbulent flow characteristics, due to the high velocity of flow; (3) the thermal-physical properties of the absorber sheet and the absorber tube are constant with respect to the operating temperature; (4) the bottom side of the absorber tube and the absorber plate are assumed to be adiabatic. Artificial neural network: In this research a one-hidden-layer feed-forward network based on the

  14. Empirical Aesthetics, Computational Cognitive Modeling, and Experimental Phenomenology: Methodological remarks on “Shaping and Co-Shaping Forms of Vitality in Music: Beyond Cognitivist and Emotivist Approaches to Musical Expressiveness” by Jin Hyun Kim

    Directory of Open Access Journals (Sweden)

    Uwe Seifert

    2013-12-01

    Full Text Available The core ideas of the proposed framework for empirical aesthetics are interpreted as focusing on processes, interaction, and phenomenological experience. This commentary first touches on some methodological impediments to developing theories of processing and interaction, and emphasizes the necessity of computational cognitive modeling using robots to test the empirical adequacy of such theories. Further, the importance of developing and integrating phenomenological methods into current experimental research is stressed, using experimental phenomenology as reference. Situated cognition, affective computing, human-robot interaction research, computational cognitive modeling and social and cultural neuroscience are noted as providing relevant insight into the empirical adequacy of current theories of cognitive and emotional processing. In the near future these fields will have a stimulating impact on empirical aesthetics and research on music and the mind.

  15. Artificial Neural Networks in Evaluation and Optimization of Modified Release Solid Dosage Forms

    Directory of Open Access Journals (Sweden)

    Zorica Djurić

    2012-10-01

    Full Text Available Implementation of the Quality by Design (QbD) approach in pharmaceutical development has compelled researchers in the pharmaceutical industry to employ Design of Experiments (DoE) as a statistical tool in product development. Among all DoE techniques, response surface methodology (RSM) is the one most frequently used. Progress in computer science has had an impact on pharmaceutical development as well. Simultaneously with the implementation of statistical methods, machine learning tools took an important place in drug formulation. Twenty years ago, the first papers describing the application of artificial neural networks in the optimization of modified release products appeared. Since then, a lot of work has been done towards the implementation of new techniques, especially Artificial Neural Networks (ANNs), in the modeling of production, drug release and drug stability of modified release solid dosage forms. The aim of this paper is to review artificial neural networks in the evaluation and optimization of modified release solid dosage forms.

  16. Unlocking the Barriers to Women and Minorities in Computer Science and Information Systems Studies: Results from a Multi-Methodological Study Conducted at Two Minority Serving Institutions

    Science.gov (United States)

    Buzzetto-More, Nicole; Ukoha, Ojiabo; Rustagi, Narendra

    2010-01-01

    The underrepresentation of women and minorities in undergraduate computer science and information systems programs is a pervasive and persistent problem in the United States. Needed is a better understanding of the background and psychosocial factors that attract, or repel, minority students from computing disciplines. An examination of these…

  17. Examining the Effects of Learning Styles, Epistemic Beliefs and the Computational Experiment Methodology on Learners' Performance Using the Easy Java Simulator Tool in STEM Disciplines

    Science.gov (United States)

    Psycharis, Sarantos; Botsari, Evanthia; Chatzarakis, George

    2014-01-01

    Learning styles are increasingly being integrated into computationally enhanced learning environments and a great deal of recent research work is taking place in this area. The purpose of this study was to examine the impact of the computational experiment approach, learning styles, epistemic beliefs, and engagement with the inquiry process on the…

  18. Tourism Methodologies

    DEFF Research Database (Denmark)

    This volume offers methodological discussions within the multidisciplinary field of tourism and shows how tourism researchers develop and apply new tourism methodologies. The book is presented as an anthology, giving voice to many diverse researchers who reflect on tourism methodology in different ways ... in interview and field work situations, and how do we engage with the performative aspects of tourism as a field of study? The book acknowledges that research is also performance and that it constitutes an aspect of intervention in the situations and contexts it is trying to explore. This is an issue dealt with in different ways, depending on the ontological and epistemological stands of the researcher. The book suggests new methods and approaches, with innovative ways of collecting and creating empirical materials, by expanding the approaches to tried and tested methods, including digital innovations, digital ...

  19. On methodology

    DEFF Research Database (Denmark)

    Cheesman, Robin; Faraone, Roque

    2002-01-01

    This is an English version of the methodology chapter in the authors' book "El caso Berríos: Estudio sobre información errónea, desinformación y manipulación de la opinión pública".

  20. Artificial neural network modelling

    CERN Document Server

    Samarasinghe, Sandhya

    2016-01-01

    This book covers theoretical aspects as well as recent innovative applications of Artificial Neural Networks (ANNs) in natural, environmental, biological, social, industrial and automated systems. It presents recent results of ANNs in modelling small, large and complex systems under three categories, namely: 1) Networks, Structure Optimisation, Robustness and Stochasticity; 2) Advances in Modelling Biological and Environmental Systems; and 3) Advances in Modelling Social and Economic Systems. The book aims at serving undergraduates, postgraduates and researchers in ANN computational modelling.