WorldWideScience

Sample records for neural net approach

  1. Building Neural Net Software

    OpenAIRE

    Neto, João Pedro; Costa, José Félix

    1999-01-01

In a recent paper [Neto et al. 97] we showed that programming languages can be translated into recurrent (analog, rational weighted) neural nets. The goal was not efficiency but simplicity. Indeed we used a number-theoretic approach to machine programming, where (integer) numbers were coded in a unary fashion, introducing an exponential slowdown in the computations with respect to a two-symbol tape Turing machine. Implementation of programming languages in neural nets turns out to be not only theo...

  2. Geometrical approach to neural net control of movements and posture

    Science.gov (United States)

    Pellionisz, A. J.; Ramos, C. F.

    1993-01-01

In one approach to modeling brain function, sensorimotor integration is described as geometrical mapping among coordinates of non-orthogonal frames that are intrinsic to the system; in such a case sensors represent (covariant) afferents and motor effectors represent (contravariant) motor efferents. The neuronal networks that perform such a function are viewed as general tensor transformations among different expressions and metric tensors determining the geometry of neural functional spaces. Although the non-orthogonality of a coordinate system does not impose a specific geometry on the space, this "Tensor Network Theory of brain function" allows for the possibility that the geometry is non-Euclidean. It is suggested that investigation of the non-Euclidean nature of the geometry is the key to understanding brain function and to interpreting neuronal network function. This paper outlines three contemporary applications of such a theoretical modeling approach. The first is the analysis and interpretation of multi-electrode recordings. The internal geometries of neural networks controlling the external behavior of the skeletomuscular system are experimentally determinable using such multi-unit recordings. The second application of this geometrical approach to brain theory is modeling the control of posture and movement. A preliminary simulation study has been conducted with the aim of understanding the control of balance in a standing human. The model appears to unify postural control strategies that have previously been considered to be independent of each other. Third, this paper emphasizes the importance of the geometrical approach for the design and fabrication of neurocomputers that could be used in functional neuromuscular stimulation (FNS) for replacing lost motor control.
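To make the covariant-to-contravariant language above concrete, the standard tensor relation it alludes to (a textbook identity, not an equation quoted from the paper) expresses the contravariant motor components e^i in terms of the covariant sensory components s_j through the inverse metric g^{ij} of the neural functional space:

```latex
% contravariant motor intention from covariant sensory input via the metric tensor
e^{i} = g^{ij} s_{j}, \qquad g^{ij} g_{jk} = \delta^{i}_{k}
```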

  3. Kunstige neurale net

    DEFF Research Database (Denmark)

    Hørning, Annette

    1994-01-01

The article discusses the possibility of using artificial neural nets for computational processing of natural language, in particular automatic speech recognition.

  4. Texture Based Image Analysis With Neural Nets

    Science.gov (United States)

    Ilovici, Irina S.; Ong, Hoo-Tee; Ostrander, Kim E.

    1990-03-01

    In this paper, we combine direct image statistics and spatial frequency domain techniques with a neural net model to analyze texture based images. The resultant optimal texture features obtained from the direct and transformed image form the exemplar pattern of the neural net. The proposed approach introduces an automated texture analysis applied to metallography for determining the cooling rate and mechanical working of the materials. The results suggest that the proposed method enhances the practical applications of neural nets and texture extraction features.

  5. Tracking by Neural Nets

    CERN Document Server

    Jofrehei, Arash

    2015-01-01

Current track reconstruction methods start from two seed points and then, for each layer, loop over all possible hits to find those that should be added to the track. An alternative idea is to use the large number of already reconstructed events and/or simulated data to train a machine to find tracks directly from hit pixels. Training time could be long, but real-time tracking would be very fast. Simulation may not be as realistic as real data, but tracking efficiency on simulated data is 100 percent, whereas training on real data would probably be limited to the current efficiency. Because this approach can be much faster, and with simulated data potentially more efficient, than current methods, it is an attractive alternative for the track reconstruction methods used in both triggering and tracking.

  6. Classification using Bayesian neural nets

    NARCIS (Netherlands)

    J.C. Bioch (Cor); O. van der Meer; R. Potharst (Rob)

    1995-01-01

Recently, Bayesian methods have been proposed for neural networks to solve regression and classification problems. These methods claim to overcome some difficulties encountered in the standard approach such as overfitting. However, an implementation of the full Bayesian approach to

  7. Document analysis with neural net circuits

    Science.gov (United States)

    Graf, Hans Peter

    1994-01-01

    Document analysis is one of the main applications of machine vision today and offers great opportunities for neural net circuits. Despite more and more data processing with computers, the number of paper documents is still increasing rapidly. A fast translation of data from paper into electronic format is needed almost everywhere, and when done manually, this is a time consuming process. Markets range from small scanners for personal use to high-volume document analysis systems, such as address readers for the postal service or check processing systems for banks. A major concern with present systems is the accuracy of the automatic interpretation. Today's algorithms fail miserably when noise is present, when print quality is poor, or when the layout is complex. A common approach to circumvent these problems is to restrict the variations of the documents handled by a system. In our laboratory, we had the best luck with circuits implementing basic functions, such as convolutions, that can be used in many different algorithms. To illustrate the flexibility of this approach, three applications of the NET32K circuit are described in this short viewgraph presentation: locating address blocks, cleaning document images by removing noise, and locating areas of interest in personal checks to improve image compression. Several of the ideas realized in this circuit that were inspired by neural nets, such as analog computation with a low resolution, resulted in a chip that is well suited for real-world document analysis applications and that compares favorably with alternative, 'conventional' circuits.

  8. CDMA and TDMA based neural nets.

    Science.gov (United States)

    Herrero, J C

    2001-06-01

CDMA and TDMA telecommunication techniques were established a long time ago, but they have acquired a renewed presence due to the rapidly increasing demand for mobile phones. In this paper, we show that they are also suitable for neural nets if we abandon the concept of a "connection" between processing units and adopt instead the concept of "messages" exchanged between them. This may open the door to neural nets with a higher number of processing units and flexible configuration.

  9. Neural Nets for Scene Analysis

    Science.gov (United States)

    1992-09-01


  10. Accelerator diagnosis and control by Neural Nets

    Energy Technology Data Exchange (ETDEWEB)

    Spencer, J.E.

    1989-01-01

    Neural Nets (NN) have been described as a solution looking for a problem. In the last conference, Artificial Intelligence (AI) was considered in the accelerator context. While good for local surveillance and control, its use for large complex systems (LCS) was much more restricted. By contrast, NN provide a good metaphor for LCS. It can be argued that they are logically equivalent to multi-loop feedback/forward control of faulty systems, and therefore provide an ideal adaptive control system. Thus, where AI may be good for maintaining a 'golden orbit,' NN should be good for obtaining it via a quantitative approach to 'look and adjust' methods like operator tweaking which use pattern recognition to deal with hardware and software limitations, inaccuracies or errors as well as imprecise knowledge or understanding of effects like annealing and hysteresis. Further, insights from NN allow one to define feasibility conditions for LCS in terms of design constraints and tolerances. Hardware and software implications are discussed and several LCS of current interest are compared and contrasted. 15 refs., 5 figs.

  11. Beyond Pattern Recognition With Neural Nets

    Science.gov (United States)

    Arsenault, Henri H.; Macukow, Bohdan

    1989-02-01

Neural networks are finding many areas of application. Although they are particularly well-suited for applications related to associative recall such as content-addressable memories, neural nets can perform many other applications ranging from logic operations to the solution of optimization problems. The training of a recently introduced model to perform Boolean logical operations such as XOR is described. Such simple systems can be combined to perform any complex Boolean operation. Any complex task consisting of parallel and serial operations, including fuzzy logic, that can be described in terms of input-output relations can be accomplished by combining modules such as the ones described here. The fact that some modules can carry out their functions even when their inputs contain erroneous data, and the fact that each module can carry out its functions in parallel with itself and other modules, promise some interesting applications.
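The abstract does not reproduce the model's details; purely as a generic illustration of how simple neuron-like modules can be composed into Boolean operations such as XOR, here is a minimal sketch with hand-chosen (hypothetical) weights and thresholds:

```python
import numpy as np

def neuron(inputs, weights, bias):
    """Single threshold unit: fires (1) when the weighted sum plus bias is positive."""
    return int(np.dot(inputs, weights) + bias > 0)

def AND(a, b): return neuron([a, b], [1, 1], -1.5)
def OR(a, b):  return neuron([a, b], [1, 1], -0.5)
def NOT(a):    return neuron([a], [-1], 0.5)

def XOR(a, b):
    """XOR built by combining simpler modules: (a OR b) AND NOT(a AND b)."""
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))   # truth table: 0, 1, 1, 0
```

Each module here computes its output independently of the others, which is the kind of modular parallelism the abstract points to.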

  12. Unfolding code for neutron spectrometry based on neural nets technology

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Apdo. Postal 336, 98000 Zacatecas (Mexico)

    2012-10-15

The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed in a graphical interface under the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the "Robust Design of Artificial Neural Networks" methodology. The code is easy to use, friendly and intuitive to the user, and was designed for a Bonner Sphere System based on a ⁶LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. Its main feature is that, as input data, only seven count rates measured with a Bonner sphere spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. The code generates a full report in html format with all relevant information. (Author)
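As an illustrative sketch only (the hidden-layer size and all weights below are hypothetical, and the real code is a LabVIEW application rather than Python), the input/output shape of such an unfolding net, seven count rates in and sixty spectrum bins out, can be pictured as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical architecture: 7 count rates -> 24 hidden units -> 60 spectrum bins.
# In the real code the weights come from training, not random initialization.
W1, b1 = rng.normal(scale=0.1, size=(7, 24)), np.zeros(24)
W2, b2 = rng.normal(scale=0.1, size=(24, 60)), np.zeros(60)

def unfold(count_rates):
    """Forward pass: 7 measured Bonner-sphere count rates -> 60-bin spectrum estimate."""
    h = np.tanh(count_rates @ W1 + b1)       # hidden layer
    return np.maximum(h @ W2 + b2, 0.0)      # spectra are non-negative

spectrum = unfold(np.ones(7))                # dummy measurement vector
print(spectrum.shape)                        # (60,)
```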

  13. Real-time applications of neural nets

    Energy Technology Data Exchange (ETDEWEB)

    Spencer, J.E.

    1989-05-01

Producing, accelerating and colliding very high power, low emittance beams for long periods is a formidable problem in real-time control. As energy has grown exponentially in time so has the complexity of the machines and their control systems. Similar growth rates have occurred in many areas, e.g., improved integrated circuits have been paid for with comparable increases in complexity. However, in this case, reliability, capability and cost have improved due to reduced size, high production and increased integration which allow various kinds of feedback. In contrast, most large complex systems (LCS) are perceived to lack such possibilities because only one copy is made. Neural nets, as a metaphor for LCS, suggest ways to circumvent such limitations. It is argued that they are logically equivalent to multi-loop feedback/forward control of faulty systems. While complementary to AI, they mesh nicely with characteristics desired for real-time systems. Such issues are considered, examples given and possibilities discussed. 21 refs., 6 figs.

  14. Temporal Modeling of Neural Net Input/Output Behaviors: The Case of XOR

    Directory of Open Access Journals (Sweden)

    Bernard P. Zeigler

    2017-01-01

In the context of the modeling and simulation of neural nets, we formulate definitions for the behavioral realization of memoryless functions. The definitions of realization are substantively different for deterministic and stochastic systems constructed of neuron-inspired components. In contrast to earlier generations of neural net models, third-generation spiking neural nets exhibit important temporal and dynamic properties, and random neural nets provide alternative probabilistic approaches. Our definitions of realization are based on the Discrete Event System Specification (DEVS) formalism, which fundamentally includes temporal and probabilistic characteristics of neuron system inputs, state, and outputs. The realizations that we construct, in particular for the Exclusive Or (XOR) logic gate, provide insight into the temporal and probabilistic characteristics that real neural systems might display. Our results provide a solid system-theoretical foundation and simulation modeling framework for the high-performance computational support of such applications.

  15. Computation and control with neural nets

    Energy Technology Data Exchange (ETDEWEB)

    Corneliusen, A.; Terdal, P.; Knight, T.; Spencer, J.

    1989-10-04

As energies have increased exponentially with time so have the size and complexity of accelerators and control systems. NN may offer the kinds of improvements in computation and control that are needed to maintain acceptable functionality. For control, their associative characteristics could provide signal conversion or data translation. Because they can do any computation such as least squares, they can close feedback loops autonomously to provide intelligent control at the point of action rather than at a central location that requires transfers, conversions, hand-shaking and other costly repetitions like input protection. Both computation and control can be integrated on a single chip, printed circuit or an optical equivalent that is also inherently faster through full parallel operation. For such reasons one expects lower costs and better results. Such systems could be optimized by integrating sensor and signal processing functions. Distributed nets of such hardware could communicate and provide global monitoring and multiprocessing in various ways, e.g. via token, slotted or parallel rings (or Steiner trees) for compatibility with existing systems. Problems and advantages of this approach, such as an optimal, real-time Turing machine, are discussed. Simple examples are simulated and hardware implemented using discrete elements that demonstrate some basic characteristics of learning and parallelism. Future microprocessors are predicted and requested on this basis. 19 refs., 18 figs.

  16. 22nd Italian Workshop on Neural Nets

    CERN Document Server

    Bassis, Simone; Esposito, Anna; Morabito, Francesco

    2013-01-01

This volume collects a selection of contributions which were presented at the 22nd Italian Workshop on Neural Networks, the yearly meeting of the Italian Society for Neural Networks (SIREN). The conference was held in Vietri sul Mare (Salerno), Italy, during May 17-19, 2012. The annual meeting of SIREN is sponsored by the International Neural Network Society (INNS), the European Neural Network Society (ENNS) and the IEEE Computational Intelligence Society (CIS). The book, as well as the workshop, is organized in three main components: two special sessions and a group of regular sessions featuring different aspects and points of view of artificial neural networks and natural intelligence, also including applications of present compelling interest.

  17. Optical neural net for classifying imaging spectrometer data

    Science.gov (United States)

    Barnard, Etienne; Casasent, David P.

    1989-01-01

    The problem of determining the composition of an unknown input mixture from its measured spectrum, given the spectra of a number of elements, is studied. The Hopfield minimization procedure was used to express the determination of the compositions as a problem suitable for solution by neural nets. A mathematical description of the problem was developed and used as a basis for a neural network solution and an optical implementation.

  18. Classification of handwritten digits using a RAM neural net architecture

    DEFF Research Database (Denmark)

    Jørgensen, T.M.

    1997-01-01

Results are reported on the task of recognizing handwritten digits without any advanced pre-processing. The results are obtained using a RAM-based neural network, making use of small receptive fields. Furthermore, a technique that introduces negative weights into the RAM net is reported. The results...

  19. Translating feedforward neural nets to SOM-like maps

    NARCIS (Netherlands)

    van der Zwaag, B.J.; Spaanenburg, Lambert; Slump, Cornelis H.

    A major disadvantage of feedforward neural networks is still the difficulty to gain insight into their internal functionality. This is much less the case for, e.g., nets that are trained unsupervised, such as Kohonen’s self-organizing feature maps (SOMs). These offer a direct view into the stored

  20. The Development of Animal Behavior: From Lorenz to Neural Nets

    Science.gov (United States)

    Bolhuis, Johan J.

    In the study of behavioral development both causal and functional approaches have been used, and they often overlap. The concept of ontogenetic adaptations suggests that each developmental phase involves unique adaptations to the environment of the developing animal. The functional concept of optimal outbreeding has led to further experimental evidence and theoretical models concerning the role of sexual imprinting in the evolutionary process of sexual selection. From a causal perspective it has been proposed that behavioral ontogeny involves the development of various kinds of perceptual, motor, and central mechanisms and the formation of connections among them. This framework has been tested for a number of complex behavior systems such as hunger and dustbathing. Imprinting is often seen as a model system for behavioral development in general. Recent advances in imprinting research have been the result of an interdisciplinary effort involving ethology, neuroscience, and experimental psychology, with a continual interplay between these approaches. The imprinting results are consistent with Lorenz' early intuitive suggestions and are also reflected in the architecture of recent neural net models.

  1. Neural-net based real-time economic dispatch for thermal power plants

    Energy Technology Data Exchange (ETDEWEB)

    Djukanovic, M.; Milosevic, B. [Inst. Nikola Tesla, Belgrade (Yugoslavia). Dept. of Power Systems; Calovic, M. [Univ. of Belgrade (Yugoslavia). Dept. of Electrical Engineering; Sobajic, D.J. [Electric Power Research Inst., Palo Alto, CA (United States)

    1996-12-01

    This paper proposes the application of artificial neural networks to real-time optimal generation dispatch of thermal units. The approach can take into account the operational requirements and network losses. The proposed economic dispatch uses an artificial neural network (ANN) for generation of penalty factors, depending on the input generator powers and identified system load change. Then, a few additional iterations are performed within an iterative computation procedure for the solution of coordination equations, by using reference-bus penalty-factors derived from the Newton-Raphson load flow. A coordination technique for environmental and economic dispatch of pure thermal systems, based on the neural-net theory for simplified solution algorithms and improved man-machine interface is introduced. Numerical results on two test examples show that the proposed algorithm can efficiently and accurately develop optimal and feasible generator output trajectories, by applying neural-net forecasts of system load patterns.
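For context, the coordination equations referred to above are, in their textbook loss-aware form (a standard relation, not a formula quoted from this paper): each unit is dispatched so that its penalized incremental cost equals the system incremental cost lambda,

```latex
% classical coordination equations for economic dispatch with network losses
\frac{dC_i(P_i)}{dP_i}\,\mathrm{pf}_i = \lambda, \qquad
\mathrm{pf}_i = \frac{1}{1 - \partial P_{\mathrm{loss}}/\partial P_i}, \qquad
\sum_i P_i = P_D + P_{\mathrm{loss}}
```

In the proposed scheme, the penalty factors pf_i are supplied by the ANN from the input generator powers and the identified system load change.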

  2. Multilayer neural-net robot controller with guaranteed tracking performance.

    Science.gov (United States)

    Lewis, F L; Yegildirek, A; Liu, K

    1996-01-01

    A multilayer neural-net (NN) controller for a general serial-link rigid robot arm is developed. The structure of the NN controller is derived using a filtered error/passivity approach. No off-line learning phase is needed for the proposed NN controller and the weights are easily initialized. The nonlinear nature of the NN, plus NN functional reconstruction inaccuracies and robot disturbances, mean that the standard delta rule using backpropagation tuning does not suffice for closed-loop dynamic control. Novel online weight tuning algorithms, including correction terms to the delta rule plus an added robust signal, guarantee bounded tracking errors as well as bounded NN weights. Specific bounds are determined, and the tracking error bound can be made arbitrarily small by increasing a certain feedback gain. The correction terms involve a second-order forward-propagated wave in the backpropagation network. New NN properties including the notions of a passive NN, a dissipative NN, and a robust NN are introduced.

  3. Fast neural net simulation with a DSP processor array.

    Science.gov (United States)

    Muller, U A; Gunzinger, A; Guggenbuhl, W

    1995-01-01

    This paper describes the implementation of a fast neural net simulator on a novel parallel distributed-memory computer. A 60-processor system, named MUSIC (multiprocessor system with intelligent communication), is operational and runs the backpropagation algorithm at a speed of 330 million connection updates per second (continuous weight update) using 32-b floating-point precision. This is equal to 1.4 Gflops sustained performance. The complete system with 3.8 Gflops peak performance consumes less than 800 W of electrical power and fits into a 19-in rack. While reaching the speed of modern supercomputers, MUSIC still can be used as a personal desktop computer at a researcher's own disposal. In neural net simulation, this gives a computing performance to a single user which was unthinkable before. The system's real-time interfaces make it especially useful for embedded applications.

  4. ε-Net Approach to Sensor k-Coverage

    Directory of Open Access Journals (Sweden)

    Fusco Giordano

    2010-01-01

Wireless sensors rely on battery power, and in many applications it is difficult or prohibitive to replace them. Hence, in order to prolong the system's lifetime, some sensors can be kept inactive while others perform all the tasks. In this paper, we study the k-coverage problem of activating the minimum number of sensors to ensure that every point in the area is covered by at least k sensors. This ensures higher fault tolerance and robustness, and improves many operations, among them position detection and intrusion detection. The k-coverage problem is trivially NP-complete, and hence we can only provide approximation algorithms. In this paper, we present an algorithm based on an extension of the classical ε-net technique. This method gives an approximation guarantee expressed in terms of the number of sensors in an optimal solution. We do not make any particular assumption on the shape of the areas covered by each sensor, besides that they must be closed, connected, and without holes.
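The abstract does not spell out the ε-net construction; as a much simpler point of comparison, a greedy baseline for the same k-coverage problem (the point set, coverage relation and k below are illustrative assumptions, and this is not the paper's algorithm) repeatedly activates the sensor that satisfies the most remaining coverage demand:

```python
def greedy_k_coverage(points, covers, k):
    """Greedy baseline for k-coverage (not the epsilon-net algorithm of the paper).

    points: identifiers of points that must each be covered at least k times
    covers: dict mapping sensor id -> set of points that sensor covers
    """
    deficit = {p: k for p in points}              # coverage still needed per point
    active, remaining = [], dict(covers)
    while any(d > 0 for d in deficit.values()):
        # pick the sensor that satisfies the most remaining demand
        best = max(remaining,
                   key=lambda s: sum(deficit[p] > 0 for p in remaining[s]),
                   default=None)
        if best is None or sum(deficit[p] > 0 for p in remaining[best]) == 0:
            raise ValueError("k-coverage not achievable with the given sensors")
        for p in remaining[best]:
            if deficit[p] > 0:
                deficit[p] -= 1
        active.append(best)
        del remaining[best]                       # each sensor is activated at most once
    return active

# toy usage with hypothetical sensors
sensors = {"s1": {"a", "b"}, "s2": {"b", "c"}, "s3": {"a", "c"}}
print(greedy_k_coverage({"a", "b", "c"}, sensors, k=2))   # e.g. ['s1', 's2', 's3']
```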

  5. Artificial neural nets application in the cotton yarn industry

    Directory of Open Access Journals (Sweden)

    Gilberto Clóvis Antoneli

    2016-06-01

The competitiveness in the yarn production sector has led companies to search for solutions to attain quality yarn at a low cost. Today, the difference between them, and thus in the sector, lies in the raw material, meaning processed cotton and its characteristics. There are many types of cotton with different characteristics due to production region, harvest, storage and transportation. Yarn industries work with cotton mixtures, which makes it difficult to determine the quality of the yarn produced from the characteristics of the processed fibers. This study uses data from a conventional spinning process, with a raw material made of 100% cotton, and presents a solution with artificial neural nets that determines the yarn quality information using the fibers' characteristic values and the settings of some process adjustments. The solution uses a MultiLayer Perceptron neural net with 11 input neurons (8 fiber characteristics and 3 process adjustments), 7 output neurons (yarn quality), and two types of training, backpropagation and conjugate gradient descent. The selection and organization of the production data of the yarn plant of the cocamar® indústria de fios company are described, in order to apply the artificial neural nets developed. In the application of neural nets to determine yarn quality, one concludes that, although the ideal precision of absolute values is lacking, the presented solution represents an excellent tool to define yarn quality variations when modifying the raw material composition. The developed system enables a simulation to define the raw material percentage mixture to be processed in the plant using information from the stocked cotton packs, thus obtaining a mixture that maintains the stability of the entire productive process.

  6. Neural Net Gains Estimation Based on an Equivalent Model

    Directory of Open Access Journals (Sweden)

    Karen Alicia Aguilar Cruz

    2016-01-01

A model of an Equivalent Artificial Neural Net (EANN) describes the gains set, viewed as parameters in a layer, and this consideration is a reproducible process applicable to a neuron in a neural net (NN). The EANN helps to estimate the NN gains or parameters, so we propose two methods to determine them. The first combines fuzzy inference with the traditional Kalman filter, obtaining the equivalent model and estimating in a fuzzy sense the gains matrix A and the proper gain K of the traditional filter identification. The second develops a direct estimation in state space, describing an EANN using the expected value and a recursive description of the gains estimation. Finally, a comparison of both descriptions is performed, highlighting that the analytical method describes the neural net coefficients in a direct form, whereas the other technique requires selecting from the Knowledge Base (KB) the factors based on the functional error and the reference signal built with the past information of the system.
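As background on the "traditional Kalman filter" ingredient mentioned above (a generic scalar sketch, not the paper's fuzzy-combined estimator; the coefficients and noise variances are illustrative), a single predict/update cycle looks like:

```python
def kalman_step(x_est, p_est, z, a=1.0, h=1.0, q=1e-3, r=1e-1):
    """One predict/update cycle of a scalar Kalman filter.

    x_est, p_est: previous state estimate and its variance
    z:            new measurement
    a, h:         state-transition and observation coefficients
    q, r:         process and measurement noise variances (illustrative values)
    """
    x_pred = a * x_est                             # predict
    p_pred = a * p_est * a + q
    k_gain = p_pred * h / (h * p_pred * h + r)     # Kalman gain
    x_new = x_pred + k_gain * (z - h * x_pred)     # update with the innovation
    p_new = (1.0 - k_gain * h) * p_pred
    return x_new, p_new, k_gain

# toy usage: filter a roughly constant signal observed with noise
x, p = 0.0, 1.0
for z in (0.9, 1.1, 1.05, 0.95):
    x, p, k = kalman_step(x, p, z)
print(round(x, 3))
```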

  7. Neural system modeling and simulation using Hybrid Functional Petri Net.

    Science.gov (United States)

    Tang, Yin; Wang, Fei

    2012-02-01

    The Petri net formalism has been proved to be powerful in biological modeling. It not only boasts of a most intuitive graphical presentation but also combines the methods of classical systems biology with the discrete modeling technique. Hybrid Functional Petri Net (HFPN) was proposed specially for biological system modeling. An array of well-constructed biological models using HFPN yielded very interesting results. In this paper, we propose a method to represent neural system behavior, where biochemistry and electrical chemistry are both included using the Petri net formalism. We built a model for the adrenergic system using HFPN and employed quantitative analysis. Our simulation results match the biological data well, showing that the model is very effective. Predictions made on our model further manifest the modeling power of HFPN and improve the understanding of the adrenergic system. The file of our model and more results with their analysis are available in our supplementary material.

  8. Perineuronal net, CSPG receptor and their regulation of neural plasticity.

    Science.gov (United States)

    Miao, Qing-Long; Ye, Qian; Zhang, Xiao-Hui

    2014-08-25

    Perineuronal nets (PNNs) are reticular structures resulting from the aggregation of extracellular matrix (ECM) molecules around the cell body and proximal neurite of specific population of neurons in the central nervous system (CNS). Since the first description of PNNs by Camillo Golgi in 1883, the molecular composition, developmental formation and potential functions of these specialized extracellular matrix structures have only been intensively studied over the last few decades. The main components of PNNs are hyaluronan (HA), chondroitin sulfate proteoglycans (CSPGs) of the lectican family, link proteins and tenascin-R. PNNs appear late in neural development, inversely correlating with the level of neural plasticity. PNNs have long been hypothesized to play a role in stabilizing the extracellular milieu, which secures the characteristic features of enveloped neurons and protects them from the influence of malicious agents. Aberrant PNN signaling can lead to CNS dysfunctions like epilepsy, stroke and Alzheimer's disease. On the other hand, PNNs create a barrier which constrains the neural plasticity and counteracts the regeneration after nerve injury. Digestion of PNNs with chondroitinase ABC accelerates functional recovery from the spinal cord injury and restores activity-dependent mechanisms for modifying neuronal connections in the adult animals, indicating that PNN is an important regulator of neural plasticity. Here, we review recent progress in the studies on the formation of PNNs during early development and the identification of CSPG receptor - an essential molecular component of PNN signaling, along with a discussion on their unique regulatory roles in neural plasticity.

  9. Neuron-Glia Interactions in Neural Plasticity: Contributions of Neural Extracellular Matrix and Perineuronal Nets

    Directory of Open Access Journals (Sweden)

    Egor Dzyubenko

    2016-01-01

Synapses are specialized structures that mediate rapid and efficient signal transmission between neurons and are surrounded by glial cells. Astrocytes develop an intimate association with synapses in the central nervous system (CNS) and contribute to the regulation of ion and neurotransmitter concentrations. Together with neurons, they shape intercellular space to provide a stable milieu for neuronal activity. Extracellular matrix (ECM) components are synthesized by both neurons and astrocytes and play an important role in the formation, maintenance, and function of synapses in the CNS. The components of the ECM have been detected near glial processes, which abut onto the CNS synaptic unit, where they are part of the specialized macromolecular assemblies termed perineuronal nets (PNNs). PNNs were originally discovered by Golgi and represent a molecular scaffold deposited at the interface between the astrocyte and subsets of neurons in the vicinity of the synapse. Recent reports strongly suggest that PNNs are tightly involved in the regulation of synaptic plasticity. Moreover, several studies have implicated PNNs and the neural ECM in neuropsychiatric diseases. Here, we highlight current concepts relating to neural ECM and PNNs and describe an in vitro approach that allows for the investigation of ECM functions for synaptogenesis.

  10. Webs, cell assemblies, and chunking in neural nets: introduction.

    Science.gov (United States)

    Wickelgren, W A

    1999-03-01

This introduction to Wickelgren (1992) describes a theory of idea representation and learning in the cerebral cortex and seven properties of Hebb's (1949) formulation of cell assemblies that have played a major role in all such neural net models. Ideas are represented in the cerebral cortex by webs (innate cell assemblies), using sparse coding with sparse, all-or-none, innate linking. Recruiting a web to represent a new idea is called chunking. The innate links that bind the neurons of a web are basal dendritic synapses. Learning modifies the apical dendritic synapses that associate neurons in one web to neurons in another web.

  11. Neural Network Approach

    African Journals Online (AJOL)

Efficient management of a hydropower reservoir can only be realized when there is sufficient understanding of the interactions existing between reservoir variables and energy generation. Reservoir inflow, storage, reservoir elevation, turbine release, net generating head, plant use coefficient, tailrace level and evaporation losses ...

  12. Stability Training for Convolutional Neural Nets in LArTPC

    Science.gov (United States)

    Lindsay, Matt; Wongjirad, Taritree

    2017-01-01

Convolutional Neural Nets (CNNs) are the state of the art for many problems in computer vision and are a promising method for classifying interactions in Liquid Argon Time Projection Chambers (LArTPCs) used in neutrino oscillation experiments. Despite the good performance of CNNs, they are not without drawbacks, chief among them vulnerability to noise and small perturbations of the input. One solution to this problem is a modification to the learning process called Stability Training, developed by Zheng et al. We verify existing work and demonstrate volatility caused by simple Gaussian noise, and also that the volatility can be nearly eliminated with Stability Training. We then go further and show that a traditional CNN is also vulnerable to realistic experimental noise and that a stability-trained CNN remains accurate despite noise. This further adds to the optimism for CNNs for work in LArTPCs and other applications.
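A minimal sketch of the stability-training recipe described above (following the general idea of Zheng et al.; the toy linear "model", the noise level sigma, the weight alpha, and the squared-distance stability term standing in for their distance measure are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, w):
    """Stand-in for the CNN: a linear map followed by a softmax over classes."""
    logits = x @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()

def stability_loss(x, y_onehot, w, alpha=0.1, sigma=0.05):
    """Task loss on the clean input plus a penalty for output drift under noise."""
    p_clean = model(x, w)
    p_noisy = model(x + sigma * rng.normal(size=x.shape), w)
    task = -np.sum(y_onehot * np.log(p_clean + 1e-12))   # cross-entropy on clean input
    stab = np.sum((p_clean - p_noisy) ** 2)              # output-stability penalty
    return task + alpha * stab

# toy usage: 4 input features, 3 classes, random weights
x = rng.normal(size=4)
w = rng.normal(size=(4, 3))
y = np.array([0.0, 1.0, 0.0])
print(stability_loss(x, y, w))
```

Minimizing the second term pushes the network toward giving the same output for the clean and perturbed copies of each training example.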

  13. Neural Approaches to Machine Consciousness

    Science.gov (United States)

Aleksander, Igor, FREng

    2008-10-01

'Machine Consciousness', which some years ago might have been suppressed as an inappropriate pursuit, has come out of the closet and is now a legitimate area of research concern. This paper briefly surveys the last few years of worldwide research in this area, which divides into rule-based and neural approaches, and then reviews the work of the author's laboratory during the last ten years. The paper develops a fresh perspective on this work: it is argued that neural approaches, in this case digital neural systems, can address phenomenological consciousness. Important clarifications of phenomenology and virtuality which enter this modelling are explained in the early parts of the paper. In neural models, phenomenology is a form of depictive inner representation that has five specific axiomatic features: a sense of self-presence in an external world; a sense of imagination of past experience and fiction; a sense of attention; a capacity for planning; a sense of emotion-based volition that influences planning. It is shown that these five features have separate but integrated support in dynamic neural systems.

  14. A taxonomy of Deep Convolutional Neural Nets for Computer Vision

    Directory of Open Access Journals (Sweden)

    Suraj eSrinivas

    2016-01-01

Traditional architectures for solving computer vision problems, and the degree of success they enjoyed, have been heavily reliant on hand-crafted features. However, of late, deep learning techniques have offered a compelling alternative: that of automatically learning problem-specific features. With this new paradigm, every problem in computer vision is now being re-examined from a deep learning perspective. Therefore, it has become important to understand what kind of deep networks are suitable for a given problem. Although general surveys of this fast-moving paradigm (i.e. deep networks) exist, a survey specific to computer vision is missing. We specifically consider one form of deep networks widely used in computer vision: convolutional neural networks (CNNs). We start with "AlexNet" as our base CNN and then examine the broad variations proposed over time to suit different applications. We hope that our recipe-style survey will serve as a guide, particularly for novice practitioners intending to use deep-learning techniques for computer vision.

  15. k-Same-Net: k-Anonymity with Generative Deep Neural Networks for Face Deidentification

    Directory of Open Access Journals (Sweden)

    Blaž Meden

    2018-01-01

Image and video data are today being shared between government entities and other relevant stakeholders on a regular basis and require careful handling of the personal information contained therein. A popular approach to ensure privacy protection in such data is the use of deidentification techniques, which aim at concealing the identity of individuals in the imagery while still preserving certain aspects of the data after deidentification. In this work, we propose a novel approach towards face deidentification, called k-Same-Net, which combines recent Generative Neural Networks (GNNs) with the well-known k-Anonymity mechanism and provides formal guarantees regarding privacy protection on a closed set of identities. Our GNN is able to generate synthetic surrogate face images for deidentification by seamlessly combining features of identities used to train the GNN model. Furthermore, it allows us to control the image-generation process with a small set of appearance-related parameters that can be used to alter specific aspects (e.g., facial expressions, age, gender) of the synthesized surrogate images. We demonstrate the feasibility of k-Same-Net in comprehensive experiments on the XM2VTS and CK+ datasets. We evaluate the efficacy of the proposed approach through reidentification experiments with recent recognition models and compare our results with competing deidentification techniques from the literature. We also present facial expression recognition experiments to demonstrate the utility-preservation capabilities of k-Same-Net. Our experimental results suggest that k-Same-Net is a viable option for facial deidentification that exhibits several desirable characteristics when compared to existing solutions in this area.

  16. Examples of Current and Future Uses of Neural-Net Image Processing for Aerospace Applications

    Science.gov (United States)

    Decker, Arthur J.

    2004-01-01

    Feed forward artificial neural networks are very convenient for performing correlated interpolation of pairs of complex noisy data sets as well as detecting small changes in image data. Image-to-image, image-to-variable and image-to-index applications have been tested at Glenn. Early demonstration applications are summarized including image-directed alignment of optics, tomography, flow-visualization control of wind-tunnel operations and structural-model-trained neural networks. A practical application is reviewed that employs neural-net detection of structural damage from interference fringe patterns. Both sensor-based and optics-only calibration procedures are available for this technique. These accomplishments have generated the knowledge necessary to suggest some other applications for NASA and Government programs. A tomography application is discussed to support Glenn's Icing Research tomography effort. The self-regularizing capability of a neural net is shown to predict the expected performance of the tomography geometry and to augment fast data processing. Other potential applications involve the quantum technologies. It may be possible to use a neural net as an image-to-image controller of an optical tweezers being used for diagnostics of isolated nano structures. The image-to-image transformation properties also offer the potential for simulating quantum computing. Computer resources are detailed for implementing the black box calibration features of the neural nets.

  17. ER fluid applications to vibration control devices and an adaptive neural-net controller

    Science.gov (United States)

    Morishita, Shin; Ura, Tamaki

    1993-07-01

    Four applications of electrorheological (ER) fluid to vibration control actuators and an adaptive neural-net control system suitable for the controller of ER actuators are described: a shock absorber system for automobiles, a squeeze film damper bearing for rotational machines, a dynamic damper for multidegree-of-freedom structures, and a vibration isolator. An adaptive neural-net control system composed of a forward model network for structural identification and a controller network is introduced for the control system of these ER actuators. As an example study of intelligent vibration control systems, an experiment was performed in which the ER dynamic damper was attached to a beam structure and controlled by the present neural-net controller so that the vibration in several modes of the beam was reduced with a single dynamic damper.

  18. Deep Deformable Registration: Enhancing Accuracy by Fully Convolutional Neural Net

    OpenAIRE

    Ghosal, Sayan; Ray, Nilanjan

    2016-01-01

    Deformable registration is ubiquitous in medical image analysis. Many deformable registration methods minimize sum of squared difference (SSD) as the registration cost with respect to deformable model parameters. In this work, we construct a tight upper bound of the SSD registration cost by using a fully convolutional neural network (FCNN) in the registration pipeline. The upper bound SSD (UB-SSD) enhances the original deformable model parameter space by adding a heatmap output from FCNN. Nex...

  19. Fast neural-net based fake track rejection

    CERN Document Server

    De Cian, Michel; Seyfert, Paul; Stahl, Sascha

    2017-01-01

    A neural-network based algorithm to identify fake tracks in the LHCb pattern recognition is presented. This algorithm, called ghost probability, is fast enough to fit into the CPU time budget of the software trigger farm. It allows reducing the fake rate and consequently the combinatorics of the decay reconstructions, as well as the number of tracks that need to be processed by the particle identification algorithms. As a result, it strongly contributes to the achievement of having the same reconstruction online and offline in the LHCb experiment.

  20. ChemNet: A Transferable and Generalizable Deep Neural Network for Small-Molecule Property Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Goh, Garrett B.; Siegel, Charles M.; Vishnu, Abhinav; Hodas, Nathan O.

    2017-12-08

    With access to large datasets, deep neural networks through representation learning have been able to identify patterns from raw data, achieving human-level accuracy in image and speech recognition tasks. However, in chemistry, availability of large standardized and labelled datasets is scarce, and with a multitude of chemical properties of interest, chemical data is inherently small and fragmented. In this work, we explore transfer learning techniques in conjunction with the existing Chemception CNN model, to create a transferable and generalizable deep neural network for small-molecule property prediction. Our latest model, ChemNet learns in a semi-supervised manner from inexpensive labels computed from the ChEMBL database. When fine-tuned to the Tox21, HIV and FreeSolv dataset, which are 3 separate chemical tasks that ChemNet was not originally trained on, we demonstrate that ChemNet exceeds the performance of existing Chemception models, contemporary MLP models that trains on molecular fingerprints, and it matches the performance of the ConvGraph algorithm, the current state-of-the-art. Furthermore, as ChemNet has been pre-trained on a large diverse chemical database, it can be used as a universal “plug-and-play” deep neural network, which accelerates the deployment of deep neural networks for the prediction of novel small-molecule chemical properties.

  1. Intelligent control based on fuzzy logic and neural net theory

    Science.gov (United States)

    Lee, Chuen-Chien

    1991-01-01

    In the conception and design of intelligent systems, one promising direction involves the use of fuzzy logic and neural network theory to enhance such systems' capability to learn from experience and adapt to changes in an environment of uncertainty and imprecision. Here, an intelligent control scheme is explored by integrating these multidisciplinary techniques. A self-learning system is proposed as an intelligent controller for dynamical processes, employing a control policy which evolves and improves automatically. One key component of the intelligent system is a fuzzy logic-based system which emulates human decision making behavior. It is shown that the system can solve a fairly difficult control learning problem. Simulation results demonstrate that improved learning performance can be achieved in relation to previously described systems employing bang-bang control. The proposed system is relatively insensitive to variations in the parameters of the system environment.

  2. Development of a neural net paradigm that predicts simulator sickness

    Energy Technology Data Exchange (ETDEWEB)

    Allgood, G.O.

    1993-03-01

A disease exists that affects pilots and aircrew members who use Navy Operational Flight Training Systems. This malady, commonly referred to as simulator sickness and whose symptomatology closely aligns with that of motion sickness, can compromise the use of these systems because of a reduced utilization factor, negative transfer of training, and reduction in combat readiness. A report is submitted that develops an artificial neural network (ANN) and behavioral model that predicts the onset and level of simulator sickness in the pilots and aircrews who use these systems. It is proposed that the paradigm could be implemented in real time as a biofeedback monitor to reduce the risk to users of these systems. The model captures the neurophysiological impact of use (human-machine interaction) by developing a structure that maps the associative and nonassociative behavioral patterns (learned expectations) and vestibular (otolith and semicircular canals of the inner ear) and tactile interaction, derived from system acceleration profiles, onto an abstract space that predicts simulator sickness for a given training flight.

  3. NIRFaceNet: A Convolutional Neural Network for Near-Infrared Face Identification

    Directory of Open Access Journals (Sweden)

    Min Peng

    2016-10-01

Near-infrared (NIR) face recognition has attracted increasing attention because of its advantage of illumination invariance. However, traditional face recognition methods based on NIR are designed for and tested in cooperative-user applications. In this paper, we present a convolutional neural network (CNN) for NIR face recognition (specifically face identification) in non-cooperative-user applications. The proposed NIRFaceNet is modified from GoogLeNet, but has a more compact structure designed specifically for the Chinese Academy of Sciences Institute of Automation (CASIA) NIR database and can achieve higher identification rates with less training time and less processing time. The experimental results demonstrate that NIRFaceNet has an overall advantage compared to other methods in the NIR face recognition domain when image blur and noise are present. The performance suggests that the proposed NIRFaceNet method may be more suitable for non-cooperative-user applications.

  4. BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment.

    Science.gov (United States)

    Kawahara, Jeremy; Brown, Colin J; Miller, Steven P; Booth, Brian G; Chau, Vann; Grunau, Ruth E; Zwicker, Jill G; Hamarneh, Ghassan

    2017-02-01

    We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain. Copyright © 2016 Elsevier Inc. All rights reserved.
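A rough sketch of the cross-shaped edge-to-edge filtering idea (my paraphrase of the layer, with a single filter and random weights; not the authors' implementation): for an edge (i, j) of the connectivity matrix, the response mixes every edge incident to node i with every edge incident to node j.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_to_edge(A, r, c):
    """One (hypothetical) edge-to-edge filter over an n x n adjacency matrix A.

    For each edge (i, j): weighted sum of row i (edges at node i, weights r)
    plus weighted sum of column j (edges at node j, weights c).
    """
    n = A.shape[0]
    out = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            out[i, j] = A[i, :] @ r + A[:, j] @ c
    return out

# toy usage on a 4-node connectivity matrix with random filter weights
A = rng.random((4, 4))
r, c = rng.normal(size=4), rng.normal(size=4)
print(edge_to_edge(A, r, c).shape)   # (4, 4): the edge grid, filtered
```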

  5. Intelligent control aspects of fuzzy logic and neural nets

    CERN Document Server

    Harris, C J; Brown, M

    1993-01-01

With increasing demands for high-precision autonomous control over wide operating envelopes, conventional control engineering approaches are unable to adequately deal with system complexity, nonlinearities, spatial and temporal parameter variations, and with uncertainty. Intelligent Control, or self-organising/learning control, is a new emerging discipline designed to deal with such problems. Rather than being model based, it is experience based. Intelligent Control is the amalgam of the disciplines of Artificial Intelligence, Systems Theory and Operations Research. It uses most recent expe

  6. TopologyNet: Topology based deep convolutional and multi-task neural networks for biomolecular property predictions.

    Directory of Open Access Journals (Sweden)

    Zixuan Cang

    2017-07-01

Although deep learning approaches have had tremendous success in image, video and audio processing, computer vision, and speech recognition, their applications to three-dimensional (3D) biomolecular structural data sets have been hindered by the geometric and biological complexity. To address this problem we introduce the element-specific persistent homology (ESPH) method. ESPH represents 3D complex geometry by one-dimensional (1D) topological invariants and retains important biological information via a multichannel image-like representation. This representation reveals hidden structure-function relationships in biomolecules. We further integrate ESPH and deep convolutional neural networks to construct a multichannel topological neural network (TopologyNet) for the predictions of protein-ligand binding affinities and protein stability changes upon mutation. To overcome the deep learning limitations from small and noisy training sets, we propose a multi-task multichannel topological convolutional neural network (MM-TCNN). We demonstrate that TopologyNet outperforms the latest methods in the prediction of protein-ligand binding affinities, mutation induced globular protein folding free energy changes, and mutation induced membrane protein folding free energy changes. Availability: weilab.math.msu.edu/TDL/

  7. TopologyNet: Topology based deep convolutional and multi-task neural networks for biomolecular property predictions

    Science.gov (United States)

    2017-01-01

    Although deep learning approaches have had tremendous success in image, video and audio processing, computer vision, and speech recognition, their applications to three-dimensional (3D) biomolecular structural data sets have been hindered by the geometric and biological complexity. To address this problem we introduce the element-specific persistent homology (ESPH) method. ESPH represents 3D complex geometry by one-dimensional (1D) topological invariants and retains important biological information via a multichannel image-like representation. This representation reveals hidden structure-function relationships in biomolecules. We further integrate ESPH and deep convolutional neural networks to construct a multichannel topological neural network (TopologyNet) for the predictions of protein-ligand binding affinities and protein stability changes upon mutation. To overcome the deep learning limitations from small and noisy training sets, we propose a multi-task multichannel topological convolutional neural network (MM-TCNN). We demonstrate that TopologyNet outperforms the latest methods in the prediction of protein-ligand binding affinities, mutation induced globular protein folding free energy changes, and mutation induced membrane protein folding free energy changes. Availability: weilab.math.msu.edu/TDL/ PMID:28749969

  8. Self-Organizing Neural-Net Control of Ship's Horizontal Motion

    Energy Technology Data Exchange (ETDEWEB)

    Yang, X J; Zhao, X R [Automation College of Harbin Engineering University, Harbin 150001 (China)

    2006-10-15

This paper describes the concept and an example of an adaptive neural-net controller system for a ship's horizontal motion. The system consists of two parts, a real-world part and an imaginary-world part. The real-world part is a feedback control system for the actual ship. In the imaginary-world part, the model of the ship and the controller are adjusted continuously in order to deal with changes of dynamic properties caused by disturbances and so on. In this paper, the adaptability of the controller system is investigated by controlling the ship's horizontal motion including roll, yaw and sway. The results of simulation indicate that with self-organizing neural-net control, the mean square errors of roll angle and yaw angle reduce to 0.92° and 0.74°, respectively. The control effect of SONC is better than that of a conventional LQG controller.

  9. Deep neural nets as a method for quantitative structure-activity relationships.

    Science.gov (United States)

    Ma, Junshui; Sheridan, Robert P; Liaw, Andy; Dahl, George E; Svetnik, Vladimir

    2015-02-23

    Neural networks were widely used for quantitative structure-activity relationships (QSAR) in the 1990s. Because of various practical issues (e.g., slow on large problems, difficult to train, prone to overfitting, etc.), they were superseded by more robust methods like support vector machine (SVM) and random forest (RF), which arose in the early 2000s. The last 10 years has witnessed a revival of neural networks in the machine learning community thanks to new methods for preventing overfitting, more efficient training algorithms, and advancements in computer hardware. In particular, deep neural nets (DNNs), i.e. neural nets with more than one hidden layer, have found great successes in many applications, such as computer vision and natural language processing. Here we show that DNNs can routinely make better prospective predictions than RF on a set of large diverse QSAR data sets that are taken from Merck's drug discovery effort. The number of adjustable parameters needed for DNNs is fairly large, but our results show that it is not necessary to optimize them for individual data sets, and a single set of recommended parameters can achieve better performance than RF for most of the data sets we studied. The usefulness of the parameters is demonstrated on additional data sets not used in the calibration. Although training DNNs is still computationally intensive, using graphical processing units (GPUs) can make this issue manageable.
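To make the overfitting-prevention point concrete, here is a toy sketch (hypothetical layer sizes, a 2048-bit fingerprint input as is common in QSAR, and random rather than trained weights) of a forward pass through a multi-hidden-layer net with dropout, one of the newer regularization methods alluded to above:

```python
import numpy as np

rng = np.random.default_rng(0)

def dnn_forward(fp, weights, drop_p=0.25, train=True):
    """Forward pass of a small DNN with ReLU hidden layers and inverted dropout.

    fp:      binary fingerprint vector (e.g. 2048 bits)
    weights: list of (W, b) pairs; the last pair yields the activity prediction
    """
    h = fp.astype(float)
    for W, b in weights[:-1]:
        h = np.maximum(h @ W + b, 0.0)            # ReLU hidden layer
        if train:                                 # randomly drop units during training
            mask = rng.random(h.shape) >= drop_p
            h = h * mask / (1.0 - drop_p)
    W, b = weights[-1]
    return h @ W + b                              # scalar activity output

# hypothetical 2048 -> 256 -> 64 -> 1 architecture
sizes = [2048, 256, 64, 1]
weights = [(rng.normal(scale=0.01, size=(m, n)), np.zeros(n))
           for m, n in zip(sizes[:-1], sizes[1:])]
fingerprint = rng.integers(0, 2, size=2048)
print(dnn_forward(fingerprint, weights).item())
```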

  10. Clustering: a neural network approach.

    Science.gov (United States)

    Du, K-L

    2010-01-01

    Clustering is a fundamental data analysis method. It is widely used for pattern recognition, feature extraction, vector quantization (VQ), image segmentation, function approximation, and data mining. As an unsupervised classification technique, clustering identifies some inherent structures present in a set of objects based on a similarity measure. Clustering methods can be based on statistical model identification (McLachlan & Basford, 1988) or competitive learning. In this paper, we give a comprehensive overview of competitive learning based clustering methods. Importance is attached to a number of competitive learning based clustering neural networks such as the self-organizing map (SOM), the learning vector quantization (LVQ), the neural gas, and the ART model, and clustering algorithms such as the C-means, mountain/subtractive clustering, and fuzzy C-means (FCM) algorithms. Associated topics such as the under-utilization problem, fuzzy clustering, robust clustering, clustering based on non-Euclidean distance measures, supervised clustering, hierarchical clustering as well as cluster validity are also described. Two examples are given to demonstrate the use of the clustering methods.
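
    The winner-take-all prototype update at the heart of competitive learning based clustering can be sketched in a few lines of Python; the data, number of prototypes and learning-rate schedule below are illustrative only.

      import numpy as np

      rng = np.random.default_rng(1)
      # Two synthetic Gaussian clusters in 2D.
      data = np.vstack([rng.normal([0, 0], 0.5, size=(200, 2)),
                        rng.normal([3, 3], 0.5, size=(200, 2))])
      rng.shuffle(data)

      k = 2
      prototypes = data[rng.choice(len(data), k, replace=False)].copy()

      eta = 0.1                              # learning rate
      for epoch in range(20):
          for x in data:
              winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
              prototypes[winner] += eta * (x - prototypes[winner])   # move only the winner toward the input
          eta *= 0.9                         # anneal the learning rate

      print("learned prototypes:\n", prototypes)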

  11. MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense

    OpenAIRE

    Sengupta, Sailik; Chakraborti, Tathagata; Kambhampati, Subbarao

    2017-01-01

    Recent works on gradient-based attacks and universal perturbations can adversarially modify images to bring down the accuracy of state-of-the-art classification techniques based on deep neural networks to as low as 10% on popular datasets like MNIST and ImageNet. The design of general defense strategies against a wide range of such attacks remains a challenging problem. In this paper, we derive inspiration from recent advances in the fields of cybersecurity and multi-agent systems and propos...

  12. Neural network approaches for noisy language modeling.

    Science.gov (United States)

    Li, Jun; Ouazzane, Karim; Kazemian, Hassan B; Afzal, Muhammad Sajid

    2013-11-01

    Text entry from people is not only grammatical and distinct, but also noisy. For example, a user's typing stream contains all the information about the user's interaction with the computer using a QWERTY keyboard, which may include the user's typing mistakes as well as specific vocabulary, typing habits, and typing performance. These features are especially evident in disabled users' typing streams. This paper proposes a new concept called noisy language modeling by further developing information theory and applies neural networks to one of its specific applications, the typing stream. The paper experimentally uses a neural network approach to analyze disabled users' typing streams, both in general and in specific ways, to identify their typing behaviors and subsequently to make typing predictions and typing corrections. A focused time-delay neural network (FTDNN) language model, a time gap model, a prediction model based on time gap, and a probabilistic neural network (PNN) model are developed. A 38% first hitting rate (HR) and a 53% first-three HR in symbol prediction are obtained from the analysis of a user's typing history through FTDNN language modeling, while the results using the time gap prediction model and the PNN model show that the correction rates lie predominantly between 65% and 90% on the current testing samples, with 70% of all test scores above the basic correction rates, respectively. The modeling process demonstrates that a neural network is a suitable and robust language modeling tool for analyzing a noisy language stream. The research also paves the way for practical application development in areas such as informational analysis, text prediction, and error correction by providing a theoretical basis of neural network approaches for noisy language modeling.
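
    A toy illustration of the time-delay idea, a window of previous symbols fed to a feed-forward classifier that predicts the next symbol, is sketched below in Python; this is a simplification rather than the paper's FTDNN, time-gap or PNN models, and the "typing stream" is invented.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      text = "the quick brown fox jumps over the lazy dog " * 30   # invented typing stream
      chars = sorted(set(text))
      idx = {c: i for i, c in enumerate(chars)}

      window = 4                                    # number of delayed symbols fed to the net
      def one_hot(c):
          v = np.zeros(len(chars))
          v[idx[c]] = 1.0
          return v

      X = np.array([np.concatenate([one_hot(c) for c in text[i:i + window]])
                    for i in range(len(text) - window)])
      y = np.array([idx[text[i + window]] for i in range(len(text) - window)])

      clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=400, random_state=0).fit(X, y)
      print("next-symbol hit rate on the training stream:", clf.score(X, y))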

  13. Neural Network Approach to Locating Cryptography in Object Code

    Energy Technology Data Exchange (ETDEWEB)

    Jason L. Wright; Milos Manic

    2009-09-01

    Finding and identifying cryptography is a growing concern in the malware analysis community. In this paper, artificial neural networks are used to classify functional blocks from a disassembled program as being either cryptography related or not. The resulting system, referred to as NNLC (Neural Net for Locating Cryptography) is presented and results of applying this system to various libraries are described.

  14. Fuzzy Document Clustering Approach using WordNet Lexical Categories

    Science.gov (United States)

    Gharib, Tarek F.; Fouad, Mohammed M.; Aref, Mostafa M.

    Text mining refers generally to the process of extracting interesting information and knowledge from unstructured text. This area is growing rapidly mainly because of the strong need for analysing the huge amount of textual data that reside on internal file systems and the Web. Text document clustering provides an effective navigation mechanism to organize this large amount of data by grouping the documents into a small number of meaningful classes. In this paper we propose a fuzzy text document clustering approach using WordNet lexical categories and the fuzzy c-means algorithm. Experiments are performed to compare the efficiency of the proposed approach with recently reported approaches. Experimental results show that fuzzy clustering leads to very good performance, and that the fuzzy c-means algorithm outperforms classical clustering algorithms such as k-means and bisecting k-means in both clustering quality and running time efficiency.
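
    The fuzzy c-means updates referred to above are compact enough to sketch directly; in the sketch below the document vectors are random placeholders, whereas in the paper they would be term weights grouped by WordNet lexical categories.

      import numpy as np

      def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
          """Plain fuzzy c-means: alternate membership and centroid updates."""
          rng = np.random.default_rng(seed)
          U = rng.random((c, len(X)))
          U /= U.sum(axis=0)                                   # memberships sum to 1 per sample
          for _ in range(n_iter):
              centers = (U ** m) @ X / (U ** m).sum(axis=1, keepdims=True)
              d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
              U = 1.0 / (d ** (2.0 / (m - 1.0)))               # u_ik proportional to d_ik^(-2/(m-1))
              U /= U.sum(axis=0)
          return centers, U

      X = np.random.default_rng(1).normal(size=(300, 20))      # placeholder document vectors
      centers, memberships = fuzzy_c_means(X)
      print(centers.shape, memberships.shape)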

  15. Auto-context Convolutional Neural Network (Auto-Net) for Brain Extraction in Magnetic Resonance Imaging.

    Science.gov (United States)

    Salehi, Seyed Sadegh Mohseni; Erdogmus, Deniz; Gholipour, Ali

    2017-06-28

    Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and robustness of brain extraction, therefore, is crucial for the accuracy of the entire brain analysis process. State-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry; therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent and registration-free brain extraction tool in this study, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3D image information without the need for computationally expensive 3D convolutions, and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark datasets, namely LPBA40 and OASIS, in which we obtained Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily-oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI

  16. Flood estimation: a neural network approach

    Energy Technology Data Exchange (ETDEWEB)

    Swain, P.C.; Seshachalam, C.; Umamahesh, N.V. [Regional Engineering Coll., Warangal (India). Water and Environment Div.

    2000-07-01

    The artificial neural network (ANN) approach described in this study aims at predicting the flood flow into a reservoir. This differs from the traditional methods of flow prediction in the sense that it belongs to a class of data-driven approaches, whereas the traditional methods are model-driven. The physical processes influencing the occurrence of streamflow in a river are highly complex and very difficult to model with available statistical or deterministic models. ANNs provide model-free solutions and hence can be expected to be appropriate in these conditions. Non-linearity, adaptivity, evidential response and fault tolerance are additional properties and capabilities of neural networks. This paper highlights the applicability of neural networks for predicting daily flood flow, taking the Hirakud reservoir on the river Mahanadi in Orissa, India as the case study. The correlation between the observed and predicted flows and the relative error are used to measure the performance of the model. The correlation between the observed and modelled flows is computed to be 0.9467 in the testing phase of the model. (orig.)

  17. Neural-Net Based Optical NDE Method for Structural Health Monitoring

    Science.gov (United States)

    Decker, Arthur J.; Weiland, Kenneth E.

    2003-01-01

    This paper answers some performance and calibration questions about a non-destructive-evaluation (NDE) procedure that uses artificial neural networks to detect structural damage or other changes from sub-sampled characteristic patterns. The method shows increasing sensitivity as the number of sub-samples increases from 108 to 6912. The sensitivity of this robust NDE method is not affected by noisy excitations of the first vibration mode. A calibration procedure is proposed and demonstrated where the output of a trained net can be correlated with the outputs of the point sensors used for vibration testing. The calibration procedure is based on controlled changes of fastener torques. A heterodyne interferometer is used as a displacement sensor for a demonstration of the challenges to be handled in using standard point sensors for calibration.

  18. Investigation of neural-net based control strategies for improved power system dynamic performance

    Energy Technology Data Exchange (ETDEWEB)

    Sobajic, D.J. [Electric Power Research Institute, Palo Alto, CA (United States)

    1995-12-31

    The ability to accurately predict the behavior of a dynamic system is of essential importance in monitoring and control of complex processes. In this regard, recent advances in neural-net based system identification represent a significant step toward the development and design of a new generation of control tools for increased system performance and reliability. The enabling functionality is the accurate representation of a model of a nonlinear and nonstationary dynamic system. This functionality provides valuable new opportunities including: (1) the ability to predict future system behavior on the basis of actual system observations, (2) on-line evaluation and display of system performance and design of early warning systems, and (3) controller optimization for improved system performance. In this presentation, we discuss the issues involved in the definition and design of learning control systems and their impact on power system control. Several numerical examples are provided for illustrative purposes.

  19. Accelerometer signal-based human activity recognition using augmented autoregressive model coefficients and artificial neural nets.

    Science.gov (United States)

    Khan, A M; Lee, Y K; Kim, T S

    2008-01-01

    Automatic recognition of human activities is one of the important and challenging research areas in proactive and ubiquitous computing. In this work, we present some preliminary results on recognizing human activities using augmented features extracted from the activity signals measured by a single triaxial accelerometer sensor and artificial neural nets. The features include autoregressive (AR) modeling coefficients of the activity signals, signal magnitude areas (SMA), and tilt angles (TA). We have recognized four human activities using AR coefficients (ARC) only, ARC with SMA, and ARC with SMA and TA. With the last augmented feature set, we have achieved a recognition rate above 99% for all four activities: lying, standing, walking, and running. With our proposed technique, real-time recognition of some human activities is possible.
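
    A sketch of the kind of feature extraction described, AR coefficients fitted by least squares plus a signal magnitude area and a tilt angle, is shown below on a synthetic triaxial window; the sampling rate, window length and AR order are assumptions for illustration.

      import numpy as np

      def ar_coefficients(x, order=3):
          """Least-squares fit of an AR(order) model x[t] ~ sum_k a_k * x[t-k]."""
          rows = [x[i:i + order][::-1] for i in range(len(x) - order)]
          A, b = np.array(rows), x[order:]
          coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
          return coeffs

      rng = np.random.default_rng(0)
      t = np.linspace(0, 2, 104)                       # ~2 s window at ~52 Hz (assumed rate)
      ax = 0.2 * np.sin(2 * np.pi * 2 * t) + 0.05 * rng.normal(size=t.size)
      ay = 0.1 * np.sin(2 * np.pi * 2 * t + 1.0) + 0.05 * rng.normal(size=t.size)
      az = 1.0 + 0.1 * np.sin(2 * np.pi * 2 * t + 2.0) + 0.05 * rng.normal(size=t.size)

      features = np.concatenate([
          ar_coefficients(ax), ar_coefficients(ay), ar_coefficients(az),     # ARC
          [np.mean(np.abs(ax) + np.abs(ay) + np.abs(az))],                   # SMA
          [np.degrees(np.arctan2(np.sqrt(np.mean(ax)**2 + np.mean(ay)**2),   # tilt angle from the
                                 np.mean(az)))],                             # vertical (gravity ~ z)
      ])
      print(features)   # such a vector would be fed to a neural-net activity classifier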

  20. LOGIC WITH EXCEPTION ON THE ALGEBRA OF FOURIER-DUAL OPERATIONS: NEURAL NET MECHANISM OF COGNITIVE DISSONANCE REDUCING

    Directory of Open Access Journals (Sweden)

    A. V. Pavlov

    2014-01-01

    Full Text Available A mechanism for reducing cognitive dissonance is demonstrated using an approach to non-monotonic fuzzy-valued logics implemented with the Fourier-holography technique. Cognitive dissonance occurs when newly perceived information contradicts the existing subjective pattern of the outside world, which is represented by a double Fourier-transform cascade with a hologram (the interconnection matrix of the neural layers) for inner information representation and logical conclusion. The hologram implements monotonic logic according to the "General Modus Ponens" rule. New information is represented by a hologram of exclusion that implements the interconnections between the logical-conclusion and exclusion neural layers. The latter are linked by a Fourier transform, which determines the duality of the conjunction and disjunction operations forming the algebra. The hologram of exclusion forms a conclusion that is dual to the "General Modus Ponens" conclusion. It is shown that a system trained for the main rule and the exclusion can be represented by a two-layered neural network with separate interconnection matrices for direct and inverse iterations. The network energy function determining the cyclic character of the dynamics is derived, and the dissipative factor causing the convergent type of dynamics is analyzed. The effects of the recording conditions of both the "General Modus Ponens" and exclusion holograms on the dynamics and convergence of the system are demonstrated. The system converges to a stable state in which the logical conclusion does not depend on the inner information. This kind of dynamics, leading to the formation of tolerance, is typical of the ordinary kind of thinking, aimed at stability of the inner pattern of the outside world. For the scientific kind of thinking, aimed at adequacy of the inner pattern of the world, a mechanism is needed to stop the relaxation of the net; this mechanism has to be external to the model of logic. Computer simulation results for learning conditions adequate to real hologram recording are

  1. Door and cabinet recognition using convolutional neural nets and real-time method fusion for handle detection and grasping

    DEFF Research Database (Denmark)

    Maurin, Adrian Llopart; Ravn, Ole; Andersen, Nils Axel

    2017-01-01

    In this paper we present a new method that robustly identifies doors, cabinets and their respective handles, with special emphasis on extracting useful features from handles to be then manipulated. The novelty of this system relies on the combination of a Convolutional Neural Net (CNN), as a form...

  2. Maximum Entropy Approaches to Living Neural Networks

    Directory of Open Access Journals (Sweden)

    John M. Beggs

    2010-01-01

    Full Text Available Understanding how ensembles of neurons collectively interact will be a key step in developing a mechanistic theory of cognitive processes. Recent progress in multineuron recording and analysis techniques has generated tremendous excitement over the physiology of living neural networks. One of the key developments driving this interest is a new class of models based on the principle of maximum entropy. Maximum entropy models have been reported to account for spatial correlation structure in ensembles of neurons recorded from several different types of data. Importantly, these models require only information about the firing rates of individual neurons and their pairwise correlations. If this approach is generally applicable, it would drastically simplify the problem of understanding how neural networks behave. Given the interest in this method, several groups now have worked to extend maximum entropy models to account for temporal correlations. Here, we review how maximum entropy models have been applied to neuronal ensemble data to account for spatial and temporal correlations. We also discuss criticisms of the maximum entropy approach that argue that it is not generally applicable to larger ensembles of neurons. We conclude that future maximum entropy models will need to address three issues: temporal correlations, higher-order correlations, and larger ensemble sizes. Finally, we provide a brief list of topics for future research.
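
    A self-contained sketch of fitting a pairwise maximum entropy (Ising-like) model to binary spike words by matching firing rates and pairwise correlations is given below; it uses exact enumeration, so it is only feasible for a handful of neurons, and the data are synthetic.

      import numpy as np
      from itertools import product

      rng = np.random.default_rng(0)
      N = 5
      data = (rng.random((5000, N)) < 0.2).astype(float)        # synthetic binary spike words
      data[:, 1] = np.maximum(data[:, 1], data[:, 0])           # inject a pairwise correlation

      states = np.array(list(product([0, 1], repeat=N)), dtype=float)   # all 2^N patterns

      def model_moments(h, J):
          # E(s) = h.s + sum_{i<j} J_ij s_i s_j ; P(s) proportional to exp(E(s))
          E = states @ h + np.einsum('si,ij,sj->s', states, np.triu(J, 1), states)
          p = np.exp(E)
          p /= p.sum()
          mean = p @ states
          corr = np.einsum('s,si,sj->ij', p, states, states)
          return mean, corr

      emp_mean = data.mean(axis=0)
      emp_corr = data.T @ data / len(data)

      h, J = np.zeros(N), np.zeros((N, N))
      for step in range(2000):                                  # gradient ascent on the log-likelihood
          mean, corr = model_moments(h, J)
          h += 0.1 * (emp_mean - mean)                          # match firing rates
          J += 0.1 * np.triu(emp_corr - corr, 1)                # match pairwise correlations

      print("fitted fields h:", np.round(h, 2))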

  3. Neural hypernetwork approach for pulmonary embolism diagnosis.

    Science.gov (United States)

    Rucco, Matteo; Sousa-Rodrigues, David; Merelli, Emanuela; Johnson, Jeffrey H; Falsetti, Lorenzo; Nitti, Cinzia; Salvi, Aldo

    2015-10-29

    Hypernetworks are based on topological simplicial complexes and generalize the concept of two-body relation to many-body relation. Furthermore, Hypernetworks provide a significant generalization of network theory, enabling the integration of relational structure, logic and analytic dynamics. A pulmonary embolism is a blockage of the main artery of the lung or one of its branches, frequently fatal. Our study uses data on 28 diagnostic features of 1427 people considered to be at risk of pulmonary embolism enrolled in the Department of Internal and Subintensive Medicine of an Italian National Hospital "Ospedali Riuniti di Ancona". Patients arrived in the department after a first screening executed by the emergency room. The resulting neural hypernetwork correctly recognized 94% of those developing pulmonary embolism. This is better than previous results obtained with other methods (statistical selection of features, partial least squares regression, topological data analysis in a metric space). In this work we successfully derived a new integrative approach for the analysis of partial and incomplete datasets that is based on Q-analysis with machine learning. The new approach, called Neural Hypernetwork, has been applied to a case study of pulmonary embolism diagnosis. The novelty of this method is that it does not use clinical parameters extracted by imaging analysis.

  4. Parametric Identification of Aircraft Loads: An Artificial Neural Network Approach

    Science.gov (United States)

    2016-03-30

    Keywords: monitoring, flight parameter, nonlinear modeling, artificial neural network, typical loadcase. Aircraft load monitoring is an... Neural networks (ANN), i.e. the BP network and the Kohonen clustering network, are applied and revised by Kalman filter and genetic algorithm to build

  5. A Cellular Approach to Net-Zero Energy Cities

    Directory of Open Access Journals (Sweden)

    Miguel Amado

    2017-11-01

    Full Text Available Recent growth in the use of photovoltaic technology and a rapid reduction in its cost confirm the potential of solar power on a large scale. In this context, planning for the deployment of smart grids is among the most important challenges to support the increased penetration of solar energy in urban areas and to ensure the resilience of the electricity system. As part of this effort, the present paper describes a cellular approach to a net-zero energy concept, based on the balance between the potential solar energy supply and the existing consumption patterns at the urban unit scale. To do this, the Geographical Urban Units Delimitation (GUUD) model has been developed and tested on a case study. By applying the GUUD model, which combines Geographic Information Systems (GIS), parametric modelling, and solar dynamic analysis, the whole area of the city was divided into urban cells, categorized as solar producers and energy consumers. The discussion of three theoretical scenarios allows us to explore how smart grids can be approached and promoted from an urban planning perspective. The paper provides insights into how urban planning can be a driver to optimize and manage the energy balance across the city if the deployment of smart grids is correctly integrated into its operative process.

  6. Neural Networks in Antennas and Microwaves: A Practical Approach

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2001-12-01

    Full Text Available Neural networks are electronic systems which can be trained to remember the behavior of a modeled structure at given operational points, and which can be used to approximate the behavior of the structure outside the training points. These approximation abilities of neural nets are demonstrated on the modeling of a frequency-selective surface, a microstrip transmission line and a microstrip dipole. Attention is turned to the accuracy and the efficiency of neural models. The association of neural models and genetic algorithms, which can provide a global design tool, is discussed.

  7. Neural-net based coordinated stabilizing control for the exciter and governor loops of low head hydropower plants

    Energy Technology Data Exchange (ETDEWEB)

    Djukanovic, M.; Novicevic, M.; Dobrijevic, D.; Babic, B. [Electrical Engineering Inst. Nikola Tesla, Belgrade (Yugoslavia); Sobajic, D.J. [Electric Power Research Inst., Palo Alto, CA (United States); Pao, Y.H. [Case Western Reserve Univ., Cleveland, OH (United States)]|[AI WARE, Inc., Cleveland, OH (United States)

    1995-12-01

    This paper presents a design technique of a new adaptive optimal controller of the low head hydropower plant using artificial neural networks (ANN). The adaptive controller is to operate in real time to improve the generating unit transients through the exciter input, the guide vane position and the runner blade position. The new design procedure is based on self-organization and the predictive estimation capabilities of neural-nets implemented through the cluster-wise segmented associative memory scheme. The developed neural-net based controller (NNC) whose control signals are adjusted using the on-line measurements, can offer better damping effects for generator oscillations over a wide range of operating conditions than conventional controllers. Digital simulations of hydropower plant equipped with low head Kaplan turbine are performed and the comparisons of conventional excitation-governor control, state-space optimal control and neural-net based control are presented. Results obtained on the non-linear mathematical model demonstrate that the effects of the NNC closely agree with those obtained using the state-space multivariable discrete-time optimal controllers.

  8. Building insightful simulation models using Petri Nets - A structured approach

    NARCIS (Netherlands)

    van der Zee, D.J.

    Petri Nets have essential strengths in capturing a system's static structure and dynamics, its mathematical underpinning, and providing a graphical representation. However, visual simulation models of realistic systems based on Petri Nets are often perceived as too large and too complex to be easily

  9. NIRExpNet: Three-Stream 3D Convolutional Neural Network for Near Infrared Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Zhan Wu

    2017-11-01

    Full Text Available Facial expression recognition (FER) under active near-infrared (NIR) illumination has the advantage of illumination invariance. In this paper, we propose a three-stream 3D convolutional neural network, named NIRExpNet, for NIR FER. The 3D structure of NIRExpNet makes it possible to automatically extract not just spatial features but also temporal features. The design of the multiple streams of NIRExpNet enables it to fuse local and global facial expression features. To avoid over-fitting, NIRExpNet has a moderate size to suit the Oulu-CASIA NIR facial expression database, which is a medium-size database. Experimental results show that the proposed NIRExpNet outperforms some previous state-of-the-art methods, such as Histogram of Oriented Gradients extended to 3D (HOG 3D), Local Binary Patterns from Three Orthogonal Planes (LBP-TOP), the deep temporal appearance-geometry network (DTAGN), and adaptive 3D Convolutional Neural Networks (3D CNN DAP).

  10. Design of efficient and safe neural stimulators a multidisciplinary approach

    CERN Document Server

    van Dongen, Marijn

    2016-01-01

    This book discusses the design of neural stimulator systems which are used for the treatment of a wide variety of brain disorders such as Parkinson’s, depression and tinnitus. Whereas many existing books treating neural stimulation focus on one particular design aspect, such as the electrical design of the stimulator, this book uses a multidisciplinary approach: by combining the fields of neuroscience, electrophysiology and electrical engineering a thorough understanding of the complete neural stimulation chain is created (from the stimulation IC down to the neural cell). This multidisciplinary approach enables readers to gain new insights into stimulator design, while context is provided by presenting innovative design examples. Provides a single-source, multidisciplinary reference to the field of neural stimulation, bridging an important knowledge gap among the fields of bioelectricity, neuroscience, neuroengineering and microelectronics;Uses a top-down approach to understanding the neural activation proc...

  11. Modularity and Sparsity: Evolution of Neural Net Controllers in Physically Embodied Robots

    Directory of Open Access Journals (Sweden)

    Nicholas Livingston

    2016-12-01

    Full Text Available While modularity is thought to be central for the evolution of complexity and evolvability, it remains unclear how systems bootstrap themselves into modularity from random or fully integrated starting conditions. Clune et al. (2013) suggested that a positive correlation between sparsity and modularity is the prime cause of this transition. We sought to test the generality of this modularity-sparsity hypothesis by testing it for the first time in physically embodied robots. A population of ten Tadros — autonomous, surface-swimming robots propelled by a flapping tail — was used. Individuals varied only in the structure of their neural net control, a 2 x 6 x 2 network with recurrence in the hidden layer. Each of the 60 possible connections was coded in the genome and could achieve one of three states: -1, 0, 1. Inputs were two light-dependent resistors and outputs were two motor control variables to the flapping tail, one for the frequency of the flapping and the other for the turning offset. Each Tadro was tested separately in a circular tank lit by a single overhead light source. Fitness was the amount of light gathered by a vertically oriented sensor that was disconnected from the controller net. Reproduction was asexual, with the top performer cloned and then all individuals entered into a roulette wheel selection process, with genomes mutated to create the offspring. The starting population of networks was randomly generated. Over ten generations, the population's mean fitness increased two-fold. This evolution occurred in spite of an unintentional integer overflow problem in recurrent nodes in the hidden layer that caused outputs to oscillate. Our investigation of the oscillatory behavior showed that the mutual information of inputs and outputs was sufficient for the reactive behaviors observed. While we had predicted that both modularity and sparsity would follow the same trend as fitness, neither did so. Instead, selection gradients

  12. Prediction of Disease Causing Non-Synonymous SNPs by the Artificial Neural Network Predictor NetDiseaseSNP

    DEFF Research Database (Denmark)

    Johansen, Morten Bo; Gonzalez-Izarzugaza, Jose Maria; Brunak, Søren

    2013-01-01

    We have developed a sequence conservation-based artificial neural network predictor called NetDiseaseSNP which classifies nsSNPs as disease-causing or neutral. Our method uses the excellent alignment generation algorithm of SIFT to identify related sequences and a combination of 31 features assessing sequence conservation and the predicted surface accessibility to produce a single score which can be used to rank nsSNPs based on their potential to cause disease. NetDiseaseSNP successfully classifies disease-causing and neutral mutations. In addition, we show that NetDiseaseSNP discriminates cancer driver and passenger mutations satisfactorily. Our method outperforms other state-of-the-art methods on several disease/neutral datasets as well as on cancer driver/passenger mutation datasets and can thus be used to pinpoint and prioritize plausible disease candidates among nsSNPs for further...

  13. An efficient neural network approach to dynamic robot motion planning.

    Science.gov (United States)

    Yang, S X; Meng, M

    2000-03-01

    In this paper, a biologically inspired neural network approach to real-time collision-free motion planning of mobile robots or robot manipulators in a nonstationary environment is proposed. Each neuron in the topologically organized neural network has only local connections, whose neural dynamics is characterized by a shunting equation. Thus the computational complexity linearly depends on the neural network size. The real-time robot motion is planned through the dynamic activity landscape of the neural network without any prior knowledge of the dynamic environment, without explicitly searching over the free workspace or the collision paths, and without any learning procedures. Therefore it is computationally efficient. The global stability of the neural network is guaranteed by qualitative analysis and the Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies.
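
    A simplified grid-world version of the shunting-equation dynamics can convey the idea: one neuron per cell, a large positive external input at the target, negative inputs at obstacles, and the path read off the converged activity landscape. The parameter values, connection weights and map below are illustrative choices, not the authors' settings.

      import numpy as np

      H, W = 10, 10
      A, B, D, E = 10.0, 1.0, 1.0, 100.0          # decay, upper/lower bounds, external input magnitude
      obstacles = {(4, j) for j in range(2, 9)}    # a wall with gaps at both ends
      start, target = (8, 5), (0, 5)

      I = np.zeros((H, W))
      I[target] = E
      for r, c in obstacles:
          I[r, c] = -E

      def neighbors(r, c):
          return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr or dc) and 0 <= r + dr < H and 0 <= c + dc < W]

      x = np.zeros((H, W))
      dt = 0.01
      for _ in range(2000):                        # Euler integration of the shunting equation
          xn = x.copy()
          for r in range(H):
              for c in range(W):
                  excite = max(I[r, c], 0.0) + sum(max(x[rr, cc], 0.0) for rr, cc in neighbors(r, c))
                  inhibit = max(-I[r, c], 0.0)
                  xn[r, c] += dt * (-A * x[r, c] + (B - x[r, c]) * excite - (D + x[r, c]) * inhibit)
          x = xn

      # Follow the activity landscape from start to target, always moving uphill.
      path, pos = [start], start
      for _ in range(H * W):
          if pos == target:
              break
          pos = max(neighbors(*pos), key=lambda p: x[p])
          path.append(pos)
      print(path)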

  14. Neural network approach to parton distributions fitting

    CERN Document Server

    Piccione, Andrea; Forte, Stefano; Latorre, Jose I.; Rojo, Joan; Piccione, Andrea; Rojo, Joan

    2006-01-01

    We will show an application of neural networks to extract information on the structure of hadrons. A Monte Carlo over experimental data is performed to correctly reproduce data errors and correlations. A neural network is then trained on each Monte Carlo replica via a genetic algorithm. Results on the proton and deuteron structure functions, and on the nonsinglet parton distribution will be shown.
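
    The Monte Carlo replica idea can be sketched as follows: pseudo-data replicas are drawn within the experimental errors, one small network is fitted per replica, and the spread of the fits gives the uncertainty. The "data" below are an invented one-dimensional toy, and standard gradient training stands in for the genetic algorithm used in the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      x = np.linspace(0.01, 0.9, 30)
      y_true = x ** 0.5 * (1 - x) ** 3                      # invented underlying curve
      sigma = 0.02 + 0.03 * rng.random(x.size)              # invented experimental errors
      y_data = y_true + sigma * rng.normal(size=x.size)

      n_rep = 50
      fits = []
      for r in range(n_rep):
          y_rep = y_data + sigma * rng.normal(size=x.size)  # Monte Carlo replica of the data
          net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                             random_state=r).fit(x[:, None], y_rep)
          fits.append(net.predict(x[:, None]))

      fits = np.array(fits)
      central, uncertainty = fits.mean(axis=0), fits.std(axis=0)   # ensemble mean and spread
      print(np.round(central[:5], 3), np.round(uncertainty[:5], 3))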

  15. BrainSegNet: a convolutional neural network architecture for automated segmentation of human brain structures.

    Science.gov (United States)

    Mehta, Raghav; Majumdar, Aabhas; Sivaswamy, Jayanthi

    2017-04-01

    Automated segmentation of cortical and noncortical human brain structures has been hitherto approached using nonrigid registration followed by label fusion. We propose an alternative approach for this using a convolutional neural network (CNN) which classifies a voxel into one of many structures. Four different kinds of two-dimensional and three-dimensional intensity patches are extracted for each voxel, providing local and global (context) information to the CNN. The proposed approach is evaluated on five different publicly available datasets which differ in the number of labels per volume. The obtained mean Dice coefficient varied according to the number of labels, for example, it is [Formula: see text] and [Formula: see text] for datasets with the least (32) and the most (134) number of labels, respectively. These figures are marginally better or on par with those obtained with the current state-of-the-art methods on nearly all datasets, at a reduced computational time. The consistently good performance of the proposed method across datasets and no requirement for registration make it attractive for many applications where reduced computational time is necessary.

  16. Estimação do volume de árvores utilizando redes neurais artificiais Estimate of tree volume using artificial neural nets

    Directory of Open Access Journals (Sweden)

    Eric Bastos Gorgens

    2009-12-01

    Full Text Available An artificial neural network consists of a set of units containing mathematical functions connected by weights. Such nets are capable of learning by means of synaptic weight modification, generalizing what they have learned to other, unknown data sets. A neural network project comprises three stages: pre-processing, processing and post-processing of the data. One of the classical problems that can be approached with networks is function approximation, and tree volume estimation belongs to this group. Four different architectures, five pre-processings and two activation functions were used. The nets whose estimates were statistically equal to the observed data were also analyzed with respect to residuals and volume distribution, and compared with the volume estimates given by the Schumacher and Hall model. The neural nets formed by neurons whose activation function was exponential produced estimates statistically equal to the observed data. The nets trained with data normalized by the linear interpolation method and equalized showed the best performance in the estimation.

  17. A first approach towards an Urdu WordNet

    OpenAIRE

    Ahmed, Tafseer; Hautli, Annette

    2011-01-01

    This paper reports on a first experiment with developing a lexical knowledge resource for Urdu on the basis of Hindi WordNet. Due to the structural similarity of Urdu and Hindi, we can focus on overcoming the differences in the script systems of the two languages by using transliterators. Various natural language processing tools, among them a computational semantics based on the Urdu ParGram grammar, can use the resulting basic lexical knowledge base for Urdu.

  18. Data Normalization to Accelerate Training for Linear Neural Net to Predict Tropical Cyclone Tracks

    Directory of Open Access Journals (Sweden)

    Jian Jin

    2015-01-01

    Full Text Available When a pure linear neural network (PLNN) is used to predict tropical cyclone tracks (TCTs) in the South China Sea, whether the data are normalized or not greatly affects the training process. In this paper, the min.-max. method and the normal distribution method, instead of the standard normal distribution, are applied to the TCT data before modeling. We propose experimental schemes in which, with the min.-max. method, the min.-max. value pair of each variable is mapped to (−1, 1) and (0, 1); with the normal distribution method, each variable's mean and standard deviation pair is set to (0, 1) and (100, 1). We present the following results: (1) data scaled to similar intervals have similar effects, no matter whether the min.-max. or the normal distribution method is used; (2) mapping data to around 0 gives much faster training than mapping them to intervals far away from 0 or using unnormalized raw data, although all of them can approach the same lower error level after a certain number of steps, as seen from their training error curves. This can be useful for deciding on the data normalization method when a PLNN is used individually.
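
    The two preprocessing schemes compared in the paper can be written in a few lines; the sketch below (with a made-up predictor matrix) maps each variable to (−1, 1) or (0, 1) with the min.-max. method, and to the mean/standard-deviation pairs (0, 1) or (100, 1) with the normal distribution method.

      import numpy as np

      X = np.random.default_rng(0).normal(50, 10, size=(500, 6))   # made-up predictor matrix

      def min_max(X, lo=-1.0, hi=1.0):
          mn, mx = X.min(axis=0), X.max(axis=0)
          return lo + (X - mn) / (mx - mn) * (hi - lo)

      def normal_dist(X, mean=0.0, std=1.0):
          return (X - X.mean(axis=0)) / X.std(axis=0) * std + mean

      X_pm1  = min_max(X, -1.0, 1.0)       # min.-max. to (-1, 1)
      X_01   = min_max(X,  0.0, 1.0)       # min.-max. to (0, 1)
      X_n01  = normal_dist(X, 0.0, 1.0)    # mean 0, std 1
      X_n100 = normal_dist(X, 100.0, 1.0)  # mean 100, std 1 (far from 0, hence slower training)

      for name, Z in [("(-1,1)", X_pm1), ("(0,1)", X_01), ("N(0,1)", X_n01), ("N(100,1)", X_n100)]:
          print(name, np.round(Z.mean(), 2), np.round(Z.min(), 2), np.round(Z.max(), 2))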

  19. Neural-Net Processing of Characteristic Patterns From Electronic Holograms of Vibrating Blades

    Science.gov (United States)

    Decker, Arthur J.

    1999-01-01

    Finite-element-model-trained artificial neural networks can be used to process efficiently the characteristic patterns or mode shapes from electronic holograms of vibrating blades. The models used for routine design may not yet be sufficiently accurate for this application. This document discusses the creation of characteristic patterns, compares model-generated and experimental characteristic patterns, and discusses the neural networks that transform the characteristic patterns into strain or damage information. The current potential to adapt electronic holography to spin rigs, wind tunnels and engines provides an incentive to have accurate finite element models for training neural networks.

  20. Oregon Sustainability Center: Weighing Approaches to Net Zero

    Energy Technology Data Exchange (ETDEWEB)

    Regnier, Cindy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Robinson, Alastair [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Settlemyre, Kevin [Sustainable IQ, Inc., Arlington, MA (United States); Bosnic, Zorana [HOK, San Francisco, CA (United States)

    2013-10-01

    The Oregon Sustainability Center (OSC) was to represent a unique public/private partnership between the city of Portland, Oregon, state government, higher education, non-profit organizations, and the business community. A unique group of stakeholders partnered with the U.S. Department of Energy (DOE) technical expert team (TET) to collaboratively identify, analyze, and evaluate solutions to enable the OSC to become a high-performance sustainability landmark in downtown Portland. The goal was to build a new, low-energy mixed-use urban high-rise that consumes at least 50 percent less energy than requirements set by Energy Standard 90.1-2007 of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE), the American National Standards Institute (ANSI), and the Illuminating Engineering Society of America (IESNA) as part of DOE’s Commercial Building Partnerships (CBP) program.1 In addition, the building design was to incorporate renewable energy sources that would account for the remaining energy consumption, resulting in a net zero building. The challenge for the CBP DOE technical team was to evaluate factors of risk and components of resiliency in the current net zero energy design and analyze that design to see if the same high performance could be achieved by alternative measures at lower costs. In addition, the team was to use a “lens of scalability” to assess whether or not the strategies could be applied to more projects. However, a key component of the required project funding did not pass, and therefore this innovative building design was discontinued while it was in the design development stage.

  1. Competition and Cooperation in Neural Nets : U.S.-Japan Joint Seminar

    CERN Document Server

    Arbib, Michael

    1982-01-01

    The human brain, with its hundred billion or more neurons, is both one of the most complex systems known to man and one of the most important. The last decade has seen an explosion of experimental research on the brain, but little theory of neural networks beyond the study of electrical properties of membranes and small neural circuits. Nonetheless, a number of workers in Japan, the United States and elsewhere have begun to contribute to a theory which provides techniques of mathematical analysis and computer simulation to explore properties of neural systems containing immense numbers of neurons. Recently, it has been gradually recognized that rather independent studies of the dynamics of pattern recognition, pattern formation, motor control, self-organization, etc., in neural systems do in fact make use of common methods. We find that a "competition and cooperation" type of interaction plays a fundamental role in parallel information processing in the brain. The present volume brings together 23 papers ...

  2. A Dynamic Neural Network Approach to CBM

    Science.gov (United States)

    2011-03-15

    Therefore post-processing is needed to extract the time difference between corresponding events from which to calculate the crankshaft rotational speed... potentially already available from existing sensors (such as a crankshaft timing device) and a Neural Network processor to carry out the calculation. As... files are designated with the "_genmod" suffix. These files were the sources for the training and testing sets and made the extraction process easy

  3. Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring

    Science.gov (United States)

    Decker, Arthur J.

    2004-01-01

    A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers. A more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operation of ground test facilities or aviation safety in general, by allowing the detection of the gradual onset of structural changes and damage.

  4. ε-Net Approach to Sensor k-Coverage

    Directory of Open Access Journals (Sweden)

    Giordano Fusco

    2010-01-01

    Full Text Available Wireless sensors rely on battery power, and in many applications it is difficult or prohibitive to replace them. Hence, in order to prolong the system's lifetime, some sensors can be kept inactive while others perform all the tasks. In this paper, we study the k-coverage problem of activating the minimum number of sensors to ensure that every point in the area is covered by at least k sensors. This ensures higher fault tolerance and robustness and improves many operations, among which position detection and intrusion detection. The k-coverage problem is trivially NP-complete, and hence we can only provide approximation algorithms. In this paper, we present an algorithm based on an extension of the classical ε-net technique. This method gives an O(log M)-approximation, where M is the number of sensors in an optimal solution. We do not make any particular assumption on the shape of the areas covered by each sensor, besides that they must be closed, connected, and without holes.
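
    For contrast with the ε-net construction of the paper, a plain greedy heuristic for the same k-coverage problem is easy to state: repeatedly activate the sensor that covers the most points still lacking coverage. The sketch below uses random sensor and point positions and a fixed sensing radius, all invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      sensors = rng.random((60, 2))          # candidate sensor locations in the unit square (invented)
      points = rng.random((200, 2))          # points that must be covered by at least k sensors
      radius, k = 0.25, 2

      covers = np.linalg.norm(points[:, None, :] - sensors[None, :, :], axis=2) <= radius
      deficit = np.full(len(points), k)      # remaining coverage still needed per point
      active = []

      while deficit.max() > 0:
          gain = covers[deficit > 0].sum(axis=0)      # deficient points each sensor would cover
          for s in active:                            # never re-pick an already active sensor
              gain[s] = -1
          best = int(np.argmax(gain))
          if gain[best] <= 0:
              raise RuntimeError("k-coverage is infeasible with these candidate sensors")
          active.append(best)
          deficit = np.maximum(deficit - covers[:, best].astype(int), 0)

      print(f"activated {len(active)} of {len(sensors)} sensors for {k}-coverage")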

  5. A Constructive Neural-Network Approach to Modeling Psychological Development

    Science.gov (United States)

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  6. Applying Artificial Neural Networks to Estimate Net Radiation at Surface Using the Synergy between GERB-SEVIRI and Ground Data

    Science.gov (United States)

    Geraldo Ferreira, A.; Soria, Emilio; Lopez-Baeza, Ernesto; Vila, Joan; Serrano, Antonio J.; Martinez, Marcelino; Velazquez Blazquez, Almudena; Clerbaux, Nicolas

    This paper describes the results obtained using Artificial Neural Network (ANN) models to estimate the diurnal cycle of net radiation (Rn) at the surface. The data used as input parameters in the ANN model were those measured by the Geostationary Earth Radiation Budget (GERB-1) instrument on board the Meteosat 9 satellite. The data concerning Rn at the surface were collected at the Valencia Anchor Station (VAS), a ground reference meteorological station for the validation of low spatial resolution sensors situated near the city of Valencia, Spain. These data refer to the periods July 31st-August 6th 2006 and June 19th-August 18th 2007. Both GERB-1 and VAS data are used to train and validate the ANN model. The same data set is also used to develop and validate a Multivariate Linear Regression (MLR) model. A comparison between the estimates provided by the ANN and the MLR models has been carried out; the results obtained with the neural model outperform those of the linear model. Moreover, the low values of the error indexes show that neural models can be used as an alternative methodology to make atmospheric corrections.

  7. Neural substrates of approach-avoidance conflict decision-making.

    Science.gov (United States)

    Aupperle, Robin L; Melrose, Andrew J; Francisco, Alex; Paulus, Martin P; Stein, Murray B

    2015-02-01

    Animal approach-avoidance conflict paradigms have been used extensively to operationalize anxiety, quantify the effects of anxiolytic agents, and probe the neural basis of fear and anxiety. Results from human neuroimaging studies support that a frontal-striatal-amygdala neural circuitry is important for approach-avoidance learning. However, the neural basis of decision-making is much less clear in this context. Thus, we combined a recently developed human approach-avoidance paradigm with functional magnetic resonance imaging (fMRI) to identify neural substrates underlying approach-avoidance conflict decision-making. Fifteen healthy adults completed the approach-avoidance conflict (AAC) paradigm during fMRI. Analyses of variance were used to compare conflict to nonconflict (avoid-threat and approach-reward) conditions and to compare level of reward points offered during the decision phase. Trial-by-trial amplitude modulation analyses were used to delineate brain areas underlying decision-making in the context of approach/avoidance behavior. Conflict trials as compared to the nonconflict trials elicited greater activation within bilateral anterior cingulate cortex, anterior insula, and caudate, as well as right dorsolateral prefrontal cortex (PFC). Right caudate and lateral PFC activation was modulated by level of reward offered. Individuals who showed greater caudate activation exhibited less approach behavior. On a trial-by-trial basis, greater right lateral PFC activation related to less approach behavior. Taken together, results suggest that the degree of activation within prefrontal-striatal-insula circuitry determines the degree of approach versus avoidance decision-making. Moreover, the degree of caudate and lateral PFC activation related to individual differences in approach-avoidance decision-making. Therefore, the approach-avoidance conflict paradigm is ideally suited to probe anxiety-related processing differences during approach-avoidance decision

  8. Neural Network Approach To Sensory Fusion

    Science.gov (United States)

    Pearson, John C.; Gelfand, Jack J.; Sullivan, W. E.; Peterson, Richard M.; Spence, Clay D.

    1988-08-01

    We present a neural network model for sensory fusion based on the design of the visual/acoustic target localization system of the barn owl. This system adaptively fuses its separate visual and acoustic representations of object position into a single joint representation used for head orientation. The building block in this system, as in much of the brain, is the neuronal map. Neuronal maps are large arrays of locally interconnected neurons that represent information in a map-like form, that is, parameter values are systematically encoded by the position of neural activation in the array. The computational load is distributed to a hierarchy of maps, and the computation is performed in stages by transforming the representation from map to map via the geometry of the projections between the maps and the local interactions within the maps. For example, azimuthal position is computed from the frequency and binaural phase information encoded in the signals of the acoustic sensors, while elevation is computed in a separate stream using binaural intensity information. These separate streams are merged in their joint projection onto the external nucleus of the inferior colliculus, a two-dimensional array of cells which contains a map of acoustic space. This acoustic map, and the visual map of the retina, jointly project onto the optic tectum, creating a fused visual/acoustic representation of position in space that is used for object localization. In this paper we describe our mathematical model of the stage of visual/acoustic fusion in the optic tectum. The model assumes that the acoustic projection from the external nucleus onto the tectum is roughly topographic and one-to-many, while the visual projection from the retina onto the tectum is topographic and one-to-one. A simple process of self-organization alters the strengths of the acoustic connections, effectively forming a focused beam of strong acoustic connections whose inputs are coincident with the visual inputs

  9. Multiple neural network approaches to clinical expert systems

    Science.gov (United States)

    Stubbs, Derek F.

    1990-08-01

    We briefly review the concept of computer aided medical diagnosis and more extensively review the existing literature on neural network applications in the field. Neural networks can function as simple expert systems for diagnosis or prognosis. Using a public database we develop a neural network for the diagnosis of a major presenting symptom, while discussing the development process and possible approaches. Biomedicine is an incredibly diverse and multidisciplinary field and it is not surprising that neural networks, with their many applications, are finding more and more uses in the highly non-linear field of biomedicine. I want to concentrate on neural networks as medical expert systems for clinical diagnosis or prognosis. Expert systems started out as a set of computerized "if-then" rules. Everything was reduced to boolean logic and the promised land of computer experts was said to be in sight. It never came. Why? First, the computer code explodes as the number of "ifs" increases, and all the "ifs" have to interact. Second, experts are not very good at reducing expertise to language; it turns out that experts recognize patterns and have non-verbal, left-brain intuitive decision processes. Third, learning by example rather than learning by rule is the way natural brains work, and making computers work by rule-learning is hideously labor intensive. Neural networks can learn from example. They learn the results

  10. Probabilistic and Other Neural Nets in Multi-Hole Probe Calibration and Flow Angularity Pattern Recognition

    Science.gov (United States)

    Baskaran, Subbiah; Ramachandran, Narayanan; Noever, David

    1998-01-01

    The use of probabilistic (PNN) and multilayer feed forward (MLFNN) neural networks are investigated for calibration of multi-hole pressure probes and the prediction of associated flow angularity patterns in test flow fields. Both types of networks are studied in detail for their calibration and prediction characteristics. The current formalism can be applied to any multi-hole probe, however the test results for the most commonly used five-hole Cone and Prism probe types alone are reported in this article.

  11. Neural network approaches to dynamic collision-free trajectory generation.

    Science.gov (United States)

    Yang, S X; Meng, M

    2001-01-01

    In this paper, dynamic collision-free trajectory generation in a nonstationary environment is studied using biologically inspired neural network approaches. The proposed neural network is topologically organized, where the dynamics of each neuron is characterized by a shunting equation or an additive equation. The state space of the neural network can be either the Cartesian workspace or the joint space of multi-joint robot manipulators. There are only local lateral connections among neurons. The real-time optimal trajectory is generated through the dynamic activity landscape of the neural network without explicitly searching over the free space nor the collision paths, without explicitly optimizing any global cost functions, without any prior knowledge of the dynamic environment, and without any learning procedures. Therefore the model algorithm is computationally efficient. The stability of the neural network system is guaranteed by the existence of a Lyapunov function candidate. In addition, this model is not very sensitive to the model parameters. Several model variations are presented and the differences are discussed. As examples, the proposed models are applied to generate collision-free trajectories for a mobile robot to solve a maze-type of problem, to avoid concave U-shaped obstacles, to track a moving target and at the same to avoid varying obstacles, and to generate a trajectory for a two-link planar robot with two targets. The effectiveness and efficiency of the proposed approaches are demonstrated through simulation and comparison studies.

  12. Using artificial neural network approach for modelling rainfall–runoff ...

    Indian Academy of Sciences (India)

    In Taiwan, owing to the nonuniform temporal and spatial distribution of rainfall and high mountains all over the country, hydrologic systems are very complex. Therefore, preventing and controlling flood ...

  13. The process of learning in neural net models with Poisson and Gauss connectivities.

    Science.gov (United States)

    Sivridis, L; Kotini, A; Anninos, P

    2008-01-01

    In this study we examined the dynamic behavior of isolated and non-isolated neural networks with chemical markers that follow a Poisson or Gauss distribution of connectivity. The Poisson distribution shows higher activity in comparison with the Gauss distribution, although the latter has more connections, which are obliterated due to randomness. We examined 57 hematoxylin and eosin stained sections from an equal number of autopsy specimens with a diagnosis of "cerebral matter within normal limits". Neural counting was carried out in 5 continuous optic fields, with the use of a simple optical microscope connected to a computer (software program Nikon Act-1 vers. 2). The number of neurons that corresponded to a surface was equal to 0.15 mm(2). There was a gradual reduction in the number of neurons as age increased: a mean value of 45.8 neurons/0.15 mm(2) was observed within the age range 21-25, 33 neurons/0.15 mm(2) within the age range 41-45, and 19.3 neurons/0.15 mm(2) within the age range 56-60 years. After the age of 60 it was observed that the number of neurons per unit area stopped decreasing. A correlation was observed between these experimental findings and the theoretical neural model developed by professor Anninos and his colleagues. An equivalence was established between the mean numbers of neurons of the above mentioned age groups and the highest possible number of synaptic connections per neuron (the highest number of synaptic connections corresponded to the age group 21-25). We then used both inhibitory and excitatory post-synaptic potentials and applied these values to the Poisson and Gauss distributions, whereas the neuron threshold was varied between 3 and 5. According to the obtained phase diagrams, the hysteresis loops decrease as age increases. These findings are significant, as the hysteresis loops can be regarded as the basis for short-term memory.

  14. Fast neural-net based fake track rejection in the LHCb reconstruction

    CERN Document Server

    De Cian, Michel; Seyfert, Paul; Stahl, Sascha

    2017-01-01

    A neural-network based algorithm to identify fake tracks in the LHCb pattern recognition is presented. This algorithm, called ghost probability, retains more than 99 % of well reconstructed tracks while reducing the number of fake tracks by 60 %. It is fast enough to fit into the CPU time budget of the software trigger farm and thus reduces the combinatorics of the decay reconstructions, as well as the number of tracks that need to be processed by the particle identification algorithms. As a result, it contributes strongly to running the same reconstruction online and offline in the LHCb experiment in Run II of the LHC.

  15. An Improved Approach for Estimating Daily Net Radiation over the Heihe River Basin

    Directory of Open Access Journals (Sweden)

    Bingfang Wu

    2017-01-01

    Full Text Available Net radiation plays an essential role in determining the thermal conditions of the Earth’s surface and is an important parameter for the study of land-surface processes and global climate change. In this paper, an improved satellite-based approach to estimate the daily net radiation is presented, in which sunshine duration was derived from the geostationary meteorological satellite (FY-2D) cloud classification product, the monthly empirical Ångström coefficients a_s and b_s for net shortwave radiation were calibrated by spatial fitting of the ground data from 1997 to 2006, and the daily net longwave radiation was calibrated with ground data from 2007 to 2010 over the Heihe River Basin in China. The estimated daily net radiation values were validated against ground data for 12 months in 2008 at four stations with different underlying surface types. The average coefficient of determination (R2) was 0.8489, and the average Nash-Sutcliffe efficiency (NSE) was 0.8356. The close agreement between the estimated daily net radiation and observations indicates that the proposed method is promising, especially given the comparison between the spatial distribution and the interpolation of sunshine duration. Potential applications include climate research, energy balance studies and the estimation of global evapotranspiration.
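    For orientation, the sketch below shows the standard Ångström-Prescott / FAO-56-style bookkeeping that this kind of daily net-radiation estimate builds on. The coefficient values (a_s, b_s, albedo) and the longwave parameterisation here are textbook defaults, not the basin-calibrated values from the study.

```python
import math

SIGMA = 4.903e-9  # Stefan-Boltzmann constant, MJ K^-4 m^-2 day^-1

def daily_net_radiation(n_sun, N_daylight, Ra, t_max_k, t_min_k, ea_kpa,
                        a_s=0.25, b_s=0.50, albedo=0.23):
    """n_sun: sunshine duration [h]; N_daylight: maximum possible sunshine [h];
    Ra: extraterrestrial radiation [MJ m-2 d-1]; temperatures in kelvin;
    ea_kpa: actual vapour pressure [kPa]. Returns (Rns, Rnl, Rn)."""
    Rs = (a_s + b_s * n_sun / N_daylight) * Ra           # global shortwave (Angstrom-Prescott)
    Rso = (a_s + b_s) * Ra                               # clear-sky shortwave
    Rns = (1.0 - albedo) * Rs                            # net shortwave
    Rnl = (SIGMA * (t_max_k**4 + t_min_k**4) / 2.0       # net outgoing longwave (FAO-56 style)
           * (0.34 - 0.14 * math.sqrt(ea_kpa))
           * (1.35 * Rs / Rso - 0.35))
    return Rns, Rnl, Rns - Rnl

# Illustrative inputs only; real runs would use satellite-derived sunshine duration.
print(daily_net_radiation(n_sun=8.5, N_daylight=13.0, Ra=38.0,
                          t_max_k=303.15, t_min_k=291.15, ea_kpa=2.1))
```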

  16. From image edges to geons to viewpoint-invariant object models: a neural net implementation

    Science.gov (United States)

    Biederman, Irving; Hummel, John E.; Gerhardstein, Peter C.; Cooper, Eric E.

    1992-03-01

    Three striking and fundamental characteristics of human shape recognition are its invariance with viewpoint in depth (including scale), its tolerance of unfamiliarity, and its robustness to the actual contours present in an image (as long as the same convex parts [geons] can be activated). These characteristics are expressed in an implemented neural network model (Hummel & Biederman, 1992) that takes a line drawing of an object as input and generates a structural description of geons and their relations, which is then used for object classification. The model's capacity for structural description derives from its solution to the dynamic binding problem of neural networks: independent units representing an object's parts (in terms of their shape attributes and interrelations) are bound temporarily when those attributes occur in conjunction in the system's input. Temporary conjunctions of attributes are represented by synchronized activity among the units representing those attributes. Specifically, the model induces temporal correlation in the firing of activated units to: (1) parse images into their constituent parts; (2) bind together the attributes of a part; and (3) determine the relations among the parts and bind them to the parts to which they apply. Because it conjoins independent units temporarily, dynamic binding allows tremendous economy of representation, and permits the representation to reflect an object's attribute structure. The model's recognition performance conforms well to recent results from shape priming experiments. Moreover, the manner in which the model's performance degrades due to accidental synchrony produced by an excess of phase sets suggests a basis for a theory of visual attention.

  17. Current approaches to model extracellular electrical neural microstimulation

    Directory of Open Access Journals (Sweden)

    Sébastien eJoucla

    2014-02-01

    Full Text Available Nowadays, high-density microelectrode arrays provide unprecedented possibilities to precisely activate spatially well-controlled central nervous system (CNS) areas. However, this requires optimizing stimulating devices, which in turn requires a good understanding of the effects of microstimulation on cells and tissues. In this context, modeling approaches provide flexible ways to predict the outcome of electrical stimulation in terms of CNS activation. In this paper, we present state-of-the-art modeling methods with sufficient details to allow the reader to rapidly build numerical models of neuronal extracellular microstimulation. These include (1) the computation of the electrical potential field created by the stimulation in the tissue, and (2) the response of a target neuron to this field. Two main approaches are described: first, we describe the classical hybrid approach that combines finite element modeling of the potential field with the calculation of the neuron’s response in a cable equation framework (compartmentalized neuron models). Then, we present a whole finite element approach that allows the simultaneous calculation of the extracellular and intracellular potentials, by representing the neuronal membrane with a thin-film approximation. This approach was previously introduced in the frame of neural recording, but has never been implemented to determine the effect of extracellular stimulation on the neural response at a sub-compartment level. Here, we show on an example that the latter modeling scheme can reveal important sub-compartment behavior of the neural membrane that cannot be resolved using the hybrid approach. The goal of this paper is also to describe in detail the practical implementation of these methods to allow the reader to easily build new models using standard software packages. These modeling paradigms, depending on the situation, should help build more efficient high-density neural prostheses for CNS rehabilitation.
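    A rough feel for the "hybrid" scheme can be given in a few lines. In the sketch below, the finite-element step is replaced by the analytical point-source potential of a microelectrode in a homogeneous medium, V_e = I / (4·pi·sigma·r), which is then applied to a passive, compartmentalised cable; all geometric and electrical constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

n, dx = 101, 10e-6                   # number of compartments, compartment length [m] (assumed)
sigma = 0.3                          # extracellular conductivity [S/m] (assumed)
I_stim = -10e-6                      # cathodic pulse amplitude [A] (assumed)
elec_x, elec_h = 0.0, 50e-6          # electrode position along / above the fibre [m] (assumed)

cm, gm, ga = 1e-2, 1.0, 5e3          # membrane capacitance, leak, axial coupling (illustrative units)

x = (np.arange(n) - n // 2) * dx                 # compartment positions along the fibre
r = np.sqrt((x - elec_x) ** 2 + elec_h ** 2)     # distance from each compartment to the electrode
Ve = I_stim / (4 * np.pi * sigma * r)            # point-source extracellular potential [V]

def lap(u):
    """Second spatial difference; ends handled crudely (no boundary current)."""
    return np.concatenate(([0.0], np.diff(u, 2), [0.0]))

V = np.zeros(n)                                  # membrane potential deviation from rest [V]
dt, t_pulse, t_end = 1e-7, 100e-6, 300e-6
for step in range(int(t_end / dt)):
    on = 1.0 if step * dt < t_pulse else 0.0     # stimulation pulse gate
    dV = (ga * lap(V) + on * ga * lap(Ve) - gm * V) / cm
    V = V + dt * dV

# lap(Ve) plays the role of the classic "activating function"; the compartmental
# solution V shows the passive membrane response it drives.
print("peak depolarisation [mV]:", round(1e3 * V.max(), 3))
```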

  18. ASP.NET MVC 4 recipes a problem-solution approach

    CERN Document Server

    Ciliberti, John

    2013-01-01

    ASP.NET MVC 4 Recipes is a practical guide for developers creating modern web applications, cutting through the complexities of ASP.NET, jQuery, Knockout.js and HTML 5 to provide straightforward solutions to common web development problems using proven methods based on best practices. The problem-solution approach gets you in, out, and back to work quickly while deepening your understanding of the underlying platform and how to develop with it. Author John Ciliberti guides you through the framework and development tools, presenting typical challenges, along with code solutions and clear, conci

  19. Neural Correlates of Attentional Flexibility during Approach and Avoidance Motivation

    Science.gov (United States)

    Calcott, Rebecca D.; Berkman, Elliot T.

    2015-01-01

    Dynamic, momentary approach or avoidance motivational states have downstream effects on eventual goal success and overall well-being, but there is still uncertainty about how those states affect the proximal neurocognitive processes (e.g., attention) that mediate the longer-term effects. Attentional flexibility, or the ability to switch between different attentional foci, is one such neurocognitive process that influences outcomes in the long run. The present study examined how approach and avoidance motivational states affect the neural processes involved in attentional flexibility using fMRI, with the aim of determining whether flexibility operates via different neural mechanisms under these different states. Attentional flexibility was operationalized as subjects’ ability to switch between global and local stimulus features. In addition to subjects’ motivational state, the task context was manipulated by varying the ratio of global to local trials in a block, in light of recent findings about the moderating role of context on motivation-related differences in attentional flexibility. The neural processes involved in attentional flexibility were found to differ under approach versus avoidance states. First, differences in the preparatory activity in key brain regions suggested that subjects’ preparedness to switch was influenced by motivational state (anterior insula) and the interaction between motivation and context (superior temporal gyrus, inferior parietal lobule). Additionally, we observed motivation-related differences in the anterior cingulate cortex during switching. These results provide initial evidence that motivation-induced behavioral changes may arise via different mechanisms in approach versus avoidance motivational states. PMID:26000735

  20. Stabilizing patterns in time: Neural network approach.

    Science.gov (United States)

    Ben-Shushan, Nadav; Tsodyks, Misha

    2017-12-01

    Recurrent and feedback networks are capable of holding dynamic memories. Nonetheless, training a network for that task is challenging. In order to do so, one must cope with non-linear propagation of errors in the system: small deviations from the desired dynamics due to error or inherent noise might have a dramatic effect in the future. A method to cope with these difficulties is thus needed. In this work we focus on a recurrent network with linear activation functions and a binary output unit, and we characterize its ability to reproduce a temporal sequence of actions over its output unit. We suggest casting the temporal learning problem as a perceptron problem. In the discrete case a finite margin appears, providing the network, to some extent, with robustness to noise, for which it performs perfectly (i.e. producing a desired sequence for an arbitrary number of cycles flawlessly). In the continuous case the margin approaches zero when the output unit changes its state, hence the network is only able to reproduce the sequence with slight jitters. Numerical simulations suggest that in the discrete time case, the longest sequence that can be learned scales, at best, as the square root of the network size. A dramatic effect occurs when learning several short sequences in parallel: their total length can substantially exceed the length of the longest single sequence the network can learn. This model easily generalizes to an arbitrary number of output units, which boosts its performance. This effect is demonstrated by considering two practical examples of sequence learning. This work suggests a way to overcome stability problems in training recurrent networks and further quantifies the performance of a network under the specific learning scheme.
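    The "cast it as a perceptron problem" idea can be sketched directly. Below, a fixed linear recurrent network is driven by the desired binary output (teacher forcing), the visited states become perceptron training patterns, and the readout weights are learned with the classic perceptron rule before the network is replayed autonomously. Network size, weight scaling and the target sequence are illustrative assumptions, and this is an illustration of the general idea rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 30                                   # network size, sequence length (assumed)
target = rng.choice([-1.0, 1.0], size=T)         # desired binary output sequence

J = rng.normal(0, 0.9 / np.sqrt(N), (N, N))      # fixed recurrent weights (linear units)
u = rng.normal(0, 1.0, N)                        # feedback weights from the output unit

# Teacher-forced trajectory: x_{t+1} = J x_t + u * y_t with linear activations.
X = np.zeros((T, N))
x = rng.normal(0, 1.0, N)
for t in range(T):
    X[t] = x
    x = J @ x + u * target[t]

# Perceptron learning of the binary readout y_t = sign(w . x_t).
w = np.zeros(N)
for _ in range(500):
    errors = 0
    for t in range(T):
        if np.sign(w @ X[t]) != target[t]:
            w += target[t] * X[t]
            errors += 1
    if errors == 0:
        break

# Autonomous replay: feed the network's own binary output back in.
x, out = X[0], []
for t in range(T):
    y = np.sign(w @ x) or 1.0                    # binary output unit (break ties to +1)
    out.append(y)
    x = J @ x + u * y
print("replayed correctly:", np.array_equal(np.array(out), target))
```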

  1. HAWC Analysis of the Crab Nebula Using Neural-Net Energy Reconstruction

    Science.gov (United States)

    Marinelli, Samuel; HAWC Collaboration

    2017-01-01

    The HAWC (High-Altitude Water-Cherenkov) experiment is a TeV γ-ray observatory located 4100 m above sea level on the Sierra Negra mountain in Puebla, Mexico. The detector consists of 300 water-filled tanks, each instrumented with 4 photomultiplier tubes that utilize the water-Cherenkov technique to detect atmospheric air showers produced by cosmic γ rays. Construction of HAWC was completed in March 2015. The experiment's wide field of view (2 sr) and high duty cycle (> 95 %) make it a powerful survey instrument sensitive to pulsar wind nebulae, supernova remnants, active galactic nuclei, and other γ-ray sources. The mechanisms of particle acceleration at these sources can be studied by analyzing their energy spectra. To this end, we have developed an event-by-event energy-reconstruction algorithm employing an artificial neural network to estimate energies of primary γ rays. The Crab Nebula, the brightest source of TeV photons, makes an excellent calibration source for this technique. We will present preliminary results from an analysis of the Crab energy spectrum using this new energy-reconstruction method. This work was supported by the National Science Foundation.

  2. Hidden Treasures and Secret Pitfalls: Application of the Capability Approach to ParkinsonNet.

    Science.gov (United States)

    Canoy, Marcel; Faber, Marjan J; Munneke, Marten; Oortwijn, Wija; Nijkrake, Maarten J; Bloem, Bastiaan R

    2015-01-01

    In the Netherlands, the largest health technology assessment (HTA) program funds mainly (cost-)effectiveness studies and implementation research. The cost-effectiveness studies are usually controlled clinical trials which simultaneously collect cost data. The success of a clinical trial typically depends on the effect size for the primary outcome, such as health gains or mortality rates. A drawback is that in case of a negative primary outcome, relevant other (and perhaps more implicit) benefits might be missed. Conversely, positive trials can contain adverse outcomes that may also remain hidden. The capability approach (developed by Nobel Prize winner and philosopher Sen) is an instrument that may reveal such "hidden treasures and secret pitfalls" that lie embedded within clinical trials, beyond the more traditional outcomes. Here, we exemplify the possible merits of the capability approach using a large clinical trial (funded by the HTA program in the Netherlands) that aimed to evaluate the ParkinsonNet concept, an innovative network approach for Parkinson patients. This trial showed no effects for the primary outcome, but the ParkinsonNet concept tested in this study was nevertheless met with great enthusiasm and was rapidly implemented throughout an entire country, and meanwhile also internationally. We applied the capability approach to the ParkinsonNet concept, and this analysis yielded additional benefits within several capability domains. These findings seem to substantiate the claim that richer policy debates may ensue by applying the capability approach to clinical trial data, in addition to traditional outcomes.

  3. Generation of daily solar irradiation by means of artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, Adalberto N.; Tiba, Chigueru; Fraidenraich, Naum [Departamento de Energia Nuclear, da Universidade Federal de Pernambuco, Av. Prof. Luiz Freire, 1000 - CDU, CEP 50.740-540 Recife, Pernambuco (Brazil)

    2010-11-15

    The present study proposes the utilization of Artificial Neural Networks (ANN) as an alternative for generating synthetic series of daily solar irradiation. The sequences were generated from daily temporal series of a group of meteorological variables that were measured simultaneously. The data used were measured between the years 1998 and 2006 in two temperate climate localities of Brazil, Ilha Solteira (Sao Paulo) and Pelotas (Rio Grande do Sul). The estimates were made for the months of January, April, July and October, through two models which are distinguished by the use or non-use of measured bright sunshine hours as an input variable. An evaluation of the performance of the 56 months of solar irradiation generated by the ANN showed that, using the measured bright sunshine hours as an input variable (model 1), the RMSE obtained were less than or equal to 23.2%, and 43 of those months presented RMSE less than or equal to 12.3%. For the model that did not use the measured bright sunshine hours but used the daylight length (model 2), RMSE were obtained that varied from 8.5% to 37.5%, and 38 of those months presented RMSE less than or equal to 20.0%. A comparison of the monthly series for all of the years, carried out by means of the Kolmogorov-Smirnov test (at a confidence level of 99%), demonstrated that of the 16 series generated by the ANN models only two, obtained by model 2 for the months of April and July in Pelotas, presented a significant difference in relation to the distributions of the measured series, and that all mean deviations obtained were less than 0.39 MJ/m². It was also verified that the two ANN models were able to reproduce the principal statistical characteristics of the frequency distributions of the measured series, such as mean, mode, asymmetry and kurtosis. (author)

  4. A 3D Active Learning Application for NeMO-Net, the NASA Neural Multi-Modal Observation and Training Network for Global Coral Reef Assessment

    Science.gov (United States)

    van den Bergh, Jarrett; Schutz, Joey; Li, Alan; Chirayath, Ved

    2017-01-01

    NeMO-Net, the NASA neural multi-modal observation and training network for global coral reef assessment, is an open-source deep convolutional neural network and interactive active learning training software aiming to accurately assess the present and past dynamics of coral reef ecosystems through determination of percent living cover and morphology as well as mapping of spatial distribution. We present an interactive video game prototype for tablet and mobile devices where users interactively label morphology classifications over mm-scale 3D coral reef imagery captured using fluid lensing, to create a dataset that will be used to train NeMO-Net's convolutional neural network. The application currently allows users to classify preselected regions of coral in the Pacific and will be expanded to include additional regions captured using our NASA FluidCam instrument, presently the highest-resolution remote sensing benthic imaging technology capable of removing ocean wave distortion, as well as lower-resolution airborne remote sensing data from the ongoing NASA CORAL campaign. Active learning applications present a novel methodology for efficiently training large-scale neural networks wherein variances in identification can be rapidly mitigated against control data. NeMO-Net periodically checks users' input against pre-classified coral imagery to gauge their accuracy and utilizes in-game mechanics to provide classification training. Users actively communicate with a server and are requested to classify areas of coral for which other users had conflicting classifications, and to contribute their input to a larger database for ranking. In partnering with Mission Blue and IUCN, NeMO-Net leverages an international consortium of subject matter experts to classify areas of confusion identified by NeMO-Net and to generate additional labels crucial for identifying decision boundary locations in coral reef assessment.

  5. Log Defect Recognition Using CT-images and Neural Net Classifiers

    Science.gov (United States)

    Daniel L. Schmoldt; Pei Li; A. Lynn Abbott

    1995-01-01

    Although several approaches have been introduced to automatically identify internal log defects using computed tomography (CT) imagery, most of these have been feasibility efforts and consequently have had several limitations: (1) reports of classification accuracy are largely subjective, not statistical, (2) there has been no attempt to achieve real-time operation,...

  6. Artificial neural network based approach to EEG signal simulation.

    Science.gov (United States)

    Tomasevic, Nikola M; Neskovic, Aleksandar M; Neskovic, Natasa J

    2012-06-01

    In this paper a new approach to electroencephalogram (EEG) signal simulation based on artificial neural networks (ANN) is proposed. The aim was to simulate the spontaneous human EEG background activity based solely on experimentally acquired EEG data. Therefore, an EEG measurement campaign was conducted on a healthy awake adult in order to obtain an adequate ANN training data set. As a demonstration of the performance of the ANN based approach, comparisons were made against an autoregressive moving average (ARMA) filtering based method. Comprehensive quantitative and qualitative statistical analysis showed clearly that the EEG process obtained by the proposed method was in satisfactory agreement with the one obtained by measurements.
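    One common way to set up this kind of data-driven simulation, sketched below under assumptions of our own (the surrogate "recording", window length and network size are invented here, not taken from the paper), is to train a small network to predict the next sample from the previous p samples and then let it run freely on its own outputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
fs, secs, p = 128, 60, 16                       # sampling rate, duration, autoregressive order (assumed)

# Surrogate training signal standing in for measured EEG: mixed alpha/theta rhythms plus noise.
t = np.arange(fs * secs) / fs
eeg = (np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 6 * t)
       + 0.3 * rng.normal(size=t.size))

# Build (previous p samples -> next sample) training pairs.
X = np.array([eeg[i:i + p] for i in range(eeg.size - p)])
y = eeg[p:]

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X, y)

# Free-running generation: feed predictions (plus a small innovation term) back in.
window = list(eeg[:p])
synthetic = []
for _ in range(fs * 10):                        # 10 s of simulated signal
    nxt = net.predict(np.array(window)[None, :])[0] + 0.1 * rng.normal()
    synthetic.append(nxt)
    window = window[1:] + [nxt]
print("simulated samples:", len(synthetic))
```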

  7. Use of genetic programming, logistic regression, and artificial neural nets to predict readmission after coronary artery bypass surgery.

    Science.gov (United States)

    Engoren, Milo; Habib, Robert H; Dooner, John J; Schwann, Thomas A

    2013-08-01

    As many as 14 % of patients undergoing coronary artery bypass surgery are readmitted within 30 days. Readmission is usually the result of morbidity and may lead to death. The purpose of this study is to develop and compare statistical and genetic programming models to predict readmission. Patients were divided into separate Construction and Validation populations. Using 88 variables, logistic regression, genetic programs, and artificial neural nets were used to develop predictive models. Models were first constructed and tested on the Construction population, then validated on the Validation population. Areas under the receiver operator characteristic curves (AU ROC) were used to compare the models. Two hundred and two patients (7.6 %) in the 2,644-patient Construction group and 216 (8.0 %) of the 2,711-patient Validation group were re-admitted within 30 days of CABG surgery. In the Construction group, logistic regression predicted readmission with AU ROC = .675 ± .021, and genetic programs significantly improved on this accuracy (AU ROC = .767 ± .001). In the Validation group, genetic programming (AU ROC = .654 ± .001) was still trivially, but statistically non-significantly, better than logistic regression (AU ROC = .644 ± .020, p = .61). Genetic programming and logistic regression provide alternative methods to predict readmission that are similarly accurate.

  8. Maximizing the sensitivity of a low threshold VHE gamma ray telescope by the use of neural nets and other methods

    Energy Technology Data Exchange (ETDEWEB)

    Kertzman, M.P. (Department of Physics and Astronomy, DePauw University Greencastle, Indiana 46135 (USA)); Sembroski, G.H. (Department of Physics, Purdue University West Lafayette, Indiana 47907 (USA))

    1991-04-05

    Detailed 3-dimensional Monte-Carlo computer simulations of the Cherenkov photons produced by VHE (10 GeV to 10 TeV) gamma ray and proton induced air shower cascades are used to calculate the sensitivity and threshold of a ground-based, single-mount, multi-mirror, single photo-electron sensitive gamma ray telescope. Such a telescope is designed to have the lowest possible energy threshold for gamma ray induced air showers for a given light collection area. The sensitivity and energy threshold of this design are determined for various triggering configurations, and the sources and properties of background triggers are investigated. In particular, it is found that up to 40% of the background triggers are due to single muons produced by proton induced showers with primary energies in the 25 to 75 GeV range. Two methods for increasing the sensitivity of such a telescope by discrimination against the single muon induced triggers are investigated. The first uses small outrider telescopes triggering in coincidence with the main telescope. The second uses software implemented neural nets trained to identify muon induced triggers by use of the temporal shape of the Cherenkov light pulse.

  9. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process of a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the reviewed studies, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  10. An Enhanced Probabilistic Neural Network Approach Applied to Text Classification

    Science.gov (United States)

    Marques Ciarelli, Patrick; Oliveira, Elias

    Text classification is still quite a difficult problem for both academia and industry. On top of that, the importance of aggregating sets of related text documents is steadily growing these days. The presence of multi-labeled texts and a great quantity of classes makes this problem even more challenging. In this article we present an enhanced version of the Probabilistic Neural Network that uses centroids to tackle the multi-label classification problem. We carried out experiments comparing our proposed classifier against other well known classifiers in the literature which were specially designed to treat this type of problem. From the achieved results, we observed that our novel approach was superior to the other classifiers and faster than the Probabilistic Neural Network without the use of centroids.
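    The centroid idea can be illustrated in a few lines: instead of keeping one pattern unit per training document, each class is summarised by a centroid, and the summation layer becomes a Gaussian kernel over those centroids. The toy documents, labels and smoothing parameter sigma below are invented for illustration and this is not the authors' implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["neural network training", "deep neural net classifier",
        "petri net model checking", "coloured petri net analysis",
        "neural petri hybrid model"]
labels = [{"nn"}, {"nn"}, {"pn"}, {"pn"}, {"nn", "pn"}]   # multi-label targets (toy)

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()

# Pattern layer collapsed to one centroid per class (the "enhanced" step).
classes = sorted({c for ls in labels for c in ls})
centroids = {c: X[[c in ls for ls in labels]].mean(axis=0) for c in classes}

def pnn_scores(x, sigma=0.5):
    """Summation-layer output: Gaussian kernel between x and each class centroid."""
    return {c: np.exp(-np.sum((x - m) ** 2) / (2 * sigma ** 2))
            for c, m in centroids.items()}

query = vec.transform(["petri net for neural models"]).toarray()[0]
scores = pnn_scores(query)
# Multi-label decision: keep every class whose score is close to the best one.
predicted = {c for c, s in scores.items() if s >= 0.5 * max(scores.values())}
print(scores, predicted)
```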

  11. Psychological Processing in Chronic Pain: A Neural Systems Approach

    Science.gov (United States)

    Simons, Laura; Elman, Igor; Borsook, David

    2014-01-01

    Our understanding of chronic pain involves complex brain circuits that include sensory, emotional, cognitive and interoceptive processing. The feed-forward interactions between physical (e.g., trauma) and emotional pain and the consequences of altered psychological status on the expression of pain have made the evaluation and treatment of chronic pain a challenge in the clinic. By understanding the neural circuits involved in psychological processes, a mechanistic approach to the implementation of psychology-based treatments may be better understood. In this review we evaluate some of the principal processes that may be altered as a consequence of chronic pain in the context of localized and integrated neural networks. These changes are ongoing, vary in their magnitude and hierarchical manifestations, may be temporally and sequentially altered by treatments, and all contribute to an overall pain phenotype. Furthermore, we link altered psychological processes to specific evidence-based treatments to put forth a model of pain neuroscience psychology. PMID:24374383

  12. An Approach to Distributed State Space Exploration for Coloured Petri Nets

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Petrucci, Laure

    2004-01-01

    We present an approach and associated computer tool support for conducting distributed state space exploration for Coloured Petri Nets (CPNs). The distributed state space exploration is based on the introduction of a coordinating process and a number of worker processes. The worker processes...... Tools. This makes the distributed state space exploration and analysis largely transparent to the analyst. We illustrate the use of the developed tool on an example....

  13. Spiking modular neural networks: A neural network modeling approach for hydrological processes

    National Research Council Canada - National Science Library

    Kamban Parasuraman; Amin Elshorbagy; Sean K. Carey

    2006-01-01

    .... In this study, a novel neural network model called the spiking modular neural networks (SMNNs) is proposed. An SMNN consists of an input layer, a spiking layer, and an associator neural network layer...

  14. A neural network approach to dynamic task assignment of multirobots.

    Science.gov (United States)

    Zhu, Anmin; Yang, Simon X

    2006-09-01

    In this paper, a neural network approach to task assignment, based on a self-organizing map (SOM), is proposed for a multirobot system in dynamic environments subject to uncertainties. It is capable of dynamically controlling a group of mobile robots to achieve multiple tasks at different locations, so that the desired number of robots will arrive at every target location from arbitrary initial locations. In the proposed approach, the robot motion planning is integrated with the task assignment, thus the robots start to move once the overall task is given. The robot navigation can be dynamically adjusted to guarantee that each target location has the desired number of robots, even under uncertainties such as when some robots break down. The proposed approach is capable of dealing with changing environments. The effectiveness and efficiency of the proposed approach are demonstrated by simulation studies.
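    A toy version of SOM-style task assignment is sketched below: each robot is treated as a neuron whose weight vector is its position, target locations are presented as inputs, the nearest unassigned robot wins, and the winner (and, much more weakly, its neighbours) moves toward the target over repeated epochs. Robot and target counts, the learning schedule and the neighbourhood function are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
robots = rng.uniform(0, 10, size=(6, 2))          # robot positions = SOM weight vectors
targets = rng.uniform(0, 10, size=(4, 2))         # task locations presented as inputs

for epoch in range(200):
    lr = 0.5 * (1 - epoch / 200)                  # decaying learning rate
    assigned = set()
    for tgt in targets:
        d = np.linalg.norm(robots - tgt, axis=1)
        d[list(assigned)] = np.inf                # each robot serves at most one target per epoch
        winner = int(np.argmin(d))
        assigned.add(winner)
        # Winner moves strongly toward the target; nearby robots move only weakly.
        for j in range(len(robots)):
            h = 1.0 if j == winner else np.exp(-np.linalg.norm(robots[j] - robots[winner]) ** 2)
            robots[j] += lr * h * 0.2 * (tgt - robots[j])

print("final robot positions:\n", np.round(robots, 2))
```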

  15. NeMO-Net: The Neural Multi-Modal Observation and Training Network for Global Coral Reef Assessment

    Science.gov (United States)

    Chirayath, Ved

    2017-01-01

    In the past decade, coral reefs worldwide have experienced unprecedented stresses due to climate change, ocean acidification, and anthropogenic pressures, instigating massive bleaching and die-off of these fragile and diverse ecosystems. Furthermore, remote sensing of these shallow marine habitats is hindered by ocean wave distortion, refraction and optical attenuation, leading invariably to data products that are often of low resolution and signal-to-noise ratio (SNR). However, recent advances in UAV and Fluid Lensing technology have allowed us to capture multispectral 3D imagery of these systems at sub-cm scales from above the water surface, giving us an unprecedented view of their growth and decay. Exploiting the fine-scaled features of these datasets, machine learning methods such as MAP, PCA, and SVM can not only accurately classify the living cover and morphology of these reef systems (below 8 percent error), but are also able to map the spectral space between airborne and satellite imagery, augmenting and improving the classification accuracy of previously low-resolution datasets. We are currently implementing NeMO-Net, the first open-source deep convolutional neural network (CNN) and interactive active learning and training software to accurately assess the present and past dynamics of coral reef ecosystems through determination of percent living cover and morphology. NeMO-Net will be built upon the QGIS platform to ingest UAV, airborne and satellite datasets from various sources and sensor capabilities, and through data fusion determine the coral reef ecosystem makeup globally at unprecedented spatial and temporal scales. To achieve this, we will exploit virtual data augmentation, the use of semi-supervised learning, and active learning through a tablet platform allowing users to manually train uncertain or difficult to classify datasets. The project will make use of Python's extensive libraries for machine learning, as well as extending integration

  16. Elastic-net regularization approaches for genome-wide association studies of rheumatoid arthritis.

    Science.gov (United States)

    Cho, Seoae; Kim, Haseong; Oh, Sohee; Kim, Kyunga; Park, Taesung

    2009-12-15

    The current trend in genome-wide association studies is to identify regions where the true disease-causing genes may lie by evaluating thousands of single-nucleotide polymorphisms (SNPs) across the whole genome. However, many challenges exist in detecting disease-causing genes among the thousands of SNPs. Examples include multicollinearity and multiple testing issues, especially when a large number of correlated SNPs are simultaneously tested. Multicollinearity can often occur when predictor variables in a multiple regression model are highly correlated, and can cause imprecise estimation of association. In this study, we propose a simple stepwise procedure that identifies disease-causing SNPs simultaneously by employing elastic-net regularization, a variable selection method that allows one to address multicollinearity. At Step 1, the single-marker association analysis was conducted to screen SNPs. At Step 2, the multiple-marker association was scanned based on the elastic-net regularization. The proposed approach was applied to the rheumatoid arthritis (RA) case-control data set of Genetic Analysis Workshop 16. While the selected SNPs at the screening step are located mostly on chromosome 6, the elastic-net approach identified putative RA-related SNPs on other chromosomes in an increased proportion. For some of those putative RA-related SNPs, we identified the interactions with sex, a well known factor affecting RA susceptibility.
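    The two-step procedure described above can be mimicked with standard tools. The sketch below uses scikit-learn on simulated genotypes (the real study used GAW16 rheumatoid arthritis data); the sample size, SNP count, screening threshold and penalty settings are illustrative assumptions only.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, p = 500, 2000
G = rng.integers(0, 3, size=(n, p)).astype(float)        # genotypes coded 0/1/2
causal = [10, 11, 12, 500]                                # a few causal SNPs (toy)
G[:, 11] = G[:, 10] + rng.normal(0, 0.3, n)               # induce multicollinearity between SNPs 10 and 11
risk = G[:, causal] @ np.array([0.8, 0.6, 0.5, 0.7])
y = (risk + rng.normal(0, 1.5, n) > np.median(risk)).astype(int)

# Step 1: single-marker screening (simple per-SNP association test here).
pvals = np.array([stats.pearsonr(G[:, j], y)[1] for j in range(p)])
keep = np.where(pvals < 0.01)[0]

# Step 2: elastic-net-penalised logistic regression on the screened SNPs,
# which handles the correlated markers jointly.
enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=0.5, max_iter=5000)
enet.fit(G[:, keep], y)
selected = keep[np.abs(enet.coef_[0]) > 1e-6]
print("screened:", len(keep), "selected by elastic net:", selected[:10])
```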

  17. Statistical learning of parts and wholes: A neural network approach.

    Science.gov (United States)

    Plaut, David C; Vande Velde, Anna K

    2017-03-01

    Statistical learning is often considered to be a means of discovering the units of perception, such as words and objects, and representing them as explicit "chunks." However, entities are not undifferentiated wholes but often contain parts that contribute systematically to their meanings. Studies of incidental auditory or visual statistical learning suggest that, as participants learn about wholes they become insensitive to parts embedded within them, but this seems difficult to reconcile with a broad range of findings in which parts and wholes work together to contribute to behavior. Bayesian approaches provide a principled description of how parts and wholes can contribute simultaneously to performance, but are generally not intended to model the computations that actually give rise to this performance. In the current work, we develop an account based on learning in artificial neural networks in which the representation of parts and wholes is a matter of degree, and the extent to which they cooperate or compete arises naturally through incidental learning. We show that the approach accounts for a wide range of findings concerning the relationship between parts and wholes in auditory and visual statistical learning, including some findings previously thought to be problematic for neural network approaches. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. AN ECONOMETRICAL APPROACH OF THE RELATIONSHIP BETWEEN INNOVATION AND NET OUTWARD INVESTMENT POSITION

    Directory of Open Access Journals (Sweden)

    Viorela IACOVOIU

    2015-02-01

    Full Text Available Starting from the theory of the Investment Development Path (IDP) and competitive advantages, this study presents an econometrical approach to the relationship between the net outward investment position, given by the net outward investment per capita (NOI), and innovation capabilities, reflected by the global innovation index (GII). The results of the analysis, carried out for the worldwide economies in the year 2013 using five models, demonstrate that there is no significant correlation between NOI, as the dependent variable, and GII as the independent one. Thus, the highest coefficient of determination value was .201 (cubic model), reflecting the fact that only 20.1% of the variation in NOI is explained by GII. Therefore, the level of a country’s innovation capacity is not one of the main forces that determine its NOI position.

  19. Symbiosis of a telemedicine and neural net's project as a new way of the decision of medical problems

    Science.gov (United States)

    Kasimov, Oleg V.; Karchenova, Elena V.; Maximova, Irina L.

    2007-05-01

    A new approach is possible for training doctors whose specialty requires the skill of distinguishing various images, for example radiologists, pathologists and hematologists. Telemedicine, by means of the Internet and videoconferencing, is capable of creating expert databases in several world centers. Neural networks (programs belonging to the field of artificial intelligence) are trained on these expert databases to offer possible interpretations of a given image. Doctors training in the above-named specialties then need neither years nor decades to reach an expert level of professionalism, saving time and considerable resources for both themselves and society. With diagnostics available at the highest level, medicine improves the patient's quality of life while also saving resources.

  20. Crystal Structure Representation for Neural Networks using Topological Approach.

    Science.gov (United States)

    Fedorov, Aleksandr V; Shamanaev, Ivan V

    2017-08-01

    In the present work we describe a new approach that uses the topology of crystals for physicochemical property prediction with artificial neural networks (ANN). The topologies of 268 crystal structures were determined using the ToposPro software. Quotient graphs were used to identify topological centers and their neighbors. The topological approach was illustrated by training an ANN to predict the molar heat capacity, standard molar entropy and lattice energy of 268 crystals with different compositions and structures (metals, inorganic salts, oxides, etc.). The ANN was trained using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. The mean absolute percentage error of the predicted properties was ≤8 %. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Social power and approach-related neural activity

    Science.gov (United States)

    Smolders, Ruud; Cremer, David De

    2012-01-01

    It has been argued that power activates a general tendency to approach whereas powerlessness activates a tendency to inhibit. The assumption is that elevated power involves reward-rich environments, freedom and, as a consequence, triggers an approach-related motivational orientation and attention to rewards. In contrast, reduced power is associated with increased threat, punishment and social constraint and thereby activates inhibition-related motivation. Moreover, approach motivation has been found to be associated with increased relative left-sided frontal brain activity, while withdrawal motivation has been associated with increased right sided activations. We measured EEG activity while subjects engaged in a task priming either high or low social power. Results show that high social power is indeed associated with greater left-frontal brain activity compared to low social power, providing the first neural evidence for the theory that high power is associated with approach-related motivation. We propose a framework accounting for differences in both approach motivation and goal-directed behaviour associated with different levels of power. PMID:19304842

  2. When opportunity meets motivation: Neural engagement during social approach is linked to high approach motivation.

    Science.gov (United States)

    Radke, Sina; Seidel, Eva-Maria; Eickhoff, Simon B; Gur, Ruben C; Schneider, Frank; Habel, Ute; Derntl, Birgit

    2016-02-15

    Social rewards are processed by the same dopaminergic-mediated brain networks as non-social rewards, suggesting a common representation of subjective value. Individual differences in personality and motivation influence the reinforcing value of social incentives, but it remains open whether the pursuit of social incentives is analogously supported by the neural reward system when positive social stimuli are connected to approach behavior. To test for a modulation of neural activation by approach motivation, individuals with high and low approach motivation (BAS) completed implicit and explicit social approach-avoidance paradigms during fMRI. High approach motivation was associated with faster implicit approach reactions as well as a trend for higher approach ratings, indicating increased approach tendencies. Implicit and explicit positive social approach was accompanied by stronger recruitment of the nucleus accumbens, middle cingulate cortex, and (pre-)cuneus for individuals with high compared to low approach motivation. These results support and extend prior research on social reward processing, self-other distinctions and affective judgments by linking approach motivation to the engagement of reward-related circuits during motivational reactions to social incentives. This interplay between motivational preferences and motivational contexts might underlie the rewarding experience during social interactions. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Mapping the Spatial Distribution of Winter Crops at Sub-Pixel Level Using AVHRR NDVI Time Series and Neural Nets

    Directory of Open Access Journals (Sweden)

    Felix Rembold

    2013-03-01

    Full Text Available For large areas, it is difficult to assess the spatial distribution and inter-annual variation of crop acreages through field surveys. Such information, however, is of great value for governments, land managers, planning authorities, commodity traders and environmental scientists. Time series of coarse resolution imagery offer the advantage of global coverage at low costs, and are therefore suitable for large-scale crop type mapping. Due to their coarse spatial resolution, however, the problem of mixed pixels has to be addressed. Traditional hard classification approaches cannot be applied because of sub-pixel heterogeneity. We evaluate neural networks as a modeling tool for sub-pixel crop acreage estimation. The proposed methodology is based on the assumption that different cover type proportions within coarse pixels prompt changes in time profiles of remotely sensed vegetation indices like the Normalized Difference Vegetation Index (NDVI). Neural networks can learn the relation between temporal NDVI signatures and the sought crop acreage information. This learning step permits a non-linear unmixing of the temporal information provided by coarse resolution satellite sensors. For assessing the feasibility and accuracy of the approach, a study region in central Italy (Tuscany) was selected. The task consisted of mapping the spatial distribution of winter crops abundances within 1 km AVHRR pixels between 1988 and 2001. Reference crop acreage information for network training and validation was derived from high resolution Thematic Mapper/Enhanced Thematic Mapper (TM/ETM+) images and official agricultural statistics. Encouraging results were obtained demonstrating the potential of the proposed approach. For example, the spatial distribution of winter crop acreage at sub-pixel level was mapped with a cross-validated coefficient of determination of 0.8 with respect to the reference information from high resolution imagery. For the eight years for which
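    The unmixing idea, a network learning the relation between a coarse pixel's NDVI time profile and the crop fraction inside it, is sketched below on fully simulated data: the two endmember NDVI curves, mixing model and network size are invented stand-ins, whereas the study derived its training targets from TM/ETM+ classifications and official statistics.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
dekads = np.arange(36)                                          # one year of 10-day composites

crop = 0.2 + 0.6 * np.exp(-0.5 * ((dekads - 12) / 4.0) ** 2)    # idealised winter-crop NDVI endmember
other = 0.25 + 0.35 * np.exp(-0.5 * ((dekads - 21) / 6.0) ** 2) # idealised background endmember

fractions = rng.uniform(0, 1, 800)                              # sub-pixel crop fractions (targets)
X = np.array([f * crop + (1 - f) * other + rng.normal(0, 0.02, dekads.size)
              for f in fractions])                              # mixed-pixel NDVI profiles (inputs)
y = fractions

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X[:600], y[:600])

pred = np.clip(net.predict(X[600:]), 0, 1)
r2 = 1 - np.sum((pred - y[600:]) ** 2) / np.sum((y[600:] - y[600:].mean()) ** 2)
print("held-out R^2 on simulated mixtures:", round(r2, 3))
```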

  4. Using fuzzy logic to integrate neural networks and knowledge-based systems

    Science.gov (United States)

    Yen, John

    1991-01-01

    Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.

  5. Patterns of work attitudes: A neural network approach

    Science.gov (United States)

    Mengov, George D.; Zinovieva, Irina L.; Sotirov, George R.

    2000-05-01

    In this paper we introduce a neural networks based approach to analyzing empirical data and models from work and organizational psychology (WOP), and suggest possible implications for the practice of managers and business consultants. With this method it becomes possible to have quantitative answers to a bunch of questions like: What are the characteristics of an organization in terms of its employees' motivation? What distinct attitudes towards the work exist? Which pattern is most desirable from the standpoint of productivity and professional achievement? What will be the dynamics of behavior as quantified by our method, during an ongoing organizational change or consultancy intervention? Etc. Our investigation is founded on the theoretical achievements of Maslow (1954, 1970) in human motivation, and of Hackman & Oldham (1975, 1980) in job diagnostics, and applies the mathematical algorithm of the dARTMAP variation (Carpenter et al., 1998) of the Adaptive Resonance Theory (ART) neural networks introduced by Grossberg (1976). We exploit the ART capabilities to visualize the knowledge accumulated in the network's long-term memory in order to interpret the findings in organizational research.

  6. A neural network based reputation bootstrapping approach for service selection

    Science.gov (United States)

    Wu, Quanwang; Zhu, Qingsheng; Li, Peng

    2015-10-01

    With the concept of service-oriented computing becoming widely accepted in enterprise application integration, more and more computing resources are encapsulated as services and published online. Reputation mechanisms have been studied to establish trust in previously unknown services. One of the limitations of current reputation mechanisms is that they cannot assess the reputation of newly deployed services, as no record of their previous behaviour exists. Most current bootstrapping approaches merely assign default reputation values to newcomers; however, with this kind of method, either newcomers or existing services will be favoured. In this paper, we present a novel reputation bootstrapping approach, where correlations between the features and performance of existing services are learned through an artificial neural network (ANN) and then generalised to establish a tentative reputation when evaluating new and unknown services. Reputations of services published previously by the same provider are also incorporated for reputation bootstrapping if available. The proposed reputation bootstrapping approach is seamlessly embedded into an existing reputation model and implemented in the extended service-oriented architecture. Empirical studies of the proposed approach are presented.

  7. Assessment of net community production and calcification of a coral reef using a boundary layer approach

    Science.gov (United States)

    Takeshita, Yuichiro; McGillis, Wade; Briggs, Ellen M.; Carter, Amanda L.; Donham, Emily M.; Martz, Todd R.; Price, Nichole N.; Smith, Jennifer E.

    2016-08-01

    Coral reefs are threatened worldwide, and there is a need to develop new approaches to monitor reef health under natural conditions. Because simultaneous measurements of net community production (NCP) and net community calcification (NCC) are used as important indicators of reef health, tools are needed to assess them in situ. Here we present the Benthic Ecosystem and Acidification Measurement System (BEAMS) to provide the first fully autonomous approach capable of sustained, simultaneous measurements of reef NCP and NCC under undisturbed, natural conditions on time scales ranging from tens of minutes to weeks. BEAMS combines the chemical and velocity gradient in the benthic boundary layer to quantify flux from the benthos for a variety of parameters to measure NCP and NCC. Here BEAMS was used to measure these rates from two different sites with different benthic communities on the western reef terrace at Palmyra Atoll for 2 weeks in September 2014. Measurements were made every ˜15 min. The trends in metabolic rates were consistent with the benthic communities between the two sites with one dominated by fleshy organisms and the other dominated by calcifiers (degraded and healthy reefs, respectively). This demonstrates the potential utility of BEAMS as a reef health monitoring tool. NCP and NCC were tightly coupled on time scales of minutes to days, and light was the primary driver for the variability of daily integrated metabolic rates. No correlation between CO2 levels and daily integrated NCC was observed, indicating that NCC at these sites were not significantly affected by CO2.

  8. Estimating plant root water uptake using a neural network approach

    DEFF Research Database (Denmark)

    Qiao, D M; Shi, H B; Pang, H B

    2010-01-01

    Water uptake by plant roots is an important process in the hydrological cycle, not only for plant growth but also for the role it plays in shaping microbial community and bringing in physical and biochemical changes to soils. The ability of roots to extract water is determined by combined soil...... and plant characteristics, and how to model it has been of interest for many years. Most macroscopic models for water uptake operate at soil profile scale under the assumption that the uptake rate depends on root density and soil moisture. Whilst proved appropriate, these models need spatio-temporal root...... but has not yet been addressed. This paper presents and tests such an approach. The method is based on a neural network model, estimating the water uptake using different types of data that are easy to measure in the field. Sunflower grown in a sandy loam subjected to water stress and salinity was taken...

  9. A PetriNet-Based Approach for Supporting Traceability in Cyber-Physical Manufacturing Systems

    Directory of Open Access Journals (Sweden)

    Jiwei Huang

    2016-03-01

    Full Text Available With the growing popularity of complex dynamic activities in manufacturing processes, traceability of the entire life of every product has drawn significant attention, especially for food, clinical materials, and similar items. This paper studies the traceability issue in cyber-physical manufacturing systems from a theoretical viewpoint. Petri net models are generalized for formulating dynamic manufacturing processes, based on which a detailed approach for enabling traceability analysis is presented. Models as well as algorithms are carefully designed so that the life cycle of a possibly contaminated item can be traced back. A practical prototype system for supporting traceability is designed, and a real-life case study of a quality control system for bee products is presented to validate the effectiveness of the approach.
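    To make the trace-back idea concrete, the sketch below implements a toy place/transition simulator (not the paper's generalized Petri net models) in which tokens carry item identifiers, every firing is logged, and the log can be walked backwards to recover the life cycle of a suspect item. The place and transition names are invented for illustration.

```python
from collections import defaultdict

places = defaultdict(list)                      # place -> list of item tokens
places["raw_honey"] = ["batch-A", "batch-B"]

transitions = {                                 # transition -> (input places, output places)
    "filter": (["raw_honey"], ["filtered"]),
    "bottle": (["filtered"],  ["bottled"]),
    "ship":   (["bottled"],   ["shipped"]),
}
firing_log = []                                 # (transition, token, from-places, to-places)

def fire(name, token):
    ins, outs = transitions[name]
    for p in ins:                               # consume the token from every input place
        if token not in places[p]:
            raise ValueError(f"{token} not available in {p}")
        places[p].remove(token)
    for p in outs:                              # produce the token in every output place
        places[p].append(token)
    firing_log.append((name, token, ins, outs))

for t in ("filter", "bottle", "ship"):
    fire(t, "batch-A")

def trace_back(token):
    """Walk the firing log backwards to recover the life cycle of an item."""
    return [(name, ins, outs) for name, tok, ins, outs in reversed(firing_log)
            if tok == token]

print(trace_back("batch-A"))
```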

  10. A methodological approach for using high-level Petri Nets to model the immune system response.

    Science.gov (United States)

    Pennisi, Marzio; Cavalieri, Salvatore; Motta, Santo; Pappalardo, Francesco

    2016-12-22

    Mathematical and computational models have proved to be a very important support tool for the comprehension of the immune system response against pathogens. Models and simulations have made it possible to study immune system behavior, to test biological hypotheses about diseases and infection dynamics, and to improve and optimize novel and existing drugs and vaccines. Continuous models, mainly based on differential equations, usually allow the system to be studied qualitatively but lack descriptive detail; conversely, discrete models, such as agent-based models and cellular automata, permit entity properties to be described in detail at the cost of losing most qualitative analyses. Petri Nets (PN) are a graphical modeling tool developed to model concurrency and synchronization in distributed systems. Their use has become increasingly widespread, thanks also to the introduction over the years of many features and extensions which led to the birth of "high-level" PN. We propose a novel methodological approach that is based on high-level PN, and in particular on Colored Petri Nets (CPN), that can be used to model the immune system response at the cellular scale. To demonstrate the potential of the approach we provide a simple model of the humoral immune system response that is able to reproduce some of the most complex well-known features of the adaptive response, such as memory and specificity. The methodology we present has the advantages of both classical approaches, continuous and discrete, since it achieves a good level of granularity in the description of cell behavior without losing the possibility of qualitative analysis. Furthermore, the presented methodology based on CPN allows the adoption of the same graphical modeling technique well known to life scientists who use PN for the modeling of signaling pathways. Finally, such an approach may open the floodgates to the realization of multi-scale models that integrate both signaling pathways (intra

  11. A complex-valued neural dynamical optimization approach and its stability analysis.

    Science.gov (United States)

    Zhang, Songchuan; Xia, Youshen; Zheng, Weixing

    2015-01-01

    In this paper, we propose a complex-valued neural dynamical method for solving a complex-valued nonlinear convex programming problem. Theoretically, we prove that the proposed complex-valued neural dynamical approach is globally stable and convergent to the optimal solution. The proposed neural dynamical approach significantly generalizes the real-valued nonlinear Lagrange network completely in the complex domain. Compared with existing real-valued neural networks and numerical optimization methods for solving complex-valued quadratic convex programming problems, the proposed complex-valued neural dynamical approach can avoid redundant computation in a double real-valued space and thus has a low model complexity and storage capacity. Numerical simulations are presented to show the effectiveness of the proposed complex-valued neural dynamical approach.

  12. A high performance k-NN approach using binary neural networks

    OpenAIRE

    Hodge, V J; Lees, K J; Austin, J L

    2004-01-01

    This paper evaluates a novel k-nearest neighbour (k-NN) classifier built from binary neural networks. The binary neural approach uses robust encoding to map standard ordinal, categorical and numeric data sets onto a binary neural network. The binary neural network uses high speed pattern matching to recall a candidate set of matching records, which are then processed by a conventional k-NN approach to determine the k-best matches. We compare various configurations of the binary approach to a ...

  13. Neural network-based estimates of Southern Ocean net community production from in situ O2 / Ar and satellite observation: a methodological study

    Science.gov (United States)

    Chang, C.-H.; Johnson, N. C.; Cassar, N.

    2014-06-01

    Southern Ocean organic carbon export plays an important role in the global carbon cycle, yet its basin-scale climatology and variability are uncertain due to limited coverage of in situ observations. In this study, a neural network approach based on the self-organizing map (SOM) is adopted to construct weekly gridded (1° × 1°) maps of organic carbon export for the Southern Ocean from 1998 to 2009. The SOM is trained with in situ measurements of O2 / Ar-derived net community production (NCP) that are tightly linked to the carbon export in the mixed layer on timescales of one to two weeks and with six potential NCP predictors: photosynthetically available radiation (PAR), particulate organic carbon (POC), chlorophyll (Chl), sea surface temperature (SST), sea surface height (SSH), and mixed layer depth (MLD). This nonparametric approach is based entirely on the observed statistical relationships between NCP and the predictors and, therefore, is strongly constrained by observations. A thorough cross-validation yields three retained NCP predictors, Chl, PAR, and MLD. Our constructed NCP is further validated by good agreement with previously published, independent in situ derived NCP of weekly or longer temporal resolution through real-time and climatological comparisons at various sampling sites. The resulting November-March NCP climatology reveals a pronounced zonal band of high NCP roughly following the Subtropical Front in the Atlantic, Indian, and western Pacific sectors, and turns southeastward shortly after the dateline. Other regions of elevated NCP include the upwelling zones off Chile and Namibia, the Patagonian Shelf, the Antarctic coast, and areas surrounding the Islands of Kerguelen, South Georgia, and Crozet. This basin-scale NCP climatology closely resembles that of the satellite POC field and observed air-sea CO2 flux. The long-term mean area-integrated NCP south of 50° S from our dataset, 17.9 mmol C m-2 d-1, falls within the range of 8.3 to 24 mmol
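    The SOM-based mapping can be sketched with a small, self-contained implementation: codebook vectors are trained on joint (Chl, PAR, MLD, NCP) samples, and NCP for a new pixel is read off the best-matching unit found from the predictor dimensions only. The data below are synthetic stand-ins for the in situ O2/Ar-derived NCP and satellite predictors, and the map size and learning schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic training set: NCP loosely increasing with Chl and PAR, decreasing with MLD.
n = 2000
chl, par, mld = rng.lognormal(0, 0.5, n), rng.uniform(10, 60, n), rng.uniform(10, 150, n)
ncp = 5 + 8 * np.log(chl + 1) + 0.2 * par - 0.05 * mld + rng.normal(0, 2, n)
data = np.column_stack([chl, par, mld, ncp])
mu, sd = data.mean(0), data.std(0)
Z = (data - mu) / sd

# Train a small 2-D SOM (grid of codebook vectors) on the 4-D samples.
rows, cols = 10, 10
codes = rng.normal(0, 1, (rows * cols, 4))
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
for it in range(5000):
    z = Z[rng.integers(n)]
    bmu = np.argmin(np.sum((codes - z) ** 2, axis=1))          # best-matching unit
    lr = 0.5 * np.exp(-it / 2000)                              # decaying learning rate
    radius = 3.0 * np.exp(-it / 2000) + 0.5                    # shrinking neighbourhood
    h = np.exp(-np.sum((grid - grid[bmu]) ** 2, axis=1) / (2 * radius ** 2))
    codes += lr * h[:, None] * (z - codes)

def estimate_ncp(chl_v, par_v, mld_v):
    """Find the BMU using the predictor dimensions only and read off its NCP component."""
    zq = (np.array([chl_v, par_v, mld_v]) - mu[:3]) / sd[:3]
    bmu = np.argmin(np.sum((codes[:, :3] - zq) ** 2, axis=1))
    return codes[bmu, 3] * sd[3] + mu[3]

print("estimated NCP [mmol C m-2 d-1]:", round(estimate_ncp(1.0, 45.0, 40.0), 1))
```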

  14. Neural network-based estimates of Southern Ocean net community production from in-situ O2 / Ar and satellite observation: a methodological study

    Science.gov (United States)

    Chang, C.-H.; Johnson, N. C.; Cassar, N.

    2013-10-01

    Southern Ocean organic carbon export plays an important role in the global carbon cycle, yet its basin-scale climatology and variability are uncertain due to limited coverage of in situ observations. In this study, a neural network approach based on the self-organizing map (SOM) is adopted to construct weekly gridded (1° × 1°) maps of organic carbon export for the Southern Ocean from 1998 to 2009. The SOM is trained with in situ measurements of O2 / Ar-derived net community production (NCP) that are tightly linked to the carbon export in the mixed layer on timescales of 1-2 weeks, and six potential NCP predictors: photosynthetically available radiation (PAR), particulate organic carbon (POC), chlorophyll (Chl), sea surface temperature (SST), sea surface height (SSH), and mixed layer depth (MLD). This non-parametric approach is based entirely on the observed statistical relationships between NCP and the predictors, and therefore is strongly constrained by observations. A thorough cross-validation yields three retained NCP predictors, Chl, PAR, and MLD. Our constructed NCP is further validated by good agreement with previously published independent in situ derived NCP of weekly or longer temporal resolution through real-time and climatological comparisons at various sampling sites. The resulting November-March NCP climatology reveals a pronounced zonal band of high NCP roughly following the subtropical front in the Atlantic, Indian and western Pacific sectors, and turns southeastward shortly after the dateline. Other regions of elevated NCP include the upwelling zones off Chile and Namibia, Patagonian Shelf, Antarctic coast, and areas surrounding the Islands of Kerguelen, South Georgia, and Crozet. This basin-scale NCP climatology closely resembles that of the satellite POC field and observed air-sea CO2 flux. The long-term mean area-integrated NCP south of 50° S from our dataset, 14 mmol C m-2 d-1, falls within the range of 8.3-24 mmol C m-2 d-1 from other model

  15. Development of an ensemble-adjoint optimization approach to derive uncertainties in net carbon fluxes

    Directory of Open Access Journals (Sweden)

    T. Ziehn

    2011-11-01

    Full Text Available Accurate modelling of the carbon cycle strongly depends on the parametrization of its underlying processes. The Carbon Cycle Data Assimilation System (CCDAS) can be used as an estimator algorithm to derive posterior parameter values and uncertainties for the Biosphere Energy Transfer and Hydrology scheme (BETHY). However, the simultaneous optimization of all process parameters can be challenging, due to the complexity and non-linearity of the BETHY model. Therefore, we propose a new concept that uses ensemble runs and the adjoint optimization approach of CCDAS to derive the full probability density function (PDF) for posterior soil carbon parameters and the net carbon flux at the global scale. This method allows us to optimize only those parameters that can be constrained best by atmospheric carbon dioxide (CO2) data. The prior uncertainties of the remaining parameters are included in a consistent way through ensemble runs, but are not constrained by data. The final PDFs for the optimized parameters and the net carbon flux are then derived by superimposing the individual PDFs for each ensemble member. We find that the optimization with CCDAS converges much faster, due to the smaller number of processes involved. Faster convergence also gives us much increased confidence that we find the global minimum in the reduced parameter space.

  16. A novel neural dynamical approach to convex quadratic program and its efficient applications.

    Science.gov (United States)

    Xia, Youshen; Sun, Changyin

    2009-12-01

    This paper proposes a novel neural dynamical approach to a class of convex quadratic programming problems where the number of variables is larger than the number of equality constraints. The proposed continuous-time and discrete-time neural dynamical approaches are guaranteed to be globally convergent to an optimal solution. Moreover, the number of neurons is equal to the number of equality constraints, whereas the number of neurons in existing neural dynamical methods is at least the number of variables. Therefore, the proposed neural dynamical approach has low computational complexity. Compared with conventional numerical optimization methods, the proposed discrete-time neural dynamical approach requires fewer multiplication operations per iteration and allows a larger computational step length. Computational examples and two efficient applications to signal processing and robot control further confirm the good performance of the proposed approach.
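
    A rough discrete-time illustration of the low-dimensional idea (not the authors' exact network): for min 0.5 x^T Q x + c^T x subject to Ax = b with Q positive definite, iterating on the dual variable alone uses one state per equality constraint. The data and step size below are made up.

```python
import numpy as np

# Sketch: dual gradient ascent for an equality-constrained convex QP.
# The state vector lam has one entry per equality constraint, mirroring the
# "number of neurons equals number of constraints" property described above.
rng = np.random.default_rng(0)
n, m = 6, 2                                   # 6 variables, 2 equality constraints
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)                   # symmetric positive definite
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

Q_inv = np.linalg.inv(Q)
lam = np.zeros(m)
step = 0.1
for _ in range(20000):
    x = -Q_inv @ (c + A.T @ lam)              # primal minimizer for the current lam
    lam = lam + step * (A @ x - b)            # ascend the dual: drive Ax toward b

print(np.allclose(A @ x, b, atol=1e-6))       # feasibility at convergence
```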

  17. Deep convolutional neural network approach for forehead tissue thickness estimation

    Directory of Open Access Journals (Sweden)

    Manit Jirapong

    2017-09-01

    Full Text Available In this paper, we present a deep convolutional neural network (CNN) approach for forehead tissue thickness estimation. We use downsampled NIR laser backscattering images acquired from a novel marker-less near-infrared laser-based head tracking system, combined with the beam's incident angle parameter. These two-channel augmented images were constructed as the CNN input, while a single-node output layer represents the estimated value of the forehead tissue thickness. The models were trained and tested separately for each subject on datasets acquired from 30 subjects (high-resolution MRI data were used as ground truth). To speed up training, we used a pre-trained network from the first subject to bootstrap training for each of the other subjects. We observed a clear improvement in tissue thickness estimation (mean RMSE of 0.096 mm). The proposed CNN model outperformed previous support vector regression (mean RMSE of 0.155 mm) and Gaussian process learning approaches (mean RMSE of 0.114 mm) and eliminated their restrictions for future research.
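
    A minimal PyTorch-style sketch of a two-channel-in, single-value-out CNN regressor; the layer sizes, 32x32 image size and random training data are assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

# Toy CNN regressor: two input channels (backscatter image + incident-angle map),
# one scalar output (estimated tissue thickness). Architecture is illustrative only.
class ThicknessNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.head(self.features(x)).squeeze(-1)

model = ThicknessNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(8, 2, 32, 32)        # fake batch of 32x32 two-channel images
y = torch.rand(8)                    # fake thickness targets (mm)
for _ in range(5):                   # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(float(loss))
```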

  18. Multispectral confocal microscopy images and artificial neural nets to monitor the photosensitizer uptake and degradation in Candida albicans cells

    Science.gov (United States)

    Romano, Renan A.; Pratavieira, Sebastião.; da Silva, Ana P.; Kurachi, Cristina; Guimarães, Francisco E. G.

    2017-07-01

    This study clearly demonstrates that multispectral confocal microscopy images analyzed by artificial neural networks provide a powerful tool for real-time monitoring of photosensitizer uptake, as well as of the photochemical transformations that occur.

  19. Predicting Energy Performance of a Net-Zero Energy Building: A Statistical Approach

    Science.gov (United States)

    Kneifel, Joshua; Webb, David

    2016-01-01

    Performance-based building requirements have become more prevalent because they give freedom in building design while still maintaining or exceeding the energy performance required by prescriptive-based requirements. In order to determine if building designs reach target energy efficiency improvements, it is necessary to estimate the energy performance of a building using predictive models and different weather conditions. Physics-based whole building energy simulation modeling is the most common approach. However, these physics-based models include underlying assumptions and require significant amounts of information in order to specify the input parameter values. An alternative approach to test the performance of a building is to develop a statistically derived predictive regression model using post-occupancy data that can accurately predict energy consumption and production based on a few common weather-based factors, thus requiring less information than simulation models. A regression model based on measured data should be able to predict energy performance of a building for a given day as long as the weather conditions are similar to those during the data collection time frame. This article uses data from the National Institute of Standards and Technology (NIST) Net-Zero Energy Residential Test Facility (NZERTF) to develop and validate a regression model to predict the energy performance of the NZERTF using two weather variables aggregated to the daily level, applies the model to estimate the energy performance of hypothetical NZERTFs located in different cities in the Mixed-Humid climate zone, and compares these estimates to the results from already existing EnergyPlus whole building energy simulations. This regression model exhibits agreement with EnergyPlus predictive trends in energy production and net consumption, but differs greatly in energy consumption. The model can be used as a framework for alternative and more complex models based on the
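
    A toy version of such a statistically derived model (synthetic data; the two illustrative weather predictors and the coefficients are assumptions, not the NZERTF model itself):

```python
import numpy as np

# Sketch: ordinary least squares fit of daily net energy consumption on two
# weather variables (e.g. mean outdoor temperature and daily insolation).
# All data and coefficients below are synthetic, purely for illustration.
rng = np.random.default_rng(0)
temp = rng.uniform(-5, 30, size=365)           # daily mean temperature (C)
solar = rng.uniform(0, 8, size=365)            # daily insolation (kWh/m^2)
net_kwh = 12 - 0.3 * temp - 1.5 * solar + rng.normal(0, 1.0, size=365)

X = np.column_stack([np.ones_like(temp), temp, solar])
beta, *_ = np.linalg.lstsq(X, net_kwh, rcond=None)
print("intercept, temp, solar coefficients:", beta.round(2))

# Predict a hypothetical day in another Mixed-Humid-climate city.
print("predicted net kWh:", (np.array([1.0, 18.0, 5.5]) @ beta).round(2))
```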

  20. Net analyte signal-based simultaneous determination of antazoline and naphazoline using wavelength region selection by experimental design-neural networks.

    Science.gov (United States)

    Hemmateenejad, Bahram; Ghavami, Raoof; Miri, Ramin; Shamsipur, Majtaba

    2006-02-15

    Net analyte signal (NAS)-based multivariate calibration methods were employed for the simultaneous determination of antazoline and naphazoline. The NAS vectors calculated from the absorbance data of the drug mixture were used as input for classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS) methods. A wavelength selection strategy was used to find the best wavelength region for each drug separately. As a new procedure, we proposed an experimental design-neural network strategy for wavelength region optimization. By use of a full factorial design method, several different wavelength regions were selected by taking into account different spectral parameters, including the starting wavelength, the ending wavelength and the wavelength interval. The performance of all the multivariate calibration methods, in all selected wavelength regions for both drugs, was evaluated by calculating a fitness function based on the root mean square errors of calibration and validation. A three-layered feed-forward artificial neural network (ANN) model with a back-propagation learning algorithm was employed to model the nonlinear relationship between the spectral parameters and the fitness of each regression method. From the resulting ANN models, the spectral regions in which the lowest fitness could be obtained were chosen. Comparison of the results revealed that NAS-PLS resulted in lower prediction errors than the other models. The proposed NAS-based calibration method was successfully applied to the simultaneous analysis of antazoline and naphazoline in a commercial eye drop sample.

  1. NetTurnP – Neural Network Prediction of Beta-turns by Use of Evolutionary Information and Predicted Protein Sequence Features

    DEFF Research Database (Denmark)

    Petersen, Bent; Lundegaard, Claus; Petersen, Thomas Nordahl

    2010-01-01

    β-turns are the most common type of non-repetitive structures, and constitute on average 25% of the amino acids in proteins. The formation of β-turns plays an important role in protein folding, protein stability and molecular recognition processes. In this work we present the neural network method NetTurnP, for prediction of two-class β-turns and prediction of the individual β-turn types, by use of evolutionary information and predicted protein sequence features. It has been evaluated against a commonly used dataset BT426, and achieves a Matthews correlation coefficient of 0.50, which is the highest reported performance on a two-class prediction of β-turn and not-β-turn. Furthermore NetTurnP shows improved performance on some of the specific β-turn types. In the present work, neural network methods have been trained to predict β-turn or not and individual β-turn types from the primary amino acid sequence.

  2. A neural network based approach to social touch classification

    NARCIS (Netherlands)

    van Wingerden, Siewart; Uebbing, Tobias J.; Jung, Merel Madeleine; Poel, Mannes

    2014-01-01

    Touch is an important interaction modality in social interaction, for instance touch can communicate emotions and can intensify emotions communicated by other modalities. In this paper we explore the use of Neural Networks for the classification of touch. The exploration and assessment of Neural

  3. A Neural Information Field Approach to Computational Cognition

    Science.gov (United States)

    2016-11-18

    ... of irrelevant information) during the task. The spiking neural model accounts for the probability of first recall, recency effects, primacy effects ... neuron models, allowing the simulated testing of drug effects on cognitive performance; demonstrated a scalable neural model of motor planning ... effects of distraction in working memory; shown a hippocampal model able to perform context-sensitive sequence encoding and retrieval; proposed what is

  4. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242)

    OpenAIRE

    Ahmed R. J. Almusawi; L. Canan Dülger; Sadettin Kapucu

    2016-01-01

    This paper presents a novel inverse kinematics solution for a robotic arm based on an artificial neural network (ANN) architecture. The motion of the robotic arm is controlled by the kinematics of the ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of feedback of the current joint angle configuration of the robotic arm, as well as the desired position and orientation, in the input pattern of the neural network, while the traditional...
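
    The input-augmentation idea (feeding the current joint configuration alongside the desired pose) can be illustrated on a toy planar two-link arm rather than the 6-DOF Denso robot; everything below, including the use of scikit-learn's MLPRegressor, is an illustrative assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy planar 2-link arm; forward kinematics is used only to generate training data.
L1, L2 = 1.0, 0.8
def fk(q):
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

rng = np.random.default_rng(0)
q_target = rng.uniform(0.1, np.pi - 0.1, size=(5000, 2))        # desired joint angles
q_current = q_target + rng.normal(0.0, 0.2, size=(5000, 2))     # nearby current configuration
pose = fk(q_target)                                             # desired end-effector position

# Network input: desired pose plus current joint angles (the feedback idea).
X = np.hstack([pose, q_current])
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X, q_target)

q_hat = net.predict(X[:5])
print(np.abs(fk(q_hat) - pose[:5]).max())       # end-effector error of the learned solution
```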

  5. Artificial Neural Network Approach for Mapping Contrasting Tillage Practices

    Directory of Open Access Journals (Sweden)

    Terry Howell

    2010-02-01

    Full Text Available Tillage information is crucial for environmental modeling as it directly affects evapotranspiration, infiltration, runoff, carbon sequestration, and soil losses due to wind and water erosion from agricultural fields. However, collecting this information can be time consuming and costly. Remote sensing approaches are promising for rapid collection of tillage information on individual fields over large areas. Numerous regression-based models are available to derive tillage information from remote sensing data. However, these models require information about the complex nature of underlying watershed characteristics and processes. Unlike regression-based models, an Artificial Neural Network (ANN) provides an efficient alternative to map complex nonlinear relationships between input and output datasets without requiring a detailed knowledge of underlying physical relationships. Limited or no information currently exists quantifying the ability of ANN models to identify contrasting tillage practices from remote sensing data. In this study, a set of Landsat TM-based ANN models was developed to identify contrasting tillage practices in the Texas High Plains. Observed tillage data from Moore and Ochiltree Counties were used to develop and evaluate the models, respectively. The overall classification accuracy for the 15 models developed with the Moore County dataset varied from 74% to 91%. Statistical evaluation of these models against the Ochiltree County dataset produced an overall classification accuracy that varied from 66% to 80%. The ANN models based on TM band 5 or indices of TM Band 5 may provide consistent and accurate tillage information when applied to the Texas High Plains.

  6. NetTurnP--neural network prediction of beta-turns by use of evolutionary information and predicted protein sequence features.

    Directory of Open Access Journals (Sweden)

    Bent Petersen

    Full Text Available UNLABELLED: β-turns are the most common type of non-repetitive structures, and constitute on average 25% of the amino acids in proteins. The formation of β-turns plays an important role in protein folding, protein stability and molecular recognition processes. In this work we present the neural network method NetTurnP, for prediction of two-class β-turns and prediction of the individual β-turn types, by use of evolutionary information and predicted protein sequence features. It has been evaluated against a commonly used dataset BT426, and achieves a Matthews correlation coefficient of 0.50, which is the highest reported performance on a two-class prediction of β-turn and not-β-turn. Furthermore NetTurnP shows improved performance on some of the specific β-turn types. In the present work, neural network methods have been trained to predict β-turn or not and individual β-turn types from the primary amino acid sequence. The individual β-turn types I, I', II, II', VIII, VIa1, VIa2, VIb and IV have been predicted based on classifications by PROMOTIF, and the two-class prediction of β-turn or not is a superset comprised of all β-turn types. The performance is evaluated using a golden set of non-homologous sequences known as BT426. Our two-class prediction method achieves a performance of: MCC=0.50, Qtotal=82.1%, sensitivity=75.6%, PPV=68.8% and AUC=0.864. We have compared our performance to eleven other prediction methods that obtain Matthews correlation coefficients in the range of 0.17-0.47. For the type specific β-turn predictions, only type I and II can be predicted with reasonable Matthews correlation coefficients, where we obtain performance values of 0.36 and 0.31, respectively. CONCLUSION: The NetTurnP method has been implemented as a webserver, which is freely available at http://www.cbs.dtu.dk/services/NetTurnP/. NetTurnP is the only available webserver that allows submission of multiple sequences.

  7. Comparing digraph and Petri net approaches to deadlock avoidance in FMS.

    Science.gov (United States)

    Fanti, M P; Maione, B; Turchiano, B

    2000-01-01

    Flexible manufacturing systems (FMSs) are modern production facilities with easy adaptability to variable production plans and goals. These systems may exhibit deadlock situations occurring when a circular wait arises because each piece in a set requires a resource currently held by another job in the same set. Several authors have proposed different policies to control resource allocation in order to avoid deadlock problems. These approaches are mainly based on some formal models of manufacturing systems, such as Petri nets (PNs), directed graphs, etc. Since they describe various peculiarities of the FMS operation in a modular and systematic way, PNs are the most extensively used tool to model such systems. On the other hand, digraphs are more synthetic than PNs because their vertices are just the system resources. So, digraphs describe the interactions between jobs and resources only, while neglecting other details on the system operation. The aim of this paper is to show the tight connections between the two approaches to the deadlock problem, by proposing a unitary framework that links graph-theoretic and PN models and results. In this context, we establish a direct correspondence between the structural elements of the PN (empty siphons) and those of the digraphs (maximal-weight zero-outdegree strong components) characterizing a deadlock occurrence. The paper also shows that the avoidance policies derived from digraphs can be implemented by controlled PNs.
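
    On the digraph side, the circular-wait condition is just a cycle in the resource wait-for graph; the toy state below is invented and the check is ordinary depth-first search, not the avoidance policy developed in the paper.

```python
# Sketch: detect a circular wait in a resource wait-for digraph.
# An edge r1 -> r2 means: some job currently holding resource r1 needs r2 next.
def has_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2
    nodes = set(graph) | {w for targets in graph.values() for w in targets}
    colour = {v: WHITE for v in nodes}

    def dfs(v):
        colour[v] = GREY
        for w in graph.get(v, ()):
            if colour[w] == GREY:          # back edge: cycle, i.e. a circular wait
                return True
            if colour[w] == WHITE and dfs(w):
                return True
        colour[v] = BLACK
        return False

    return any(colour[v] == WHITE and dfs(v) for v in nodes)

# Toy FMS state: job A holds M1 and wants M2, job B holds M2 and wants M3,
# job C holds M3 and wants M1 -> circular wait, so a deadlock can occur.
wait_for = {"M1": ["M2"], "M2": ["M3"], "M3": ["M1"]}
print(has_cycle(wait_for))                  # True
```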

  8. lpNet: a linear programming approach to reconstruct signal transduction networks.

    Science.gov (United States)

    Matos, Marta R A; Knapp, Bettina; Kaderali, Lars

    2015-10-01

    With the widespread availability of high-throughput experimental technologies it has become possible to study hundreds to thousands of cellular factors simultaneously, such as coding- or non-coding mRNA or protein concentrations. Still, extracting information about the underlying regulatory or signaling interactions from these data remains a difficult challenge. We present a flexible approach towards network inference based on linear programming. Our method reconstructs the interactions of factors from a combination of perturbation/non-perturbation and steady-state/time-series data. We show both on simulated and real data that our methods are able to reconstruct the underlying networks fast and efficiently, thus shedding new light on biological processes and, in particular, on disease mechanisms of action. We have implemented the approach as an R package available through Bioconductor. This R package is freely available under the Gnu Public License (GPL-3) from bioconductor.org (http://bioconductor.org/packages/release/bioc/html/lpNet.html) and is compatible with most operating systems (Windows, Linux, Mac OS) and hardware architectures. Contact: bettina.knapp@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online.

  9. Grey Communities : A Scientometric Approach to Grey Literature, In and Outside of GreyNet

    OpenAIRE

    Prost, Hélène; Schöpfel, Joachim

    2014-01-01

    The paper explores grey communities outside the Grey Literature Network Service (GreyNet) and identifies potential members for GreyNet. GreyNet can be compared to a Learned Society or a special interest group specialised in grey literature as a particular field of library and information sciences (LIS). Its relevance is related to its capacity to enforce the terminology and definition of grey literature in LIS research and publications, and its impact and outreach can ...

  10. An Approach to Stable Gradient-Descent Adaptation of Higher Order Neural Units.

    Science.gov (United States)

    Bukovsky, Ivo; Homma, Noriyasu

    2017-09-01

    Stability evaluation of a weight-update system of higher order neural units (HONUs) with polynomial aggregation of neural inputs (also known as classes of polynomial neural networks) for adaptation of both feedforward and recurrent HONUs by a gradient descent method is introduced. An essential core of the approach is based on the spectral radius of a weight-update system, and it allows stability monitoring and its maintenance at every adaptation step individually. Assuring the stability of the weight-update system (at every single adaptation step) naturally results in the adaptation stability of the whole neural architecture that adapts to the target data. As an aside, the used approach highlights the fact that the weight optimization of HONU is a linear problem, so the proposed approach can be generally extended to any neural architecture that is linear in its adaptable parameters.
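
    The flavour of the spectral-radius criterion can be shown for the simplest linear-in-parameters case: with batch gradient descent on a quadratic error, the weight-error dynamics are governed by M = I - mu * X^T X / N, so the learning rate is shrunk whenever the spectral radius of M reaches 1. The code is only a sketch under that simplification, not the HONU formulation of the paper.

```python
import numpy as np

# Sketch: stability-monitored gradient descent for a model linear in its weights.
# The matrix M = I - mu * X^T X / N governs the weight-error update system; keeping
# its spectral radius below 1 at every adaptation step keeps the adaptation stable.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.01 * rng.standard_normal(200)

w = np.zeros(5)
mu = 3.0                                      # deliberately too large at first
H = X.T @ X / len(X)
for _ in range(200):
    rho = np.max(np.abs(np.linalg.eigvals(np.eye(5) - mu * H)))
    while rho >= 1.0:                         # restore stability before updating
        mu *= 0.5
        rho = np.max(np.abs(np.linalg.eigvals(np.eye(5) - mu * H)))
    w = w - mu * (X.T @ (X @ w - y) / len(X))

print(np.round(w - w_true, 3))                # close to zero once stabilised
```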

  11. A regional and multi-faceted approach to postgraduate water education - the WaterNet experience in Southern Africa

    Science.gov (United States)

    Jonker, L.; van der Zaag, P.; Gumbo, B.; Rockström, J.; Love, D.; Savenije, H. H. G.

    2012-11-01

    This paper reports the experience of a regional network of academic departments involved in water education that started as a project and evolved, over a period of 12 yr, into an independent network organisation. The paper pursues three objectives. First, it argues that it makes good sense to organise postgraduate education and research on water resources on a regional scale and presents the WaterNet experience as an example that a regional approach can work. Second, it presents preliminary findings and conclusions that the regional approach pursued by WaterNet did contribute to meeting the capacity needs of the region, both in terms of management and research capacity. Third, it draws two generalised lessons from the WaterNet experience. Lesson one pertains to the importance of legitimate ownership and an accountability structure for network effectiveness. Lesson two relates to the financial and intellectual resources required to jointly develop educational programmes through shared experience.

  12. Petri Net Decomposition Approach for Bi-Objective Routing for AGV Systems Minimizing Total Traveling Time and Equalizing Delivery Time

    Science.gov (United States)

    Eda, Shuhei; Nishi, Tatsushi; Mariyama, Toshisada; Kataoka, Satomi; Shoda, Kazuya; Matsumura, Katsuhiko

    In this paper, we propose a Petri net decomposition approach for bi-objective optimization of conflict-free routing for AGV systems. The objective is to minimize the total traveling time and to equalize delivery times simultaneously. The dispatching and conflict-free routing problem for AGVs is represented as a bi-objective optimal firing sequence problem for a Petri net. A Petri net decomposition approach is proposed to solve the bi-objective optimization problem efficiently. The convergence of the proposed algorithm is improved by reducing the search region with the proposed coordination method. The effectiveness of the proposed method is compared with that of a nearest-neighborhood dispatching method. Computational results are provided to show the effectiveness of the proposed method.

  13. Fuzzy neural approach for colon cancer prediction | Obi | Scientia ...

    African Journals Online (AJOL)

    ... fuzzy inference procedure. The proposed system, which is self-learning and adaptive, is able to handle the uncertainties often associated with the diagnosis and analysis of colon cancer. Keywords: Neural Network, Fuzzy logic, Neuro Fuzzy System, ...

  14. Using artificial neural network approach for modelling rainfall–runoff ...

    Indian Academy of Sciences (India)

    ... driven techniques, the artificial neural ... inputs from the environment), one or more intermediate layers and an output layer (producing the ... three-layer learning network consisting of an input layer, a hidden layer and an output layer as illus...

  15. iWordNet: A New Approach to Cognitive Science and Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    Mark Chang

    2017-01-01

    Full Text Available One of the main challenges in artificial intelligence or computational linguistics is understanding the meaning of a word or concept. We argue that the connotation of the term “understanding,” or the meaning of the word “meaning,” is merely a word mapping game due to unavoidable circular definitions. These circular definitions arise when an individual defines a concept, the concepts in its definition, and so on, eventually forming a personalized network of concepts, which we call an iWordNet. Such an iWordNet serves as an external representation of an individual’s knowledge and state of mind at the time of the network construction. As a result, “understanding” and knowledge can be regarded as a calculable statistical property of iWordNet topology. We will discuss the construction and analysis of the iWordNet, as well as the proposed “Path of Understanding” in an iWordNet that characterizes an individual’s understanding of a complex concept such as a written passage. In our pilot study of 20 subjects we used a regression model to demonstrate that the topological properties of an individual’s iWordNet are related to his IQ score, a relationship that suggests iWordNets as a potential new methodology to studying cognitive science and artificial intelligence.

  16. Quantifying Migration Behaviour Using Net Squared Displacement Approach: Clarifications and Caveats.

    Directory of Open Access Journals (Sweden)

    Navinder J Singh

    Full Text Available Estimating migration parameters of individuals and populations is vital for their conservation and management. Studies on animal movements and migration often depend upon location data from tracked animals and it is important that such data are appropriately analyzed for reliable estimates of migration and effective management of moving animals. The Net Squared Displacement (NSD) approach for modelling animal movement is being increasingly used as it can objectively quantify migration characteristics and separate different types of movements from migration. However, the ability of NSD to properly classify the movement patterns of individuals has been criticized and issues related to study design arise with respect to starting locations of the data/animals, data sampling regime and extent of movement of species. We address the issues raised over NSD using tracking data from 319 moose (Alces alces) in Sweden. Moose is an ideal species to test this approach, as it can be sedentary, nomadic, dispersing or migratory and individuals vary in their extent, timing and duration of migration. We propose a two-step process of using the NSD approach by first classifying movement modes using mean squared displacement (MSD) instead of NSD and then estimating the extent, duration and timing of migration using NSD. We show that the NSD approach is robust to the choice of starting dates except when the start date occurs during the migratory phase. We also show that the starting location of the animal has a marginal influence on the correct quantification of migration characteristics. The number of locations per day (1-48) did not significantly affect the performance of non-linear mixed effects models, which correctly distinguished migration from other movement types, however, high-resolution data had a significant negative influence on estimates for the timing of migrations. The extent of movement, however, had an effect on the classification of movements, and
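
    The displacement statistics themselves are straightforward to compute from relocation data; a hedged sketch with synthetic coordinates (not the moose data) follows.

```python
import numpy as np

# Sketch: net squared displacement (NSD) and mean squared displacement (MSD)
# computed from a track of (x, y) relocations in projected coordinates (metres).
# The synthetic "migratory" track below is invented for illustration.
def nsd(xy):
    """Squared distance of every fix from the first fix of the track."""
    return ((xy - xy[0]) ** 2).sum(axis=1)

def msd(xy, lag):
    """Mean squared displacement over a fixed lag (in number of fixes)."""
    d = xy[lag:] - xy[:-lag]
    return (d ** 2).sum(axis=1).mean()

rng = np.random.default_rng(0)
steps = rng.normal(0.0, 500.0, size=(365, 2))     # daily random-walk steps
steps[150:200] += np.array([2000.0, 1500.0])      # a directed "migration" burst
track = np.cumsum(steps, axis=0)

print(nsd(track)[[0, 100, 250]].round(0))         # jumps once migration has happened
print(round(msd(track, lag=7), 0))                # weekly-scale displacement
```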

  17. Controlling the elements: an optogenetic approach to understanding the neural circuits of fear.

    Science.gov (United States)

    Johansen, Joshua P; Wolff, Steffen B E; Lüthi, Andreas; LeDoux, Joseph E

    2012-06-15

    Neural circuits underlie our ability to interact in the world and to learn adaptively from experience. Understanding neural circuits and how circuit structure gives rise to neural firing patterns or computations is fundamental to our understanding of human experience and behavior. Fear conditioning is a powerful model system in which to study neural circuits and information processing and relate them to learning and behavior. Until recently, technological limitations have made it difficult to study the causal role of specific circuit elements during fear conditioning. However, newly developed optogenetic tools allow researchers to manipulate individual circuit components such as anatomically or molecularly defined cell populations, with high temporal precision. Applying these tools to the study of fear conditioning to control specific neural subpopulations in the fear circuit will facilitate a causal analysis of the role of these circuit elements in fear learning and memory. By combining this approach with in vivo electrophysiological recordings in awake, behaving animals, it will also be possible to determine the functional contribution of specific cell populations to neural processing in the fear circuit. As a result, the application of optogenetics to fear conditioning could shed light on how specific circuit elements contribute to neural coding and to fear learning and memory. Furthermore, this approach may reveal general rules for how circuit structure and neural coding within circuits gives rise to sensory experience and behavior. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  18. Derivation of Surface Net Radiation at the Valencia Anchor Station from Top of the Atmosphere Gerb Fluxes by Means of Linear Models and Neural Networks

    Science.gov (United States)

    Geraldo Ferreira, A.; Lopez-Baeza, Ernesto; Velazquez Blazquez, Almudena; Soria-Olivas, Emilio; Serrano Lopez, Antonio J.; Gomez Chova, Juan

    2012-07-01

    In this work, Linear Models (LM) and Artificial Neural Networks (ANN) have been developed to estimate net radiation (RN) at the surface. The models have been developed and evaluated by using the synergy between Geostationary Earth Radiation Budget (GERB-1) and Spinning Enhanced Visible and Infrared Imager (SEVIRI) data, both instruments onboard METEOSAT-9, and in situ measurements. The data used in this work, corresponding to August 2006 and June to August 2007, come from Top of the Atmosphere (TOA) broadband fluxes from GERB-1, available every 15 min, and from surface net radiation measured every 10 min at the Valencia Anchor Station (VAS) area, where the shortwave and longwave radiation components (downwelling and upwelling) were measured independently for different land uses and land covers. The adjustment of the two temporal resolutions for the satellite and in situ data was achieved by linear interpolation, which showed a smaller standard deviation than cubic interpolation. The LMs were developed and validated by using satellite TOA RN and ground station surface RN measurements, considering only cloud-free days selected from the ground data. The ANN model was developed for both cloudy and cloud-free conditions using seven input variables selected for the training/validation sets, namely, hour, day, month, surface RN, solar zenith angle and TOA shortwave and longwave fluxes. Both the LMs and the ANN show remarkably good agreement when compared to surface RN measurements. Therefore, this methodology can be successfully applied to estimate RN at the surface from GERB/SEVIRI data.

  19. Convolutional neural network approach for enhanced capture of breast parenchymal complexity patterns associated with breast cancer risk

    Science.gov (United States)

    Oustimov, Andrew; Gastounioti, Aimilia; Hsieh, Meng-Kang; Pantalone, Lauren; Conant, Emily F.; Kontos, Despina

    2017-03-01

    We assess the feasibility of a parenchymal texture feature fusion approach, utilizing a convolutional neural network (ConvNet) architecture, to benefit breast cancer risk assessment. We hypothesize that, by capturing sparse, subtle interactions between localized motifs present in two-dimensional texture feature maps derived from mammographic images, a multitude of texture feature descriptors can be optimally reduced to five meta-features capable of serving as a basis on which a linear classifier, such as logistic regression, can efficiently assess breast cancer risk. We combine this methodology with our previously validated lattice-based strategy for parenchymal texture analysis and we evaluate the feasibility of this approach in a case-control study with 424 digital mammograms. In a randomized split-sample setting, we optimize our framework in training/validation sets (N=300) and evaluate its discriminatory performance in an independent test set (N=124). The discriminatory capacity is assessed in terms of the area under the curve (AUC) of the receiver operating characteristic (ROC). The resulting meta-features exhibited strong classification capability in the test dataset (AUC = 0.90), outperforming conventional, non-fused, texture analysis which previously resulted in an AUC=0.85 on the same case-control dataset. Our results suggest that informative interactions between localized motifs exist and can be extracted and summarized via a fairly simple ConvNet architecture.

  20. Credit Risk Evaluation System: An Artificial Neural Network Approach

    African Journals Online (AJOL)

    In this paper, we proposed an architecture which uses the theory of artificial neural networks and business rules to correctly determine whether a customer is good or bad. In the first step, by using clustering algorithm, clients are segmented into groups with similar features. In the second step, decision trees are built based ...

  1. Artificial neural network approach for estimation of surface specific ...

    Indian Academy of Sciences (India)

    Microwave sensor MSMR (Multifrequency Scanning Microwave Radiometer) data onboard Oceansat-1 were used for retrieval of monthly averages of near-surface specific humidity (Qa) and air temperature (Ta) by means of an Artificial Neural Network (ANN). The MSMR measures the microwave radiances in 8 channels at ...

  2. Design of efficient and safe neural stimulators : A multidisciplinary approach

    NARCIS (Netherlands)

    Van Dongen, M.N.

    2015-01-01

    Neural stimulation is an established treatment methodology for an increasing number of diseases. Electrical Stimulation injects a stimulation signal through electrodes that are implanted in the target area of the central or peripheral nervous system in order to evoke a specific neuronal response

  3. A Neural Network Approach for GMA Butt Joint Welding

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2003-01-01

    This paper describes the application of the neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full...

  4. Artificial Neural Networks: A New Approach to Predicting Application Behavior.

    Science.gov (United States)

    Gonzalez, Julie M. Byers; DesJardins, Stephen L.

    2002-01-01

    Applied the technique of artificial neural networks to predict which students were likely to apply to one research university. Compared the results to the traditional analysis tool, logistic regression modeling. Found that the addition of artificial intelligence models was a useful new tool for predicting student application behavior. (EV)

  5. artificial neural network (ann) approach to electrical load

    African Journals Online (AJOL)

    2004-08-18

    Aug 18, 2004 ... UNIVERSITY POWER HOUSE. A.A. AKINTOLA, G.A. ADEROUNMU and O.E. ... The model was tested using two of the seven feeders of the Obafemi Awolowo University electric network. The results of ... The architecture of a neural network is the specific arrangement and connections of the neurons that ...

  6. Estimating Net Interracial Mobility in the U.S. A Residual Methods Approach

    Science.gov (United States)

    Perez, Anthony Daniel; Hirschman, Charles

    2013-01-01

    This paper presents a residual methods approach to identifying social mobility across race/ethnic categories. In traditional demographic accounting models, population growth is limited to changes in natural increase and migration. Other sources of growth are absorbed by the model residual and can only be estimated indirectly. While these residual estimates have been used to illuminate a number of elusive demographic processes, there has been little effort to incorporate shifts in racial identification into formal accounts of population change. In light of growing evidence that a number of Americans view race/ethnic identities as a personal choice, and not a fixed characteristic, mobility across racial categories may play important roles in the growth of race/ethnic sub-populations and changes to the composition of the U.S. To examine this potential, we derive a reduced-form population balancing equation that treats fertility and international migration as given and estimates survival from period life table data. After subtracting out natural increase and migration and adjusting the balance of observed growth for changes in racial measurement and census coverage, we argue that the remaining error of closure provides a reasonable estimate of net interracial mobility among the native born. Using recent Census and ACS microdata, we illustrate the impact that identity shifts may have had on the growth of race/ethnic sub-populations in the past quarter century. Findings suggest a small drift from the non-Hispanic white population into race/ethnic minority groups, though the pattern varies by age and between time periods. PMID:24078761

  7. RESTful NET

    CERN Document Server

    Flanders, Jon

    2008-01-01

    RESTful .NET is the first book that teaches Windows developers to build RESTful web services using the latest Microsoft tools. Written by Windows Communication Foundation (WCF) expert Jon Flanders, this hands-on tutorial demonstrates how you can use WCF and other components of the .NET 3.5 Framework to build, deploy and use REST-based web services in a variety of application scenarios. RESTful architecture offers a simpler approach to building web services than SOAP, SOA, and the cumbersome WS-* stack. And WCF has proven to be a flexible technology for building distributed systems not necessa

  8. A comparative performance evaluation of neural network based approach for sentiment classification of online reviews

    Directory of Open Access Journals (Sweden)

    G. Vinodhini

    2016-01-01

    Full Text Available The aim of sentiment classification is to efficiently identify the emotions expressed in the form of text messages. Machine learning methods for sentiment classification have been extensively studied, due to their predominant classification performance. Recent studies suggest that ensemble based machine learning methods provide better performance in classification. Artificial neural networks (ANNs) are rarely being investigated in the literature of sentiment classification. This paper compares neural network based sentiment classification methods (back propagation neural network (BPN), probabilistic neural network (PNN) and a homogeneous ensemble of PNN (HEN)) using varying levels of word granularity as features for feature level sentiment classification. They are validated using a dataset of product reviews collected from the Amazon reviews website. An empirical analysis is done to compare results of ANN based methods with two statistical individual methods. The methods are evaluated using five different quality measures and results show that the homogeneous ensemble of the neural network method provides better performance. Among the two neural network approaches used, probabilistic neural networks (PNNs) outperform in classifying the sentiment of the product reviews. The integration of neural network based sentiment classification methods with principal component analysis (PCA) as a feature reduction technique also provides superior performance in terms of training time.

  9. Modeling of methane emissions using artificial neural network approach

    Directory of Open Access Journals (Sweden)

    Stamenković Lidija J.

    2015-01-01

    Full Text Available The aim of this study was to develop a model for forecasting CH4 emissions at the national level, using Artificial Neural Networks (ANN) with broadly available sustainability, economic and industrial indicators as their inputs. ANN modeling was performed using two different types of architecture: a Backpropagation Neural Network (BPNN) and a General Regression Neural Network (GRNN). A conventional multiple linear regression (MLR) model was also developed in order to compare model performance and assess which model provides the best results. ANN and MLR models were developed and tested using the same annual data for 20 European countries. The ANN model demonstrated very good performance, significantly better than the MLR model. It was shown that a forecast of CH4 emissions at the national level using the ANN model can be made successfully and accurately for a future period of up to two years, thereby opening the possibility to apply such a modeling technique which can be used to support the implementation of sustainable development strategies and environmental management policies. [Project of the Ministry of Science of the Republic of Serbia, No. 172007]

  10. Culture-sensitive neural substrates of human cognition: a transcultural neuroimaging approach.

    Science.gov (United States)

    Han, Shihui; Northoff, Georg

    2008-08-01

    Our brains and minds are shaped by our experiences, which mainly occur in the context of the culture in which we develop and live. Although psychologists have provided abundant evidence for diversity of human cognition and behaviour across cultures, the question of whether the neural correlates of human cognition are also culture-dependent is often not considered by neuroscientists. However, recent transcultural neuroimaging studies have demonstrated that one's cultural background can influence the neural activity that underlies both high- and low-level cognitive functions. The findings provide a novel approach by which to distinguish culture-sensitive from culture-invariant neural mechanisms of human cognition.

  11. NetEnquiry--A Competitive Mobile Learning Approach for the Banking Sector

    Science.gov (United States)

    Beutner, Marc; Teine, Matthias; Gebbe, Marcel; Fortmann, Lara Melissa

    2016-01-01

    Initial and further education in the banking sector is becoming more and more important because regulation and the complexity of the world of work and of the international banking scene are increasing. In this article we describe the structure of, and provide information on, NetEnquiry, an innovative mobile learning environment in this field,…

  12. An Exploratory Analysis of the .Net Component Model and Uniframe Paradigm Using a Collaborative Approach

    Science.gov (United States)

    2004-08-01

    [Fragments from the report's figure list, acronym glossary and body text: "... Architecture [SIR01]"; "Figure 5.2 Communication between HH, AR and DSM in .NET URDS"; acronyms DCOM (Distributed Component Object Model), DCS (Distributed Computing System), DM (Discovery Manager), DSM (Domain Security Manager), EAI (Enterprise ...); and a passage on service descriptions with little or no human administrative intervention, mentioning Service Location Protocol (SLP) [GUT99a], JINI [SUN01a] and the Ninja project.]

  13. Extension of Petri Nets by Aspects to Apply the Model Driven Architecture Approach

    NARCIS (Netherlands)

    Roubtsova, E.E.; Aksit, Mehmet

    2005-01-01

    Within MDA models are usually created in the UML. However, one may prefer to use different notations such as Petri-nets, for example, for modelling concurrency and synchronization properties of systems. This paper claims that techniques that are adopted within the context of MDA can also be

  14. The Unification Space implemented as a localist neural net: predictions and error-tolerance in a constraint-based parser.

    Science.gov (United States)

    Vosse, Theo; Kempen, Gerard

    2009-12-01

    We introduce a novel computer implementation of the Unification-Space parser (Vosse and Kempen in Cognition 75:105-143, 2000) in the form of a localist neural network whose dynamics is based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen and Harbusch in Verb constructions in German and Dutch. Benjamins, Amsterdam, 2003), a lexicalist formalism with feature unification as binding operation. While the network is processing input word strings incrementally, the evolving shape of parse trees is represented in the form of changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least qualitatively and rudimentarily, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embeddings), fault-tolerance in case of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.
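
    The interactive activation and inhibition dynamics that drive such a localist network can be sketched generically; the weights and inputs below are made up and do not reflect the Performance Grammar wiring of the actual parser.

```python
import numpy as np

# Sketch: interactive activation and competition among localist nodes.
# Nodes receive bottom-up support, inhibit their competitors, decay toward rest,
# and are kept inside [min_a, max_a].
def iac_step(a, W, ext, decay=0.1, rest=0.0, min_a=-0.2, max_a=1.0):
    net = W @ a + ext                                       # net input to every node
    grow = net > 0
    da = np.where(grow, net * (max_a - a), net * (a - min_a))
    return np.clip(a + da - decay * (a - rest), min_a, max_a)

# Three candidate attachments competing for the same grammatical function:
# small self-excitation on the diagonal, mutual inhibition off the diagonal.
W = np.array([[ 0.1, -0.4, -0.4],
              [-0.4,  0.1, -0.4],
              [-0.4, -0.4,  0.1]])
ext = np.array([0.40, 0.20, 0.05])                          # bottom-up lexical support
a = np.zeros(3)
for _ in range(100):
    a = iac_step(a, W, ext)
print(a.round(2))       # the best-supported candidate ends up dominating
```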

  15. A Neural Network Approach for GMA Butt Joint Welding

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2003-01-01

    This paper describes the application of the neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full...... penetration, when the gap width is varying during the welding process. The process modeling to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters has been based on a multi-layer feed-forward network. The Levenberg-Marquardt algorithm for non-linear least...

  16. A neural network approach to lung nodule segmentation

    Science.gov (United States)

    Hu, Yaoxiu; Menon, Prahlad G.

    2016-03-01

    Computed tomography (CT) imaging is a sensitive and specific lung cancer screening tool for the high-risk population and has been shown to be promising for detection of lung cancer. This study proposes an automatic methodology for detecting and segmenting lung nodules from CT images. The proposed methods begin with thorax segmentation, lung extraction and reconstruction of the original shape of the parenchyma using morphology operations. Next, a multi-scale hessian-based vesselness filter is applied to extract the lung vasculature. The lung vasculature mask is subtracted from the lung region segmentation mask to extract 3D regions representing candidate pulmonary nodules. Finally, the remaining structures are classified as nodules through shape and intensity features which are together used to train an artificial neural network. Up to 75% sensitivity and 98% specificity were achieved for detection of lung nodules in our testing dataset, with an overall accuracy of 97.62%+/-0.72% using 11 selected features as input to the neural network classifier, based on 4-fold cross-validation studies. Receiver operating characteristic analysis for identifying nodules revealed an area under the curve of 0.9476.

  17. Robust CDMA multiuser detection using a neural-network approach.

    Science.gov (United States)

    Chuah, Teong Chee; Sharif, B S; Hinton, O R

    2002-01-01

    Recently, a robust version of the linear decorrelating detector (LDD) based on Huber's M-estimation technique has been proposed. In this paper, we first demonstrate the use of a three-layer recurrent neural network (RNN) to implement the LDD without requiring matrix inversion. The key idea is based on minimizing an appropriate computational energy function iteratively. Second, it will be shown that the M-decorrelating detector (MDD) can be implemented by simply incorporating sigmoidal neurons in the first layer of the RNN. A proof of the redundancy of the matrix inversion process is provided and the computational saving in a realistic network is highlighted. Third, we illustrate how further performance gain could be achieved for the subspace-based blind MDD by using robust estimates of the signal subspace components in the initial stage. The impulsive noise is modeled using non-Gaussian alpha-stable distributions, which do not include a Gaussian component but facilitate the use of the recently proposed geometric signal-to-noise ratio (G-SNR). The characteristics and performance of the proposed neural-network detectors are investigated by computer simulation.

  18. TUTORIAL: The dynamic neural field approach to cognitive robotics

    Science.gov (United States)

    Erlhagen, Wolfram; Bicho, Estela

    2006-09-01

    This tutorial presents an architecture for autonomous robots to generate behavior in joint action tasks. To efficiently interact with another agent in solving a mutual task, a robot should be endowed with cognitive skills such as memory, decision making, action understanding and prediction. The proposed architecture is strongly inspired by our current understanding of the processing principles and the neuronal circuitry underlying these functionalities in the primate brain. As a mathematical framework, we use a coupled system of dynamic neural fields, each representing the basic functionality of neuronal populations in different brain areas. It implements goal-directed behavior in joint action as a continuous process that builds on the interpretation of observed movements in terms of the partner's action goal. We validate the architecture in two experimental paradigms: (1) a joint search task; (2) a reproduction of an observed or inferred end state of a grasping-placing sequence. We also review some of the mathematical results about dynamic neural fields that are important for the implementation work. .
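
    The basic building block of such architectures is an Amari-type field equation, tau * du(x,t)/dt = -u(x,t) + integral of w(x - x') f(u(x',t)) dx' + S(x,t) + h. Below is a minimal 1-D Euler-integrated sketch; the kernel shape, resting level and input are arbitrary choices, not the coupled fields of the reviewed architecture.

```python
import numpy as np

# Sketch: 1-D dynamic neural field (Amari equation) with a Mexican-hat kernel,
# integrated with the explicit Euler method. A localized input drives a peak
# of activation at the stimulated field site.
n, dx, dt, tau, h = 200, 0.1, 0.05, 1.0, -2.0
x = np.arange(n) * dx
d = np.abs(x[:, None] - x[None, :])
w = 4.0 * np.exp(-d**2 / (2 * 0.5**2)) - 1.5 * np.exp(-d**2 / (2 * 2.0**2))
f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * u))        # sigmoidal rate function

S = 3.0 * np.exp(-(x - 10.0)**2 / (2 * 0.8**2))     # localized external input at x = 10
u = np.full(n, h)                                   # field starts at the resting level
for _ in range(600):
    u = u + (dt / tau) * (-u + (w @ f(u)) * dx + S + h)

print(round(float(u.max()), 2), float(x[np.argmax(u)]))   # activation peak at the input site
```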

  19. A Neural Network Approach to Fluid Quantity Measurement in Dynamic Environments

    CERN Document Server

    Terzic, Edin; Nagarajah, Romesh; Alamgir, Muhammad

    2012-01-01

    Sloshing causes liquid to fluctuate, making accurate level readings difficult to obtain in dynamic environments. The measurement system described uses a single-tube capacitive sensor to obtain an instantaneous level reading of the fluid surface, thereby accurately determining the fluid quantity in the presence of slosh. A neural network based classification technique has been applied to predict the actual quantity of the fluid contained in a tank under sloshing conditions.   In A neural network approach to fluid quantity measurement in dynamic environments, effects of temperature variations and contamination on the capacitive sensor are discussed, and the authors propose that these effects can also be eliminated with the proposed neural network based classification system. To examine the performance of the classification system, many field trials were carried out on a running vehicle at various tank volume levels that range from 5 L to 50 L. The effectiveness of signal enhancement on the neural network base...

  20. An Organizational Approach to the Polysemy Problem in WordNet

    OpenAIRE

    Freihat, Abed Alhakim

    2014-01-01

    Polysemy in WordNet corresponds to various kinds of linguistic phenomena that can be grouped into five classes. One of them is homonymy, which refers to the cases where the meanings of a term are unrelated, and three of the classes refer to the polysemy cases where the meanings of a term are related. These three classes are specialization polysemy, metonymy, and metaphoric polysemy. Another polysemy class is the compound noun polysemy. In this thesis, we focus on compound noun polysemy ...

  1. Context-Aware Mobility Management in HetNets: A Reinforcement Learning Approach

    OpenAIRE

    Simsek, Meryem; Bennis, Mehdi; Güvenc, Ismail

    2015-01-01

    The use of small cell deployments in heterogeneous network (HetNet) environments is expected to be a key feature of 4G networks and beyond, and essential for providing higher user throughput and cell-edge coverage. However, due to different coverage sizes of macro and pico base stations (BSs), such a paradigm shift introduces additional requirements and challenges in dense networks. Among these challenges is the handover performance of user equipment (UEs), which will be impacted especially w...

  2. Equilibrium Signal and Purchase Decision in China’s IPO Net Roadshow: A Dynamic Game Approach

    Directory of Open Access Journals (Sweden)

    Yi Zhao

    2016-01-01

    Full Text Available The net roadshow has been dominant in China's IPO (initial public offerings) roadshow structure. Considering the dynamic game with incomplete information between the issuer and investor during China's IPO net roadshow, the quality of the letter of intent is presented as a discrete signal in this paper in accordance with China's IPO net roadshow characteristics. A signaling game model is established to derive the issuer's equilibrium signal and the investor's purchase action. The issuer disguises the letter of intent to uplift its quality if the disguising cost per share stands below the bidding spread. If the investor judges the letter of intent as high-quality, the basis of purchase is that the opportunity cost per share is less than the expectation of the intrinsic value of the IPO stock. Otherwise the investor rejects purchasing on the condition that the opportunity cost exceeds the valuation of intrinsic value. In conclusion, there exist a unique separating equilibrium and a unique pooling equilibrium as perfect Bayesian Nash equilibria, and the existence and uniqueness of their equilibrium domains have been verified by numerical simulation. Finally, comprehensive empirical studies have validated that only one separating and one pooling equilibrium exist in China's real-world IPO market.

  3. An artificial neural networks approach in managing healthcare.

    Science.gov (United States)

    Okoroh, Michael Iheoma; Ilozor, Benedict Dozie; Gombera, Peter

    2007-01-01

    Hospitals as learning organisations have evolved through complex phases of service failures and continuous service improvement to meet the business needs of a varied continuum of care customers. This paper explores the use of Artificial Neural Network (ANN) in the development of a decision support system to manage healthcare non-clinical services. The information (postal questionnaires and repertory grid interviews) used to develop the input to the National Healthcare Service Facilities Risk Exposure System (NHSFRES) was articulated from 60 experienced healthcare operators. The system provides a reasonable early warning signal to the healthcare managers, and can be used by decision makers to evaluate the severity of risks on healthcare non clinical business operations. The advantage of using NHSFRES is that healthcare managers can provide their own risk assessment values (point score system) based on their own healthcare management business knowledge/judgement and corporate objectives.

  4. A Neural Network Approach for GMA Butt Joint Welding

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2003-01-01

    This paper describes the application of the neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full...... penetration, when the gap width is varying during the welding process. The process modeling to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters has been based on a multi-layer feed-forward network. The Levenberg-Marquardt algorithm for non-linear least...... squares has been used with the back-propagation algorithm for training the network, while a Bayesian regularization technique has been successfully applied for minimizing the risk of inexpedient over-training. Finally, a predictive closed-loop control strategy based on a so-called single-neuron self...
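
    As a rough sketch of the modeling step described above (not the authors' implementation), the snippet below fits a one-hidden-layer feed-forward net with Levenberg-Marquardt nonlinear least squares; the gap-width/welding-current data is synthetic and the Bayesian regularization is omitted.

        # Minimal sketch: one-hidden-layer net mapping a joint-geometry feature
        # (gap width) to a welding parameter, fitted by Levenberg-Marquardt.
        # The training data below is synthetic.
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(1)
        gap = rng.uniform(0.0, 3.0, 200)                                  # gap width [mm] (synthetic)
        current = 180 + 25 * np.tanh(gap - 1.5) + rng.normal(0, 2, 200)   # target parameter (synthetic)

        n_hidden = 8
        def unpack(p):
            w1, b1, w2, b2 = np.split(p, [n_hidden, 2 * n_hidden, 3 * n_hidden])
            return w1, b1, w2, b2[0]

        def predict(p, x):
            w1, b1, w2, b2 = unpack(p)
            h = np.tanh(np.outer(x, w1) + b1)            # hidden layer
            return h @ w2 + b2                           # linear output unit

        def residuals(p):
            return predict(p, gap) - current

        p0 = rng.normal(0, 0.5, 3 * n_hidden + 1)
        fit = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt
        print("RMS error:", np.sqrt(np.mean(fit.fun ** 2)))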

  5. Investigation of tt in the full hadronic final state at CDF with a neural network approach

    CERN Document Server

    Sidoti, A; Busetto, G; Castro, A; Dusini, S; Lazzizzera, I; Wyss, J

    2001-01-01

    In this work we present the results of a neural network (NN) approach to the measurement of the tt production cross-section and top mass in the all-hadronic channel, analyzing data collected at the Collider Detector at Fermilab (CDF) experiment. We have used a hardware implementation of a feedforward neural network, TOTEM, the product of a collaboration of INFN (Istituto Nazionale Fisica Nucleare)-IRST (Istituto per la Ricerca Scientifica e Tecnologica)-University of Trento, Italy. Particular attention has been paid to the evaluation of the systematics specifically related to the NN approach. The results are consistent with those obtained at CDF by conventional data selection techniques. (38 refs).

  6. Adaptive Critic Neural Network-Based Terminal Area Energy Management and Approach and Landing Guidance

    Science.gov (United States)

    Grantham, Katie

    2003-01-01

    Reusable Launch Vehicles (RLVs) have different mission requirements than the Space Shuttle, which is used for benchmark guidance design. Therefore, alternative Terminal Area Energy Management (TAEM) and Approach and Landing (A/L) Guidance schemes can be examined in the interest of cost reduction. A neural network based solution for a finite horizon trajectory optimization problem is presented in this paper. In this approach the optimal trajectory of the vehicle is produced by adaptive critic based neural networks, which were trained off-line to maintain a gradual glideslope.

  7. Inverse Reliability Task: Artificial Neural Networks and Reliability-Based Optimization Approaches

    OpenAIRE

    Lehký, David; Slowik, Ondřej; Novák, Drahomír

    2014-01-01

    The paper presents two alternative approaches to solving the inverse reliability task – determining the design parameters that achieve desired target reliabilities. The first approach is based on the utilization of artificial neural networks and small-sample simulation Latin hypercube sampling. The second approach treats the inverse reliability task as a reliability-based optimization task using a double-loop method, again with small-sample simulation. Efficie...

  8. Technical measures without enforcement tools: is there any sense? A methodological approach for the estimation of passive net length in small scale fisheries

    Directory of Open Access Journals (Sweden)

    A. LUCCHETTI

    2014-09-01

    Full Text Available Passive nets are currently among the most important fishing gears largely used along the Mediterranean coasts by the small scale fisheries sector. The fishing effort exerted by this sector is strongly correlated with net dimensions. Therefore, the use of passive nets is managed worldwide by defining net length and net drop. The EC Reg. 1967/2006 reports that the length of bottom-set and drifting nets may also be defined considering their weight or volume; however, no practical suggestions for fisheries inspectors are yet available. Consequently, even if such technical measures are reasonable from a theoretical viewpoint, they are hardly suitable as a management tool, due to the difficulties in harbour control. The overall objective of this paper is to provide a quick methodological approach for the gross estimation of passive net length (by net type) on the basis of net volume. The final goal is to support fisheries managers with suitable advice for enforcement and control purposes. The results obtained are important for the management of the fishing effort exerted by small scale fisheries. The methodology developed in this study should be considered as a first attempt to tackle the tangled problem of net length estimation; it can easily be applied in other fisheries and areas in order to improve the precision of the models developed herein.

  9. Intercomparisons of Prognostic, Diagnostic, and Inversion Modeling Approaches for Estimation of Net Ecosystem Exchange over the Pacific Northwest Region

    Science.gov (United States)

    Turner, D. P.; Jacobson, A. R.; Nemani, R. R.

    2013-12-01

    The recent development of large spatially-explicit datasets for multiple variables relevant to monitoring terrestrial carbon flux offers the opportunity to estimate the terrestrial land flux using several alternative, potentially complementary, approaches. Here we developed and compared regional estimates of net ecosystem exchange (NEE) over the Pacific Northwest region of the U.S. using three approaches. In the prognostic modeling approach, the process-based Biome-BGC model was driven by distributed meteorological station data and was informed by Landsat-based coverages of forest stand age and disturbance regime. In the diagnostic modeling approach, the quasi-mechanistic CFLUX model estimated net ecosystem production (NEP) by upscaling eddy covariance flux tower observations. The model was driven by distributed climate data and MODIS FPAR (the fraction of incident PAR that is absorbed by the vegetation canopy). It was informed by coarse resolution (1 km) data about forest stand age. In both the prognostic and diagnostic modeling approaches, emissions estimates for biomass burning, harvested products, and river/stream evasion were added to model-based NEP to get NEE. The inversion model (CarbonTracker) relied on observations of atmospheric CO2 concentration to optimize prior surface carbon flux estimates. The Pacific Northwest is heterogeneous with respect to land cover and forest management, and repeated surveys of forest inventory plots support the presence of a strong regional carbon sink. The diagnostic model suggested a stronger carbon sink than the prognostic model, and a much larger sink than the inversion model. The introduction of Landsat data on disturbance history served to reduce uncertainty with respect to regional NEE in the diagnostic and prognostic modeling approaches. The FPAR data was particularly helpful in capturing the seasonality of the carbon flux using the diagnostic modeling approach. The inversion approach took advantage of a global

  10. Social power and approach-related neural activity

    NARCIS (Netherlands)

    M.A.S. Boksem (Maarten); R. Smolders (Ruud); D. de Cremer (David)

    2009-01-01

    It has been argued that power activates a general tendency to approach whereas powerlessness activates a tendency to inhibit. The assumption is that elevated power involves reward-rich environments, freedom and, as a consequence, triggers an approach-related motivational orientation and

  11. Neural change following different memory training approaches in very preterm born children - A pilot study.

    Science.gov (United States)

    Everts, Regula; Mürner-Lavanchy, Ines; Schroth, Gerhard; Steinlin, Maja

    2017-01-01

    There is mixed evidence regarding neural change following cognitive training. Brain activation increase, decrease, or a combination of both may occur. We investigated training-induced neural change using two different memory training approaches. Very preterm born children (aged 7-12 years) were randomly allocated to a memory strategy training, an intensive working memory practice or a waiting control group. Before and immediately after the trainings and the waiting period, brain activation during a visual working memory task was measured using fMRI and cognitive performance was assessed. Following both memory trainings, there was a significant decrease of fronto-parietal brain activation and a significant increase of memory performance. In the control group, no neural or performance change occurred after the waiting period. These pilot data point towards a training-related decrease of brain activation, independent of the training approach. Our data highlight the high training-induced plasticity of the child's brain during development.

  12. Approach to design neural cryptography: a generalized architecture and a heuristic rule.

    Science.gov (United States)

    Mu, Nankun; Liao, Xiaofeng; Huang, Tingwen

    2013-06-01

    Neural cryptography, a type of public key exchange protocol, is widely considered an effective method for sharing a common secret key between two neural networks over public channels. How to design neural cryptography remains a great challenge. In this paper, in order to provide an approach to this challenge, a generalized network architecture and a significant heuristic rule are designed. The proposed generic framework is named the tree state classification machine (TSCM), which extends and unifies the existing structures, i.e., the tree parity machine (TPM) and the tree committee machine (TCM). Furthermore, we carefully study and find that the heuristic rule can improve the security of TSCM-based neural cryptography. Therefore, TSCM and the heuristic rule can guide the design of a large number of effective neural cryptography candidates, among which more secure instances can be achieved. Significantly, in light of TSCM and the heuristic rule, we further show that the designed neural cryptography outperforms TPM (the most secure model at present) in terms of security. Finally, a series of numerical simulation experiments is provided to verify the validity and applicability of our results.
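
    Since the abstract does not give the TSCM update rules, the sketch below shows the classical tree parity machine (TPM) synchronization that TSCM generalizes; the parameter values (K, N, L) are illustrative only.

        # Sketch of classical TPM synchronization (the structure TSCM generalizes);
        # parameters are illustrative, not taken from the paper.
        import numpy as np

        K, N, L = 3, 100, 3          # hidden units, inputs per unit, weight bound
        rng = np.random.default_rng(42)

        def output(w, x):
            sigma = np.sign(np.sum(w * x, axis=1))
            sigma[sigma == 0] = -1            # break ties
            return sigma, int(np.prod(sigma))

        def hebbian(w, x, sigma, tau):
            # update only hidden units that agree with the common output, then clip
            for k in range(K):
                if sigma[k] == tau:
                    w[k] = np.clip(w[k] + tau * x[k], -L, L)

        wA = rng.integers(-L, L + 1, (K, N))
        wB = rng.integers(-L, L + 1, (K, N))
        steps = 0
        while not np.array_equal(wA, wB):
            x = rng.choice([-1, 1], (K, N))           # public random input
            sA, tauA = output(wA, x)
            sB, tauB = output(wB, x)
            if tauA == tauB:                          # only mutual agreement triggers learning
                hebbian(wA, x, sA, tauA)
                hebbian(wB, x, sB, tauB)
            steps += 1

        print("synchronized after", steps, "exchanged inputs")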

  13. Validation of protein models by a neural network approach

    Directory of Open Access Journals (Sweden)

    Fantucci Piercarlo

    2008-01-01

    Full Text Available Abstract Background The development and improvement of reliable computational methods designed to evaluate the quality of protein models is relevant in the context of protein structure refinement, which has been recently identified as one of the bottlenecks limiting the quality and usefulness of protein structure prediction. Results In this contribution, we present a computational method (Artificial Intelligence Decoys Evaluator: AIDE) which is able to consistently discriminate between correct and incorrect protein models. In particular, the method is based on neural networks that use as input 15 structural parameters, which include energy, solvent accessible surface, hydrophobic contacts and secondary structure content. The results obtained with AIDE on a set of decoy structures were evaluated using statistical indicators such as Pearson correlation coefficients, Znat, fraction enrichment, as well as ROC plots. It turned out that AIDE's performance is comparable and often complementary to available state-of-the-art learning-based methods. Conclusion In light of the results obtained with AIDE, as well as its comparison with available learning-based methods, it can be concluded that AIDE can be successfully used to evaluate the quality of protein structures. The use of AIDE in combination with other evaluation tools is expected to further enhance protein refinement efforts.

  14. Simple Electromagnetic Modeling of Small Airplanes: Neural Network Approach

    Directory of Open Access Journals (Sweden)

    P. Tobola

    2009-04-01

    Full Text Available The paper deals with the development of simple electromagnetic models of small airplanes, which can contain composite materials in their construction. Electromagnetic waves can penetrate through the surface of the aircraft due to the specific electromagnetic properties of the composite materials, which can increase the intensity of fields inside the airplane and can negatively influence the functionality of the sensitive avionics. The airplane is simulated by two parallel dielectric layers (the left-hand side wall and the right-hand side wall of the airplane). The layers are put into a rectangular metallic waveguide terminated by an absorber in order to simulate the illumination of the airplane by an external wave (of both harmonic and pulsed nature). Thanks to the simplicity of the model, a parametric analysis can be performed, and the results can be used to train an artificial neural network. The trained networks further reduce the CPU-time demands of airplane modeling.

  15. Modeling and Optimizing Energy Utilization of Steel Production Process: A Hybrid Petri Net Approach

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2013-01-01

    Full Text Available The steel industry is responsible for nearly 9% of anthropogenic energy utilization in the world. It is urgent to reduce the total energy utilization of the steel industry under the huge pressure to reduce energy consumption and CO2 emissions. Meanwhile, steel manufacturing is a typical continuous-discrete process in which multiple procedures, objects, constraints, and machines are coupled, which makes energy management rather difficult. In order to study the energy flow within the real steel production process, this paper presents a new modeling and optimization method for the process based on Hybrid Petri Nets (HPN) in consideration of the situation above. Firstly, we introduce a detailed description of HPN. Then the real steel production process from one typical integrated steel plant is transformed into a Hybrid Petri Net model as a case. Furthermore, we obtain a series of constraints for our optimization model from this model. In consideration of the real process situation, we take steel production, energy efficiency, and self-made gas surplus as the main optimization objectives in this paper. Afterwards, a fuzzy linear programming method is applied to obtain the multiobjective optimization results. Finally, some measures are suggested to improve this low-efficiency, high-cost process structure.

  16. A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets

    Directory of Open Access Journals (Sweden)

    Jie-Hao Chen

    2017-01-01

    Full Text Available In recent years, with the rapid development of the mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising; it is used to achieve accurate advertisement delivery for the best benefit in the three-sided game between media, advertisers, and audiences. Current research on CTR estimation mainly uses machine learning methods and models, such as linear models or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationships between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines the powerful data representation and feature extraction capability of Deep Belief Nets with the simplicity of traditional Logistic Regression models. Based on a training dataset with information on over 40 million mobile advertisements during a period of 10 days, our experiments show that our new model has better estimation accuracy than the classic Logistic Regression (LR) model by 5.57% and the Support Vector Regression (SVR) model by 5.80%.
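
    A hedged stand-in for the idea of RBM-style features feeding a logistic-regression CTR predictor, using scikit-learn; a real Deep Belief Net stacks several RBMs and is trained on real ad logs, whereas the binary features and click labels below are synthetic.

        # Rough stand-in: unsupervised RBM features feeding a logistic-regression
        # click classifier. A full DBN would stack several RBMs; data is synthetic.
        import numpy as np
        from sklearn.neural_network import BernoulliRBM
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import Pipeline
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(5000, 50)).astype(float)   # binary ad/user features (synthetic)
        y = (X[:, :5].sum(axis=1) + rng.normal(0, 0.5, 5000) > 2.5).astype(int)  # clicks (synthetic)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        model = Pipeline([
            ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
            ("lr", LogisticRegression(max_iter=1000)),
        ])
        model.fit(X_tr, y_tr)
        print("held-out accuracy:", model.score(X_te, y_te))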

  17. A two-step approach to estimating selectivity and fishing power of research gill nets used in Greenland waters

    DEFF Research Database (Denmark)

    Hovgård, Holger

    1996-01-01

    Catches of Atlantic cod (Gadus morhua) from Greenland gill-net surveys were analyzed by a two-step approach. In the initial step the form of the selection curve was identified as binormal, which was caused by fish being gilled or caught by the maxillae. Both capture processes could be described...... by normal distributions and could be related to mesh size in accordance with the principle of geometrical similarity. In the second step the selection parameters were estimated by a nonlinear least squares fit. The model also estimated the relative efficiency of the two capture processes and the fishing...
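
    A minimal sketch of fitting a binormal (two-mode) selection curve by nonlinear least squares, in the spirit of the second step described above; the length range, parameter values and retention data are synthetic, not the paper's.

        # Illustrative only: fitting a binormal selection curve to synthetic
        # relative-retention data; parameter values are invented.
        import numpy as np
        from scipy.optimize import curve_fit

        def binormal(length, mu1, sd1, mu2, sd2, c):
            """Retention = gilling mode + (scaled) maxillae-capture mode."""
            g = np.exp(-0.5 * ((length - mu1) / sd1) ** 2)
            m = c * np.exp(-0.5 * ((length - mu2) / sd2) ** 2)
            return g + m

        rng = np.random.default_rng(3)
        length = np.linspace(30, 90, 60)                         # cod length [cm]
        true = binormal(length, 48, 4, 62, 6, 0.4)
        observed = np.clip(true + rng.normal(0, 0.03, length.size), 0, None)

        p0 = [45, 5, 60, 5, 0.5]
        params, _ = curve_fit(binormal, length, observed, p0=p0)
        print("fitted (mu1, sd1, mu2, sd2, c):", np.round(params, 2))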

  18. Knowledge base and neural network approach for protein secondary structure prediction.

    Science.gov (United States)

    Patel, Maulika S; Mazumdar, Himanshu S

    2014-11-21

    Protein structure prediction is of great relevance given the abundant genomic and proteomic data generated by the genome sequencing projects. Protein secondary structure prediction is addressed as a subtask in determining the protein tertiary structure and function. In this paper, a novel algorithm, KB-PROSSP-NN, which combines a knowledge base with neural-network modeling of the exceptions in the knowledge base, is proposed for protein secondary structure prediction (PSSP). The knowledge base is derived from a proteomic sequence-structure database and consists of the statistics of association between 5-residue words and the corresponding secondary structure. The predicted results obtained using the knowledge base are refined with a backpropagation neural network algorithm; the neural net models the exceptions of the knowledge base. Q3 accuracies of 90% and 82% are achieved on the RS126 and CB396 test sets respectively, which suggests an improvement over existing state-of-the-art methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. A Monte Carlo EM approach for partially observable diffusion processes: theory and applications to neural networks.

    Science.gov (United States)

    Movellan, Javier R; Mineiro, Paul; Williams, R J

    2002-07-01

    We present a Monte Carlo approach for training partially observable diffusion processes. We apply the approach to diffusion networks, a stochastic version of continuous recurrent neural networks. The approach is aimed at learning probability distributions of continuous paths, not just expected values. Interestingly, the relevant activation statistics used by the learning rule presented here are inner products in the Hilbert space of square integrable functions. These inner products can be computed using Hebbian operations and do not require backpropagation of error signals. Moreover, standard kernel methods could potentially be applied to compute such inner products. We propose that the main reason that recurrent neural networks have not worked well in engineering applications (e.g., speech recognition) is that they implicitly rely on a very simplistic likelihood model. The diffusion network approach proposed here is much richer and may open new avenues for applications of recurrent neural networks. We present some analysis and simulations to support this view. Very encouraging results were obtained on a visual speech recognition task in which neural networks outperformed hidden Markov models.

  20. Using a multi-port architecture of neural-net associative memory based on the equivalency paradigm for parallel cluster image analysis and self-learning

    Science.gov (United States)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Grabovlyak, Sveta K.; Nikitovich, Diana V.

    2013-01-01

    We consider equivalency models, including matrix-matrix and matrix-tensor models with dual adaptive-weighted correlation, for multi-port neural-net auto-associative and hetero-associative memory (MP NN AAM and HAP), which constitute the equivalency paradigm and the theoretical basis of our work. We give a brief overview of possible implementations of the MP NN AAM and of the architectures we proposed and investigated earlier. The main building block of such architectures is a matrix-matrix or matrix-tensor equivalentor. We show that MP NN AAM based on the equivalency paradigm and on optoelectronic architectures with space-time integration and parallel-serial 2D image processing have advantages such as increased memory capacity (more than ten times the number of neurons!), high performance in different modes (10^10 - 10^12 connections per second!), and the ability to process, store and associatively recognize highly correlated images. Next, we show that, with minor modifications, such MP NN AAM can be successfully used for high-performance parallel cluster processing of images. We present simulation results of using these modifications for clustering and learning models and algorithms for cluster analysis of specific images, dividing them into categories. An example is given of dividing 32 images (40x32 pixels) of letters and graphics into 12 clusters, with simultaneous formation of output-weighted images allocated to each cluster. We discuss algorithms for learning and self-learning in such structures, and comparative evaluations based on Mathcad simulations are made. It is shown that, unlike traditional Kohonen self-organizing maps, the learning time in the proposed multi-port neural-net classifier/clusterizer (MP NN C) structures based on the equivalency paradigm decreases by orders of magnitude owing to their multi-port nature and can, in some cases, be just a few epochs. Estimates show that in the test clustering of 32 1280-element images into 12

  1. Parallel Approach for Time Series Analysis with General Regression Neural Networks

    Directory of Open Access Journals (Sweden)

    J.C. Cuevas-Tello

    2012-04-01

    Full Text Available The accuracy of time delay estimation given pairs of irregularly sampled time series is of great relevance in astrophysics. However, the computational time is also important because the study of large data sets is needed. Besides introducing a new approach for time delay estimation, this paper presents a parallel approach to obtain a fast algorithm for time delay estimation. The neural network architecture that we use is the General Regression Neural Network (GRNN). For the parallel approach, we use the Message Passing Interface (MPI) on a beowulf-type cluster and on a Cray supercomputer, and we also use the Compute Unified Device Architecture (CUDA™) language on Graphics Processing Units (GPUs). We demonstrate that, with our approach, fast algorithms can be obtained for time delay estimation on large data sets with the same accuracy as state-of-the-art methods.
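
    For reference, a GRNN prediction is essentially Gaussian-kernel-weighted averaging of the training targets. The minimal serial version below illustrates this; the paper's actual contribution, the MPI/CUDA parallelization, is not shown, and the irregularly sampled signal is synthetic.

        # Minimal serial GRNN (Nadaraya-Watson form); the parallelization discussed
        # in the paper is omitted here.
        import numpy as np

        def grnn_predict(x_train, y_train, x_query, sigma=1.0):
            # squared distances between every query point and every training point
            d2 = (x_query[:, None] - x_train[None, :]) ** 2
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return (w @ y_train) / w.sum(axis=1)

        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0, 10, 200))            # irregularly sampled times (synthetic)
        signal = np.sin(t) + rng.normal(0, 0.1, t.size)
        query = np.linspace(0, 10, 50)
        print(grnn_predict(t, signal, query, sigma=0.3)[:5])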

  2. A Petri Net-based Approach to Reconfigurable Manufacturing Systems Modeling

    Directory of Open Access Journals (Sweden)

    Brian Rodrigues

    2009-02-01

    Full Text Available Reconfigurable manufacturing systems (RMSs) have been used to provide manufacturing companies with the required capacities and capabilities, when needed. Recognizing (1) the importance of dynamic modeling and visualization in decision making support in RMSs and (2) the limitations of the existing studies, we model RMSs based on Petri net (PN) techniques with a focus on the process of reconfiguring system elements while considering constraints and system performance. In response to the modeling difficulties identified, a new formalism of colored timed PNs is introduced. In conjunction with colored tokens and timing in colored PNs and timed PNs, we further define a reconfiguration mechanism to address the modeling difficulties. A case study of an electronics product is reported as an application of the proposed colored timed PNs to RMS modeling.

  3. Mining and biodiversity offsets: a transparent and science-based approach to measure "no-net-loss".

    Science.gov (United States)

    Virah-Sawmy, Malika; Ebeling, Johannes; Taplin, Roslyn

    2014-10-01

    Mining and associated infrastructure developments can present themselves as economic opportunities that are difficult to forego for developing and industrialised countries alike. Almost inevitably, however, they lead to biodiversity loss. This trade-off can be greatest in economically poor but highly biodiverse regions. Biodiversity offsets have, therefore, increasingly been promoted as a mechanism to help achieve both the aims of development and biodiversity conservation. Accordingly, this mechanism is emerging as a key tool for multinational mining companies to demonstrate good environmental stewardship. Relying on offsets to achieve "no-net-loss" of biodiversity, however, requires certainty in their ecological integrity where they are used to sanction habitat destruction. Here, we discuss real-world practices in biodiversity offsetting by assessing how well some leading initiatives internationally integrate critical aspects of biodiversity attributes, net loss accounting and project management. With the aim of improving, rather than merely critiquing the approach, we analyse different aspects of biodiversity offsetting. Further, we analyse the potential pitfalls of developing counterfactual scenarios of biodiversity loss or gains in a project's absence. In this, we draw on insights from experience with carbon offsetting. This informs our discussion of realistic projections of project effectiveness and permanence of benefits to ensure no net losses, and the risk of displacing, rather than avoiding biodiversity losses ("leakage"). We show that the most prominent existing biodiversity offset initiatives employ broad and somewhat arbitrary parameters to measure habitat value and do not sufficiently consider real-world challenges in compensating losses in an effective and lasting manner. We propose a more transparent and science-based approach, supported with a new formula, to help design biodiversity offsets to realise their potential in enabling more responsible

  4. A Review of Neural Network Based Machine Learning Approaches for Rotor Angle Stability Control

    OpenAIRE

    Yousefian, Reza; Kamalasadan, Sukumar

    2017-01-01

    This paper reviews the current status and challenges of Neural Network (NN) based machine learning approaches for modern power grid stability control, including their design and implementation methodologies. NNs are widely accepted as Artificial Intelligence (AI) approaches offering an alternative way to control complex and ill-defined problems. In this paper, various applications of NNs for the power system rotor angle stabilization and control problem are discussed. The main focus of this paper i...

  5. Neural activity underlying motor-action preparation and cognitive narrowing in approach-motivated goal states.

    Science.gov (United States)

    Gable, Philip A; Threadgill, A Hunter; Adams, David L

    2016-02-01

    High-approach-motivated (pre-goal) positive affect states encourage tenacious goal pursuit and narrow cognitive scope. As such, high approach-motivated states likely enhance the neural correlates of motor-action preparation to aid in goal acquisition. These neural correlates may also relate to the cognitive narrowing associated with high approach-motivated states. In the present study, we investigated motor-action preparation during pre-goal and post-goal states using an index of beta suppression over the motor cortex. The results revealed that beta suppression was greatest in pre-goal positive states, suggesting that higher levels of motor-action preparation occur during high approach-motivated positive states. Furthermore, beta and alpha suppression in the high approach-motivated positive states predicted greater cognitive narrowing. These results suggest that approach-motivated pre-goal states engage the neural substrates of motor-action preparation and cognitive narrowing. Individual differences in motor-action preparation relate to the degree of cognitive narrowing.

  6. Why a regional approach to postgraduate water education makes sense - the WaterNet experience in Southern Africa

    Science.gov (United States)

    Jonker, L.; van der Zaag, P.; Gumbo, B.; Rockström, J.; Love, D.; Savenije, H. H. G.

    2012-03-01

    This paper reports the experience of a regional network of academic departments involved in water education that started as a project and evolved, over a period of 12 yr, into an independent network organisation. The paper pursues three objectives. First, it argues that it makes good sense to organise postgraduate education and research on water resources on a regional scale. This is because water has a transboundary dimension that poses delicate sharing questions, an approach that promotes a common understanding of what the real water-related issues are, results in future water specialists speaking a common (water) language, enhances mutual respect, and can thus be considered an investment in future peace. Second, it presents the WaterNet experience as an example that a regional approach can work and has an impact. Third, it draws three generalised lessons from the WaterNet experience. Lesson 1: For a regional capacity building network to be effective, it must have a legitimate ownership structure and a clear mandate. Lesson 2: Organising water-related training opportunities at a regional and transboundary scale makes sense - not only because knowledge resources are scattered, but also because the topic - water - has a regional and transboundary scope. Lesson 3: Jointly developing educational programmes by sharing expertise and resources requires intense intellectual management and sufficient financial means.

  7. Comparative nonlinear modeling of renal autoregulation in rats: Volterra approach versus artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Holstein-Rathlou, N H; Marsh, D J

    1998-01-01

    In this paper, feedforward neural networks with two types of activation functions (sigmoidal and polynomial) are utilized for modeling the nonlinear dynamic relation between renal blood pressure and flow data, and their performance is compared to Volterra models obtained by use of the leading kernel estimation method based on Laguerre expansions. The results for the two types of artificial neural networks and the Volterra models are comparable in terms of normalized mean square error (NMSE) of the respective output prediction for independent testing data. However, the Volterra models obtained via the Laguerre expansion technique achieve this prediction NMSE with approximately half the number of free parameters relative to either neural-network model. However, both approaches are deemed effective in modeling nonlinear dynamic systems and their cooperative use is recommended in general.

  8. Prediction of Protein Thermostability by an Efficient Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Jalal Rezaeenour

    2016-10-01

    Full Text Available Introduction: Manipulation of protein stability is important for understanding the principles that govern protein thermostability, both in basic research and in industrial applications. Various data mining techniques exist for the prediction of thermostable proteins. Furthermore, ANN methods have attracted significant attention for thermostability prediction, because they constitute an appropriate approach to mapping non-linear input-output relationships and support massive parallel computing. Method: An Extreme Learning Machine (ELM) was applied to estimate the thermal behavior of 1289 proteins. In the proposed algorithm, the parameters of the ELM were optimized using a Genetic Algorithm (GA), which tuned a set of input variables, hidden layer biases, and input weights, to enhance the prediction performance. The method was executed on a set of amino acids, yielding a total of 613 protein features. A number of feature selection algorithms were used to build subsets of the features. A total of 1289 protein samples and 613 protein features were calculated from the UniProt database to understand the features contributing to the enzymes' thermostability and to find the main features that influence this valuable characteristic. Results: At the primary structure level, Gln, Glu and polar were the features that contributed most to protein thermostability. At the secondary structure level, Helix_S, Coil, and charged_Coil were the most important features affecting protein thermostability. These results suggest that the thermostability of proteins is mainly associated with primary structural features of the protein; according to the results, the influence of the primary structure on the thermostability of a protein is more important than that of the secondary structure. It is shown that the prediction accuracy of the ELM (mean square error) improves dramatically using the GA, with error rates RMSE=0.004 and MAPE=0.1003. Conclusion: The proposed approach for forecasting problem
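
    A bare-bones Extreme Learning Machine illustrating the core idea (random hidden layer, analytic output weights); the GA that tunes input weights, biases and feature subsets in the paper is omitted, and the protein feature matrix below is synthetic.

        # Bare-bones ELM: random hidden layer, output weights by least squares.
        # The GA wrapper from the paper is omitted; data is synthetic.
        import numpy as np

        rng = np.random.default_rng(7)
        X = rng.normal(size=(1289, 20))                          # protein feature vectors (synthetic)
        y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(float)   # thermostable / not (synthetic)

        n_hidden = 100
        W = rng.normal(size=(X.shape[1], n_hidden))              # random input weights (fixed)
        b = rng.normal(size=n_hidden)                            # random hidden biases (fixed)

        H = np.tanh(X @ W + b)                                   # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)             # analytic output weights

        pred = (np.tanh(X @ W + b) @ beta > 0.5).astype(float)
        print("training accuracy:", (pred == y).mean())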

  9. A PSO based Artificial Neural Network approach for short term unit commitment problem

    Directory of Open Access Journals (Sweden)

    AFTAB AHMAD

    2010-10-01

    Full Text Available Unit commitment (UC) is a non-linear, large scale, complex, mixed-integer combinatorial constrained optimization problem. This paper proposes a new hybrid approach for generating unit commitment schedules using a swarm intelligence learning rule based neural network. The training data has been generated using dynamic programming for machines without valve point effects and using a genetic algorithm for machines with valve point effects. A set of load patterns as inputs and the corresponding unit generation schedules as outputs are used to train the network. The neural network fine-tunes the best results to the desired targets. The proposed approach has been validated for three thermal machines with and without valve point effects. The results are compared with the approaches available in the literature. The PSO-ANN trained model gives better results, which shows the promise of the proposed methodology.
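
    A generic global-best PSO loop on a toy cost function, showing the velocity/position update that the hybrid PSO-ANN scheme relies on; the real unit-commitment cost model and the coupling to the neural network are not reproduced here.

        # Generic global-best PSO on a stand-in objective; the unit-commitment cost
        # and the ANN coupling are not reproduced.
        import numpy as np

        def cost(x):                       # stand-in objective (sphere function)
            return np.sum(x ** 2, axis=1)

        rng = np.random.default_rng(0)
        n_particles, dim, iters = 30, 5, 200
        w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients

        pos = rng.uniform(-5, 5, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), cost(pos)
        gbest = pbest[np.argmin(pbest_val)]

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            val = cost(pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)]

        print("best cost found:", cost(gbest[None])[0])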

  10. Net environmental benefit: introducing a new LCA approach on wastewater treatment systems.

    Science.gov (United States)

    Godin, D; Bouchard, C; Vanrolleghem, P A

    2012-01-01

    Life cycle assessment (LCA) allows the evaluation of the potential environmental impacts of a product or a service in relation to its function and over its life cycle. In past LCAs applied to wastewater treatment plants (WWTPs), the definition of the system function has received little attention despite its great importance, which has led to some limitations in the interpretation of LCA results. A new methodology to perform LCA on WWTPs is proposed to avoid those limitations. It is based on net environmental benefit (NEB) evaluation and requires assessing the potential impact of releasing wastewater without and with treatment, besides assessing the impact of the WWTP's life cycle. The NEB shows the environmental trade-offs between the impact avoided by wastewater treatment and the impact induced by the WWTP's life cycle. NEB is compared with a standard LCA through the case study of a small municipal WWTP consisting of facultative aerated lagoons. The NEB and standard LCA show similar results for impact categories solely related to the WWTP's life cycle but differ in categories where the environmental benefit of wastewater treatment is accounted for, as NEB considers influent wastewater quality whereas standard LCA does not.

  11. Real-time collision-free motion planning of a mobile robot using a Neural Dynamics-based approach.

    Science.gov (United States)

    Yang, S X; Meng, M H

    2003-01-01

    A neural dynamics-based approach is proposed for real-time motion planning with obstacle avoidance of a mobile robot in a nonstationary environment. The dynamics of each neuron in the topologically organized neural network is characterized by a shunting equation or an additive equation. The real-time collision-free robot motion is planned through the dynamic neural activity landscape of the neural network without any learning procedures and without any local collision-checking procedures at each step of the robot movement. The model algorithm is therefore computationally simple. There are only local connections among neurons, and the computational complexity depends linearly on the neural network size. The stability of the proposed neural network system is proved by qualitative analysis and Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies.
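
    A minimal grid-world sketch of shunting-equation dynamics of this general kind; the grid size, obstacle layout and parameters below are illustrative assumptions, not the paper's configuration.

        # Illustrative shunting-equation landscape on a small grid: the target
        # injects excitation, obstacles inject inhibition, and the robot climbs
        # the resulting activity gradient. Parameters are arbitrary.
        import numpy as np

        A, B, D, E = 10.0, 1.0, 1.0, 100.0       # decay, upper/lower bounds, external input
        n = 10
        target, obstacles = (9, 9), [(4, j) for j in range(2, 9)]

        I = np.zeros((n, n)); I[target] = E
        for ob in obstacles:
            I[ob] = -E

        x = np.zeros((n, n))
        dt = 0.01
        for _ in range(2000):                    # Euler integration of the shunting equation
            xp = np.maximum(x, 0)
            lateral = np.zeros_like(x)           # excitation from the 4-neighbourhood
            lateral[1:, :] += xp[:-1, :]; lateral[:-1, :] += xp[1:, :]
            lateral[:, 1:] += xp[:, :-1]; lateral[:, :-1] += xp[:, 1:]
            dx = -A * x + (B - x) * (np.maximum(I, 0) + lateral) - (D + x) * np.maximum(-I, 0)
            x = x + dt * dx

        # The robot repeatedly moves to the neighbouring cell with the highest activity.
        pos = (0, 0); path = [pos]
        for _ in range(40):
            i, j = pos
            nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0) and 0 <= i + di < n and 0 <= j + dj < n]
            pos = max(nbrs, key=lambda c: x[c])
            path.append(pos)
            if pos == target:
                break
        print(path)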

  12. Deep convolutional neural network and 3D deformable approach for tissue segmentation in musculoskeletal magnetic resonance imaging.

    Science.gov (United States)

    Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard

    2017-07-21

    To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using a deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high-resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirable smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set for comparison with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance, with segmentation accuracy superior to most state-of-the-art methods on the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
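
    A toy SegNet-style encoder-decoder in PyTorch, shown only to illustrate the pooling-indices idea at the core of the CNN stage; it is not the authors' network, and the 3D simplex deformable refinement is not included.

        # Toy SegNet-style block: pooling indices from the encoder are reused by
        # max-unpooling in the decoder. Only the CNN half of the pipeline is shown.
        import torch
        import torch.nn as nn

        class TinySegNet(nn.Module):
            def __init__(self, in_ch=1, n_classes=3):
                super().__init__()
                self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
                self.pool = nn.MaxPool2d(2, return_indices=True)
                self.unpool = nn.MaxUnpool2d(2)
                self.dec = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                                         nn.Conv2d(16, n_classes, 1))

            def forward(self, x):
                f = self.enc(x)
                p, idx = self.pool(f)              # keep pooling indices
                u = self.unpool(p, idx)            # restore spatial detail in the decoder
                return self.dec(u)                 # per-pixel class scores

        net = TinySegNet()
        img = torch.randn(1, 1, 64, 64)            # one synthetic image slice
        logits = net(img)
        labels = logits.argmax(dim=1)              # pixel-wise tissue labels
        print(labels.shape)                        # torch.Size([1, 64, 64])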

  13. Semantic Networks and Neural Nets.

    Science.gov (United States)

    1984-06-01

    TRAGEDIES. If John likes SCIENCE-FICTION more than SHAKESPEAREAN-TRAGEDIES then it is easy to see how SCIENCE-FICTION will be chosen as the answer... manner it is easy to see how in subsequent steps the fod may converge to [LITERARY-KIND] with the choices being SCIENCE-FICTION and SHAKESPEAREAN

  14. Deciphering the components of regional net ecosystem fluxes following a bottom-up approach for the Iberian Peninsula

    Directory of Open Access Journals (Sweden)

    N. Carvalhais

    2010-11-01

    Full Text Available Quantification of ecosystem carbon pools is a fundamental requirement for estimating carbon fluxes and for addressing the dynamics and responses of the terrestrial carbon cycle to environmental drivers. The initial estimates of carbon pools in terrestrial carbon cycle models often rely on the ecosystem steady state assumption, leading to initial equilibrium conditions. In this study, we investigate how trends and inter-annual variability of net ecosystem fluxes are affected by initial non-steady state conditions. Further, we examine how modeled ecosystem responses induced exclusively by the model drivers can be separated from the initial conditions. For this, the Carnegie-Ames-Stanford Approach (CASA) model is optimized at a set of European eddy covariance sites, which support the parameterization of regional simulations of ecosystem fluxes for the Iberian Peninsula between 1982 and 2006.

    The presented analysis rests on credible model performance for a set of sites that represent generally well the plant functional types and selected descriptors of climate and phenology present in the Iberian region – except for a limited northwestern area. The effects of initial conditions on inter-annual variability and on trends result mostly from the recovery of pools to equilibrium conditions, which controls most of the inter-annual variability (IAV) and both the magnitude and sign of most of the trends. However, by removing the time series of pure model recovery from the time series of the overall fluxes, we are able to retrieve estimates of inter-annual variability and trends in net ecosystem fluxes that are quasi-independent of the initial conditions. This approach reduced the sensitivity of the net fluxes to initial conditions from 47% and 174% to −3% and 7%, for strong initial sink and source conditions, respectively.

    With the aim to identify and improve understanding of the component fluxes that drive the observed trends, the

  15. Nonlinear identification and control a neural network approach

    CERN Document Server

    Liu, G P

    2001-01-01

    The series Advances in Industrial Control aims to report and encourage technology transfer in control engineering. The rapid development of control technology has an impact on all areas of the control discipline. New theory, new controllers, actuators, sensors, new industrial processes, computer methods, new applications, new philosophies..., new challenges. Much of this development work resides in industrial reports, feasibility study papers and the reports of advanced collaborative projects. The series offers an opportunity for researchers to present an extended exposition of such new work in all aspects of industrial control for wider and rapid dissemination. The time for nonlinear control to enter routine application seems to be approaching. Nonlinear control has had a long gestation period but much of the past has been concerned with methods that involve formal nonlinear functional model representations. It seems more likely that the breakthrough will come through the use of other more flexible and ame...

  16. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242)

    Science.gov (United States)

    Dülger, L. Canan; Kapucu, Sadettin

    2016-01-01

    This paper presents a novel inverse kinematics solution for robotic arm based on artificial neural network (ANN) architecture. The motion of robotic arm is controlled by the kinematics of ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the feedback of current joint angles configuration of robotic arm as well as the desired position and orientation in the input pattern of neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of current configuration of joint angles in ANN significantly increased the accuracy of ANN estimation of the joint angles output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of ANN in estimation of robot's joint angles. PMID:27610129
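
    The input-pattern idea can be illustrated on a planar two-link arm (a deliberate simplification of the 6-DOF Denso robot), with scikit-learn's MLPRegressor standing in for the paper's network; the data generation and architecture below are assumptions.

        # Illustration of the input-pattern idea on a toy 2-link arm: the network
        # sees the current joint angles plus the desired end-effector position and
        # predicts the target joint angles. Not the authors' 6-DOF setup.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        L1, L2 = 1.0, 0.8
        def forward(theta):                       # forward kinematics of the toy arm
            x = L1 * np.cos(theta[:, 0]) + L2 * np.cos(theta[:, 0] + theta[:, 1])
            y = L1 * np.sin(theta[:, 0]) + L2 * np.sin(theta[:, 0] + theta[:, 1])
            return np.column_stack([x, y])

        rng = np.random.default_rng(0)
        theta_goal = rng.uniform(-np.pi, np.pi, (5000, 2))             # target joint angles
        theta_now = theta_goal + rng.normal(0, 0.2, theta_goal.shape)  # nearby current configuration
        X = np.column_stack([theta_now, forward(theta_goal)])          # [current angles, desired x, y]
        y = theta_goal

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
        net.fit(X, y)
        print(np.abs(net.predict(X[:5]) - y[:5]).max())   # joint-angle error on a few samples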

  17. Hybrid Neural Network Approach Based Tool for the Modelling of Photovoltaic Panels

    Directory of Open Access Journals (Sweden)

    Antonino Laudani

    2015-01-01

    Full Text Available A hybrid neural network based tool for identifying the photovoltaic one-diode model is presented. The generalization capabilities of neural networks are used together with the robustness of the reduced form of the one-diode model. Indeed, from the studies performed by the authors and the works present in the literature, it was found that direct computation of the five parameters via a multiple-input multiple-output neural network is a very difficult task. The reduced form consists of a series of explicit formulae supporting the neural network, which, in our case, is aimed at predicting just two of the five parameters identifying the model; the other three parameters are computed by the reduced form. The present hybrid approach is efficient from the computational cost point of view and accurate in the estimation of the five parameters. It constitutes a complete and extremely easy tool suitable for implementation in a microcontroller-based architecture. Validations are made on about 10000 PV panels belonging to the California Energy Commission database.

  18. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242).

    Science.gov (United States)

    Almusawi, Ahmed R J; Dülger, L Canan; Kapucu, Sadettin

    2016-01-01

    This paper presents a novel inverse kinematics solution for robotic arm based on artificial neural network (ANN) architecture. The motion of robotic arm is controlled by the kinematics of ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the feedback of current joint angles configuration of robotic arm as well as the desired position and orientation in the input pattern of neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of current configuration of joint angles in ANN significantly increased the accuracy of ANN estimation of the joint angles output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of ANN in estimation of robot's joint angles.

  19. A New Artificial Neural Network Approach in Solving Inverse Kinematics of Robotic Arm (Denso VP6242

    Directory of Open Access Journals (Sweden)

    Ahmed R. J. Almusawi

    2016-01-01

    Full Text Available This paper presents a novel inverse kinematics solution for robotic arm based on artificial neural network (ANN) architecture. The motion of robotic arm is controlled by the kinematics of ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the feedback of current joint angles configuration of robotic arm as well as the desired position and orientation in the input pattern of neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of current configuration of joint angles in ANN significantly increased the accuracy of ANN estimation of the joint angles output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of ANN in estimation of robot’s joint angles.

  20. Quantifying Neural Oscillatory Synchronization: A Comparison between Spectral Coherence and Phase-Locking Value Approaches.

    Science.gov (United States)

    Lowet, Eric; Roberts, Mark J; Bonizzi, Pietro; Karel, Joël; De Weerd, Peter

    2016-01-01

    Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization-mediated information
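
    A hedged sketch of the PLV computation via the Hilbert transform on two synthetic narrow-band signals; the SSD pre-decomposition used in the paper is not reproduced, and the sampling rate and 40 Hz band are arbitrary choices.

        # Phase-locking value between two synthetic narrow-band signals via the
        # Hilbert transform; the SSD step from the paper is not shown.
        import numpy as np
        from scipy.signal import hilbert

        fs = 1000.0
        t = np.arange(0, 5, 1 / fs)
        rng = np.random.default_rng(0)
        drift1 = 0.05 * np.cumsum(rng.standard_normal(t.size))   # slow phase drift
        drift2 = 0.05 * np.cumsum(rng.standard_normal(t.size))
        s1 = np.cos(2 * np.pi * 40 * t + drift1)                  # two noisy 40 Hz signals
        s2 = np.cos(2 * np.pi * 40 * t + drift2 + 0.5)

        phase1 = np.angle(hilbert(s1))
        phase2 = np.angle(hilbert(s2))
        plv = np.abs(np.mean(np.exp(1j * (phase1 - phase2))))
        print("PLV:", round(plv, 3))               # 1 = perfect locking, 0 = none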

  1. Quantifying Neural Oscillatory Synchronization: A Comparison between Spectral Coherence and Phase-Locking Value Approaches.

    Directory of Open Access Journals (Sweden)

    Eric Lowet

    Full Text Available Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization

  2. Petri Net Approach of Collision Prevention Supervisor Design in Port Transport System

    Directory of Open Access Journals (Sweden)

    Danko Kezić

    2007-09-01

    Full Text Available Modern port terminals are equipped with various local transport systems, which have the main task to transport cargo between local storehouses and transport resources (ships, trains, trucks) in the fastest and most efficient way, and at the lowest possible cost. These local transport systems consist of fully automated transport units (AGV - automatic guided vehicle) which are controlled by the computer system. The port computer system controls the fully automated transport units in the way to avoid possible deadlocks and collisions between them. However, beside the fully automated local transport units, there are human operated transport units (fork-lift trucks, cranes etc.) which cross the path of the AGV from time to time. The collision of a human operated transport unit and an AGV is possible due to human inattention. To solve this problem, it is necessary to design a supervisory control system that coordinates and controls both the human driven transport units and the AGV. In other words, the human-machine interactions need to be supervised. The supervising system can be realized in the way that the port terminal is divided into zones. Vehicle movements are supervised by a video system which detects the moving of particular vehicles as a discrete event. Based on detected events, dangerous moving of certain vehicles is blocked by the supervising system. The paper considers the design of a collision prevention supervisor by using discrete event dynamic theory. The port terminal is modeled by using ordinary Petri nets. The design of the collision prevention supervisor is carried out by using the P-invariant method. The verification of the supervisor is done by computer simulation.
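
    A common way to build such a supervisor is the place-invariant construction: for a linear marking constraint l^T mu <= b, a monitor place with incidence -l^T D and initial marking b - l^T mu0 enforces it. The tiny net below is a stand-in for a single shared zone, not the paper's terminal model.

        # Standard place-invariant supervisor construction; the two-vehicle net and
        # the zone constraint are illustrative stand-ins.
        import numpy as np

        # Places: p0 = AGV outside, p1 = AGV inside zone, p2 = forklift outside, p3 = forklift inside zone
        # Transitions: t0 = AGV enters, t1 = AGV leaves, t2 = forklift enters, t3 = forklift leaves
        D = np.array([[-1,  1,  0,  0],    # p0
                      [ 1, -1,  0,  0],    # p1
                      [ 0,  0, -1,  1],    # p2
                      [ 0,  0,  1, -1]])   # p3
        mu0 = np.array([1, 0, 1, 0])

        # Constraint: at most one vehicle in the zone at a time: mu(p1) + mu(p3) <= 1
        l = np.array([0, 1, 0, 1])
        b = 1

        D_c = -l @ D                        # incidence row of the monitor (supervisor) place
        mu_c0 = b - l @ mu0                 # initial tokens of the monitor place
        print("monitor place arcs:", D_c)   # [-1  1 -1  1]: entering consumes, leaving returns the token
        print("monitor initial marking:", mu_c0)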

  3. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    Directory of Open Access Journals (Sweden)

    Daniel Durstewitz

    2017-06-01

    Full Text Available The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover

  4. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    Science.gov (United States)

    Durstewitz, Daniel

    2017-06-01

    The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover relevant aspects
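
    As a rough illustration of the generative model described above (dimensions, parameter values and noise levels are arbitrary, and the EM/Laplace inference itself is not reproduced), a piecewise-linear RNN state-space model can be simulated as:

        import numpy as np

        rng = np.random.default_rng(0)
        M, N, T = 3, 10, 200          # latent states, observed units, time steps

        # PLRNN latent dynamics: z_t = A z_{t-1} + W phi(z_{t-1}) + h + eps_t,
        # with phi(z) = max(z, 0), i.e. piecewise-linear (ReLU) coupling.
        A = np.diag(rng.uniform(0.6, 0.9, M))        # diagonal auto-regression
        W = 0.3 * rng.standard_normal((M, M))        # off-diagonal coupling
        np.fill_diagonal(W, 0.0)
        h = 0.1 * rng.standard_normal(M)
        B = rng.standard_normal((N, M))              # linear-Gaussian observation map

        def simulate(sigma_z=0.05, sigma_x=0.1):
            z = np.zeros((T, M))
            x = np.zeros((T, N))
            for t in range(1, T):
                phi = np.maximum(z[t - 1], 0.0)
                z[t] = A @ z[t - 1] + W @ phi + h + sigma_z * rng.standard_normal(M)
                x[t] = B @ np.maximum(z[t], 0.0) + sigma_x * rng.standard_normal(N)
            return z, x

        z, x = simulate()
        print(z.shape, x.shape)   # (200, 3) (200, 10)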

  5. Assessing sulfate and carbon controls on net methylmercury production in peatlands: An in situ mesocosm approach

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Carl P.J. [Department of Geography, University of Toronto at Mississauga, 3359 Mississauga Road North, Mississauga, Ontario L5L 1C6 (Canada)], E-mail: mitchellc@si.edu; Branfireun, Brian A. [Department of Geography, University of Toronto at Mississauga, 3359 Mississauga Road North, Mississauga, Ontario L5L 1C6 (Canada); Kolka, Randall K. [Northern Research Station, US Department of Agriculture Forest Service, 1831 Highway 169 East, Grand Rapids, MN 55744 (United States)

    2008-03-15

    The transformation of atmospherically deposited inorganic Hg to the toxic, organic form methylmercury (MeHg) is of serious ecological concern because MeHg accumulates in aquatic biota, including fish. Research has shown that the Hg methylation reaction is dependent on the availability of SO₄ (as an electron acceptor) because SO₄-reducing bacteria (SRB) mediate the biotic methylation of Hg. Much less research has investigated the possible organic C limitations to Hg methylation (i.e. from the perspective of the electron donor). Although peatlands are long-term stores of organic C, the C derived from peatland vegetation is of questionable microbial lability. This research investigated how both SO₄ and organic C control net MeHg production using a controlled factorial addition design in 44 in situ peatland mesocosms. Two levels of SO₄ addition and energetic-equivalent additions (i.e. same number of electrons) of a number of organic C sources were used including glucose, acetate, lactate, coniferous litter leachate, and deciduous litter leachate. This study supports previous research demonstrating the stimulation of MeHg production from SO₄ input alone (~200 pg/L/day). None of the additions of organic C alone resulted in significant MeHg production. The combined addition of SO₄ and some organic C sources resulted in considerably more MeHg production (~500 pg/L/day) than did the addition of SO₄ alone, demonstrating that the highest levels of MeHg production can be expected only where fluxes of both SO₄ and organic C are delivered concurrently. When compared to a number of pore water samples taken from two nearby peatlands, MeHg concentrations resulting from the combined addition of SO₄ and organic C in this study were similar to MeHg 'hot spots' found near the upland-peatland interface. The formation of MeHg 'hot spots' at the upland-peatland interface may be dependent on concurrent

  6. Nonlinear identification using a B-spline neural network and chaotic immune approaches

    Science.gov (United States)

    dos Santos Coelho, Leandro; Pessôa, Marcelo Wicthoff

    2009-11-01

    One of the important applications of B-spline neural network (BSNN) is to approximate nonlinear functions defined on a compact subset of a Euclidean space in a highly parallel manner. Recently, BSNN, a type of basis function neural network, has received increasing attention and has been applied in the field of nonlinear identification. BSNNs have the potential to "learn" the process model from input-output data or "learn" fault knowledge from past experience. BSNN can be used as function approximators to construct the analytical model for residual generation too. However, BSNN is trained by gradient-based methods that may fall into local minima during the learning procedure. When using feed-forward BSNNs, the quality of approximation depends on the control points (knots) placement of spline functions. This paper describes the application of a modified artificial immune network inspired optimization method - the opt-aiNet - combined with sequences generate by Hénon map to provide a stochastic search to adjust the control points of a BSNN. The numerical results presented here indicate that artificial immune network optimization methods are useful for building good BSNN model for the nonlinear identification of two case studies: (i) the benchmark of Box and Jenkins gas furnace, and (ii) an experimental ball-and-tube system.
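
    A minimal sketch of the chaotic-sequence ingredient mentioned in the abstract: the Hénon map with its classical parameters, here used purely as an illustration (not the authors' exact scheme) to perturb a set of B-spline knot positions during a stochastic search:

        import numpy as np

        def henon_sequence(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
            """Generate n values from the Hénon map x_{k+1} = 1 - a*x_k^2 + y_k,
            y_{k+1} = b*x_k (classical chaotic parameter values)."""
            xs = np.empty(n)
            x, y = x0, y0
            for k in range(n):
                x, y = 1.0 - a * x * x + y, b * x
                xs[k] = x
            return xs

        # Hypothetical use: perturb B-spline knot positions with chaotic steps
        # instead of uniform random numbers inside the opt-aiNet search.
        knots = np.linspace(0.0, 1.0, 8)
        steps = 0.01 * henon_sequence(len(knots))
        print(np.clip(knots + steps, 0.0, 1.0))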

  7. Hybrid neural network approach for predicting maintainability of object-oriented software

    OpenAIRE

    Kumar, Lov; NIT Rourkela; Rath, Santanu Ku.; NIT Rourkela

    2014-01-01

    Estimation of different parameters for object-oriented systems development, such as effort, quality, and risk, is of major concern in the software development life cycle. The majority of the approaches available in the literature for estimation are based on regression analysis and neural network techniques. It is also observed that numerous software metrics are being used as input for estimation. In this study, object-oriented metrics have been considered to provide requisite input data to design the mo...

  8. Applying Artificial Neural Networks to Evaluate Export Performance: A Relational Approach

    OpenAIRE

    Antonio CORREIA de BARROS; Hortensia BARANDAS; Paulo Alexandre PIRES

    2009-01-01

    The paper applies artificial neural networks to investigate the effect of the exporter’s relationship orientation on the export performance, mediated by the relationship quality, taking into account the supplier’s strategic orientation and the foreign customer’s approach to purchasing. The proposed model is supported mainly by the Second Networking Marketing Paradox, the Commitment-Trust Theory, the Relationship Marketing Paradigm and International Marketing fundamentals. The model developed,...

  9. A comparison of standard spell checking algorithms and a novel binary neural approach

    OpenAIRE

    Hodge, V.J.; Austin, Jim

    2003-01-01

    In this paper, we propose a simple, flexible, and efficient hybrid spell checking methodology based upon phonetic matching, supervised learning, and associative matching in the AURA neural system. We integrate Hamming Distance and n-gram algorithms that have high recall for typing errors and a phonetic spell-checking algorithm in a single novel architecture. Our approach is suitable for any spell checking application though aimed toward isolated word error correction, particularly spell check...

  10. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.

    Science.gov (United States)

    Deng, Lei; Jiao, Peng; Pei, Jing; Wu, Zhenzhi; Li, Guoqi

    2018-02-02

    Although deep neural networks (DNNs) have become a revolutionary force opening up the AI era, their notoriously large hardware overhead has challenged their applications. Recently, several binary and ternary networks, in which the costly multiply-accumulate operations can be replaced by accumulations or even binary logic operations, have made the on-chip training of DNNs quite promising. There is therefore a pressing need to build an architecture that could subsume these networks under a unified framework that achieves both higher performance and less overhead. To this end, two fundamental issues are yet to be addressed. The first is how to implement back propagation when neuronal activations are discrete. The second is how to remove the full-precision hidden weights in the training phase to break the bottlenecks of memory/computation consumption. To address the first issue, we present a multi-step neuronal activation discretization method and a derivative approximation technique that enable implementing the back-propagation algorithm on discrete DNNs. For the second issue, we propose a discrete state transition (DST) methodology to constrain the weights in a discrete space without saving the hidden weights. In this way, we build a unified framework that subsumes the binary and ternary networks as its special cases, and for which a heuristic algorithm is provided at the website https://github.com/AcrossV/Gated-XNOR. More particularly, we find that when both the weights and activations become ternary values, the DNNs can be reduced to sparse binary networks, termed gated XNOR networks (GXNOR-Nets), since only the event of a non-zero weight and a non-zero activation enables the control gate to start the XNOR logic operations in the original binary networks. This promises event-driven hardware design for efficient mobile intelligence. We achieve advanced performance compared with state-of-the-art algorithms. Furthermore, the computational sparsity
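
    A minimal numerical sketch of the ternary weight/activation idea and the event-driven ("gated XNOR") accumulation described above; the dead-zone threshold and array sizes are arbitrary, and the DST training rule itself is not shown (the authors' code lives in the linked repository):

        import numpy as np

        def ternarize(x, delta=0.3):
            """Map values to {-1, 0, +1}: zero inside the dead-zone |x| <= delta."""
            q = np.zeros_like(x)
            q[x > delta] = 1.0
            q[x < -delta] = -1.0
            return q

        rng = np.random.default_rng(1)
        w = ternarize(rng.standard_normal((4, 8)))      # ternary weights
        a = ternarize(rng.standard_normal(8))           # ternary activations

        dense = w @ a                                    # dense pre-activation

        # Event-driven form: only positions where both the weight and the
        # activation are non-zero contribute, via +/-1 products.
        gated = np.array([
            np.sum(row[(row != 0) & (a != 0)] * a[(row != 0) & (a != 0)])
            for row in w
        ])
        print(np.allclose(dense, gated))   # True: same result, sparser arithmetic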

  11. A Modified Dynamic Evolving Neural-Fuzzy Approach to Modeling Customer Satisfaction for Affective Design

    Directory of Open Access Journals (Sweden)

    C. K. Kwong

    2013-01-01

    Full Text Available Affective design is an important aspect of product development to achieve a competitive edge in the marketplace. A neural-fuzzy network approach has been attempted recently to model customer satisfaction for affective design and it has been proved to be an effective one to deal with the fuzziness and non-linearity of the modeling as well as generate explicit customer satisfaction models. However, such an approach to modeling customer satisfaction has two limitations. First, it is not suitable for the modeling problems which involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address the above mentioned limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted and the test results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to a large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and fuzzy c-means clustering-based ANFIS model in terms of their modeling accuracy and computational effort.

  12. A modified dynamic evolving neural-fuzzy approach to modeling customer satisfaction for affective design.

    Science.gov (United States)

    Kwong, C K; Fung, K Y; Jiang, Huimin; Chan, K Y; Siu, Kin Wai Michael

    2013-01-01

    Affective design is an important aspect of product development to achieve a competitive edge in the marketplace. A neural-fuzzy network approach has been attempted recently to model customer satisfaction for affective design and it has been proved to be an effective one to deal with the fuzziness and non-linearity of the modeling as well as generate explicit customer satisfaction models. However, such an approach to modeling customer satisfaction has two limitations. First, it is not suitable for the modeling problems which involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address the above mentioned limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted and the test results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to a large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and fuzzy c-means clustering-based ANFIS model in terms of their modeling accuracy and computational effort.

  13. A Bayesian compressed-sensing approach for reconstructing neural connectivity from subsampled anatomical data.

    Science.gov (United States)

    Mishchenko, Yuriy; Paninski, Liam

    2012-10-01

    In recent years, the problem of reconstructing the connectivity in large neural circuits ("connectomics") has re-emerged as one of the main objectives of neuroscience. Classically, reconstructions of neural connectivity have been approached anatomically, using electron or light microscopy and histological tracing methods. This paper describes a statistical approach for connectivity reconstruction that relies on relatively easy-to-obtain measurements using fluorescent probes such as synaptic markers, cytoplasmic dyes, transsynaptic tracers, or activity-dependent dyes. We describe the possible design of these experiments and develop a Bayesian framework for extracting synaptic neural connectivity from such data. We show that the statistical reconstruction problem can be formulated naturally as a tractable L₁-regularized quadratic optimization. As a concrete example, we consider a realistic hypothetical connectivity reconstruction experiment in C. elegans, a popular neuroscience model where a complete wiring diagram has been previously obtained based on long-term electron microscopy work. We show that the new statistical approach could lead to an orders of magnitude reduction in experimental effort in reconstructing the connectivity in this circuit. We further demonstrate that the spatial heterogeneity and biological variability in the connectivity matrix--not just the "average" connectivity--can also be estimated using the same method.
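
    The abstract reduces the reconstruction to an L1-regularized quadratic optimization. A generic solver for that problem class, iterative soft-thresholding (ISTA) on a synthetic sparse-connectivity example, might look like the following (this is not the paper's exact formulation or data model):

        import numpy as np

        def ista(X, y, lam=0.1, n_iter=500):
            """Iterative soft-thresholding for 0.5*||X w - y||^2 + lam*||w||_1."""
            L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
            w = np.zeros(X.shape[1])
            for _ in range(n_iter):
                grad = X.T @ (X @ w - y)
                z = w - grad / L
                w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
            return w

        rng = np.random.default_rng(2)
        n_neurons, n_obs = 50, 30
        w_true = np.zeros(n_neurons)
        w_true[rng.choice(n_neurons, 5, replace=False)] = rng.standard_normal(5)  # sparse connectivity
        X = rng.standard_normal((n_obs, n_neurons))      # synthetic fluorescence-style measurements
        y = X @ w_true + 0.01 * rng.standard_normal(n_obs)
        w_hat = ista(X, y)
        print(np.count_nonzero(np.abs(w_hat) > 1e-3), "non-zero weights recovered")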

  14. Forward and Reverse Process Models for the Squeeze Casting Process Using Neural Network Based Approaches

    Directory of Open Access Journals (Sweden)

    Manjunath Patel Gowdru Chandrashekarappa

    2014-01-01

    Full Text Available The present research work is focussed on developing an intelligent system to establish the input-output relationship utilizing forward and reverse mappings of artificial neural networks. Forward mapping aims at predicting the density and secondary dendrite arm spacing (SDAS) from the known set of squeeze cast process parameters such as time delay, pressure duration, squeeze pressure, pouring temperature, and die temperature. An attempt is also made to meet the industrial requirement of developing a reverse model to predict the recommended squeeze cast parameters for the desired density and SDAS. Two different neural network based approaches have been proposed to carry out the said task, namely, back propagation neural network (BPNN) and genetic algorithm neural network (GA-NN). The batch mode of training is employed for both supervised learning networks and requires huge training data. The required training data are generated artificially at random using a regression equation derived from real experiments carried out earlier by the same authors. The performances of the BPNN and GA-NN models are compared among themselves and with those of regression for ten test cases. The results show that both models are capable of making better predictions and can be effectively used on the shop floor for the selection of the most influential parameters for the desired outputs.

  15. Artificial Neural Network Approach to Predict Biodiesel Production in Supercritical tert-Butyl Methyl Ether

    Directory of Open Access Journals (Sweden)

    Obie Farobie

    2016-05-01

    Full Text Available In this study, for the first time an artificial neural network was used to predict biodiesel yield in supercritical tert-butyl methyl ether (MTBE). The experimental data on biodiesel yield, obtained by varying four input factors (i.e. temperature, pressure, oil-to-MTBE molar ratio, and reaction time), were used to build an artificial neural network model to predict biodiesel yield. The main goal of this study was to assess how accurately this artificial neural network model predicts the biodiesel yield obtained under supercritical MTBE conditions. The results show that an artificial neural network is a powerful tool for modeling and predicting biodiesel yield under supercritical MTBE conditions, as proven by a high value of the coefficient of determination (R) of 0.9969, 0.9899, and 0.9658 for training, validation, and testing, respectively. Using this approach, the highest predicted biodiesel yield was 0.93 mol/mol (corresponding to an actual biodiesel yield of 0.94 mol/mol), which was achieved at 400 °C, under a reactor pressure of 10 MPa, an oil-to-MTBE molar ratio of 1:40 and a reaction time of 15 min.

  16. The BraveNet prospective observational study on integrative medicine treatment approaches for pain.

    Science.gov (United States)

    Abrams, Donald I; Dolor, Rowena; Roberts, Rhonda; Pechura, Constance; Dusek, Jeffery; Amoils, Sandi; Amoils, Steven; Barrows, Kevin; Edman, Joel S; Frye, Joyce; Guarneri, Erminia; Kligler, Ben; Monti, Daniel; Spar, Myles; Wolever, Ruth Q

    2013-06-24

    Chronic pain affects nearly 116 million American adults at an estimated cost of up to $635 billion annually and is the No. 1 condition for which patients seek care at integrative medicine clinics. In our Study on Integrative Medicine Treatment Approaches for Pain (SIMTAP), we observed the impact of an integrative approach on chronic pain and a number of other related patient-reported outcome measures. Our prospective, non-randomized, open-label observational evaluation was conducted over six months, at nine clinical sites. Participants received a non-standardized, personalized, multimodal approach to chronic pain. Validated instruments for pain (severity and interference levels), quality of life, mood, stress, sleep, fatigue, sense of control, overall well-being, and work productivity were completed at baseline and at six, 12, and 24 weeks. Blood was collected at baseline and week 12 for analysis of high-sensitivity C-reactive protein and 25-hydroxyvitamin D levels. Repeated-measures analysis was performed on data to assess change from baseline at 24 weeks. Of 409 participants initially enrolled, 252 completed all follow-up visits during the 6 month evaluation. Participants were predominantly white (81%) and female (73%), with a mean age of 49.1 years (15.44) and an average of 8.0 (9.26) years of chronic pain. At baseline, 52% of patients reported symptoms consistent with depression. At 24 weeks, significantly decreased pain severity (-23%) and interference (-28%) were seen. Significant improvements in mood, stress, quality of life, fatigue, sleep and well-being were also observed. Mean 25-hydroxyvitamin D levels increased from 33.4 (17.05) ng/mL at baseline to 39.6 (16.68) ng/mL at week 12. Among participants completing an integrative medicine program for chronic pain, significant improvements were seen in pain as well as other relevant patient-reported outcome measures. ClinicalTrials.gov, NCT01186341.

  17. On the Control of Social Approach-Avoidance Behavior: Neural and Endocrine Mechanisms.

    Science.gov (United States)

    Kaldewaij, Reinoud; Koch, Saskia B J; Volman, Inge; Toni, Ivan; Roelofs, Karin

    The ability to control our automatic action tendencies is crucial for adequate social interactions. Emotional events trigger automatic approach and avoidance tendencies. Although these actions may be generally adaptive, the capacity to override these emotional reactions may be key to flexible behavior during social interaction. The present chapter provides a review of the neuroendocrine mechanisms underlying this ability and their relation to social psychopathologies. Aberrant social behavior, such as observed in social anxiety or psychopathy, is marked by abnormalities in approach-avoidance tendencies and the ability to control them. Key neural regions involved in the regulation of approach-avoidance behavior are the amygdala, widely implicated in automatic emotional processing, and the anterior prefrontal cortex, which exerts control over the amygdala. Hormones, especially testosterone and cortisol, have been shown to affect approach-avoidance behavior and the associated neural mechanisms. The present chapter also discusses ways to directly influence social approach and avoidance behavior and will end with a research agenda to further advance this important research field. Control over approach-avoidance tendencies may serve as an exemplar of emotional action regulation and might have a great value in understanding the underlying mechanisms of the development of affective disorders.

  18. A new approach for visual identification of orange varieties using neural networks and metaheuristic algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Sabzi

    2018-03-01

    Full Text Available Accurate classification of fruit varieties in processing factories and during post-harvesting applications is a challenge that has been widely studied. This paper presents a novel approach to automatic fruit identification applied to three common varieties of oranges (Citrus sinensis L.), namely Bam, Payvandi and Thomson. A total of 300 color images were used for the experiments, 100 samples for each orange variety, which are publicly available. After segmentation, 263 parameters, including texture, color and shape features, were extracted from each sample using image processing. Among them, the 6 most effective features were automatically selected by using a hybrid approach consisting of an artificial neural network and particle swarm optimization algorithm (ANN-PSO). Then, three different classifiers were applied and compared: hybrid artificial neural network – artificial bee colony (ANN-ABC); hybrid artificial neural network – harmony search (ANN-HS); and k-nearest neighbors (kNN). The experimental results show that the hybrid approaches outperform the results of kNN. The average correct classification rate of ANN-HS was 94.28%, while ANN-ABC achieved 96.70% accuracy with the available data, contrasting with the 70.9% baseline accuracy of kNN. Thus, this new proposed methodology provides a fast and accurate way to classify multiple fruit varieties, which can be easily implemented in processing factories. The main contribution of this work is that the method can be directly adapted to other use cases, since the selection of the optimal features and the configuration of the neural network are performed automatically using metaheuristic algorithms.
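
    A minimal sketch of the wrapper-style metaheuristic feature selection described above, with a binary particle swarm searching feature masks; a kNN cross-validation score stands in for the paper's ANN-based fitness, and the data are synthetic:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(3)
        X, y = make_classification(n_samples=300, n_features=30, n_informative=6, random_state=0)

        def fitness(mask):
            """Cross-validated accuracy of a kNN classifier on the selected columns."""
            if mask.sum() == 0:
                return 0.0
            clf = KNeighborsClassifier(n_neighbors=5)
            return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

        # Binary PSO over feature masks: sigmoid of velocity gives the bit probability.
        n_particles, n_feat, n_iter = 15, X.shape[1], 20
        vel = rng.normal(0.0, 1.0, (n_particles, n_feat))
        pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)
        pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_f.argmax()].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, n_feat))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = (rng.random((n_particles, n_feat)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
            f = np.array([fitness(p) for p in pos])
            improved = f > pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[pbest_f.argmax()].copy()

        print("selected features:", np.flatnonzero(gbest), "cv accuracy: %.3f" % pbest_f.max())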

  19. Cognitive And Neural Sciences Division 1992 Programs

    Science.gov (United States)

    1992-08-01

    Neuronal Micronets as Nodal Elements PRINCIPAL INVESTIGATOR: Thomas H. Brown Yale University Department of Psychology (203) 432-7008 R&T PROJECT CODE...of neural nets, and to develop a micronet architecture which captures the computations in neurons. Approach: Simulations will be conducted of the

  20. Neural networks and forecasting stock price movements-accounting approach: Empirical evidence from Iran

    Directory of Open Access Journals (Sweden)

    Hossein Naderi

    2012-08-01

    Full Text Available Stock market prediction is one of the most important and interesting areas of research in business. Stock market prediction is normally assumed to be a tedious task since there are many factors influencing the market. The primary objective of this paper is to forecast the trend of closing price movements on the Tehran Stock Exchange (TSE) using financial accounting ratios from 2003 to 2008. The proposed study uses two approaches, namely artificial neural networks and the multi-layer perceptron. The independent variables are accounting ratios and the dependent variable is the stock price, which was gathered for the Motor Vehicles and Auto Parts industry. The results of this study show that neural network models are useful tools for forecasting stock price movements in emerging markets, but the multi-layer perceptron provides better results in terms of lower error.

  1. Observer design for switched recurrent neural networks: an average dwell time approach.

    Science.gov (United States)

    Lian, Jie; Feng, Zhi; Shi, Peng

    2011-10-01

    This paper is concerned with the problem of observer design for switched recurrent neural networks with time-varying delay. The attention is focused on designing the full-order observers that guarantee the global exponential stability of the error dynamic system. Based on the average dwell time approach and the free-weighting matrix technique, delay-dependent sufficient conditions are developed for the solvability of such problem and formulated as linear matrix inequalities. The error-state decay estimate is also given. Then, the stability analysis problem for the switched recurrent neural networks can be covered as a special case of our results. Finally, four illustrative examples are provided to demonstrate the effectiveness and the superiority of the proposed methods. © 2011 IEEE

  2. Neural networks and principle component analysis approaches to predict pile capacity in sand

    Directory of Open Access Journals (Sweden)

    Benali A

    2018-01-01

    Full Text Available Determination of pile bearing capacity from in-situ tests has developed considerably due to the significant development of their technology. The project presented in this paper is a combination of two approaches, artificial neural networks and principal component analysis, that allows the development of a neural network model providing a more accurate prediction of axial load bearing capacity based on SPT test data. The back-propagation multi-layer perceptron with Bayesian regularization (RB) was used in this model. This was established by the incorporation of about 260 data points, obtained from the published literature, from experimental programs on large displacement driven piles. The PCA method is proposed for compression and suppression of the correlation between these data. This improves the generalization performance of the model.
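
    A minimal sketch of the PCA-plus-network pipeline described above, using scikit-learn's MLP regressor in place of the Bayesian-regularized perceptron and synthetic stand-ins for the SPT-derived inputs:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(4)
        # Hypothetical data: 8 correlated SPT-style predictors driven by 3 latent factors,
        # and a pile-capacity-like target that depends on two of those factors.
        n = 260
        latent = rng.standard_normal((n, 3))
        X = latent @ rng.standard_normal((3, 8)) + 0.05 * rng.standard_normal((n, 8))
        y = 2.0 * latent[:, 0] - 1.0 * latent[:, 1] + 0.1 * rng.standard_normal(n)

        # PCA removes the correlation between predictors before the network sees them.
        model = make_pipeline(StandardScaler(),
                              PCA(n_components=3),
                              MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
        model.fit(X[:200], y[:200])
        print("held-out R^2: %.3f" % model.score(X[200:], y[200:]))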

  3. A fast image registration approach of neural activities in light-sheet fluorescence microscopy images

    Science.gov (United States)

    Meng, Hui; Hui, Hui; Hu, Chaoen; Yang, Xin; Tian, Jie

    2017-03-01

    The ability to image neural activities quickly and at single-neuron resolution makes light-sheet fluorescence microscopy (LSFM) a powerful imaging technique for functional neural connection applications. State-of-the-art LSFM imaging systems can record the neuronal activities of the entire brain of a small animal, such as zebrafish or C. elegans, at single-neuron resolution. However, stimulated and spontaneous movements in the animal brain result in inconsistent neuron positions during the recording process. It is time consuming to register the acquired large-scale images with conventional methods. In this work, we address the problem of fast registration of neural positions in stacks of LSFM images. This is necessary to register brain structures and activities. To achieve fast registration of neural activities, we present a rigid registration architecture implemented on a Graphics Processing Unit (GPU). In this approach, the image stacks were preprocessed on the GPU by mean stretching to reduce the computation effort. The present image was registered to the previous image stack, which was considered as the reference. A fast Fourier transform (FFT) algorithm was used for calculating the shift of the image stack. The calculations for image registration were performed in different threads, while the preparation functionality was refactored and called only once by the master thread. We implemented our registration algorithm on an NVIDIA Quadro K4200 GPU under the Compute Unified Device Architecture (CUDA) programming environment. The experimental results showed that the registration computation can be completed in 550 ms for a full high-resolution brain image. Our approach also has potential to be used for other dynamic image registrations in biomedical applications.
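
    The core FFT step can be sketched as phase correlation between a reference image and a shifted one (plain NumPy on the CPU here; the paper implements the equivalent computation with CUDA on the GPU):

        import numpy as np

        def fft_shift_estimate(ref, img):
            """Estimate the integer (dy, dx) translation of img relative to ref
            using FFT-based phase correlation."""
            F_ref = np.fft.fft2(ref)
            F_img = np.fft.fft2(img)
            cross_power = np.conj(F_ref) * F_img
            cross_power /= np.abs(cross_power) + 1e-12
            corr = np.fft.ifft2(cross_power).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # Map indices above the half-size back to negative shifts.
            if dy > ref.shape[0] // 2:
                dy -= ref.shape[0]
            if dx > ref.shape[1] // 2:
                dx -= ref.shape[1]
            return dy, dx

        rng = np.random.default_rng(5)
        ref = rng.random((128, 128))
        img = np.roll(ref, shift=(7, -3), axis=(0, 1))   # known displacement
        print(fft_shift_estimate(ref, img))               # -> (7, -3)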

  4. Identifying endogenous neural stem cells in the adult brain in vitro and in vivo: novel approaches.

    Science.gov (United States)

    Rueger, Maria Adele; Androutsellis-Theotokis, Andreas

    2013-01-01

    In the 1960s, Joseph Altman reported that the adult mammalian brain is capable of generating new neurons. Today it is understood that some of these neurons are derived from uncommitted cells in the subventricular zone lining the lateral ventricles, and the dentate gyrus of the hippocampus. The first area generates new neuroblasts which migrate to the olfactory bulb, whereas hippocampal neurogenesis seems to play roles in particular types of learning and memory. A part of these uncommitted (immature) cells is able to divide and their progeny can generate all three major cell types of the nervous system: neurons, astrocytes, and oligodendrocytes; these properties define such cells as neural stem cells. Although the roles of these cells are not yet clear, it is accepted that they affect functions including olfaction and learning/memory. Experiments with insults to the central nervous system also show that neural stem cells are quickly mobilized due to injury and in various disorders by proliferating, and migrating to injury sites. This suggests a role of endogenous neural stem cells in disease. New pools of stem cells are being discovered, suggesting an even more important role for these cells. To understand these cells and to coax them to contribute to tissue repair it would be very useful to be able to image them in the living organism. Here we discuss advances in imaging approaches as well as new concepts that emerge from stem cell biology with emphasis on the interface between imaging and stem cells.

  5. Parametric motion control of robotic arms: A biologically based approach using neural networks

    Science.gov (United States)

    Bock, O.; D'Eleuterio, G. M. T.; Lipitkas, J.; Grodski, J. J.

    1993-01-01

    A neural network based system is presented which is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals which are defined by a set of parameters. The parameter set is used for scaling and shaping of these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.

  6. Refining mass formulas for astrophysical applications: A Bayesian neural network approach

    Science.gov (United States)

    Utama, R.; Piekarewicz, J.

    2017-10-01

    Background: Exotic nuclei, particularly those near the drip lines, are at the core of one of the fundamental questions driving nuclear structure and astrophysics today: What are the limits of nuclear binding? Exotic nuclei play a critical role in both informing theoretical models as well as in our understanding of the origin of the heavy elements. Purpose: Our aim is to refine existing mass models through the training of an artificial neural network that will mitigate the large model discrepancies far away from stability. Methods: The basic paradigm of our two-pronged approach is an existing mass model that captures as much as possible of the underlying physics followed by the implementation of a Bayesian neural network (BNN) refinement to account for the missing physics. Bayesian inference is employed to determine the parameters of the neural network so that model predictions may be accompanied by theoretical uncertainties. Results: Despite the undeniable quality of the mass models adopted in this work, we observe a significant improvement (of about 40%) after the BNN refinement is implemented. Indeed, in the specific case of the Duflo-Zuker mass formula, we find that the rms deviation relative to experiment is reduced from σrms=0.503 MeV to σrms=0.286 MeV. These newly refined mass tables are used to map the neutron drip lines (or rather "drip bands") and to study a few critical r -process nuclei. Conclusions: The BNN approach is highly successful in refining the predictions of existing mass models. In particular, the large discrepancy displayed by the original "bare" models in regions where experimental data are unavailable is considerably quenched after the BNN refinement. This lends credence to our approach and has motivated us to publish refined mass tables that we trust will be helpful for future astrophysical applications.
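
    A minimal sketch of the two-pronged "base model plus learned residual" idea, with a plain scikit-learn MLP standing in for the Bayesian neural network and entirely synthetic data in place of measured masses:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Synthetic stand-ins: a (Z, N) grid, a "true" binding energy, and a base
        # model that misses part of the physics (here, an asymmetry-like term).
        Z, N = np.meshgrid(np.arange(20, 60), np.arange(20, 80), indexing="ij")
        X = np.column_stack([Z.ravel(), N.ravel()]).astype(float)
        true_be = 8.5 * (Z + N).ravel() - 0.02 * ((N - Z) ** 2).ravel()
        base_model = 8.5 * (Z + N).ravel()

        # Train the network on the residual (data - base model), not on the raw
        # quantity: the base model carries the bulk physics, the net the correction.
        residual = true_be - base_model
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        scale = residual.std()
        net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
        net.fit(Xs, residual / scale)

        refined = base_model + scale * net.predict(Xs)
        rms = lambda e: np.sqrt(np.mean(e ** 2))
        print("rms before: %.2f   rms after: %.2f"
              % (rms(true_be - base_model), rms(true_be - refined)))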

  7. A non-parametric Bayesian approach for clustering and tracking non-stationarities of neural spikes.

    Science.gov (United States)

    Shalchyan, Vahid; Farina, Dario

    2014-02-15

    Neural spikes from multiple neurons recorded in a multi-unit signal are usually separated by clustering. Drifts in the position of the recording electrode relative to the neurons over time cause gradual changes in the position and shapes of the clusters, challenging the clustering task. By dividing the data into short time intervals, Bayesian tracking of the clusters based on Gaussian cluster model has been previously proposed. However, the Gaussian cluster model is often not verified for neural spikes. We present a Bayesian clustering approach that makes no assumptions on the distribution of the clusters and use kernel-based density estimation of the clusters in every time interval as a prior for Bayesian classification of the data in the subsequent time interval. The proposed method was tested and compared to Gaussian model-based approach for cluster tracking by using both simulated and experimental datasets. The results showed that the proposed non-parametric kernel-based density estimation of the clusters outperformed the sequential Gaussian model fitting in both simulated and experimental data tests. Using non-parametric kernel density-based clustering that makes no assumptions on the distribution of the clusters enhances the ability of tracking cluster non-stationarity over time with respect to the Gaussian cluster modeling approach. Copyright © 2013 Elsevier B.V. All rights reserved.
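
    A minimal sketch of the central step: fit a kernel density estimate to each cluster in one time interval and use it, weighted by the cluster prior, to classify spikes in the next interval; the two-dimensional features and the drift are synthetic assumptions:

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(7)

        def make_interval(centers, n=200):
            """Two 2-D spike-feature clusters around the given centers."""
            pts = [c + 0.3 * rng.standard_normal((n, 2)) for c in centers]
            labels = np.repeat(np.arange(len(centers)), n)
            return np.vstack(pts), labels

        # Interval t: already-clustered data; interval t+1: electrode drift has
        # moved both clusters slightly.
        X0, y0 = make_interval([np.array([0.0, 0.0]), np.array([2.5, 1.0])])
        X1, y1 = make_interval([np.array([0.2, 0.1]), np.array([2.7, 1.2])])

        # Non-parametric cluster models from interval t act as priors for t+1.
        kdes = [gaussian_kde(X0[y0 == k].T) for k in (0, 1)]
        priors = [np.mean(y0 == k) for k in (0, 1)]

        log_post = np.column_stack([np.log(priors[k]) + kdes[k].logpdf(X1.T) for k in (0, 1)])
        y1_hat = log_post.argmax(axis=1)
        print("tracking accuracy: %.3f" % np.mean(y1_hat == y1))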

  8. A HYBRID GENETIC ALGORITHM-NEURAL NETWORK APPROACH FOR PRICING CORES AND REMANUFACTURED CORES

    Directory of Open Access Journals (Sweden)

    M. Seidi

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Sustainability has become a major issue in most economies, causing many leading companies to focus on product recovery and reverse logistics. Remanufacturing is an industrial process that makes used products reusable. One of the important aspects in both reverse logistics and remanufacturing is the pricing of returned and remanufactured products (called cores). In this paper, we focus on pricing the cores and remanufactured cores. First we present a mathematical model for this purpose. Since this model does not satisfy our requirements, we propose a simulation optimisation approach. This approach consists of a hybrid genetic algorithm based on a neural network employed as the fitness function. We use automata learning theory to obtain the learning rate required for training the neural network. Numerical results demonstrate that the optimal value of the acquisition price of cores and price of remanufactured cores is obtained by this approach.

    AFRIKAANSE OPSOMMING (translated): Sustainability has become an important issue in most economies, which has compelled several companies to consider product recovery and reverse logistics. Remanufacturing is an industrial process that makes used products usable again. One of the important aspects in both reverse logistics and remanufacturing is the pricing of recovered and remanufactured products. This article focuses on the pricing aspects by means of a mathematical model.

  9. Efficient Approach for RLS Type Learning in TSK Neural Fuzzy Systems.

    Science.gov (United States)

    Yeh, Jen-Wei; Su, Shun-Feng

    2017-09-01

    This paper presents an efficient approach for the use of the recursive least square (RLS) learning algorithm in Takagi-Sugeno-Kang neural fuzzy systems. In the use of RLS, a reduced covariance matrix, in which the off-diagonal blocks defining the correlation between rules are set to zero, may be employed to reduce the computational burden. However, as reported in the literature, the performance of such an approach is slightly worse than that of using the full covariance matrix. In this paper, we propose a so-called enhanced local learning concept in which a threshold is used to stop learning for less fired rules. Our experiments show that the proposed approach can achieve better performance than using the full covariance matrix. The enhanced local learning method is especially active in the structure learning phase. Thus, the method not only stops the update for insufficiently fired rules to reduce disturbances in the self-constructing neural fuzzy inference network, but also raises the learning speed in the structure learning phase by using a large backpropagation learning constant.
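
    A minimal sketch of rule-wise RLS with a block-diagonal ("reduced") covariance and a firing-strength threshold that skips weakly fired rules, on a toy TSK-style model (rule layout, thresholds and data are illustrative, not the paper's):

        import numpy as np

        rng = np.random.default_rng(8)
        n_rules, dim = 3, 2
        centers = rng.uniform(-1, 1, (n_rules, dim))
        widths = 0.6 * np.ones(n_rules)

        def firing(x):
            """Normalized Gaussian firing strengths of the rules."""
            f = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * widths ** 2))
            return f / (f.sum() + 1e-12)

        # One (P, theta) pair per rule: the block-diagonal ("reduced covariance")
        # variant simply ignores cross-rule correlations.
        theta = [np.zeros(dim + 1) for _ in range(n_rules)]
        P = [1e3 * np.eye(dim + 1) for _ in range(n_rules)]
        lam, fire_threshold = 0.99, 0.05       # forgetting factor, local-learning threshold
        target = lambda x: np.sin(x[0]) + 0.5 * x[1]

        for _ in range(2000):
            x = rng.uniform(-1, 1, dim)
            psi = firing(x)
            y_hat = sum(psi[r] * theta[r] @ np.append(x, 1.0) for r in range(n_rules))
            err = target(x) - y_hat
            for r in range(n_rules):
                if psi[r] < fire_threshold:        # skip weakly fired rules
                    continue
                phi = psi[r] * np.append(x, 1.0)   # rule-local regressor
                k = P[r] @ phi / (lam + phi @ P[r] @ phi)
                theta[r] = theta[r] + k * err
                P[r] = (P[r] - np.outer(k, phi) @ P[r]) / lam

        x_test = np.array([0.3, -0.4])
        y_test = sum(firing(x_test)[r] * theta[r] @ np.append(x_test, 1.0) for r in range(n_rules))
        print(target(x_test), y_test)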

  10. An adaptive neural swarm approach for intrusion defense in ad hoc networks

    Science.gov (United States)

    Cannady, James

    2011-06-01

    Wireless sensor networks (WSN) and mobile ad hoc networks (MANET) are being increasingly deployed in critical applications due to the flexibility and extensibility of the technology. While these networks possess numerous advantages over traditional wireless systems in dynamic environments they are still vulnerable to many of the same types of host-based and distributed attacks common to those systems. Unfortunately, the limited power and bandwidth available in WSNs and MANETs, combined with the dynamic connectivity that is a defining characteristic of the technology, makes it extremely difficult to utilize traditional intrusion detection techniques. This paper describes an approach to accurately and efficiently detect potentially damaging activity in WSNs and MANETs. It enables the network as a whole to recognize attacks, anomalies, and potential vulnerabilities in a distributive manner that reflects the autonomic processes of biological systems. Each component of the network recognizes activity in its local environment and then contributes to the overall situational awareness of the entire system. The approach utilizes agent-based swarm intelligence to adaptively identify potential data sources on each node and on adjacent nodes throughout the network. The swarm agents then self-organize into modular neural networks that utilize a reinforcement learning algorithm to identify relevant behavior patterns in the data without supervision. Once the modular neural networks have established interconnectivity both locally and with neighboring nodes the analysis of events within the network can be conducted collectively in real-time. The approach has been shown to be extremely effective in identifying distributed network attacks.

  11. Interdisciplinary Approach to the Mental Lexicon: Neural Network and Text Extraction From Long-term Memory

    Directory of Open Access Journals (Sweden)

    Vardan G. Arutyunyan

    2013-01-01

    Full Text Available The paper touches upon the principles of mental lexicon organization in the light of recent research in psycho- and neurolinguistics. As a focal point of discussion, two main approaches to mental lexicon functioning are considered: the modular or dual-system approach, developed within generativism, and the opposing single-system approach, represented by connectionists and supporters of network models. The paper advocates the viewpoint that the mental lexicon is a complex psychological organization based upon a specific composition of the neural network. In this regard, the paper further elaborates on the matter of storing text in human mental space and introduces a model of text extraction from long-term memory. Based upon the available data, the author develops a methodology for modeling structures of knowledge representation in artificial intelligence systems.

  12. Stochastic Neural Network Approach for Learning High-Dimensional Free Energy Surfaces

    Science.gov (United States)

    Schneider, Elia; Dai, Luke; Topper, Robert Q.; Drechsel-Grau, Christof; Tuckerman, Mark E.

    2017-10-01

    The generation of free energy landscapes corresponding to conformational equilibria in complex molecular systems remains a significant computational challenge. Adding to this challenge is the need to represent, store, and manipulate the often high-dimensional surfaces that result from rare-event sampling approaches employed to compute them. In this Letter, we propose the use of artificial neural networks as a solution to these issues. Using specific examples, we discuss network training using enhanced-sampling methods and the use of the networks in the calculation of ensemble averages.

  13. A neural network approach to discrimination between defects and calyces in oranges

    Directory of Open Access Journals (Sweden)

    Salvatore Ingrassia

    1993-11-01

    Full Text Available The problem of automatic discrimination among pictures concerning either defects or calyces in oranges is approached. The method here proposed is based on a statistical analysis of the grey-levels and the shape of calyces in the pictures. Some suitable statistical indices are considered and the discriminant function is designed by means of a neural network on the basis of a suitable vector representation of the images. Numerical experiments give 5 misclassifications in a set of 52 images, where only three defects have been classified as calyces.

  14. An EEMD-ICA Approach to Enhancing Artifact Rejection for Noisy Multivariate Neural Data.

    Science.gov (United States)

    Zeng, Ke; Chen, Dan; Ouyang, Gaoxiang; Wang, Lizhe; Liu, Xianzeng; Li, Xiaoli

    2016-06-01

    As neural data are generally noisy, artifact rejection is crucial for data preprocessing. It has long been a grand research challenge for an approach which is able: 1) to remove the artifacts and 2) to avoid loss or disruption of the structural information at the same time, so that the risk of introducing bias to data interpretation may be minimized. In this study, an approach (namely EEMD-ICA) was proposed to first decompose multivariate neural data that are possibly noisy into intrinsic mode functions (IMFs) using ensemble empirical mode decomposition (EEMD). Independent component analysis (ICA) was then applied to the IMFs to separate the artifactual components. The approach was tested against the classical ICA and the automatic wavelet ICA (AWICA) methods, which were dominant methods for artifact rejection. In order to evaluate the effectiveness of the proposed approach in handling neural data possibly with intensive noise, experiments on artifact removal were performed using semi-simulated data mixed with a variety of noises. Experimental results indicate that the proposed approach consistently outperforms the counterparts in terms of both normalized mean square error (NMSE) and Structural SIMilarity (SSIM). The superiority becomes even greater with the decrease of SNR in all cases, e.g., the SSIM of EEMD-ICA can almost double that of AWICA and triple that of ICA. To further examine the potential of the approach in sophisticated applications, the approach together with the counterparts was used to preprocess a real-life epileptic EEG with absence seizure. Experiments were carried out with the focus on characterizing the dynamics of the data after artifact rejection, i.e., distinguishing seizure-free, pre-seizure and seizure states. Using multi-scale permutation entropy to extract features and linear discriminant analysis for classification, the EEMD-ICA performed the best for classifying the states (87.4%, about 4.1% and 8.7% higher than that of AWICA and ICA

  15. Using WordNet for Building WordNets

    CERN Document Server

    Farreres, X; Farreres, Xavier; Rodriguez, Horacio; Rigau, German

    1998-01-01

    This paper summarises a set of methodologies and techniques for the fast construction of multilingual WordNets. The English WordNet is used in this approach as a backbone for Catalan and Spanish WordNets and as a lexical knowledge resource for several subtasks.

  16. Net Locality

    DEFF Research Database (Denmark)

    de Souza e Silva, Adriana Araujo; Gordon, Eric

    Provides an introduction to the new theory of Net Locality and the profound effect on individuals and societies when everything is located or locatable. Describes net locality as an emerging form of location awareness central to all aspects of digital media, from mobile phones, to Google Maps...... of emerging technologies, from GeoCities to GPS, Wi-Fi, Wiki Me, and Google Android....

  17. Net Neutrality

    DEFF Research Database (Denmark)

    Savin, Andrej

    2017-01-01

    Repealing “net neutrality” in the US will have no bearing on Internet freedom or security there or anywhere else.

  18. Automated Modeling of Microwave Structures by Enhanced Neural Networks

    Directory of Open Access Journals (Sweden)

    Z. Raida

    2006-12-01

    Full Text Available The paper describes a methodology for the automated creation of neural models of microwave structures. During the creation process, artificial neural networks are trained using a combination of particle swarm optimization and the quasi-Newton method to avoid critical training problems of conventional neural nets. In the paper, neural networks are used to approximate the behavior of a planar microwave filter (moment method, Zeland IE3D). In order to evaluate the efficiency of neural modeling, global optimizations are performed using both numerical models and neural ones. The two approaches are compared from the viewpoint of CPU-time demands and accuracy. Based on the conclusions, methodological recommendations for including neural networks in microwave design are formulated.

  19. Symbolic processing in neural networks

    OpenAIRE

    Neto, João Pedro; Hava T Siegelmann; Costa,J.Félix

    2003-01-01

    In this paper we show that programming languages can be translated into recurrent (analog, rational weighted) neural nets. Implementation of programming languages in neural nets turns out to be not only theoretically exciting, but also has some practical implications in the recent efforts to merge symbolic and sub-symbolic computation. To be of some use, it should be carried out in a context of bounded resources. Herein, we show how to use resource bounds to speed up computations over neural nets, thro...

  20. A robust neural network-based approach for microseismic event detection

    KAUST Repository

    Akram, Jubran

    2017-08-17

    We present an artificial neural network based approach for robust event detection from low S/N waveforms. We use a feed-forward network with a single hidden layer that is tuned on a training dataset and later applied on the entire example dataset for event detection. The input features used include the average of absolute amplitudes, variance, energy-ratio and polarization rectilinearity. These features are calculated in a moving window of the same length over the entire waveform. The output is set as a user-specified relative probability curve, which provides a robust way of distinguishing between weak and strong events. An optimal network is selected by studying the weight-based saliency and the effect of the number of neurons on the predicted results. Using synthetic data examples, we demonstrate that this approach is effective in detecting weaker events and reduces the number of false positives.
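
    A minimal sketch of the moving-window feature extraction named above (average absolute amplitude, variance and an energy ratio; rectilinearity is omitted because it requires three-component data), on a synthetic single-component trace:

        import numpy as np

        rng = np.random.default_rng(9)

        # Synthetic trace: noise with a weak "event" inserted at sample 3000.
        trace = 0.05 * rng.standard_normal(6000)
        event = np.exp(-np.linspace(0, 6, 300)) * np.sin(2 * np.pi * 30 * np.linspace(0, 0.3, 300))
        trace[3000:3300] += 0.2 * event

        def window_features(x, win=100):
            """Average absolute amplitude, variance and short/long-term energy ratio
            in a sliding window (one feature row per window position)."""
            feats = []
            for i in range(0, len(x) - 2 * win):
                w = x[i + win:i + 2 * win]             # leading "short-term" window
                prev = x[i:i + win]                    # trailing "long-term" window
                feats.append([np.mean(np.abs(w)),
                              np.var(w),
                              np.sum(w ** 2) / (np.sum(prev ** 2) + 1e-12)])
            return np.array(feats)

        F = window_features(trace)
        print("window with peak energy ratio:", int(np.argmax(F[:, 2])))   # near sample 2900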

  1. Neural Network Control of CSTR for Reversible Reaction Using Reverence Model Approach

    Directory of Open Access Journals (Sweden)

    Duncan ALOKO

    2007-01-01

    Full Text Available In this work, non-linear control of a CSTR for a reversible reaction is carried out using a neural network as the design tool. The model reference approach is used to design the ANN controller. The idea is to have a control system that will be able to achieve improvement in the level of conversion and to be able to track set point changes and reject load disturbances. We use a PID control scheme as a benchmark to study the performance of the controller. The comparison shows that the ANN controller outperforms PID in the extreme range of non-linearity. This paper represents a preliminary effort to design a simplified neural network control scheme for a class of non-linear processes. Future work will involve further investigation of the effectiveness of this approach for real industrial chemical processes

  2. A neural network approach for the blind deconvolution of turbulent flows

    Science.gov (United States)

    Maulik, Romit; San, Omer

    2017-11-01

    We present a single-layer feedforward artificial neural network architecture trained through a supervised learning approach for the deconvolution of flow variables from their coarse grained computations such as those encountered in large eddy simulations. We stress that the deconvolution procedure proposed in this investigation is blind, i.e. the deconvolved field is computed without any pre-existing information about the filtering procedure or kernel. This may be conceptually contrasted to the celebrated approximate deconvolution approaches where a filter shape is predefined for an iterative deconvolution process. We demonstrate that the proposed blind deconvolution network performs exceptionally well in the a-priori testing of both two-dimensional Kraichnan and three-dimensional Kolmogorov turbulence and shows promise in forming the backbone of a physics-augmented data-driven closure for the Navier-Stokes equations.

  3. Stability analysis of switched cellular neural networks: A mode-dependent average dwell time approach.

    Science.gov (United States)

    Huang, Chuangxia; Cao, Jie; Cao, Jinde

    2016-10-01

    This paper addresses the exponential stability of switched cellular neural networks by using the mode-dependent average dwell time (MDADT) approach. This method is quite different from the traditional average dwell time (ADT) method in permitting each subsystem to have its own average dwell time. Detailed investigations have been carried out for two cases. One is that all subsystems are stable and the other is that stable subsystems coexist with unstable subsystems. By employing Lyapunov functionals, linear matrix inequalities (LMIs), the Jensen-type inequality, the Wirtinger-based inequality, and the reciprocally convex approach, we derive some novel and less conservative conditions on the exponential stability of the networks. Compared with ADT, the proposed MDADT approach shows that the minimal dwell time of each subsystem is smaller and the switched system stabilizes faster. The obtained results extend and improve some existing ones. Moreover, the validity and effectiveness of these results are demonstrated through numerical simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
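
    The abstract's claim that MDADT permits shorter dwell times than ADT rests on the standard bounds sketched below (notation follows the common MDADT literature and may differ from the paper):

        % For mode i with Lyapunov decay rate \lambda_i > 0 when active and jump
        % factor \mu_i \ge 1 at switching into mode i, exponential stability is
        % typically guaranteed under the mode-dependent bound
        \tau_{a i} \;>\; \tau_{a i}^{*} \;=\; \frac{\ln \mu_i}{\lambda_i},
        % whereas the classical ADT condition imposes the single, generally more
        % conservative bound on all modes
        \tau_a \;>\; \frac{\ln\!\big(\max_i \mu_i\big)}{\min_i \lambda_i}.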

  4. A neural network approach for the blind deconvolution of turbulent flows

    Science.gov (United States)

    Maulik, R.; San, O.

    2017-11-01

    We present a single-layer feedforward artificial neural network architecture trained through a supervised learning approach for the deconvolution of flow variables from their coarse grained computations such as those encountered in large eddy simulations. We stress that the deconvolution procedure proposed in this investigation is blind, i.e. the deconvolved field is computed without any pre-existing information about the filtering procedure or kernel. This may be conceptually contrasted to the celebrated approximate deconvolution approaches where a filter shape is predefined for an iterative deconvolution process. We demonstrate that the proposed blind deconvolution network performs exceptionally well in the a-priori testing of both two-dimensional Kraichnan and three-dimensional Kolmogorov turbulence and shows promise in forming the backbone of a physics-augmented data-driven closure for the Navier-Stokes equations.

  5. Fuzzy Petri nets to model vision system decisions within a flexible manufacturing system

    Science.gov (United States)

    Hanna, Moheb M.; Buck, A. A.; Smith, R.

    1994-10-01

    The paper presents a Petri net approach to modelling, monitoring and control of the behavior of an FMS cell. The FMS cell described comprises a pick and place robot, vision system, CNC-milling machine and 3 conveyors. The work illustrates how the block diagrams in a hierarchical structure can be used to describe events at different levels of abstraction. It focuses on Fuzzy Petri nets (Fuzzy logic with Petri nets) including an artificial neural network (Fuzzy Neural Petri nets) to model and control vision system decisions and robot sequences within an FMS cell. This methodology can be used as a graphical modelling tool to monitor and control the imprecise, vague and uncertain situations, and determine the quality of the output product of an FMS cell.

  6. The ImageNet Shuffle: Reorganized Pre-training for Video Event Detection

    NARCIS (Netherlands)

    Mettes, P.; Koelma, D.C.; Snoek, C.G.M.

    2016-01-01

    This paper strives for video event detection using a representation learned from deep convolutional neural networks. Different from the leading approaches, who all learn from the 1,000 classes defined in the ImageNet Large Scale Visual Recognition Challenge, we investigate how to leverage the

  7. Design and regularization of neural networks: the optimal use of a validation set

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai; Svarer, Claus

    1996-01-01

    We derive novel algorithms for estimation of regularization parameters and for optimization of neural net architectures based on a validation set. Regularisation parameters are estimated using an iterative gradient descent scheme. Architecture optimization is performed by approximative...... combinatorial search among the relevant subsets of an initial neural network architecture by employing a validation set based optimal brain damage/surgeon (OBD/OBS) or a mean field combinatorial optimization approach. Numerical results with linear models and feed-forward neural networks demonstrate...

  8. A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks

    Science.gov (United States)

    Mohan, Arvind; Gaitonde, Datta

    2017-11-01

    Reduced Order Modeling (ROM) can be used as surrogates to prohibitively expensive simulations to model flow behavior for long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises of learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial intelligence based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep learning based ROM approaches will be elucidated and further developments discussed.
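
    As a rough, hypothetical illustration of the idea (not the authors' code), the sketch below trains a small LSTM to map a window of past POD temporal coefficients to the coefficients at the next timestep; the synthetic coefficient series, window length and network size are assumptions.

    ```python
    # Hedged sketch of an LSTM-based ROM: predict POD time coefficients one step ahead.
    import numpy as np
    from tensorflow import keras

    n_modes, window, n_steps = 4, 10, 2000
    t = np.linspace(0.0, 100.0, n_steps)
    # Stand-in for POD temporal coefficients a_k(t) extracted from flow snapshots.
    coeffs = np.stack([np.sin((k + 1) * 0.3 * t + k) for k in range(n_modes)], axis=1)

    X = np.stack([coeffs[i:i + window] for i in range(n_steps - window)])
    y = coeffs[window:]

    model = keras.Sequential([
        keras.layers.Input(shape=(window, n_modes)),
        keras.layers.LSTM(32),
        keras.layers.Dense(n_modes),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X[:1500], y[:1500], epochs=10, batch_size=32, verbose=0)
    print("held-out MSE:", model.evaluate(X[1500:], y[1500:], verbose=0))
    ```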

  9. Analysis of Salinity Intrusion in the San Francisco Bay-Delta Using a GA-Optimized Neural Net, and Application of the Model to Prediction in the Elkhorn Slough Habitat

    Science.gov (United States)

    Thompson, D. E.; Rajkumar, T.

    2002-12-01

    The San Francisco Bay Delta is a large hydrodynamic complex that incorporates the Sacramento and San Joaquin Estuaries, the Suisun Marsh, and the San Francisco Bay proper. Competition exists for the use of this extensive water system among the fisheries industry, the agricultural industry, and the marine and estuarine animal species within the Delta. As tidal fluctuations occur, more saline water pushes upstream, allowing fish to migrate beyond the Suisun Marsh for breeding and habitat occupation. However, the agriculture industry does not want extensive salinity intrusion to impact water quality for human and plant consumption. The balance is regulated by pumping stations located along the estuaries and reservoirs, whereby flushing of fresh water keeps the saline intrusion at bay. The pumping schedule is driven by data collected at various locations within the Bay Delta and by numerical models that predict the salinity intrusion as part of a larger model of the system. The Interagency Ecological Program (IEP) for the San Francisco Bay / Sacramento-San Joaquin Estuary collects, monitors, and archives the data, and the Department of Water Resources provides a numerical model simulation (DSM2) from which predictions are made that drive the pumping schedule. A problem with DSM2 is that the numerical simulation takes roughly 16 hours to complete a prediction. We have created a neural net, optimized with a genetic algorithm, that takes as input the archived data from multiple gauging stations and predicts stage, salinity, and flow at the Carquinez Straits (at the downstream end of the Suisun Marsh). This model appears to be robust in its predictions and operates much faster than the current numerical DSM2 model. Because the Bay-Delta is strongly tidally driven, we used both Principal Component Analysis and Fast Fourier Transforms to discover dominant features within the IEP data. We then filtered out the dominant tidal forcing to discover non-primary tidal effects

  10. Neural Network Approach for Estimation of Penetration Depth in Concrete Targets by Ogive-nose Steel Projectiles

    Directory of Open Access Journals (Sweden)

    M. Hosseini

    Full Text Available AbstractDespite the availability of large number of empirical and semi-empirical models, the problem of penetration depth prediction for concrete targets has remained inconclusive partly due to the complexity of the phenomenon involved and partly because of the limitations of the statistical regression employed. Conventional statistical analysis is now being replaced in many fields by the alternative approach of neural networks. Neural networks have advantages over statistical models like their data-driven nature, model-free form of predictions, and tolerance to data errors. The objective of this study is to reanalyze the data for the prediction of penetration depth by employing the technique of neural networks with a view towards seeing if better predictions are possible. The data used in the analysis pertains to the ogive-nose steel projectiles on concrete targets and the neural network models result in very low errors and high correlation coefficients as compared to the regression based models.

  11. Intelligent control a hybrid approach based on fuzzy logic, neural networks and genetic algorithms

    CERN Document Server

    Siddique, Nazmul

    2014-01-01

    Intelligent Control considers non-traditional modelling and control approaches to nonlinear systems. Fuzzy logic, neural networks and evolutionary computing techniques are the main tools used. The book presents a modular switching fuzzy logic controller where a PD-type fuzzy controller is executed first followed by a PI-type fuzzy controller thus improving the performance of the controller compared with a PID-type fuzzy controller.  The advantage of the switching-type fuzzy controller is that it uses one rule-base thus minimises the rule-base during execution. A single rule-base is developed by merging the membership functions for change of error of the PD-type controller and sum of error of the PI-type controller. Membership functions are then optimized using evolutionary algorithms. Since the two fuzzy controllers were executed in series, necessary further tuning of the differential and integral scaling factors of the controller is then performed. Neural-network-based tuning for the scaling parameters of t...

  12. Prediction of roadheaders' performance using artificial neural network approaches (MLP and KOSFM)

    Directory of Open Access Journals (Sweden)

    Arash Ebrahimabadi

    2015-10-01

    Full Text Available Application of mechanical excavators is one of the most commonly used excavation methods because it can bring a project more productivity, accuracy and safety. Among mechanical excavators, roadheaders are mechanical miners which have been extensively used in the tunneling, mining and civil industries. Performance prediction is an important issue for successful roadheader application and generally deals with machine selection, production rate and bit consumption. The main aim of this research is to investigate the cutting performance (instantaneous cutting rate (ICR)) of medium-duty roadheaders by using an artificial neural network (ANN) approach. There are different categories of ANNs, but based on the training algorithm there are two main kinds: supervised and unsupervised. The multi-layer perceptron (MLP) and Kohonen self-organizing feature map (KSOFM) are the most widely used neural networks for the supervised and unsupervised cases, respectively. To achieve this goal, a database was first compiled from roadheaders' performance and the geomechanical characteristics of rock formations in tunnels and drift galleries in the Tabas coal mine, the largest and only fully mechanized coal mine in Iran. The database was then analyzed to identify the most important factors for ICR, using the relative importance of factors computed with Garson's equation. The MLP network was trained with three input parameters, comprising rock mass properties (rock quality designation, RQD) and intact rock properties (uniaxial compressive strength, UCS, and Brazilian tensile strength, BTS), and one output parameter (ICR). To further validate the MLP outputs, KSOFM visualization was applied. The mean square error (MSE) and regression coefficient (R) of the MLP were found to be 5.49 and 0.97, respectively. Moreover, the KSOFM network has a map size of 8 × 5, and the final quantization and topographic errors were 0.383 and 0.032, respectively. The results show that MLP neural networks

  13. Nuclear mass predictions based on Bayesian neural network approach with pairing and shell effects

    Science.gov (United States)

    Niu, Z. M.; Liang, H. Z.

    2018-03-01

    The Bayesian neural network (BNN) approach is employed to improve the nuclear mass predictions of various models. It is found that the noise error in the likelihood function plays an important role in the predictive performance of the BNN approach. By including a distribution for the noise error, an appropriate value can be found automatically in the sampling process, which optimizes the nuclear mass predictions. Furthermore, two quantities related to nuclear pairing and shell effects are added to the input layer in addition to the proton and mass numbers. As a result, the theoretical accuracies are significantly improved not only for nuclear masses but also for single-nucleon separation energies. Due to the inclusion of the shell effect, the BNN approach predicts a shell-correction structure in the unknown region similar to that in the known region, e.g., the underestimation of nuclear masses around the magic numbers in the relativistic mean-field model. This indicates that better predictive performance can be achieved if more physical features are included in the BNN approach.

  14. Artificial neural network approach for moiré fringe center determination

    Science.gov (United States)

    Woo, Wing Hon; Ratnam, Mani Maran; Yen, Kin Sam

    2015-11-01

    The moiré effect has been used in high-accuracy positioning and alignment systems for decades. Various methods have been proposed to identify and locate moiré fringes in order to relate the pattern information to dimensional and displacement measurement. These methods can be broadly categorized into manual interpretation based on human knowledge and image processing based on computational algorithms. An artificial neural network (ANN) is proposed to locate moiré fringe centers within circular grating moiré patterns. This ANN approach aims to mimic human decision making by eliminating complex mathematical computations or time-consuming image processing algorithms in moiré fringe recognition. A feed-forward backpropagation ANN architecture was adopted in this work. Parametric studies were performed to optimize the ANN architecture. The finalized ANN approach was able to determine the location of the fringe centers with average deviations of 3.167 pixels out of 200 pixels (≈1.6%) and 6.166 pixels out of 200 pixels (≈3.1%) for real moiré patterns that lie within and outside the training intervals, respectively. In addition, a reduction of 43.4% in the computational time was reported using the ANN approach. Finally, the applicability of the ANN approach for moiré fringe center determination was confirmed.

  15. A new approach for sizing stand alone photovoltaic systems based in neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Hontoria, L.; Aguilera, J. [Universidad de Jaen, Dept. de Electronica, Jaen (Spain); Zufiria, P. [UPM Ciudad Universitaria, Dept. de Matematica Aplicada a las Tecnologias de la Informacion, Madrid (Spain)

    2005-02-01

    Several methods for sizing stand-alone photovoltaic (PV) systems have been developed. The simplest are called intuitive methods; they are a useful tool for a first approach to sizing stand-alone photovoltaic systems, but they are very inaccurate. Analytical methods use equations to describe the PV system size as a function of reliability. These are more accurate than the intuitive methods, but still not accurate enough for high-reliability sizing. A third group of methods uses system simulations; these are called numerical methods. Many of the analytical methods employ the concept of the reliability of the system or the complementary term, the loss of load probability (LOLP). In this paper an improvement for obtaining LOLP curves based on the neural network called the Multilayer Perceptron (MLP) is presented. A single MLP has been trained for many locations in Spain, and after training, the MLP is able to generate LOLP curves for any value and location. (Author)

  16. Artificial neural network (ANN) approach for modeling Zn(II) adsorption in batch process

    Energy Technology Data Exchange (ETDEWEB)

    Yildiz, Sayiter [Engineering Faculty, Cumhuriyet University, Sivas (Turkey)]

    2017-09-15

    Artificial neural networks (ANN) were applied to predict the adsorption efficiency of peanut shells for the removal of Zn(II) ions from aqueous solutions. The effects of initial pH, Zn(II) concentration, temperature, contact duration and adsorbent dosage were determined in batch experiments. The sorption capacities of the sorbents were predicted with the aid of equilibrium and kinetic models. Zn(II) adsorption onto peanut shell was better described by the pseudo-second-order kinetic model for both initial pH and temperature. The highest R² value in the isotherm studies was obtained from the Freundlich isotherm for the inlet concentration and from the Temkin isotherm for the sorbent amount. The high R² values show that modeling the adsorption process with ANN is a satisfactory approach. The experimental results and the results predicted by the ANN model were found to be highly compatible with each other.
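
    For reference, the standard textbook forms of the kinetic and isotherm models named in this record are given below; the equations are generic forms, not reproduced from the paper.

    ```latex
    % Pseudo-second-order kinetics (linearised form):
    \frac{t}{q_t} = \frac{1}{k_2\, q_e^{2}} + \frac{t}{q_e},
    % Freundlich isotherm:
    q_e = K_F\, C_e^{1/n},
    % Temkin isotherm:
    q_e = \frac{RT}{b_T}\, \ln\!\left(A_T C_e\right),
    % where q_t and q_e are the amounts adsorbed at time t and at equilibrium,
    % C_e is the equilibrium concentration, and k_2, K_F, n, A_T and b_T are
    % fitted constants.
    ```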

  17. An integrated data envelopment analysis-artificial neural network approach for benchmarking of bank branches

    Science.gov (United States)

    Shokrollahpour, Elsa; Hosseinzadeh Lotfi, Farhad; Zandieh, Mostafa

    2016-02-01

    Efficiency and quality of services are crucial to today's banking industry. Competition in this sector has become increasingly intense as a result of rapid improvements in technology, so performance analysis of the banking sector is attracting more attention these days. Although data envelopment analysis (DEA) is a pioneering approach in the literature as an efficiency measurement and benchmarking tool, it is unable to identify possible future benchmarks; the benchmarks it provides may still be less efficient than more advanced future ones. To address this weakness, an artificial neural network is integrated with DEA in this paper to calculate the relative efficiency and more reliable benchmarks of the branches of an Iranian commercial bank. Each branch can then adopt a strategy to improve its efficiency and eliminate the causes of inefficiency based on a 5-year forecast.

  18. Automatic detection of photoresist residual layer in lithography using a neural classification approach

    KAUST Repository

    Gereige, Issam

    2012-09-01

    Photolithography is a fundamental process in the semiconductor industry and is considered a key element towards extreme nanoscale integration. In this technique, a photosensitive polymer mask with the desired patterns is created on the substrate to be etched. Roughly speaking, the areas to be etched are not covered with polymer; thus, no residual layer should remain on these areas in order to ensure an optimal transfer of the patterns onto the substrate. In this paper, we propose a nondestructive method, based on a classification approach using an artificial neural network, for automatic residual layer detection from an ellipsometric signature. Only the case of a regular defect, i.e. a homogeneous residual layer, is considered, and the limitations of the method are discussed. An experimental result on a 400 nm period grating manufactured with nanoimprint lithography is then analyzed with our method. © 2012 Elsevier B.V. All rights reserved.

  19. Dual Analyte immunoassay--a new approach to neural tube defect and Down's syndrome screening.

    Science.gov (United States)

    Macri, J N; Spencer, K; Anderson, R

    1992-07-01

    A microtiter-plate-based Dual Analyte enzyme immunoassay method for the simultaneous measurement of alpha-fetoprotein (AFP) and free beta human chorionic gonadotrophin (hCG) was evaluated. This rapid assay, which has application in both neural tube defect screening and Down's syndrome screening, shows good precision, with between-assay coefficients of variation of 5 to 7.5% for AFP and 3.7 to 5.8% for free beta-hCG. Correlation with single analyte procedures is good, with correlation coefficients greater than 0.91 in both cases. Clinical discrimination in detecting both types of abnormality is not compromised by this new simultaneous Dual Analyte assay. We conclude that the Dual Analyte approach, which combines the analytes achieving the highest known detection efficiency, will bring about improvements in the efficiency of screening, reduce costs and improve report turnaround, all leading to better quality of patient care.

  20. Hammerstein-Wiener Model: A New Approach to the Estimation of Formal Neural Information

    Directory of Open Access Journals (Sweden)

    Reza Abbasi-Asl

    2012-09-01

    Full Text Available A new approach is introduced to estimate the formal information of neurons. Formal information mainly concerns the aspects of the response that are related to the stimulus. The estimation is based on a mathematical nonlinear model built with a Hammerstein-Wiener system estimator. This method of system identification consists of three blocks that describe the input and output nonlinearities and the linear dynamic behaviour of the model. The model is trained on 166 neuronal spikes, and another 166 spikes are used to test and validate it. The simulation results show an R-value of 92.6% between the estimated and reference information rates, an improvement of 1.41% over an MLP neural network.
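
    In its generic textbook form (notation assumed, not quoted from the paper), the Hammerstein-Wiener structure places a static input nonlinearity, a linear dynamic block and a static output nonlinearity in series:

    ```latex
    w(t) = f\big(u(t)\big), \qquad
    x(t) = \sum_{k=0}^{K} h_k\, w(t-k), \qquad
    \hat{y}(t) = g\big(x(t)\big),
    % f(.): static input nonlinearity, {h_k}: impulse response of the linear
    % dynamic block, g(.): static output nonlinearity; f, {h_k} and g are the
    % three blocks fitted during system identification.
    ```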

  1. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography.

    Science.gov (United States)

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa

    2017-03-01

    This paper presents an algorithm to calibrate the center of rotation for X-ray tomography by using a machine learning approach, the convolutional neural network (CNN). The algorithm shows excellent accuracy in the evaluation of synthetic data with various noise ratios. It is further validated with experimental data from four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. The CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.

  2. A neural network approach to smarter sensor networks for water quality monitoring.

    Science.gov (United States)

    O'Connor, Edel; Smeaton, Alan F; O'Connor, Noel E; Regan, Fiona

    2012-01-01

    Environmental monitoring is evolving towards large-scale, low-cost sensor networks operating reliably and autonomously over extended periods of time. Sophisticated analytical instrumentation, such as chemo-bio sensors, presents inherent limitations in the number of samples that can be taken. In order to maximize deployment lifetime, we propose the coordination of multiple heterogeneous information sources. We use rainfall radar images and information from a water depth sensor as input to a neural network (NN) to dictate the sampling frequency of a phosphate analyzer at the River Lee in Cork, Ireland. This approach shows varied performance at different times of the year but overall produces output that is very satisfactory for the application context in question. Our study demonstrates that, even with limited training data, a system for controlling the sampling rate of the nutrient sensor can be set up and can improve the efficiency of the more sophisticated nodes of the sensor network.

  3. An approach to including protein quality when assessing the net contribution of livestock to human food supply.

    Science.gov (United States)

    Ertl, P; Knaus, W; Zollitsch, W

    2016-11-01

    The production of protein from animal sources is often criticized because of the low efficiency of converting plant protein from feeds into protein in the animal products. However, this critique does not consider the fact that large portions of the plant-based proteins fed to animals may be human-inedible and that the quality of animal proteins is usually superior as compared with plant proteins. The aim of the present study was therefore to assess changes in protein quality in the course of the transformation of potentially human-edible plant proteins into animal products via livestock production; data from 30 Austrian dairy farms were used as a case study. A second aim was to develop an approach for combining these changes with quantitative aspects (e.g. with the human-edible feed conversion efficiency (heFCE), defined as kilogram protein in the animal product divided by kilogram potentially human-edible protein in the feeds). Protein quality of potentially human-edible inputs and outputs was assessed using the protein digestibility-corrected amino acid score and the digestible indispensable amino acid score, two methods proposed by the Food and Agriculture Organization of the United Nations to describe the nutritional value of proteins for humans. Depending on the method used, protein scores were between 1.40 and 1.87 times higher for the animal products than for the potentially human-edible plant protein input on a barn-gate level (=protein quality ratio (PQR)). Combining the PQR of 1.87 with the heFCE for the same farms resulted in heFCE×PQR of 2.15. Thus, considering both quantity and quality, the value of the proteins in the animal products for human consumption (in this case in milk and beef) is 2.15 times higher than that of proteins in the potentially human-edible plant protein inputs. The results of this study emphasize the necessity of including protein quality changes resulting from the transformation of plant proteins to animal proteins when

  4. Pore pressure effects on fracture net pressure and hydraulic fracture containment : Insights from an empirical and simulation approach

    NARCIS (Netherlands)

    Prabhakaran, R.; de Pater, C.J.; Shaoul, Josef

    2017-01-01

    Pore pressure and its relationship with fracture net pressure has been reported qualitatively from both field and experimental observations. From a modeling perspective, the ubiquitously used pseudo 3D (P3D) models that are based on linear elastic fracture mechanics (LEFM) do not include the

  5. Water level forecasting through fuzzy logic and artificial neural network approaches

    Directory of Open Access Journals (Sweden)

    S. Alvisi

    2006-01-01

    Full Text Available In this study three data-driven water level forecasting models are presented and discussed. One is based on the artificial neural networks approach, while the other two are based on the Mamdani and the Takagi-Sugeno fuzzy logic approaches, respectively. All of them are parameterised with reference to flood events alone, where water levels are higher than a selected threshold. The analysis of the three models is performed by using the same input and output variables. However, in order to evaluate their capability to deal with different levels of information, two different input sets are considered. The former is characterized by significant spatial and time aggregated rainfall information, while the latter considers rainfall information more distributed in space and time. The analysis is made with great attention to the reliability and accuracy of each model, with reference to the Reno river at Casalecchio di Reno (Bologna, Italy). It is shown that the two models based on the fuzzy logic approaches perform better when the physical phenomena considered are synthesised by both a limited number of variables and IF-THEN logic statements, while the ANN approach increases its performance when more detailed information is used. As regards the reliability aspect, it is shown that the models based on the fuzzy logic approaches may fail unexpectedly to forecast the water levels, in the sense that in the testing phase, some input combinations are not recognised by the rule system and thus no forecasting is performed. This problem does not occur in the ANN approach.

  6. Beyond GLMs: a generative mixture modeling approach to neural system identification.

    Directory of Open Access Journals (Sweden)

    Lucas Theis

    Full Text Available Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM, a linear and a quadratic model, by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret, as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and that the mixture-based model is able to outperform generalized linear and quadratic models.
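
    In standard notation (assumed here rather than quoted from the paper), a generative mixture classifier of the kind described models the spike-triggered and non-spike-triggered stimulus distributions as Gaussian mixtures and obtains the spike probability from Bayes' rule:

    ```latex
    p(\mathbf{x}\mid \mathrm{spike}) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}(\mathbf{x};\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k), \qquad
    p(\mathbf{x}\mid \mathrm{no\ spike}) = \sum_{m=1}^{M} \tilde{\pi}_m\, \mathcal{N}(\mathbf{x};\tilde{\boldsymbol{\mu}}_m,\tilde{\boldsymbol{\Sigma}}_m),
    % and, by Bayes' rule,
    P(\mathrm{spike}\mid \mathbf{x}) =
    \frac{p(\mathbf{x}\mid \mathrm{spike})\,P(\mathrm{spike})}
         {p(\mathbf{x}\mid \mathrm{spike})\,P(\mathrm{spike}) + p(\mathbf{x}\mid \mathrm{no\ spike})\,P(\mathrm{no\ spike})}.
    ```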

  7. Neural network based tomographic approach to detect earthquake-related ionospheric anomalies

    Directory of Open Access Journals (Sweden)

    S. Hirooka

    2011-08-01

    Full Text Available A tomographic approach is used to investigate the fine structure of electron density in the ionosphere. In the present paper, the Residual Minimization Training Neural Network (RMTNN) method is selected as the ionospheric tomography with which to investigate the detailed structure that may be associated with earthquakes. The 2007 Southern Sumatra earthquake (M = 8.5) was selected because significant decreases in the Total Electron Content (TEC) have been confirmed by GPS and global ionosphere map (GIM) analyses. The results of the RMTNN approach are consistent with those of TEC approaches. With respect to the analyzed earthquake, we observed significant decreases at heights of 250–400 km, especially at 330 km. However, the height that yields the maximum electron density does not change. In the obtained structures, the regions of decrease are located on the southwest and southeast sides of the Integrated Electron Content (IEC) (altitudes in the range of 400–550 km) and on the southern side of the IEC (altitudes in the range of 250–400 km). The global tendency is that the decreased region expands to the east with increasing altitude and concentrates in the Southern hemisphere over the epicenter. These results indicate that the RMTNN method is applicable to the estimation of ionospheric electron density.

  8. Development of a highly accurate OCR system by neural network approach

    Science.gov (United States)

    Poon, Joe C. H.; Man, Gary M. T.; Hung, Yan C.

    1993-04-01

    We have developed an application software system for font- and size-invariant optical character recognition (OCR). A preliminary front-end process, which handles gray scale normalization, noise elimination, line finding, and character block segmentation, has been included in our system. The main characteristic of our system, however, is that we adopt a Fourier descriptor (FD) based feature extraction approach and a multicategory back-propagation neural network classifier for the recognition. Instead of using one set of FDs to represent the image object as in the conventional FD approach, we use three sets of FDs to represent different portions of the object. We find this approach can solve the intrinsic problem of FDs caused by their rotational and reflectional invariance properties. Thus, our method can correctly classify ambiguous characters like 5, 2, p, q, etc. Our system will become a primitive building block for later, more complex OCR systems. In general, back propagation provides a gradient descent optimization in training. Without prior knowledge of the mathematical relation between the input and output, the network is highly efficient at mapping a large variety of input patterns to an arbitrary set of output patterns after successful training. Thus, the system is not only capable of understanding printed English text and isolated cursive scripts, but can also be extended to read other symbols at the expense of additional training time. At present, a 99.8 percent recognition accuracy has already been achieved for trained English samples in our experiment.
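
    A minimal sketch of the Fourier-descriptor feature idea follows (one descriptor set only; the record's system uses three sets covering different portions of each character and a back-propagation network as the classifier). The boundary, harmonic count and normalisation are illustrative assumptions.

    ```python
    # Hedged sketch: Fourier descriptors of a closed boundary, normalised for
    # translation, scale and starting point.
    import numpy as np

    def fourier_descriptors(boundary_xy, n_keep=10):
        """boundary_xy: (N, 2) array of ordered boundary points of one glyph."""
        z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]   # complex boundary signal
        spectrum = np.fft.fft(z)
        spectrum[0] = 0.0                                # drop DC term: translation invariance
        mags = np.abs(spectrum)                          # magnitudes: start-point/rotation invariance
        mags = mags / (mags[1] + 1e-12)                  # divide by first harmonic: scale invariance
        return mags[1:n_keep + 1]

    # Toy example: an elliptical "character" boundary.
    theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    ellipse = np.column_stack([3 * np.cos(theta), np.sin(theta)])
    print(fourier_descriptors(ellipse).round(3))
    ```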

  9. A neural network approach for fast, automated quantification of DIR performance.

    Science.gov (United States)

    Neylon, John; Min, Yugang; Low, Daniel A; Santhanam, Anand

    2017-08-01

    A critical step in adaptive radiotherapy (ART) workflow is deformably registering the simulation CT with the daily or weekly volumetric imaging. Quantifying the deformable image registration accuracy under these circumstances is a complex task due to the lack of known ground-truth landmark correspondences between the source data and target data. Generating landmarks manually (using experts) is time-consuming, and limited by image quality and observer variability. While image similarity metrics (ISM) may be used as an alternative approach to quantify the registration error, there is a need to characterize the ISM values by developing a nonlinear cost function and translate them to physical distance measures in order to enable fast, quantitative comparison of registration performance. In this paper, we present a proof-of-concept methodology for automated quantification of DIR performance. A nonlinear cost function was developed as a combination of ISM values and governed by the following two expectations for an accurate registration: (a) the deformed data obtained from transforming the simulation CT data with the deformation vector field (DVF) should match the target image data with near perfect similarity, and (b) the similarity between the simulation CT and deformed data should match the similarity between the simulation CT and the target image data. A deep neural network (DNN) was developed that translated the cost function values to actual physical distance measure. To train the neural network, patient-specific biomechanical models of the head-and-neck anatomy were employed. The biomechanical model anatomy was systematically deformed to represent changes in patient posture and physiological regression. Volumetric source and target images with known ground-truth deformations vector fields were then generated, representing the daily or weekly imaging data. Annotated data was then fed through a supervised machine learning process, iteratively optimizing a nonlinear

  10. Predicting manual arm strength: A direct comparison between artificial neural network and multiple regression approaches.

    Science.gov (United States)

    La Delfa, Nicholas J; Potvin, Jim R

    2016-02-29

    In ergonomics, strength prediction has typically been accomplished using linked-segment biomechanical models and independent estimates of strength about each axis of the wrist, elbow and shoulder joints. It has recently been shown that multiple regression approaches, using the simple task-relevant inputs of hand location and force direction, may be a better method for predicting manual arm strength (MAS) capabilities. Artificial neural networks (ANNs) also serve as a powerful data fitting approach, but their application to occupational biomechanics and ergonomics is limited. Therefore, the purpose of this study was to perform a direct comparison between ANN and regression models by evaluating their ability to predict MAS with identical sets of development and validation MAS data. Multi-directional MAS data were obtained from 95 healthy female participants at 36 hand locations within the reach envelope. ANN and regression models were developed using a random, but identical, sample of 85% of the MAS data (n=456). The remaining 15% of the data (n=80) were used to validate the two approaches. When compared to the development data, the ANN predictions had a much higher explained variance (90.2% vs. 66.5%) and much lower RMSD (9.3 N vs. 17.2 N) than the regression model. The ANN also performed better with the independent validation data (r² = 78.6%, RMSD = 15.1 N) compared to the regression approach (r² = 65.3%, RMSD = 18.6 N). These results suggest that ANNs provide a more accurate and robust alternative to regression approaches, and should be considered more often in biomechanics and ergonomics evaluations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via multiple integral approach.

    Science.gov (United States)

    Chandrasekar, A; Rakkiyappan, R; Cao, Jinde

    2015-10-01

    This paper studies the impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via a multiple integral approach. The array of neural networks is coupled in a random fashion governed by a Bernoulli random variable. The aim of this paper is to obtain synchronization criteria, suitable for both exactly known and partly unknown transition probabilities, such that the coupled neural network is synchronized with mixed time delay. The considered impulsive effects can achieve synchronization under partly unknown transition probabilities. Besides, a multiple integral approach is also proposed to strengthen the results for Markovian jumping randomly coupled neural networks with partly unknown transition probabilities. By making use of the Kronecker product and some useful integral inequalities, a novel Lyapunov-Krasovskii functional was designed for handling the coupled neural network with mixed delay, and the impulsive synchronization criteria are then obtained as a set of linear matrix inequalities. Finally, numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. The Application of Cognitive Diagnostic Approaches via Neural Network Analysis of Serious Educational Games

    Science.gov (United States)

    Lamb, Richard L.

    Serious Educational Games (SEGs) have been a topic of increasing popularity within the educational realm since the turn of the millennium. SEGs are a generalized form of serious games, meaning games for purposes other than entertainment that specifically include training, educational purpose and pedagogy within their design. This rise in popularity of SEGs has occurred at a time when school systems have increased the type, number, and presentation of student achievement tests for decision-making purposes. These tests often take the form of end-of-course (year) tests and periodic benchmark testing. As the use of these tests has increased, policymakers have suggested their use as a measure for teacher accountability. The change in testing resulted from a push by school districts and policymakers at various levels for a data-driven decision-making (D3M) approach. With the data-driven decision-making approaches adopted by school districts, there has been an increased focus on the measurement and assessment of student content knowledge, with little focus on the contributing factors and cognitive attributes within learning that cross multiple content areas. One way to increase the focus on these aspects of learning (factors and attributes), in addition to content learning, is through assessments based on cognitive diagnostics. Cognitive diagnostics are a family of methodological approaches in which tasks are tied to specific cognitive attributes for analytical purposes. This study explores data derived from computer data logging (n=158,000) in an observational design, using traditional statistical techniques such as clustering (exploratory and confirmatory) and item response theory, and data mining techniques such as artificial neural network analysis. From these analyses, a model of student learning emerges illustrating student thinking and learning while engaged in SEG design. This study seeks to use cognitive diagnostic type approaches to measure student

  13. Neural Decoding and “Inner” Psychophysics: A Distance-to-Bound Approach for Linking Mind, Brain, and Behavior

    Science.gov (United States)

    Ritchie, J. Brendan; Carlson, Thomas A.

    2016-01-01

    A fundamental challenge for cognitive neuroscience is characterizing how the primitives of psychological theory are neurally implemented. Attempts to meet this challenge are a manifestation of what Fechner called “inner” psychophysics: the theory of the precise mapping between mental quantities and the brain. In his own time, inner psychophysics remained an unrealized ambition for Fechner. We suggest that, today, multivariate pattern analysis (MVPA), or neural “decoding,” methods provide a promising starting point for developing an inner psychophysics. A cornerstone of these methods is the use of simple linear classifiers applied to neural activity in high-dimensional activation spaces. We describe an approach to inner psychophysics based on the shared architecture of linear classifiers and observers under decision boundary models such as signal detection theory. Under this approach, distance from a decision boundary through activation space, as estimated by linear classifiers, can be used to predict reaction time in accordance with signal detection theory, and distance-to-bound models of reaction time. Our “neural distance-to-bound” approach is potentially quite general, and simple to implement. Furthermore, our recent work on visual object recognition suggests it is empirically viable. We believe the approach constitutes an important step along the path to an inner psychophysics that links mind, brain, and behavior. PMID:27199652

  14. Neural Decoding and "Inner" Psychophysics: A Distance-to-Bound Approach for Linking Mind, Brain, and Behavior.

    Science.gov (United States)

    Ritchie, J Brendan; Carlson, Thomas A

    2016-01-01

    A fundamental challenge for cognitive neuroscience is characterizing how the primitives of psychological theory are neurally implemented. Attempts to meet this challenge are a manifestation of what Fechner called "inner" psychophysics: the theory of the precise mapping between mental quantities and the brain. In his own time, inner psychophysics remained an unrealized ambition for Fechner. We suggest that, today, multivariate pattern analysis (MVPA), or neural "decoding," methods provide a promising starting point for developing an inner psychophysics. A cornerstone of these methods is the use of simple linear classifiers applied to neural activity in high-dimensional activation spaces. We describe an approach to inner psychophysics based on the shared architecture of linear classifiers and observers under decision boundary models such as signal detection theory. Under this approach, distance from a decision boundary through activation space, as estimated by linear classifiers, can be used to predict reaction time in accordance with signal detection theory, and distance-to-bound models of reaction time. Our "neural distance-to-bound" approach is potentially quite general, and simple to implement. Furthermore, our recent work on visual object recognition suggests it is empirically viable. We believe the approach constitutes an important step along the path to an inner psychophysics that links mind, brain, and behavior.
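
    The following is a hedged, synthetic illustration of the neural distance-to-bound idea described in the two records above: a linear classifier is trained on activation patterns, and each trial's distance from the decision boundary is correlated with reaction time. All data, sizes and the reaction-time model are made up for the example.

    ```python
    # Hedged sketch: decision-boundary distance from a linear classifier vs. reaction time.
    import numpy as np
    from sklearn.svm import LinearSVC
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n_trials, n_units = 400, 50
    labels = rng.integers(0, 2, n_trials)                   # two stimulus categories
    evidence = (labels[:, None] * 2 - 1) * rng.uniform(0.2, 1.0, (n_trials, 1))
    patterns = evidence * rng.normal(1.0, 0.1, (1, n_units)) \
               + rng.normal(0.0, 1.0, (n_trials, n_units))  # synthetic "activation" patterns

    clf = LinearSVC(C=1.0, max_iter=10000).fit(patterns, labels)
    distance = np.abs(clf.decision_function(patterns))      # distance to the boundary

    # Under distance-to-bound models, larger distances should mean faster responses.
    rt = 600.0 - 80.0 * distance + rng.normal(0.0, 30.0, n_trials)
    r, p = pearsonr(distance, rt)
    print(f"distance vs RT: r = {r:.2f}, p = {p:.3g}")
    ```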

  15. Curriculum Assessment Using Artificial Neural Network and Support Vector Machine Modeling Approaches: A Case Study. IR Applications. Volume 29

    Science.gov (United States)

    Chen, Chau-Kuang

    2010-01-01

    Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…

  16. Net Zero Energy Buildings

    DEFF Research Database (Denmark)

    Marszal, Anna Joanna; Bourrelle, Julien S.; Musall, Eike

    2010-01-01

    The international cooperation project IEA SHC Task 40 / ECBCS Annex 52 “Towards Net Zero Energy Solar Buildings” attempts to develop a common understanding and to set up the basis for an international definition framework of Net Zero Energy Buildings (Net ZEBs). The understanding of such buildings ... parameters used in the calculations are discussed and the various renewable supply options considered in the methodologies are summarised graphically. Thus, the paper helps to understand the different existing approaches to calculating the energy balance in Net ZEBs, highlights the importance of variable selection and identifies possible renewable energy supply options which may be considered in the calculations. Finally, the gap between the methodology proposed by each organisation and their respective national building code is assessed, providing an overview of the possible changes building codes will need to undergo ...

  17. NetSig

    DEFF Research Database (Denmark)

    Horn, Heiko; Lawrence, Michael S; Chouinard, Candace R

    2018-01-01

    Methods that integrate molecular network information and tumor genome data could complement gene-based statistical tests to identify likely new cancer genes; but such approaches are challenging to validate at scale, and their predictive value remains unclear. We developed a robust statistic (NetSig) that integrates protein interaction networks with data from 4,742 tumor exomes. NetSig can accurately classify known driver genes in 60% of tested tumor types and predicts 62 new driver candidates. Using a quantitative experimental framework to determine in vivo tumorigenic potential in mice, we found that NetSig candidates induce tumors at rates that are comparable to those of known oncogenes and are ten-fold higher than those of random genes. By reanalyzing nine tumor-inducing NetSig candidates in 242 patients with oncogene-negative lung adenocarcinomas, we find that two (AKT2 and TFDP2) are significantly amplified

  18. Application of a neural network for reflectance spectrum classification

    Science.gov (United States)

    Yang, Gefei; Gartley, Michael

    2017-05-01

    Traditional reflectance spectrum classification algorithms are based on comparing spectra across the electromagnetic spectrum, anywhere from the ultraviolet to the thermal infrared regions. These methods analyze reflectance on a pixel-by-pixel basis. Inspired by the high performance that convolutional neural networks (CNNs) have demonstrated in image classification, we applied a neural network to analyze directional reflectance pattern images. By using bidirectional reflectance distribution function (BRDF) data, we can reformulate the four-dimensional BRDF into two dimensions, namely incident direction × reflected direction × channels. Meanwhile, RIT's micro-DIRSIG model is utilized to simulate additional training samples to improve the robustness of the neural network training. Unlike traditional classification using hand-designed feature extraction with a trainable classifier, neural networks create several layers to learn a feature hierarchy from pixels to classifier, and all layers are trained jointly. Hence, our approach of utilizing angular features differs from traditional methods utilizing spatial features. Although the training process typically has a large computational cost, simple classifiers work well when subsequently using neural-network-generated features. Currently, most popular neural networks such as VGG, GoogLeNet and AlexNet are trained on RGB spatial image data. Our approach aims to build a neural network based on directional reflectance spectra to help us understand the problem from another perspective. At the end of this paper, we compare several classifiers and analyze the trade-offs among neural network parameters.

  19. A regional GNSS-VTEC model over Nigeria using neural networks: A novel approach

    Directory of Open Access Journals (Sweden)

    Daniel Okoh

    2016-01-01

    Full Text Available A neural network model of the Global Navigation Satellite System – vertical total electron content (GNSS-VTEC) over Nigeria is developed. A new approach utilized in this work is the consideration of the International Reference Ionosphere's (IRI's) critical plasma frequency (foF2) parameter as an additional neuron for the network's input layer. The work also explores the effects of using various other input layer neurons, such as disturbance storm time (DST) and sunspot number. All available GNSS data from the Nigerian Permanent GNSS Network (NIGNET) were used, covering the period from 2011 to 2015 for 14 stations. Aside from increasing the learning accuracy of the networks, the inclusion of the IRI's foF2 parameter as an input neuron is ideal for enabling the networks to learn long-term solar cycle variations. This is especially important for regions where, as in this work, the GNSS data are available for less than the period of a solar cycle. The neural network model developed in this work has been tested for temporal and spatial performance. The latest 10% of the GNSS observations from each of the stations were used to test the forecasting ability of the networks, while data from 2 of the stations were used entirely for spatial performance testing. The results show that root-mean-square errors were generally less than 8.5 TEC units for all modes of testing performed using the optimal network. When compared to other models, the model developed in this work was observed to reduce the prediction errors to about half those of the NeQuick and IRI models.

  20. Forecasting Electricity Demand in Thailand with an Artificial Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Karin Kandananond

    2011-08-01

    Full Text Available Demand planning for electricity consumption is a key success factor for the development of any country. However, this can only be achieved if the demand is forecasted accurately. In this research, different forecasting methods, namely autoregressive integrated moving average (ARIMA), artificial neural network (ANN) and multiple linear regression (MLR), were utilized to formulate prediction models of the electricity demand in Thailand. The objective was to compare the performance of these three approaches, and the empirical data used in this study were the historical data regarding electricity demand (population, gross domestic product (GDP), stock index, revenue from exporting industrial products and electricity consumption) in Thailand from 1986 to 2010. The results showed that the ANN model reduced the mean absolute percentage error (MAPE) to 0.996%, while those of ARIMA and MLR were 2.80981% and 3.2604527%, respectively. Based on these error measures, the results indicated that the ANN approach outperformed the ARIMA and MLR methods in this scenario. However, a paired test indicated that there was no significant difference among these methods at α = 0.05. According to the principle of parsimony, the ARIMA and MLR models might be preferable to the ANN one because of their simpler structure and competitive performance.
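
    As a small worked example of the error measure used to rank the models, the sketch below computes the mean absolute percentage error (MAPE); the numbers are made up and do not come from the study.

    ```python
    # MAPE: mean absolute percentage error between actual and forecast series.
    import numpy as np

    def mape(actual, forecast):
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    actual = [118.0, 121.5, 126.3, 131.0]    # e.g. yearly demand, arbitrary units
    ann    = [117.2, 122.0, 125.8, 130.1]    # hypothetical ANN forecasts
    arima  = [115.0, 118.9, 123.0, 127.5]    # hypothetical ARIMA forecasts
    print("ANN MAPE  :", round(mape(actual, ann), 3), "%")
    print("ARIMA MAPE:", round(mape(actual, arima), 3), "%")
    ```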

  1. Surface Roughness Optimization of Polyamide-6/Nanoclay Nanocomposites Using Artificial Neural Network: Genetic Algorithm Approach

    Science.gov (United States)

    Moghri, Mehdi; Omidi, Mostafa; Farahnakian, Masoud

    2014-01-01

    During the past decade, polymer nanocomposites attracted considerable investment in research and development worldwide. One of the key factors that affect the quality of polymer nanocomposite products in machining is surface roughness. To obtain high quality products and reduce machining costs it is very important to determine the optimal machining conditions so as to achieve enhanced machining performance. The objective of this paper is to develop a predictive model using a combined design of experiments and artificial intelligence approach for optimization of surface roughness in milling of polyamide-6 (PA-6) nanocomposites. A surface roughness predictive model was developed in terms of milling parameters (spindle speed and feed rate) and nanoclay (NC) content using artificial neural network (ANN). As the present study deals with relatively small number of data obtained from full factorial design, application of genetic algorithm (GA) for ANN training is thought to be an appropriate approach for the purpose of developing accurate and robust ANN model. In the optimization phase, a GA is considered in conjunction with the explicit nonlinear function derived from the ANN to determine the optimal milling parameters for minimization of surface roughness for each PA-6 nanocomposite. PMID:24578636

  2. Examining the articulation of innovativeness in co-creative firms: a neural network approach

    Science.gov (United States)

    di Tollo, Giacomo; Tanev, Stoyan

    2011-03-01

    Value co-creation is an emerging marketing and innovation paradigm describing a broader opening of the firm to its customers by providing them with the opportunity to become active participants in the design and development of personalized products, services and experiences. The aim of the present contribution is to provide preliminary results from a research project focusing on the relationship between value co-creation and the perception of innovation in technology-driven firms. The data were collected in a previous study using web search techniques and factor analysis to identify the key co-creation components and the frequency of firms' online comments about their new products, processes and services. The present work focuses on using an artificial neural network (ANN) approach to understand whether the extent of value co-creation activities can be considered an indicator of the perception of innovation. The preliminary simulation results indicate the existence of such a relationship. The ANN approach does not suggest a specific model, but the relationship found between the forecasted and actual values of the perception of innovation clearly points in this direction.

  3. Hysteresis Nonlinearity Identification Using New Preisach Model-Based Artificial Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Zakerzadeh

    2011-01-01

    Full Text Available The Preisach model is a well-known hysteresis identification method in which the hysteresis is modeled by a linear combination of hysteresis operators. Although the Preisach model describes the main features of systems with hysteresis behavior, its rigorous numerical nature makes it inconvenient to use in real-time control applications. Here a novel neural network approach based on the Preisach model is presented; it provides accurate hysteresis nonlinearity modeling in comparison with the classical Preisach model and can be used for many applications, such as hysteresis nonlinearity control and identification in SMA and piezo actuators and performance evaluation in physical systems such as magnetic materials. To evaluate the proposed approach, an experimental apparatus consisting of a one-dimensional flexible aluminum beam actuated with an SMA wire is used. It is shown that the proposed ANN-based Preisach model can identify hysteresis nonlinearity more accurately than the classical one. It also has a powerful ability to precisely predict higher-order hysteresis minor loop behavior, even though only the first-order reversal data are used. It is also shown that to get the same precise results with the classical Preisach model, many more data should be used, which directly increases the experimental cost.
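
    For context, the classical Preisach model that the neural network approach builds on has the standard form below (textbook notation, not quoted from the paper):

    ```latex
    f(t) \;=\; \iint_{\alpha \ge \beta} \mu(\alpha,\beta)\, \gamma_{\alpha\beta}\big[u(t)\big]\, d\alpha\, d\beta,
    % where \gamma_{\alpha\beta} is the elementary relay operator that switches up at
    % u = \alpha and down at u = \beta, and \mu(\alpha,\beta) is the Preisach weighting
    % (density) function, classically identified from first-order reversal data.
    ```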

  4. Surface roughness optimization of polyamide-6/nanoclay nanocomposites using artificial neural network: genetic algorithm approach.

    Science.gov (United States)

    Moghri, Mehdi; Madic, Milos; Omidi, Mostafa; Farahnakian, Masoud

    2014-01-01

    During the past decade, polymer nanocomposites attracted considerable investment in research and development worldwide. One of the key factors that affect the quality of polymer nanocomposite products in machining is surface roughness. To obtain high quality products and reduce machining costs it is very important to determine the optimal machining conditions so as to achieve enhanced machining performance. The objective of this paper is to develop a predictive model using a combined design of experiments and artificial intelligence approach for optimization of surface roughness in milling of polyamide-6 (PA-6) nanocomposites. A surface roughness predictive model was developed in terms of milling parameters (spindle speed and feed rate) and nanoclay (NC) content using artificial neural network (ANN). As the present study deals with relatively small number of data obtained from full factorial design, application of genetic algorithm (GA) for ANN training is thought to be an appropriate approach for the purpose of developing accurate and robust ANN model. In the optimization phase, a GA is considered in conjunction with the explicit nonlinear function derived from the ANN to determine the optimal milling parameters for minimization of surface roughness for each PA-6 nanocomposite.
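
    A hedged, self-contained sketch of the ANN-plus-GA idea in this record follows: a small network learns to predict roughness from spindle speed, feed rate and nanoclay content (synthetic data here), and a very simple genetic algorithm then searches speed and feed for the lowest predicted roughness at a fixed nanoclay content. Variable ranges, data and GA settings are illustrative assumptions.

    ```python
    # Hedged sketch: surrogate ANN for surface roughness + simple GA over milling parameters.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n = 300
    speed = rng.uniform(2000, 8000, n)            # spindle speed (rpm), assumed range
    feed = rng.uniform(50, 400, n)                # feed rate (mm/min), assumed range
    nc = rng.choice([0.0, 2.5, 5.0], n)           # nanoclay content (wt%), assumed levels
    ra = 2.0 + 0.004 * feed - 0.0002 * speed + 0.1 * nc + rng.normal(0, 0.05, n)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor((20, 20), max_iter=5000, random_state=0))
    model.fit(np.column_stack([speed, feed, nc]), ra)

    def predicted_ra(pop, nc_level=2.5):
        x = np.column_stack([pop[:, 0], pop[:, 1], np.full(len(pop), nc_level)])
        return model.predict(x)

    # Very small GA: truncation selection, blend crossover, Gaussian mutation.
    bounds = np.array([[2000.0, 8000.0], [50.0, 400.0]])
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))
    for _ in range(30):
        fitness = predicted_ra(pop)
        parents = pop[np.argsort(fitness)[:20]]                  # keep the best half
        mates = parents[rng.integers(0, 20, size=(40, 2))]       # pick random parent pairs
        alpha = rng.uniform(size=(40, 1))
        pop = alpha * mates[:, 0] + (1 - alpha) * mates[:, 1]    # blend crossover
        pop += rng.normal(0.0, [100.0, 10.0], size=pop.shape)    # Gaussian mutation
        pop = np.clip(pop, bounds[:, 0], bounds[:, 1])

    best = pop[np.argmin(predicted_ra(pop))]
    print("suggested spindle speed (rpm) and feed rate (mm/min):", best.round(1))
    ```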

  5. An innovative approach to improve SRTM DEM using multispectral imagery and artificial neural network

    Science.gov (United States)

    Wendi, Dadiyorto; Liong, Shie-Yui; Sun, Yabin; doan, Chi Dung

    2016-06-01

    Although the Shuttle Radar Topography Mission (SRTM) data are a publicly accessible Digital Elevation Model (DEM) provided at no cost, their accuracy, especially in forested areas, is known to be limited, with a root mean square error (RMSE) of approximately 14 m in Singapore's forested areas. Such inaccuracy is attributed to the 5.6 cm wavelength used by SRTM, which does not penetrate vegetation well. This paper considers forested areas of the central catchment of Singapore as a proof of concept of an approach to improve the SRTM data set. The approach makes full use of (1) the introduction of multispectral imagery (Landsat 8), of 30 m resolution, into the SRTM data; (2) the Artificial Neural Network (ANN), to exploit its known strengths in pattern recognition; and (3) a reference DEM of high accuracy (1 m) derived through the integration of stereo imaging from WorldView-1 and extensive ground survey points. The study shows a series of significant improvements of the SRTM when assessed against the reference DEM over 2 different areas, with RMSE reductions of ~68% (from 13.9 m to 4.4 m) and ~52% (from 14.2 m to 6.7 m). In addition, the assessment of the resulting DEM also includes comparisons with a simple denoising methodology (Low Pass Filter) and a commercially available product called NEXTMap® World 30™.

  6. Bias correction in SMAP soil moisture assimilation using a neural network approach

    Science.gov (United States)

    Kolassa, J.; Reichle, R. H.; Gentine, P.; Alemohammad, S. H.; Prigent, C.; Aires, F.; Draper, C. S.; Liu, Q.

    2016-12-01

    Statistical techniques permit the retrieval of soil moisture estimates in a model climatology while retaining the spatial and temporal signatures of the satellite observations. As a consequence, they can be used to implement an alternative bias correction to the local cumulative distribution function matching typically used in soil moisture data assimilation (DA) systems. In this study, a statistical neural network (NN) retrieval algorithm is calibrated using SMAP brightness temperature observations and modeled soil moisture (which is also used to calibrate the SMAP Level 4 DA system). Daily values of surface soil moisture are estimated using the NN and then assimilated into the NASA Catchment model. We assess the skill of the NN retrieval and the assimilation estimates through a comprehensive comparison to in situ measurements from the SMAP core and sparse network sites. The NN method compares well against the official RTM based approach and is able to extract information from the SMAP observations that is complementary to the model. Additionally, we compare the NN method to more traditional bias correction approaches and analyze the potential of using spatially variable error estimates to improve the relative impact of observations in the assimilation.

  7. Prescribed performance synchronization controller design of fractional-order chaotic systems: An adaptive neural network control approach

    Directory of Open Access Journals (Sweden)

    Yuan Li

    2017-03-01

    Full Text Available In this study, an adaptive neural network synchronization (NNS) approach, capable of guaranteeing prescribed performance (PP), is designed for non-identical fractional-order chaotic systems (FOCSs). By PP synchronization, we mean that the synchronization error converges to an arbitrarily small region of the origin with a convergence rate greater than some function given in advance. Neural networks are utilized to estimate unknown nonlinear functions in the closed-loop system. Based on the integer-order Lyapunov stability theorem, a fractional-order adaptive NNS controller is designed, and the PP can be guaranteed. Finally, simulation results are presented to confirm our results.

  8. Prescribed performance synchronization controller design of fractional-order chaotic systems: An adaptive neural network control approach

    Science.gov (United States)

    Li, Yuan; Lv, Hui; Jiao, Dongxiu

    2017-03-01

    In this study, an adaptive neural network synchronization (NNS) approach, capable of guaranteeing prescribed performance (PP), is designed for non-identical fractional-order chaotic systems (FOCSs). By PP synchronization, we mean that the synchronization error converges to an arbitrarily small region of the origin with a convergence rate greater than some function given in advance. Neural networks are utilized to estimate unknown nonlinear functions in the closed-loop system. Based on the integer-order Lyapunov stability theorem, a fractional-order adaptive NNS controller is designed, and the PP can be guaranteed. Finally, simulation results are presented to confirm our results.
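
    For reference, the prescribed performance condition described above is usually stated as a performance-function bound; the exponential form of rho(t) below is the standard choice and is an assumption here, not necessarily the exact function used by the authors.

      \[
        -\rho(t) < e_i(t) < \rho(t), \qquad
        \rho(t) = (\rho_0 - \rho_\infty)\, e^{-\ell t} + \rho_\infty ,
      \]
      where $e_i(t)$ is the $i$-th synchronization error, $\rho_0 > \rho_\infty > 0$ set the initial and steady-state error bounds, and $\ell > 0$ prescribes the minimum convergence rate.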

  9. Petri Nets

    Indian Academy of Sciences (India)

    Y Narahari is Associate Professor of Computer Science and Automation at the Indian Institute of Science, Bangalore. His research interests are broadly in the areas of stochastic modeling and scheduling methodologies for future factories, and object-oriented modeling. General Article: Petri Nets - 1. Overview and Foundations.

  10. Petri Nets

    Indian Academy of Sciences (India)

    Petri Nets - Overview and Foundations. Y Narahari. General Article, Resonance – Journal of Science Education, Volume 4, Issue 8, August 1999. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  11. Artificial Neural Network Approach to Predict Biodiesel Production in Supercritical tert-Butyl Methyl Ether

    OpenAIRE

    Obie Farobie; Nur Hasanah

    2016-01-01

    In this study, for the first time an artificial neural network was used to predict biodiesel yield in supercritical tert-butyl methyl ether (MTBE). Experimental biodiesel yield data, obtained by varying four input factors (i.e. temperature, pressure, oil-to-MTBE molar ratio, and reaction time), were used to build an artificial neural network model for predicting biodiesel yield. The main goal of this study was to assess how accurately this artificial neural network model could predict b...

  12. Multiscale approach including microfibril scale to assess elastic constants of cortical bone based on neural network computation and homogenization method

    CERN Document Server

    Barkaoui, Abdelwahed; Tarek, Merzouki; Hambli, Ridha; Ali, Mkaddem

    2014-01-01

    The complexity and heterogeneity of bone tissue require a multiscale modelling to understand its mechanical behaviour and its remodelling mechanisms. In this paper, a novel multiscale hierarchical approach including microfibril scale based on hybrid neural network computation and homogenisation equations was developed to link nanoscopic and macroscopic scales to estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained neural network simulation. Finite element (FE) calculation is performed at nanoscopic levels to provide a database to train an in-house neural network program; (iii) in steps 2 to 10 from fibril to continuum cortical bone tissue, homogenisation equations are used to perform the computation at the higher s...

  13. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    Science.gov (United States)

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible to model the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and in combination with sensitivity analysis superior to linear methods.

  14. A Graph Algorithmic Approach to Separate Direct from Indirect Neural Interactions.

    Science.gov (United States)

    Wollstadt, Patricia; Meyer, Ulrich; Wibral, Michael

    2015-01-01

    Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; especially in neuroscience, network graphs are increasingly used to represent and analyze functional interactions between multiple neural sources. Interactions are often reconstructed using pairwise bivariate analyses, overlooking the multivariate nature of interactions: it is neglected that investigating the effect of one source on a target necessitates to take all other sources as potential nuisance variables into account; also combinations of sources may act jointly on a given target. Bivariate analyses produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics. A truly multivariate reconstruction, however, is computationally intractable because of the combinatorial explosion in the number of potential interactions. Thus, we have to resort to approximative methods to handle the intractability of multivariate interaction reconstruction, and thereby enable the use of networks in neuroscience. Here, we suggest such an approximative approach in the form of an algorithm that extends fast bivariate interaction reconstruction by identifying potentially spurious interactions post-hoc: the algorithm uses interaction delays reconstructed for directed bivariate interactions to tag potentially spurious edges on the basis of their timing signatures in the context of the surrounding network. Such tagged interactions may then be pruned, which produces a statistically conservative network approximation that is guaranteed to contain non-spurious interactions only. We describe the algorithm and present a reference implementation in MATLAB to test the algorithm's performance on simulated networks as well as networks derived from magnetoencephalographic data. We discuss the algorithm in relation to other approximative multivariate methods and highlight suitable application scenarios. Our approach is a

  15. A Graph Algorithmic Approach to Separate Direct from Indirect Neural Interactions.

    Directory of Open Access Journals (Sweden)

    Patricia Wollstadt

    Full Text Available Network graphs have become a popular tool to represent complex systems composed of many interacting subunits; especially in neuroscience, network graphs are increasingly used to represent and analyze functional interactions between multiple neural sources. Interactions are often reconstructed using pairwise bivariate analyses, overlooking the multivariate nature of interactions: it is neglected that investigating the effect of one source on a target necessitates to take all other sources as potential nuisance variables into account; also combinations of sources may act jointly on a given target. Bivariate analyses produce networks that may contain spurious interactions, which reduce the interpretability of the network and its graph metrics. A truly multivariate reconstruction, however, is computationally intractable because of the combinatorial explosion in the number of potential interactions. Thus, we have to resort to approximative methods to handle the intractability of multivariate interaction reconstruction, and thereby enable the use of networks in neuroscience. Here, we suggest such an approximative approach in the form of an algorithm that extends fast bivariate interaction reconstruction by identifying potentially spurious interactions post-hoc: the algorithm uses interaction delays reconstructed for directed bivariate interactions to tag potentially spurious edges on the basis of their timing signatures in the context of the surrounding network. Such tagged interactions may then be pruned, which produces a statistically conservative network approximation that is guaranteed to contain non-spurious interactions only. We describe the algorithm and present a reference implementation in MATLAB to test the algorithm's performance on simulated networks as well as networks derived from magnetoencephalographic data. We discuss the algorithm in relation to other approximative multivariate methods and highlight suitable application scenarios

  16. Improving automated multiple sclerosis lesion segmentation with a cascaded 3D convolutional neural network approach.

    Science.gov (United States)

    Valverde, Sergi; Cabezas, Mariano; Roura, Eloy; González-Villà, Sandra; Pareto, Deborah; Vilanova, Joan C; Ramió-Torrentà, Lluís; Rovira, Àlex; Oliver, Arnau; Lladó, Xavier

    2017-07-15

    In this paper, we present a novel automated method for White Matter (WM) lesion segmentation of Multiple Sclerosis (MS) patient images. Our approach is based on a cascade of two 3D patch-wise convolutional neural networks (CNN). The first network is trained to be more sensitive, revealing possible candidate lesion voxels, while the second network is trained to reduce the number of misclassified voxels coming from the first network. This cascaded CNN architecture tends to learn well from a small (n≤35) set of labeled data of the same MRI contrast, which can be very interesting in practice, given the difficulty of obtaining manual label annotations and the large amount of available unlabeled Magnetic Resonance Imaging (MRI) data. We evaluate the accuracy of the proposed method on the public MS lesion segmentation challenge MICCAI2008 dataset, comparing it with other state-of-the-art MS lesion segmentation tools. Furthermore, the proposed method is also evaluated on two private MS clinical datasets, where the performance of our method is compared with different recent publicly available state-of-the-art MS lesion segmentation methods. At the time of writing this paper, our method is the best ranked approach on the MICCAI2008 challenge, outperforming the other 60 participating methods when using all the available input modalities (T1-w, T2-w and FLAIR), while still ranking near the top (3rd position) when using only the T1-w and FLAIR modalities. On clinical MS data, our approach exhibits a significant increase in the accuracy of WM lesion segmentation when compared with the other evaluated methods, while also correlating highly (r≥0.97) with the expected lesion volume. Copyright © 2017 Elsevier Inc. All rights reserved.
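
    The two-stage patch classification described above can be sketched in PyTorch as two small 3D CNNs, where only patches flagged by the first network reach the second; the layer sizes, patch size and channel count below are illustrative assumptions, not the architecture of the paper.

      import torch
      import torch.nn as nn

      def make_patch_cnn(in_channels=2):
          # Small 3D patch classifier: lesion vs. background for an 11x11x11 patch.
          return nn.Sequential(
              nn.Conv3d(in_channels, 32, kernel_size=3), nn.ReLU(),
              nn.Conv3d(32, 64, kernel_size=3), nn.ReLU(),
              nn.MaxPool3d(2),
              nn.Flatten(),
              nn.Linear(64 * 3 * 3 * 3, 128), nn.ReLU(),
              nn.Linear(128, 2),
          )

      net1 = make_patch_cnn()   # high-sensitivity candidate detector
      net2 = make_patch_cnn()   # false-positive reduction on net1's candidates

      patches = torch.randn(8, 2, 11, 11, 11)        # e.g. T1-w and FLAIR channels
      candidate_scores = net1(patches).softmax(dim=1)[:, 1]
      candidates = patches[candidate_scores > 0.5]   # only these reach the second CNN
      if len(candidates) > 0:
          final_scores = net2(candidates).softmax(dim=1)[:, 1]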

  17. Ontology Mapping Neural Network: An Approach to Learning and Inferring Correspondences among Ontologies

    Science.gov (United States)

    Peng, Yefei

    2010-01-01

    An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the…

  18. ANNarchy: a code generation approach to neural simulations on parallel hardware

    Directory of Open Access Journals (Sweden)

    Julien Vitay

    2015-07-01

    Full Text Available Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows one to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that efficiently performs the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions.

  19. Petri Net and Probabilistic Model Checking Based Approach for the Modelling, Simulation and Verification of Internet Worm Propagation.

    Science.gov (United States)

    Razzaq, Misbah; Ahmad, Jamil

    2015-01-01

    Internet worms are analogous to biological viruses since they can infect a host and have the ability to propagate through a chosen medium. To prevent the spread of a worm or to understand how to regulate a prevailing worm, compartmental models are commonly used as a means to examine and understand the patterns and mechanisms of a worm spread. However, one of the greatest challenges is to produce methods to verify and validate the behavioural properties of a compartmental model. This is why in this study we suggest a framework based on Petri Nets and Model Checking through which we can meticulously examine and validate these models. We investigate the Susceptible-Exposed-Infectious-Recovered (SEIR) model and propose a new model, Susceptible-Exposed-Infectious-Recovered-Delayed-Quarantined (Susceptible/Recovered) (SEIDQR(S/I)), along with a hybrid quarantine strategy, which is then constructed and analysed using Stochastic Petri Nets and a Continuous Time Markov Chain. The analysis shows that the hybrid quarantine strategy is extremely effective in reducing the risk of propagating the worm. Through Model Checking, we gained insight into the functionality of compartmental models. The Model Checking results agree well with the simulation results, which fully supports the proposed framework.
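
    The SEIR dynamics analysed above with Stochastic Petri Nets and a CTMC can be sketched as a Gillespie-style simulation; the rates and population size below are illustrative assumptions, and the delayed/quarantined compartments of the proposed SEIDQR(S/I) model are omitted.

      import numpy as np

      rng = np.random.default_rng(1)

      # Gillespie simulation of a basic SEIR continuous-time Markov chain.
      S, E, I, R = 990, 0, 10, 0
      beta, sigma, gamma = 0.3, 0.2, 0.1        # exposure, incubation, recovery rates
      N = S + E + I + R
      t, history = 0.0, [(0.0, S, E, I, R)]

      while I + E > 0 and t < 200:
          rates = np.array([beta * S * I / N,   # S -> E (exposure)
                            sigma * E,          # E -> I (becomes infectious)
                            gamma * I])         # I -> R (recovery)
          total = rates.sum()
          if total == 0:
              break
          t += rng.exponential(1.0 / total)     # time to next transition
          event = rng.choice(3, p=rates / total)
          if event == 0:
              S, E = S - 1, E + 1
          elif event == 1:
              E, I = E - 1, I + 1
          else:
              I, R = I - 1, R + 1
          history.append((t, S, E, I, R))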

  20. Feature Selection and Classification of Electroencephalographic Signals: An Artificial Neural Network and Genetic Algorithm Based Approach.

    Science.gov (United States)

    Erguzel, Turker Tekin; Ozekes, Serhat; Tan, Oguz; Gultekin, Selahattin

    2015-10-01

    Feature selection is an important step in many pattern recognition systems aiming to overcome the so-called curse of dimensionality. In this study, an optimized classification method was tested in 147 patients with major depressive disorder (MDD) treated with repetitive transcranial magnetic stimulation (rTMS). The performance of the combination of a genetic algorithm (GA) and a back-propagation (BP) neural network (BPNN) was evaluated using 6-channel pre-rTMS electroencephalographic (EEG) patterns of theta and delta frequency bands. The GA was first used to eliminate the redundant and less discriminant features to maximize classification performance. The BPNN was then applied to test the performance of the feature subset. Finally, classification performance using the subset was evaluated using 6-fold cross-validation. Although the slow bands of the frontal electrodes are widely used to collect EEG data for patients with MDD and provide quite satisfactory classification results, the outcomes of the proposed approach indicate noticeably increased overall accuracy of 89.12% and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.904 using the reduced feature set. © EEG and Clinical Neuroscience Society (ECNS) 2014.
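
    A minimal sketch of the GA-plus-BPNN pipeline under stated assumptions: synthetic stand-ins for the EEG band features, scikit-learn's MLPClassifier as the back-propagation network, and a simple bit-mask GA whose fitness is 6-fold cross-validated accuracy.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(147, 12))            # e.g. 6 channels x 2 slow bands
      y = rng.integers(0, 2, 147)               # responder / non-responder labels

      def fitness(mask):
          if mask.sum() == 0:
              return 0.0
          clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
          return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=6).mean()

      # GA over binary feature masks: selection, uniform crossover, bit-flip mutation.
      pop = rng.integers(0, 2, size=(16, X.shape[1]))
      for _ in range(10):
          scores = np.array([fitness(m) for m in pop])
          parents = pop[np.argsort(scores)[-8:]]
          cross = rng.integers(0, 2, size=(8, X.shape[1])).astype(bool)
          children = np.where(cross, parents[rng.permutation(8)], parents)
          children ^= (rng.random(children.shape) < 0.05).astype(children.dtype)
          pop = np.vstack([parents, children])

      best_mask = pop[np.argmax([fitness(m) for m in pop])].astype(bool)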

  1. An Ionospheric Index Model based on Linear Regression and Neural Network Approaches

    Science.gov (United States)

    Tshisaphungo, Mpho; McKinnell, Lee-Anne; Bosco Habarulema, John

    2017-04-01

    The ionosphere is well known to reflect radio wave signals in the high frequency (HF) band due to the presence of electrons and ions within the region. To optimise the use of long-distance HF communications, it is important to understand the drivers of ionospheric storms and accurately predict the propagation conditions, especially during disturbed days. This paper presents the development of an ionospheric storm-time index over the South African region for the application of HF communication users. The model will provide a valuable tool to measure the complex ionospheric behaviour in an operational space weather monitoring and forecasting environment. The development of the ionospheric storm-time index is based on data from a single ionosonde station at Grahamstown (33.3°S, 26.5°E), South Africa. Critical frequency of the F2 layer (foF2) measurements for the period 1996-2014 were considered for this study. The model was developed based on linear regression and neural network approaches. In this talk, validation results for low, medium and high solar activity periods will be discussed to demonstrate the model's performance.

  2. A novel approach to parameter uncertainty analysis of hydrological models using neural networks

    Directory of Open Access Journals (Sweden)

    D. P. Solomatine

    2009-07-01

    Full Text Available In this study, a methodology has been developed to emulate a time consuming Monte Carlo (MC simulation by using an Artificial Neural Network (ANN for the assessment of model parametric uncertainty. First, MC simulation of a given process model is run. Then an ANN is trained to approximate the functional relationships between the input variables of the process model and the synthetic uncertainty descriptors estimated from the MC realizations. The trained ANN model encapsulates the underlying characteristics of the parameter uncertainty and can be used to predict uncertainty descriptors for the new data vectors. This approach was validated by comparing the uncertainty descriptors in the verification data set with those obtained by the MC simulation. The method is applied to estimate the parameter uncertainty of a lumped conceptual hydrological model, HBV, for the Brue catchment in the United Kingdom. The results are quite promising as the prediction intervals estimated by the ANN are reasonably accurate. The proposed techniques could be useful in real time applications when it is not practicable to run a large number of simulations for complex hydrological models and when the forecast lead time is very short.
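
    A minimal sketch of the emulation idea under stated assumptions: a toy function stands in for the hydrological model, Monte Carlo parameter samples produce per-input prediction-interval bounds, and an MLP is trained to map model inputs directly to those uncertainty descriptors.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      def toy_model(x, theta):
          # Stand-in for one run of the process model with one parameter sample.
          return theta[0] * x[..., 0] + theta[1] * np.sqrt(np.abs(x[..., 1]))

      X = rng.uniform(0, 10, size=(500, 2))                  # model input vectors
      thetas = rng.normal([1.0, 2.0], 0.3, size=(200, 2))    # MC parameter samples

      # Monte Carlo step: one output per (input, parameter sample) pair.
      mc = np.stack([toy_model(X, th) for th in thetas])     # shape (200, 500)
      lower, upper = np.percentile(mc, [5, 95], axis=0)      # uncertainty descriptors

      # ANN emulator: maps an input vector directly to its 90% prediction interval.
      emulator = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                              random_state=0).fit(X, np.column_stack([lower, upper]))
      lo_hat, up_hat = emulator.predict(np.array([[3.0, 7.0]]))[0]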

  3. A convolutional neural network approach to calibrating the rotation axis for X-ray computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa

    2017-01-24

    This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy in the evaluation of synthetic data with various noise ratios. It is further validated with experimental data of four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. The CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.

  4. A Neural Network Approach for Identifying Particle Pitch Angle Distributions in Van Allen Probes Data

    Science.gov (United States)

    Souza, V. M.; Vieira, L. E. A.; Medeiros, C.; Da Silva, L. A.; Alves, L. R.; Koga, D.; Sibeck, D. G.; Walsh, B. M.; Kanekal, S. G.; Jauer, P. R.

    2016-01-01

    Analysis of particle pitch angle distributions (PADs) has been used as a means to comprehend a multitude of different physical mechanisms that lead to flux variations in the Van Allen belts and also to particle precipitation into the upper atmosphere. In this work we developed a neural network-based data clustering methodology that automatically identifies distinct PAD types in an unsupervised way using particle flux data. One can promptly identify and locate three well-known PAD types in both time and radial distance, namely, 90° peaked, butterfly, and flattop distributions. In order to illustrate the applicability of our methodology, we used relativistic electron flux data from the whole month of November 2014, acquired from the Relativistic Electron-Proton Telescope instrument on board the Van Allen Probes, but it is emphasized that our approach can also be used with multiplatform spacecraft data. Our PAD classification results are in reasonably good agreement with those obtained by standard statistical fitting algorithms. The proposed methodology has potential use for monitoring the Van Allen belts.
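
    The unsupervised clustering step can be illustrated with a winner-take-all competitive network (a self-organizing map reduced to its winner update, with the neighbourhood function omitted); the synthetic pitch angle distributions and the three-node map below are assumptions for the sketch, not the authors' configuration.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy pitch angle distributions: flux versus 9 pitch-angle bins, normalized.
      angles = np.linspace(0, np.pi, 9)
      pancake = np.sin(angles) ** 4                                  # 90-degree peaked
      butterfly = np.sin(angles) ** 2 * (1 - 0.8 * np.sin(angles) ** 6)
      flattop = np.clip(np.sin(angles) * 2, 0, 1)
      X = np.vstack([f + rng.normal(0, 0.03, (50, 9))
                     for f in (pancake, butterfly, flattop)])

      # Minimal competitive network with 3 nodes (one per expected PAD type).
      W = rng.random((3, 9))
      for epoch in range(200):
          lr = 0.5 * (1 - epoch / 200)                               # decaying learning rate
          for x in rng.permutation(X):
              bmu = np.argmin(np.linalg.norm(W - x, axis=1))         # best matching unit
              W[bmu] += lr * (x - W[bmu])                            # move winner toward sample

      labels = np.array([np.argmin(np.linalg.norm(W - x, axis=1)) for x in X])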

  5. Sensorless speed estimation of an AC induction motor by using an artificial neural network approach

    Science.gov (United States)

    Alkhoraif, Abdulelah Ali

    Sensorless speed detection of an induction motor is an attractive area for researchers seeking to enhance the reliability of the system and to reduce the cost of the components. This paper presents a simple method of estimating rotational speed by utilizing an artificial neural network (ANN) fed by a set of stator current frequencies that contain saliency harmonics. This approach allows operators to detect the speed of induction motors while also providing reliability, low cost, and simplicity. First, the proposed method converts the stator current signals to the frequency domain and then applies a tracking algorithm to the stator current spectrum in order to detect frequency peaks. Second, the ANN is trained with the detected peaks; the training data must be very precise to provide an accurate rotor speed. Moreover, the desired output of the training is the speed, which is measured by a tachometer simultaneously with the stator current signal. The databases were collected at many different speeds from two different types of AC induction motors, wound rotor and squirrel cage. The networks were trained and tested so that, once the difference between the desired speed value and the ANN output reached the required accuracy, the system no longer needed the tachometer. The experimental results show that, with an optimal ANN design, the speed of the wound-rotor induction motor was estimated accurately, with a testing average error of 1 RPM. The proposed method did not succeed in predicting the rotor speed of the squirrel-cage induction motor as precisely; the smallest testing average error achieved was 5 RPM.
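
    A hedged sketch of the two stages described above, assuming a toy stator-current model: the current spectrum is computed, the strongest peaks are tracked, and an MLP regresses the rotor speed from the peak frequencies. The saliency-harmonic model and all numbers below are assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      fs, n = 10_000, 2 ** 14                   # sampling rate [Hz], samples per record

      def current_record(speed_rpm):
          # Toy stator current: 50 Hz supply plus a speed-dependent saliency harmonic.
          t = np.arange(n) / fs
          f_sal = 50 + 0.4 * speed_rpm / 60.0
          return (np.sin(2 * np.pi * 50 * t)
                  + 0.05 * np.sin(2 * np.pi * f_sal * t)
                  + rng.normal(0, 0.01, n))

      def peak_features(signal, k=3):
          spec = np.abs(np.fft.rfft(signal))
          freqs = np.fft.rfftfreq(n, 1 / fs)
          idx = np.argsort(spec)[-k:]           # k strongest spectral peaks
          return freqs[np.sort(idx)]

      speeds = rng.uniform(600, 1500, 200)      # training speeds [RPM], from the tachometer
      X = np.array([peak_features(current_record(s)) for s in speeds])
      ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, speeds)
      estimated_rpm = ann.predict(peak_features(current_record(1200.0)).reshape(1, -1))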

  6. Biological Petri Nets

    CERN Document Server

    Wingender, E

    2011-01-01

    It was suggested some years ago that Petri nets might be well suited to modeling metabolic networks, overcoming some of the limitations encountered by the use of systems employing ODEs (ordinary differential equations). Much work has been done since then which confirms this and demonstrates the usefulness of this concept for systems biology. Petri net technology is not only intuitively understood by scientists trained in the life sciences, it also has a robust mathematical foundation and provides the required degree of flexibility. As a result it appears to be a very promising approach to mode

  7. Citizen Management of Technology: A Science and Technology Studies approach to wireless networks and urban governance trough guifi.net

    Directory of Open Access Journals (Sweden)

    Yann Bona Beauvois

    2011-03-01

    Full Text Available Thesis presented at the Departament de Psicologia Social de la UAB by Yann Bona in December 2010. Directed by Dr. Joan Pujol Tarrés. This dissertation explores the many ways in which citizens aiming to manage technologies in the urban landscape relate to public administrations. To accomplish its task, it brings forward certain STS notions such as cosmopolitics, hybrid composition and technical democracy. On a general level, this thesis seeks an answer to Bruno Latour's concern with what it means to conceive the technical as political. We offer a set of conclusions based on what we choose to name a Sociotechnique of Public Policy. Our work relies on a case study focused on a free and open wireless network (located for the most part in Catalunya and called guifi.net) that emerged from the desire and will of civil society and which, to date, is the world's biggest free wireless network.

  8. Modeling of the nonlinearity in nano-displacement measuring system based on the neural network approaches

    Science.gov (United States)

    Olyaee, Saeed; Ebrahimpour, Reza; Hamedi, Samaneh; Jafarlou, Farzad M.

    2009-08-01

    Periodic nonlinearity is the main limitation on the accuracy of nano-displacement measurements in heterodyne interferometers. It is mainly produced by non-ideally polarized beams of the laser and imperfect alignment of the optical components. In this paper, we model the periodic nonlinearity resulting from non-orthogonality and ellipticity of the laser beam by using a combination of neural networks, namely the stacked generalization method and a mixture of experts. The ensemble neural networks used for nonlinearity modeling are compared with single neural networks such as multilayer perceptrons and radial basis function networks.
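
    As a sketch of the ensemble idea, scikit-learn's StackingRegressor can combine several MLP "experts" under a linear meta-learner (stacked generalization); the toy phase-versus-error data and network sizes are assumptions, and the mixture-of-experts variant mentioned above is not reproduced.

      import numpy as np
      from sklearn.ensemble import StackingRegressor
      from sklearn.linear_model import Ridge
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)

      # Toy data: interferometer phase (input) vs. periodic nonlinearity error (output).
      phase = rng.uniform(0, 2 * np.pi, (400, 1))
      error_nm = (2.0 * np.sin(phase[:, 0]) + 0.5 * np.sin(2 * phase[:, 0])
                  + rng.normal(0, 0.1, 400))

      # Stacked generalization: MLP experts of different sizes, linear meta-model on top.
      experts = [(f"mlp{i}", MLPRegressor(hidden_layer_sizes=(h,), max_iter=5000,
                                          random_state=i))
                 for i, h in enumerate((5, 10, 20))]
      stack = StackingRegressor(estimators=experts, final_estimator=Ridge())
      stack.fit(phase, error_nm)
      predicted_error = stack.predict(np.array([[1.0]]))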

  9. ANT Advanced Neural Tool

    Energy Technology Data Exchange (ETDEWEB)

    Labrador, I.; Carrasco, R.; Martinez, L.

    1996-07-01

    This paper provides a practical introduction to the use of Artificial Neural Networks. Artificial neural nets are often used as an alternative to the traditional symbolic manipulation and first-order logic used in Artificial Intelligence, owing to the high degree of difficulty of solving problems that cannot be handled by programmers using algorithmic strategies. As a particular case of neural net, a Multilayer Perceptron developed in the C language on the OS9 real-time operating system is presented. A detailed description of the program structure and practical use is included. Finally, several application examples that have been treated with the tool are presented, along with some suggestions about hardware implementations. (Author) 15 refs.

  10. Artificial neural network approach to predicting engine-out emissions and performance parameters of a turbo charged diesel engine

    Directory of Open Access Journals (Sweden)

    Özener Orkun

    2013-01-01

    Full Text Available This study details the artificial neural network (ANN) modelling of a diesel engine to predict the torque, power, brake-specific fuel consumption and pollutant emissions, including carbon dioxide, carbon monoxide, nitrogen oxides, total hydrocarbons and filter smoke number. To collect data for training and testing the neural network, experiments were performed on a four-cylinder, four-stroke compression ignition engine. A total of 108 test points were run on a dynamometer. For the first part of this work, a parameter packet was used as the inputs for the neural network, and satisfactory regression was found with the outputs (over ~95%, excluding total hydrocarbons). The second stage of this work addressed developing new networks with additional inputs for predicting the total hydrocarbons, and the regression was raised from 75% to 90%. This study shows that the ANN approach can be used for accurately predicting characteristic values of an internal combustion engine and that the neural network performance can be increased using additional related input data.

  11. A deep convolutional neural network approach to single-particle recognition in cryo-electron microscopy.

    Science.gov (United States)

    Zhu, Yanan; Ouyang, Qi; Mao, Youdong

    2017-07-21

    Single-particle cryo-electron microscopy (cryo-EM) has become a mainstream tool for the structural determination of biological macromolecular complexes. However, high-resolution cryo-EM reconstruction often requires hundreds of thousands of single-particle images. Particle extraction from experimental micrographs thus can be laborious and presents a major practical bottleneck in cryo-EM structural determination. Existing computational methods for particle picking often use low-resolution templates for particle matching, making them susceptible to reference-dependent bias. It is critical to develop a highly efficient template-free method for the automatic recognition of particle images from cryo-EM micrographs. We developed a deep learning-based algorithmic framework, DeepEM, for single-particle recognition from noisy cryo-EM micrographs, enabling automated particle picking, selection and verification in an integrated fashion. The kernel of DeepEM is built upon a convolutional neural network (CNN) composed of eight layers, which can be recursively trained to be highly "knowledgeable". Our approach exhibits an improved performance and accuracy when tested on the standard KLH dataset. Application of DeepEM to several challenging experimental cryo-EM datasets demonstrated its ability to avoid the selection of unwanted particles and non-particles even when true particles contain fewer features. The DeepEM methodology, derived from a deep CNN, allows automated particle extraction from raw cryo-EM micrographs in the absence of a template. It demonstrates an improved performance, objectivity and accuracy. Application of this novel method is expected to free the labor involved in single-particle verification, significantly improving the efficiency of cryo-EM data processing.

  12. Towards a Usability and Error "Safety Net": A Multi-Phased Multi-Method Approach to Ensuring System Usability and Safety.

    Science.gov (United States)

    Kushniruk, Andre; Senathirajah, Yalini; Borycki, Elizabeth

    2017-01-01

    The usability and safety of health information systems have become major issues in the design and implementation of useful healthcare IT. In this paper we describe a multi-phased multi-method approach to integrating usability engineering methods into system testing to ensure both usability and safety of healthcare IT upon widespread deployment. The approach involves usability testing followed by clinical simulation (conducted in-situ) and "near-live" recording of user interactions with systems. At key stages in this process, usability problems are identified and rectified forming a usability and technology-induced error "safety net" that catches different types of usability and safety problems prior to releasing systems widely in healthcare settings.

  13. Human avoidance and approach learning: evidence for overlapping neural systems and experiential avoidance modulation of avoidance neurocircuitry.

    Science.gov (United States)

    Schlund, Michael W; Magee, Sandy; Hudgins, Caleb D

    2011-12-01

    Adaptive functioning is thought to reflect a balance between approach and avoidance neural systems with imbalances often producing pathological forms of avoidance. Yet little evidence is available in healthy adults demonstrating a balance between approach and avoidance neural systems and modulation in avoidance neurocircuitry by vulnerability factors for avoidance. Consequently, we used functional magnetic resonance imaging (fMRI) to compare changes in brain activation associated with human avoidance and approach learning and modulation of avoidance neurocircuitry by experiential avoidance. fMRI tracked trial-by-trial increases in activation while adults learned through trial and error an avoidance response that prevented money loss and an approach response that produced money gain. Avoidance and approach cues elicited similar experience-dependent increases in activation in a fronto-limbic-striatal network. Positive and negative reinforcing outcomes (i.e., money gain and avoidance of loss) also elicited similar increases in activation in frontal and striatal regions. Finally, increased experiential avoidance and self-punishment coping was associated with decreased activation in medial/superior frontal regions, anterior cingulate, amygdala and hippocampus. These findings suggest avoidance and approach learning recruit a similar fronto-limbic-striatal network in healthy adults. Increased experiential avoidance also appears to be associated with reduced frontal and limbic reactivity in avoidance, establishing an important link between maladaptive avoidance coping and altered responses in avoidance neurocircuitry. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. A Flexible Terminal Approach to Sampled-Data Exponentially Synchronization of Markovian Neural Networks With Time-Varying Delayed Signals.

    Science.gov (United States)

    Cheng, Jun; Park, Ju H; Karimi, Hamid Reza; Shen, Hao

    2017-08-02

    This paper investigates the problem of sampled-data (SD) exponential synchronization for a class of Markovian neural networks with time-varying delayed signals. Based on a tunable parameter and a convex combination computational method, a new approach named the flexible terminal approach is proposed to reduce the conservatism of delay-dependent synchronization criteria. SD subject to a stochastic sampling period is introduced to reflect phenomena encountered in practice. Novel exponential synchronization criteria are derived by utilizing a uniform Lyapunov-Krasovskii functional and a suitable integral inequality. Finally, numerical examples are provided to show the usefulness and advantages of the proposed design procedure.

  15. Neural network technologies

    Science.gov (United States)

    Villarreal, James A.

    1991-01-01

    A whole new arena of computer technologies is now beginning to form. Still in its infancy, neural network technology is a biologically inspired methodology which draws on nature's own cognitive processes. The Software Technology Branch has provided a software tool, Neural Execution and Training System (NETS), to industry, government, and academia to facilitate and expedite the use of this technology. NETS is written in the C programming language and can be executed on a variety of machines. Once a network has been debugged, NETS can produce a C source code which implements the network. This code can then be incorporated into other software systems. Described here are various software projects currently under development with NETS and the anticipated future enhancements to NETS and the technology.

  16. Bayesian neural network approach for determining the risk of re-intervention after endovascular aortic aneurysm repair.

    Science.gov (United States)

    Attallah, Omneya; Ma, Xianghong

    2014-09-01

    This article proposes a Bayesian neural network approach to determine the risk of re-intervention after endovascular aortic aneurysm repair surgery. The aim of the proposed technique is to determine which patients have a high chance of re-intervention (high-risk patients) and which do not (low-risk patients) within 5 years of the surgery. Two censored datasets relating to the clinical conditions of aortic aneurysms were collected from two different vascular centers in the United Kingdom. A Bayesian network was first employed to solve the censoring issue in the datasets. Then, a back propagation neural network model was built using the uncensored data of the first center to predict re-intervention at the second center and classify the patients into high-risk and low-risk groups. Kaplan-Meier curves were plotted for each group of patients separately to show whether there is a significant difference between the two risk groups. Finally, the logrank test was applied to determine whether the neural network model was capable of predicting and distinguishing between the two risk groups. The results show that the Bayesian network used for uncensoring the data improved the performance of the neural networks that were built for the two centers separately. More importantly, the neural network that was trained with uncensored data of the first center was able to predict and discriminate between groups at low risk and high risk of re-intervention 5 years after endovascular aortic aneurysm surgery at center 2 (p = 0.0037 in the logrank test). © IMechE 2014.
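
    A simplified sketch of the risk-grouping and survival-comparison steps, assuming scikit-learn for the network and the lifelines package for the Kaplan-Meier and logrank computations (the lifelines calls are standard but should be checked against the installed version); the Bayesian-network uncensoring step of the paper is not reproduced, and toy data replace the two-centre datasets.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from lifelines import KaplanMeierFitter
      from lifelines.statistics import logrank_test

      rng = np.random.default_rng(0)

      # Toy clinical features, follow-up times (years) and re-intervention indicators.
      X = rng.normal(size=(300, 6))
      time_to_event = rng.exponential(6, 300)
      event = (time_to_event < 5).astype(int)      # re-intervention within 5 years

      # Network trained (here on one toy cohort, for brevity) to flag high-risk patients.
      clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=3000, random_state=0)
      clf.fit(X, event)
      high_risk = clf.predict_proba(X)[:, 1] > 0.5

      # Kaplan-Meier curve for one predicted risk group, then a logrank comparison.
      kmf = KaplanMeierFitter()
      kmf.fit(time_to_event[high_risk], event_observed=event[high_risk], label="high risk")
      result = logrank_test(time_to_event[high_risk], time_to_event[~high_risk],
                            event_observed_A=event[high_risk],
                            event_observed_B=event[~high_risk])
      print(result.p_value)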

  17. A neural network approach to fault detection in spacecraft attitude determination and control systems

    Science.gov (United States)

    Schreiner, John N.

    This thesis proposes a method of performing fault detection and isolation in spacecraft attitude determination and control systems. The proposed method works by deploying a trained neural network to analyze a set of residuals that are defined such that they encompass the attitude control, guidance, and attitude determination subsystems. Eight neural networks were trained using either the resilient backpropagation, Levenberg-Marquardt, or Levenberg-Marquardt with Bayesian regularization training algorithms. The results of each of the neural networks were analyzed to determine the accuracy of the networks with respect to isolating the faulty component or faulty subsystem within the ADCS. The performance of the proposed neural network-based fault detection and isolation method was compared and contrasted with other ADCS FDI methods. The results obtained via simulation showed that the best neural networks employing this method successfully detected the presence of a fault 79% of the time. The faulty subsystem was successfully isolated 75% of the time and the faulty components within the faulty subsystem were isolated 37% of the time.

  18. Artificial Neural Network approach to develop unique Classification and Raga identification tools for Pattern Recognition in Carnatic Music

    Science.gov (United States)

    Srimani, P. K.; Parimala, Y. G.

    2011-12-01

    A unique approach has been developed to study patterns in the ragas of Carnatic classical music based on artificial neural networks. Ragas in Carnatic music, which found their roots in the Vedic period, have grown on a scientific foundation over thousands of years. However, owing to its vastness and complexity, it has always been a challenge for scientists and musicologists to give an all-encompassing perspective, both qualitatively and quantitatively. Cognition, comprehension and perception of ragas in Indian classical music have always been the subject of intensive research, are highly intriguing, and many of their facets have hitherto not been unravelled. This paper is an attempt to view the melakartha ragas from a cognitive perspective using an artificial neural network-based approach, which has given rise to very interesting results. The 72 ragas of the melakartha system were defined through the combination of frequencies occurring in each of them. The data sets were used to train several neural networks. 100% accurate pattern recognition and classification was obtained using linear regression, TLRN, MLP and RBF networks. The performance of the different network topologies was compared by varying various network parameters. Linear regression was found to be the best performing network.

  19. Multi-Feature Segmentation for High-Resolution Polarimetric SAR Data Based on Fractal Net Evolution Approach

    Directory of Open Access Journals (Sweden)

    Qihao Chen

    2017-06-01

    Full Text Available Segmentation techniques play an important role in understanding high-resolution polarimetric synthetic aperture radar (PolSAR images. PolSAR image segmentation is widely used as a preprocessing step for subsequent classification, scene interpretation and extraction of surface parameters. However, speckle noise and rich spatial features of heterogeneous regions lead to blurred boundaries of high-resolution PolSAR image segmentation. A novel segmentation algorithm is proposed in this study in order to address the problem and to obtain accurate and precise segmentation results. This method integrates statistical features into a fractal net evolution algorithm (FNEA framework, and incorporates polarimetric features into a simple linear iterative clustering (SLIC superpixel generation algorithm. First, spectral heterogeneity in the traditional FNEA is substituted by the G0 distribution statistical heterogeneity in order to combine the shape and statistical features of PolSAR data. The statistical heterogeneity between two adjacent image objects is measured using a log likelihood function. Second, a modified SLIC algorithm is utilized to generate compact superpixels as the initial samples for the G0 statistical model, which substitutes the polarimetric distance of the Pauli RGB composition for the CIELAB color distance. The segmentation results were obtained by weighting the G0 statistical feature and the shape features, based on the FNEA framework. The validity and applicability of the proposed method was verified with extensive experiments on simulated data and three real-world high-resolution PolSAR images from airborne multi-look ESAR, spaceborne single-look RADARSAT-2, and multi-look TerraSAR-X data sets. The experimental results indicate that the proposed method obtains more accurate and precise segmentation results than the other methods for high-resolution PolSAR images.

  20. Centralized Data-Sampling Approach for Global O(t-α) Synchronization of Fractional-Order Neural Networks with Time Delays

    Directory of Open Access Journals (Sweden)

    Jin-E Zhang

    2017-01-01

    Full Text Available In this paper, the global O(t-α) synchronization problem is investigated for a class of fractional-order neural networks with time delays. Taking into account both better control performance and energy saving, we make the first attempt to introduce a centralized data-sampling approach to characterize the O(t-α) synchronization design strategy. A sufficient criterion is given under which the drive-response-based coupled neural networks can achieve global O(t-α) synchronization. It is worth noting that, by using the centralized data-sampling principle, the fractional-order Lyapunov-like technique, and the fractional-order Leibniz rule, the designed controller performs very well. Two numerical examples are presented to illustrate the efficiency of the proposed centralized data-sampling scheme.
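
    For reference, global O(t-α) synchronization is usually taken to mean an algebraic decay bound on the drive-response error; the formulation below is the conventional one and is stated here as an assumption rather than quoted from the paper.

      \[
        \|x(t) - y(t)\| \le \frac{M}{t^{\alpha}} \quad \text{for all } t \ge T,
      \]
      for some constants $M > 0$ and $T > 0$, i.e. the synchronization error decays at least as fast as $t^{-\alpha}$, with $0 < \alpha < 1$ the fractional order of the network.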

  1. Use of Time-Frequency Analysis and Neural Networks for Mode Identification in a Wireless Software-Defined Radio Approach

    Directory of Open Access Journals (Sweden)

    Matteo Gandetto

    2004-09-01

    Full Text Available The use of time-frequency distributions is proposed as a nonlinear signal processing technique that is combined with a pattern recognition approach to identify superimposed transmission modes in a reconfigurable wireless terminal based on software-defined radio techniques. In particular, a software-defined radio receiver is described aiming at the identification of two coexistent communication modes: frequency hopping code division multiple access and direct sequence code division multiple access. As a case study, two standards, based on the previous modes and operating in the same band (industrial, scientific, and medical, are considered: IEEE WLAN 802.11b (direct sequence and Bluetooth (frequency hopping. Neural classifiers are used to obtain identification results. A comparison between two different neural classifiers is made in terms of relative error frequency.
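
    A minimal sketch of feeding time-frequency features to a neural classifier, using scipy's spectrogram as a stand-in for the time-frequency distributions of the paper; the synthetic direct-sequence-like and frequency-hopping-like signals and the crude per-band feature summary are assumptions.

      import numpy as np
      from scipy.signal import spectrogram
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      fs, n = 1e6, 4096

      def synth(mode):
          t = np.arange(n) / fs
          if mode == "fh":                       # tone hopping every 512 samples
              hops = rng.uniform(1e4, 4e5, n // 512).repeat(512)
              return np.sin(2 * np.pi * hops * t) + rng.normal(0, 0.1, n)
          chips = rng.choice([-1, 1], n)         # direct-sequence-like wideband signal
          return chips * np.sin(2 * np.pi * 1e5 * t) + rng.normal(0, 0.1, n)

      def tf_features(x):
          _, _, Sxx = spectrogram(x, fs=fs, nperseg=256)
          return Sxx.mean(axis=1) / Sxx.sum()    # crude per-band energy profile

      modes = ["fh", "ds"] * 100
      X = np.array([tf_features(synth(m)) for m in modes])
      y = np.array([m == "fh" for m in modes], dtype=int)
      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000, random_state=0).fit(X, y)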

  2. Modeling of farnesyltransferase inhibition by some thiol and non-thiol peptidomimetic inhibitors using genetic neural networks and RDF approaches.

    Science.gov (United States)

    González, Maykel Pérez; Caballero, Julio; Tundidor-Camba, Alain; Helguera, Aliuska Morales; Fernández, Michael

    2006-01-01

    Inhibition of the farnesyltransferase (FT) enzyme by a set of 78 thiol and non-thiol peptidomimetic inhibitors was successfully modeled by a genetic neural network (GNN) approach, using radial distribution function descriptors. A linear model was unable to successfully fit the whole data set; however, the optimum Bayesian regularized neural network model described about 87% of the inhibitory activity variance, with relevant predictive power measured by q2 values of leave-one-out and leave-group-out cross-validations of about 0.7. According to their activity levels, thiol and non-thiol inhibitors were well distributed in a topological map built with the inputs of the optimum non-linear predictor. Furthermore, descriptors in the GNN model suggested a strong dependence of FT inhibition on molecular shape and size rather than on the electronegativity or polarizability characteristics of the studied compounds.

  3. A new approach to the automatic identification of organism evolution using neural networks.

    Science.gov (United States)

    Kasperski, Andrzej; Kasperska, Renata

    2016-01-01

    Automatic identification of organism evolution still remains a challenging task, which is especially exciting when the evolution of humans is considered. The main aim of this work is to present a new idea that allows organism evolution analysis using neural networks. Here we show that it is possible to identify the evolution of any organism in a fully automatic way using the designed EvolutionXXI program, which contains an implemented neural network. The neural network has been trained using cytochrome b sequences of selected organisms. Then, analyses were carried out for various exemplary organisms in order to demonstrate the capabilities of the EvolutionXXI program. It is shown that the presented idea allows supporting existing hypotheses concerning evolutionary relationships between selected organisms, among others Sirenia and elephants, hippopotami and whales, scorpions and spiders, and dolphins and whales. Moreover, primate (including human), tree shrew and yeast evolution has been reconstructed. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. A cross-disciplinary approach to understanding neural stem cells in development and disease.

    Science.gov (United States)

    Henrique, Domingos; Bally-Cuif, Laure

    2010-06-01

    The Company of Biologists recently launched a new series of workshops aimed at bringing together scientists with different backgrounds to discuss cutting edge research in emerging and cross-disciplinary areas of biology. The first workshop was held at Wilton Park, Sussex, UK, and the chosen theme was 'Neural Stem Cells in Development and Disease', which is indeed a hot topic, not only because of the potential use of neural stem cells in cell replacement therapies to treat neurodegenerative diseases, but also because alterations in their behaviour can, in certain cases, lie at the origin of brain tumours and other diseases.

  5. Relation of net portal flux of nitrogen compounds with dietary characteristics in ruminants: a meta-analysis approach.

    Science.gov (United States)

    Martineau, R; Sauvant, D; Ouellet, D R; Côrtes, C; Vernet, J; Ortigues-Marty, I; Lapierre, H

    2011-06-01

    Decrease of N intake (NI) with the aim of increasing efficiency of N utilization and decreasing the negative environmental effects of animal production requires assessment of the forms in which N is absorbed. A meta-analysis was conducted on 68 publications (90 experiments and 215 treatments) to study the effect of NI on net portal appearance (NPA) of nitrogenous nutrients [amino acids (AA), ammonia, and urea] in ruminants. In addition, the effect of several dietary energy and protein factors on this relationship was investigated. These factors were: dry matter intake; proportion of concentrate; diet concentrations and intakes of nonfiber carbohydrates and neutral detergent fiber (NDF); diet concentrations of total digestible nutrients (TDN) and crude protein; rumen-degradable protein and rumen-undegradable protein, as percent dry matter or percent crude protein. The effect of species and physiological stage was also investigated. Within-experiment analyses revealed that the NPA of AA-N and ammonia-N increased linearly, whereas the NPA of urea-N decreased (or recycling of urea-N increased) linearly with NI. Besides NI, many significant covariates could be introduced in each NPA model. However, only TDN and neutral detergent fiber intake (NDFi) were common significant covariates of NI in each NPA model. In this database, ruminants converted 60% of incremental NI into NPA of AA-N with no species effect on that slope. However, at similar NI, TDN, and NDFi, sheep absorbed more AA-N than did cattle and dairy cows. On the other hand, species tended to affect the slope of the relationship between NPA of ammonia-N and NI, which varied from 0.19 for the sheep to 0.38 for dairy cows. On average, the equivalent of 11% of incremental NI was recycled as urea-N to the gut through the portal-drained viscera, which excludes salivary contribution, and no species difference was detected. Overall, at similar TDN and NDFi, sheep and cattle increased their NPA of AA-N relative to NI

  6. Energy Behavior Change and Army Net Zero Energy; Gaps in the Army’s Approach to Changing Energy Behavior

    Science.gov (United States)

    2014-06-13

    messenger approach provides only self-reinforcing information. Related is the eighth problem, which is human nature that supports complacency by only... Sustainability, and energy conservation programs. For example, the Army National Guard maintains a sustainability Facebook page, as does the Assistant...

  7. An optimal method for the computation of the parameter R_s of the net emission coefficient approximation approach for determining the electrical and thermal characteristics of plasma arcs

    Science.gov (United States)

    Abdo, Youssef; Rohani, Vandad; Fulcheri, Laurent

    2017-11-01

    Large-scale industrial plasma torches and processes primarily use high-current electric arcs. Therefore, their basic design must inevitably account for radiative transfer, which becomes the prevailing heat loss mechanism at high currents. This heavily increases the complexity of the governing equations, and many approximate approaches have been proposed. The present work relies on the method of the approximate average net emission coefficient (NEC), using the isothermal sphere approximation with a radius R_s, to solve the Elenbaas-Heller equation semi-analytically, and compares it with exact calculations obtained using an iterative method. To our knowledge, no study has yet provided a method to determine the most accurate value of R_s. In this paper, we present an optimal method for determining the value of R_s that leads to the best agreement between the approximate and the exact methods. As a result, the complete electric characteristic has been obtained for hydrogen at 1 bar in a detailed case study.

  8. A Translational Approach to Vocalization Deficits and Neural Recovery after Behavioral Treatment in Parkinson Disease

    Science.gov (United States)

    Ciucci, Michelle R.; Vinney, Lisa; Wahoske, Emerald J.; Connor, Nadine P.

    2010-01-01

    Parkinson disease is characterized by a complex neuropathological profile that primarily affects dopaminergic neural pathways in the basal ganglia, including pathways that modulate cranial sensorimotor functions such as swallowing, voice and speech. Prior work from our lab has shown that the rat model of unilateral 6-hydroxydopamine infusion to…

  9. A dynamic programming approach to missing data estimation using neural networks

    CSIR Research Space (South Africa)

    Nelwamondo, FV

    2013-01-01

    Full Text Available This paper develops and presents a novel technique for missing data estimation using a combination of dynamic programming, neural networks and genetic algorithms (GA) on suitable subsets of the input data. The method proposed here is well suited...

  10. Neural Approach in Multi-Agent Routing for Static Telecommunication Networks

    OpenAIRE

    Timofeev, Adil; Syrtsev, Alexey

    2003-01-01

    The problem of multi-agent routing in static telecommunication networks with fixed configuration is considered. The problem is formulated in two ways: for centralized routing schema with the coordinator-agent (global routing) and for distributed routing schema with independent agents (local routing). For both schemas appropriate Hopfield neural networks (HNN) are constructed.

  11. Forecasting financial time series using a low complexity recurrent neural network and evolutionary learning approach

    Directory of Open Access Journals (Sweden)

    Ajit Kumar Rout

    2017-10-01

    Full Text Available The paper presents a low-complexity recurrent Functional Link Artificial Neural Network for predicting financial time series data, such as stock market indices, over time frames varying from 1 day ahead to 1 month ahead. Although different types of basis functions have been used in low-complexity neural networks for stock market prediction before, a comparative study is needed to choose the optimal combination of these for a reasonably accurate forecast. Further, several evolutionary learning methods, such as Particle Swarm Optimization (PSO) and a modified version of its new variant (HMRPSO), and Differential Evolution (DE), are adopted here to find the optimal weights for the recurrent computationally efficient functional link neural network (RCEFLANN) using a combination of linear and hyperbolic tangent basis functions. The performance of the recurrent computationally efficient FLANN model is compared with that of low-complexity neural networks using Trigonometric, Chebyshev, Laguerre, Legendre, and hyperbolic tangent basis functions in predicting stock prices from Bombay Stock Exchange data and Standard & Poor's 500 data sets using different evolutionary methods. The results clearly reveal that the recurrent FLANN model trained with DE outperforms all other similarly trained FLANN models.
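
    A minimal FLANN sketch under stated assumptions: each lagged input is expanded with trigonometric basis functions and a single linear output layer is trained with plain LMS; the recurrent feedback and the PSO/HMRPSO/DE training of the paper are omitted.

      import numpy as np

      rng = np.random.default_rng(0)

      def flann_expand(x, order=2):
          # Trigonometric functional expansion of each lagged input value.
          feats = [x]
          for k in range(1, order + 1):
              feats += [np.sin(k * np.pi * x), np.cos(k * np.pi * x)]
          return np.concatenate(feats, axis=-1)

      # Toy price series normalized to [0, 1]; inputs are 5 lagged values.
      prices = np.cumsum(rng.normal(0, 1, 600))
      prices = (prices - prices.min()) / (prices.max() - prices.min())
      lags = 5
      X = np.stack([prices[i:i + lags] for i in range(len(prices) - lags)])
      y = prices[lags:]

      Phi = flann_expand(X)                      # (samples, lags * (1 + 2 * order))
      w = np.zeros(Phi.shape[1])
      for _ in range(500):                       # plain LMS training of the linear layer
          err = y - Phi @ w
          w += 0.01 * Phi.T @ err / len(y)

      next_value = flann_expand(prices[-lags:].reshape(1, -1)) @ w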

  12. Statistical Classification for Cognitive Diagnostic Assessment: An Artificial Neural Network Approach

    Science.gov (United States)

    Cui, Ying; Gierl, Mark; Guo, Qi

    2016-01-01

    The purpose of the current investigation was to describe how the artificial neural networks (ANNs) can be used to interpret student performance on cognitive diagnostic assessments (CDAs) and evaluate the performances of ANNs using simulation results. CDAs are designed to measure student performance on problem-solving tasks and provide useful…

  13. Mechanisms of Developmental Regression in Autism and the Broader Phenotype: A Neural Network Modeling Approach

    Science.gov (United States)

    Thomas, Michael S. C.; Knowland, Victoria C. P.; Karmiloff-Smith, Annette

    2011-01-01

    Loss of previously established behaviors in early childhood constitutes a markedly atypical developmental trajectory. It is found almost uniquely in autism and its cause is currently unknown (Baird et al., 2008). We present an artificial neural network model of developmental regression, exploring the hypothesis that regression is caused by…

  14. Artificial Neural Networks: A New Approach for Predicting Application Behavior. AIR 2001 Annual Forum Paper.

    Science.gov (United States)

    Gonzalez, Julie M. Byers; DesJardins, Stephen L.

    This paper examines how predictive modeling can be used to study application behavior. A relatively new technique, artificial neural networks (ANNs), was applied to help predict which students were likely to get into a large Research I university. Data were obtained from a university in Iowa. Two cohorts were used, each containing approximately…

  15. Process optimization of gravure printed light-emitting polymer layers by a neural network approach

    NARCIS (Netherlands)

    Michels, J.J.; Winter, S.H.P.M. de; Symonds, L.H.G.

    2009-01-01

    We demonstrate that artificial neural network modeling is a viable tool to predict the processing dependence of gravure printed light-emitting polymer layers for flexible OLED lighting applications. The (local) thickness of gravure printed light-emitting polymer (LEP) layers was analyzed using

  16. A Drone Remote Sensing for Virtual Reality Simulation System for Forest Fires: Semantic Neural Network Approach

    Science.gov (United States)

    Narasimha Rao, Gudikandhula; Jagadeeswara Rao, Peddada; Duvvuru, Rajesh

    2016-09-01

    Wild fires have a significant impact on the atmosphere and on lives. Predicting the exact fire area in a forest can help the fire management team, with drones used as robots. Drones are flexible, inexpensive, elevated-motion remote sensing platforms that are important for filling substantial data gaps and supplementing the capabilities of manned aircraft and satellite remote sensing systems. In addition, powerful computational tools are essential for predicting the burned area during a forest fire. The purpose of this study is to build a smart system based on semantic neural networking for the forecast of burned areas. A virtual reality simulator is used to support the training of fire fighters and other users in saving the surrounding wildlife, using a method called the Semantic Neural Network System (SNNS). Semantics are valuable initially to obtain an enhanced representation of the burned area prediction and a better adaptation of the simulation scenario to the users. In particular, results obtained with geometric semantic neural networking are extensively superior to other methods. This study suggests that deeper investigation of neural networking in the field of forest fire prediction could be productive.

  17. Two-Stage Approach to Image Classification by Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Ososkov Gennady

    2018-01-01

    Full Text Available The paper demonstrates the advantages of deep learning networks over ordinary neural networks in their comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the neural networks that are then compared as classifiers. The main effort in working with deep learning networks is spent on the quite painstaking work of optimizing the structures of those networks and their components, such as activation functions and weights, as well as the procedures for minimizing their loss function, in order to improve their performance and speed up their learning time. It is also shown that deep autoencoders develop a remarkable ability to denoise images after being specially trained. Convolutional Neural Networks are also used to solve a topical problem of protein genetics, using the example of durum wheat classification. The results of our comparative study demonstrate the undoubted advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPU and cloud services to speed up the calculations.

  18. Multidisciplinary team approach to improved chronic care management for diabetic patients in an urban safety net ambulatory care clinic.

    Science.gov (United States)

    Tapp, Hazel; Phillips, Shay E; Waxman, Dael; Alexander, Matthew; Brown, Rhett; Hall, Mary

    2012-01-01

    Since the care of patients with multiple chronic diseases such as diabetes and depression accounts for the majority of health care costs, effective team approaches to managing such complex care in primary care are needed, particularly since psychosocial and physical disorders coexist. Uncontrolled diabetes is a leading health risk for morbidity, disability and premature mortality, with between 18-31% of patients also having undiagnosed or undertreated depression. Here we describe a team-driven approach that initially focused on patients with poorly controlled diabetes (A1c > 9) and took place at a family medicine office. The team included: resident and faculty physicians, a pharmacist, social worker, nurses, behavioral medicine interns, office scheduler, and an information technologist. The team developed immediate integrative care for diabetic patients during routine office visits.

  19. Reconciling estimates of the contemporary North American carbon balance among terrestrial biosphere models, atmospheric inversions, and a new approach for estimating net ecosystem exchange from inventory-based data

    Science.gov (United States)

    Daniel J. Hayes; David P. Turner; Graham Stinson; A. David Mcguire; Yaxing Wei; Tristram O. West; Linda S. Heath; Bernardus Dejong; Brian G. McConkey; Richard A. Birdsey; Werner A. Kurz; Andrew R. Jacobson; Deborah N. Huntzinger; Yude Pan; W. Mac Post; Robert B. Cook

    2012-01-01

    We develop an approach for estimating net ecosystem exchange (NEE) using inventory-based information over North America (NA) for a recent 7-year period (ca. 2000-2006). The approach notably retains information on the spatial distribution of NEE, or the vertical exchange between land and atmosphere of all non-fossil fuel sources and sinks of CO2,...

  20. Robust MPC for a non-linear system - a neural network approach

    Science.gov (United States)

    Luzar, Marcel; Witczak, Marcin

    2014-12-01

    The aim of the paper is to design a robust actuator fault-tolerant control for a non-linear discrete-time system. The considered system is described by a Linear Parameter-Varying (LPV) model obtained with a recurrent neural network. The proposed solution starts with a discrete-time quasi-LPV system identification using an artificial neural network. Subsequently, a robust controller is proposed which does not take into account the actuator saturation level and deals with the previously estimated faults. To check whether the compensation problem is feasible, the robust invariant set is employed, which does take into account the actuator saturation level. When the current state does not belong to the set, predictive control is performed in order to make the set larger. This makes it possible to increase the domain of attraction, which makes the proposed methodology an efficient solution for fault-tolerant control. The last part of the paper presents experimental results regarding wind turbines.

  1. Pattern Recognition and Classification of Fatal Traffic Accidents in Israel A Neural Network Approach

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo; Gitelman, Victoria; Bekhor, Shlomo

    2011-01-01

    This article provides a broad picture of fatal traffic accidents in Israel to answer an increasing need of addressing compelling problems, designing preventive measures, and targeting specific population groups with the objective of reducing the number of traffic fatalities. The analysis focuses...... on 1,793 fatal traffic accidents that occurred during the period between 2003 and 2006 and applies Kohonen and feed-forward back-propagation neural networks with the objective of extracting typical patterns and relevant factors from the data. Kohonen neural networks reveal five compelling accident patterns......: (1) single-vehicle accidents of young drivers, (2) multiple-vehicle accidents between young drivers, (3) accidents involving motorcyclists or cyclists, (4) accidents where elderly pedestrians crossed in urban areas, and (5) accidents where children and teenagers cross major roads in small urban areas...

  2. A comparative performance evaluation of neural network based approach for sentiment classification of online reviews

    OpenAIRE

    Vinodhini, G.; Chandrasekaran, R.M.

    2016-01-01

    The aim of sentiment classification is to efficiently identify the emotions expressed in the form of text messages. Machine learning methods for sentiment classification have been extensively studied, due to their predominant classification performance. Recent studies suggest that ensemble based machine learning methods provide better performance in classification. Artificial neural networks (ANNs) are rarely being investigated in the literature of sentiment classification. This paper compare...

  3. Hybrid intelligence systems and artificial neural network (ANN) approach for modeling of surface roughness in drilling

    Directory of Open Access Journals (Sweden)

    Ch. Sanjay

    2014-12-01

    Full Text Available In machining processes, the drilling operation is a material removal process that has been widely used in manufacturing since the industrial revolution. The useful life of the cutting tool and its operating conditions largely control the economics of machining operations. Drilling is the most frequently performed material removal process and is used as a preliminary step for many operations, such as reaming, tapping, and boring. Drill wear has an adverse effect on the surface finish and dimensional accuracy of the work piece. The surface finish of a machined part is one of the most important quality characteristics in manufacturing industries. The primary objective of this research is the prediction of suitable parameters for surface roughness in drilling. Cutting speed, cutting force, and machining time were given as inputs to the adaptive fuzzy neural network and neuro-fuzzy analysis for estimating the values of surface roughness using 2, 3, 4, and 5 membership functions. The best structures were selected based on the minimum of the summation of squares between the actual values and the values estimated by the artificial neural fuzzy inference system (ANFIS) and neuro-fuzzy systems. For the artificial neural network (ANN) analysis, the number of neurons was selected from 1, 2, 3, ..., 20. The learning rate was selected as 0.5 and a smoothing factor of 0.5 was used. The inputs were selected as cutting speed, feed, machining time, and thrust force. The best structures of the neural networks were selected based on the criterion of the minimum of the summation of squares with the actual value of surface roughness. Drilling experiments with a 10 mm drill size were performed at two cutting speeds and feeds. A comparative analysis has been done between the actual values and the estimated values obtained by ANFIS, neuro-fuzzy, and ANN analysis.

  4. Credit risk assessment model for Jordanian commercial banks: Neural scoring approach

    OpenAIRE

    Bekhet, Hussain Ali; Eletter, Shorouq Fathi Kamel

    2014-01-01

    Despite the increase in the number of non-performing loans and competition in the banking market, most of the Jordanian commercial banks are reluctant to use data mining tools to support credit decisions. Artificial neural networks represent a new family of statistical techniques and promising data mining tools that have been used successfully in classification problems in many domains. This paper proposes two credit scoring models using data mining techniques to support loan decisions for th...

  5. Caco-2 cell permeability modelling: a neural network coupled genetic algorithm approach

    Science.gov (United States)

    Di Fenza, Armida; Alagona, Giuliano; Ghio, Caterina; Leonardi, Riccardo; Giolitti, Alessandro; Madami, Andrea

    2007-04-01

    The ability to cross the intestinal cell membrane is a fundamental prerequisite of a drug compound. However, the experimental measurement of such an important property is a costly and highly time consuming step of the drug development process, because it is necessary to synthesize the compound first. Therefore, in silico modelling of intestinal absorption, which can be carried out at very early stages of drug design, is an appealing alternative procedure, based mainly on multivariate statistical analysis such as partial least squares (PLS) and neural networks (NN). Our implementation of neural network models for the prediction of intestinal absorption is based on the correlation of Caco-2 cell apparent permeability (P_app) values, as a measure of intestinal absorption, with the structures of two different data sets of drug candidates. Several molecular descriptors of the compounds were calculated and the optimal subsets were selected using a genetic algorithm; the method is therefore referred to as Genetic Algorithm-Neural Network (GA-NN). A methodology combining a genetic algorithm search with neural network analysis applied to the modelling of Caco-2 P_app has never been presented before, although the two procedures have already been employed separately. Moreover, we provide new Caco-2 cell permeability measurements for more than two hundred compounds. Interestingly, the selected descriptors are shown to possess physico-chemical connotations in excellent accordance with the well known molecular properties involved in the cellular membrane permeation phenomenon: hydrophilicity, hydrogen bonding propensity, hydrophobicity and molecular size. The predictive ability of the models, although rather good for a preliminary study, is somewhat affected by the poor precision of the experimental Caco-2 measurements. Finally, the generalization ability of one model was checked on an external test set not derived from the data sets used to build the models.

  6. Incorporation of iodine into apatite structure: a crystal chemistry approach using Artificial Neural Network

    OpenAIRE

    Jianwei Wang

    2015-01-01

    Materials with apatite crystal structure have a great potential for incorporating the long-lived radioactive iodine isotope (129I) in the form of iodide (I−) from nuclear waste streams. Because of its durability and potentially high iodine content, the apatite waste form can reduce iodine release rate and minimize the waste volume. Crystal structure and composition of apatite (A5(XO4)3Z) was investigated for iodide incorporation into the channel of the structure using Artificial Neural Networ...

  7. Artificial neural network approach to modeling of alcoholic fermentation of thick juice from sugar beet processing

    Directory of Open Access Journals (Sweden)

    Jokić Aleksandar I.

    2012-01-01

    Full Text Available In this paper the bioethanol production in batch culture by free Saccharomyces cerevisiae cells from thick juice, an intermediate product of sugar beet processing, was examined. The obtained results suggest that it is possible to decrease fermentation time for the cultivation medium based on thick juice with a starting sugar content of 5-15 g kg-1. For the fermentation of cultivation medium based on thick juice with starting sugar contents of 20 and 25 g kg-1, a significant increase in ethanol content was attained during the whole fermentation process, resulting in ethanol contents of 12.51 and 10.95 dm3 m-3 after 48 h, respectively. Other goals of this work were to investigate the possibilities for predicting the experimental results using artificial neural networks (ANNs) and to find their optimal topology. A feed-forward back-propagation artificial neural network was used to test the hypothesis. Fermentation time and starting sugar content were used as input variables. The neural networks had one output value: ethanol content, yeast cell number or sugar content. There was one hidden layer and the optimal number of neurons was found to be nine for all selected network outputs. In this study the transfer function was tansig and the selected learning rule was Levenberg-Marquardt. The results suggest that artificial neural networks are a good prediction tool for the selected network outputs. It was found that the experimental results are in very good agreement with the computed ones. The coefficient of determination (R-squared) was found to be 0.9997, 0.9997 and 0.9999 for ethanol content, yeast cell number and sugar content, respectively.
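    As a rough illustration of the reported topology (two inputs, one hidden layer of nine tansig neurons, one output), here is a minimal numpy sketch; it uses plain gradient descent rather than the Levenberg-Marquardt training reported in the record, and the data are placeholders rather than the fermentation measurements.

    import numpy as np

    rng = np.random.default_rng(0)
    # Placeholder inputs: [fermentation time (h), starting sugar content]; synthetic target "ethanol content".
    X = rng.uniform([0, 5], [48, 25], size=(50, 2))
    y = (0.2 * X[:, :1] + 0.3 * X[:, 1:]) / 20.0

    # 2-9-1 network with tanh ("tansig") hidden units, as described in the abstract.
    W1, b1 = rng.normal(0, 0.5, (2, 9)), np.zeros(9)
    W2, b2 = rng.normal(0, 0.5, (9, 1)), np.zeros(1)
    lr = 0.01
    Xs = (X - X.mean(0)) / X.std(0)          # standardise inputs

    for epoch in range(2000):
        h = np.tanh(Xs @ W1 + b1)            # hidden activations
        out = h @ W2 + b2                    # linear output layer
        err = out - y
        # Backpropagation of the mean squared error.
        dW2 = h.T @ err / len(y); db2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        dW1 = Xs.T @ dh / len(y); db1 = dh.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

    print("final MSE:", float(np.mean(err ** 2)))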

  8. Hybrid energy system evaluation in water supply system energy production: neural network approach

    Energy Technology Data Exchange (ETDEWEB)

    Goncalves, Fabio V.; Ramos, Helena M. [Civil Engineering Department, Instituto Superior Tecnico, Technical University of Lisbon, Av. Rovisco Pais, 1049-001, Lisbon (Portugal); Reis, Luisa Fernanda R. [Universidade de Sao Paulo, EESC/USP, Departamento de Hidraulica e Saneamento., Avenida do Trabalhador Saocarlense, 400, Sao Carlos-SP (Brazil)

    2010-07-01

    Water supply systems are large consumers of energy, and the use of hybrid systems for green energy production is the new proposal of this work. This work presents a computational model based on neural networks to determine the best configuration of a hybrid system to generate energy in water supply systems. In this study the energy sources making up this hybrid system can be the national power grid, micro-hydro and wind turbines. The artificial neural network is composed of six layers and is trained using data generated by a model of the hybrid configuration and an economic simulator - CES. The reason for the development of an advanced forecasting model based on neural networks is to allow rapid simulation and proper interaction with the hydraulic and power model simulator - HPS. The results show that this computational model is useful as an advanced decision support system in the design of configurations of hybrid power systems applied to water supply systems, improving the solutions for their global energy efficiency.

  9. Subspace projection approaches to classification and visualization of neural network-level encoding patterns.

    Directory of Open Access Journals (Sweden)

    Remus Oşan

    2007-05-01

    Full Text Available Recent advances in large-scale ensemble recordings allow monitoring of the activity patterns of several hundreds of neurons in freely behaving animals. The emergence of such high-dimensional datasets poses challenges for the identification and analysis of dynamical network patterns. While several types of multivariate statistical methods have been used for integrating responses from multiple neurons, their effectiveness in pattern classification and predictive power has not been compared in a direct and systematic manner. Here we systematically employed a series of projection methods, such as Multiple Discriminant Analysis (MDA), Principal Components Analysis (PCA) and Artificial Neural Networks (ANN), and compared them with non-projection multivariate statistical methods such as Multivariate Gaussian Distributions (MGD). Our analyses of hippocampal data recorded during episodic memory events and cortical data simulated during face perception or arm movements illustrate how low-dimensional encoding subspaces can reveal the existence of network-level ensemble representations. We show how the use of regularization methods can prevent these statistical methods from over-fitting the training data sets when the trial numbers are much smaller than the number of recorded units. Moreover, we investigated the extent to which the computations implemented by the projection methods reflect the underlying hierarchical properties of the neural populations. Based on their ability to extract the essential features for pattern classification, we conclude that the typical performance ranking of these methods on under-sampled neural data of large dimension is MDA>PCA>ANN>MGD.
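    A minimal sketch of the subspace-projection idea (here PCA via an SVD) applied to a trials x units activity matrix; the data are synthetic and the dimensions are assumptions, not the recordings analysed in the record.

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic ensemble data: 40 trials x 200 recorded units (placeholder values).
    activity = rng.normal(size=(40, 200))

    # Centre the data and compute the principal subspace with an SVD.
    centred = activity - activity.mean(axis=0)
    U, S, Vt = np.linalg.svd(centred, full_matrices=False)

    k = 3                                  # dimension of the encoding subspace
    projected = centred @ Vt[:k].T         # low-dimensional representation of each trial
    explained = (S[:k] ** 2).sum() / (S ** 2).sum()
    print(projected.shape, "variance explained:", round(float(explained), 3))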

  10. Artificial neural network based modelling approach for municipal solid waste gasification in a fluidized bed reactor.

    Science.gov (United States)

    Pandey, Daya Shankar; Das, Saptarshi; Pan, Indranil; Leahy, James J; Kwapinski, Witold

    2016-12-01

    In this paper, multi-layer feed forward neural networks are used to predict the lower heating value of gas (LHV), the lower heating value of gasification products including tars and entrained char (LHVp) and the syngas yield during gasification of municipal solid waste (MSW) in a fluidized bed reactor. These artificial neural networks (ANNs) with different architectures are trained using the Levenberg-Marquardt (LM) back-propagation algorithm, and a cross validation is also performed to ensure that the results generalise to other unseen datasets. A rigorous study is carried out on optimally choosing the number of hidden layers, the number of neurons in the hidden layer and the activation function in a network using multiple Monte Carlo runs. Nine input and three output parameters are used to train and test various neural network architectures in both multiple output and single output prediction paradigms using the available experimental datasets. The model selection procedure is carried out to ascertain the best network architecture in terms of predictive accuracy. The simulation results show that the ANN based methodology is a viable alternative which can be used to predict the performance of a fluidized bed gasifier. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Neural network approaches to tracer identification as related to PIV research

    Energy Technology Data Exchange (ETDEWEB)

    Seeley, C.H. Jr.

    1992-12-01

    Neural networks have become very powerful tools in many fields of interest. This thesis examines the application of neural networks to another rapidly growing field: flow visualization. Flow visualization research is used to experimentally determine how fluids behave and to verify computational results obtained analytically. A form of flow visualization, particle image velocimetry (PIV), determines the flow movement by tracking neutrally buoyant particles suspended in the fluid. PIV research has begun to improve rapidly with the advent of digital imagers, which can quickly digitize an image into arrays of grey levels. These grey level arrays are analyzed to determine the location of the tracer particles. Once the particle positions have been determined across multiple image frames, it is possible to track their movements, and hence, the flow of the fluid. This thesis explores the potential of several different neural networks to identify the positions of the tracer particles. Among these networks are Backpropagation, Kohonen (counter-propagation), and Cellular networks. Each of these algorithms was employed in its basic form, and training and testing were performed on a synthetic grey level array. Modifications were then made to them in attempts to improve the results.

  12. On limited fan-in optimal neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.; Makaruk, H.E. [Los Alamos National Lab., NM (United States); Draghici, S. [Wayne State Univ., Detroit, MI (United States). Vision and Neural Networks Lab.

    1998-03-01

    Because VLSI implementations do not cope well with highly interconnected nets (the area of a chip grows as the cube of the fan-in), this paper analyses the influence of limited fan-in on the size and VLSI optimality of such nets. Two different approaches will show that VLSI- and size-optimal discrete neural networks can be obtained for small (i.e. lower than linear) fan-in values. They have applications to hardware implementations of neural networks. The first approach is based on implementing a certain sub-class of Boolean functions, IF_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-ins. The second approach is based on implementing Boolean functions for which the classical Shannon decomposition can be used. Such a solution has already been used to prove bounds on neural networks with fan-ins limited to 2. The authors generalize the result presented there to arbitrary fan-in, and prove that the size is minimized by small fan-in values, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. Finally, a size-optimal neural network having small constant fan-ins is suggested for IF_{n,m} functions.

  13. Traffic accident reconstruction and an approach for prediction of fault rates using artificial neural networks: A case study in Turkey.

    Science.gov (United States)

    Can Yilmaz, Ali; Aci, Cigdem; Aydin, Kadir

    2016-08-17

    Currently, in Turkey, fault rates in traffic accidents are determined according to the initiative of accident experts (with no speed analysis of the vehicles, just consideration of the accident type), and there are no specific quantitative instructions on fault rates related to the processing of accidents, which just represents the type of collision (side impact, head to head, rear end, etc.) in the No. 2918 Turkish Highway Traffic Act (THTA 1983). The aim of this study is to introduce a scientific and systematic approach for the determination of fault rates in the most frequent property damage-only (PDO) traffic accidents in Turkey. In this study, data (police reports, skid marks, deformation, crush depth, etc.) collected from the most frequent and controversial accident types (4 sample vehicle-vehicle scenarios) involving PDO were inserted into a reconstruction software called vCrash. Sample real-world scenarios were simulated on the software to generate different vehicle deformations that also correspond to energy-equivalent speed data just before the crash. These values were used to train multilayer feedforward artificial neural network (MFANN), function fitting neural network (FITNET, a specialized version of MFANN), and generalized regression neural network (GRNN) models within 10-fold cross-validation to predict fault rates without using the software. The performance of the artificial neural network (ANN) prediction models was evaluated using the mean square error (MSE) and the multiple correlation coefficient (R). It was shown that the MFANN model performed better for predicting fault rates (i.e., lower MSE and higher R) than the FITNET and GRNN models for accident scenarios 1, 2, and 3, whereas FITNET performed the best for scenario 4. The FITNET model showed the second best prediction results for the first 3 scenarios. Because there is no training phase in GRNN, the GRNN model produced results much faster than the MFANN and FITNET models. However, the GRNN model had the worst prediction results. The
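    A sketch of the evaluation protocol described above (10-fold cross-validation scored with MSE and the multiple correlation coefficient R), using scikit-learn's generic MLP regressor on placeholder data rather than the vCrash-derived features; the feature set and target values are assumptions.

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 5))                                  # placeholder crash features
    y = 50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 2, 120)    # placeholder fault rate (%)

    mse_scores, r_scores = [], []
    for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        mse_scores.append(mean_squared_error(y[test], pred))
        r_scores.append(np.corrcoef(y[test], pred)[0, 1])          # correlation between observed and predicted

    print("mean MSE:", np.mean(mse_scores), "mean R:", np.mean(r_scores))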

  14. Refining cost-effectiveness analyses using the net benefit approach and econometric methods: an example from a trial of anti-depressant treatment.

    Science.gov (United States)

    Sabes-Figuera, Ramon; McCrone, Paul; Kendricks, Antony

    2013-04-01

    Economic evaluation analyses can be enhanced by employing regression methods, which allow for the identification of important sub-groups and for adjusting for imperfect randomisation in clinical trials or analysing non-randomised data. To explore the benefits of combining regression techniques and the standard Bayesian approach to refine cost-effectiveness analyses using data from randomised clinical trials. Data from a randomised trial of anti-depressant treatment were analysed and a regression model was used to explore the factors that have an impact on the net benefit (NB) statistic, with the aim of using these findings to adjust the cost-effectiveness acceptability curves. Exploratory sub-sample analyses were carried out to explore possible differences in cost-effectiveness. Results: The analysis found that having suffered a previous similar depression is strongly correlated with a lower NB, independent of the outcome measure or follow-up point. In patients with previous similar depression, adding a selective serotonin reuptake inhibitor (SSRI) to supportive care for mild-to-moderate depression is probably cost-effective at the level used by the English National Institute for Health and Clinical Excellence to make recommendations. This analysis highlights the need for the incorporation of econometric methods into cost-effectiveness analyses using the NB approach.
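    For readers unfamiliar with the net benefit statistic, a minimal sketch of the regression idea: NB_i = lambda * effect_i - cost_i for a willingness-to-pay threshold lambda, regressed on patient covariates. The variables, sample size and coefficients below are placeholders, not the trial data used in the record.

    import numpy as np

    rng = np.random.default_rng(0)
    n, wtp = 200, 20000.0                          # placeholder sample size and willingness-to-pay threshold

    effect = rng.normal(0.05, 0.02, n)             # placeholder incremental health outcomes
    cost = rng.normal(300.0, 100.0, n)             # placeholder incremental costs
    prev_depression = rng.integers(0, 2, n)        # covariate of interest (previous similar depression)

    nb = wtp * effect - cost                       # per-patient net benefit statistic

    # Regress NB on an intercept and the covariate (ordinary least squares).
    X = np.column_stack([np.ones(n), prev_depression])
    beta, *_ = np.linalg.lstsq(X, nb, rcond=None)
    print("mean NB without previous depression:", round(beta[0], 1))
    print("shift in NB with previous depression:", round(beta[1], 1))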

  15. Applying the Plan-Do-Study-Act (PDSA) approach to a large pragmatic study involving safety net clinics.

    Science.gov (United States)

    Coury, Jennifer; Schneider, Jennifer L; Rivelli, Jennifer S; Petrik, Amanda F; Seibel, Evelyn; D'Agostini, Brieshon; Taplin, Stephen H; Green, Beverly B; Coronado, Gloria D

    2017-06-19

    The Plan-Do-Study-Act (PDSA) cycle is a commonly used improvement process in health care settings, although its documented use in pragmatic clinical research is rare. A recent pragmatic clinical research study, called the Strategies and Opportunities to STOP Colon Cancer in Priority Populations (STOP CRC), used this process to optimize the research implementation of an automated colon cancer screening outreach program in intervention clinics. We describe the process of using this PDSA approach, the selection of PDSA topics by clinic leaders, and project leaders' reactions to using PDSA in pragmatic research. STOP CRC is a cluster-randomized pragmatic study that aims to test the effectiveness of a direct-mail fecal immunochemical testing (FIT) program involving eight Federally Qualified Health Centers in Oregon and California. We and a practice improvement specialist trained in the PDSA process delivered structured presentations to leaders of these centers; the presentations addressed how to apply the PDSA process to improve implementation of a mailed outreach program offering colorectal cancer screening through FIT tests. Center leaders submitted PDSA plans and delivered reports via webinar at quarterly meetings of the project's advisory board. Project staff conducted one-on-one, 45-min interviews with project leads from each health center to assess the reaction to and value of the PDSA process in supporting the implementation of STOP CRC. Clinic-selected PDSA activities included refining the intervention staffing model, improving outreach materials, and changing workflow steps. Common benefits of using PDSA cycles in pragmatic research were that it provided a structure for staff to focus on improving the program and it allowed staff to test the change they wanted to see. A commonly reported challenge was measuring the success of the PDSA process with the available electronic medical record tools. Understanding how the PDSA process can be applied to pragmatic

  16. Finding fossils in new ways: an artificial neural network approach to predicting the location of productive fossil localities.

    Science.gov (United States)

    Anemone, Robert; Emerson, Charles; Conroy, Glenn

    2011-01-01

    Chance and serendipity have long played a role in the location of productive fossil localities by vertebrate paleontologists and paleoanthropologists. We offer an alternative approach, informed by methods borrowed from the geographic information sciences and using recent advances in computer science, to more efficiently predict where fossil localities might be found. Our model uses an artificial neural network (ANN) that is trained to recognize the spectral characteristics of known productive localities and other land cover classes, such as forest, wetlands, and scrubland, within a study area based on the analysis of remotely sensed (RS) imagery. Using these spectral signatures, the model then classifies other pixels throughout the study area. The results of the neural network classification can be examined and further manipulated within a geographic information systems (GIS) software package. While we have developed and tested this model on fossil mammal localities in deposits of Paleocene and Eocene age in the Great Divide Basin of southwestern Wyoming, a similar analytical approach can be easily applied to fossil-bearing sedimentary deposits of any age in any part of the world. We suggest that new analytical tools and methods of the geographic sciences, including remote sensing and geographic information systems, are poised to greatly enrich paleoanthropological investigations, and that these new methods should be embraced by field workers in the search for, and geospatial analysis of, fossil primates and hominins. Copyright © 2011 Wiley-Liss, Inc.

  17. A neural tracking and motor control approach to improve rehabilitation of upper limb movements

    Directory of Open Access Journals (Sweden)

    Schmid Maurizio

    2008-02-01

    Full Text Available Abstract. Background: Restoration of upper limb movements in subjects recovering from stroke is an essential keystone in rehabilitative practices. Rehabilitation of arm movements, in fact, is usually a far more difficult task than that of the lower extremities. For these reasons, researchers are developing new methods and technologies so that the rehabilitative process can be more accurate, rapid and easily accepted by the patient. This paper introduces the proof of concept for a new non-invasive FES-assisted rehabilitation system for the upper limb, called smartFES (sFES), where the electrical stimulation is controlled by a biologically inspired neural inverse dynamics model, fed by the kinematic information associated with the execution of a planar goal-oriented movement. More specifically, this work details two steps of the proposed system: an ad hoc markerless motion analysis algorithm for the estimation of kinematics, and a neural controller that drives a synthetic arm. The vision of the entire system is to acquire kinematics from the analysis of video sequences during planar arm movements and to use it together with a neural inverse dynamics model able to provide the patient with the electrical stimulation patterns needed to perform the movement with the assisted limb. Methods: The markerless motion tracking system aims at localizing and monitoring the arm movement by tracking its silhouette. It uses a specifically designed motion estimation method, which we named Neural Snakes, which predicts the arm contour deformation as a first step for a silhouette extraction algorithm. The starting and ending points of the arm movement feed an Artificial Neural Controller, enclosing the muscular Hill's model, which solves the inverse dynamics to obtain the FES patterns needed to move a simulated arm from the starting point to the desired point. Both the position error with respect to the requested arm trajectory and the comparison between curvature factors

  18. A neural tracking and motor control approach to improve rehabilitation of upper limb movements.

    Science.gov (United States)

    Goffredo, Michela; Bernabucci, Ivan; Schmid, Maurizio; Conforto, Silvia

    2008-02-05

    Restoration of upper limb movements in subjects recovering from stroke is an essential keystone in rehabilitative practices. Rehabilitation of arm movements, in fact, is usually a far more difficult task than that of the lower extremities. For these reasons, researchers are developing new methods and technologies so that the rehabilitative process can be more accurate, rapid and easily accepted by the patient. This paper introduces the proof of concept for a new non-invasive FES-assisted rehabilitation system for the upper limb, called smartFES (sFES), where the electrical stimulation is controlled by a biologically inspired neural inverse dynamics model, fed by the kinematic information associated with the execution of a planar goal-oriented movement. More specifically, this work details two steps of the proposed system: an ad hoc markerless motion analysis algorithm for the estimation of kinematics, and a neural controller that drives a synthetic arm. The vision of the entire system is to acquire kinematics from the analysis of video sequences during planar arm movements and to use it together with a neural inverse dynamics model able to provide the patient with the electrical stimulation patterns needed to perform the movement with the assisted limb. The markerless motion tracking system aims at localizing and monitoring the arm movement by tracking its silhouette. It uses a specifically designed motion estimation method, which we named Neural Snakes, which predicts the arm contour deformation as a first step for a silhouette extraction algorithm. The starting and ending points of the arm movement feed an Artificial Neural Controller, enclosing the muscular Hill's model, which solves the inverse dynamics to obtain the FES patterns needed to move a simulated arm from the starting point to the desired point. Both the position error with respect to the requested arm trajectory and the comparison between curvature factors have been calculated in order to determine the

  19. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.

    Directory of Open Access Journals (Sweden)

    Alireza Alemi

    2015-08-01

    Full Text Available Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the
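    As an illustration of the plasticity rule summarised above (no implementation is given in the record, so the thresholds, learning rate and pattern statistics below are assumptions): for each synapse with an active presynaptic input, the local field of the postsynaptic neuron decides between no change, potentiation and depression.

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 200, 40                                        # network size and pattern count (assumptions)
    th_low, th_mid, th_high = -2.0, 0.0, 2.0              # the three thresholds (assumed values)
    eta = 0.05                                            # learning rate

    W = np.zeros((N, N))
    patterns = (rng.random((P, N)) < 0.5).astype(float)   # binary memory patterns, presented online

    for _ in range(50):                                   # repeated online presentations
        for xi in patterns:
            h = W @ xi                                    # local field of each neuron during presentation
            active = xi == 1                              # only synapses from active inputs are plastic
            potentiate = (h >= th_mid) & (h < th_high)    # between the intermediate and highest threshold
            depress = (h > th_low) & (h < th_mid)         # between the lowest and intermediate threshold
            W[np.ix_(potentiate, active)] += eta          # above th_high or below th_low: no plasticity
            W[np.ix_(depress, active)] -= eta
    np.fill_diagonal(W, 0.0)                              # no self-connections
    print("fraction of zero-weight synapses:", float(np.mean(W == 0)))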

  20. Adaptive, integrated sensor processing to compensate for drift and uncertainty: a stochastic 'neural' approach.

    Science.gov (United States)

    Tang, T B; Chen, H; Murray, A F

    2004-02-01

    An adaptive stochastic classifier based on a simple, novel neural architecture, the Continuous Restricted Boltzmann Machine (CRBM), is demonstrated. Together with sensors and signal conditioning circuits, the classifier is capable of measuring and classifying (with high accuracy) the H+ ion concentration, in the presence of both random noise and sensor drift. Training on-line, the stochastic classifier is able to overcome significant drift of real incomplete sensor data dynamically. As analogue hardware, this signal-level sensor fusion scheme is therefore suitable for real-time analysis in a miniaturised multisensor microsystem such as a Lab-in-a-Pill (LIAP).

  1. Combined Geometric and Neural Network Approach to Generic Fault Diagnosis in Satellite Actuators and Sensors

    DEFF Research Database (Denmark)

    Baldi, P.; Blanke, Mogens; Castaldi, P.

    2016-01-01

    This paper presents a novel scheme for diagnosis of faults affecting the sensors measuring the satellite attitude, body angular velocity and flywheel spin rates as well as defects related to the control torques provided by satellite reaction wheels. A nonlinear geometric design is used to avoid...... that aerodynamic disturbance torques have unwanted influence on the residuals exploited for fault detection and isolation. Radial basis function neural networks are used to obtain fault estimation filters that do not need a priori information about the fault internal models. Simulation results are based...... on a detailed nonlinear satellite model with embedded disturbance description. The results document the efficacy of the proposed diagnosis scheme....

  2. Inductive coupling between overhead power lines and nearby metallic pipelines. A neural network approach

    Directory of Open Access Journals (Sweden)

    Levente Czumbil

    2015-12-01

    Full Text Available The current paper presents an artificial intelligence based technique applied to the investigation of electromagnetic interference problems between high voltage power lines (HVPL) and nearby underground metallic pipelines (MP). An artificial neural network (NN) solution has been implemented by the authors to evaluate the inductive coupling between HVPL and MP for different constructive geometries of an electromagnetic interference problem, considering a multi-layer soil structure. The obtained results are compared with solutions provided by a finite element method (FEM) based analysis, which are considered as the reference. The advantage of the proposed method lies in a simplified computation model compared to FEM and, implicitly, a lower computational time.

  3. Fluorescent diagnostics of organic pollution in natural waters: A neural network approach

    Energy Technology Data Exchange (ETDEWEB)

    Orlov, Y.V.; Persiantsev, I.G.; Rebrik, S.P. [Nuclear Physics Institute, Moscow (Russian Federation)] [and others]

    1995-12-31

    Rapid diagnosis of pollution is one of the key tasks in the field of ecological monitoring of natural and technogenic environments. One of the promising methods for fluorescent diagnosis of organic pollution of the water environment is the registration and analysis of two-dimensional Spectral Fluorescent Signatures (SFS). The neural network-based system suggested in this paper is intended for solving the problem of detection, identification, and concentration measurement of water environment pollution. The suggested system uses SFS as the input pattern and allows one to build a rapid diagnosis system for ecological monitoring.

  4. A New Robust Training Law for Dynamic Neural Networks with External Disturbance: An LMI Approach

    Directory of Open Access Journals (Sweden)

    Choon Ki Ahn

    2010-01-01

    Full Text Available A new robust training law, which is called an input/output-to-state stable training law (IOSSTL), is proposed for dynamic neural networks with external disturbance. Based on a linear matrix inequality (LMI) formulation, the IOSSTL is presented to not only guarantee exponential stability but also reduce the effect of an external disturbance. It is shown that the IOSSTL can be obtained by solving the LMI, which can be easily facilitated by using some standard numerical packages. Numerical examples are presented to demonstrate the validity of the proposed IOSSTL.

  5. Simulation-based model checking approach to cell fate specification during Caenorhabditis elegans vulval development by hybrid functional Petri net with extension.

    Science.gov (United States)

    Li, Chen; Nagasaki, Masao; Ueno, Kazuko; Miyano, Satoru

    2009-04-27

    Model checking approaches were applied to biological pathway validations around 2003. Recently, Fisher et al. have proved the importance of the model checking approach by inferring new regulation of signaling crosstalk in C. elegans and confirming the regulation with biological experiments. They took a discrete and state-based approach to explore all possible states of the system underlying vulval precursor cell (VPC) fate specification for desired properties. However, since both discrete and continuous features appear to be an indispensable part of biological processes, it is more appropriate to use quantitative models to capture the dynamics of biological systems. The key motivation of this paper is to establish a quantitative methodology to model and analyze in silico models incorporating the use of the model checking approach. A novel method of modeling and simulating biological systems with the use of the model checking approach is proposed based on hybrid functional Petri net with extension (HFPNe) as the framework dealing with both discrete and continuous events. Firstly, we construct a quantitative VPC fate model with 1761 components by using HFPNe. Secondly, we apply two major biological fate determination rules - Rule I and Rule II - to the VPC fate model. We then conduct 10,000 simulations for each of 48 sets of different genotypes, investigate variations of cell fate patterns under each genotype, and validate the two rules by comparing three simulation targets consisting of fate patterns obtained from in silico and in vivo experiments. In particular, an evaluation was successfully done by using our VPC fate model to investigate one target derived from biological experiments involving hybrid lineage observations. However, understanding of hybrid lineages is hard to achieve with a discrete model, because a hybrid lineage occurs when the system comes close to certain thresholds, as discussed by Sternberg and Horvitz in 1986. Our simulation results suggest that: Rule I

  6. A Supramolecular Gel Approach to Minimize the Neural Cell Damage during Cryopreservation Process.

    Science.gov (United States)

    Zeng, Jie; Yin, Yixia; Zhang, Li; Hu, Wanghui; Zhang, Chaocan; Chen, Wanyu

    2016-03-01

    The storage method for living cells is one of the major challenges in cell-based applications. Here, a novel supramolecular gel cryopreservation system (BDTC gel system) is introduced, which can observably increase neural cell viability during the cryopreservation process because this system can (1) confine ice crystal growth within the pores of the BDTC gel system, (2) decrease the amount of ice crystallization and the cryopreservation system's freezing point, and (3) reduce the rate of change of cell volumes and the osmotic shock. In addition, the thermoreversible BDTC supramolecular gel is easy to remove after thawing, so it does not hinder the adherence, growth, and proliferation of cells. The results of functionality assessments indicate that the BDTC gel system can minimize neural cell damage during the cryopreservation process. This method can potentially be applied to the cryopreservation of other cell types, tissues, or organs, and will benefit cell therapy, tissue engineering, and organ transplantation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Audio Classification in Speech and Music: A Comparison between a Statistical and a Neural Approach

    Directory of Open Access Journals (Sweden)

    Bugatti Alessandro

    2002-01-01

    Full Text Available We focus attention on the problem of audio classification into speech and music for multimedia applications. In particular, we present a comparison between two different techniques for speech/music discrimination. The first method is based on the zero crossing rate and Bayesian classification. It is very simple from a computational point of view, and gives good results in the case of pure music or speech. The simulation results show that some performance degradation arises when the music segment also contains some speech superimposed on the music, or strong rhythmic components. To overcome these problems, we propose a second method that uses more features and is based on neural networks (specifically a multi-layer Perceptron). In this case we obtain better performance, at the expense of a limited growth in the computational complexity. In practice, the proposed neural network is simple to implement if a suitable polynomial is used as the activation function, and a real-time implementation is possible even if low-cost embedded systems are used.

  8. Artificial Neural Network Approach in Laboratory Test Reporting:  Learning Algorithms.

    Science.gov (United States)

    Demirci, Ferhat; Akan, Pinar; Kume, Tuncay; Sisman, Ali Riza; Erbayraktar, Zubeyde; Sevinc, Suleyman

    2016-08-01

    In the field of laboratory medicine, minimizing errors and establishing standardization is only possible by predefined processes. The aim of this study was to build an experimental decision algorithm model open to improvement that would efficiently and rapidly evaluate the results of biochemical tests with critical values by evaluating multiple factors concurrently. The experimental model was built by Weka software (Weka, Waikato, New Zealand) based on the artificial neural network method. Data were received from Dokuz Eylül University Central Laboratory. "Training sets" were developed for our experimental model to teach the evaluation criteria. After training the system, "test sets" developed for different conditions were used to statistically assess the validity of the model. After developing the decision algorithm with three iterations of training, no result was verified that was refused by the laboratory specialist. The sensitivity of the model was 91% and specificity was 100%. The estimated κ score was 0.950. This is the first study based on an artificial neural network to build an experimental assessment and decision algorithm model. By integrating our trained algorithm model into a laboratory information system, it may be possible to reduce employees' workload without compromising patient safety. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Shannon Entropy and Mean Square Errors for speeding the convergence of Multilayer Neural Networks: A comparative approach

    Directory of Open Access Journals (Sweden)

    Hussein Aly Kamel Rady

    2011-11-01

    Full Text Available Improving the efficiency and convergence rate of multilayer backpropagation neural network algorithms is an active area of research. Recent years have witnessed increasing attention to entropy-based criteria in adaptive systems. Several principles have been proposed based on the maximization or minimization of entropic cost functions. One way of using entropy criteria in learning systems is to minimize the entropy of the error between two variables: typically one is the output of the learning system and the other is the target. In this paper, improving the efficiency and convergence rate of multilayer backpropagation (BP) neural networks is proposed. The usual Mean Square Error (MSE) minimization principle is substituted by the minimization of the Shannon Entropy (SE) of the differences between the multilayer perceptron's output and the desired target. These two cost functions are studied, analyzed and tested with two different activation functions, namely the Cauchy and the hyperbolic tangent activation functions. The comparative approach indicates that the degree of convergence using the Shannon Entropy cost function is higher than its counterpart using MSE, and that MSE speeds up the convergence more than Shannon Entropy.
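    A minimal sketch of the two cost functions compared above: the usual mean square error and a histogram-based Shannon entropy of the error between output and target. The error samples are synthetic and the fixed-edge histogram estimator is one simple choice among several, not necessarily the estimator used in the record.

    import numpy as np

    def mse_cost(errors):
        # Usual mean square error criterion.
        return float(np.mean(errors ** 2))

    def shannon_entropy_cost(errors, bin_edges=np.linspace(-3.0, 3.0, 61)):
        # Shannon entropy of the error distribution, estimated from a histogram with fixed bin edges
        # so that differently spread error distributions are comparable.
        p, _ = np.histogram(errors, bins=bin_edges)
        p = p[p > 0] / p.sum()
        return float(-np.sum(p * np.log2(p)))

    rng = np.random.default_rng(0)
    errors_early = rng.normal(0.0, 0.8, 1000)   # wide error distribution (early in training)
    errors_late = rng.normal(0.0, 0.1, 1000)    # concentrated error distribution (late in training)

    for name, e in [("early", errors_early), ("late", errors_late)]:
        print(name, "MSE:", round(mse_cost(e), 4), "entropy (bits):", round(shannon_entropy_cost(e), 4))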

  10. Flag Choice Behavior in the Turkish Merchant Fleet: A Model Proposal with Artificial Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Burak KÖSEOĞLU

    2017-07-01

    Full Text Available Shipping companies have to make several strategic decisions about the vessels that perform transportation activities. The most important of these strategic decisions is "Flag Choice". This decision given by the company is shaped in the light of external and internal factors. In this paper, initially, the factors which affect the flag choice decision of the shipping companies and ship owners who play an important role in handling the Turkish merchant fleet are determined. Then, the relation and association status of the factors which have significant impacts on this decision are displayed with a data mining application. An Artificial Neural Network (ANN) application is realized with the obtained outputs and a model is proposed for the flag selection decision. It is expected that the results of the study provide certain outcomes and guidelines for related organizations dealing with shipping operations, as well as suggestions for effective and efficient coordination among the relevant institutions.

  11. New S-box calculation approach for Rijndael-AES based on an artificial neural network

    Directory of Open Access Journals (Sweden)

    Jaime David Rios Arrañaga

    2017-11-01

    Full Text Available The S-box is a basic and important component in symmetric key encryption, used in block ciphers to confuse or hide the relationship between the plaintext and the ciphertext. In this paper a way to develop the transformation of an input of the S-box specified in the AES encryption system, through an artificial neural network and the multiplicative inverse in a Galois Field, is presented. With this implementation more security is achieved, since the values of the S-box remain hidden and the inverse table serves as a distractor, as it would appear to be the complete S-box. This is implemented in MATLAB and HSPICE using a network of perceptron neurons with a hidden layer and null error.
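    For context, the transformation that such a network has to reproduce is the standard AES S-box: the multiplicative inverse in GF(2^8) (modulo x^8 + x^4 + x^3 + x + 1) followed by an affine transformation. The sketch below computes that reference mapping directly; it is not the neural implementation from the record.

    def gf_mul(a, b):
        # Multiplication in GF(2^8) modulo the AES polynomial 0x11B.
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            if a & 0x100:
                a ^= 0x11B
            b >>= 1
        return result

    def gf_inverse(x):
        # Multiplicative inverse in GF(2^8); by convention 0 maps to 0.
        if x == 0:
            return 0
        return next(y for y in range(1, 256) if gf_mul(x, y) == 1)

    def aes_sbox(x):
        # Affine transformation applied to the inverse, as specified for AES.
        b = gf_inverse(x)
        s = 0x63
        for shift in range(5):
            s ^= ((b << shift) | (b >> (8 - shift))) & 0xFF   # b rotated left by 0..4 bits
        return s

    # Known reference values from the AES specification.
    assert aes_sbox(0x00) == 0x63 and aes_sbox(0x01) == 0x7C and aes_sbox(0x53) == 0xED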

  12. Credit risk assessment model for Jordanian commercial banks: Neural scoring approach

    Directory of Open Access Journals (Sweden)

    Hussain Ali Bekhet

    2014-01-01

    Full Text Available Despite the increase in the number of non-performing loans and competition in the banking market, most of the Jordanian commercial banks are reluctant to use data mining tools to support credit decisions. Artificial neural networks represent a new family of statistical techniques and promising data mining tools that have been used successfully in classification problems in many domains. This paper proposes two credit scoring models using data mining techniques to support loan decisions for the Jordanian commercial banks. Loan application evaluation would improve credit decision effectiveness and control loan office tasks, as well as save analysis time and cost. Both accepted and rejected loan applications, from different Jordanian commercial banks, were used to build the credit scoring models. The results indicate that the logistic regression model performed slightly better than the radial basis function model in terms of the overall accuracy rate. However, the radial basis function was superior in identifying those customers who may default.

  13. A Neural Network Approach to Infer Optical Depth of Thick Ice Clouds at Night

    Science.gov (United States)

    Minnis, P.; Hong, G.; Sun-Mack, S.; Chen, Yan; Smith, W. L., Jr.

    2016-01-01

    One of the roadblocks to continuously monitoring cloud properties is the tendency of clouds to become optically black at cloud optical depths (COD) of 6 or less. This constraint dramatically reduces the quantitative information content at night. A recent study found that because of their diffuse nature, ice clouds remain optically gray, to some extent, up to COD of 100 at certain wavelengths. Taking advantage of this weak dependency and the availability of COD retrievals from CloudSat, an artificial neural network algorithm was developed to estimate COD values up to 70 from common satellite imager infrared channels. The method was trained using matched 2007 CloudSat and Aqua MODIS data and is tested using similar data from 2008. The results show a significant improvement over the use of default values at night with high correlation. This paper summarizes the results and suggests paths for future improvement.

  14. Decomposition approach to the stability of recurrent neural networks with asynchronous time delays in quaternion field.

    Science.gov (United States)

    Zhang, Dandan; Kou, Kit Ian; Liu, Yang; Cao, Jinde

    2017-10-01

    In this paper, the global exponential stability of quaternion-valued recurrent neural networks (QVNNs) with asynchronous time delays is investigated. Due to the non-commutativity of quaternion multiplication resulting from the Hamilton rules ij = -ji = k, jk = -kj = i, ki = -ik = j, ijk = i² = j² = k² = -1, the QVNN is decomposed into four real-valued systems, which are studied separately. The exponential convergence is proved directly, together with the existence and uniqueness of the equilibrium point of the considered systems. Combining the generalized ∞-norm with the Cauchy convergence property in the quaternion field, some sufficient conditions guaranteeing stability are established without using any Lyapunov-Krasovskii functional or linear matrix inequality. Finally, a numerical example is given to demonstrate the effectiveness of the results. Copyright © 2017 Elsevier Ltd. All rights reserved.
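
    The decomposition step rests on nothing more than the Hamilton product rules quoted above: writing each quaternion weight and state as four real numbers turns one quaternion multiplication into four coupled real-valued expressions. A small NumPy sketch of that bookkeeping (illustrative only; it does not reproduce the paper's stability analysis):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions p = p0 + p1*i + p2*j + p3*k and q."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,   # real part
        p0*q1 + p1*q0 + p2*q3 - p3*q2,   # i part
        p0*q2 - p1*q3 + p2*q0 + p3*q1,   # j part
        p0*q3 + p1*q2 - p2*q1 + p3*q0,   # k part
    ])

# Non-commutativity (ij = k but ji = -k) is exactly why the four real-valued
# systems obtained from a QVNN remain coupled.
i, j = np.array([0., 1., 0., 0.]), np.array([0., 0., 1., 0.])
assert np.allclose(qmul(i, j), [0, 0, 0, 1])
assert np.allclose(qmul(j, i), [0, 0, 0, -1])

# One synaptic term w * x of a quaternion-valued neuron, decomposed into its
# four real components (hypothetical example values).
w = np.array([0.5, -0.3, 0.2, 0.1])
x = np.array([1.0, 0.4, -0.7, 0.2])
print(qmul(w, x))
```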

  15. Mapping Speech Spectra from Throat Microphone to Close-Speaking Microphone: A Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Yegnanarayana B

    2007-01-01

    Full Text Available Speech recorded from a throat microphone is robust to the surrounding noise, but sounds unnatural unlike the speech recorded from a close-speaking microphone. This paper addresses the issue of improving the perceptual quality of the throat microphone speech by mapping the speech spectra from the throat microphone to the close-speaking microphone. A neural network model is used to capture the speaker-dependent functional relationship between the feature vectors (cepstral coefficients of the two speech signals. A method is proposed to ensure the stability of the all-pole synthesis filter. Objective evaluations indicate the effectiveness of the proposed mapping scheme. The advantage of this method is that the model gives a smooth estimate of the spectra of the close-speaking microphone speech. No distortions are perceived in the reconstructed speech. This mapping technique is also used for bandwidth extension of telephone speech.

  16. Cloud cover and solar disk state estimation using all-sky images: deep neural networks approach compared to routine methods

    Science.gov (United States)

    Krinitskiy, Mikhail; Sinitsyn, Alexey

    2017-04-01

    Shortwave radiation is an important component of the surface heat budget over sea and land. To estimate it, accurate observations of cloud conditions are needed, including total cloud cover and the spatial and temporal cloud structure. While cloud structure is routinely observed visually, building accurate SW radiation parameterizations also requires quantifying it with precise instrumental measurements. Although several state-of-the-art land-based cloud cameras already satisfy researchers' needs, their major disadvantage is the inaccuracy of the all-sky image processing algorithms, which typically results in uncertainties of 2-4 octa in cloud cover estimates, with a resulting true-scoring cloud cover accuracy of about 7%. Moreover, none of these algorithms determine cloud types. We developed an approach for estimating cloud cover and structure which provides much more accurate estimates and also allows for measuring additional characteristics. This method is based on a synthetic controlling index, namely the "grayness rate index", which we introduced in 2014. Since then this index has demonstrated high efficiency when used along with the "background sunburn effect suppression" technique to detect thin clouds. This made it possible to significantly increase the accuracy of total cloud cover estimation in various sky image states using this extension of the routine algorithm type. Errors in the cloud cover estimates decreased significantly, resulting in a mean squared error of about 1.5 octa. The resulting true-scoring accuracy is more than 38%. The main source of uncertainty in this approach is errors in determining the solar disk state. While the deep neural network approach lets us estimate the solar disk state with 94% accuracy, the final result of total cloud cover estimation is still not satisfactory. To solve this problem completely we applied a set of machine learning algorithms to the problem of total cloud cover estimation

  17. Sensory neural pathways revisited to unravel the temporal dynamics of the Simon effect: A model-based cognitive neuroscience approach.

    Science.gov (United States)

    Salzer, Yael; de Hollander, Gilles; Forstmann, Birte U

    2017-06-01

    The Simon task is one of the most prominent interference tasks and has been extensively studied in experimental psychology and cognitive neuroscience. Despite years of research, the underlying mechanism driving the phenomenon and its temporal dynamics are still disputed. Within the framework of the review, we adopt a model-based cognitive neuroscience approach. We first go over key findings in the literature of the Simon task, discuss competing qualitative cognitive theories and the difficulty of testing them empirically. We then introduce sequential sampling models, a particular class of mathematical cognitive process models. Finally, we argue that the brain architecture accountable for the processing of spatial ('where') and non-spatial ('what') information, could constrain these models. We conclude that there is a clear need to bridge neural and behavioral measures, and that mathematical cognitive models may facilitate the construction of this bridge and work towards revealing the underlying mechanisms of the Simon effect. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. A hybrid Taguchi-artificial neural network approach to predict surface roughness during electric discharge machining of titanium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Sanjeev; Batish, Ajay [Thapar University, Patiala (India); Singh, Rupinder [GNDEC, Ludhiana (India); Singh, T. P. [Symbiosis Institute of Technology, Pune (India)

    2014-07-15

    In the present study, the electric discharge machining process was used for machining of titanium alloys. Eight process parameters were varied during the process. Experimental results showed that current and pulse-on-time significantly affected the performance characteristics. An artificial neural network coupled with the Taguchi approach was applied for optimization and prediction of surface roughness. The experimental results and the predicted results showed good agreement. SEM was used to investigate the surface integrity. Analysis of the migration of different chemical elements and the formation of compounds on the surface was performed using EDS and XRD patterns. The results showed that high discharge energy caused surface defects such as cracks, craters, a thick recast layer, micro pores, pin holes, residual stresses and debris. Also, migration of chemical elements both from the electrode and the dielectric medium was observed during EDS analysis. Presence of carbon was seen on the machined surface. XRD results showed formation of a titanium carbide compound which precipitated on the machined surface.

  19. Artificial Neural Networks, and Evolutionary Algorithms as a systems biology approach to a data-base on fetal growth restriction.

    Science.gov (United States)

    Street, Maria E; Buscema, Massimo; Smerieri, Arianna; Montanini, Luisa; Grossi, Enzo

    2013-12-01

    One of the specific aims of systems biology is to model and discover properties of the functioning of cells, tissues and organisms. A systems biology approach was undertaken to investigate, as far as possible, the entire system of intra-uterine growth we had available, to assess the variables of interest, discriminate those which were effectively related to appropriate or restricted intrauterine growth, and achieve an understanding of the system in these two conditions. The Artificial Adaptive Systems, which include Artificial Neural Networks and Evolutionary Algorithms, led us to the first analyses. These analyses identified the importance of the biochemical variables IL-6, IGF-II and IGFBP-2 protein concentrations in placental lysates, offered a new insight into placental markers of fetal growth within the IGF and cytokine systems, confirmed their relationships, and offered a critical assessment of studies previously performed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Net4Care PHMR Library

    DEFF Research Database (Denmark)

    2014-01-01

    The Net4Care PHMR library contains a) a GreenCDA approach for constructing a data object representing a PHMR document: SimpleClinicalDocument, and b) a Builder which can produce an XML document representing a valid Danish PHMR document (following the MedCom profile) from the SimpleClinicalDocument.

  1. Development of a remote sensing algorithm for cyanobacterial phycocyanin pigment in the Baltic Sea using neural network approach

    Science.gov (United States)

    Riha, Stefan; Krawczyk, Harald

    2011-11-01

    Water quality monitoring in the Baltic Sea is of high ecological importance for all its neighbouring countries. They are highly interested in a regular monitoring of water quality parameters of their regional zones. Special attention is paid to the occurrence and dissemination of algae blooms. Among these blooms, potentially toxic or harmful cyanobacteria cultures are a special case for investigation, due to their specific optical properties and their negative influence on the ecological state of the aquatic system. Satellite remote sensing, with its high temporal and spatial resolution, allows frequent observations of large areas of the Baltic Sea with special focus on its two seasonal algae blooms. For better monitoring of the cyanobacteria-dominated summer blooms, adapted algorithms are needed which take into account the special optical properties of blue-green algae. Chlorophyll-a standard algorithms typically fail to correctly recognize these occurrences. To significantly improve the observation of the propagation of cyanobacteria blooms, the Marine Remote Sensing group of DLR has started the development of a model-based inversion algorithm that includes a four-component bio-optical water model for Case-2 waters, which extends the commonly calculated parameter set of chlorophyll, suspended matter and CDOM with an additional parameter for the estimation of phycocyanin absorption. It was necessary to carry out detailed optical laboratory measurements with different cyanobacteria cultures occurring in the Baltic Sea for the generation of a specific bio-optical model. The inversion of satellite remote sensing data is based on an artificial neural network technique. This is a model-based multivariate non-linear inversion approach. The specifically designed neural network is trained with a comprehensive dataset of simulated reflectance values taking into account the laboratory obtained specific optical

  2. Predicting carcinogenicity of diverse chemicals using probabilistic neural network modeling approaches

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com [Academy of Scientific and Innovative Research, Council of Scientific and Industrial Research, New Delhi (India); Environmental Chemistry Division, CSIR-Indian Institute of Toxicology Research, Post Box 80, Mahatma Gandhi Marg, Lucknow 226 001 (India); Gupta, Shikha; Rai, Premanjali [Academy of Scientific and Innovative Research, Council of Scientific and Industrial Research, New Delhi (India); Environmental Chemistry Division, CSIR-Indian Institute of Toxicology Research, Post Box 80, Mahatma Gandhi Marg, Lucknow 226 001 (India)

    2013-10-15

    Robust global models capable of discriminating positive and non-positive carcinogens and predicting the carcinogenic potency of chemicals in rodents were developed. A dataset of 834 structurally diverse chemicals extracted from the Carcinogenic Potency Database (CPDB) was used, which contained 466 positive and 368 non-positive carcinogens. Twelve non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals and nonlinearity in the data were evaluated using the Tanimoto similarity index and Brock–Dechert–Scheinkman statistics. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) models were constructed for classification and function optimization problems using the carcinogenicity end point in rat. Validation of the models was performed using internal and external procedures employing a wide series of statistical checks. The PNN constructed using five descriptors rendered a classification accuracy of 92.09% in the complete rat data. The PNN model rendered classification accuracies of 91.77%, 80.70% and 92.08% in mouse, hamster and pesticide data, respectively. The GRNN constructed with nine descriptors yielded a correlation coefficient of 0.896 between the measured and predicted carcinogenic potency with a mean squared error (MSE) of 0.44 in the complete rat data. The rat carcinogenicity model (GRNN) applied to the mouse and hamster data yielded correlation coefficients and MSE of 0.758, 0.71 and 0.760, 0.46, respectively. The results suggest wide applicability of the inter-species models in predicting carcinogenic potency of chemicals. Both the PNN and GRNN (inter-species) models constructed here can be useful tools in predicting the carcinogenicity of new chemicals for regulatory purposes. - Graphical abstract: Figure (a) shows classification accuracies (positive and non-positive carcinogens) in rat, mouse, hamster, and pesticide data yielded by optimal PNN model. Figure (b) shows generalization and predictive

  3. SCYNet. Testing supersymmetric models at the LHC with neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Bechtle, Philip; Belkner, Sebastian; Hamer, Matthias [Universitaet Bonn, Bonn (Germany); Dercks, Daniel [Universitaet Hamburg, Hamburg (Germany); Keller, Tim; Kraemer, Michael; Sarrazin, Bjoern; Schuette-Engel, Jan; Tattersall, Jamie [RWTH Aachen University, Institute for Theoretical Particle Physics and Cosmology, Aachen (Germany)

    2017-10-15

    SCYNet (SUSY Calculating Yield Net) is a tool for testing supersymmetric models against LHC data. It uses neural network regression for a fast evaluation of the profile likelihood ratio. Two neural network approaches have been developed: one network has been trained using the parameters of the 11-dimensional phenomenological Minimal Supersymmetric Standard Model (pMSSM-11) as an input and evaluates the corresponding profile likelihood ratio within milliseconds. It can thus be used in global pMSSM-11 fits without time penalty. In the second approach, the neural network has been trained using model-independent signature-related objects, such as energies and particle multiplicities, which were estimated from the parameters of a given new physics model. (orig.)

  4. SCYNet: testing supersymmetric models at the LHC with neural networks

    Science.gov (United States)

    Bechtle, Philip; Belkner, Sebastian; Dercks, Daniel; Hamer, Matthias; Keller, Tim; Krämer, Michael; Sarrazin, Björn; Schütte-Engel, Jan; Tattersall, Jamie

    2017-10-01

    SCYNet (SUSY Calculating Yield Net) is a tool for testing supersymmetric models against LHC data. It uses neural network regression for a fast evaluation of the profile likelihood ratio. Two neural network approaches have been developed: one network has been trained using the parameters of the 11-dimensional phenomenological Minimal Supersymmetric Standard Model (pMSSM-11) as an input and evaluates the corresponding profile likelihood ratio within milliseconds. It can thus be used in global pMSSM-11 fits without time penalty. In the second approach, the neural network has been trained using model-independent signature-related objects, such as energies and particle multiplicities, which were estimated from the parameters of a given new physics model.

  5. A Comparative Approach to Hand Force Estimation using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Farid Mobasser

    2012-01-01

    Full Text Available In many applications that include direct human involvement, such as control of prosthetic arms, athletic training, and studying muscle physiology, hand force is needed for control, modeling and monitoring purposes. The use of inexpensive and easily portable active electromyography (EMG) electrodes and position sensors would be advantageous in these applications compared to the use of force sensors, which are often very expensive and require bulky frames. Among non-model-based estimation methods, Multilayer Perceptron Artificial Neural Networks (MLPANN) has widely been used to estimate muscle force or joint torque from different anatomical features in humans or animals. This paper investigates the use of Radial Basis Function (RBF) ANN and MLPANN for force estimation and experimentally compares the performance of the two methodologies for the same human anatomy, i.e., hand force estimation, under an ensemble of operational conditions. In this unified study, the EMG signal readings from upper-arm muscles involved in elbow joint movement and elbow angular position and velocity are utilized as inputs to the ANNs. In addition, the use of the elbow angular acceleration signal as an input for the ANNs is also investigated.
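
    In outline, the comparison amounts to fitting an MLP and an RBF network to the same input-output pairs and scoring both on held-out data. The sketch below does this on synthetic stand-in data (random features standing in for EMG readings plus elbow angle and velocity); the RBF network is assembled from k-means centres with a linear read-out, which is a common construction but not necessarily the authors' exact training scheme.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))          # stand-in: EMG features, elbow angle, velocity
y = np.tanh(X @ rng.normal(size=6)) + 0.1 * rng.normal(size=2000)   # "hand force"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

# Minimal RBF network: k-means centres, Gaussian hidden layer, linear output weights.
centres = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X_tr).cluster_centers_
width = np.median(np.linalg.norm(X_tr[:, None] - centres[None], axis=2))
phi = lambda Z: np.exp(-np.linalg.norm(Z[:, None] - centres[None], axis=2) ** 2
                       / (2 * width ** 2))
rbf = Ridge(alpha=1e-3).fit(phi(X_tr), y_tr)

print("MLP R^2:", mlp.score(X_te, y_te))
print("RBF R^2:", rbf.score(phi(X_te), y_te))
```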

  6. Geothermics and neural networks: A first approach; Geotermia y redes neuronales: Una primera aproximacion

    Energy Technology Data Exchange (ETDEWEB)

    Flores Armenta, Magaly del Carmen [Gerencia de Proyectos Geotermoelectricos de la Comision Federal de Electricidad, Morelia (Mexico); Barragan Orbe, Carlos [Comision Federal de Electricidad, Mexico, D. F. (Mexico)

    1995-09-01

    Neural networks have been hailed as the greatest technological advance since the transistor. They are predicted to be a common household item by the year 2000. This new form of machine intelligence is able to solve problems without using mathematical rules; it only needs examples to learn from. The first geothermal problem to be solved with this powerful tool is the prediction of liquid and steam production, as well as the completion of wells, in Los Humeros, Puebla, Mexico. The first attempt shows the learning capacity of the developed model and the precision of the predictions that were made.

  7. Forecasting the Acquisition of University Spin-Outs: An RBF Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Weiwei Liu

    2017-01-01

    Full Text Available University spin-outs (USOs), creating businesses from university intellectual property, are a relatively common phenomenon. As a knowledge transfer channel, the spin-out business model is attracting extensive attention. In this paper, the impacts of six equities on the acquisition of USOs, including founders, university, banks, business angels, venture capital, and other equity, are comprehensively analyzed based on theoretical and empirical studies. Firstly, the average distribution of spin-out equity at formation is calculated based on sample data of 350 UK USOs. According to this distribution, a radial basis function (RBF) neural network (NN) model is employed to forecast the effects of each equity on the acquisition. To improve the classification accuracy, the novel set-membership method is adopted in the training process of the RBF NN. Furthermore, a simulation test is carried out to measure the effects of the six equities on the acquisition of USOs. The simulation results show that an increase in the university's equity has a negative effect on the acquisition of USOs, whereas increases in the remaining five equities have positive effects. Finally, three suggestions are provided to promote the development and growth of USOs.

  8. A Convolutional Neural Network Approach for Assisting Avalanche Search and Rescue Operations with UAV Imagery

    Directory of Open Access Journals (Sweden)

    Mesay Belete Bejiga

    2017-01-01

    Full Text Available Following an avalanche, one of the factors that affect victims' chance of survival is the speed with which they are located and dug out. Rescue teams use techniques like trained rescue dogs and electronic transceivers to locate victims. However, the resources and time required to deploy rescue teams are major bottlenecks that decrease a victim's chance of survival. Advances in the field of Unmanned Aerial Vehicles (UAVs) have enabled the use of flying robots equipped with sensors like optical cameras to assess the damage caused by natural or manmade disasters and locate victims in the debris. In this paper, we propose assisting avalanche search and rescue (SAR) operations with UAVs fitted with vision cameras. The sequence of images of the avalanche debris captured by the UAV is processed with a pre-trained Convolutional Neural Network (CNN) to extract discriminative features. A trained linear Support Vector Machine (SVM) is integrated at the top of the CNN to detect objects of interest. Moreover, we introduce a pre-processing method to increase the detection rate and a post-processing method based on a Hidden Markov Model to improve the prediction performance of the classifier. Experimental results conducted on two different datasets at different levels of resolution show that the detection performance increases with an increase in resolution, while the computation time increases. Additionally, they also suggest that a significant decrease in processing time can be achieved thanks to the pre-processing step.
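
    Structurally, the detection stage is a linear SVM trained on fixed CNN descriptors, followed by temporal post-processing over consecutive frames. The sketch below assumes the CNN features have already been extracted (random vectors stand in for them here) and replaces the paper's Hidden Markov Model post-processing with a simple sliding-window majority vote; it illustrates the pipeline shape only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
features = rng.normal(size=(600, 512))   # stand-in for CNN descriptors, one per frame
labels = rng.integers(0, 2, size=600)    # 1 = object of interest visible in the frame

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
svm = LinearSVC(C=1.0, max_iter=5000).fit(X_tr, y_tr)

def smooth(pred, k=5):
    """Majority vote over a window of k consecutive frames (HMM stand-in)."""
    out = pred.copy()
    for t in range(len(pred)):
        window = pred[max(0, t - k // 2): t + k // 2 + 1]
        out[t] = int(window.mean() >= 0.5)
    return out

frame_flags = smooth(svm.predict(X_te))
print("frames flagged:", int(frame_flags.sum()), "of", len(frame_flags))
```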

  9. Artificial neural network versus case-based approaches to lexical combination

    Science.gov (United States)

    Dunbar, George L.

    1994-03-01

    Lexical combination presents a number of intriguing problems for cognitive science. By studying the empirical phenomena of combination we can derive constraints on models of the representation of individual lexical items. One particular phenomenon that symbolic models have been unable to accommodate is 'semantic interaction'. Medin & Shoben (1988) have shown that properties associated with nouns by subjects vary with the choice of adjective. For example, wooden spoons are not just made of a different material: the phrase is interpreted as denoting a 'larger' object. However, the adjective wooden is not generally held to carry implications as to size. We report experimental results showing similar effects across a range of properties for a single adjective in combination with different nouns from a single semantic field. It is this more radical dependence of interpretative features on lexical partners that we term 'semantic interaction'. The phenomenon described by Medin and Shoben cannot be accounted for by the Selective Modification model, the most complete model hitherto. We show that a case-based reasoning system could account for earlier data because of the particular examples chosen, but that such a model could not handle semantic interaction. A neural network system is presented that does handle semantic interaction.

  10. Pharmacological approach for targeting dysfunctional brain plasticity: Focus on neural cell adhesion molecule (NCAM).

    Science.gov (United States)

    Aonurm-Helm, Anu; Jaako, Külli; Jürgenson, Monika; Zharkovsky, Alexander

    2016-11-01

    Brain plasticity refers to the ability of the brain to undergo functionally relevant adaptations in response to external and internal stimuli. Alterations in brain plasticity have been associated with several neuropsychiatric disorders, and current theories suggest that dysfunctions in neuronal circuits and synaptogenesis have a major impact in the development of these diseases. Among the molecules that regulate brain plasticity, neural cell adhesion molecule (NCAM) and its polysialylated form PSA-NCAM have been of particular interest for years because alterations in NCAM and PSA-NCAM levels have been associated with memory impairment, depression, autistic spectrum disorders and schizophrenia. In this review, we discuss the roles of NCAM and PSA-NCAM in the regulation of brain plasticity and, in particular, their roles in the mechanisms of depression. We also demonstrate that the NCAM-mimetic peptides FGL and Enreptin are able to restore disrupted neuronal plasticity. FGL peptide has also been demonstrated to ameliorate the symptoms of depressive-like behavior in NCAM-deficient mice and therefore, may be considered a new drug candidate for the treatment of depression as well as other neuropsychiatric disorders with disrupted neuroplasticity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Modeling subjective evaluation of soundscape quality in urban open spaces: An artificial neural network approach.

    Science.gov (United States)

    Yu, Lei; Kang, Jian

    2009-09-01

    This research aims to explore the feasibility of using computer-based models to predict the soundscape quality evaluation of potential users in urban open spaces at the design stage. With the data from large scale field surveys in 19 urban open spaces across Europe and China, the importance of various physical, behavioral, social, demographical, and psychological factors for the soundscape evaluation has been statistically analyzed. Artificial neural network (ANN) models have then been explored at three levels. It has been shown that for both subjective sound level and acoustic comfort evaluation, a general model for all the case study sites is less feasible due to the complex physical and social environments in urban open spaces; models based on individual case study sites perform well but the application range is limited; and specific models for certain types of location/function would be reliable and practical. The performance of acoustic comfort models is considerably better than that of sound level models. Based on the ANN models, soundscape quality maps can be produced and this has been demonstrated with an example.

  12. A Comparative Approach to Hand Force Estimation using Artificial Neural Networks.

    Science.gov (United States)

    Mobasser, Farid; Hashtrudi-Zaad, Keyvan

    2012-01-01

    In many applications that include direct human involvement such as control of prosthetic arms, athletic training, and studying muscle physiology, hand force is needed for control, modeling and monitoring purposes. The use of inexpensive and easily portable active electromyography (EMG) electrodes and position sensors would be advantageous in these applications compared to the use of force sensors which are often very expensive and require bulky frames. Among non-model-based estimation methods, Multilayer Perceptron Artificial Neural Networks (MLPANN) has widely been used to estimate muscle force or joint torque from different anatomical features in humans or animals. This paper investigates the use of Radial Basis Function (RBF) ANN and MLPANN for force estimation and experimentally compares the performance of the two methodologies for the same human anatomy, i.e., hand force estimation, under an ensemble of operational conditions. In this unified study, the EMG signal readings from upper-arm muscles involved in elbow joint movement and elbow angular position and velocity are utilized as inputs to the ANNs. In addition, the use of the elbow angular acceleration signal as an input for the ANNs is also investigated.

  13. Image-based quantitative analysis of gold immunochromatographic strip via cellular neural network approach.

    Science.gov (United States)

    Zeng, Nianyin; Wang, Zidong; Zineddin, Bachar; Li, Yurong; Du, Min; Xiao, Liang; Liu, Xiaohui; Young, Terry

    2014-05-01

    Gold immunochromatographic strip assay provides a rapid, simple, single-copy and on-site way to detect the presence or absence of the target analyte. This paper aims to develop a method for accurately segmenting the test line and control line of the gold immunochromatographic strip (GICS) image for quantitatively determining the trace concentrations in the specimen, which can lead to more functional information than the traditional qualitative or semi-quantitative strip assay. The Canny operator as well as the mathematical morphology method is used to detect and extract the GICS reading-window. Then, the test line and control line of the GICS reading-window are segmented by the cellular neural network (CNN) algorithm, where the template parameters of the CNN are designed by the switching particle swarm optimization (SPSO) algorithm for improving the performance of the CNN. It is shown that the SPSO-based CNN offers a robust method for accurately segmenting the test and control lines, and therefore serves as a novel image methodology for the interpretation of GICS. Furthermore, quantitative comparison is carried out among four algorithms in terms of the peak signal-to-noise ratio. It is concluded that the proposed CNN algorithm gives higher accuracy and the CNN is capable of parallelism and analog very-large-scale integration implementation within a remarkably efficient time.
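
    The template-design step is an optimization loop: particles encode candidate template parameters and are scored by segmentation quality. The sketch below shows a standard particle swarm optimizer on a toy objective; it is the plain PSO, not the switching variant (SPSO) used in the paper, and the objective is a placeholder rather than a segmentation score.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    """Minimise `objective` with a standard global-best particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                            # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(objective, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Placeholder objective; in the paper the particles would encode CNN template parameters.
best, best_val = pso(lambda p: np.sum((p - 0.3) ** 2), dim=4)
print(best, best_val)
```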

  14. Hybrid Forecasting Approach Based on GRNN Neural Network and SVR Machine for Electricity Demand Forecasting

    Directory of Open Access Journals (Sweden)

    Weide Li

    2017-01-01

    Full Text Available Accurate electric power demand forecasting plays a key role in electricity markets and power systems. Electric power demand is usually a non-linear problem due to various unknown reasons, which makes it difficult to obtain accurate predictions by traditional methods. The purpose of this paper is to propose a novel hybrid forecasting method for managing and scheduling electricity power. The proposed new method, EEMD-SCGRNN-PSVR, combines ensemble empirical mode decomposition (EEMD), seasonal adjustment (S), cross validation (C), a general regression neural network (GRNN) and a support vector regression machine optimized by the particle swarm optimization algorithm (PSVR). The main idea of EEMD-SCGRNN-PSVR is to forecast the waveform and trend components hidden in the demand series instead of directly forecasting the original electric demand. EEMD-SCGRNN-PSVR is used to predict the one-week-ahead half-hourly electricity demand in two data sets (New South Wales (NSW) and Victorian State (VIC)) in Australia. Experimental results show that the new hybrid model outperforms the other three models in terms of forecasting accuracy and model robustness.
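
    Of the components combined above, the GRNN is the simplest to state: it is a Gaussian-kernel weighted average of the training targets, with one pattern unit per training sample. A minimal sketch of that component alone, on a toy periodic series (it is not the full EEMD-SCGRNN-PSVR pipeline):

```python
import numpy as np

class GRNN:
    """General regression neural network: Gaussian-kernel weighted average
    of the training targets (one pattern neuron per training sample)."""
    def __init__(self, sigma=0.3):
        self.sigma = sigma

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)
        return self

    def predict(self, X):
        d2 = ((np.asarray(X, float)[:, None] - self.X[None]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2 * self.sigma ** 2))
        return (w @ self.y) / w.sum(axis=1)

# Toy periodic "demand" series: predict the next value from the last 4 values.
t = np.arange(500)
series = 10 + np.sin(2 * np.pi * t / 48) + 0.1 * np.random.default_rng(0).normal(size=500)
X = np.stack([series[i:i + 4] for i in range(len(series) - 4)])
y = series[4:]
model = GRNN(sigma=0.3).fit(X[:400], y[:400])
print("test MAE:", np.abs(model.predict(X[400:]) - y[400:]).mean())
```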

  15. Analyzing psychotherapy process as intersubjective sensemaking: an approach based on discourse analysis and neural networks.

    Science.gov (United States)

    Nitti, Mariangela; Ciavolino, Enrico; Salvatore, Sergio; Gennaro, Alessandro

    2010-09-01

    The authors propose a method for analyzing the psychotherapy process: discourse flow analysis (DFA). DFA is a technique representing the verbal interaction between therapist and patient as a discourse network, aimed at measuring the therapist-patient discourse ability to generate new meanings through time. DFA assumes that the main function of psychotherapy is to produce semiotic novelty. DFA is applied to the verbatim transcript of the psychotherapy. It defines the main meanings active within the therapeutic discourse by means of the combined use of text analysis and statistical techniques. Subsequently, it represents the dynamic interconnections among these meanings in terms of a "discursive network." The dynamic and structural indexes of the discursive network have been shown to provide a valid representation of the patient-therapist communicative flow as well as an estimation of its clinical quality. Finally, a neural network is designed specifically to identify patterns of functioning of the discursive network and to verify the clinical validity of these patterns in terms of their association with specific phases of the psychotherapy process. An application of the DFA to a case of psychotherapy is provided to illustrate the method and the kinds of results it produces.

  16. Wind speed time series reconstruction using a hybrid neural genetic approach

    Science.gov (United States)

    Rodriguez, H.; Flores, J. J.; Puig, V.; Morales, L.; Guerra, A.; Calderon, F.

    2017-11-01

    Currently, electric energy is used in practically all modern human activities. Most of the energy produced comes from fossil fuels, causing irreversible damage to the environment. Lately, there has been an effort by nations to produce energy using clean methods, such as solar and wind energy, among others. Wind energy is one of the cleanest alternatives. However, the wind speed is not constant, making planning and operation of electric power systems a difficult activity. Knowing in advance the amount of raw material (wind speed) used for energy production allows us to estimate the energy to be generated by the power plant, helping maintenance planning, operational management, and optimal operational cost. For these reasons, the forecast of wind speed becomes a necessary task. The forecast process involves the use of past observations of the variable to forecast (wind speed). To measure wind speed, weather stations use devices called anemometers, but due to poor maintenance, connection errors, or natural wear, they may present false or missing data. In this work, a hybrid methodology is proposed, which uses a compact genetic algorithm with an artificial neural network to reconstruct wind speed time series. The proposed methodology reconstructs the time series using an ANN defined by a compact genetic algorithm.
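
    The compact genetic algorithm referred to here keeps no explicit population: it maintains a probability vector over binary genes, samples two competitors per generation, and nudges the probabilities towards the winner. A minimal sketch of that optimizer on a toy objective (how the authors encode the ANN for reconstruction is not specified here, so the objective is a placeholder):

```python
import numpy as np

def compact_ga(objective, n_bits, pop_size=50, iters=2000, seed=0):
    """Compact GA: evolve a probability vector instead of a full population."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                        # P(bit = 1) for each gene
    for _ in range(iters):
        a = (rng.random(n_bits) < p).astype(int)    # sample two competitors
        b = (rng.random(n_bits) < p).astype(int)
        winner, loser = (a, b) if objective(a) >= objective(b) else (b, a)
        p += (winner - loser) / pop_size            # shift towards the winner
        p = np.clip(p, 0.0, 1.0)
    return (p > 0.5).astype(int)

# Placeholder objective: maximise the number of ones in the bit string.
best = compact_ga(lambda bits: bits.sum(), n_bits=32)
print(best, int(best.sum()))
```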

  17. A novel approach for tuberculosis screening based on deep convolutional neural networks

    Science.gov (United States)

    Hwang, Sangheum; Kim, Hyo-Eun; Jeong, Jihoon; Kim, Hee-Jin

    2016-03-01

    Tuberculosis (TB) is one of the major global health threats, especially in developing countries. Although newly diagnosed TB patients can recover with a high cure rate, many curable TB patients in developing countries are obliged to die because of delayed diagnosis, partly due to the lack of radiography and radiologists. Therefore, developing a computer-aided diagnosis (CAD) system for TB screening can contribute to early diagnosis of TB, which results in prevention of deaths from TB. Currently, most CAD algorithms adopt carefully designed morphological features distinguishing different lesion types to improve screening performance. However, such engineered features cannot be guaranteed to be the best descriptors for TB screening. Deep learning has become mainstream in the machine learning community. Especially in computer vision, it has been verified that deep convolutional neural networks (CNN) are a very promising approach for various visual tasks. Since a deep CNN enables end-to-end training from feature extraction to classification, it does not require objective-specific manual feature engineering. In this work, we designed a CAD system based on a deep CNN for automatic TB screening. Based on large-scale chest X-rays (CXRs), we achieved viable TB screening performance of 0.96, 0.93 and 0.88 in terms of AUC for three real field datasets, respectively, by exploiting the effect of transfer learning.

  18. NA-NET numerical analysis net

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J. [Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science]|[Oak Ridge National Lab., TN (United States); Rosener, B. [Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science

    1991-12-01

    This report describes a facility called NA-NET created to allow numerical analysts (na) an easy method of communicating with one another. The main advantage of the NA-NET is uniformity of addressing. All mail is addressed to the Internet host "na-net.ornl.gov" at Oak Ridge National Laboratory. Hence, members of the NA-NET do not need to remember complicated addresses or even where a member is currently located. As long as moving members change their e-mail address in the NA-NET everything works smoothly. The NA-NET system is currently located at Oak Ridge National Laboratory. It is running on the same machine that serves netlib. Netlib is a separate facility that distributes mathematical software via electronic mail. For more information on netlib consult, or send the one-line message "send index" to netlib@ornl.gov. The following report describes the current NA-NET system from both a user's perspective and from an implementation perspective. Currently, there are over 2100 members in the NA-NET. An average of 110 mail messages pass through this facility daily.

  19. NA-NET numerical analysis net

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J. (Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science Oak Ridge National Lab., TN (United States)); Rosener, B. (Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science)

    1991-12-01

    This report describes a facility called NA-NET created to allow numerical analysts (na) an easy method of communicating with one another. The main advantage of the NA-NET is uniformity of addressing. All mail is addressed to the Internet host "na-net.ornl.gov" at Oak Ridge National Laboratory. Hence, members of the NA-NET do not need to remember complicated addresses or even where a member is currently located. As long as moving members change their e-mail address in the NA-NET everything works smoothly. The NA-NET system is currently located at Oak Ridge National Laboratory. It is running on the same machine that serves netlib. Netlib is a separate facility that distributes mathematical software via electronic mail. For more information on netlib consult, or send the one-line message "send index" to netlib@ornl.gov. The following report describes the current NA-NET system from both a user's perspective and from an implementation perspective. Currently, there are over 2100 members in the NA-NET. An average of 110 mail messages pass through this facility daily.

  20. Online model checking approach based parameter estimation to a neuronal fate decision simulation model in Caenorhabditis elegans with hybrid functional Petri net with extension.

    Science.gov (United States)

    Li, Chen; Nagasaki, Masao; Koh, Chuan Hock; Miyano, Satoru

    2011-05-01

    Mathematical modeling and simulation studies are playing an increasingly important role in helping researchers elucidate how living organisms function in cells. In systems biology, researchers typically tune many parameters manually to achieve simulation results that are consistent with biological knowledge. This severely limits the size and complexity of the simulation models built. In order to break this limitation, we propose a computational framework to automatically estimate kinetic parameters for a given network structure. We utilized an online (on-the-fly) model checking technique (which saves resources compared to the offline approach), with a quantitative modeling and simulation architecture named hybrid functional Petri net with extension (HFPNe). We demonstrate the applicability of this framework by the analysis of the underlying model for the neuronal cell fate decision model (ASE fate model) in Caenorhabditis elegans. First, we built a quantitative ASE fate model containing 3327 components emulating nine genetic conditions. Then, using our developed efficient online model checker, MIRACH 1.0, together with parameter estimation, we ran 20 million simulation runs, and were able to locate 57 parameter sets for 23 parameters in the model that are consistent with 45 biological rules extracted from published biological articles without much manual intervention. To evaluate the robustness of these 57 parameter sets, we ran another 20 million simulation runs using different magnitudes of noise. Our simulation results concluded that among these models, one model is the most reasonable and robust simulation model owing to its high stability against these stochastic noises. Our simulation results provide interesting biological findings which could be used for future wet-lab experiments.

  1. Neural correlates of visualizations of concrete and abstract words in preschool children: A developmental embodied approach

    Directory of Open Access Journals (Sweden)

    Amedeo D'Angiulli

    2015-06-01

    Full Text Available The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors) (part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-Related Potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. The ERP time-course during both parts of the task showed that early activity (i.e., < 300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300-699 ms) and late (i.e., 700-1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, the post-auditory visualization involved right-hemispheric activity following a postero-anterior pathway sequence: occipital, parietal and temporal areas; conversely, matching visualization involved left-hemispheric activity following an antero-posterior pathway sequence: frontal, temporal, parietal and occipital areas. These results suggest that, similarly for concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodying

  2. In search of neural mechanisms of mirror neuron dysfunction in schizophrenia: resting state functional connectivity approach.

    Science.gov (United States)

    Zaytseva, Yuliya; Bendova, Marie; Garakh, Zhanna; Tintera, Jaroslav; Rydlo, Jan; Spaniel, Filip; Horacek, Jiri

    2015-09-01

    It has been repeatedly shown that schizophrenia patients have immense alterations in goal-directed behaviour, social cognition, and social interactions, cognitive abilities that are presumably driven by the mirror neurons system (MNS). However, the neural bases of these deficits still remain unclear. Along with the task-related fMRI and EEG research tapping into the mirror neuron system, the characteristics of the resting state activity in the particular areas that encompass mirror neurons might be of interest as they obviously determine the baseline of the neuronal activity. Using resting state fMRI, we investigated resting state functional connectivity (FC) in four predefined brain structures, ROIs (inferior frontal gyrus, superior parietal lobule, premotor cortex and superior temporal gyrus), known for their mirror neurons activity, in 12 patients with first psychotic episode and 12 matched healthy individuals. As a specific hypothesis, based on the knowledge of the anatomical inputs of thalamus to all preselected ROIs, we have investigated the FC between thalamus and the ROIs. Of all ROIs included, seed-to-voxel connectivity analysis revealed significantly decreased FC only in left posterior superior temporal gyrus (STG) and the areas in visual cortex and cerebellum in patients as compared to controls. Using ROI-to-ROI analysis (thalamus and selected ROIs), we have found an increased FC of STG and bilateral thalamus whereas the FC of these areas was decreased in controls. Our results suggest that: (1) schizophrenia patients exhibit FC of STG which corresponds to the previously reported changes of superior temporal gyrus in schizophrenia and might contribute to the disturbances of specific functions, such as emotional processing or spatial awareness; (2) as the thalamus plays a pivotal role in the sensory gating, providing the filtering of the redundant stimulation, the observed hyperconnectivity between the thalami and the STGs in patients with schizophrenia

  3. Incorporation of iodine into apatite structure: a crystal chemistry approach using Artificial Neural Network

    Science.gov (United States)

    Wang, Jianwei

    2015-06-01

    Materials with apatite crystal structure provide a great potential for incorporating the long-lived radioactive iodine isotope (129I) in the form of iodide (I-) from nuclear waste streams. Because of its durability and potentially high iodine content, the apatite waste form can reduce iodine release rate and minimize the waste volume. Crystal structure and composition of apatite was investigated for iodide incorporation into the channel of the structure using Artificial Neural Network. A total of 86 experimentally determined apatite crystal structures of different compositions were compiled from literature, and 46 of them were used to train the networks and 42 were used to test the performance of the trained networks. The results show that the performances of the networks are satisfactory for predictions of unit cell parameters a and c and channel size of the structure. The trained and tested networks were then used to predict unknown compositions of apatite that incorporates iodide. With a crystal chemistry consideration, chemical compositions that lead to matching the size of the structural channel to the size of iodide were then predicted to be able to incorporate iodide in the structural channel. The calculations suggest that combinations of A site cations of Ag+, K+, Sr2+, Pb2+, Ba2+, and Cs+, and X site cations, mostly formed tetrahedron, of Mn5+, As5+, Cr5+, V5+, Mo5+, Si4+, Ge4+, and Re7+ are possible apatite compositions that are able to incorporate iodide. The charge balance of different apatite compositions can be achieved by multiple substitutions at a single site or coupled substitutions at both A and X sites. The results give important clues for designing experiments to synthesize new apatite compositions and also provide a fundamental understanding how iodide is incorporated in the apatite structure. This understanding can provide important insights for apatite waste forms design by optimizing the chemical composition and synthesis procedure.

  4. The N400 effect during speaker-switch – Towards a conversational approach of measuring neural correlates of language

    Directory of Open Access Journals (Sweden)

    Tatiana Goregliad Fjaellingsdal

    2016-11-01

    The N400 can effectively be used to study neural correlates of language in conversational approaches including speaker-switches.

  5. Flexible body control using neural networks

    Science.gov (United States)

    Mccullough, Claire L.

    1992-01-01

    Progress is reported on the control of the Control Structures Interaction suitcase demonstrator (a flexible structure) using neural networks and fuzzy logic. It is concluded that while control by neural nets alone (i.e., allowing the net to design a controller with no human intervention) has yielded less than optimal results, the neural net trained to emulate the existing fuzzy logic controller does produce acceptable system responses for the initial conditions examined. Also, a neural net was found to be very successful in performing the emulation step necessary for the anticipatory fuzzy controller for the CSI suitcase demonstrator. The fuzzy neural hybrid, which exhibits good robustness and noise rejection properties, shows promise as a controller for practical flexible systems, and should be further evaluated.

  6. Army Net Zero Prove Out. Net Zero Waste Best Practices

    Science.gov (United States)

    2014-11-20

    Anaerobic Digesters – Although anaerobic digestion is not a new technology and has been used on a large-scale basis in wastewater treatment, the use of the technology should be demonstrated with other... approaches can be used for cardboard and cellulose-based packaging materials. This approach is in line with the Net Zero Waste hierarchy in terms of

  7. A hybrid approach to monthly streamflow forecasting: Integrating hydrological model outputs into a Bayesian artificial neural network

    Science.gov (United States)

    Humphrey, Greer B.; Gibbs, Matthew S.; Dandy, Graeme C.; Maier, Holger R.

    2016-09-01

    Monthly streamflow forecasts are needed to support water resources decision making in the South East of South Australia, where baseflow represents a significant proportion of the total streamflow and soil moisture and groundwater are important predictors of runoff. To address this requirement, the utility of a hybrid monthly streamflow forecasting approach is explored, whereby simulated soil moisture from the GR4J conceptual rainfall-runoff model is used to represent initial catchment conditions in a Bayesian artificial neural network (ANN) statistical forecasting model. To assess the performance of this hybrid forecasting method, a comparison is undertaken of the relative performances of the Bayesian ANN, the GR4J conceptual model and the hybrid streamflow forecasting approach for producing 1-month ahead streamflow forecasts at three key locations in the South East of South Australia. Particular attention is paid to the quantification of uncertainty in each of the forecast models and the potential for reducing forecast uncertainty by using the hybrid approach is considered. Case study results suggest that the hybrid models developed in this study are able to take advantage of the complementary strengths of both the ANN models and the GR4J conceptual models. This was particularly the case when forecasting high flows, where the hybrid models were shown to outperform the two individual modelling approaches in terms of the accuracy of the median forecasts, as well as reliability and resolution of the forecast distributions. In addition, the forecast distributions generated by the hybrid models were up to 8 times more precise than those based on climatology; thus, providing a significant improvement on the information currently available to decision makers.
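
    The hybrid idea reduces to using the conceptual model's simulated catchment state as an additional predictor for a data-driven model. The sketch below shows that structure with synthetic arrays: a smoothed rainfall series stands in for GR4J-simulated soil moisture, and an ordinary MLP stands in for the Bayesian ANN, so it illustrates the input arrangement rather than the paper's actual models.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 240                                                  # months of record
rain = rng.gamma(2.0, 30.0, n)                           # monthly rainfall (synthetic)
soil_moisture = np.convolve(rain, np.ones(3) / 3, mode="same")   # "conceptual model" state
flow = 0.4 * soil_moisture + 0.2 * rain + rng.normal(0, 5, n)    # synthetic streamflow

X_climate = rain[:-1, None]                              # climate predictor only
X_hybrid = np.column_stack([rain[:-1], soil_moisture[:-1]])      # plus simulated state
y = flow[1:]                                             # 1-month-ahead target
split = 180

for name, X in [("climate only", X_climate), ("hybrid", X_hybrid)]:
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                         random_state=0).fit(X[:split], y[:split])
    print(name, "MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))
```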

  8. Multiple levels of impaired neural plasticity and cellular resilience in bipolar disorder: developing treatments using an integrated translational approach.

    Science.gov (United States)

    Machado-Vieira, Rodrigo; Soeiro-De-Souza, Marcio G; Richards, Erica M; Teixeira, Antonio L; Zarate, Carlos A

    2014-02-01

    This paper reviews the neurobiology of bipolar disorder (BD), particularly findings associated with impaired cellular resilience and plasticity. PubMed/Medline articles and book chapters published over the last 20 years were identified using the following keyword combinations: BD, calcium, cytokines, endoplasmic reticulum (ER), genetics, glucocorticoids, glutamate, imaging, ketamine, lithium, mania, mitochondria, neuroplasticity, neuroprotection, neurotrophic, oxidative stress, plasticity, resilience, and valproate. BD is associated with impaired cellular resilience and synaptic dysfunction at multiple levels. These findings were partially prevented or even reversed with the use of mood stabilizers, but longitudinal studies associated with clinical outcome remain scarce. Evidence consistently suggests that BD involves impaired neural plasticity and cellular resilience at multiple levels. This includes the genetic and intra- and intercellular signalling levels, their impact on brain structure and function, as well as the final translation into behavioural/cognitive changes. Future studies are expected to adopt integrated translational approaches using a variety of methods (e.g., microarray approaches, neuroimaging, genetics, electrophysiology, and the new generation of -omics techniques). These studies will likely focus on more precise diagnoses and a personalized medicine paradigm in order to develop better treatments for those who need them most.

  9. A Unified Approach to Adaptive Neural Control for Nonlinear Discrete-Time Systems With Nonlinear Dead-Zone Input.

    Science.gov (United States)

    Liu, Yan-Jun; Gao, Ying; Tong, Shaocheng; Chen, C L Philip

    2016-01-01

    In this paper, an effective adaptive control approach is constructed to stabilize a class of nonlinear discrete-time systems which contain unknown functions, unknown dead-zone input, and unknown control direction. Different from a linear dead zone, the dead zone in this paper is a kind of nonlinear dead zone. To overcome the noncausal problem, which renders the control scheme infeasible, the systems can be transformed into an m-step-ahead predictor. Due to the nonlinear dead-zone, the transformed predictor still contains a nonaffine function. In addition, it is assumed that the gain function of the dead-zone input and the control direction are unknown. These conditions bring about difficulties and complexity in the controller design. Thus, the implicit function theorem is applied to deal with the nonaffine dead-zone, the problem caused by the unknown control direction is resolved by applying the discrete Nussbaum gain, and neural networks are used to approximate the unknown function. Based on Lyapunov theory, all the signals of the resulting closed-loop system are proved to be semiglobally uniformly ultimately bounded. Moreover, the tracking error is proved to be regulated to a small neighborhood around zero. The feasibility of the proposed approach is demonstrated by a simulation example.

  10. Comparative analysis of neural network and regression based condition monitoring approaches for wind turbine fault detection

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar

    2011-01-01

    approach are applied to further real time series containing gearbox bearing damages and stator temperature anomalies. The comparison revealed that all three models are capable of detecting incipient faults. However, they differ in the effort required for model development and the remaining operational time

  11. A neural network multi-task learning approach to biomedical named entity recognition.

    Science.gov (United States)

    Crichton, Gamal; Pyysalo, Sampo; Chiu, Billy; Korhonen, Anna

    2017-08-15

    Named Entity Recognition (NER) is a key task in biomedical text mining. Accurate NER systems require task-specific, manually-annotated datasets, which are expensive to develop and thus limited in size. Since such datasets contain related but different information, an interesting question is whether it might be possible to use them together to improve NER performance. To investigate this, we develop supervised, multi-task, convolutional neural network models and apply them to a large number of varied existing biomedical named entity datasets. Additionally, we investigated the effect of dataset size on performance in both single- and multi-task settings. We present a single-task model for NER, a Multi-output multi-task model and a Dependent multi-task model. We apply the three models to 15 biomedical datasets containing multiple named entities including Anatomy, Chemical, Disease, Gene/Protein and Species. Each dataset represents a task. The results from the single-task model and the multi-task models are then compared for evidence of benefits from Multi-task Learning. With the Multi-output multi-task model we observed an average F-score improvement of 0.8% when compared to the single-task model from an average baseline of 78.4%. Although there was a significant drop in performance on one dataset, performance improves significantly for five datasets by up to 6.3%. For the Dependent multi-task model we observed an average improvement of 0.4% when compared to the single-task model. There were no significant drops in performance on any dataset, and performance improves significantly for six datasets by up to 1.1%. The dataset size experiments found that as dataset size decreased, the multi-output model's performance increased compared to the single-task model's. Using 50, 25 and 10% of the training data resulted in an average drop of approximately 3.4, 8 and 16.7% respectively for the single-task model but approximately 0.2, 3.0 and 9.8% for the multi-task model. Our

  12. Downscaling Transpiration from the Field to the Tree Scale using the Neural Network Approach

    Science.gov (United States)

    Hopmans, J. W.

    2015-12-01

    Estimating actual evapotranspiration (ETa) spatial variability in orchards is key when trying to quantify water (and associated nutrient) leaching, both with the mass balance and inverse modeling methods. ETa measurements, however, generally occur at larger scales (e.g. the eddy-covariance method) or have a limited quantitative accuracy. In this study we propose to establish a statistical relation between field ETa and field-averaged variables known to be closely related to it, such as stem water potential (WP), soil water storage (WS) and ETc. For that we use 4 years of soil and almond tree water status data to train artificial neural networks (ANNs) predicting field-scale ETa and downscale the relation to the individual tree scale. ANNs composed of only two neurons in a hidden layer (11 parameters in total) proved to be the most accurate (overall RMSE = 0.0246 mm/h, R2 = 0.944), seemingly because adding more neurons generated overfitting of noise in the training dataset. According to the optimized weights in the best ANNs, the first hidden neuron could be considered in charge of relaying the ETc information while the other one would deal with the water stress response to stem WP, soil WS, and ETc. As individual trees had specific signatures for combinations of these variables, variability was generated in their ETa responses. The relative canopy cover was the main source of variability of ETa, while stem WP was the most influential factor for the ETa / ETc ratio. Trees on the drip-irrigated side of the orchard appeared to be less affected by low estimated soil WS in the root zone than those on the fanjet micro-sprinkler side, possibly due to a combination of (i) more substantial root biomass increasing the plant hydraulic conductance, (ii) bias in the soil WS estimation due to soil moisture heterogeneity on the drip side, and (iii) access to deeper water resources. Tree-scale ETa responses are in good agreement with soil-plant water relations reported in the literature, and
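
    A minimal sketch of such a small network, assuming the three field-averaged predictors named above (stem WP, soil WS, ETc) and a two-neuron hidden layer; the synthetic data and the stress relation used to generate the target are placeholders, not the orchard measurements.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)

        # Synthetic stand-ins for the field-averaged predictors: stem water
        # potential (MPa), soil water storage (mm) and ETc (mm/h).
        X = np.column_stack([rng.uniform(-2.0, -0.5, 200),   # stem WP
                             rng.uniform(100, 300, 200),     # soil WS
                             rng.uniform(0.0, 1.0, 200)])    # ETc
        # Hypothetical target: ETa equals ETc scaled down by a water-stress factor.
        y = X[:, 2] * (1.0 + 0.2 * X[:, 0]) + rng.normal(0, 0.01, 200)

        # Two tanh neurons in a single hidden layer, echoing the best ANN reported
        # above; the scaler is only for numerical convenience.
        ann = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(2,), activation="tanh",
                                         max_iter=5000, random_state=0))
        ann.fit(X, y)
        print("training RMSE (mm/h):", np.sqrt(np.mean((ann.predict(X) - y) ** 2)))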

  13. Artificial neural networks-based approach to design ARIs using QSAR for diabetes mellitus.

    Science.gov (United States)

    Patra, Jagdish C; Singh, Onkar

    2009-11-30

    In this article, in the first part, we propose an artificial neural network (ANN)-based intelligent technique to determine the quantitative structure-activity relationship (QSAR) among known aldose reductase inhibitors (ARIs) for diabetes mellitus using two molecular descriptors, i.e., the electronegativity and molar volume of functional groups present in the main ARI lead structure. We have shown that the multilayer perceptron (MLP)-based model is capable of determining the QSAR quite satisfactorily, with a high R-value. Usually, the design of potent ARIs requires the use of complex computer docking and quantum mechanical (QM) steps involving excessive time and human judgement. In the second part of this article, to reduce the design cycle of potent ARIs, we propose a novel ANN technique to eliminate the computer docking and QM steps and to predict the total score. The MLP-based QSAR models obtained in the first part are used to predict the potent ARIs, using the experimental data reported by Hu et al. (J Mol Graph Mod 2006, 24, 244). The proposed ANN-based model can predict the total score with an R-value of 0.88, which indicates that there exists a close match between the predicted and experimental total scores. Using the ANN model, we obtained 71 potent ARIs out of 6.25 million new ARI compounds created by substituting different functional groups at substitution sites of the main lead structure of a known ARI. Finally, using high bioactivity relationship and total score values, we determined four potential ARIs out of these 71 compounds. Interestingly, these four ARIs include the two potent ARIs reported by Hu et al. (J Mol Graph Mod 2006, 24, 244), who obtained these through the complex computer docking and QM steps. This fact indicates the effectiveness of our proposed ANN-based technique. We suggest these four compounds to be the most promising candidates for ARIs to prevent diabetic complications and further recommend wet-bench experiments to find their potential against

  14. Estimation of seismic quality factor: Artificial neural networks and current approaches

    Science.gov (United States)

    Yıldırım, Eray; Saatçılar, Ruhi; Ergintav, Semih

    2017-01-01

    The aims of this study are to estimate soil attenuation using alternatives to traditional methods, to compare the results of these methods, and to examine soil properties using the estimated results. The performances of all methods, the amplitude decay, spectral ratio, Wiener filter, and artificial neural network (ANN) methods, are examined on field and synthetic data with and without noise. High-resolution seismic reflection field data from Yeniköy (Arnavutköy, İstanbul) were used as field data, and 424 estimations of Q values were made for each method (1,696 in total). Statistical tests on synthetic and field data show that the Q-value estimates of the ANN, Wiener filter, and spectral ratio methods are quite close, while the amplitude decay method showed a higher estimation error. According to previous geological and geophysical studies in this area, the soil is water-saturated, quite weak, consisting of clay and sandy units, and, because of current and past landslides in the study area and its vicinity, researchers reported heterogeneity in the soil. Under the same physical conditions, Q values calculated on field data can be expected to lie between 7.9 and 13.6. ANN models with various structures, training algorithms, inputs, and numbers of neurons were investigated. A total of 480 ANN models were generated, consisting of 60 models for noise-free synthetic data, 360 models for synthetic data with different noise contents, and 60 models applied to the data collected in the field. The models were tested to determine the most appropriate structure and training algorithm. In the final ANN, the input vectors consisted of the differences in width, energy, and distance of the seismic traces, and the output was the Q value. The success rates of the ANN models on both noise-free and noisy synthetic data were higher than those of the other three methods. According to the statistical tests on Q values estimated from field data, the ANN method also gave the most suitable results. The Q value can be estimated

  15. Net Ecosystem Carbon Flux

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Net Ecosystem Carbon Flux is defined as the year-over-year change in Total Ecosystem Carbon Stock, or the net rate of carbon exchange between an ecosystem and the...

  16. A hybrid hardware and software approach for cancelling stimulus artifacts during same-electrode neural stimulation and recording.

    Science.gov (United States)

    Culaclii, Stanislav; Kim, Brian; Yi-Kai Lo; Wentai Liu

    2016-08-01

    Recovering neural responses from electrode recordings is fundamental for understanding the dynamics of neural networks. This effort is often obscured by stimulus artifacts in the recordings, which result from stimuli injected into the electrode-tissue interface. Stimulus artifacts, which can be orders of magnitude larger than the neural responses of interest, can mask short-latency evoked responses. Furthermore, simultaneous neural stimulation and recording on the same electrode generates artifacts with larger amplitudes compared to a separate electrode setup, which inevitably overwhelm the amplifier operation and cause unrecoverable neural signal loss. This paper proposes an end-to-end system combining hardware and software techniques for actively cancelling stimulus artifacts, avoiding amplifier saturation, and recovering neural responses during current-controlled in-vivo neural stimulation and recording. The proposed system is tested in-vitro under various stimulation settings by stimulating and recording on the same electrode with a superimposed pre-recorded neural signal. Experimental results show that neural responses can be recovered with minimal distortion even during stimulus artifacts that are several orders greater in magnitude.

  17. Theoretical approaches to holistic biological features: Pattern formation, neural networks and the brain-mind relation.

    Science.gov (United States)

    Gierer, Alfred

    2002-06-01

    The topic of this article is the relation between bottom-up and top-down, reductionist and holistic approaches to the solution of basic biological problems. While there is no doubt that the laws of physics apply to all events in space and time, including the domains of life, understanding biology depends not only on elucidating the role of the molecules involved, but, to an increasing extent, on systems theoretical approaches in diverse fields of the life sciences. Examples discussed in this article are the generation of spatial patterns in development by the interplay of autocatalysis and lateral inhibition; the evolution of integrating capabilities of the human brain, such as cognition-based empathy; and both neurobiological and epistemological aspects of scientific theories of consciousness and the mind.

  18. Identifying functional connectivity in large-scale neural ensemble recordings: a multiscale data mining approach.

    Science.gov (United States)

    Eldawlatly, Seif; Jin, Rong; Oweiss, Karim G

    2009-02-01

    Identifying functional connectivity between neuronal elements is an essential first step toward understanding how the brain orchestrates information processing at the single-cell and population levels to carry out biological computations. This letter suggests a new approach to identify functional connectivity between neuronal elements from their simultaneously recorded spike trains. In particular, we identify clusters of neurons that exhibit functional interdependency over variable spatial and temporal patterns of interaction. We represent neurons as objects in a graph and connect them using arbitrarily defined similarity measures calculated across multiple timescales. We then use a probabilistic spectral clustering algorithm to cluster the neurons in the graph by solving a minimum graph cut optimization problem. Using point process theory to model population activity, we demonstrate the robustness of the approach in tracking a broad spectrum of neuronal interaction, from synchrony to rate co-modulation, by systematically varying the length of the firing history interval and the strength of the connecting synapses that govern the discharge pattern of each neuron. We also demonstrate how activity-dependent plasticity can be tracked and quantified in multiple network topologies built to mimic distinct behavioral contexts. We compare the performance to classical approaches to illustrate the substantial gain in performance.
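
    A bare-bones sketch of the general recipe (build a similarity graph from binned spike trains at one timescale, then cluster it spectrally); it uses correlation as the similarity measure and scikit-learn's standard spectral clustering rather than the probabilistic variant described above, and the spike trains are synthetic.

        import numpy as np
        from sklearn.cluster import SpectralClustering

        rng = np.random.default_rng(2)

        # Synthetic spike counts for 20 neurons in 500 time bins: two hidden groups
        # whose rates co-modulate with a shared latent drive.
        drive_a, drive_b = rng.random(500), rng.random(500)
        counts = np.vstack([rng.poisson(5 * drive_a, 500) for _ in range(10)] +
                           [rng.poisson(5 * drive_b, 500) for _ in range(10)])

        # Similarity graph: correlations of binned activity (one possible choice
        # among the "arbitrarily defined" similarity measures mentioned above).
        similarity = np.corrcoef(counts)
        similarity = np.clip(similarity, 0.0, None)   # keep non-negative edge weights

        labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                    random_state=0).fit_predict(similarity)
        print(labels)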

  19. Direct analysis of blood serum by total reflection X-ray fluorescence spectrometry and application of an artificial neural network approach for cancer diagnosis

    Science.gov (United States)

    Hernández-Caraballo, Edwin A.; Marcó-Parra, Lué M.

    2003-12-01

    Iron, copper, zinc and selenium were determined directly in serum samples from healthy individuals (n=33) and cancer patients (n=27) by total reflection X-ray fluorescence spectrometry using the Compton peak as internal standard [L.M. Marcó P. et al., Spectrochim. Acta Part B 54 (1999) 1469-1480]. The standardized concentrations of these elements were used as input data for two-layer artificial neural networks trained with the generalized delta rule in order to classify such individuals according to their health status. Various artificial neural networks, comprising a linear function in the input layer, a hyperbolic tangent function in the hidden layer and a sigmoid function in the output layer, were evaluated for such a purpose. Of the networks studied, the (4:4:1) gave the highest estimation (98%) and prediction rates (94%). The latter demonstrates the potential of the total reflection X-ray fluorescence spectrometry/artificial neural network approach in clinical chemistry.
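
    A sketch of a (4:4:1)-style classifier on standardized Fe/Cu/Zn/Se concentrations; the serum values below are fabricated placeholders with the same group sizes, and scikit-learn's MLP stands in for the generalized-delta-rule network of the paper.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)

        # Placeholder serum concentrations (Fe, Cu, Zn, Se) for 60 subjects;
        # the real study used 33 healthy individuals and 27 cancer patients.
        healthy = rng.normal([1.1, 1.0, 0.9, 0.10], 0.1, size=(33, 4))
        cancer = rng.normal([1.0, 1.3, 0.7, 0.07], 0.1, size=(27, 4))
        X = StandardScaler().fit_transform(np.vstack([healthy, cancer]))
        y = np.array([0] * 33 + [1] * 27)

        # 4 inputs -> 4 tanh hidden neurons -> 1 sigmoid output, echoing the (4:4:1) net.
        clf = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                            max_iter=5000, random_state=0).fit(X, y)
        print("training accuracy:", clf.score(X, y))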

  20. Experiments and simulation of a net closing mechanism for tether-net capture of space debris

    Science.gov (United States)

    Sharf, Inna; Thomsen, Benjamin; Botta, Eleonora M.; Misra, Arun K.

    2017-10-01

    This research addresses the design and testing of a debris containment system for use in a tether-net approach to space debris removal. The tether-net active debris removal involves the ejection of a net from a spacecraft by applying impulses to masses on the net, subsequent expansion of the net, the envelopment and capture of the debris target, and the de-orbiting of the debris via a tether to the chaser spacecraft. To ensure a debris removal mission's success, it is important that the debris be successfully captured and then, secured within the net. To this end, we present a concept for a net closing mechanism, which we believe will permit consistently successful debris capture via a simple and unobtrusive design. This net closing system functions by extending the main tether connecting the chaser spacecraft and the net vertex to the perimeter and around the perimeter of the net, allowing the tether to actuate closure of the net in a manner similar to a cinch cord. A particular embodiment of the design in a laboratory test-bed is described: the test-bed itself is comprised of a scaled-down tether-net, a supporting frame and a mock-up debris. Experiments conducted with the facility demonstrate the practicality of the net closing system. A model of the net closure concept has been integrated into the previously developed dynamics simulator of the chaser/tether-net/debris system. Simulations under tether tensioning conditions demonstrate the effectiveness of the closure concept for debris containment, in the gravity-free environment of space, for a realistic debris target. The on-ground experimental test-bed is also used to showcase its utility for validating the dynamics simulation of the net deployment, and a full-scale automated setup would make possible a range of validation studies of other aspects of a tether-net debris capture mission.

  1. Assessment of Self-Heating Susceptibility of Indian Coal Seams - A Neural Network Approach

    Science.gov (United States)

    Panigrahi, D. C.; Ray, S. K.

    2014-12-01

    The paper addresses an electro-chemical method called the wet oxidation potential technique for determining the susceptibility of coal to spontaneous combustion. Altogether 78 coal samples collected from thirteen different mining companies spread over most of the Indian coalfields have been used for this experimental investigation, and 936 experiments have been carried out by varying different experimental conditions to standardize this method for wider application. Thus, for a particular sample, 12 experiments of the wet oxidation potential method were carried out. The results of the wet oxidation potential (WOP) method have been correlated with the intrinsic properties of coal by carrying out proximate, ultimate and petrographic analyses of the coal samples. Correlation studies have been carried out with Design Expert 7.0.0 software. Further, artificial neural network (ANN) analysis was performed to ensure the best combination of experimental conditions to be used for obtaining optimum results in this method. All the above-mentioned analyses clearly showed that the experimental conditions should be 0.2 N KMnO4 solution with 1 N KOH at 45°C to achieve optimum results for determining the susceptibility of coal to spontaneous combustion. The results have been validated with Crossing Point Temperature (CPT) data, which is widely used in the Indian mining scenario. The paper discusses the possibility of using an electro-chemical method, called the wet oxidation potential method, to determine the susceptibility of coal to spontaneous combustion. For the experiment, 78 coal samples were collected from thirteen mines within the Indian coalfields. A total of 936 experiments were carried out under different process conditions to standardize the method for its wider application. For each sample, 12 experiments were carried out using the wet oxidation potential method. The results were correlated with the properties of the given coal by

  2. A radial basis function neural network approach to determine the survival of Listeria monocytogenes in Katiki, a traditional Greek soft cheese.

    Science.gov (United States)

    Panagou, Efstathios Z

    2008-04-01

    A radial basis function neural network was developed to determine the kinetic behavior of Listeria monocytogenes in Katiki, a traditional white acid-curd soft spreadable cheese. The applicability of the neural network approach was compared with the reparameterized Gompertz, the modified Weibull, and the Geeraerd primary models. Model performance was assessed with the root mean square error of the residuals of the model (RMSE), the regression coefficient (R2), and the F test. Commercially prepared cheese samples were artificially inoculated with a five-strain cocktail of L. monocytogenes, with an initial concentration of 10^6 CFU g^-1, and stored at 5, 10, 15, and 20 degrees C for 40 days. At each storage temperature, a pathogen viability loss profile was evident and included a shoulder, a log-linear phase, and a tailing phase. The developed neural network described the survival of L. monocytogenes equally well or slightly better than did the three primary models. The performance indices for the training subset of the network were R2 = 0.993 and RMSE = 0.214. The relevant mean values for all storage temperatures were R2 = 0.981, 0.986, and 0.985 and RMSE = 0.344, 0.256, and 0.262 for the reparameterized Gompertz, modified Weibull, and Geeraerd models, respectively. The results of the F test indicated that none of the primary models were able to describe accurately the survival of the pathogen at 5 degrees C, whereas with the neural network all F values were significant. The neural network and primary models all were validated under constant temperature storage conditions (12 and 17 degrees C). First- or second-order polynomial models were used to relate the inactivation parameters to temperature, whereas the neural network was used as a one-step modeling approach. Comparison of the prediction capability was based on bias and accuracy factors and on the goodness-of-fit index. The prediction performance of the neural network approach was equal to that of the primary
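
    The radial basis function idea can be illustrated with a toy survival curve: Gaussian basis functions over storage time with output weights fitted by least squares. The curve shape, centres and width below are illustrative, not the Katiki data or the network actually used.

        import numpy as np

        # Toy survival profile (log CFU/g vs. storage day) with a shoulder,
        # log-linear phase and tail; purely illustrative.
        days = np.linspace(0, 40, 41)
        logN = 6 - 5 / (1 + np.exp(-(days - 15) / 4))

        centres = np.linspace(0, 40, 8)      # RBF centres along the time axis (days)
        width = 5.0

        def design(t):
            """Gaussian basis-function design matrix."""
            return np.exp(-((t[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

        # Linear output weights by least squares (a common way to train RBF nets).
        weights, *_ = np.linalg.lstsq(design(days), logN, rcond=None)
        pred = design(days) @ weights
        print(f"RMSE = {np.sqrt(np.mean((pred - logN) ** 2)):.3f} log CFU/g")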

  3. Solving the Weighted Constraint Satisfaction Problems Via the Neural Network Approach

    Directory of Open Access Journals (Sweden)

    Khalid Haddouch

    2016-09-01

    Full Text Available A wide variety of real-world optimization problems can be modelled as Weighted Constraint Satisfaction Problems (WCSPs). In this paper, we model this problem as an original 0-1 quadratic program subject to linear constraints. To assess its performance, we use the continuous Hopfield network to solve the resulting model based on an original energy function. To validate our model, we solve several benchmark WCSP instances. In this regard, our approach recovers the optimal solution of these instances.
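
    A minimal continuous Hopfield sketch for a tiny 0-1 quadratic program (sigmoid neurons relaxing down an energy function); the matrices are random toy values, and the linear constraints of the actual WCSP encoding are omitted for brevity.

        import numpy as np

        # Toy problem: minimize x'Qx + c'x over x in {0,1}^n (no constraints here).
        rng = np.random.default_rng(4)
        n = 6
        Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2
        c = rng.normal(size=n)

        u = rng.normal(scale=0.1, size=n)        # internal neuron potentials
        dt, steps = 0.05, 2000
        for _ in range(steps):
            x = 1.0 / (1.0 + np.exp(-u))         # sigmoid activations in [0, 1]
            grad = 2 * Q @ x + c                 # gradient of the energy function
            u -= dt * grad                       # relax towards lower energy

        x_bin = (1.0 / (1.0 + np.exp(-u)) > 0.5).astype(int)
        print("binary solution:", x_bin, "energy:", x_bin @ Q @ x_bin + c @ x_bin)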

  4. A hit and run approach to inducible direct reprogramming of astrocytes to neural stem cells

    Directory of Open Access Journals (Sweden)

    Maria ePoulou

    2016-04-01

    Full Text Available Temporal and spatial control of gene expression can be achieved using an inducible system as a fundamental tool for regulated transcription in basic, applied and, eventually, clinical research. We describe a novel hit-and-run inducible direct reprogramming approach. In a single step, two days post-transfection, transiently transfected Sox2FLAG under the Leu3p-αIPM inducible control (iSox2) triggers the activation of endogenous Sox2, redirecting primary astrocytes into abundant distinct nestin-positive radial glia cells. This technique introduces a novel tool for safe, rapid and efficient reprogramming amenable to regenerative medicine.

  5. Net analyte signal based statistical quality control

    NARCIS (Netherlands)

    Skibsted, E.T.S.; Boelens, H.F.M.; Westerhuis, J.A.; Smilde, A.K.; Broad, N.W.; Rees, D.R.; Witte, D.T.

    2005-01-01

    Net analyte signal statistical quality control (NAS-SQC) is a new methodology to perform multivariate product quality monitoring based on the net analyte signal approach. The main advantage of NAS-SQC is that the systematic variation in the product due to the analyte (or property) of interest is

  6. Professional Enterprise NET

    CERN Document Server

    Arking, Jon

    2010-01-01

    Comprehensive coverage to help experienced .NET developers create flexible, extensible enterprise application code If you're an experienced Microsoft .NET developer, you'll find in this book a road map to the latest enterprise development methodologies. It covers the tools you will use in addition to Visual Studio, including Spring.NET and nUnit, and applies to development with ASP.NET, C#, VB, Office (VBA), and database. You will find comprehensive coverage of the tools and practices that professional .NET developers need to master in order to build enterprise more flexible, testable, and ext

  7. Densities and Diel Vertical Migration of Mysis relicta in Lake Superior: A Comparison of Optical Plankton Encounter and Net-based Approaches

    Science.gov (United States)

    In this study, we used data from an OPC, an LOPC, and vertical net tows to estimate densities and describe the day/night vertical distribution of Mysis at a series of stations distributed throughout Lake Superior, and to evaluate the efficacy of using (L)OPC for examining DVM of...

  8. Towards a Standard for Modular Petri Nets

    DEFF Research Database (Denmark)

    Kindler, Ekkart; Petrucci, Laure

    2009-01-01

    When designing complex systems, mechanisms for structuring, composing, and reusing system components are crucial. Today, there are many approaches for equipping Petri nets with such mechanisms. In the context of defining a standard interchange format for Petri nets, modular PNML was defined....... Moreover, we present and discuss some more advanced features of modular Petri nets that could be included in the standard. This way, we provide a formal foundation and a basis for a discussion of features to be included in the upcoming standard of a module concept for Petri nets in general and for high...

  9. NASA Net Zero Energy Buildings Roadmap

    Energy Technology Data Exchange (ETDEWEB)

    Pless, S.; Scheib, J.; Torcellini, P.; Hendron, B.; Slovensky, M.

    2014-10-01

    In preparation for the time-phased net zero energy requirement for new federal buildings starting in 2020, set forth in Executive Order 13514, NASA requested that the National Renewable Energy Laboratory (NREL) develop a roadmap for NASA's compliance. NASA detailed a Statement of Work that requested information on strategic, organizational, and tactical aspects of net zero energy buildings. In response, this document presents a high-level approach to net zero energy planning, design, construction, and operations, based on NREL's first-hand experience procuring net zero energy construction, and on NREL and other industry research on net zero energy feasibility. The strategic approach to net zero energy starts with an interpretation of the executive order language relating to net zero energy. Specifically, this roadmap defines a net zero energy acquisition process as one that sets an aggressive energy use intensity goal for the building in project planning, meets the reduced demand goal through energy efficiency strategies and technologies, then adds renewable energy in a prioritized manner, using building-associated, emission-free sources first, to offset the annual energy use required at the building; the net zero energy process extends through the life of the building, requiring a balance of energy use and production in each calendar year.

  10. Prediction Effects of Personal, Psychosocial, and Occupational Risk Factors on Low Back Pain Severity Using Artificial Neural Networks Approach in Industrial Workers.

    Science.gov (United States)

    Darvishi, Ebrahim; Khotanlou, Hassan; Khoubi, Jamshid; Giahi, Omid; Mahdavi, Neda

    2017-09-01

    This study aimed to provide an empirical model for predicting low back pain (LBP) by considering the interactions of occupational, personal, and psychological risk factors in a population of workers employed in industrial units, using an artificial neural networks approach. A total of 92 workers with LBP as the case group and 68 healthy workers as a control group were selected in various industrial units with similar occupational conditions. The demographic information and personal, occupational, and psychosocial factors of the participants were collected via interview, related questionnaires, consultation with occupational medicine, and also the Rapid Entire Body Assessment worksheet and National Aeronautics and Space Administration Task Load Index software. Then, 16 risk factors for LBP were used as input variables to develop the prediction model. Networks with various multilayered structures were developed using MATLAB. The developed neural network with 1 hidden layer and 26 neurons had the least classification error in both the training and testing phases. The mean classification accuracies of the developed neural network for the testing and training phase data were about 88% and 96%, respectively. In addition, the mean classification accuracy over both training and testing data was 92%, indicating much better results compared with other methods. It appears that the prediction model using the neural network approach is more accurate compared with other applied methods. Because occupational LBP is usually untreatable, the results of prediction may be suitable for developing preventive strategies and corrective interventions. Copyright © 2017. Published by Elsevier Inc.
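
    A sketch of the reported architecture (16 input risk factors, one hidden layer of 26 neurons) using scikit-learn; the worker data below are synthetic placeholders rather than the questionnaire, REBA and NASA-TLX measurements used in the study.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(5)

        # Placeholder data: 160 workers, 16 personal/occupational/psychosocial
        # risk factors, and a toy LBP label driven by the first four factors.
        X = rng.normal(size=(160, 16))
        y = (X[:, :4].sum(axis=1) + rng.normal(0, 0.5, 160) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # One hidden layer with 26 neurons, mirroring the best network reported above.
        net = MLPClassifier(hidden_layer_sizes=(26,), max_iter=3000,
                            random_state=0).fit(X_tr, y_tr)
        print("train acc:", net.score(X_tr, y_tr), "test acc:", net.score(X_te, y_te))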

  11. Neural network and Monte Carlo simulation approach to investigate variability of copper concentration in phytoremediated contaminated soils.

    Science.gov (United States)

    Hattab, Nour; Hambli, Ridha; Motelica-Heino, Mikael; Mench, Michel

    2013-11-15

    The statistical variation of soil properties and their stochastic combinations may affect the extent of soil contamination by metals. This paper describes a method for the stochastic analysis of the effects of the variation in some selected soil factors (pH, DOC and EC) on the concentration of copper in dwarf bean leaves (phytoavailability) grown in the laboratory on contaminated soils treated with different amendments. The method is based on a hybrid modeling technique that combines an artificial neural network (ANN) and Monte Carlo Simulations (MCS). Because the repeated analyses required by MCS are time-consuming, the ANN is employed to predict the copper concentration in dwarf bean leaves in response to stochastic (random) combinations of soil inputs. The input data for the ANN are a set of selected soil parameters generated randomly according to a Gaussian distribution to represent the parameter variabilities. The output is the copper concentration in bean leaves. The results obtained by the stochastic (hybrid) ANN-MCS method show that the proposed approach may be applied (i) to perform a sensitivity analysis of soil factors in order to quantify the most important soil parameters including soil properties and amendments on a given metal concentration, (ii) to contribute toward the development of decision-making processes at a large field scale such as the delineation of contaminated sites. Copyright © 2013 Elsevier Ltd. All rights reserved.
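
    The hybrid ANN-MCS idea can be sketched in two steps: train a cheap surrogate, then push Monte Carlo samples of the soil factors through it. The training relation, distributions and numbers below are invented for illustration; only the structure (ANN surrogate plus Gaussian sampling) follows the description above.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(6)

        # Step 1: ANN surrogate linking soil factors (pH, DOC, EC) to Cu in bean
        # leaves; the training data are synthetic stand-ins for the measurements.
        X = np.column_stack([rng.uniform(5, 8, 300),        # pH
                             rng.uniform(10, 60, 300),      # DOC (mg/L)
                             rng.uniform(0.1, 2.0, 300)])   # EC (dS/m)
        y = 40 - 4 * X[:, 0] + 0.1 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 0.5, 300)
        surrogate = make_pipeline(StandardScaler(),
                                  MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                               random_state=0)).fit(X, y)

        # Step 2: Monte Carlo simulation - propagate Gaussian variability of the
        # soil factors through the cheap surrogate instead of new experiments.
        samples = np.column_stack([rng.normal(6.5, 0.3, 10000),
                                   rng.normal(30, 5, 10000),
                                   rng.normal(0.8, 0.2, 10000)])
        cu_leaf = surrogate.predict(samples)
        print("Cu in leaves: mean %.2f, 95%% interval (%.2f, %.2f)"
              % (cu_leaf.mean(), *np.percentile(cu_leaf, [2.5, 97.5])))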

  12. Prediction of Currency Volume Issued in Taiwan Using a Hybrid Artificial Neural Network and Multiple Regression Approach

    Directory of Open Access Journals (Sweden)

    Yuehjen E. Shao

    2013-01-01

    Full Text Available Because the volume of currency issued by a country always affects its interest rate, price index, income levels, and many other important macroeconomic variables, the prediction of the volume of currency issued has attracted considerable attention in recent years. In contrast to the typical single-stage forecast model, this study proposes a hybrid forecasting approach to predict the volume of currency issued in Taiwan. The proposed hybrid models consist of artificial neural network (ANN) and multiple regression (MR) components. The MR component of the hybrid models is used to select a smaller set of explanatory variables of higher importance. The ANN component is then designed to generate forecasts based on those important explanatory variables. Subsequently, the model is used to analyze a real dataset of Taiwan's currency from 1996 to 2011 and twenty associated explanatory variables. The prediction results reveal that the proposed hybrid scheme exhibits superior forecasting performance for predicting the volume of currency issued in Taiwan.
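
    A sketch of the two-stage hybrid: a multiple-regression step that keeps a few important explanatory variables, followed by an ANN trained on that subset. The data are synthetic, and keeping the variables with the largest absolute coefficients is just one simple selection rule; the paper's actual criterion may differ.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(7)

        # Synthetic stand-in: 192 monthly observations, 20 candidate explanatory
        # variables, of which only a few actually drive the target.
        X = rng.normal(size=(192, 20))
        y = 3 * X[:, 0] - 2 * X[:, 3] + 1.5 * X[:, 7] + rng.normal(0, 0.3, 192)

        # Stage 1 (MR component): fit a multiple regression and keep the variables
        # with the largest absolute coefficients as the "important" subset.
        mr = LinearRegression().fit(X, y)
        keep = np.argsort(np.abs(mr.coef_))[-5:]

        # Stage 2 (ANN component): train a neural net on the selected variables only.
        ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                           random_state=0).fit(X[:, keep], y)
        print("selected columns:", sorted(keep.tolist()))
        print("in-sample RMSE:", np.sqrt(np.mean((ann.predict(X[:, keep]) - y) ** 2)))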

  13. High-precision approach to localization scheme of visible light communication based on artificial neural networks and modified genetic algorithms

    Science.gov (United States)

    Guan, Weipeng; Wu, Yuxiang; Xie, Canyu; Chen, Hao; Cai, Ye; Chen, Yingcong

    2017-10-01

    An indoor positioning algorithm based on visible light communication (VLC) is presented. This algorithm is used to calculate a three-dimensional (3-D) coordinate in an indoor optical wireless environment, which includes sufficient orders of multipath reflections from the reflecting surfaces of the room. Leveraging the global optimization ability of the genetic algorithm (GA), an innovative framework for 3-D position estimation based on a modified genetic algorithm is proposed. Unlike other techniques using VLC for positioning, the proposed system can achieve indoor 3-D localization without making assumptions about the height or acquiring the orientation angle of the mobile terminal. Simulation results show that an average localization error of less than 1.02 cm can be achieved. In addition, in most VLC-positioning systems the effect of reflection is neglected and limits their performance, which makes the results less accurate in a real scenario, with positioning errors at the corners relatively larger than elsewhere. We therefore take the first-order reflection into consideration and use an artificial neural network to model the nonlinear channel. The studies show that, with the nonlinear matching of direct and reflected channels, the average positioning error at the four corners decreases from 11.94 to 0.95 cm. The employed algorithm thus emerges as an effective and practical method for indoor localization and outperforms other existing indoor wireless localization approaches.

  14. A neural-network-based approach to the double traveling salesman problem.

    Science.gov (United States)

    Plebe, Alessio; Anile, Angelo Marcello

    2002-02-01

    The double traveling salesman problem is a variation of the basic traveling salesman problem where targets can be reached by two salespersons operating in parallel. The real problem addressed by this work concerns the optimization of the harvest sequence for the two independent arms of a fruit-harvesting robot. This application poses further constraints, like a collision-avoidance function. The proposed solution is based on a self-organizing map structure, initialized with as many artificial neurons as the number of targets to be reached. One of the key components of the process is the combination of competitive relaxation with a mechanism for deleting and creating artificial neurons. Moreover, in the competitive relaxation process, information about the trajectory connecting the neurons is combined with the distance of neurons from the target. This strategy prevents tangles in the trajectory and collisions between the two tours. Results of tests indicate that the proposed approach is efficient and reliable for harvest sequence planning. Moreover, the enhancements added to the pure self-organizing map concept are of wider importance, as proved by a traveling salesman problem version of the program, simplified from the double version for comparison.
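
    The self-organizing map idea behind this work can be sketched with a single elastic ring of neurons pulled toward the targets; the two-tour extension, neuron creation/deletion and collision avoidance described above are omitted, and all sizes and rates are illustrative.

        import numpy as np

        rng = np.random.default_rng(8)
        targets = rng.random((30, 2))                 # fruit positions in the plane
        n_neurons = 60
        ring = rng.random((n_neurons, 2))             # neurons forming a closed tour

        for it in range(5000):
            lr = 0.8 * np.exp(-it / 2000)             # decaying learning rate
            radius = max(1.0, n_neurons / 10 * np.exp(-it / 2000))
            t = targets[rng.integers(len(targets))]
            winner = np.argmin(np.linalg.norm(ring - t, axis=1))
            # Neighbourhood on the ring (circular distance between neuron indices).
            d = np.abs(np.arange(n_neurons) - winner)
            d = np.minimum(d, n_neurons - d)
            h = np.exp(-(d ** 2) / (2 * radius ** 2))
            ring += lr * h[:, None] * (t - ring)      # pull neurons towards the target

        # Read off the visiting order: each target is assigned to its nearest neuron.
        order = np.argsort([np.argmin(np.linalg.norm(ring - p, axis=1)) for p in targets])
        print("visit sequence:", order)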

  15. Neural ensemble communities: Open-source approaches to hardware for large-scale electrophysiology

    Science.gov (United States)

    Siegle, Joshua H.; Hale, Gregory J.; Newman, Jonathan P.; Voigts, Jakob

    2014-01-01

    One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is “open” or “closed”: that is, whether or not the system’s schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. PMID:25528614

  16. Neural ensemble communities: open-source approaches to hardware for large-scale electrophysiology.

    Science.gov (United States)

    Siegle, Joshua H; Hale, Gregory J; Newman, Jonathan P; Voigts, Jakob

    2015-06-01

    One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is 'open' or 'closed': that is, whether or not the system's schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Portable Rule Extraction Method for Neural Network Decisions Reasoning

    Directory of Open Access Journals (Sweden)

    Darius PLIKYNAS

    2005-08-01

    Full Text Available Neural network (NN methods are sometimes useless in practical applications, because they are not properly tailored to the particular market's needs. We focus thereinafter specifically on financial market applications. NNs have not gained full acceptance here yet. One of the main reasons is the "Black Box" problem (lack of the NN decisions explanatory power. There are though some NN decisions rule extraction methods like decompositional, pedagogical or eclectic, but they suffer from low portability of the rule extraction technique across various neural net architectures, high level of granularity, algorithmic sophistication of the rule extraction technique etc. The authors propose to eliminate some known drawbacks using an innovative extension of the pedagogical approach. The idea is exposed by the use of a widespread MLP neural net (as a common tool in the financial problems' domain and SOM (input data space clusterization. The feedback of both nets' performance is related and targeted through the iteration cycle by achievement of the best matching between the decision space fragments and input data space clusters. Three sets of rules are generated algorithmically or by fuzzy membership functions. Empirical validation of the common financial benchmark problems is conducted with an appropriately prepared software solution.

  18. SOCIAL NET: A CASE STUDY OF THE UNIVERSITY NET OF POPULAR COOPERATIVES TECHNOLOGICAL INCUBATORS (PCTIS NET FROM THE INTERACTION AMONG THE INCUBATORS

    Directory of Open Access Journals (Sweden)

    Marília Matos Pereira Lopes

    2014-07-01

    Full Text Available The objective of this work was to identify whether the University Net of Popular Cooperatives Technological Incubators (PCTIs Net) is a social net. The research was an exploratory study of a descriptive character, and the technical procedure was the case study. A questionnaire was applied to 82% of the incubators belonging to the PCTIs Net, complemented by interviews. The information acquired through the questionnaire was gathered and tabulated to compose the characterization of the net incubators and the social analyzer. The social analyzer and the centralizing box were created with the Pajek program. The results were compared with the previous work of Rennó et al. (2010), which pursued the same goal using a different approach. Concluding the analysis guided by the characteristics of a social net, it was observed that the PCTIs Net is a social net; however, it was emphasized that communication is a point where the net needs to be strengthened.

  19. Neural Networks

    Directory of Open Access Journals (Sweden)

    Schwindling Jerome

    2010-04-01

    Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the concepts of the multi-layer perceptron, learning, and their use as data classifiers. The concept is then presented in a second part using in more detail the mathematical approach, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  20. A process to estimate net infiltration using a site-scale water-budget approach, Rainier Mesa, Nevada National Security Site, Nevada, 2002–05

    Science.gov (United States)

    Smith, David W.; Moreo, Michael T.; Garcia, C. Amanda; Halford, Keith J.; Fenelon, Joseph M.

    2017-08-29

    This report documents a process used to estimate net infiltration from precipitation, evapotranspiration (ET), and soil data acquired at two sites on Rainier Mesa. Rainier Mesa is a groundwater recharge area within the Nevada National Security Site where recharged water flows through bedrock fractures to a deep (450 meters) water table. The U.S. Geological Survey operated two ET stations on Rainier Mesa from 2002 to 2005 at sites characterized by pinyon-juniper and scrub-brush vegetative cover. Precipitation and ET data were corrected to remove measurement biases and gap-filled to develop continuous datasets. Net infiltration (percolation below the root zone) and changes in root-zone water storage were estimated using a monthly water-balance model.Site-scale water-budget results indicate that the heavily-fractured welded-tuff bedrock underlying thin (<40 centimeters) topsoil is a critical water source for vegetation during dry periods. Annual precipitation during the study period ranged from fourth lowest (182 millimeters [mm]) to second highest (708 mm) on record (record = 55 years). Annual ET exceeded precipitation during dry years, indicating that the fractured-bedrock reservoir capacity is sufficient to meet atmospheric-evaporative demands and to sustain vegetation through extended dry periods. Net infiltration (82 mm) was simulated during the wet year after the reservoir was rapidly filled to capacity. These results support previous conclusions that preferential fracture flow was induced, resulting in an episodic recharge pulse that was detected in nearby monitoring wells. The occurrence of net infiltration only during the wet year is consistent with detections of water-level rises in nearby monitoring wells that occur only following wet years.

  1. NetView: a high-definition network-visualization approach to detect fine-scale population structures from genome-wide patterns of variation.

    Science.gov (United States)

    Neuditschko, Markus; Khatkar, Mehar S; Raadsma, Herman W

    2012-01-01

    High-throughput sequencing and single nucleotide polymorphism (SNP) genotyping can be used to infer complex population structures. Fine-scale population structure analysis tracing individual ancestry remains one of the major challenges. Based on network theory and recent advances in SNP chip technology, we investigated an unsupervised network clustering method called Super Paramagnetic Clustering (Spc). When applied to whole-genome marker data it identifies the natural divisions of groups of individuals into population clusters without use of prior ancestry information. Furthermore, we optimised an analysis pipeline called NetView, a high-definition network visualization, starting with computation of genetic distance, followed by clustering using Spc and finally visualization of clusters with Cytoscape. We compared NetView against commonly used methodologies including Principal Component Analysis (PCA) and a model-based algorithm, Admixture, on whole-genome-wide SNP data derived from three previously described data sets: simulated (2.5 million SNPs, 5 populations), human (1.4 million SNPs, 11 populations) and cattle (32,653 SNPs, 19 populations). We demonstrate that individuals can be effectively allocated to their correct population whilst simultaneously revealing fine-scale structure within the populations. Analyzing the human HapMap populations, we identified unexpected genetic relatedness among individuals, and population stratification within the Indian, African and Mexican samples. In the cattle data set, we correctly assigned all individuals to their respective breeds and detected fine-scale population sub-structures reflecting different sample origins and phenotypes. The NetView pipeline is computationally extremely efficient and can be easily applied on large-scale genome-wide data sets to assign individuals to particular populations and to reproduce fine-scale population structures without prior knowledge of individual ancestry. NetView can be used on any

  2. An ensemble micro neural network approach for elucidating interactions between zinc finger proteins and their target DNA.

    Science.gov (United States)

    Dutta, Shayoni; Madan, Spandan; Parikh, Harsh; Sundar, Durai

    2016-12-22

    The ability to engineer zinc finger proteins binding to a DNA sequence of choice is essential for targeted genome editing to be possible. Experimental techniques and molecular docking have been successful in predicting protein-DNA interactions; however, they are highly time and resource intensive. Here, we present a novel algorithm designed for high-throughput prediction of the optimal zinc finger protein for 9 bp DNA sequences of choice. In accordance with the principles of information theory, a subset identified by using K-means clustering was used as a representative for the space of all possible 9 bp DNA sequences. The modeling and simulation results assuming a synergistic mode of binding obtained from this subset were used to train an ensemble micro neural network. The synergistic mode of binding is the closest to the DNA-protein binding seen in nature and gives much higher quality predictions, while the time and resources increase exponentially in the trade-off. Our algorithm is inspired by an ensemble machine learning approach and incorporates the predictions made by 100 parallel neural networks, each with a different hidden layer architecture designed to pick up different features from the training dataset, to predict optimal zinc finger proteins for any 9 bp target DNA. The model gave an average accuracy of 83% sequence identity for the testing dataset. The BLAST e-values are well within the statistical confidence interval of E-05 for 100% of the testing samples. The geometric mean and median values of the BLAST e-values were found to be 1.70E-12 and 7.00E-12, respectively. For final validation of the approach, we compared our predictions against optimal ZFPs reported in the literature for a set of experimentally studied DNA sequences. The accuracy, as measured by the average string identity between our predictions and the optimal zinc finger protein reported in the literature for a 9 bp DNA target, was found to be as high as 81% for DNA targets with a consensus sequence

  3. CPN Tools-Assisted Simulation and Verification of Nested Petri Nets

    Directory of Open Access Journals (Sweden)

    L. W. Dworzański

    2012-01-01

    Full Text Available Nested Petri nets (NP-nets) are an extension of the Petri net formalism within the “nets-within-nets” approach, where tokens in a marking are themselves Petri nets, which have autonomous behavior and are synchronized with the system net. The formalism of NP-nets allows modeling multi-level multi-agent systems with dynamic structure in a natural way. Currently, there is no tool supporting NP-net simulation and analysis. The paper proposes the translation of NP-nets into Coloured Petri nets and the use of CPN Tools as a virtual machine for NP-net modeling, simulation and automatic verification.

  4. Knowledge-Based Aircraft Automation: Managers Guide on the use of Artificial Intelligence for Aircraft Automation and Verification and Validation Approach for a Neural-Based Flight Controller

    Science.gov (United States)

    Broderick, Ron

    1997-01-01

    The ultimate goal of this report was to integrate the powerful tools of artificial intelligence into the traditional process of software development. To maintain the US aerospace competitive advantage, traditional aerospace and software engineers need to more easily incorporate the technology of artificial intelligence into the advanced aerospace systems being designed today. The future goal was to transition artificial intelligence from an emerging technology to a standard technology that is considered early in the life cycle process to develop state-of-the-art aircraft automation systems. This report addressed the future goal in two ways. First, it provided a matrix that identified typical aircraft automation applications conducive to various artificial intelligence methods. The purpose of this matrix was to provide top-level guidance to managers contemplating the possible use of artificial intelligence in the development of aircraft automation. Second, the report provided a methodology to formally evaluate neural networks as part of the traditional process of software development. The matrix was developed by organizing the discipline of artificial intelligence into the following six methods: logical, object representation-based, distributed, uncertainty management, temporal and neurocomputing. Next, a study of existing aircraft automation applications that have been conducive to artificial intelligence implementation resulted in the following five categories: pilot-vehicle interface, system status and diagnosis, situation assessment, automatic flight planning, and aircraft flight control. The resulting matrix provided management guidance to understand artificial intelligence as it applied to aircraft automation. The approach taken to develop a methodology to formally evaluate neural networks as part of the software engineering life cycle was to start with the existing software quality assurance standards and to change these standards to include neural network

  5. Neural network optimization, components, and design selection

    Science.gov (United States)

    Weller, Scott W.

    1991-01-01

    Neural Networks are part of a revived technology which has received a lot of hype in recent years. As is apt to happen in any hyped technology, jargon and predictions make its assimilation and application difficult. Nevertheless, Neural Networks have found use in a number of areas, working on non-trivial and non-contrived problems. For example, one net has been trained to "read", translating English text into phoneme sequences. Other applications of Neural Networks include data base manipulation and the solving of routing and classification types of optimization problems. It was their use in optimization that got me involved with Neural Networks. As it turned out, "optimization" used in this context was somewhat misleading, because while some network configurations could indeed solve certain kinds of optimization problems, the configuring or "training" of a Neural Network itself is an optimization problem, and most of the literature which talked about Neural Nets and optimization in the same breath did not speak to my goal of using Neural Nets to help solve lens optimization problems. I did eventually apply Neural Network to lens optimization, and I will touch on those results. The application of Neural Nets to the problem of lens selection was much more successful, and those results will dominate this paper.

  6. WaveNet

    Science.gov (United States)

    2015-10-30

    Coastal Inlets Research Program WaveNet. WaveNet is a web-based, Graphical-User-Interface (GUI) data management tool developed for Corps coastal...generates tabular and graphical information for project planning and design documents. WaveNet is a web-based GUI designed to provide users with a...data from different sources, and employs a combination of Fortran, Python and Matlab codes to process and analyze data for USACE applications

  7. Combined neural network/Phillips-Tikhonov approach to aerosol retrievals over land from the NASA Research Scanning Polarimeter

    Science.gov (United States)

    Di Noia, Antonio; Hasekamp, Otto P.; Wu, Lianghai; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John E.

    2017-11-01

    In this paper, an algorithm for the retrieval of aerosol and land surface properties from airborne spectropolarimetric measurements - combining neural networks and an iterative scheme based on Phillips-Tikhonov regularization - is described. The algorithm - which is an extension of a scheme previously designed for ground-based retrievals - is applied to measurements from the Research Scanning Polarimeter (RSP) on board the NASA ER-2 aircraft. A neural network, trained on a large data set of synthetic measurements, is applied to perform aerosol retrievals from real RSP data, and the neural network retrievals are subsequently used as a first guess for the Phillips-Tikhonov retrieval. The resulting algorithm appears capable of accurately retrieving aerosol optical thickness, fine-mode effective radius and aerosol layer height from RSP data. Among the advantages of using a neural network as initial guess for an iterative algorithm are a decrease in processing time and an increase in the number of converging retrievals.
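
    A toy sketch of the two-step scheme: a neural network trained on synthetic state/measurement pairs supplies the first guess, and a Phillips-Tikhonov-style regularised Gauss-Newton iteration refines it. The two-parameter forward model, noise level and regularisation strength are invented stand-ins for the RSP radiative-transfer forward model.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(9)

        # Toy nonlinear "forward model" mapping a 2-element state (e.g. aerosol
        # optical thickness and layer height, arbitrary units) to 6 measurements.
        def forward(x):
            return np.array([np.exp(-x[0] * k) * (1 + 0.1 * x[1] * k) for k in range(1, 7)])

        # Step 1: train a neural network on synthetic state/measurement pairs so
        # it can supply a first guess from a new measurement vector.
        states = rng.uniform([0.05, 0.5], [1.0, 3.0], size=(2000, 2))
        meas = np.array([forward(s) for s in states])
        nn = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                          random_state=0).fit(meas, states)

        # Step 2: regularised Gauss-Newton iteration started from the NN guess;
        # gamma is an illustrative regularisation strength.
        x_true = np.array([0.4, 2.0])
        y = forward(x_true) + rng.normal(0, 0.002, 6)
        x = nn.predict(y.reshape(1, -1))[0]
        gamma = 1e-3
        for _ in range(20):
            J = np.array([(forward(x + h) - forward(x)) / 1e-5
                          for h in np.eye(2) * 1e-5]).T        # numerical Jacobian
            step = np.linalg.solve(J.T @ J + gamma * np.eye(2), J.T @ (y - forward(x)))
            x = x + step
        print("first guess:", nn.predict(y.reshape(1, -1))[0], "retrieved:", x, "truth:", x_true)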

  8. Coloured Petri Nets

    DEFF Research Database (Denmark)

    Jensen, Kurt

    1991-01-01

    This paper describes how Coloured Petri Nets (CP-nets) have been developed — from being a promising theoretical model to being a full-fledged language for the design, specification, simulation, validation and implementation of large software systems (and other systems in which human beings and...... use of CP-nets — because it means that the function representation and the translations (which are a bit mathematically complex) no longer are parts of the basic definition of CP-nets. Instead they are parts of the invariant method (which anyway demands considerable mathematical skills...

  9. Game Coloured Petri Nets

    DEFF Research Database (Denmark)

    Westergaard, Michael

    2006-01-01

    This paper introduces the notion of game coloured Petri nets. This allows the modeler to explicitly model what parts of the model comprise the modeled system and what parts are the environment of the modeled system. We give the formal definition of game coloured Petri nets, a means of reachability...... analysis of this net class, and an application of game coloured Petri nets to automatically generate easy-to-understand visualizations of the model by exploiting the knowledge that some parts of the model are not interesting from a visualization perspective (i.e. they are part of the environment...

  10. Programming NET Web Services

    CERN Document Server

    Ferrara, Alex

    2007-01-01

    Web services are poised to become a key technology for a wide range of Internet-enabled applications, spanning everything from straight B2B systems to mobile devices and proprietary in-house software. While there are several tools and platforms that can be used for building web services, developers are finding a powerful tool in Microsoft's .NET Framework and Visual Studio .NET. Designed from scratch to support the development of web services, the .NET Framework simplifies the process--programmers find that tasks that took an hour using the SOAP Toolkit take just minutes. Programming .NET

  11. Annotating Coloured Petri Nets

    DEFF Research Database (Denmark)

    Lindstrøm, Bo; Wells, Lisa Marie

    2002-01-01

    -net. An example of such auxiliary information is a counter which is associated with a token to be able to do performance analysis. Modifying colour sets and arc inscriptions in a CP-net to support a specific use may lead to creation of several slightly different CP-nets – only to support the different uses...... a method which makes it possible to associate auxiliary information, called annotations, with tokens without modifying the colour sets of the CP-net. Annotations are pieces of information that are not essential for determining the behaviour of the system being modelled, but are rather added to support...

  12. CCS - and its relationship to net theory

    DEFF Research Database (Denmark)

    Nielsen, Mogens

    1987-01-01

    In this paper we give a short introduction to Milner's Calculus for Communicating Systems - a paradigm for concurrent computation. We put special emphasis on the basic concepts and tools from the underlying "algebraic approach", and their relationship to the approach to concurrency within net...... theory. Furthermore, we provide an operational version of the language CCS with "true concurrency" in the sense of net theory, and a discussion of the possible use of such a marriage of the two theories of concurrency....

  13. ASP.NET web API build RESTful web applications and services on the .NET framework

    CERN Document Server

    Kanjilal, Joydip

    2013-01-01

    This book is a step-by-step, practical tutorial with a simple approach to help you build RESTful web applications and services on the .NET framework quickly and efficiently.This book is for ASP.NET web developers who want to explore REST-based services with C# 5. This book contains many real-world code examples with explanations whenever necessary. Some experience with C# and ASP.NET 4 is expected.

  14. Automated Grading of Gliomas using Deep Learning in Digital Pathology Images: A modular approach with ensemble of convolutional neural networks.

    Science.gov (United States)

    Ertosun, Mehmet Günhan; Rubin, Daniel L

    2015-01-01

    Brain glioma is the most common primary malignant brain tumor in adults, with different pathologic subtypes: Lower Grade Glioma (LGG) Grade II, Lower Grade Glioma (LGG) Grade III, and Glioblastoma Multiforme (GBM) Grade IV. Survival and treatment options are highly dependent on the glioma grade. We propose a deep learning-based, modular classification pipeline for automated grading of gliomas using digital pathology images. Whole-tissue digitized images of pathology slides obtained from The Cancer Genome Atlas (TCGA) were used to train our deep learning modules. Our modular pipeline provides diagnostic-quality statistics, such as precision, sensitivity and specificity, of the individual deep learning modules, and (1) facilitates training given the limited data in this domain, (2) enables exploration of different deep learning structures for each module, (3) leads to developing less complex modules that are simpler to analyze, and (4) provides flexibility, permitting use of single modules within the framework or use of other modeling or machine learning applications, such as probabilistic graphical models or support vector machines. Our modular approach helps us meet the requirements of minimum accuracy levels that are demanded by the context of different decision points within a multi-class classification scheme. Convolutional neural networks are trained for each module for each sub-task with more than 90% classification accuracy on the validation data set, and achieved a classification accuracy of 96% for the task of GBM vs LGG classification and 71% for further identifying the grade of LGG as Grade II or Grade III on an independent data set coming from new patients from the multi-institutional repository.
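
    The modular decision logic (module 1 separates GBM from LGG, module 2 grades the LGG cases) can be sketched with two small stand-in classifiers; the paper trains convolutional networks on TCGA whole-slide image tiles, whereas the features, labels and MLPs below are purely illustrative.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(10)

        # Placeholder slide-level features and grades; synthetic, for structure only.
        X = rng.normal(size=(300, 12))
        grade = rng.choice(["GBM", "LGG2", "LGG3"], size=300)

        # Module 1: GBM vs LGG.
        module1 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                random_state=0).fit(X, (grade == "GBM").astype(int))

        # Module 2: within LGG, Grade II vs Grade III.
        lgg = grade != "GBM"
        module2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                random_state=0).fit(X[lgg], (grade[lgg] == "LGG3").astype(int))

        def grade_slide(x):
            """Route a feature vector through the two modules in sequence."""
            if module1.predict(x.reshape(1, -1))[0] == 1:
                return "GBM (Grade IV)"
            return "LGG Grade III" if module2.predict(x.reshape(1, -1))[0] == 1 else "LGG Grade II"

        print(grade_slide(X[0]))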

  15. Adaptive control using a hybrid-neural model: application to a polymerisation reactor

    Directory of Open Access Journals (Sweden)

    Cubillos F.

    2001-01-01

    Full Text Available This work presents the use of a hybrid-neural model for predictive control of a plug flow polymerisation reactor. The hybrid-neural model (HNM) is based on fundamental conservation laws associated with a neural network (NN) used to model the uncertain parameters. By simulation, the performance of this approach was studied for a peroxide-initiated styrene tubular reactor. The HNM was synthesised for a CSTR reactor with a radial basis function neural net (RBFN) used to estimate the reaction rates recursively. The adaptive HNM was incorporated in two model predictive control strategies, a direct synthesis scheme and an optimum steady state scheme. Tests for servo and regulator control showed excellent behaviour, following different setpoint variations and rejecting perturbations. The good generalisation and training capacities of hybrid models, combined with the simplicity and robustness of the MPC formulations, make an attractive combination for the control of a polymerisation reactor.
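
    A minimal sketch of the hybrid-model idea follows, assuming a CSTR monomer mass balance with a Gaussian RBF network supplying the uncertain reaction rate; the centers, widths, weights, and operating values are made up for illustration and are not taken from the paper.

        import numpy as np

        # Hybrid (first-principles + RBF network) reactor model sketch.
        # The mass balance is the conservation-law part; the reaction rate r(C, T)
        # comes from an RBF network with hypothetical, untrained parameters.
        def rbf_rate(x, centers, widths, weights):
            """RBF network estimate of the reaction rate from state x = [C, T]."""
            phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))
            return float(weights @ phi)

        def cstr_rhs(c, t_reactor, c_in, tau, rbf_params):
            """Monomer mass balance for a CSTR: dC/dt = (C_in - C)/tau - r(C, T)."""
            rate = rbf_rate(np.array([c, t_reactor]), *rbf_params)
            return (c_in - c) / tau - rate

        # Example with made-up numbers: three RBF centers over [C, T].
        centers = np.array([[1.0, 350.0], [2.0, 360.0], [3.0, 370.0]])
        widths = np.array(5.0)
        weights = np.array([0.01, 0.02, 0.015])
        dcdt = cstr_rhs(c=2.0, t_reactor=360.0, c_in=2.5, tau=30.0,
                        rbf_params=(centers, widths, weights))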

  16. Net zero water

    CSIR Research Space (South Africa)

    Lindeque, M

    2013-01-01

    Full Text Available Is it possible to develop a building that uses a net zero amount of water? In recent years it has become evident that it is possible to have buildings that use a net zero amount of electricity. This is possible when the building is taken off...

  17. SolNet

    DEFF Research Database (Denmark)

    Jordan, Ulrike; Vajen, Klaus; Bales, Chris

    2014-01-01

    SolNet, founded in 2006, is the first coordinated International PhD education program on Solar Thermal Engineering. The SolNet network is coordinated by the Institute of Thermal Engineering at Kassel University, Germany. The network offers PhD courses on solar heating and cooling, conference...

  18. Self-organization of neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Clark, J.W.; Winston, J.V.; Rafelski, J.

    1984-05-14

    The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (brainwashing) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena. 18 references, 2 figures.
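
    A generic activity-dependent update of coupling strengths in discrete time is sketched below; it is a Hebbian-style illustration of the kind of rule described, not the paper's specific "brainwashing" algorithm, and the network size, threshold, and learning rate are arbitrary.

        import numpy as np

        # Activity-dependent modification of interneuronal coupling strengths
        # in an autonomously evolving discrete-time network (illustrative only).
        rng = np.random.default_rng(0)
        n = 50
        W = rng.normal(0.0, 0.2, size=(n, n))        # quasirandom initial connectivity
        np.fill_diagonal(W, 0.0)
        state = (rng.random(n) > 0.5).astype(float)  # momentary binary neural activity
        eta = 0.01                                   # plasticity rate

        for _ in range(100):                         # autonomous discrete-time evolution
            state = (W @ state > 0.0).astype(float)  # threshold update of neuron activity
            W += eta * np.outer(state, state)        # strengthen couplings of co-active units
            np.fill_diagonal(W, 0.0)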

  19. Optimisation of Economic Order Quantity Using Neural Networks Approach = Yapay Sinir Ağları Yaklaşımı ile Ekonomik Düzenin Optimizasyonu

    Directory of Open Access Journals (Sweden)

    Osman Nuri UÇAN

    2001-01-01

    Full Text Available In this paper, a Back Propagation Artificial Neural Network (BP-ANN) has been adapted for predicting the required car-part quantities in a real, major auto parts supply chain. The conventional approach to determining parts requirements is the Economic Order Quantity (EOQ) method. The ability of neural models to learn, their capability of handling large amounts of data simultaneously, and their fast response time are characteristics desired for predictive and forecasting purposes. Here, actual data obtained from a major auto parts supply chain, involving a multi-layer system of supplying auto parts to car dealers, have been used to develop and optimise a BP-ANN model. The model has shown promising results in predicting parts orders with a high degree of accuracy.
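
    For reference, the conventional EOQ baseline named above has a standard closed form; the notation below is the usual textbook one (an assumption here, since the paper's own symbols are not given):

        \[ Q^{*} = \sqrt{\frac{2\,D\,S}{H}} \]

    where D is the annual demand, S is the fixed cost per order, and H is the annual holding cost per unit.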

  20. Controlling selective stimulations below a spinal cord hemisection using brain recordings with a neural interface system approach

    Science.gov (United States)

    Panetsos, Fivos; Sanchez-Jimenez, Abel; Torets, Carlos; Largo, Carla; Micera, Silvestro

    2011-08-01

    In this work we address the use of real-time cortical recordings for the generation of coherent, reliable and robust motor activity in spinal-lesioned animals through selective intraspinal microstimulation (ISMS). The spinal cord of adult rats was hemisectioned and groups of multielectrodes were implanted in both the central nervous system (CNS) and the spinal cord below the lesion level to establish a neural system interface (NSI). To test the reliability of this new NSI connection, highly repeatable neural responses recorded from the CNS were used as a pattern generator of an open-loop control strategy for selective ISMS of the spinal motoneurons. Our experimental procedure avoided the spontaneous non-controlled and non-repeatable neural activity that could have generated spurious ISMS and the consequent undesired muscle contractions. Combinations of complex CNS patterns generated precisely coordinated, reliable and robust motor actions.

  1. Pro .NET Best Practices

    CERN Document Server

    Ritchie, Stephen D

    2011-01-01

    Pro .NET Best Practices is a practical reference to the best practices that you can apply to your .NET projects today. You will learn standards, techniques, and conventions that are sharply focused, realistic and helpful for achieving results, steering clear of unproven, idealistic, and impractical recommendations. Pro .NET Best Practices covers a broad range of practices and principles that development experts agree are the right ways to develop software, which includes continuous integration, automated testing, automated deployment, and code analysis. Whether the solution is from a free and

  2. Getting to Net Zero

    Energy Technology Data Exchange (ETDEWEB)

    2016-09-01

    The technology necessary to build net zero energy buildings (NZEBs) is ready and available today; however, building to net zero energy performance levels can be challenging. Energy efficiency measures, onsite energy generation resources, load matching and grid interaction, climatic factors, and local policies vary from location to location and require unique methods of constructing NZEBs. It is recommended that Components start looking into how to construct and operate NZEBs now, as there is a learning curve to net zero construction and FY 2020 is just around the corner.

  3. Instant Lucene.NET

    CERN Document Server

    Heydt, Michael

    2013-01-01

    Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks, this is a step-by-step guide that helps you to index, search, and retrieve unstructured data with the help of Lucene.NET. Instant Lucene.NET How-to is essential for developers new to Lucene and Lucene.NET who are looking to get an immediate foundational understanding of how to use the library in their application. It's assumed you have programming experience in C# already, but not that you have experience with search techniques such as information retrieval theory (although there will be a l

  4. Deep vector-based convolutional neural network approach for automatic recognition of colonies of induced pluripotent stem cells.

    Directory of Open Access Journals (Sweden)

    Muthu Subash Kavitha

    Full Text Available Pluripotent stem cells can potentially be used in clinical applications as a model for studying disease progress. This tracking of disease-causing events in cells requires constant assessment of the quality of stem cells. Existing approaches are inadequate for robust and automated differentiation of stem cell colonies. In this study, we developed a new vector-based convolutional neural network (V-CNN) model with respect to extracted features of the induced pluripotent stem cell (iPSC) colony for distinguishing colony characteristics. A transfer function from the feature vectors to a virtual image was generated at the front of the CNN in order to classify feature vectors of healthy and unhealthy colonies. The robustness of the proposed V-CNN model in distinguishing colonies was compared with that of a competitive support vector machine (SVM) classifier based on morphological, textural, and combined features. Additionally, five-fold cross-validation was used to investigate the performance of the V-CNN model. The precision, recall, and F-measure values of the V-CNN model were higher than those of the SVM classifier, in the range of 87-93%, indicating lower false positive and false negative rates. Furthermore, for determining the quality of colonies, the V-CNN model showed higher accuracy values based on morphological (95.5%), textural (91.0%), and combined (93.2%) features than those estimated with the SVM classifier (86.7%, 83.3%, and 83.4%, respectively). Similarly, the accuracy of the feature sets using five-fold cross-validation was above 90% for the V-CNN model, whereas that yielded by the SVM model was in the range of 75-77%. We thus conclude that the proposed V-CNN model outperforms the conventional SVM classifier, which strongly suggests it is a reliable framework for robust colony classification of iPSCs. It can also serve as a cost-effective quality recognition tool during culture and other experimental
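
    The "transfer function from the feature vectors to a virtual image" can be pictured as in the sketch below, where a learned linear layer maps a colony feature vector to a small image that a standard CNN then classifies; the layer sizes, image size, and class count are illustrative assumptions, not the authors' architecture.

        import torch
        import torch.nn as nn

        # Sketch of the vector-to-virtual-image idea (illustrative shapes and layers).
        class VCNNSketch(nn.Module):
            def __init__(self, n_features=16, img_size=16):
                super().__init__()
                self.to_image = nn.Linear(n_features, img_size * img_size)  # "transfer function"
                self.img_size = img_size
                self.cnn = nn.Sequential(
                    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Flatten(),
                    nn.Linear(8 * (img_size // 2) ** 2, 2),                 # healthy / unhealthy
                )

            def forward(self, x):                    # x: (batch, n_features)
                img = self.to_image(x).view(-1, 1, self.img_size, self.img_size)
                return self.cnn(img)

        # Toy usage on random feature vectors:
        logits = VCNNSketch()(torch.randn(4, 16))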

  5. A microarray gene expression data classification using hybrid back propagation neural network

    Directory of Open Access Journals (Sweden)

    Vimaladevi M.

    2014-01-01

    Full Text Available Classification of cancer establishes appropriate treatment and helps to decide the diagnosis. Cancer expands progressively from an alteration in a cell's genetic structure; this change (mutation) results in cells with uncontrolled growth patterns. For cancer classification, back propagation is a sufficient and universal supervised technique for training artificial neural networks. It needs many input-output data sets to make up the training set. The back propagation method may also support collaboration among multiple parties; in existing methods, collaborative learning is limited and considers only two parties. The proposed collaborative function performs well, and problems can be solved by utilizing the power of cloud computing. This technical note applies hybrid models of Back Propagation Neural networks (BPN) and fast Genetic Algorithms (GA) to perform feature selection in gene expression data. The work examines many feature selection algorithms that are "fragile", that is, the quality of their results varies broadly over data sets; it is suggested that this is due to higher-order interactions between features causing restricted minima in the search space in which the algorithm becomes trapped. GAs may escape from such minima by chance, because they are highly stochastic. The proposed approach combines a neural net classifier with a genetic algorithm, using the GA to select features for classification by the neural net and incorporating the net as part of the objective function of the GA.
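
    A generic sketch of the GA-wrapped neural-net feature selection loop is given below, with the classifier's cross-validated accuracy as the GA fitness; the population size, operators, and network size are illustrative assumptions, and X, y stand for the gene expression matrix and class labels.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        # GA-driven feature selection with a neural-net fitness function (generic sketch).
        def fitness(mask, X, y):
            if mask.sum() == 0:
                return 0.0
            clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
            return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

        def ga_select(X, y, pop_size=20, generations=10, p_mut=0.05,
                      rng=np.random.default_rng(0)):
            n_feat = X.shape[1]
            pop = rng.random((pop_size, n_feat)) < 0.1                 # sparse initial masks
            for _ in range(generations):
                scores = np.array([fitness(ind, X, y) for ind in pop])
                order = np.argsort(scores)[::-1]
                parents = pop[order[: pop_size // 2]]                  # truncation selection
                children = []
                for _ in range(pop_size - len(parents)):
                    a, b = parents[rng.integers(len(parents), size=2)]
                    cut = rng.integers(1, n_feat)
                    child = np.concatenate([a[:cut], b[cut:]])         # one-point crossover
                    child ^= rng.random(n_feat) < p_mut                # bit-flip mutation
                    children.append(child)
                pop = np.vstack([parents, children])
            scores = np.array([fitness(ind, X, y) for ind in pop])
            return pop[int(np.argmax(scores))]                         # best feature mask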

  6. Disambiguating bilingual nominal entries against WordNet

    CERN Document Server

    Rigau, G; Rigau, German; Agirre, Eneko

    1995-01-01

    This paper explores the acquisition of conceptual knowledge from bilingual dictionaries (French/English, Spanish/English and English/Spanish) using a pre-existing broad-coverage Lexical Knowledge Base (LKB), WordNet. Bilingual nominal entries are disambiguated against WordNet, thereby linking the bilingual dictionaries to WordNet and yielding a multilingual LKB (MLKB). The resulting MLKB has the same structure as WordNet, but some nodes are additionally attached to disambiguated vocabulary of other languages. Two different, complementary approaches are explored. In one of the approaches each entry of the dictionary is taken in turn, exploiting the information in the entry itself. The inferential capability for disambiguating the translation is given by Semantic Density over WordNet. In the other approach, the bilingual dictionary was merged with WordNet, exploiting mainly synonymy relations. Each of the approaches was used in a different dictionary. Both approaches attain high levels of precision on their own, sh...

  7. Why Traditional Expository Teaching-Learning Approaches May Founder? An Experimental Examination of Neural Networks in Biology Learning

    Science.gov (United States)

    Lee, Jun-Ki; Kwon, Yong-Ju

    2011-01-01

    Using functional magnetic resonance imaging (fMRI), this study investigates and discusses neurological explanations for, and the educational implications of, the neural network activations involved in hypothesis-generating and hypothesis-understanding for biology education. Two sets of task paradigms about biological phenomena were designed:…

  8. An Intelligent Approach to Educational Data: Performance Comparison of the Multilayer Perceptron and the Radial Basis Function Artificial Neural Networks

    Science.gov (United States)

    Kayri, Murat

    2015-01-01

    The objective of this study is twofold: (1) to investigate the factors that affect the success of university students by employing two artificial neural network methods (i.e., multilayer perceptron [MLP] and radial basis function [RBF]); and (2) to compare the effects of these methods on educational data in terms of predictive ability. The…

  9. Efficacy of an artificial neural network-based approach to endoscopic ultrasound elastography in diagnosis of focal pancreatic masses

    DEFF Research Database (Denmark)

    Săftoiu, Adrian; Vilmann, Peter; Gorunescu, Florin

    2012-01-01

    By using strain assessment, real-time endoscopic ultrasound (EUS) elastography provides additional information about a lesion's characteristics in the pancreas. We assessed the accuracy of real-time EUS elastography in focal pancreatic lesions using computer-aided diagnosis by artificial neural...

  10. Sensitive quantitative predictions of peptide-MHC binding by a 'Query by Committee' artificial neural network approach

    DEFF Research Database (Denmark)

    Buus, S; Lauemøller, S L; Worning, P

    2003-01-01

    We have generated Artificial Neural Networks (ANN) capable of performing sensitive, quantitative predictions of peptide binding to the MHC class I molecule, HLA-A*0204. We have shown that such quantitative ANN are superior to conventional classification ANN, that have been trained to predict...

  11. A comparative study between nonlinear regression and artificial neural network approaches for modelling wild oat (Avena fatua) field emergence

    Science.gov (United States)

    Non-linear regression techniques are used widely to fit weed field emergence patterns to soil microclimatic indices using S-type functions. Artificial neural networks present interesting and alternative features for such modeling purposes. In this work, a univariate hydrothermal-time based Weibull m...

  12. PatterNet: a system to learn compact physical design pattern representations for pattern-based analytics

    Science.gov (United States)

    Lutich, Andrey

    2017-07-01

    This research considers the problem of generating compact vector representations of physical design patterns for analytics purposes in the semiconductor patterning domain. PatterNet uses a deep artificial neural network to learn a mapping of physical design patterns to a compact Euclidean hyperspace. Distances among mapped patterns in this space correspond to dissimilarities among patterns defined at the time of the network training. Once the mapping network has been trained, PatterNet embeddings can be used as feature vectors with standard machine learning algorithms, and pattern search, comparison, and clustering become trivial problems. PatterNet is inspired by the concepts developed within the framework of generative adversarial networks as well as FaceNet. Our method enables a deep neural network (DNN) to learn the compact representation directly by supplying it with pairs of design patterns and a dissimilarity among these patterns defined by a user. In the simplest case, the dissimilarity is the area of the XOR of the two patterns. It is important to realize that our PatterNet approach is very different from methods developed for deep learning on image data: in contrast to "conventional" pictures, patterns in the CAD world are lists of polygon vertex coordinates. The method relies solely on the promise of deep learning to discover the internal structure of the incoming data and learn its hierarchical representations. The artificial intelligence arising from the combination of PatterNet and clustering analysis follows the intuition of patterning/optical proximity correction experts very closely, paving the way toward human-like and human-friendly engineering tools.
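
    A minimal sketch of the pairwise training idea is shown below: a shared encoder maps two rasterized patterns to embeddings, and the Euclidean distance between embeddings is regressed onto a user-defined dissimilarity such as the normalized XOR area. The raster size, layer widths, and loss are illustrative assumptions, not the published PatterNet architecture.

        import torch
        import torch.nn as nn

        # Shared embedding network (hypothetical sizes; patterns assumed rasterized 64x64).
        encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64, 256), nn.ReLU(),
            nn.Linear(256, 32),                       # 32-D compact representation
        )
        opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

        def xor_dissimilarity(a, b):
            """Fraction of differing pixels between two binary raster patterns."""
            return torch.logical_xor(a > 0.5, b > 0.5).float().flatten(1).mean(dim=1)

        def train_step(pat_a, pat_b):                 # pat_*: (batch, 1, 64, 64) binary rasters
            target = xor_dissimilarity(pat_a, pat_b)
            dist = torch.norm(encoder(pat_a) - encoder(pat_b), dim=1)
            loss = nn.functional.mse_loss(dist, target)
            opt.zero_grad(); loss.backward(); opt.step()
            return loss.item()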

  13. Planning long lasting insecticide treated net campaigns: should households' existing nets be taken into account?

    Science.gov (United States)

    Yukich, Joshua; Bennett, Adam; Keating, Joseph; Yukich, Rudy K; Lynch, Matt; Eisele, Thomas P; Kolaczinski, Kate

    2013-06-14

    Mass distribution of long-lasting insecticide treated bed nets (LLINs) has led to large increases in LLIN coverage in many African countries. As LLIN ownership levels increase, planners of future mass distributions face the challenge of deciding whether to ignore the nets already owned by households or to take these into account and attempt to target individuals or households without nets. Taking existing nets into account would reduce commodity costs but require more sophisticated, and potentially more costly, distribution procedures. The decision may also have implications for the average age of nets in use and therefore for the maintenance of universal LLIN coverage over time. A stochastic simulation model based on the NetCALC algorithm was used to determine the scenarios under which it would be cost saving to take existing nets into account, and the potential effects of doing so on the age profile of LLINs owned. The model accounted for variability in timing of distributions, concomitant use of continuous distribution systems, population growth, sampling error in pre-campaign coverage surveys, variable net 'decay' parameters and other factors including the feasibility and accuracy of identifying existing nets in the field. Results indicate that (i) where pre-campaign coverage is around 40% (of households owning at least 1 LLIN), accounting for existing nets in the campaign will have little effect on the mean age of the net population and (ii) even at pre-campaign coverage levels above 40%, an approach that reduces LLIN distribution requirements by taking existing nets into account may have only a small chance of being cost-saving overall, depending largely on the feasibility of identifying nets in the field. Based on existing literature, the epidemiological implications of such a strategy are likely to vary by transmission setting, and the risks of leaving older nets in the field when accounting for existing nets must be considered. Where pre-campaign coverage

  14. Planning long lasting insecticide treated net campaigns: should households’ existing nets be taken into account?

    Science.gov (United States)

    2013-01-01

    Background: Mass distribution of long-lasting insecticide treated bed nets (LLINs) has led to large increases in LLIN coverage in many African countries. As LLIN ownership levels increase, planners of future mass distributions face the challenge of deciding whether to ignore the nets already owned by households or to take these into account and attempt to target individuals or households without nets. Taking existing nets into account would reduce commodity costs but require more sophisticated, and potentially more costly, distribution procedures. The decision may also have implications for the average age of nets in use and therefore for the maintenance of universal LLIN coverage over time. Methods: A stochastic simulation model based on the NetCALC algorithm was used to determine the scenarios under which it would be cost saving to take existing nets into account, and the potential effects of doing so on the age profile of LLINs owned. The model accounted for variability in timing of distributions, concomitant use of continuous distribution systems, population growth, sampling error in pre-campaign coverage surveys, variable net ‘decay’ parameters and other factors including the feasibility and accuracy of identifying existing nets in the field. Results: Results indicate that (i) where pre-campaign coverage is around 40% (of households owning at least 1 LLIN), accounting for existing nets in the campaign will have little effect on the mean age of the net population and (ii) even at pre-campaign coverage levels above 40%, an approach that reduces LLIN distribution requirements by taking existing nets into account may have only a small chance of being cost-saving overall, depending largely on the feasibility of identifying nets in the field. Based on existing literature, the epidemiological implications of such a strategy are likely to vary by transmission setting, and the risks of leaving older nets in the field when accounting for existing nets must be considered

  15. PhysioNet

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PhysioNet Resource is intended to stimulate current research and new investigations in the study of complex biomedical and physiologic signals. It offers free...

  16. Decoding neural events from fMRI BOLD signal: A comparison of existing approaches and development of a new algorithm

    Science.gov (United States)

    Bush, Keith; Cisler, Josh

    2013-01-01

    Neuroimaging methodology predominantly relies on the blood oxygenation level dependent (BOLD) signal. While the BOLD signal is a valid measure of neuronal activity, variance in fluctuations of the BOLD signal is not only due to fluctuations in neural activity. Thus, a remaining problem in neuroimaging analyses is developing methods that ensure specific inferences about neural activity that are not confounded by unrelated sources of noise in the BOLD signal. Here, we develop and test a new algorithm for performing semi-blind (i.e., no knowledge of stimulus timings) deconvolution of the BOLD signal that treats the neural event as an observable, but intermediate, probabilistic representation of the system’s state. We test and compare this new algorithm against three other recent deconvolution algorithms under varied levels of autocorrelated and Gaussian noise, hemodynamic response function (HRF) misspecification, and observation sampling rate (i.e., TR). Further, we compare the algorithms’ performance using two models to simulate BOLD data: a convolution of neural events with a known (or misspecified) HRF versus a biophysically accurate balloon model of hemodynamics. We also examine the algorithms’ performance on real task data. The results demonstrated good performance of all algorithms, though the new algorithm generally outperformed the others (3.0% improvement) under simulated resting state experimental conditions exhibiting multiple, realistic confounding factors (as well as 10.3% improvement on a real Stroop task). The simulations also demonstrate that the greatest negative influence on deconvolution accuracy is observation sampling rate. Practical and theoretical implications of these results for improving inferences about neural activity from fMRI BOLD signal are discussed. PMID:23602664
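
    The generative model that such deconvolution algorithms invert can be sketched as below: a sparse train of latent neural events convolved with a hemodynamic response function plus noise. The double-gamma HRF, event rate, TR, and noise level are illustrative assumptions, not parameters from the paper.

        import numpy as np
        from scipy.stats import gamma

        # Forward (convolution) model of the BOLD signal, for illustration only.
        tr = 2.0                                        # observation sampling rate (seconds)
        t = np.arange(0, 32, tr)
        hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # simple double-gamma HRF with undershoot
        hrf /= hrf.max()

        rng = np.random.default_rng(0)
        n_scans = 200
        events = (rng.random(n_scans) < 0.05).astype(float)  # latent neural events (unknown timings)
        bold = np.convolve(events, hrf)[:n_scans]
        bold += rng.normal(0, 0.1, n_scans)                  # additive noise (autocorrelation omitted)
        # A deconvolution algorithm would attempt to recover `events` from `bold` alone.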

  17. TideNet

    Science.gov (United States)

    2015-10-30

    Coastal Inlets Research Program TideNet: The TideNet is a web-based Graphical User Interface (GUI) that provides users with GIS mapping tools to query tide data sources in a desired geographic region of the USA and its territories (Figure 1). Users can select a tide data source through the Google Map interface and select data sources according to the desired geographic region. It uses the Google Map interface to display data from different sources.

  18. Interaction Nets in Russian

    OpenAIRE

    Salikhmetov, Anton

    2013-01-01

    Draft translation to Russian of Chapter 7, Interaction-Based Models of Computation, from Models of Computation: An Introduction to Computability Theory by Maribel Fernandez. "In this chapter, we study interaction nets, a model of computation that can be seen as a representative of a class of models based on the notion of 'computation as interaction'. Interaction nets are a graphical model of computation devised by Yves Lafont in 1990 as a generalisation of the proof structures of linear logic...

  19. Programming .NET 3.5

    CERN Document Server

    Liberty, Jesse

    2009-01-01

    Bestselling author Jesse Liberty and industry expert Alex Horovitz uncover the common threads that unite the .NET 3.5 technologies, so you can benefit from the best practices and architectural patterns baked into the new Microsoft frameworks. The book offers a "Grand Tour" of .NET 3.5 that describes how the principal technologies can be used together, with Ajax, to build modern n-tier and service-oriented applications.

  20. Weakly Supervised PatchNets: Describing and Aggregating Local Patches for Scene Recognition.

    Science.gov (United States)

    Wang, Zhe; Wang, Limin; Wang, Yali; Zhang, Bowen; Qiao, Yu

    2017-02-09

    Traditional feature encoding schemes (e.g., Fisher vector) with local descriptors (e.g., SIFT) and recent convolutional neural networks (CNNs) are two classes of successful methods for image recognition. In this paper, we propose a hybrid representation, which leverages the discriminative capacity of CNNs and the simplicity of descriptor encoding schemes for image recognition, with a focus on scene recognition. To this end, we make three main contributions. First, we propose a patch-level and end-to-end architecture to model the appearance of local patches, called PatchNet. PatchNet is essentially a customized network trained in a weakly supervised manner, which uses image-level supervision to guide patch-level feature extraction. Second, we present a hybrid visual representation, called VSAD, by utilizing the robust feature representations of PatchNet to describe local patches and exploiting the semantic probabilities of PatchNet to aggregate these local patches into a global representation. Third, based on the proposed VSAD representation, we propose a new state-of-the-art scene recognition approach, which achieves excellent performance on two standard benchmarks: MIT Indoor67 (86.2%) and SUN397 (73.0%).
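
    One way to picture the probability-weighted aggregation is the VLAD-style sketch below, where each patch descriptor's residual against a set of semantic class centers is pooled using the PatchNet class probabilities; the shapes, the residual formulation, and the normalization are simplifying assumptions and differ in detail from the published VSAD.

        import numpy as np

        # Semantically weighted residual pooling of patch descriptors (illustrative sketch).
        def semantic_aggregate(patch_features, patch_probs, class_centers):
            """patch_features: (n, d); patch_probs: (n, k) softmax outputs; class_centers: (k, d)."""
            residuals = patch_features[:, None, :] - class_centers[None, :, :]   # (n, k, d)
            agg = np.einsum('nk,nkd->kd', patch_probs, residuals)                # (k, d)
            vec = agg.flatten()
            return vec / (np.linalg.norm(vec) + 1e-8)                            # L2-normalize

        # Toy usage with random numbers:
        rng = np.random.default_rng(0)
        v = semantic_aggregate(rng.normal(size=(100, 64)),
                               rng.dirichlet(np.ones(10), size=100),
                               rng.normal(size=(10, 64)))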